WO2022110614A1 - Gesture recognition method and apparatus, electronic device, and storage medium - Google Patents

Gesture recognition method and apparatus, electronic device, and storage medium

Info

Publication number
WO2022110614A1
Authority
WO
WIPO (PCT)
Prior art keywords
gesture recognition
human body
area
hand
region
Prior art date
Application number
PCT/CN2021/086967
Other languages
English (en)
French (fr)
Inventor
赵代平
许佳
孔祥晖
孙德乾
Original Assignee
北京市商汤科技开发有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京市商汤科技开发有限公司
Publication of WO2022110614A1 publication Critical patent/WO2022110614A1/zh


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G06V 40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language

Definitions

  • the present disclosure relates to the field of computer technologies, and in particular, to a gesture recognition method and device, an electronic device, and a storage medium.
  • Gesture interaction can be a human-computer interaction method that uses computer technology to recognize human gesture language and convert it into commands to control electronic devices (such as smart TVs, smart air conditioners, etc.).
  • Gesture recognition technology is the key technology for realizing gesture interaction.
  • the present disclosure proposes a gesture recognition technical solution.
  • a gesture recognition method, including: acquiring a video to be recognized; performing human body detection on the video to obtain the number of first objects included in the video; and recognizing the gestures in the video according to a gesture recognition manner corresponding to the number of the first objects, to obtain a gesture recognition result.
  • the number of the first objects is greater than or equal to two, and the video includes a first video frame; recognizing the gestures in the video according to the gesture recognition manner corresponding to the number of the first objects, to obtain the gesture recognition result, includes: in the first video frame, respectively acquiring the human body region and the hand region of at least one object among the first objects; determining a second object from the first objects based on the positional relationship between the human body region and a first preset region, where the human body region of the at least one object includes a first human body region of the second object, the hand region of the at least one object includes a first hand region of the second object, and the first human body region is located in the first preset region; and, in the case that the positional relationship between the first hand region and the first human body region satisfies a preset position condition, performing gesture recognition on the first hand region to obtain a first gesture recognition result.
  • the first gesture recognition result includes one of a valid gesture recognition result and an invalid gesture recognition result.
  • the video includes a second video frame located after the first video frame; in the case that the first gesture recognition result includes an invalid gesture recognition result, recognizing the gestures in the video according to the gesture recognition manner corresponding to the number of the first objects, to obtain the gesture recognition result, further includes: in the second video frame, respectively acquiring a second human body region and a second hand region of the second object; and, in the case that the positional relationship between the second hand region and the second human body region satisfies the preset position condition, performing gesture recognition on the second hand region to obtain a second gesture recognition result.
  • the human body region of the at least one object includes a third human body region of a third object, and the hand region of the at least one object includes a third hand region of the third object;
  • recognizing the gestures in the video according to the gesture recognition manner corresponding to the number of the first objects, to obtain the gesture recognition result, further includes: in the case that the positional relationship between the first hand region and the first human body region does not satisfy the preset position condition, determining the third object from the first objects, where the third human body region is located in a second preset region; and, in the case that the positional relationship between the third hand region and the third human body region satisfies the preset position condition, performing gesture recognition on the third hand region to obtain a third gesture recognition result.
  • the second preset area partially overlaps with the first preset area, or the second preset area is adjacent to the first preset area.
  • the video includes a second video frame located after the first video frame, and a third video frame located after the second video frame; after the third gesture recognition result is obtained, the method further includes: in response to the positional relationship between a fourth hand region of the second object and a fourth human body region of the second object in the third video frame satisfying the preset position condition, performing gesture recognition on the fourth hand region to obtain a fourth gesture recognition result.
  • the first preset region includes a central region of the video frames of the video; determining the second object from the first objects based on the positional relationship between the human body region and the first preset region includes: in the case that the first preset region includes multiple human body regions, determining, among the multiple human body regions, the human body region with the smallest distance from the first preset region as the first human body region; and determining the object corresponding to the first human body region as the second object.
  • the method further includes: in the case that the first hand region includes two hand regions and the preset gesture is a single-hand gesture, determining one of the two hand regions as the first hand region.
  • the preset position condition includes: a first height difference between the height of the hand region of a target object and the height of the crotch region of the target object is greater than or equal to a height threshold, where the height threshold is positively correlated with a second height difference, and the second height difference is the height difference between the height of the shoulder region of the target object and the height of the crotch region; the target object includes at least one of the second object and the third object.
  • the method further includes: if the gesture recognition result is a valid gesture recognition result, controlling the electronic device to perform an operation corresponding to the valid gesture recognition result.
  • a gesture recognition apparatus, including: an acquisition module, configured to acquire a video to be recognized; a detection module, configured to perform human body detection on the video to obtain the number of first objects included in the video; and a recognition module, configured to recognize the gestures in the video according to the gesture recognition manner corresponding to the number of the first objects, to obtain the gesture recognition result.
  • the number of the first objects is greater than or equal to two, and the video includes a first video frame;
  • the recognition module includes: a first acquisition sub-module, configured to respectively acquire, in the first video frame, the human body region and the hand region of at least one object among the first objects;
  • a first determination sub-module, configured to determine a second object from the first objects based on the positional relationship between the human body region and the first preset region, where the human body region of the at least one object includes the first human body region of the second object, the hand region of the at least one object includes the first hand region of the second object, and the first human body region is located in the first preset region;
  • a first recognition sub-module, configured to perform gesture recognition on the first hand region to obtain a first gesture recognition result, in the case that the positional relationship between the first hand region and the first human body region satisfies the preset position condition.
  • the first gesture recognition result includes one of a valid gesture recognition result and an invalid gesture recognition result.
  • the video includes a second video frame located after the first video frame; in the case that the first gesture recognition result includes an invalid gesture recognition result, the recognition module further includes: a second acquisition sub-module, configured to respectively acquire, in the second video frame, the second human body region and the second hand region of the second object; and a second recognition sub-module, configured to perform gesture recognition on the second hand region to obtain a second gesture recognition result, in the case that the positional relationship between the second hand region and the second human body region satisfies the preset position condition.
  • the human body region of the at least one object includes a third human body region of a third object, and the hand region of the at least one object includes a third hand region of the third object;
  • the recognition module further includes: a second determination sub-module, configured to determine the third object from the first objects in the case that the positional relationship between the first hand region and the first human body region does not satisfy the preset position condition, where the third human body region is located in the second preset region; and a third recognition sub-module, configured to perform gesture recognition on the third hand region to obtain a third gesture recognition result, in the case that the positional relationship between the third hand region and the third human body region satisfies the preset position condition.
  • the second preset area partially overlaps with the first preset area, or the second preset area is adjacent to the first preset area.
  • the video includes a second video frame located after the first video frame, and a third video frame located after the second video frame; after the third gesture recognition result is obtained, the apparatus further includes: a fourth recognition sub-module, configured to perform gesture recognition on the fourth hand region to obtain a fourth gesture recognition result, in response to the positional relationship between the fourth hand region of the second object and the fourth human body region of the second object in the third video frame satisfying the preset position condition.
  • the first preset area includes a central area of the video frame of the video;
  • the first determination sub-module includes: a human body region determination unit, configured to determine, in the case that the first preset region includes multiple human body regions, the human body region with the smallest distance from the first preset region among the multiple human body regions as the first human body region; and an object determination unit, configured to determine the object corresponding to the first human body region as the second object.
  • the apparatus further includes: a hand region determination module, configured to determine one of the two hand regions as the first hand region, in the case that the first hand region includes two hand regions and the preset gesture is a single-hand gesture.
  • the preset position condition includes: a first height difference between the height of the hand region of a target object and the height of the crotch region of the target object is greater than or equal to a height threshold, where the height threshold is positively correlated with a second height difference, and the second height difference is the height difference between the height of the shoulder region of the target object and the height of the crotch region; the target object includes at least one of the second object and the third object.
  • the apparatus further includes: a control module, configured to control the electronic device to perform an operation corresponding to the valid gesture recognition result when the gesture recognition result is a valid gesture recognition result.
  • an electronic device comprising: a processor; a memory for storing instructions executable by the processor; wherein the processor is configured to invoke the instructions stored in the memory to execute the above method.
  • a computer-readable storage medium having computer program instructions stored thereon, the computer program instructions implementing the above method when executed by a processor.
  • a computer program comprising computer readable code, when the computer readable code is executed in an electronic device, a processor in the electronic device executes the above method.
  • the corresponding gesture recognition manner can be selected according to the number of persons in the video. In this way, a more targeted gesture recognition manner can be adopted for a single-person scene or a multi-person scene; especially for multi-person scenarios, this can improve the accuracy of gesture recognition.
  • FIG. 1 shows a flowchart of a gesture recognition method according to an embodiment of the present disclosure.
  • FIG. 2a shows a first schematic diagram of a preset gesture according to an embodiment of the present disclosure.
  • FIG. 2b shows a second schematic diagram of a preset gesture according to an embodiment of the present disclosure.
  • FIG. 2c shows a third schematic diagram of a preset gesture according to an embodiment of the present disclosure.
  • FIG. 3 shows a schematic diagram of a preset area according to an embodiment of the present disclosure.
  • FIG. 4 shows a schematic diagram of a human body key point according to an embodiment of the present disclosure.
  • FIG. 5 shows a schematic diagram of a gesture recognition process according to an embodiment of the present disclosure.
  • FIG. 6 shows a block diagram of a gesture recognition apparatus according to an embodiment of the present disclosure.
  • FIG. 7 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • FIG. 8 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • FIG. 1 shows a flowchart of a gesture recognition method according to an embodiment of the present disclosure. As shown in FIG. 1 , the gesture recognition method includes:
  • in step S11, the video to be recognized is acquired.
  • in step S12, human body detection is performed on the video to obtain the number of first objects included in the video.
  • in step S13, the gestures in the video are recognized according to the gesture recognition manner corresponding to the number of the first objects, and the gesture recognition result is obtained.
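Steps S11 to S13 can be sketched as follows. This is a minimal illustration, not the disclosed implementation: `detect_bodies` and `recognize_gesture` are hypothetical stand-ins for the human-detection and gesture-classification models, and a "frame" is reduced to a plain dictionary of pre-computed detections.

```python
def detect_bodies(frame):
    # Placeholder for step S12: a real system would run a human-detection
    # network; here a "frame" already carries its detected persons.
    return frame["persons"]

def recognize_gesture(hand_region):
    # Placeholder for the gesture classifier applied to a hand region.
    return hand_region.get("gesture", "invalid")

def recognize_video(video):
    """Steps S11-S13: count the first objects, then pick a recognition mode."""
    first_frame = video[0]
    persons = detect_bodies(first_frame)  # step S12: human body detection
    if len(persons) == 1:
        # Single-person scene: the sole object is the manipulation object.
        return recognize_gesture(persons[0]["hand"])
    # Multi-person scene: determine the manipulation object by position
    # (elaborated in the embodiments below); here, the person whose body
    # region is closest to the frame centre.
    target = min(persons, key=lambda p: p["dist_to_center"])
    return recognize_gesture(target["hand"])

video = [{"persons": [
    {"hand": {"gesture": "confirm"}, "dist_to_center": 0.4},
    {"hand": {"gesture": "switch"}, "dist_to_center": 0.1},
]}]
print(recognize_video(video))  # "switch": the centre-most person's gesture
```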
  • the gesture recognition method may be executed by an electronic device such as a terminal device or a server
  • the terminal device may be a user equipment (User Equipment, UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA) device, a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc.
  • the method can be implemented by the processor calling the computer-readable instructions stored in the memory.
  • the method may be performed by a server.
  • the video to be identified may be acquired in step S11.
  • the video to be identified may be a video captured by a video capture device.
  • the video capture device may be any known suitable video capture device, such as, but not limited to, a common web camera, a depth camera, a digital camera device, etc. The present disclosure does not limit the specific type of the video capture device.
  • the video capture device in the present disclosure may be a device in an electronic device that executes the gesture recognition method, or may be a device independent of the electronic device that executes the gesture recognition method.
  • the embodiments of the present disclosure do not limit the relationship between the video capture device and the electronic device.
  • any known human body detection method may be used to perform human body detection on the video.
  • the human body detection method may include, for example: extracting human body key points in the video frame (such as 13 key points at joint parts), where the number and positions of the human body key points can be determined according to actual requirements and are not limited here. The embodiment of the present disclosure does not limit which human body detection method is adopted.
  • a video frame including the first object in the video can be determined, that is, a video frame including a human body in the video can be determined.
  • the first object may include one or more objects.
  • the number of the first objects included in the video may be determined according to the human body detection result.
  • the human body detection result may be detected human body key points or human body contours, and the number of human bodies in the video frame may be determined according to the human body key points or human body contours, and thus the number of first objects in the video frame may be determined.
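Deriving the number of first objects from a keypoint-based detection result can be as simple as counting the per-person keypoint sets. This is an illustrative sketch (the list-of-keypoint-lists format is an assumption, not the disclosed data structure):

```python
def count_first_objects(detections):
    """detections: one list of (x, y) body keypoints per detected person
    (e.g. 13 joint keypoints each). Empty detections are ignored."""
    return sum(1 for keypoints in detections if len(keypoints) > 0)

frame_detections = [
    [(0.30, 0.20)] * 13,  # person 1: 13 joint keypoints
    [(0.70, 0.25)] * 13,  # person 2
    [],                   # spurious empty detection, not counted
]
print(count_first_objects(frame_detections))  # 2
```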
  • a gesture recognition method corresponding to the number of the first objects may be preset, and then in step S13, the gestures in the video are recognized according to the gesture recognition method corresponding to the number of the first objects .
  • presetting a gesture recognition manner corresponding to the number of first objects may include: when the number of first objects is equal to one, taking the first object as the manipulation object and performing gesture recognition on the hand region of one hand or both hands of that object; when the number of first objects is greater than or equal to two, determining the manipulation object from the first objects according to their positions, and performing gesture recognition on the hand region of one hand or both hands of the manipulation object, where the one hand can be the left hand or the right hand.
  • the manipulation object may refer to an object that manipulates the electronic device through gestures.
  • the gesture recognition result may include one of a valid gesture recognition result and an invalid gesture recognition result.
  • the valid gesture recognition results may include results that match the preset gestures; the invalid gesture recognition results may include results that do not match the preset gestures.
  • the preset gesture may be a preset gesture pattern corresponding to the manipulation of the electronic device to perform a corresponding operation.
  • the preset gestures may include, but are not limited to, the gestures shown in FIG. 2a, FIG. 2b and FIG. 2c, where the gesture in FIG. 2a may correspond to the confirmation operation, the gesture in FIG. 2b may correspond to the switching operation, and the gesture in FIG. 2c may correspond to the close operation.
  • a gesture recognition result that matches any gesture shown in FIG. 2a, FIG. 2b and FIG. 2c can be a valid gesture recognition result; correspondingly, a gesture recognition result that matches none of the gestures shown in FIG. 2a, FIG. 2b and FIG. 2c can be an invalid gesture recognition result.
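The mapping from preset gestures to device operations can be a simple lookup. The gesture labels and operation names below are illustrative assumptions standing in for the gestures of FIG. 2a to FIG. 2c:

```python
# Hypothetical labels for the preset gestures and their operations.
GESTURE_OPERATIONS = {
    "gesture_2a": "confirm",  # FIG. 2a -> confirmation operation
    "gesture_2b": "switch",   # FIG. 2b -> switching operation
    "gesture_2c": "close",    # FIG. 2c -> close operation
}

def classify_result(gesture_label):
    """A result matching any preset gesture is valid; otherwise invalid."""
    if gesture_label in GESTURE_OPERATIONS:
        return ("valid", GESTURE_OPERATIONS[gesture_label])
    return ("invalid", None)

print(classify_result("gesture_2b"))  # ('valid', 'switch')
print(classify_result("wave"))        # ('invalid', None)
```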
  • any known gesture recognition method can be used for gesture recognition in the video, for example, a gesture recognition method based on geometric features, a gesture recognition method based on orientation histograms, etc.; the embodiments of the present disclosure do not limit this.
  • a gesture recognition result that matches the preset gesture may be a valid gesture recognition result, and a gesture recognition result that does not match the preset gesture may be an invalid gesture recognition result.
  • Confidence may be a measure of the reliability of the gesture recognition result. It can be obtained from the output of a pre-trained neural network: the higher the confidence, the more reliable the recognized gesture recognition result.
  • a confidence threshold may be set for the confidence, and a gesture recognition result corresponding to a confidence higher than the confidence threshold may be regarded as a valid gesture recognition result. The present disclosure does not limit the specific value of the confidence threshold.
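Combining the classifier's label and its confidence into a valid/invalid result can look like the sketch below. The 0.8 threshold is an arbitrary assumption (the disclosure deliberately does not fix a value):

```python
CONFIDENCE_THRESHOLD = 0.8  # illustrative; the disclosure leaves this open

def gesture_result(label, confidence, preset_gestures):
    """Valid only if the label matches a preset gesture AND the network's
    confidence clears the threshold."""
    if label in preset_gestures and confidence >= CONFIDENCE_THRESHOLD:
        return "valid"
    return "invalid"

presets = {"confirm", "switch", "close"}
print(gesture_result("switch", 0.93, presets))  # valid
print(gesture_result("switch", 0.55, presets))  # invalid: low confidence
print(gesture_result("wave", 0.99, presets))    # invalid: not a preset gesture
```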
  • the corresponding gesture recognition method can be selected according to the number of people in the video for gesture recognition, thereby improving the accuracy of gesture recognition in multi-person scenes.
  • the number of the first objects may be greater than or equal to two.
  • the video may include a first video frame.
  • recognizing the gestures in the video to obtain the gesture recognition result may include: in the first video frame, respectively acquiring the human body region and the hand region of at least one object among the first objects; determining a second object from the first objects based on the positional relationship between the human body region and the first preset region, where the human body region of the at least one object includes the first human body region of the second object, the hand region of the at least one object includes the first hand region of the second object, and the first human body region is located in the first preset region; and, in the case that the positional relationship between the first hand region and the first human body region satisfies the preset position condition, performing gesture recognition on the first hand region to obtain the first gesture recognition result.
  • the human body region may be determined according to the human body detection result in the present disclosure.
  • the human body frame can be determined according to the detected human body key points or human body contour, and the area of the human body frame in the video frame can be used as the human body region of each object; alternatively, the area of the detected human body contour in the video frame can be used directly as the human body region of each object. This is not limited in this embodiment of the present disclosure.
  • any known hand detection method may be used to perform hand detection on the video frame, and then determine the hand region of at least one object according to the hand detection result.
  • the hand detection method may include, for example: extracting key points of the hand in the video frame (such as the key points of 20 joint parts of the hand), where the number and positions of the hand key points may be determined according to actual requirements and are not limited here; alternatively, hand contours in the video frames, etc., can also be extracted.
  • the embodiment of the present disclosure does not limit which hand detection method is adopted.
  • the hand region may be determined according to the hand detection result in the present disclosure.
  • the hand frame can be determined according to the detected hand key points or hand contour, and the area of the hand frame in the video frame can be used as the hand region of each object; alternatively, the area of the detected hand contour in the video frame can be used directly as the hand region of each object. This is not limited in this embodiment of the present disclosure.
  • the first preset area may be an area for determining the first object in the video frame, for example, the first preset area may be a central area of the video frame.
  • the range and shape of the first preset region can be set according to actual requirements, which is not limited in this embodiment of the present disclosure.
  • human body detection and hand detection may be performed simultaneously, or sequentially according to a set detection order; the order can be set according to factors that may affect the detection sequence, such as the processing capability of the device implementing the detection function, the resource occupancy of the device, and the time-delay limit of the application, which is not limited in this embodiment of the present disclosure.
  • the positional relationship between the human body region and the first preset region may be the distance between the human body region of at least one object and the first preset region, or may also be the human body of at least one object The degree of overlap between the area and the first preset area.
  • determining the second object from the first objects based on the positional relationship between the human body region and the first preset region may be: taking, based on the distance between the human body region of at least one object and the first preset region, the object corresponding to the human body region with the shortest distance as the second object; or taking, based on the degree of overlap between the human body region and the first preset region, the object corresponding to the human body region with the largest overlap as the second object. This embodiment of the present disclosure does not limit this.
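Both selection criteria just described can be sketched with axis-aligned boxes. The region format `(x0, y0, x1, y1)` and the "centre strip" preset area are illustrative assumptions:

```python
def center(box):
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)

def center_distance(a, b):
    # Euclidean distance between box centres.
    (ax, ay), (bx, by) = center(a), center(b)
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def overlap_area(a, b):
    # Intersection area of two boxes; 0 when they do not overlap.
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def pick_second_object(body_regions, preset_region, by="distance"):
    """Index of the manipulation object: smallest centre distance to the
    first preset region, or largest overlap with it."""
    if by == "distance":
        return min(range(len(body_regions)),
                   key=lambda i: center_distance(body_regions[i], preset_region))
    return max(range(len(body_regions)),
               key=lambda i: overlap_area(body_regions[i], preset_region))

central = (200, 0, 440, 480)  # first preset area: central strip of the frame
bodies = [(0, 50, 150, 400), (250, 40, 380, 420)]
print(pick_second_object(bodies, central))             # 1: closest to centre
print(pick_second_object(bodies, central, "overlap"))  # 1: largest overlap
```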
  • the first human body region being located in the first preset region may mean that the entire first human body region is located in the first preset region, or that a part of the first human body region is located in the first preset region. It can be understood that, in the case that the first human body region and the first preset region have an overlapping region, the first human body region is determined to be located in the first preset region.
  • the positional relationship between the first hand region and the first human body region of the second object may include a height relationship between the first hand region and the first human body region.
  • the preset position condition may be a preset condition for determining whether to perform gesture recognition based on the positional relationship between the hand area and the human body area.
  • the preset position condition may be that the hand area is higher than the crotch area, or the hand area is higher than the elbow area, and the like.
  • the specific content of the preset location condition may be set according to gestures that can be used to control the electronic device to perform corresponding operations, which is not limited in this embodiment of the present disclosure.
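The preset position condition described above (hand raised above the crotch by at least a threshold that grows with the shoulder-to-crotch height difference) can be sketched as follows. The 0.25 proportionality constant is an assumption; image y coordinates grow downward, so "higher" means a smaller y value:

```python
RATIO = 0.25  # illustrative proportionality between threshold and body height

def satisfies_position_condition(hand_y, crotch_y, shoulder_y):
    """True when the first height difference (hand above crotch) is at
    least the height threshold, which is positively correlated with the
    second height difference (shoulder above crotch)."""
    second_height_diff = crotch_y - shoulder_y      # shoulder-to-crotch span
    height_threshold = RATIO * second_height_diff   # scales with body size
    first_height_diff = crotch_y - hand_y           # how far the hand is raised
    return first_height_diff >= height_threshold

# Shoulders at y=100, crotch at y=300 -> threshold = 50 pixels.
print(satisfies_position_condition(hand_y=220, crotch_y=300, shoulder_y=100))  # True
print(satisfies_position_condition(hand_y=290, crotch_y=300, shoulder_y=100))  # False
```

Tying the threshold to the shoulder-to-crotch span makes the condition scale-invariant: a person far from the camera needs a proportionally smaller pixel displacement to count as a raised hand.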
  • when the manipulation object wishes to control the electronic device through gestures, it usually makes a control gesture with its hand raised; in that state, the positional relationship between the first hand region and the first human body region satisfies the preset position condition.
  • when the second object is in a raised-hand state, performing gesture recognition on the first hand region to obtain the first gesture recognition result can make the recognized gesture more effective and reduce misoperations.
  • human body detection and hand detection may be performed simultaneously, or may be performed sequentially according to a set detection sequence. It can be set according to factors that may affect the detection sequence, such as the processing capability of the device implementing the detection function, the resource occupancy of the device, and the time delay limit in the application process, which is not limited in this embodiment of the present disclosure.
  • gesture recognition may be performed on the first hand region to obtain the first gesture recognition result.
  • any known gesture recognition method can be used to perform gesture recognition on the first hand region, for example, a gesture recognition method based on geometric features, a gesture recognition method based on orientation histograms, etc.; the embodiments of the present disclosure do not limit this.
  • the first gesture recognition result may include one of a valid gesture recognition result and an invalid gesture recognition result.
  • the valid gesture recognition results may include results that match the preset gestures; the invalid gesture recognition results may include results that do not match the preset gestures.
  • the preset gesture may be a preset gesture pattern corresponding to the manipulation of the electronic device to perform a corresponding operation.
  • a second object can be determined from a plurality of first objects, and gesture recognition is performed on the hand region of the second object when it is in a raised-hand state. In this way, an effective manipulation object is determined in a multi-person scenario, and gesture recognition is then performed on that object, which can improve the effectiveness of gesture recognition and reduce misoperations.
  • the method further includes: when the gesture recognition result is a valid gesture recognition result, controlling the electronic device to perform an operation corresponding to the valid gesture recognition result. For example, in the case that the first gesture recognition result matches the gesture shown in FIG. 2a, the electronic device may be controlled to perform a confirmation operation.
  • the electronic device may be controlled to perform the corresponding operation by sending an operation instruction corresponding to the preset gesture.
  • the electronic device to be controlled (which may be referred to as a first electronic device) may be the same as or different from the electronic device (which may be referred to as a second electronic device) that executes the gesture recognition method according to an embodiment of the present disclosure.
  • the electronic device may be, for example, a terminal such as a smart TV or a smart phone, on which human body detection, hand detection, and gesture recognition are implemented, and which performs corresponding operations, such as channel switching or remote photography, according to the recognized gestures.
  • the first electronic device may be, for example, a terminal such as a smart TV or a smart phone
  • the second electronic device may be any terminal or server
  • the second electronic device performs human body detection, hand detection, and gesture recognition, and controls the first electronic device to perform corresponding operations according to the recognized gestures.
  • This embodiment of the present disclosure is not limited.
  • the video capture device in the present disclosure may be a device belonging to the first electronic device or the second electronic device, or may be a device independent of the first electronic device and the second electronic device.
  • the embodiment of the present disclosure does not limit the relationship between the video capture device and the controlled electronic device.
  • in the case that the valid gesture recognition result of the manipulation object in the current video frame is the same as the valid gesture recognition result of the manipulation object in the previous video frame, the electronic device can be controlled to keep the performed operation unchanged, or can be controlled to perform the same operation again (for example, switch the channel again); in the case that the valid gesture recognition result of the manipulation object in the current video frame is different from that in the previous video frame, the electronic device can be controlled to perform the operation corresponding to the valid gesture recognition result of the manipulation object in the current video frame.
  • according to the embodiments of the present disclosure, it is possible to remotely control an electronic device through gestures; in a multi-person scenario, the accuracy of gesture recognition can be effectively improved, and user experience can be improved.
  • the video may include video frames based on a time sequence, and video frames after the first video frame may be determined based on the time sequence. Video frames after the first video frame can be selected at a certain time interval or frame-number interval; for example, one video frame can be selected every 2 frames, or one video frame can be selected every 0.2 seconds, or all video frames can be selected in sequence. The embodiments of the present disclosure do not limit the selection of video frames after the first video frame.
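A minimal sketch of the frame-selection step described above (the function name, the default frame rate, and the use of a plain Python list for the decoded frames are illustrative assumptions, not part of the disclosure):

```python
def select_frames(frames, step=1, time_interval=None, fps=30.0):
    """Select video frames at a frame-number or time interval.

    With step=1 all frames are kept in sequence; if `time_interval`
    (in seconds) is given, the step is derived from the frame rate,
    e.g. one frame every 0.2 s at 30 fps keeps every 6th frame.
    """
    if time_interval is not None:
        step = max(1, round(time_interval * fps))
    return frames[::step]
```

For instance, `select_frames(frames, step=3)` keeps one frame and skips two, matching the "one frame every 2 frames" example.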
  • the video may include a second video frame located after the first video frame; if the first gesture recognition result includes an invalid gesture recognition result, in step S13, recognizing the gesture in the video according to the gesture recognition method corresponding to the number of first objects to obtain the gesture recognition result further includes: acquiring a second human body region and a second hand region of the second object in the second video frame; and, when the positional relationship between the second hand region and the second human body region satisfies the preset position condition, performing gesture recognition on the second hand region to obtain a second gesture recognition result.
  • the gesture recognition result may include one of a valid gesture recognition result and an invalid gesture recognition result; the invalid gesture recognition result may include a result that does not match a preset gesture. Then, if the first gesture recognition result includes an invalid gesture recognition result, it can be understood that the gesture of the recognized manipulation object does not match the preset gesture.
  • performing gesture recognition on the second hand region can be understood as follows: when the recognized gesture of the second object in the first video frame does not match the preset gesture, and the second object in the second video frame is still in the raised-hand state, gesture recognition is performed on the hand area of the second object in the second video frame.
  • the second human body region may be determined according to the human body detection result.
  • the second hand region may be determined according to the hand detection result.
  • the human body detection result and the hand detection result may be determined by using the human body detection method and the hand detection method disclosed in the above embodiments of the present disclosure, which are not limited by the embodiments of the present disclosure.
  • the second video frame may be a video frame immediately after the first video frame, or may be a video frame at an interval of a certain number of frames or a certain period of time after the first video frame, which is not limited in the embodiments of the present disclosure.
  • in this way, gesture tracking of the manipulation object can be implemented, gesture recognition is more targeted, and the number of times the manipulation object is switched can be reduced, thereby improving processing efficiency.
  • when an object expects to control an electronic device through gestures, it usually makes a control gesture while in a raised-hand state.
  • the second object may be in a raised hand state or may be in a non-raised hand state.
  • when the second object is in a non-raised-hand state, it can be considered that the second object may not intend to control the electronic device through gestures.
  • the human body region of at least one object may include a third human body region of a third object, and the hand region of at least one object may include a third hand region of the third object; in step S13, recognizing the gestures in the video according to the gesture recognition method corresponding to the number of first objects to obtain the gesture recognition result may further include:
  • gesture recognition is performed on the third hand region to obtain a third gesture recognition result.
  • the third object may be an object other than the second object in the first object.
  • the second preset area may be an area in the video frame for determining the third object.
  • the second preset area may partially overlap with the first preset area, or the second preset area may be adjacent to the first preset area.
  • the degree of overlap between the second preset area and the first preset area may be set according to actual requirements, which is not limited in this embodiment of the present disclosure.
  • the first preset area may include an area in the video frame
  • the second preset area may include one or more areas in the video frame.
  • the range, shape, and quantity of the second preset region may be set according to actual requirements, which are not limited in the embodiments of the present disclosure.
  • FIG. 3 shows a schematic diagram of a preset area according to an embodiment of the present disclosure.
  • the area A may be the first preset area
  • the area B and the area C may be the second preset area.
  • determining the third object from the first objects can be understood as follows: in the first video frame, when the second object is in a non-raised-hand state, the third object located in the second preset area is determined from the first objects, so that another object can be searched for gesture recognition when the second object does not raise a hand. This is closer to the gesture recognition situation in actual scenes and improves the effectiveness of gesture recognition.
  • determining the third object from the first objects may include: determining, as the third object, the object whose human body region located in the second preset region is closest to the first human body region; or determining, as the third object, the object whose human body region located in the second preset region has the highest degree of overlap with the first human body region. The embodiments of the present disclosure are not limited in this respect.
  • a coordinate system may be established based on the video frame; according to the coordinates of the center point or boundary points of each human body region, the distance between each human body region located in the second preset area and the first human body region is determined, and the object corresponding to the human body region with the smallest distance is determined as the third object.
  • when the third human body area is located in the second preset area, the entire third human body area may be located in the second preset area, or only a part of the third human body area may be located in the second preset area. It can be understood that, in the case that the third human body region and the second preset region have an overlapping region, the third human body region is determined to be located in the second preset region.
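The overlap test and nearest-candidate selection described above can be sketched as follows (the axis-aligned box format `(x1, y1, x2, y2)`, the helper names, and the squared-distance criterion between center points are illustrative assumptions):

```python
def overlaps(a, b):
    """True if axis-aligned boxes a and b (x1, y1, x2, y2) share any area."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def center(box):
    """Center point of a box."""
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def pick_third_object(body_boxes, second_preset_area, first_body_box):
    """Among body boxes overlapping the second preset area, pick the one
    whose center point is closest to the first human body region (the
    second object's box); returns None if no candidate overlaps the area."""
    candidates = [b for b in body_boxes if overlaps(b, second_preset_area)]
    if not candidates:
        return None
    cx, cy = center(first_body_box)
    return min(candidates,
               key=lambda b: (center(b)[0] - cx) ** 2 + (center(b)[1] - cy) ** 2)
```

A partial overlap with the second preset area is enough for a box to count as "located in" it, mirroring the overlapping-region rule above.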
  • the preset position condition may be a preset condition for determining whether to perform gesture recognition based on the positional relationship between the hand area and the human body area.
  • gesture recognition is performed on the third hand region to obtain a third gesture recognition result .
  • performing gesture recognition on the third hand region of the third object to obtain a third gesture recognition result can make the recognized gesture more effective and reduce misoperation.
  • the third object can be determined when the second object does not raise his hand, and when the third object raises his hand, gesture recognition is performed on the hand region of the third object, so that in a multi-person scenario Determine the effective manipulation object, improve the effectiveness of gesture recognition, and reduce misoperation.
  • the object in the center area of the video frame is more likely to be the manipulation object. When the first preset area is the central area, it may be considered that the second object in the first preset area has the highest probability of being the manipulation object, and the third object located in the second preset area has the next highest probability of being the manipulation object.
  • the video may include a second video frame located after the first video frame, and a third video frame located after the second video frame;
  • the gesture recognition method may further include:
  • in the third video frame, when the positional relationship between the fourth hand region of the second object and the fourth human body region of the second object satisfies the preset position condition, gesture recognition is performed on the fourth hand region to obtain a fourth gesture recognition result.
  • the second video frame may be a video frame immediately after the first video frame, or may be a video frame at a certain number of frames or a certain time interval after the first video frame; correspondingly, the third video frame may be a video frame immediately after the second video frame, or may be a video frame spaced by a certain number of frames or a certain period of time after the second video frame, which is not limited in this embodiment of the present disclosure.
  • the third gesture recognition result of the third object may include one of a valid gesture recognition result and an invalid gesture recognition result.
  • performing gesture recognition on the fourth hand region can be understood as follows: in the case where gesture recognition is performed on the hand region of the third object in the second video frame, if it is detected that the second object in the third video frame is in a raised-hand state, then, since the second object has a higher probability of being the manipulation object, gesture recognition can be switched to the hand region of the second object.
  • alternatively, the positional relationship between the fourth hand region of the second object and the fourth human body region of the second object may not satisfy the preset position condition. In this case, the positional relationship between the fifth hand region and the fifth human body region of the third object may continue to be determined, and when the preset position condition is satisfied, gesture recognition is performed on the fifth hand region to obtain a fifth gesture recognition result.
  • in this way, when it is detected that the second object in the first preset area is in a raised-hand state, gesture recognition can be switched to the hand area of the second object, so that the manipulation object with the higher probability can be determined more effectively, which improves the accuracy of gesture recognition.
  • the object in the central area of the video frame has a high probability of being the manipulation object.
  • the first preset area may include a central area of the video frames of the video; determining the second object from the first objects based on the positional relationship between the human body area and the first preset area may include: in a case where the first preset area includes multiple human body areas, determining the human body area with the smallest distance from the first preset area among the multiple human body areas as the first human body area; and determining the object corresponding to the first human body area as the second object.
  • the first preset region may include one human body region or multiple human body regions.
  • in a case where the first preset region includes one human body region, that human body region can be directly determined as the first human body region.
  • the distance between a human body area and the first preset area may be based on the distance between the center point of the human body area and the center point of the first preset area, or may be based on the distance between a boundary point of the human body area and the first preset area.
  • the coordinate system may be established based on the video frame, and the distances between the multiple human body regions and the first preset region may be determined by the coordinates.
  • the distance and the like may be determined by the center point coordinates of the multiple body regions and the center point coordinates of the first preset region.
  • the embodiment of the present disclosure does not limit how to determine the distances between the multiple body regions and the first preset region.
  • the second object is determined based on the distances between the multiple human body regions and the first preset region; in the case that the first preset region includes multiple human body regions, the object closer to the middle can be effectively determined as the manipulation object, which is closer to actual application scenarios and improves the accuracy of gesture recognition.
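A minimal sketch of selecting the second object as the body region nearest the central area (the box format, the helper name, and the center-point distance metric are illustrative assumptions consistent with the description above):

```python
def pick_second_object(body_boxes, central_area):
    """Return the body box (x1, y1, x2, y2) whose center point is closest
    to the center point of the first preset (central) area."""
    ax = (central_area[0] + central_area[2]) / 2.0
    ay = (central_area[1] + central_area[3]) / 2.0

    def squared_distance(box):
        bx = (box[0] + box[2]) / 2.0
        by = (box[1] + box[3]) / 2.0
        return (bx - ax) ** 2 + (by - ay) ** 2

    return min(body_boxes, key=squared_distance)
```

Squared distance is used because only the ordering matters; a boundary-point distance, as also mentioned above, would work the same way.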
  • a human body usually has two hands. If the first hand area includes two hand areas and the preset gesture includes a two-hand gesture (for example, the two hands form the shape of a heart), gesture recognition can be performed directly on the two first hand areas; if the first hand area includes two hand areas and the preset gestures only include one-hand gestures, one of the hand areas can be selected.
  • the gesture recognition method may further include: in the case that the first hand area includes two hand areas and the preset gesture is a single-hand gesture, determining one of the two hand areas as the first hand area.
  • the hand area of the left hand or the right hand in the first hand area may be determined as the first hand area.
  • the identity of the second object can also be determined by means of identity authentication, face recognition, etc., so that the operating habits of the second object are determined based on user data, and then the hand area of the left hand or the hand area of the right hand is selected for gesture recognition. For example, right-handed people are accustomed to raising the right hand to operate, while left-handed people are accustomed to raising the left hand to operate.
  • the effectiveness of gesture recognition can be further improved.
  • the preset position condition may be a preset condition for determining whether to perform gesture recognition based on the positional relationship between the hand area and the human body area. According to whether the positional relationship between the hand area and the human body area satisfies the preset position condition, it can be determined whether the object is in the state of raising the hand.
  • the preset position condition may include: a first height difference between the height of the target object's hand area and the height of the crotch area is greater than or equal to a height threshold; the height threshold may be positively correlated with a second height difference, where the second height difference is the height difference between the height of the shoulder region and the height of the crotch region of the target object, and the target object includes at least one of the second object and the third object.
  • the first height difference between the height of the hand region and the height of the crotch region of the target object may be the height difference between the position of the hand and the position of the crotch of the target object.
  • the height difference between the height of the shoulder area and the height of the crotch area of the target object may be the height difference between the position of the shoulder and the position of the crotch.
  • the key points of the joint parts of the human body can be determined.
  • the key point coordinates of each human body joint part can be known, so that the positions of the hand region, the crotch region and the shoulder region can be determined.
  • the first height difference between the height of the hand area and the height of the crotch area of the target object can be determined, and the second height difference between the height of the shoulder area and the height of the crotch area of the target object can be determined.
  • FIG. 4 shows a schematic diagram of a human body key point according to an embodiment of the present disclosure.
  • the numbers 0-13 in FIG. 4 represent the detected key points of human joints: for example, 0 represents the head, 1 the neck, 2 and 5 the shoulders, 3 and 6 the elbows, 4 and 7 the hands, 8 and 11 the crotch, 9 and 12 the knees, and 10 and 13 the feet.
  • the position of the hand 7 can be (x7, y7), and the height of the hand area can be y7;
  • the position of the part 5 can be (x5, y5), the height of the shoulder region can be y5;
  • the position of the crotch part 11 can be (x11, y11), and the height of the crotch region can be y11.
  • the first height difference between the height of the hand area and the height of the crotch area may be (y7-y11), and the second height difference between the height of the shoulder area and the height of the crotch area may be (y5-y11).
  • the height threshold may be set according to actual requirements.
  • the height threshold may be 1/3 of the second height difference, or 1/2 of the second height difference, etc.
  • the embodiments of the present disclosure are not limited in this respect.
  • when the first height difference is greater than or equal to the height threshold, it can be determined that the positional relationship between the hand area and the human body area satisfies the preset position condition, that is, the target object can be considered to be in a raised-hand state; when the first height difference is less than the height threshold, it can be determined that the positional relationship between the hand area and the human body area does not satisfy the preset position condition, that is, the target object can be considered to be in a non-raised-hand state. For example, if the height threshold is [1/3 × (y5-y11)], then in the case of (y7-y11) ≥ [1/3 × (y5-y11)], the preset position condition between the hand area and the human body area can be considered satisfied; otherwise, it is considered not satisfied.
  • the hand region can be recognized, which can reduce the recognition of the hand region that does not need a response, and improve the gesture recognition efficiency.
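The raised-hand check above can be sketched from the FIG. 4 keypoints (the dict layout, the pairing of joints 7/5/11 as one body side, and the assumption that the y coordinate increases upward, as the example (y7-y11) ≥ 1/3 × (y5-y11) implies, are illustrative; with image coordinates, where y grows downward, both differences would be negated):

```python
def is_hand_raised(keypoints, ratio=1.0 / 3.0, side='right'):
    """Raised-hand test: (hand height - crotch height) >= ratio *
    (shoulder height - crotch height), using the FIG. 4 joint numbering
    (4/7 hands, 2/5 shoulders, 8/11 crotch; which side each index
    belongs to is an assumption).

    `keypoints` maps joint index -> (x, y); y is assumed to grow upward.
    """
    hand, shoulder, hip = (7, 5, 11) if side == 'right' else (4, 2, 8)
    first_diff = keypoints[hand][1] - keypoints[hip][1]       # hand vs crotch
    second_diff = keypoints[shoulder][1] - keypoints[hip][1]  # shoulder vs crotch
    return first_diff >= ratio * second_diff
```

The `ratio` parameter corresponds to the height threshold being 1/3 (or 1/2, etc.) of the second height difference.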
  • FIG. 5 shows a schematic diagram of a gesture recognition process according to an embodiment of the present disclosure. As shown in FIG. 5, the video to be recognized is input, and human body detection and hand detection are performed on the input video according to the time sequence;
  • for the first video frame in the input video, if there is one first object in the first video frame, it is directly determined as the second object, and the gesture of the second object is recognized under the condition that the above preset position condition is satisfied between the hand area and the human body area of the second object (that is, the second object raises a hand);
  • the second object may be determined according to the positional relationship between the above-mentioned human body region and the first preset region;
  • the gesture of the third object is recognized when the above-mentioned preset position condition (that is, the third object raises his hand) is satisfied between the hand region of the third object and the human body region.
  • for subsequent video frames, if the gesture recognized in the first video frame is the gesture of the second object, the hand area of the second object is tracked, and the gesture of the second object is recognized when the second object raises a hand.
  • if the gesture recognized in the first video frame is the gesture of the third object, then when the above preset position condition is not satisfied between the hand area and the human body area of the second object (that is, the second object still does not raise a hand) and the above preset position condition is satisfied between the hand area and the human body area of the third object (that is, the third object still raises a hand), the gesture of the third object is recognized.
  • an effective manipulation object can be accurately determined from a plurality of objects, so as to realize accurate and efficient identification of gestures issued by the manipulation object that need to be responded to.
  • the person in the middle can be determined from the multiple persons detected in the video frame, and the main operating hand of that person can be determined; gesture recognition is performed when the main operating hand is raised. Therefore, the operator can be accurately determined and the gesture to be responded to can be identified, gesture recognition for non-operators and for gestures that do not require a response is reduced, and misoperation caused by misrecognition is reduced, thereby improving gesture recognition efficiency and accuracy.
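The per-frame target-selection flow described above can be sketched as follows (the function name and the predicate callbacks for area membership and the raised-hand state are illustrative assumptions; the priority order follows the description: a raised hand in the central area first, then the currently tracked object, then the second preset area):

```python
def choose_target(objects, prev_target, in_first_area, in_second_area, hand_raised):
    """Decide which detected object, if any, to run gesture recognition on
    for one frame; all helpers are assumed predicates over an object."""
    if len(objects) == 1:
        # a single detected object is directly treated as the second object
        return objects[0] if hand_raised(objects[0]) else None
    # a raised hand in the central (first) preset area has top priority
    for obj in objects:
        if in_first_area(obj) and hand_raised(obj):
            return obj
    # otherwise keep tracking the previous target while its hand stays raised
    if prev_target in objects and hand_raised(prev_target):
        return prev_target
    # finally fall back to a raised hand in the second preset area
    for obj in objects:
        if in_second_area(obj) and hand_raised(obj):
            return obj
    return None
```

Returning `None` means no object needs a response in this frame, so no recognition is run, which is what keeps misoperation low.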
  • in the related art, gesture recognition is usually performed with the hand as the dimension, and the gesture with the highest confidence is detected first. In a multi-person scenario, the gesture of the person in the middle cannot be correctly distinguished, the recognition efficiency is relatively low, and misoperation is easily triggered.
  • with the gesture recognition method of the embodiments of the present disclosure, the hands of the person in the middle can be preferentially recognized in a multi-person scenario, and those hands can be detected and tracked continuously without being lost.
  • the method of the embodiment of the present disclosure can reduce the calculation amount of the detection and identification algorithm, improve the processing performance, and is more targeted and better suited to the usage conditions in the actual scene.
  • the gesture recognition method according to the embodiment of the present disclosure can be applied to remote gesture recognition scenarios, and electronic devices, such as TVs, air conditioners, refrigerators, and other hardware devices equipped with cameras, can be intelligently controlled by gestures.
  • a TV has a built-in or external intelligent camera module in which an artificial intelligence (AI) hand detection and gesture recognition algorithm is provided; the gesture recognition result is obtained through the gesture recognition method of the embodiments of the present disclosure, and the TV is controlled by the gesture result. For example, the automatic photographing function is triggered by the victory gesture in FIG. 2b.
  • the present disclosure also provides gesture recognition devices, electronic devices, computer-readable storage media, and programs, all of which can be used to implement any gesture recognition method provided by the present disclosure.
  • FIG. 6 shows a block diagram of a gesture recognition apparatus according to an embodiment of the present disclosure. As shown in FIG. 6 , the apparatus includes:
  • an acquisition module 101 configured to acquire a video to be identified
  • a detection module 102 configured to perform human body detection on the video to obtain the number of first objects included in the video
  • the recognition module 103 is configured to recognize the gestures in the video according to the gesture recognition method corresponding to the number of the first objects, and obtain a gesture recognition result.
  • the number of the first objects is greater than or equal to two, and the video includes a first video frame;
  • the recognition module 103 includes: a first acquisition sub-module, configured to acquire, in the first video frame, the human body area and the hand area of at least one of the first objects respectively; a first determination sub-module, configured to determine a second object from the first objects based on the positional relationship between the human body area and a first preset area, where the human body area of the at least one object includes a first human body area of the second object, the hand area of the at least one object includes a first hand area of the second object, and the first human body area is located in the first preset area; and a first recognition sub-module, configured to perform gesture recognition on the first hand area to obtain a first gesture recognition result when the positional relationship between the first hand area and the first human body area satisfies a preset position condition.
  • the first gesture recognition result includes one of a valid gesture recognition result and an invalid gesture recognition result.
  • the video includes a second video frame located after the first video frame; if the first gesture recognition result includes an invalid gesture recognition result, the recognition module 103, It also includes: a second acquisition sub-module for acquiring the second human body region and the second hand region of the second object respectively in the second video frame; a second identification sub-module for When the positional relationship between the second hand region and the second human body region satisfies the preset position condition, gesture recognition is performed on the second hand region to obtain a second gesture recognition result.
  • the body area of the at least one subject includes a third body area of a third subject, and the hand area of the at least one subject includes a third hand area of the third subject;
  • the recognition module 103 further includes: a second determination sub-module, configured to determine the third object from the first objects when the positional relationship between the first hand region and the first human body region does not satisfy the preset position condition, where the third human body area is located in a second preset area; and a third recognition sub-module, configured to perform gesture recognition on the third hand region to obtain a third gesture recognition result when the positional relationship between the third hand area and the third human body area satisfies the preset position condition.
  • the second preset area partially overlaps with the first preset area, or the second preset area is adjacent to the first preset area.
  • the video includes a second video frame located after the first video frame, and a third video frame located after the second video frame; after the third gesture recognition result is obtained, the apparatus further includes: a fourth recognition sub-module, configured to perform gesture recognition on the fourth hand region to obtain a fourth gesture recognition result in response to the positional relationship between the fourth hand region of the second object and the fourth human body region of the second object in the third video frame satisfying the preset position condition.
  • the first preset area includes a central area of the video frame of the video;
  • the first determination sub-module includes: a human body area determination unit, configured to determine, in a case where the first preset area includes multiple human body areas, the human body area with the smallest distance from the first preset area among the multiple human body areas as the first human body area; and an object determination unit, configured to determine the object corresponding to the first human body area as the second object.
  • the apparatus further includes: a hand area determination module, configured to determine, in a case where the first hand area includes two hand areas and the preset gesture is a single-hand gesture, one of the two hand areas as the first hand area.
  • the preset position condition includes: a first height difference between the height of the target object's hand region and the height of the crotch region is greater than or equal to a height threshold; the height threshold is positively correlated with a second height difference, the second height difference is the height difference between the height of the shoulder region of the target object and the height of the crotch region, and the target object includes at least one of the second object and the third object.
  • the apparatus further includes: a control module, configured to control the electronic device to perform an operation corresponding to the valid gesture recognition result when the gesture recognition result is a valid gesture recognition result.
  • the corresponding gesture recognition method can be selected according to the number of people in the video for gesture recognition, thereby improving the accuracy of gesture recognition in multi-person scenes.
  • the functions or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the methods described in the above method embodiments.
  • Embodiments of the present disclosure further provide a computer-readable storage medium, on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the foregoing method is implemented.
  • the computer-readable storage medium may be a non-volatile computer-readable storage medium.
  • An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing instructions executable by the processor; wherein the processor is configured to invoke the instructions stored in the memory to execute the above method.
  • Embodiments of the present disclosure also provide a computer program product including computer-readable code; when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the gesture recognition method provided by any of the above embodiments.
  • Embodiments of the present disclosure further provide another computer program product for storing computer-readable instructions, which, when executed, cause the computer to perform operations of the gesture recognition method provided by any of the foregoing embodiments.
  • Embodiments of the present disclosure also provide a computer program, including computer-readable codes, when the computer-readable codes are executed in an electronic device, a processor in the electronic device executes the above method.
  • the electronic device may be provided as a terminal, server or other form of device.
  • FIG. 7 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure.
  • electronic device 800 may be a terminal such as a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, or personal digital assistant.
  • an electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814 , and the communication component 816 .
  • the processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 802 can include one or more processors 820 to execute instructions to perform all or some of the steps of the methods described above.
  • processing component 802 may include one or more modules that facilitate interaction between processing component 802 and other components.
  • processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802.
  • Memory 804 is configured to store various types of data to support operation at electronic device 800. Examples of such data include instructions for any application or method operating on electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like. Memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • Power supply assembly 806 provides power to various components of electronic device 800 .
  • Power supply components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to electronic device 800 .
  • Multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
  • the multimedia component 808 includes a front-facing camera and/or a rear-facing camera.
  • the front camera and/or the rear camera may receive external multimedia data.
  • At least one of the front camera and the rear camera may have a fixed optical lens system or have focusing and optical zoom capability.
  • Audio component 810 is configured to output and/or input audio signals.
  • audio component 810 includes a microphone (MIC) that is configured to receive external audio signals when electronic device 800 is in operating modes, such as calling mode, recording mode, and voice recognition mode.
  • the received audio signal may be further stored in memory 804 or transmitted via communication component 816 .
  • audio component 810 also includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to: home button, volume buttons, start button, and lock button.
  • Sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of electronic device 800 .
  • the sensor assembly 814 can detect the on/off state of the electronic device 800 and the relative positioning of components (such as the display and the keypad of the electronic device 800); the sensor assembly 814 can also detect a change in the position of the electronic device 800 or one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and changes in the temperature of the electronic device 800.
  • Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • Sensor assembly 814 may also include a light sensor, such as a complementary metal oxide semiconductor (CMOS) or charge coupled device (CCD) image sensor, for use in imaging applications.
  • the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 816 is configured to facilitate wired or wireless communication between electronic device 800 and other devices.
  • the electronic device 800 may access a wireless network based on a communication standard, such as wireless network (WiFi), second generation mobile communication technology (2G) or third generation mobile communication technology (3G), or a combination thereof.
  • the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
  • In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, such as the memory 804 including computer program instructions executable by the processor 820 of the electronic device 800 to perform the above method.
  • FIG. 8 shows a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
  • the electronic device 1900 may be provided as a server.
  • electronic device 1900 includes processing component 1922, which further includes one or more processors, and a memory resource represented by memory 1932 for storing instructions executable by processing component 1922, such as applications.
  • An application program stored in memory 1932 may include one or more modules, each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above-described methods.
  • the electronic device 1900 may also include a power supply assembly 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input output (I/O) interface 1958 .
  • the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system introduced by Apple (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
  • a non-volatile computer-readable storage medium such as memory 1932 comprising computer program instructions executable by processing component 1922 of electronic device 1900 to perform the above-described method.
  • the present disclosure may be a system, method and/or computer program product.
  • the computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for causing a processor to implement various aspects of the present disclosure.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of computer-readable storage media includes: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or raised structures in a groove with instructions stored thereon, and any suitable combination of the above.
  • Computer-readable storage media, as used herein, are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber optic cables), or electrical signals transmitted through electrical wires.
  • the computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or network interface in at least one computing/processing device receives computer-readable program instructions from a network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
  • Computer program instructions for carrying out operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • the computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • In some embodiments, custom electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), can be personalized by utilizing state information of the computer-readable program instructions, and these electronic circuits can execute the computer-readable program instructions to implement various aspects of the present disclosure.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, a programmable data processing apparatus, and/or other equipment to operate in a specific manner, so that the computer-readable medium on which the instructions are stored includes an article of manufacture comprising instructions for implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other equipment, causing a series of operational steps to be performed on the computer, other programmable apparatus, or other equipment to produce a computer-implemented process, such that the instructions executed on the computer, other programmable apparatus, or other equipment implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • At least one block in the flowcharts or block diagrams may represent a module, segment, or portion of instructions that includes one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • At least one block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
  • the computer program product can be specifically implemented by hardware, software or a combination thereof.
  • In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK).

Abstract

A gesture recognition method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a video to be recognized (S11); performing human body detection on the video to obtain the number of first objects included in the video (S12); and recognizing gestures in the video according to a gesture recognition mode corresponding to the number of first objects, to obtain a gesture recognition result (S13).

Description

Gesture recognition method and apparatus, electronic device, and storage medium
The present disclosure claims priority to Chinese patent application No. 202011363248.6, entitled "Gesture recognition method and apparatus, electronic device, and storage medium", filed with the China National Intellectual Property Administration on November 27, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of computer technology, and in particular to a gesture recognition method and apparatus, an electronic device, and a storage medium.
Background
Gesture interaction is a human-computer interaction mode in which computer technology is used to recognize human gestures and convert them into commands for controlling electronic devices (for example, smart TVs and smart air conditioners). Gesture recognition is the key technology for realizing gesture interaction.
Summary
The present disclosure provides a technical solution for gesture recognition.
According to an aspect of the present disclosure, a gesture recognition method is provided, including: acquiring a video to be recognized; performing human body detection on the video to obtain the number of first objects included in the video; and recognizing gestures in the video according to a gesture recognition mode corresponding to the number of first objects, to obtain a gesture recognition result.
In a possible implementation, the number of first objects is greater than or equal to two, and the video includes a first video frame. Recognizing gestures in the video according to the gesture recognition mode corresponding to the number of first objects to obtain a gesture recognition result includes: in the first video frame, respectively acquiring a human body region and a hand region of at least one of the first objects; determining a second object from the first objects based on a positional relationship between the human body regions and a first preset region, where the human body region of the at least one object includes a first human body region of the second object, the hand region of the at least one object includes a first hand region of the second object, and the first human body region is located in the first preset region; and when the positional relationship between the first hand region and the first human body region satisfies a preset position condition, performing gesture recognition on the first hand region to obtain a first gesture recognition result.
In a possible implementation, the first gesture recognition result includes one of a valid gesture recognition result and an invalid gesture recognition result.
In a possible implementation, the video includes a second video frame located after the first video frame. When the first gesture recognition result includes an invalid gesture recognition result, recognizing gestures in the video according to the gesture recognition mode corresponding to the number of first objects to obtain a gesture recognition result further includes: in the second video frame, respectively acquiring a second human body region and a second hand region of the second object; and when the positional relationship between the second hand region and the second human body region satisfies the preset position condition, performing gesture recognition on the second hand region to obtain a second gesture recognition result.
In a possible implementation, the human body region of the at least one object includes a third human body region of a third object, and the hand region of the at least one object includes a third hand region of the third object. Recognizing gestures in the video according to the gesture recognition mode corresponding to the number of first objects to obtain a gesture recognition result further includes: when the positional relationship between the first hand region and the first human body region does not satisfy the preset position condition, determining the third object from the first objects, where the third human body region is located in a second preset region; and when the positional relationship between the third hand region and the third human body region satisfies the preset position condition, performing gesture recognition on the third hand region to obtain a third gesture recognition result.
In a possible implementation, the second preset region partially overlaps the first preset region, or the second preset region is adjacent to the first preset region.
In a possible implementation, the video includes a second video frame located after the first video frame and a third video frame located after the second video frame. After the third gesture recognition result is obtained, the method further includes: in response to a positional relationship between a fourth hand region of the second object and a fourth human body region of the second object in the third video frame satisfying the preset position condition, performing gesture recognition on the fourth hand region to obtain a fourth gesture recognition result.
In a possible implementation, the first preset region includes a central region of the video frames of the video. Determining the second object from the first objects based on the positional relationship between the human body regions and the first preset region includes: when the first preset region includes multiple human body regions, determining, among the multiple human body regions, the human body region with the smallest distance to the first preset region as the first human body region; and determining the object corresponding to the first human body region as the second object.
In a possible implementation, the method further includes: when the first hand region includes two hand regions and the preset gesture is a one-handed gesture, determining one of the two hand regions as the first hand region.
In a possible implementation, the preset position condition includes: a first height difference between the hand region height and the hip region height of a target object is greater than or equal to a height threshold, where the height threshold is positively correlated with a second height difference, the second height difference is the height difference between the shoulder region height and the hip region height of the target object, and the target object includes at least one of the second object and the third object.
In a possible implementation, the method further includes: when the gesture recognition result is a valid gesture recognition result, controlling an electronic device to perform an operation corresponding to the valid gesture recognition result.
According to an aspect of the present disclosure, a gesture recognition apparatus is provided, including: an acquisition module configured to acquire a video to be recognized; a detection module configured to perform human body detection on the video to obtain the number of first objects included in the video; and a recognition module configured to recognize gestures in the video according to a gesture recognition mode corresponding to the number of first objects, to obtain a gesture recognition result.
In a possible implementation, the number of first objects is greater than or equal to two, and the video includes a first video frame. The recognition module includes: a first acquisition submodule configured to respectively acquire, in the first video frame, a human body region and a hand region of at least one of the first objects; a first determination submodule configured to determine a second object from the first objects based on a positional relationship between the human body regions and a first preset region, where the human body region of the at least one object includes a first human body region of the second object, the hand region of the at least one object includes a first hand region of the second object, and the first human body region is located in the first preset region; and a first recognition submodule configured to perform gesture recognition on the first hand region to obtain a first gesture recognition result when the positional relationship between the first hand region and the first human body region satisfies a preset position condition.
In a possible implementation, the first gesture recognition result includes one of a valid gesture recognition result and an invalid gesture recognition result.
In a possible implementation, the video includes a second video frame located after the first video frame. When the first gesture recognition result includes an invalid gesture recognition result, the recognition module further includes: a second acquisition submodule configured to respectively acquire, in the second video frame, a second human body region and a second hand region of the second object; and a second recognition submodule configured to perform gesture recognition on the second hand region to obtain a second gesture recognition result when the positional relationship between the second hand region and the second human body region satisfies the preset position condition.
In a possible implementation, the human body region of the at least one object includes a third human body region of a third object, and the hand region of the at least one object includes a third hand region of the third object. The recognition module further includes: a second determination submodule configured to determine the third object from the first objects when the positional relationship between the first hand region and the first human body region does not satisfy the preset position condition, where the third human body region is located in a second preset region; and a third recognition submodule configured to perform gesture recognition on the third hand region to obtain a third gesture recognition result when the positional relationship between the third hand region and the third human body region satisfies the preset position condition.
In a possible implementation, the second preset region partially overlaps the first preset region, or the second preset region is adjacent to the first preset region.
In a possible implementation, the video includes a second video frame located after the first video frame and a third video frame located after the second video frame. After the third gesture recognition result is obtained, the apparatus further includes: a fourth recognition submodule configured to perform gesture recognition on a fourth hand region of the second object to obtain a fourth gesture recognition result, in response to a positional relationship between the fourth hand region and a fourth human body region of the second object in the third video frame satisfying the preset position condition.
In a possible implementation, the first preset region includes a central region of the video frames of the video. The first determination submodule includes: a human body region determination unit configured to determine, when the first preset region includes multiple human body regions, the human body region with the smallest distance to the first preset region among the multiple human body regions as the first human body region; and an object determination unit configured to determine the object corresponding to the first human body region as the second object.
In a possible implementation, the apparatus further includes: a hand region determination module configured to determine one of the two hand regions as the first hand region when the first hand region includes two hand regions and the preset gesture is a one-handed gesture.
In a possible implementation, the preset position condition includes: a first height difference between the hand region height and the hip region height of a target object is greater than or equal to a height threshold, where the height threshold is positively correlated with a second height difference, the second height difference is the height difference between the shoulder region height and the hip region height of the target object, and the target object includes at least one of the second object and the third object.
In a possible implementation, the apparatus further includes: a control module configured to control an electronic device to perform an operation corresponding to the valid gesture recognition result when the gesture recognition result is a valid gesture recognition result.
According to an aspect of the present disclosure, an electronic device is provided, including: a processor; and a memory for storing instructions executable by the processor, where the processor is configured to invoke the instructions stored in the memory to execute the above method.
According to an aspect of the present disclosure, a computer-readable storage medium is provided, on which computer program instructions are stored; when the computer program instructions are executed by a processor, the above method is implemented.
According to an aspect of the present disclosure, a computer program is provided, including computer-readable code; when the computer-readable code runs on an electronic device, a processor in the electronic device executes the above method.
In the embodiments of the present disclosure, human body detection is performed on a video to obtain the number of first objects included in the video, and gestures in the video are recognized according to a gesture recognition mode corresponding to the number of first objects to obtain a gesture recognition result, so that the gesture recognition mode can be selected according to the number of people in the video. In this way, a more targeted gesture recognition mode can be adopted for single-person or multi-person scenes, and the accuracy of gesture recognition can be improved, especially in multi-person scenes.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the present disclosure.
FIG. 1 shows a flowchart of a gesture recognition method according to an embodiment of the present disclosure.
FIG. 2a shows a first schematic diagram of a preset gesture according to an embodiment of the present disclosure.
FIG. 2b shows a second schematic diagram of a preset gesture according to an embodiment of the present disclosure.
FIG. 2c shows a third schematic diagram of a preset gesture according to an embodiment of the present disclosure.
FIG. 3 shows a schematic diagram of preset regions according to an embodiment of the present disclosure.
FIG. 4 shows a schematic diagram of human body key points according to an embodiment of the present disclosure.
FIG. 5 shows a schematic diagram of a gesture recognition process according to an embodiment of the present disclosure.
FIG. 6 shows a block diagram of a gesture recognition apparatus according to an embodiment of the present disclosure.
FIG. 7 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
FIG. 8 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features, and aspects of the present disclosure are described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless otherwise indicated.
The word "exemplary" as used herein means "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as superior to or better than other embodiments.
The term "and/or" herein describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate three cases: A alone, both A and B, and B alone. In addition, the term "at least one" herein indicates any one of multiple items or any combination of at least two of multiple items; for example, including at least one of A, B, and C may indicate including any one or more elements selected from the set consisting of A, B, and C.
In addition, numerous specific details are given in the following detailed description in order to better explain the present disclosure. Those skilled in the art should understand that the present disclosure can also be implemented without certain specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail in order to highlight the gist of the present disclosure.
FIG. 1 shows a flowchart of a gesture recognition method according to an embodiment of the present disclosure. As shown in FIG. 1, the gesture recognition method includes:
Step S11: acquire a video to be recognized.
Step S12: perform human body detection on the video to obtain the number of first objects included in the video.
Step S13: recognize gestures in the video according to a gesture recognition mode corresponding to the number of first objects, to obtain a gesture recognition result.
In a possible implementation, the gesture recognition method may be executed by an electronic device such as a terminal device or a server. The terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA) device, a handheld device, a computing device, an in-vehicle device, a wearable device, or the like. The method may be implemented by a processor invoking computer-readable instructions stored in a memory, or may be executed by a server.
In a possible implementation, the video to be recognized may be acquired in step S11. The video to be recognized may be a video captured by a video capture device. The video capture device may be any known suitable video capture device, such as, but not limited to, an ordinary webcam, a depth camera, or a digital video camera; the present disclosure does not limit the specific type of the video capture device.
In a possible implementation, the video capture device in the present disclosure may be part of the electronic device that executes the gesture recognition method, or may be a device independent of that electronic device. The embodiments of the present disclosure do not limit the relationship between the video capture device and the electronic device.
In a possible implementation, in step S12, any known human body detection method may be used to perform human body detection on the video. For example, the human body detection method may include extracting human body key points in the video frames (such as key points of 13 joint parts), where the number and positions of the key points may be determined according to actual requirements and are not limited here; alternatively, a human body contour may be extracted from the video frames. The embodiments of the present disclosure do not limit which human body detection method is used.
It can be understood that the video to be recognized acquired in step S11 may contain video frames that include a human body and video frames that do not. By performing human body detection on the video, the video frames that include first objects, that is, the video frames that include a human body, can be determined. A video frame may contain one or more human bodies; correspondingly, the first objects may include one or more objects.
In a possible implementation, as described above, the first objects may include one or more objects. In step S12, the number of first objects included in the video may be determined according to the human body detection result. The human body detection result may be detected human body key points or a human body contour, from which the number of human bodies in a video frame, that is, the number of first objects in the video frame, can be determined.
In a possible implementation, gesture recognition modes corresponding to different numbers of first objects may be preset, and gestures in the video may then be recognized in step S13 according to the gesture recognition mode corresponding to the number of first objects.
In a possible implementation, presetting the gesture recognition mode corresponding to the number of first objects may include: when the number of first objects is equal to one, taking the first object as the controlling object and performing gesture recognition on the hand region of one or both hands of the controlling object; when the number of first objects is greater than or equal to two, determining the controlling object from the first objects according to the positions of the first objects and performing gesture recognition on the hand region of one or both hands of the controlling object. A single hand may be the left hand or the right hand. The controlling object refers to the object that controls the electronic device through gestures.
In a possible implementation, the gesture recognition result may include one of a valid gesture recognition result and an invalid gesture recognition result. The valid gesture recognition result may include a result that matches a preset gesture, and the invalid gesture recognition result may include a result that does not match any preset gesture. A preset gesture may be a preset gesture pattern corresponding to controlling the electronic device to perform a corresponding operation.
FIG. 2a, FIG. 2b, and FIG. 2c show schematic diagrams of preset gestures according to embodiments of the present disclosure. Preset gestures may include, but are not limited to, the gestures shown in FIG. 2a, FIG. 2b, and FIG. 2c; the gesture in FIG. 2a may correspond to a confirmation operation, the gesture in FIG. 2b may correspond to a switching operation, and the gesture in FIG. 2c may correspond to a closing operation. A gesture recognition result that matches any of the gestures shown in FIG. 2a, FIG. 2b, and FIG. 2c may be a valid gesture recognition result; correspondingly, a gesture recognition result that matches none of them may be an invalid gesture recognition result.
It should be understood that although the above preset gestures and their corresponding operations are introduced as examples, those skilled in the art will understand that the present disclosure is not limited thereto. In fact, users can set different preset gestures and the operations corresponding to them according to actual application scenarios, which is not limited by the embodiments of the present disclosure.
In a possible implementation, any known gesture recognition method may be used to recognize gestures in the video, for example, a gesture recognition method based on geometric features or a gesture recognition method based on orientation histograms, which is not limited by the embodiments of the present disclosure.
In a possible implementation, whether a gesture recognition result matches a preset gesture may be judged according to the confidence of the gesture recognition result. A gesture recognition result that matches a preset gesture may be a valid gesture recognition result, and one that does not match may be an invalid gesture recognition result. The confidence is a value measuring the reliability of the gesture recognition result and may be output by a pre-trained neural network; the higher the confidence, the more reliable the recognized result. A confidence threshold may be set, and a gesture recognition result whose confidence is higher than the threshold may be considered a valid gesture recognition result. The present disclosure does not limit the specific value of the confidence threshold.
In the embodiments of the present disclosure, human body detection is performed on the video to obtain the number of first objects included in the video, and gestures in the video are recognized according to the gesture recognition mode corresponding to the number of first objects to obtain a gesture recognition result, so that the gesture recognition mode can be selected according to the number of people in the video, thereby improving the accuracy of gesture recognition in multi-person scenes.
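The confidence-based validity check described above can be sketched as follows. This is a minimal illustration: the gesture names, the 0.8 threshold, and the fixed label/confidence inputs (standing in for a trained network's output) are assumptions, not part of the disclosure.

```python
# Sketch: map a classifier output (label, confidence) to a valid or
# invalid gesture recognition result, per the confidence-threshold rule.
CONFIDENCE_THRESHOLD = 0.8  # assumed value; the disclosure leaves it open

PRESET_GESTURES = {"confirm", "switch", "close"}  # hypothetical preset set

def to_recognition_result(label, confidence):
    """Return (gesture, is_valid) for one recognized hand region."""
    if label in PRESET_GESTURES and confidence >= CONFIDENCE_THRESHOLD:
        return label, True   # valid gesture recognition result
    return None, False       # invalid: matches no preset gesture

print(to_recognition_result("confirm", 0.93))  # ('confirm', True)
print(to_recognition_result("confirm", 0.42))  # (None, False)
```

A result below the threshold, or with an unknown label, is treated as an invalid gesture recognition result and triggers no device operation.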
In a possible implementation, as described above, the number of first objects may be greater than or equal to two, and the video may include a first video frame. In step S13, recognizing gestures in the video according to the gesture recognition mode corresponding to the number of first objects to obtain a gesture recognition result may include:
in the first video frame, respectively acquiring a human body region and a hand region of at least one of the first objects;
determining a second object from the first objects based on the positional relationship between the human body regions and a first preset region, where the human body region of the at least one object includes a first human body region of the second object, the hand region of the at least one object includes a first hand region of the second object, and the first human body region is located in the first preset region;
when the positional relationship between the first hand region and the first human body region satisfies a preset position condition, performing gesture recognition on the first hand region to obtain a first gesture recognition result.
In a possible implementation, the human body region may be determined according to the human body detection result in the present disclosure. For example, a human body bounding box may be determined according to the detected human body key points or human body contour, and the region of the bounding box in the video frame may be taken as the human body region of each object; alternatively, the region of the detected human body contour in the video frame may be directly taken as the human body region of each object, which is not limited by the embodiments of the present disclosure.
In a possible implementation, any known hand detection method may be used to perform hand detection on the video frames, and the hand region of at least one object may then be determined according to the hand detection result. The hand detection method may include, for example, extracting hand key points in the video frames (such as key points of 20 hand joints), where the number and positions of the hand key points may be determined according to actual requirements and are not limited here; alternatively, a hand contour may be extracted from the video frames. The embodiments of the present disclosure do not limit which hand detection method is used.
In a possible implementation, the hand region may be determined according to the hand detection result in the present disclosure. For example, a hand bounding box may be determined according to the detected hand key points or hand contour, and the region of the bounding box in the video frame may be taken as the hand region of each object; alternatively, the region of the detected hand contour in the video frame may be directly taken as the hand region of each object, which is not limited by the embodiments of the present disclosure.
In a possible implementation, the first preset region may be a region used to determine the first object in a video frame; for example, the first preset region may be the central region of the video frame. The range and shape of the first preset region may be set according to actual requirements, which is not limited by the embodiments of the present disclosure.
In a possible implementation, for each video frame in the video, human body detection and hand detection may be performed simultaneously, or sequentially in a set detection order. The order may be set according to factors that may affect it, such as the processing capability of the device implementing the detection function, the resource occupation of the device, and the latency constraints of the application, which is not limited by the embodiments of the present disclosure.
In a possible implementation, the positional relationship between the human body regions and the first preset region may be the distance between the human body region of at least one object and the first preset region, or the degree of overlap between the human body region of at least one object and the first preset region. Correspondingly, determining the second object from the first objects based on this positional relationship may be taking the object whose human body region has the shortest distance to the first preset region as the second object, or taking the object whose human body region has the largest overlap with the first preset region as the second object, which is not limited by the embodiments of the present disclosure.
In a possible implementation, the first human body region being located in the first preset region may mean that the entire first human body region is located in the first preset region, or that part of the first human body region is located in the first preset region. That is, when the first human body region and the first preset region have an overlapping region, the first human body region is determined to be located in the first preset region.
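The "partial overlap counts as located in" rule above can be sketched with axis-aligned boxes. The (x1, y1, x2, y2) box layout and the concrete coordinates are illustrative assumptions:

```python
# Sketch: decide whether a body box lies "in" the preset region,
# where any non-empty intersection counts, per the rule above.
def boxes_overlap(a, b):
    """a, b are (x1, y1, x2, y2) axis-aligned boxes."""
    ix = min(a[2], b[2]) - max(a[0], b[0])  # intersection width
    iy = min(a[3], b[3]) - max(a[1], b[1])  # intersection height
    return ix > 0 and iy > 0

preset_region = (200, 0, 440, 480)   # hypothetical central strip of the frame
body_box = (400, 50, 560, 470)       # only partially inside the strip
print(boxes_overlap(body_box, preset_region))  # True
```

A body box that merely touches or crosses the preset region's boundary therefore already qualifies as "located in" that region.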
In a possible implementation, the positional relationship between the first hand region of the second object and the first human body region may include a height relationship between the first hand region and the first human body region. The preset position condition may be a preset condition, based on the positional relationship between a hand region and a human body region, for judging whether to perform gesture recognition. For example, the preset position condition may be that the hand region is higher than the hip region, or that the hand region is higher than the elbow region. The specific content of the preset position condition may be set according to the gestures that can be used to control the electronic device to perform corresponding operations, which is not limited by the embodiments of the present disclosure.
It can be understood that when a controlling object expects to control the electronic device through gestures, the control gesture is usually made with the hand raised. Therefore, when the positional relationship between the first hand region and the first human body region satisfies the preset position condition, that is, when the second object is in a hand-raised state, performing gesture recognition on the first hand region to obtain the first gesture recognition result makes the recognized gestures more effective and reduces misoperations.
In a possible implementation, for the video frames in the video, human body detection and hand detection may be performed simultaneously, or sequentially in a set detection order, which may be set according to factors such as the processing capability of the device implementing the detection function, its resource occupation, and the latency constraints of the application, which is not limited by the embodiments of the present disclosure.
In a possible implementation, when the positional relationship between the first hand region and the first human body region satisfies the preset position condition, gesture recognition may be performed on the first hand region to obtain the first gesture recognition result. Any known gesture recognition method may be used to perform gesture recognition on the first hand region, for example, a method based on geometric features or a method based on orientation histograms, which is not limited by the embodiments of the present disclosure.
In a possible implementation, the first gesture recognition result may include one of a valid gesture recognition result and an invalid gesture recognition result. The valid gesture recognition result may include a result that matches a preset gesture, and the invalid gesture recognition result may include a result that does not match any preset gesture. A preset gesture may be a preset gesture pattern corresponding to controlling the electronic device to perform a corresponding operation.
In the embodiments of the present disclosure, the second object can be determined from multiple first objects, and gesture recognition is performed on the hand region of the second object when the second object is in a hand-raised state, so that a valid controlling object is determined in a multi-person scene and gesture recognition is then performed on the controlling object, which can improve the effectiveness of gesture recognition and reduce misoperations.
In a possible implementation, the method further includes: when the gesture recognition result is a valid gesture recognition result, controlling the electronic device to perform the operation corresponding to the valid gesture recognition result. For example, when the first gesture recognition result matches the gesture shown in FIG. 2a, the electronic device may be controlled to perform a confirmation operation.
In a possible implementation, when the recognized gesture recognition result is a valid gesture recognition result, the electronic device may be controlled to perform the corresponding operation by sending an operation instruction corresponding to the preset gesture.
In a possible implementation, the controlled electronic device (which may be called the first electronic device) may be the same as or different from the electronic device executing the gesture recognition method according to the embodiments of the present disclosure (which may be called the second electronic device). When the first electronic device and the second electronic device are the same electronic device, the electronic device may be, for example, a terminal such as a smart TV or a smartphone; human body detection, hand detection, and gesture recognition are implemented on the same device, and corresponding operations, such as channel switching and remote photographing, are performed according to the recognized gestures. When the first electronic device and the second electronic device are different electronic devices, the first electronic device may be, for example, a terminal such as a smart TV or a smartphone, and the second electronic device may be any terminal or server; the second electronic device implements human body detection, hand detection, and gesture recognition, and controls the first electronic device to perform corresponding operations according to the recognized gestures. The embodiments of the present disclosure do not limit this.
In a possible implementation, the video capture device in the present disclosure may be part of the first electronic device or the second electronic device, or may be a device independent of both. The embodiments of the present disclosure do not limit the relationship between the video capture device and the controlled electronic device.
In a possible implementation, when the valid gesture recognition result of the controlling object in the current video frame is the same as the valid gesture recognition result of the controlling object in a previous video frame, the operation performed by the electronic device may remain unchanged, or the same operation may be performed again (for example, switching the channel again); when the valid gesture recognition result of the controlling object in the current video frame differs from that in a previous video frame, the electronic device may be controlled to perform the operation corresponding to the valid gesture recognition result of the controlling object in the current video frame.
According to the embodiments of the present disclosure, remote control of electronic devices through gestures can be realized; in multi-person scenes, the accuracy of gesture recognition can be effectively improved, improving the user experience.
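The mapping from a valid gesture recognition result to a device operation can be sketched as a simple lookup; the gesture and operation names below are hypothetical stand-ins for the operations of FIG. 2a, FIG. 2b, and FIG. 2c:

```python
# Sketch: dispatch a valid gesture recognition result to a device operation.
OPERATIONS = {                      # hypothetical mapping, cf. FIG. 2a-2c
    "confirm": "CONFIRM",
    "switch": "SWITCH_CHANNEL",
    "close": "CLOSE",
}

def dispatch(gesture_result):
    """Map a valid gesture recognition result to an operation instruction.
    Invalid results (no preset gesture matched) produce no operation."""
    return OPERATIONS.get(gesture_result)

print(dispatch("switch"))  # SWITCH_CHANNEL
print(dispatch(None))      # None
```

In a two-device setup, the returned instruction would be sent from the recognizing device (the second electronic device) to the controlled device (the first electronic device).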
In a possible implementation, the video may contain video frames based on a time sequence, from which the video frames after the first video frame can be determined. It can be understood that the video frames after the first video frame may be selected at a certain time interval or frame interval; for example, one video frame may be selected every 2 frames or every 0.2 seconds, or all video frames may be selected in sequence. The embodiments of the present disclosure do not limit the selection of video frames after the first video frame.
In a possible implementation, the video may include a second video frame located after the first video frame. When the first gesture recognition result includes an invalid gesture recognition result, in step S13, recognizing gestures in the video according to the gesture recognition mode corresponding to the number of first objects to obtain a gesture recognition result further includes:
in the second video frame, respectively acquiring a second human body region and a second hand region of the second object;
when the positional relationship between the second hand region and the second human body region satisfies the preset position condition, performing gesture recognition on the second hand region to obtain a second gesture recognition result.
In a possible implementation, as described above, the gesture recognition result may include one of a valid gesture recognition result and an invalid gesture recognition result, and the invalid gesture recognition result may include a result that does not match any preset gesture. The case where the first gesture recognition result includes an invalid gesture recognition result can thus be understood as the recognized gesture of the controlling object not matching any preset gesture.
In a possible implementation, the positional relationship between the second hand region and the second human body region satisfying the preset position condition can be understood as the second object still being in a hand-raised state in the second video frame. Performing gesture recognition on the second hand region in this case can be understood as: when the gesture of the second object recognized in the first video frame does not match any preset gesture, and the second object still raises a hand in the second video frame, performing gesture recognition on the hand region of the second object in the second video frame.
In a possible implementation, in the second video frame, the second human body region may be determined according to the human body detection result, and the second hand region may be determined according to the hand detection result. The human body detection result and the hand detection result may be obtained using the human body detection methods and hand detection methods disclosed in the above embodiments of the present disclosure, which is not limited by the embodiments of the present disclosure.
In a possible implementation, the second video frame may be a video frame immediately following the first video frame, or a video frame a certain number of frames or a certain time after the first video frame, which is not limited by the embodiments of the present disclosure.
In the embodiments of the present disclosure, gesture tracking of the controlling object can be realized, gesture recognition can be performed in a more targeted manner, and the number of times the controlling object is switched can be reduced, thereby improving processing efficiency.
As described above, when an object expects to control an electronic device through gestures, the control gesture is usually made with the hand raised. The second object may be in a hand-raised state or in a non-raised state. When the second object does not raise a hand, it can be considered that the second object may not be controlling the electronic device through gestures.
In a possible implementation, the human body region of the at least one object may include a third human body region of a third object, and the hand region of the at least one object may include a third hand region of the third object. In step S13, recognizing gestures in the video according to the gesture recognition mode corresponding to the number of first objects to obtain a gesture recognition result may further include:
when the positional relationship between the first hand region and the first human body region does not satisfy the preset position condition, determining the third object from the first objects, where the third human body region is located in a second preset region;
when the positional relationship between the third hand region and the third human body region satisfies the preset position condition, performing gesture recognition on the third hand region to obtain a third gesture recognition result.
In a possible implementation, the third object may be an object among the first objects other than the second object. The second preset region may be a region in the video frame used to determine the third object.
In a possible implementation, the second preset region may partially overlap the first preset region, or the second preset region may be adjacent to the first preset region. The degree of overlap between the second preset region and the first preset region may be set according to actual requirements, which is not limited by the embodiments of the present disclosure.
In a possible implementation, the first preset region may include one region in the video frame, and the second preset region may include one or more regions in the video frame. The range, shape, and number of second preset regions may be set according to actual requirements, which is not limited by the embodiments of the present disclosure.
FIG. 3 shows a schematic diagram of preset regions according to an embodiment of the present disclosure. As shown in FIG. 3, region A may be the first preset region, and regions B and C may be second preset regions.
In a possible implementation, determining the third object from the first objects when the positional relationship between the first hand region and the first human body region does not satisfy the preset position condition can be understood as: when the second object does not raise a hand in the first video frame, determining the third object located in the second preset region from the first objects. In this way, when the second object does not raise a hand, another object can be sought for gesture recognition, which is closer to gesture recognition in actual scenes and improves the effectiveness of gesture recognition.
In a possible implementation, determining the third object from the first objects may be determining, among the objects whose human body regions are located in the second preset region, the object whose human body region is closest to the human body region of the second object as the third object; or determining the object whose human body region has the highest overlap with the human body region of the second object as the third object, which is not limited by the embodiments of the present disclosure.
In a possible implementation, a coordinate system may be established based on the video frame, and the distance between the human body region of each object located in the second preset region and the human body region of the second object may be determined according to the coordinates of the center points or boundary points of the human body regions; the object corresponding to the closest human body region is then determined as the third object.
In a possible implementation, the third human body region being located in the second preset region may mean that the entire third human body region is located in the second preset region, or that part of the third human body region is located in the second preset region. That is, when the third human body region and the second preset region have an overlapping region, the third human body region is determined to be located in the second preset region.
As described above, the preset position condition may be a preset condition, based on the positional relationship between a hand region and a human body region, for judging whether to perform gesture recognition.
In a possible implementation, performing gesture recognition on the third hand region to obtain the third gesture recognition result when the positional relationship between the third hand region and the third human body region satisfies the preset position condition can be understood as: when the third object is in a hand-raised state, performing gesture recognition on the third hand region of the third object to obtain the third gesture recognition result, which makes the recognized gestures more effective and reduces misoperations.
In the embodiments of the present disclosure, the third object can be determined when the second object does not raise a hand, and gesture recognition is performed on the hand region of the third object when the third object raises a hand, so that a valid controlling object is determined in a multi-person scene, improving the effectiveness of gesture recognition and reducing misoperations.
It can be appreciated that when an object expects to control an electronic device, it is usually located in the central region of the field of view of the electronic device or the video capture device; that is, an object in the central region of a video frame is more likely to be the controlling object. In a possible implementation, when the first preset region is the central region, the second object located in the first preset region can be considered the most likely controlling object, followed by the third object located in the second preset region.
In a possible implementation, the video may include a second video frame located after the first video frame, and a third video frame located after the second video frame.
After the third gesture recognition result is obtained, the gesture recognition method may further include:
in response to the positional relationship between a fourth hand region of the second object and a fourth human body region of the second object in the third video frame satisfying the preset position condition, performing gesture recognition on the fourth hand region to obtain a fourth gesture recognition result.
In a possible implementation, as described above, the second video frame may be a video frame immediately following the first video frame, or a video frame a certain number of frames or a certain time after the first video frame; correspondingly, the third video frame may be a video frame immediately following the second video frame, or a video frame a certain number of frames or a certain time after the second video frame, which is not limited by the embodiments of the present disclosure.
It can be understood that the third gesture recognition result of the third object may include one of a valid gesture recognition result and an invalid gesture recognition result.
In a possible implementation, performing gesture recognition on the fourth hand region in response to the positional relationship between the fourth hand region of the second object and the fourth human body region of the second object in the third video frame satisfying the preset position condition can be understood as: when gesture recognition was performed on the hand region of the third object in the second video frame, and the second object is detected to be in a hand-raised state in the third video frame, recognition can be switched to the hand region of the second object, since the second object is more likely to be the controlling object.
In a possible implementation, the positional relationship between the fourth hand region of the second object and the fourth human body region of the second object in the third video frame may also fail to satisfy the preset position condition. In the third video frame, if this positional relationship does not satisfy the preset position condition, the positional relationship between a fifth hand region and a fifth human body region of the third object may continue to be judged, and when it satisfies the preset position condition, gesture recognition is performed on the fifth hand region to obtain a fifth gesture recognition result.
In the embodiments of the present disclosure, when the second object located in the first preset region is detected to be in a hand-raised state, recognition can be switched to the hand region of the second object, which more effectively determines the more probable controlling object and improves the accuracy of gesture recognition.
As described above, an object in the central region of a video frame is more likely to be the controlling object. In a possible implementation, the first preset region may include the central region of the video frames of the video, and determining the second object from the first objects based on the positional relationship between the human body regions and the first preset region may include:
when the first preset region includes multiple human body regions, determining the human body region with the smallest distance to the first preset region among the multiple human body regions as the first human body region, and determining the object corresponding to the first human body region as the second object.
It can be understood that the first preset region may include one human body region or multiple human body regions. When the first preset region includes one human body region, that human body region may be directly determined as the first human body region.
In a possible implementation, the distance between a human body region and the first preset region may be determined from the distance between the center point of the human body region and the center point of the first preset region; or from the distance between the center point of the human body region and the center line of the first preset region; or from the distance between a boundary point of the human body region and the center line of the first preset region, and so on, which is not limited by the embodiments of the present disclosure.
It should be understood that although the center point, center line, and boundary point are introduced above as examples of the form of the distance between a human body region and the first preset region, the present disclosure is not limited thereto. In fact, users can set the form of this distance according to actual requirements, which is not limited by the embodiments of the present disclosure.
In a possible implementation, a coordinate system may be established based on the video frame, and the distances between the multiple human body regions and the first preset region may be determined through coordinates; for example, through the center-point coordinates of the multiple human body regions and the center-point coordinates of the first preset region. The embodiments of the present disclosure do not limit how the distances between the multiple human body regions and the first preset region are determined.
In the embodiments of the present disclosure, determining the second object based on the distances between the multiple human body regions and the first preset region makes it possible, when the first preset region includes multiple human body regions, to effectively determine the object closer to the center and take that object as the controlling object, which is closer to actual application scenarios and improves the accuracy of gesture recognition.
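The minimum-distance selection above can be sketched using the center-to-center distance (one of the several distance forms mentioned); the box layout, names, and coordinates are illustrative assumptions:

```python
import math

def center(box):
    """Center point of an (x1, y1, x2, y2) box."""
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)

def pick_second_object(body_boxes, preset_region):
    """Among candidate body boxes, return the index of the one whose
    center is closest to the center of the first preset region."""
    region_center = center(preset_region)
    return min(
        range(len(body_boxes)),
        key=lambda i: math.dist(center(body_boxes[i]), region_center),
    )

region = (200, 0, 440, 480)                        # hypothetical central region
bodies = [(210, 40, 300, 460), (300, 30, 420, 470)]
print(pick_second_object(bodies, region))  # 1
```

The same structure would work with a center-line or boundary-point distance: only the key function changes.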
It can be appreciated that a human body usually has two hands. If the first hand region includes two hand regions and the preset gestures include two-handed gestures (for example, two hands forming a heart shape), gesture recognition may be performed directly on the two first hand regions; if the first hand region includes two hand regions and the preset gestures include only one-handed gestures, one hand region may be selected.
In a possible implementation, the gesture recognition method may further include: when the first hand region includes two hand regions and the preset gesture is a one-handed gesture, determining one of the two hand regions as the first hand region.
In a possible implementation, recognition of the right hand or the left hand may be set as the default. Then, when the first hand region includes two hand regions and the preset gesture is a one-handed gesture, the hand region of the left hand or of the right hand may be determined as the first hand region.
In a possible implementation, the identity of the second object may also be recognized through identity authentication, face recognition, and the like, so as to determine the operation habits of the second object based on user data, and thereby decide whether to select the hand region of the left hand or of the right hand for gesture recognition. For example, right-handed people are used to raising the right hand for operations, while left-handed people are used to raising the left hand; by recording an object's operation habits, the hand region can be recognized in a more targeted manner.
In the embodiments of the present disclosure, determining one of the two hand regions as the hand region to be recognized can further improve the effectiveness of gesture recognition.
As described above, the preset position condition may be a preset condition for deciding, based on the positional relationship between a hand region and a human body region, whether to perform gesture recognition. Whether an object is in a hand-raised state can be determined according to whether the positional relationship between its hand region and human body region satisfies the preset position condition.
In a possible implementation, the preset position condition may include: the first height difference between the hand region height and the hip region height of a target object is greater than or equal to a height threshold; the height threshold may be positively correlated with a second height difference, the second height difference being the height difference between the shoulder region height and the hip region height of the target object, and the target object includes at least one of the second object and the third object.
In a possible implementation, the first height difference between the hand region height and the hip region height of the target object may be the height difference between the hand position and the hip position of the target object. The height difference between the shoulder region height and the hip region height of the target object may be the height difference between the shoulder position and the hip position.
As described above, key points of human body joints can be determined through human body key-point detection. In a possible implementation, by establishing a coordinate system based on the video frame, the key-point coordinates of each human body joint can be obtained, and the positions of the hand region, the hip region, and the shoulder region can thus be determined. From these positions, the first height difference between the hand region height and the hip region height of the target object, and the height difference between the shoulder region height and the hip region height of the target object, can be determined.
FIG. 4 is a schematic diagram of human body key points according to an embodiment of the present disclosure. The numbers 0-13 in FIG. 4 denote the detected key points of human body joints: for example, 0 denotes the head, 1 the neck, 2 and 5 the shoulders, 3 and 6 the elbows, 4 and 7 the hands, 8 and 11 the hips, 9 and 12 the knees, and 10 and 13 the feet.
Taking the human body key points shown in FIG. 4 as an example, the determination of the first and second height differences is as follows. When the hand region of hand 7 serves as the first hand region, that is, gesture recognition is performed on hand 7, the position of hand 7 may be (x7, y7) and the hand region height may be y7; the position of shoulder 5 may be (x5, y5) and the shoulder region height may be y5; the position of hip 11 may be (x11, y11) and the hip region height may be y11.
After the hand region height, the shoulder region height, and the hip region height are determined, the first height difference between the hand region height and the hip region height may be (y7−y11), and the second height difference between the shoulder region height and the hip region height may be (y5−y11).
It should be understood that although the human body key points shown in FIG. 4 were used as an example to describe the above ways of determining the hip region height, shoulder region height, hand region height, first height difference, and second height difference, the present disclosure is not limited thereto. In fact, a user may set these determination methods according to actual needs, which is not limited in the embodiments of the present disclosure.
In a possible implementation, the height threshold can be set according to actual needs; for example, it may be 1/3 of the second height difference, or 1/2 of the second height difference, and so on, which is not limited in the embodiments of the present disclosure.
In a possible implementation, when the first height difference is greater than or equal to the height threshold, it can be determined that the positional relationship between the hand region and the human body region satisfies the preset position condition, that is, the target object can be considered to be in a hand-raised state; when the first height difference is less than the height threshold, it can be determined that the positional relationship does not satisfy the preset position condition, that is, the target object can be considered to be in a non-hand-raised state.
Taking the human body key points shown in FIG. 4 as an example, let the height threshold be 1/3 of the second height difference, that is, [1/3×(y5−y11)]. Then, when (y7−y11)≥[1/3×(y5−y11)], the hand region and the human body region can be considered to satisfy the preset position condition; otherwise, they are considered not to satisfy it.
In the embodiments of the present disclosure, by setting the preset position condition, the hand region is recognized only after the object's hand is raised to a certain height, which reduces recognition of hand regions that require no response and improves gesture recognition efficiency.
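The preset position condition above can be expressed as a small predicate. This is an illustrative sketch that assumes the y-up convention implied by the description of FIG. 4, where a raised hand has a larger y-coordinate than the hip; in image coordinates with y increasing downward, the signs would be reversed.

```python
def hand_raised(hand_y, shoulder_y, hip_y, ratio=1.0 / 3.0):
    """True when the first height difference (hand - hip) is at least
    `ratio` times the second height difference (shoulder - hip).

    ratio: the factor relating the height threshold to the second height
    difference, e.g. 1/3 or 1/2 as in the examples above.
    """
    first_diff = hand_y - hip_y        # (y7 - y11)
    second_diff = shoulder_y - hip_y   # (y5 - y11)
    return first_diff >= ratio * second_diff
```

With hip at y=1.0 and shoulder at y=1.8, the threshold is about 0.267, so a hand at y=1.3 satisfies the condition while a hand at y=1.2 does not.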
FIG. 5 is a schematic diagram of a gesture recognition process according to an embodiment of the present disclosure. As shown in FIG. 5, a video to be recognized is input; human body detection and hand detection are performed on the input video in temporal order;
for the first video frame of the input video, when there is a single first object in the first video frame, that object is directly determined as the second object; and when the preset position condition is satisfied between the hand region and the human body region of the second object (that is, the second object raises a hand), the second object's gesture is recognized;
when there are multiple first objects in the first video frame, the second object can be determined according to the positional relationship between the human body regions and the first preset region as described above;
when the preset position condition is satisfied between the hand region and the human body region of the second object (that is, the second object raises a hand), the second object's gesture is recognized;
when the preset position condition is not satisfied between the hand region and the human body region of the second object (that is, the second object does not raise a hand), the third object is determined according to the positional relationship between the human body region of each first object and the human body region of the second object;
when the preset position condition is satisfied between the hand region and the human body region of the third object (that is, the third object raises a hand), the third object's gesture is recognized.
For video frames after the first video frame, when the gesture recognized in the first video frame was the second object's gesture, the second object's hand region is tracked, and the second object's gesture is recognized whenever the second object raises a hand;
when the gesture recognized in the first video frame was the third object's gesture, it is determined whether the preset position condition is satisfied between the hand region and the human body region of the second object (that is, whether the second object raises a hand);
when the preset position condition is satisfied between the hand region and the human body region of the second object (that is, the second object raises a hand), recognition switches to the second object's gesture;
when the preset position condition is not satisfied between the hand region and the human body region of the second object (that is, the second object still does not raise a hand) and the preset position condition is satisfied between the hand region and the human body region of the third object (that is, the third object still raises a hand), the third object's gesture is recognized.
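The per-frame switching logic for frames after the first video frame, as described above, can be sketched as follows; the function name and the string labels are illustrative assumptions, not part of the disclosed embodiments.

```python
def select_recognition_target(second_raised, third_raised, current_target):
    """Decide whose hand region to recognize in the current frame.

    second_raised / third_raised: whether the preset position condition
    (raised hand) holds for the second object (central region) and for
    the third object (adjacent region) in this frame.
    current_target: "second" or "third", the object recognized previously.
    Returns "second", "third", or None when neither qualifies.
    """
    if current_target == "second":
        # Keep tracking the second object; recognize only while it raises a hand.
        return "second" if second_raised else None
    # Currently recognizing the third object: the second object pre-empts
    # as soon as it raises a hand, since it is the more probable controller.
    if second_raised:
        return "second"
    if third_raised:
        return "third"
    return None
```

For instance, while the third object is being recognized, the target switches back to the second object the moment the second object raises a hand.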
According to the embodiments of the present disclosure, a valid controlling object can be accurately determined from multiple objects, so that the gestures issued by the controlling object that require a response are recognized accurately and efficiently.
According to the gesture recognition method of the embodiments of the present disclosure, a center-person-first strategy is adopted: the central person can be determined from the multiple people detected in the video frames, along with the central person's main operating hand, and gesture recognition is performed when the central person's main operating hand is raised. The controlling person can thereby be accurately determined and the gestures requiring a response recognized, reducing gesture recognition of non-operators and of gestures requiring no response, reducing misoperations caused by misrecognition, and thus improving gesture recognition efficiency and accuracy.
In the related art, recognition is usually performed at the granularity of hands: in multi-person, multi-hand situations, the gesture with the highest confidence is detected first, there is no detection logic for multiple people with multiple hands, the central person's gesture cannot be correctly distinguished, recognition efficiency is low, and misoperations are easily triggered. The gesture recognition method of the embodiments of the present disclosure can preferentially recognize the central person's hand in multi-person scenes, with continuous detection and tracking so the hand is not lost. Compared with the related art, the method of the embodiments of the present disclosure reduces the computation of the detection and recognition algorithms, improves processing performance, is more targeted, and better fits the conditions of use in real scenes.
The gesture recognition method of the embodiments of the present disclosure can be applied to long-range gesture recognition scenes, in which electronic devices are intelligently controlled through gestures, for example, hardware devices equipped with cameras such as televisions, air conditioners, and refrigerators. For example, a television has a built-in or external smart camera module configured with AI human-body and hand detection and gesture recognition algorithms; a gesture recognition result is obtained through the gesture recognition method of the embodiments of the present disclosure and used to control the television, such as triggering an automatic photo function through the victory gesture of FIG. 2b.
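The control step above (executing a device operation only for a valid gesture recognition result) can be sketched as a simple dispatch; the mapping and the gesture/action labels are illustrative assumptions, not part of the disclosed embodiments.

```python
def dispatch_gesture(result, actions):
    """Trigger the device operation mapped to a valid gesture result.

    result: a recognized gesture label, or None for an invalid result.
    actions: mapping from gesture label to a zero-argument callable.
    Returns the action's return value, or None when nothing is triggered.
    """
    if result is None or result not in actions:
        return None  # invalid or unmapped result: no device operation
    return actions[result]()
```

For example, a "victory" label could be mapped to a callable that triggers the television's automatic photo function, while invalid results leave the device untouched.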
It can be understood that the above method embodiments mentioned in the present disclosure can be combined with one another to form combined embodiments without departing from principles and logic; due to space limitations, the details are not repeated here. Those skilled in the art can understand that, in the above methods of the specific implementations, the specific execution order of the steps should be determined by their functions and possible internal logic.
In addition, the present disclosure further provides a gesture recognition apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any gesture recognition method provided in the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding records in the method section, which are not repeated here.
FIG. 6 is a block diagram of a gesture recognition apparatus according to an embodiment of the present disclosure. As shown in FIG. 6, the apparatus includes:
an acquisition module 101, configured to acquire a video to be recognized;
a detection module 102, configured to perform human body detection on the video to obtain the number of first objects included in the video; and
a recognition module 103, configured to recognize gestures in the video in a gesture recognition manner corresponding to the number of the first objects, to obtain a gesture recognition result.
In a possible implementation, the number of the first objects is greater than or equal to two, and the video includes a first video frame; the recognition module 103 includes: a first acquisition sub-module, configured to acquire, in the first video frame, the human body region and the hand region of at least one of the first objects respectively; a first determination sub-module, configured to determine a second object from the first objects based on the positional relationship between the human body regions and a first preset region, wherein the human body region of the at least one object includes a first human body region of the second object, the hand region of the at least one object includes a first hand region of the second object, and the first human body region is located in the first preset region; and a first recognition sub-module, configured to perform gesture recognition on the first hand region to obtain a first gesture recognition result when the positional relationship between the first hand region and the first human body region satisfies a preset position condition.
In a possible implementation, the first gesture recognition result includes one of a valid gesture recognition result and an invalid gesture recognition result.
In a possible implementation, the video includes a second video frame located after the first video frame; when the first gesture recognition result includes an invalid gesture recognition result, the recognition module 103 further includes: a second acquisition sub-module, configured to acquire, in the second video frame, a second human body region and a second hand region of the second object respectively; and a second recognition sub-module, configured to perform gesture recognition on the second hand region to obtain a second gesture recognition result when the positional relationship between the second hand region and the second human body region satisfies the preset position condition.
In a possible implementation, the human body region of the at least one object includes a third human body region of a third object, and the hand region of the at least one object includes a third hand region of the third object; the recognition module 103 further includes: a second determination sub-module, configured to determine the third object from the first objects when the positional relationship between the first hand region and the first human body region does not satisfy the preset position condition, wherein the third human body region is located in a second preset region; and a third recognition sub-module, configured to perform gesture recognition on the third hand region to obtain a third gesture recognition result when the positional relationship between the third hand region and the third human body region satisfies the preset position condition.
In a possible implementation, the second preset region partially overlaps the first preset region, or the second preset region is adjacent to the first preset region.
In a possible implementation, the video includes a second video frame located after the first video frame and a third video frame located after the second video frame; after the third gesture recognition result is obtained, the apparatus further includes: a fourth recognition sub-module, configured to perform gesture recognition on a fourth hand region of the second object to obtain a fourth gesture recognition result, in response to the positional relationship between the fourth hand region and a fourth human body region of the second object satisfying the preset position condition in the third video frame.
In a possible implementation, the first preset region includes the central region of the video frames of the video; the first determination sub-module includes: a human body region determination unit, configured to determine, when the first preset region includes multiple human body regions, the human body region with the smallest distance to the first preset region among the multiple human body regions as the first human body region; and an object determination unit, configured to determine the object corresponding to the first human body region as the second object.
In a possible implementation, the apparatus further includes: a hand region determination module, configured to determine one of the two first hand regions as the first hand region when there are two first hand regions and the preset gesture is a one-handed gesture.
In a possible implementation, the preset position condition includes: the first height difference between the hand region height and the hip region height of a target object is greater than or equal to a height threshold, the height threshold is positively correlated with a second height difference, the second height difference is the height difference between the shoulder region height and the hip region height of the target object, and the target object includes at least one of the second object and the third object.
In a possible implementation, the apparatus further includes: a control module, configured to control an electronic device to execute an operation corresponding to a valid gesture recognition result when the gesture recognition result is the valid gesture recognition result.
In the embodiments of the present disclosure, human body detection is performed on the video to obtain the number of first objects included in the video, and gestures in the video are recognized in a gesture recognition manner corresponding to that number to obtain a gesture recognition result; a corresponding gesture recognition manner can thus be selected according to the number of people in the video, improving the accuracy of gesture recognition in multi-person scenes.
In some embodiments, the functions of or the modules included in the apparatus provided in the embodiments of the present disclosure can be used to execute the methods described in the above method embodiments; for their specific implementation, refer to the descriptions of the above method embodiments, which are not repeated here for brevity.
An embodiment of the present disclosure further provides a computer-readable storage medium on which computer program instructions are stored, wherein the computer program instructions, when executed by a processor, implement the above method. The computer-readable storage medium may be a non-volatile computer-readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to execute the above method.
An embodiment of the present disclosure further provides a computer program product, including computer-readable code; when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the gesture recognition method provided in any of the above embodiments.
An embodiment of the present disclosure further provides another computer program product for storing computer-readable instructions which, when executed, cause a computer to perform the operations of the gesture recognition method provided in any of the above embodiments.
An embodiment of the present disclosure further provides a computer program, including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes the above method.
The electronic device may be provided as a terminal, a server, or a device in another form.
FIG. 7 is a block diagram of an electronic device 800 according to an embodiment of the present disclosure. For example, the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
Referring to FIG. 7, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communication, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to complete all or some of the steps of the above method. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so on. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The power component 806 provides power for the various components of the electronic device 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touchscreen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a photographing mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC); when the electronic device 800 is in an operation mode such as a call mode, a recording mode, or a speech recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 804 or sent via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing state assessments of various aspects of the electronic device 800. For example, the sensor component 814 can detect the on/off state of the electronic device 800 and the relative positioning of components, for example, the display and keypad of the electronic device 800; the sensor component 814 can also detect a position change of the electronic device 800 or a component thereof, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a temperature change of the electronic device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 can access a wireless network based on a communication standard, such as a wireless network (WiFi), the second generation of mobile communication technology (2G), the third generation of mobile communication technology (3G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements, for executing the above method.
In an exemplary embodiment, a non-volatile computer-readable storage medium is further provided, such as the memory 804 including computer program instructions executable by the processor 820 of the electronic device 800 to complete the above method.
FIG. 8 is a block diagram of an electronic device 1900 according to an embodiment of the present disclosure. For example, the electronic device 1900 may be provided as a server. Referring to FIG. 8, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as applications. The applications stored in the memory 1932 may include one or more modules each corresponding to a set of instructions. In addition, the processing component 1922 is configured to execute the instructions to perform the above method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 can operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system released by Apple (Mac OS X™), the multi-user, multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-volatile computer-readable storage medium is further provided, such as the memory 1932 including computer program instructions executable by the processing component 1922 of the electronic device 1900 to complete the above method.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can hold and store instructions used by an instruction-executing device. The computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples of the computer-readable storage medium (a non-exhaustive list) include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (for example, light pulses through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer-readable program instructions described herein can be downloaded to respective computing/processing devices from a computer-readable storage medium, or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in at least one computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
Computer program instructions for carrying out operations of the present disclosure may be assembly instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk, C++, and the like, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In scenarios involving a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as programmable logic circuitry, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), is personalized by utilizing state information of the computer-readable program instructions, and the electronic circuitry may execute the computer-readable program instructions, thereby implementing various aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatuses (systems), and computer program products according to embodiments of the present disclosure. It should be understood that at least one block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium, and these instructions cause a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions includes an article of manufacture including instructions which implement aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other device to produce a computer-implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the figures illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to multiple embodiments of the present disclosure. In this regard, at least one block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which contains one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the figures. For example, two consecutive blocks may, in fact, be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that at least one block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The computer program product may be specifically implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is specifically embodied as a computer storage medium; in another optional embodiment, the computer program product is specifically embodied as a software product, such as a software development kit (SDK).
The embodiments of the present disclosure have been described above. The above description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen to best explain the principles of the embodiments, the practical application, or the improvement over technologies in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (15)

  1. A gesture recognition method, characterized in that the method comprises:
    acquiring a video to be recognized;
    performing human body detection on the video to obtain the number of first objects included in the video; and
    recognizing gestures in the video in a gesture recognition manner corresponding to the number of the first objects, to obtain a gesture recognition result.
  2. The method according to claim 1, characterized in that the number of the first objects is greater than or equal to two, and the video comprises a first video frame;
    the recognizing gestures in the video in a gesture recognition manner corresponding to the number of the first objects to obtain a gesture recognition result comprises:
    in the first video frame, acquiring a human body region and a hand region of at least one of the first objects respectively;
    determining a second object from the first objects based on a positional relationship between the human body region and a first preset region, wherein the human body region of the at least one object comprises a first human body region of the second object, the hand region of the at least one object comprises a first hand region of the second object, and the first human body region is located in the first preset region; and
    performing gesture recognition on the first hand region to obtain a first gesture recognition result when a positional relationship between the first hand region and the first human body region satisfies a preset position condition.
  3. The method according to claim 2, characterized in that the first gesture recognition result comprises one of a valid gesture recognition result and an invalid gesture recognition result.
  4. The method according to claim 2 or 3, characterized in that the video comprises a second video frame located after the first video frame;
    when the first gesture recognition result comprises an invalid gesture recognition result, the recognizing gestures in the video in a gesture recognition manner corresponding to the number of the first objects to obtain a gesture recognition result further comprises:
    in the second video frame, acquiring a second human body region and a second hand region of the second object respectively; and
    performing gesture recognition on the second hand region to obtain a second gesture recognition result when a positional relationship between the second hand region and the second human body region satisfies the preset position condition.
  5. The method according to any one of claims 2 to 4, characterized in that the human body region of the at least one object comprises a third human body region of a third object, and the hand region of the at least one object comprises a third hand region of the third object;
    the recognizing gestures in the video in a gesture recognition manner corresponding to the number of the first objects to obtain a gesture recognition result further comprises:
    determining the third object from the first objects when the positional relationship between the first hand region and the first human body region does not satisfy the preset position condition, wherein the third human body region is located in a second preset region; and
    performing gesture recognition on the third hand region to obtain a third gesture recognition result when a positional relationship between the third hand region and the third human body region satisfies the preset position condition.
  6. The method according to claim 5, characterized in that the second preset region partially overlaps the first preset region, or the second preset region is adjacent to the first preset region.
  7. The method according to claim 5 or 6, characterized in that the video comprises a second video frame located after the first video frame, and a third video frame located after the second video frame;
    after the obtaining a third gesture recognition result, the method further comprises:
    in response to a positional relationship between a fourth hand region of the second object and a fourth human body region of the second object satisfying the preset position condition in the third video frame, performing gesture recognition on the fourth hand region to obtain a fourth gesture recognition result.
  8. The method according to any one of claims 2-7, characterized in that the first preset region comprises a central region of video frames of the video;
    the determining the second object from the first objects based on the positional relationship between the human body region and the first preset region comprises:
    when the first preset region comprises multiple human body regions, determining, among the multiple human body regions, the human body region with the smallest distance to the first preset region as the first human body region; and
    determining the object corresponding to the first human body region as the second object.
  9. The method according to any one of claims 2-8, characterized in that the method further comprises:
    when there are two first hand regions and a preset gesture is a one-handed gesture, determining one of the two hand regions as the first hand region.
  10. The method according to any one of claims 2-9, characterized in that the preset position condition comprises:
    a first height difference between a hand region height and a hip region height of a target object being greater than or equal to a height threshold, wherein the height threshold is positively correlated with a second height difference, the second height difference is a height difference between a shoulder region height and the hip region height of the target object, and the target object comprises at least one of the second object and a third object.
  11. The method according to any one of claims 1-10, characterized in that the method further comprises:
    when the gesture recognition result is a valid gesture recognition result, controlling an electronic device to execute an operation corresponding to the valid gesture recognition result.
  12. A gesture recognition apparatus, characterized by comprising:
    an acquisition module, configured to acquire a video to be recognized;
    a detection module, configured to perform human body detection on the video to obtain the number of first objects included in the video; and
    a recognition module, configured to recognize gestures in the video in a gesture recognition manner corresponding to the number of the first objects, to obtain a gesture recognition result.
  13. An electronic device, characterized by comprising:
    a processor; and
    a memory for storing processor-executable instructions;
    wherein the processor is configured to invoke the instructions stored in the memory to execute the method according to any one of claims 1 to 11.
  14. A computer-readable storage medium on which computer program instructions are stored, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 11.
  15. A computer program, comprising computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device executes the method according to any one of claims 1-11.
PCT/CN2021/086967 2020-11-27 2021-04-13 Gesture recognition method and apparatus, electronic device, and storage medium WO2022110614A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011363248.6A CN112328090B (zh) 2020-11-27 2020-11-27 Gesture recognition method and apparatus, electronic device, and storage medium
CN202011363248.6 2020-11-27

Publications (1)

Publication Number Publication Date
WO2022110614A1 true WO2022110614A1 (zh) 2022-06-02

Family

ID=74308350

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/086967 WO (zh) 2020-11-27 2021-04-13 Gesture recognition method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN112328090B (zh)
WO (1) WO2022110614A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112328090B (zh) * 2020-11-27 2023-01-31 北京市商汤科技开发有限公司 Gesture recognition method and apparatus, electronic device, and storage medium
CN113031464B (zh) * 2021-03-22 2022-11-22 北京市商汤科技开发有限公司 Device control method and apparatus, electronic device, and storage medium
CN112987933A (zh) * 2021-03-25 2021-06-18 北京市商汤科技开发有限公司 Device control method and apparatus, electronic device, and storage medium
CN114167980A (zh) * 2021-11-18 2022-03-11 深圳市鸿合创新信息技术有限责任公司 Gesture processing method and apparatus, electronic device, and readable storage medium
CN114546114A (zh) * 2022-02-15 2022-05-27 美的集团(上海)有限公司 Control method and control apparatus for a mobile robot, and mobile robot

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104750252A (zh) * 2015-03-09 2015-07-01 联想(北京)有限公司 Information processing method and electronic device
CN105843371A (zh) * 2015-01-13 2016-08-10 上海速盟信息技术有限公司 Touch-free human-computer interaction method and system
US20180055295A1 (en) * 2016-08-25 2018-03-01 Boe Technology Group Co., Ltd. Intelligent closestool
CN111209050A (zh) * 2020-01-10 2020-05-29 北京百度网讯科技有限公司 Method and apparatus for switching the operating mode of an electronic device
CN112328090A (zh) * 2020-11-27 2021-02-05 北京市商汤科技开发有限公司 Gesture recognition method and apparatus, electronic device, and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013065112A (ja) * 2011-09-15 2013-04-11 Omron Corp Gesture recognition device, electronic apparatus, control method for gesture recognition device, control program, and recording medium
CN108304817B (zh) * 2018-02-09 2019-10-29 深圳市无限动力发展有限公司 Method and apparatus for implementing gesture operations
CN110490794A (zh) * 2019-08-09 2019-11-22 三星电子(中国)研发中心 Artificial-intelligence-based person image processing method and apparatus
CN110619300A (zh) * 2019-09-14 2019-12-27 韶关市启之信息技术有限公司 Correction method for simultaneous recognition of multiple faces
CN110781765B (zh) * 2019-09-30 2024-02-09 腾讯科技(深圳)有限公司 Human body posture recognition method, apparatus, device, and storage medium
CN111062312B (zh) * 2019-12-13 2023-10-27 RealMe重庆移动通信有限公司 Gesture recognition method, gesture control method, apparatus, medium, and terminal device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105843371A (zh) * 2015-01-13 2016-08-10 上海速盟信息技术有限公司 Touch-free human-computer interaction method and system
CN104750252A (zh) * 2015-03-09 2015-07-01 联想(北京)有限公司 Information processing method and electronic device
US20180055295A1 (en) * 2016-08-25 2018-03-01 Boe Technology Group Co., Ltd. Intelligent closestool
CN111209050A (zh) * 2020-01-10 2020-05-29 北京百度网讯科技有限公司 Method and apparatus for switching the operating mode of an electronic device
CN112328090A (zh) * 2020-11-27 2021-02-05 北京市商汤科技开发有限公司 Gesture recognition method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN112328090B (zh) 2023-01-31
CN112328090A (zh) 2021-02-05

Similar Documents

Publication Publication Date Title
WO2022110614A1 (zh) 手势识别方法及装置、电子设备和存储介质
CN105488464B (zh) 指纹识别方法及装置
US9953506B2 (en) Alarming method and device
CN108319838B (zh) 屏下指纹验证装置、方法、存储介质和移动终端
CN106484284B (zh) 切换单手模式的方法和装置
US20170123587A1 (en) Method and device for preventing accidental touch of terminal with touch screen
JP2020512604A (ja) 指紋ロック解除方法、装置、プログラム及び記録媒体
US20160241783A1 (en) Portable terminal
KR102165818B1 (ko) 입력 영상을 이용한 사용자 인터페이스 제어 방법, 장치 및 기록매체
WO2018133387A1 (zh) 指纹识别方法及装置
KR101843447B1 (ko) 가상 버튼을 프로세싱 하기 위한 방법, 및 모바일 단말
CN112905136A (zh) 投屏控制方法、装置以及存储介质
CN106778169B (zh) 指纹解锁方法及装置
WO2024067468A1 (zh) 基于图像识别的交互控制方法、装置及设备
CN111988522B (zh) 拍摄控制方法、装置、电子设备及存储介质
CN107133551B (zh) 指纹验证方法及装置
CN108491834B (zh) 指纹识别方法及装置
CN107729733B (zh) 控制移动终端的方法及装置、移动终端
WO2022198821A1 (zh) 人脸和人体匹配的方法、装置、电子设备、存储介质及程序
JP2019207568A (ja) 電子機器、制御装置、電子機器の制御方法、制御プログラム
US20160195992A1 (en) Mobile terminal and method for processing signals generated from touching virtual keys
EP3789849A1 (en) Contactless gesture control method, apparatus and storage medium
CN107861683B (zh) 无人机无按钮操作方法及装置
CN114821678A (zh) 指纹信息处理方法及装置、终端和存储介质
CN112445363A (zh) 电子设备、电子设备的控制方法及装置、存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21896130

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21896130

Country of ref document: EP

Kind code of ref document: A1