WO2021012513A1 - Gesture operation method, apparatus, and computer device

Gesture operation method, apparatus, and computer device

Info

Publication number
WO2021012513A1
WO2021012513A1 (PCT/CN2019/117770)
Authority
WO
WIPO (PCT)
Prior art keywords
frame
image
finger image
finger
end frame
Prior art date
Application number
PCT/CN2019/117770
Other languages
English (en)
French (fr)
Inventor
李珊珊
盛思思
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2021012513A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G06V 40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language

Definitions

  • This application relates to the field of gesture recognition technology, and in particular to a gesture operation method, device, computer equipment, and non-volatile computer-readable storage medium.
  • At present, the operation of a computer is generally realized through the user's keyboard input and mouse actions such as clicks and drags. Keyboard input can include entering instructions or using shortcut keys, while mouse clicks or drags can achieve specified operations. These methods all rely on peripheral devices such as mice and keyboards. Therefore, there is an urgent need for an operation method that can control a computer or terminal device without relying on peripheral devices such as mice and keyboards.
  • To this end, the prior art proposes an operation method in which gesture recognition technology is used to obtain a user's gesture trajectory, and a corresponding control instruction is then invoked according to the gesture trajectory to control the computer or terminal device.
  • However, the inventor realizes that most existing gesture operation methods are based on two-dimensional plane recognition, while the video images captured by existing camera units are not two-dimensional images and motion trajectories do not have only two-dimensional attributes. Therefore, existing gesture operation methods are not very accurate when applied to video image recognition and analysis.
  • In view of this, this application proposes a gesture operation method, device, computer equipment, and non-volatile computer-readable storage medium, which can recognize the finger images in the frame images of a video segment of a gesture video and extract their contours, then obtain the area feature value and shape feature value of each contour to calculate the area change value and shape change value of the finger image contours of two frames of images, so as to determine whether a gesture trajectory is triggered. When it is determined that the gesture trajectory is triggered, the finger part images in the finger images of the two frames are identified and the gesture trajectory is drawn, and finally the operation instruction corresponding to the gesture trajectory is called and executed. Therefore, the accuracy of recognizing finger images in video images is effectively improved.
  • First, this application provides a gesture operation method, which is applied to a computer device. The method includes:
  • Obtaining a gesture video, and dividing the gesture video into video segments with a preset number of frames; identifying the finger image in each frame of image in the video segment according to a preset finger image recognition model; extracting the contour of the finger image in each frame of image, and sequentially obtaining the area feature value and the shape feature value of the finger image contour of each frame of image; taking out two frames of image from the video segment in sequence as a start frame and an end frame, and calculating the area change value and the shape change value of the finger image contours of the start frame and the end frame according to the area feature values and the shape feature values of the finger image contours of the start frame and the end frame;
  • When the area change value of the finger image contours of the start frame and the end frame exceeds a preset first threshold or the shape change value exceeds a preset second threshold, respectively identifying the finger part images in the finger images of the start frame and the end frame; drawing the gesture trajectory from the start frame to the end frame according to the position information of the finger part images of the start frame and the end frame within the image range; and calling and executing the operation instruction corresponding to the gesture trajectory.
  • This application also proposes a gesture operation device, which includes:
  • An acquisition module, used to acquire a gesture video and divide the gesture video into video segments with a preset number of frames; a recognition module, used to recognize the finger image in each frame of image in the video segment according to a preset finger image recognition model; the acquisition module is also used to extract the contour of the finger image in each frame of image and sequentially obtain the area feature value and shape feature value of the finger image contour of each frame of image; a calculation module, used to sequentially take out two frames of image from the video segment as a start frame and an end frame, and to calculate the area change value and the shape change value of the finger image contours of the start frame and the end frame according to the area feature values and shape feature values of those contours; the recognition module is also used to respectively identify the finger part images in the finger images of the start frame and the end frame when the area change value of their finger image contours exceeds a preset first threshold or the shape change value exceeds a preset second threshold; a drawing module, used to draw the gesture trajectory from the start frame to the end frame according to the position information of the finger part images of the start frame and the end frame within the image range; and an execution module, used to call and execute the corresponding operation instruction according to the gesture trajectory.
  • In addition, the present application also proposes a computer device. The computer device includes a memory and a processor; the memory stores computer-readable instructions that can run on the processor, and the computer-readable instructions, when executed by the processor, implement the following steps:
  • Obtaining a gesture video, and dividing the gesture video into video segments with a preset number of frames; identifying the finger image in each frame of image in the video segment according to a preset finger image recognition model; extracting the contour of the finger image in each frame of image, and sequentially obtaining the area feature value and the shape feature value of the finger image contour of each frame of image; taking out two frames of image from the video segment in sequence as a start frame and an end frame, and calculating the area change value and the shape change value of the finger image contours of the start frame and the end frame according to the area feature values and the shape feature values of the finger image contours of the start frame and the end frame;
  • When the area change value of the finger image contours of the start frame and the end frame exceeds a preset first threshold or the shape change value exceeds a preset second threshold, respectively identifying the finger part images in the finger images of the start frame and the end frame; drawing the gesture trajectory from the start frame to the end frame according to the position information of the finger part images of the start frame and the end frame within the image range; and calling and executing the operation instruction corresponding to the gesture trajectory.
  • In addition, this application also provides a non-volatile computer-readable storage medium storing computer-readable instructions, and the computer-readable instructions can be executed by at least one processor, so that the at least one processor executes the following steps:
  • Obtaining a gesture video, and dividing the gesture video into video segments with a preset number of frames; identifying the finger image in each frame of image in the video segment according to a preset finger image recognition model; extracting the contour of the finger image in each frame of image, and sequentially obtaining the area feature value and the shape feature value of the finger image contour of each frame of image; taking out two frames of image from the video segment in sequence as a start frame and an end frame, and calculating the area change value and the shape change value of the finger image contours of the start frame and the end frame according to the area feature values and the shape feature values of the finger image contours of the start frame and the end frame;
  • When the area change value of the finger image contours of the start frame and the end frame exceeds a preset first threshold or the shape change value exceeds a preset second threshold, respectively identifying the finger part images in the finger images of the start frame and the end frame; drawing the gesture trajectory from the start frame to the end frame according to the position information of the finger part images of the start frame and the end frame within the image range; and calling and executing the operation instruction corresponding to the gesture trajectory.
  • The gesture operation method, device, computer equipment, and non-volatile computer-readable storage medium proposed in this application can perform finger image recognition on the frame images in a video segment of the gesture video and extract the contours, then obtain the area feature value and shape feature value of each contour to calculate the area change value and shape change value of the finger image contours of two frames of images, so as to determine whether a gesture trajectory is triggered. When it is determined that the gesture trajectory is triggered, the finger part images in the finger images of the two frames are identified and the gesture trajectory is drawn, and finally the operation instruction corresponding to the gesture trajectory is called and executed. Therefore, the accuracy of recognizing finger images in video images is effectively improved.
  • FIG. 1 is a schematic diagram of an optional hardware architecture of the computer equipment of the present application.
  • FIG. 2 is a schematic diagram of program modules of an embodiment of the gesture operation device of the present application.
  • FIG. 3 is a schematic flowchart of an embodiment of a gesture operation method of the present application.
  • FIG. 1 is a schematic diagram of an optional hardware architecture of the computer device 1 of the present application.
  • The computer device 1 may include, but is not limited to, a memory 11, a processor 12, and a network interface 13 that can communicate with each other through a system bus.
  • The computer device 1 is connected to a network through the network interface 13 (not shown in FIG. 1), and is connected through the network to other computer devices such as PC terminals and mobile terminals.
  • The network may be a wireless or wired network such as an intranet, the Internet, a Global System for Mobile Communications (GSM) network, a Wideband Code Division Multiple Access (WCDMA) network, a 4G network, a 5G network, Bluetooth, Wi-Fi, or a telephone network.
  • FIG. 1 only shows the computer device 1 with components 11-13, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.
  • The memory 11 includes at least one type of computer-readable storage medium, which includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disc, etc.
  • The memory 11 may be an internal storage unit of the computer device 1, such as a hard disk or memory of the computer device 1.
  • The memory 11 may also be an external storage device of the computer device 1, for example, a plug-in hard disk, smart media card (SMC), secure digital (SD) card, or flash card equipped on the computer device 1.
  • The memory 11 may also include both the internal storage unit of the computer device 1 and its external storage device.
  • The memory 11 is generally used to store the operating system and various application software installed in the computer device 1, such as the program code of the gesture operation device 200.
  • The memory 11 can also be used to temporarily store various types of data that have been output or will be output.
  • The computer-readable instructions stored in the memory 11 can be executed by at least one processor, so that the at least one processor executes the following steps:
  • Obtaining a gesture video, and dividing the gesture video into video segments with a preset number of frames; identifying the finger image in each frame of image in the video segment according to a preset finger image recognition model; extracting the contour of the finger image in each frame of image, and sequentially obtaining the area feature value and the shape feature value of the finger image contour of each frame of image; taking out two frames of image from the video segment in sequence as a start frame and an end frame, and calculating the area change value and the shape change value of the finger image contours of the start frame and the end frame according to the area feature values and the shape feature values of the finger image contours of the start frame and the end frame;
  • When the area change value of the finger image contours of the start frame and the end frame exceeds a preset first threshold or the shape change value exceeds a preset second threshold, respectively identifying the finger part images in the finger images of the start frame and the end frame; drawing the gesture trajectory from the start frame to the end frame according to the position information of the finger part images of the start frame and the end frame within the image range; and calling and executing the operation instruction corresponding to the gesture trajectory.
  • The processor 12 may in some embodiments be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip.
  • The processor 12 is generally used to control the overall operation of the computer device 1, such as performing control and processing related to data interaction or communication.
  • The processor 12 is used to run the program code or process the data stored in the memory 11, for example, to run the gesture operation device 200.
  • The network interface 13 may include a wireless network interface or a wired network interface.
  • The network interface 13 is usually used to establish a communication connection between the computer device 1 and other computer devices such as PC terminals and mobile terminals.
  • When a gesture operation device 200 is installed and running in the computer device 1, it can perform finger image recognition on the frame images in a video segment of the gesture video and extract the contours, then obtain the area feature value and shape feature value of each contour to calculate the area change value and shape change value of the finger image contours of two frames of images, so as to determine whether a gesture trajectory is triggered. When it is determined that the gesture trajectory is triggered, the finger part images in the finger images of the two frames are identified and the gesture trajectory is drawn, and finally the operation instruction corresponding to the gesture trajectory is called and executed. Therefore, the accuracy of recognizing finger images in video images is effectively improved.
  • Thus, this application proposes a gesture operation device 200.
  • FIG. 2 is a program module diagram of an embodiment of the gesture operation device 200 of the present application.
  • The gesture operation device 200 includes a series of computer-readable instructions stored in the memory 11; when these computer-readable instructions are executed by the processor 12, the gesture operations of the embodiments of the present application can be implemented.
  • The gesture operation device 200 may be divided into one or more modules based on the specific operations implemented by the various parts of the computer-readable instructions. For example, in FIG. 2, the gesture operation device 200 is divided into an acquisition module 201, a recognition module 202, a calculation module 203, a drawing module 204, and an execution module 205, wherein:
  • The acquisition module 201 is configured to acquire a gesture video, and divide the gesture video into video segments with a preset number of frames.
  • The computer device 1 calls the camera unit to shoot a gesture video aimed at a preset window range.
  • The computer device 1 includes a PC terminal, a mobile terminal, and the like; the acquisition module 201 can therefore acquire the gesture video and then perform segmentation processing.
  • The frame rate of the video captured by the camera unit is not less than 24 frames per second, but since the user's gesture operations are not too fast, the preset number of gesture image frames included in each video segment is 8.
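The segmentation step described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; the function name and the choice to drop a trailing partial segment are assumptions.

```python
SEGMENT_FRAMES = 8  # preset number of frames per video segment

def split_into_segments(frames, segment_frames=SEGMENT_FRAMES):
    """Divide a sequence of frames into consecutive fixed-size segments.

    A trailing partial segment is dropped, on the assumption that only a
    full segment covers one complete gesture.
    """
    n = len(frames) // segment_frames * segment_frames
    return [frames[i:i + segment_frames] for i in range(0, n, segment_frames)]

# Example: one second of 24 fps capture yields three 8-frame segments.
segments = split_into_segments(list(range(24)))
```

With 24 captured frames this produces three segments of 8 frames each, matching the stated capture rate of at least 24 frames per second.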
  • The recognition module 202 is configured to recognize the finger image in each frame of image in the video segment according to a preset finger image recognition model.
  • The gesture image is an image captured by the camera unit facing the preset window position, and therefore includes not only the finger part but also the palm or other background. Therefore, after the acquisition module 201 obtains the gesture video and divides it into video segments, the recognition module 202 can sequentially recognize the finger image in each frame of image in the divided video segments according to the preset finger image recognition model.
  • The finger image recognition model is a deep learning model based on a neural network; a finger image recognition model trained on a large number of finger images can recognize the finger part well. Image recognition using a neural-network-based deep learning model is a common technical method in the prior art, which will not be repeated here.
  • The acquisition module 201 is also used to extract the contour of the finger image in each frame of image, and sequentially obtain the area feature value and the shape feature value of the finger image contour of each frame of image.
  • After the finger image in each frame of image has been recognized, the acquisition module 201 further extracts the contour of the finger in each frame of image.
  • The acquisition module 201 extracts the contour of the finger image of each frame of image based on edges.
  • A region-based or active-contour-based method can also be used for contour extraction.
  • After extracting the contours, the acquisition module 201 sequentially obtains the area feature value and the shape feature value of the finger image contour of each frame of image.
  • The area feature value of a finger image contour is expressed as the number of pixels occupied by the finger image contour in the gesture video image; the shape feature value of the finger image contour is expressed as the distribution of the pixels occupied by the finger image contour in the gesture video image. For example, the gesture video image is divided into blocks, and the number of contour pixels falling in each block then represents the shape feature value.
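The two feature values just described can be sketched on a binary contour mask (1 = pixel belongs to the finger image contour). This is a minimal illustration under assumptions: the function names, the row-major block order, and the even division of the image into blocks are not from the patent.

```python
def area_feature(mask):
    """Area feature value: total number of contour pixels in the mask."""
    return sum(sum(row) for row in mask)

def shape_feature(mask, m, n):
    """Shape feature value: contour pixel count per block, with the image
    divided into m columns x n rows of blocks (row-major order)."""
    rows, cols = len(mask), len(mask[0])
    bh, bw = rows // n, cols // m  # block height and width
    counts = []
    for by in range(n):
        for bx in range(m):
            counts.append(sum(mask[y][x]
                              for y in range(by * bh, (by + 1) * bh)
                              for x in range(bx * bw, (bx + 1) * bw)))
    return counts

# Tiny 2x4 mask: three contour pixels, all in the left half.
mask = [[1, 0, 0, 0],
        [1, 1, 0, 0]]
area = area_feature(mask)            # 3 contour pixels in total
blocks = shape_feature(mask, 2, 1)   # [3, 0] across two blocks
```

The per-block counts are exactly the vector notation used in the worked example later in the text, e.g. (5, 4, 5, 6, 4, 5) for a 3*2 block layout.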
  • The calculation module 203 is configured to sequentially take out two frames of image from the video segment as a start frame and an end frame, and to calculate the area change value and the shape change value of the finger image contours of the start frame and the end frame according to the area feature values and shape feature values of those contours.
  • Taking out in sequence can be understood here as taking out the first frame in order within the video segment, and then taking out a later frame at an interval of 1 to 6 frames. That is, the start frame is first the first frame, with the second to the eighth frame respectively as the end frame; then the subsequent second to seventh frames are in turn used as the start frame, with the frames after each of them as end frames.
  • The step in which the calculation module 203 calculates the area change value of the finger image contours of the start frame and the end frame according to their area feature values includes: respectively acquiring the number of pixels included in the finger image contour of the start frame and of the end frame; calculating the difference between the two pixel counts; and then dividing that difference by the larger of the two pixel counts to obtain the area change value of the finger image contours of the start frame and the end frame.
  • For example, if the finger image contour of the start frame includes 100 pixels (an area feature value of 100) and the finger image contour of the end frame includes 125 pixels (an area feature value of 125), the difference is 25 and the area change value is 25/125 = 20%.
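The area-change rule above reduces to a one-line formula: the pixel-count difference divided by the larger of the two counts. A minimal sketch (function name is illustrative):

```python
def area_change(start_pixels, end_pixels):
    """Area change value: |difference| / larger pixel count, as described
    in the text."""
    return abs(end_pixels - start_pixels) / max(start_pixels, end_pixels)

# Worked example from the text: 100 px -> 125 px gives 25 / 125 = 20%.
change = area_change(100, 125)
```

This 20% result is the same area change value reused in the threshold example later in the text.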
  • The step in which the calculation module 203 calculates the shape change value of the finger image contours of the start frame and the end frame according to their shape feature values includes: dividing the start frame and the end frame into M*N blocks respectively according to the block mode; respectively counting the number of contour pixels of the start frame and of the end frame in each block; calculating the difference between the pixel count of each block of the start frame's finger image contour and the pixel count of the corresponding block of the end frame's finger image contour; superimposing the per-block differences of all blocks to obtain the sum of differences; and then dividing the sum of differences by the larger of the total pixel counts of the two finger image contours to obtain the shape change value of the finger image contours of the two frames of image.
  • For example, the calculation module 203 divides each frame of image in the video segment into M*N blocks, where M*N is 3*2, so the finger image contour of each frame of image is distributed over 6 blocks. Suppose the pixel counts of the finger image contours of the start frame and the end frame are the same in blocks 3 to 6, namely 5, 6, 4, and 5, but the start frame has 5 pixels in the first block and 4 pixels in the second block, while the end frame has 1 pixel in the first block and 9 pixels in the second block. That is, the shape feature value of the start frame is (5, 4, 5, 6, 4, 5) and the shape feature value of the end frame is (1, 9, 5, 6, 4, 5).
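The shape-change computation on these two block vectors can be sketched directly (function name is illustrative; absolute per-block differences are assumed):

```python
def shape_change(start_blocks, end_blocks):
    """Shape change value: sum of per-block pixel-count differences,
    divided by the larger total pixel count of the two contours."""
    diff_sum = sum(abs(a - b) for a, b in zip(start_blocks, end_blocks))
    return diff_sum / max(sum(start_blocks), sum(end_blocks))

# Example from the text: (5, 4, 5, 6, 4, 5) vs (1, 9, 5, 6, 4, 5).
# Per-block differences are 4 and 5 (sum 9); totals are 29 and 30.
change = shape_change([5, 4, 5, 6, 4, 5], [1, 9, 5, 6, 4, 5])
```

The result, 9/30 = 30%, matches the shape change value quoted in the threshold example later in the text.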
  • The recognition module 202 is further configured to respectively identify the finger part images in the finger images of the start frame and the end frame when the area change value of their finger image contours exceeds a preset first threshold or the shape change value exceeds a preset second threshold.
  • That is, the gesture operation device 200 determines whether the user has produced an effective gesture operation by judging whether the area change value of the finger image contours of the start frame and the end frame exceeds the preset first threshold or the shape change value exceeds the preset second threshold. After the calculation module 203 has calculated the area change value and the shape change value, the recognition module 202 compares them with the preset first threshold and second threshold respectively. When the area change value exceeds the preset first threshold or the shape change value exceeds the preset second threshold, the finger part images in the finger images of the start frame and the end frame are respectively identified.
  • For example, the preset first threshold is 15% and the second threshold is 20%. The calculation module 203 calculates that the area change value of the finger image contours of the start frame and the end frame is 20%, which is greater than the 15% first threshold, and that the shape change value is 30%, which is greater than the 20% second threshold. Therefore, the recognition module 202 continues to identify the finger part images in the finger images of the start frame and the end frame.
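The trigger decision above is a simple either-or threshold test. The sketch below uses the example thresholds from the text; the constant and function names are illustrative, not from the patent.

```python
FIRST_THRESHOLD = 0.15   # preset first threshold on the area change value
SECOND_THRESHOLD = 0.20  # preset second threshold on the shape change value

def gesture_triggered(area_change, shape_change,
                      t_area=FIRST_THRESHOLD, t_shape=SECOND_THRESHOLD):
    """An effective gesture is assumed when either change value exceeds
    its preset threshold."""
    return area_change > t_area or shape_change > t_shape

# From the example: a 20% area change and a 30% shape change both exceed
# their thresholds, so finger part recognition proceeds.
triggered = gesture_triggered(0.20, 0.30)
```

Note that a single exceeded threshold is enough; both comparisons passing, as in the example, is not required.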
  • The recognition module 202 recognizes the finger part image in the finger image of the start frame according to the preset key point detector model and marks it with a label, trains the preset key point detector model according to the label to form a key point detector, and then uses the key point detector to identify the finger part image of the corresponding finger image in the end frame.
  • The key point detector model can be a recognition model with neural-network-based deep learning capability for finger part images; it can train and optimize itself based on the recognized finger and finger-part image data, and then continue to recognize images with the optimized model.
  • The recognition module 202 can use the key point detector model to recognize each frame of image in the video segment, then optimize the key point detector, and then continue to recognize and optimize, thereby improving the accuracy with which the key point detector identifies the finger part image in a finger image.
  • Neural-network-based image recognition and model training technologies are commonly known in the field and will not be repeated here.
  • The drawing module 204 is configured to draw the gesture trajectory from the start frame to the end frame according to the position information of the finger part images of the start frame and the end frame within the image range.
  • The drawing module 204 draws the gesture trajectory from the start frame to the end frame mainly by drawing a vector from the position information occupied by the finger part image in the finger image of the start frame to the position information occupied by the finger part image in the finger image of the end frame, and then looking up the corresponding gesture trajectory in the preset vector-to-gesture-trajectory correspondence table.
  • That is, the drawing module 204 points from the position information of the key point of the finger part image of the start frame to the position information of the key point of the finger part image of the end frame in order to draw the vector. For example, the image is preset as a two-dimensional coordinate plane, a vector is drawn based on the coordinate information of the key points of the finger part images in the start frame and the end frame, and the corresponding gesture trajectory is then looked up in the preset vector-to-gesture-trajectory correspondence table. For example, a preset vector direction within 0-45 degrees toward the southeast is a right-slide gesture trajectory, and a vector direction within 45-90 degrees toward the southeast is a downward-slide gesture trajectory; when the vector is at 30 degrees toward the southeast, it is determined to be a right-slide gesture trajectory.
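The vector-to-trajectory lookup can be sketched as below. Assumptions to note: image coordinates with y growing downward (so "southeast" means positive x and positive y), angle measured from due east, and only the two angle ranges given in the text; everything else, including the function name, is illustrative.

```python
import math

def gesture_from_positions(start_xy, end_xy):
    """Map the vector from the start-frame key point to the end-frame key
    point onto a gesture via a preset angle-range table."""
    dx = end_xy[0] - start_xy[0]
    dy = end_xy[1] - start_xy[1]              # image y grows downward
    angle = math.degrees(math.atan2(dy, dx))  # 0 = due east (to the right)
    if 0 <= angle < 45:
        return "slide right"   # 0-45 degrees toward the southeast
    if 45 <= angle <= 90:
        return "slide down"    # 45-90 degrees toward the southeast
    return "unknown"           # other ranges not specified in the text

# A vector at 30 degrees below horizontal reads as a right slide,
# as in the worked example.
gesture = gesture_from_positions((0.0, 0.0),
                                 (math.cos(math.radians(30)),
                                  math.sin(math.radians(30))))
```

The boundary at exactly 45 degrees is assigned to the downward slide here; the text does not specify which range the boundary belongs to.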
  • The execution module 205 is configured to call and execute the corresponding operation instruction according to the gesture trajectory.
  • It should be noted that once the drawing module 204 has drawn the gesture trajectory from the start frame to the end frame, the other frames of the video segment are no longer judged, because the video segment is preset as the execution time of one user gesture trajectory. The gesture trajectory drawn by the drawing module 204 therefore represents the user operation for that video segment, so the execution module 205 directly calls and executes the corresponding operation instruction according to the gesture trajectory and the preset correspondence table of gesture trajectories and operation instructions.
  • In this way, the computer device 1 can perform finger image recognition on the frame images in a video segment of the gesture video and extract the contours, then obtain the area feature value and shape feature value of each contour to calculate the area change value and shape change value of the finger image contours of two frames of images, so as to determine whether a gesture trajectory is triggered. When it is determined that the gesture trajectory is triggered, the finger part images in the finger images of the two frames are identified and the gesture trajectory is drawn, and finally the operation instruction corresponding to the gesture trajectory is called and executed. Therefore, the accuracy of recognizing finger images in video images is effectively improved.
  • In addition, this application also proposes a gesture operation method, which is applied to a computer device.
  • FIG. 3 is a schematic flowchart of an embodiment of the gesture operation method of the present application.
  • Depending on different requirements, the execution order of the steps in the flowchart shown in FIG. 3 can be changed, and some steps can be omitted.
  • Step S500 Obtain a gesture video, and divide the gesture video into video segments with a preset number of frames.
  • the computer device 1 invokes the camera unit to capture a gesture video within a preset window range.
  • the computer device 1 includes a PC terminal, a mobile terminal, and the like. The computer device 1 can therefore obtain the gesture video and then segment it.
  • the frame rate of the video captured by the camera unit is not less than 24 frames per second, but since the user's gesture operations are not particularly fast, each video segment is preset to include 8 frames of gesture images.
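As a concrete illustration of the segmentation just described, here is a minimal Python sketch. The frame source and list representation are assumptions for illustration; only the 8-frame segment length comes from the text above.

```python
def split_into_segments(frames, segment_len=8):
    """Split a list of video frames into fixed-length segments.

    Trailing frames that do not fill a whole segment are dropped,
    mirroring the idea that each segment must hold a full gesture.
    """
    return [frames[i:i + segment_len]
            for i in range(0, len(frames) - segment_len + 1, segment_len)]

# A 1-second clip at 24 fps yields three 8-frame segments.
segments = split_into_segments(list(range(24)))
```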
  • Step S502 Recognize the finger image in each frame of image in the video segment according to a preset finger image recognition model.
  • the gesture image is an image captured by the camera unit facing the preset window position and therefore includes not only the finger part but also the palm or other background. Thus, after the computer device 1 obtains the gesture images and divides them into video segments, it can sequentially recognize the finger image in each frame of the divided video segments according to the preset finger image recognition model.
  • the finger image recognition model is a neural-network-based deep learning model; trained on a large number of finger images, it can recognize the finger part well. Image recognition using neural-network-based deep learning models is a common existing technique and is not described in detail here.
  • Step S504 Extract the contour of the finger image in each frame of image, and sequentially obtain the area feature value and the shape feature value of the contour of the finger image of each frame of image.
  • after recognizing the finger image in each frame of image in the video segment, the computer device 1 further extracts the finger contour in each frame of image.
  • the computer device 1 extracts the finger image contour of each frame of image using an edge-based method.
  • a region-based or active contour-based method can also be used for contour extraction.
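The edge-based extraction can be sketched for a binary finger mask as follows. This is only an illustrative sketch, not the patent's actual implementation; a real system would more likely call a library routine such as OpenCV's findContours.

```python
import numpy as np

def contour_mask(finger_mask):
    """Extract the contour of a binary finger mask.

    A pixel is on the contour if it belongs to the finger and at least
    one of its 4-neighbours is background; fully interior pixels are
    removed.
    """
    padded = np.pad(finger_mask, 1)
    # A pixel is interior when all four 4-neighbours are foreground.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return finger_mask & ~interior

# A solid 4x4 square: its contour is the 12-pixel outer ring.
mask = np.ones((4, 4), dtype=np.uint8)
edge = contour_mask(mask)
```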
  • after extracting the finger image contour in each frame of image, the computer device 1 sequentially obtains the area feature value and shape feature value of the finger image contour of each frame of image.
  • the area feature value of the finger image contour is expressed as the number of pixels occupied by the finger image contour of the finger image in the gesture video image;
  • the shape feature value of the finger image contour is expressed as the distribution of the pixels occupied by the finger image contour of the finger image in the gesture video image. For example, the gesture video image is divided into blocks, and the number of pixels occupied by the gesture image contour in each block represents the shape feature value.
  • Step S506 Take two frames of images from the video segment in sequence as the start frame and the end frame, and calculate the area change value and shape change value of the finger image contours of the start frame and the end frame according to the area feature value and the shape feature value of the finger image contour of each.
  • taking out in sequence can be understood here as taking an earlier frame within the video segment and then a later frame, with an interval of 1 to 6 frames between them.
  • for example, the start frame is the 1st frame and the end frame is each of the 2nd through 8th frames in turn; then the 2nd through 7th frames serve as start frames in turn, with subsequent frames as end frames.
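The pairing scheme just described (frame 1 paired with frames 2 through 8, then frame 2 with later frames, and so on) can be enumerated as follows; the 0-based indexing is an implementation choice, not part of the original text.

```python
def frame_pairs(segment_len=8, max_gap=7):
    """Enumerate (start, end) index pairs taken in order from a segment.

    Indices are 0-based: frame 0 is paired with frames 1..7, frame 1
    with frames 2..7, and so on, matching the pairing described above.
    """
    return [(s, e)
            for s in range(segment_len - 1)
            for e in range(s + 1, min(s + 1 + max_gap, segment_len))]

pairs = frame_pairs()   # 28 pairs for an 8-frame segment
```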
  • the step of the computer device 1 calculating the area change value of the finger image contours of the start frame and the end frame according to the area feature values of the finger image contours of the start frame and the end frame includes: respectively obtaining the number of pixels included in the finger image contours of the start frame and the end frame; calculating the difference between the number of pixels included in the finger image contour of the start frame and the number of pixels included in the finger image contour of the end frame; and then dividing the pixel-count difference by the larger of the two pixel counts to obtain the area change value of the finger image contours of the start frame and the end frame.
  • for example, the finger image contour of the start frame includes 100 pixels, i.e., an area feature value of 100, and the finger image contour of the end frame includes 125 pixels, i.e., an area feature value of 125; the area change value of the finger image contours of the start frame and the end frame is therefore (125-100)/125 = 20%.
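The area-change formula above, together with the worked 100-pixel/125-pixel example, reduces to a one-line computation:

```python
def area_change(area_start, area_end):
    """Relative area change between two contours.

    The absolute pixel-count difference is divided by the larger of
    the two counts, as described above.
    """
    return abs(area_end - area_start) / max(area_start, area_end)

# Start-frame contour: 100 pixels; end-frame contour: 125 pixels.
change = area_change(100, 125)   # (125 - 100) / 125 = 0.20
```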
  • the step of the computer device 1 calculating the shape change value of the finger image contours of the start frame and the end frame according to the shape feature values of the finger image contours of the start frame and the end frame includes: dividing the start frame and the end frame into M*N blocks according to the same block pattern; respectively counting the number of block pixels occupied by the finger image contours of the start frame and the end frame in each block; calculating the difference between the block pixel count of each block of the start frame's finger image contour and the block pixel count of the corresponding block of the end frame's finger image contour; summing the pixel-count differences of all blocks of the finger image contours of the start frame and the end frame to obtain a total difference; and dividing the total difference by the larger of the two contours' pixel counts to obtain the shape change value of the finger image contours of the two frames of images.
  • for example, the computer device 1 divides each frame of image in the video segment into M*N blocks, where M*N is 3*2, and then obtains the number of pixels occupied by the finger image contour of each frame in the 6 blocks.
  • suppose the pixel counts of the finger image contours of the start frame and the end frame are the same in blocks 3-6, at 5, 6, 4, and 5 respectively, but the start frame has 5 pixels in block 1 and 4 pixels in block 2, while the end frame has 1 pixel in block 1 and 9 pixels in block 2; that is, the shape feature value of the start frame is (5, 4, 5, 6, 4, 5) and that of the end frame is (1, 9, 5, 6, 4, 5). The block-wise differences are 5-1=4 and 9-4=5, for a total of 9; the end frame's pixel total of 30 is the larger, so the shape change value is 9/30 = 30%.
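The shape-change computation for the worked example above can be sketched as:

```python
def shape_change(blocks_start, blocks_end):
    """Relative shape change between two per-block pixel distributions.

    Block-wise absolute differences are summed and divided by the
    larger of the two contours' total pixel counts.
    """
    diff_sum = sum(abs(a - b) for a, b in zip(blocks_start, blocks_end))
    return diff_sum / max(sum(blocks_start), sum(blocks_end))

# The worked example: totals 29 vs 30, block differences 4 + 5 = 9.
change = shape_change([5, 4, 5, 6, 4, 5], [1, 9, 5, 6, 4, 5])  # 9/30 = 0.30
```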
  • Step S508 When the area change value of the finger image contours of the start frame and the end frame exceeds a preset first threshold, or the shape change value exceeds a preset second threshold, respectively identify the finger part images in the finger images of the start frame and the end frame.
  • since the user's fingers inevitably change position when performing gesture control, the computer device 1 determines whether the user has produced a valid gesture operation by judging whether the area change value of the finger image contours of the start frame and the end frame exceeds the preset first threshold or the shape change value exceeds the preset second threshold.
  • after the computer device 1 calculates the area change value and shape change value of the finger image contours of the start frame and the end frame, it further compares them with the preset first threshold and second threshold, respectively.
  • when the area change value of the finger image contours of the start frame and the end frame exceeds the preset first threshold, or the shape change value exceeds the preset second threshold, the finger part images in the finger images of the start frame and the end frame are respectively identified.
  • for example, the preset first threshold is 15% and the second threshold is 20%.
  • the computer device 1 calculates that the area change value of the finger image contours of the start frame and the end frame is 20%, which exceeds the first threshold of 15%, and that the shape change value is 30%, which exceeds the second threshold of 20%. The computer device 1 therefore proceeds to identify the finger part images in the finger images of the start frame and the end frame.
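The trigger decision reduces to a simple either/or threshold test; the 15% and 20% defaults come from the example above.

```python
def gesture_triggered(area_change, shape_change,
                      area_threshold=0.15, shape_threshold=0.20):
    """A gesture is considered triggered when either change value
    exceeds its preset threshold (15% and 20% in the example above)."""
    return area_change > area_threshold or shape_change > shape_threshold

# Area change 20% and shape change 30% both exceed their thresholds.
triggered = gesture_triggered(0.20, 0.30)
```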
  • the computer device 1 recognizes the finger part image in the finger image of the start frame according to a preset keypoint detector model and marks it as a noise label, and trains the preset keypoint detector model according to the noise label to form a keypoint checker; the keypoint detector is then used to identify the finger part image of the corresponding finger image in the end frame.
  • the keypoint detector model may be a recognition model for finger part images with neural-network-based deep learning capability, which can train and optimize its own recognition model based on the recognized finger part image data and then continue to recognize images with the optimized model.
  • in other words, the computer device 1 can use the keypoint detector model to recognize each frame of image in the video segment, then optimize the keypoint detector, and then continue to recognize and optimize. The accuracy with which the keypoint detector identifies the finger part image within the finger image is thereby improved.
  • the neural network-based image recognition and model training technologies are commonly known technologies in the field, and will not be repeated here.
  • Step S510 Drawing a gesture track from the start frame to the end frame according to the position information of the finger part images of the start frame and the end frame within the image range.
  • the computer device 1 draws the gesture trajectory from the start frame to the end frame mainly by drawing a vector from the position information occupied by the finger part image in the finger image of the start frame to the position information occupied by the finger part image in the finger image of the end frame, and then looking up the corresponding gesture trajectory in the preset vector-to-gesture-trajectory correspondence table.
  • the computer device 1 points the position information of the outlier of the finger part image of the start frame to the position information of the outlier of the finger part image of the end frame to draw the vector. For example, the image is preset as a two-dimensional coordinate plane, a vector is drawn according to the coordinate information of the outliers of the finger part images in the finger images of the start frame and the end frame, and the corresponding gesture trajectory is then looked up in the preset vector-to-gesture-trajectory correspondence table. For example, a preset vector direction within 0-45 degrees of southeast corresponds to a rightward-slide gesture trajectory, and a vector direction within 45-90 degrees of southeast corresponds to a downward-slide gesture trajectory; a vector at 30 degrees southeast is determined to be a rightward-slide gesture trajectory.
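The angle-to-trajectory lookup above can be sketched as follows. The coordinate convention is an assumption for illustration: image coordinates with x growing right and y growing down, so "southeast" means positive x and positive y, and the angle is measured downward from the x-axis.

```python
import math

def classify_trajectory(start_pt, end_pt):
    """Map the start-to-end displacement vector to a gesture trajectory.

    Hypothetical correspondence table matching the example above:
    0-45 degrees southeast is a right slide, 45-90 degrees southeast
    is a down slide.
    """
    dx = end_pt[0] - start_pt[0]
    dy = end_pt[1] - start_pt[1]
    angle = math.degrees(math.atan2(dy, dx))  # 0..90 for southeast vectors
    if 0 <= angle < 45:
        return "slide_right"
    if 45 <= angle <= 90:
        return "slide_down"
    return "unknown"

# A vector at 30 degrees southeast is classified as a right slide.
trajectory = classify_trajectory((0.0, 0.0),
                                 (math.cos(math.radians(30)),
                                  math.sin(math.radians(30))))
```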
  • Step S512 Invoke and execute the corresponding operation instruction according to the gesture trajectory.
  • the computer device 1 directly invokes and executes the corresponding operation instruction according to the gesture trajectory and the preset correspondence table between gesture trajectories and operation instructions.
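The final dispatch step amounts to a dictionary lookup. The correspondence table below is hypothetical; the actual instructions depend on the application being controlled.

```python
# Hypothetical trajectory-to-instruction correspondence table.
INSTRUCTION_TABLE = {
    "slide_right": "next_page",
    "slide_down": "scroll_down",
}

def execute_gesture(trajectory, table=INSTRUCTION_TABLE):
    """Look up the instruction for a gesture trajectory.

    Returns the instruction name, or None for an unrecognized
    trajectory; a real system would dispatch the instruction here.
    """
    return table.get(trajectory)

result = execute_gesture("slide_right")
```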
  • the gesture operation method proposed in this embodiment can perform finger image recognition on the frame images in a video segment of the gesture video and extract contours, then obtain the area feature value and shape feature value of the contours to calculate the area change value and shape change value of the finger image contours of the two frames of images, which are used to determine whether a gesture trajectory is triggered.
  • when it is determined that a gesture trajectory has been triggered, the finger part images in the finger images of the two frames of images are recognized and the gesture trajectory is drawn.
  • finally, the operation instruction corresponding to the gesture trajectory is invoked and executed. The precision and accuracy of recognizing finger images in video images are thereby effectively improved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A gesture operation method, apparatus, computer device, and non-volatile computer-readable storage medium. The method includes: performing finger image recognition on the frame images in a video segment of a gesture video and extracting finger image contours; obtaining the area feature value and shape feature value of the finger image contours to calculate the area change value and shape change value of the finger image contours of two frames of images; using these values to determine whether a gesture trajectory is triggered; when it is determined that a gesture trajectory has been triggered, recognizing the finger part images in the finger images of the two frames of images and drawing the gesture trajectory; and finally invoking and executing the operation instruction corresponding to the gesture trajectory. The above gesture operation method, apparatus, computer device, and non-volatile computer-readable storage medium achieve more accurate and precise recognition of the gesture trajectories of finger images in video images.

Description

Gesture operation method, apparatus, and computer device
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on July 19, 2019, with application number 201910655568.X and the invention title "Gesture operation method, apparatus, and computer device", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of gesture recognition, and in particular to a gesture operation method, apparatus, computer device, and non-volatile computer-readable storage medium.
Background
During the use of existing computers or terminal devices, operations are generally performed through the user's keyboard input and mouse actions such as clicking and dragging: keyboard input can issue commands or use shortcut keys, while mouse clicks and drags can perform specified operations. With the development of computer technology and the diversification of user needs, however, users increasingly want to avoid direct contact with peripheral devices such as the mouse and keyboard. There is therefore an urgent need for an operation method that can control a computer or terminal device without relying on peripherals such as the mouse and keyboard.
In view of the above problems, the prior art proposes operation methods that use gesture recognition technology to obtain the user's gesture trajectory and then invoke corresponding control instructions according to the gesture trajectory to control the computer or terminal device. However, the inventors realized that most existing gesture operation methods are based on two-dimensional plane recognition, whereas the video images captured by existing camera units are not two-dimensional images, and motion trajectories do not have only two-dimensional properties. As a result, the gesture operation methods in the prior art are not very accurate for recognizing and analyzing video images.
Summary
In view of this, the present application proposes a user gesture operation method, apparatus, computer device, and non-volatile computer-readable storage medium that can perform finger image recognition on the frame images in a video segment of a gesture video and extract contours, then obtain the area feature value and shape feature value of the contours to calculate the area change value and shape change value of the finger image contours of the two frames of images, which are used to determine whether a gesture trajectory is triggered; when it is determined that a gesture trajectory has been triggered, the finger part images in the finger images of the two frames of images are recognized and the gesture trajectory is drawn, and finally the operation instruction corresponding to the gesture trajectory is invoked and executed. The precision and accuracy of recognizing finger images in video images are thereby effectively improved.
First, to achieve the above objective, the present application provides a gesture operation method applied to a computer device, the method including:
obtaining a gesture video, and dividing the gesture video into video segments of a preset number of frames; recognizing the finger image in each frame of image in the video segment according to a preset finger image recognition model; extracting the finger image contour in each frame of image, and sequentially obtaining the area feature value and shape feature value of the finger image contour of each frame of image; sequentially taking two frames of images from the video segment as a start frame and an end frame, and calculating the area change value and shape change value of the finger image contours of the start frame and the end frame according to the area feature values and shape feature values of the finger image contours of the start frame and the end frame; when the area change value of the finger image contours of the start frame and the end frame exceeds a preset first threshold or the shape change value exceeds a preset second threshold, respectively recognizing the finger part images in the finger images of the start frame and the end frame; drawing the gesture trajectory from the start frame to the end frame according to the position information of the finger part images of the start frame and the end frame within the image range; and invoking and executing the corresponding operation instruction according to the gesture trajectory.
In addition, to achieve the above objective, the present application also provides a gesture operation apparatus, the apparatus including:
an acquisition module configured to obtain a gesture video and divide the gesture video into video segments of a preset number of frames; a recognition module configured to recognize the finger image in each frame of image in the video segment according to a preset finger image recognition model; the acquisition module being further configured to extract the finger image contour in each frame of image and sequentially obtain the area feature value and shape feature value of the finger image contour of each frame of image; a calculation module configured to sequentially take two frames of images from the video segment as a start frame and an end frame, and calculate the area change value and shape change value of the finger image contours of the start frame and the end frame according to the area feature values and shape feature values of the finger image contours of the start frame and the end frame; the recognition module being further configured to, when the area change value of the finger image contours of the start frame and the end frame exceeds a preset first threshold or the shape change value exceeds a preset second threshold, respectively recognize the finger part images in the finger images of the start frame and the end frame; a drawing module configured to draw the gesture trajectory from the start frame to the end frame according to the position information of the finger part images of the start frame and the end frame within the image range; and an execution module configured to invoke and execute the corresponding operation instruction according to the gesture trajectory.
Furthermore, the present application also proposes a computer device including a memory and a processor, the memory storing computer-readable instructions executable on the processor, the computer-readable instructions, when executed by the processor, implementing the steps of:
obtaining a gesture video, and dividing the gesture video into video segments of a preset number of frames; recognizing the finger image in each frame of image in the video segment according to a preset finger image recognition model; extracting the finger image contour in each frame of image, and sequentially obtaining the area feature value and shape feature value of the finger image contour of each frame of image; sequentially taking two frames of images from the video segment as a start frame and an end frame, and calculating the area change value and shape change value of the finger image contours of the start frame and the end frame according to the area feature values and shape feature values of the finger image contours of the start frame and the end frame; when the area change value of the finger image contours of the start frame and the end frame exceeds a preset first threshold or the shape change value exceeds a preset second threshold, respectively recognizing the finger part images in the finger images of the start frame and the end frame; drawing the gesture trajectory from the start frame to the end frame according to the position information of the finger part images of the start frame and the end frame within the image range; and invoking and executing the corresponding operation instruction according to the gesture trajectory.
Furthermore, to achieve the above objective, the present application also provides a non-volatile computer-readable storage medium storing computer-readable instructions executable by at least one processor to cause the at least one processor to perform the steps of:
obtaining a gesture video, and dividing the gesture video into video segments of a preset number of frames; recognizing the finger image in each frame of image in the video segment according to a preset finger image recognition model; extracting the finger image contour in each frame of image, and sequentially obtaining the area feature value and shape feature value of the finger image contour of each frame of image; sequentially taking two frames of images from the video segment as a start frame and an end frame, and calculating the area change value and shape change value of the finger image contours of the start frame and the end frame according to the area feature values and shape feature values of the finger image contours of the start frame and the end frame; when the area change value of the finger image contours of the start frame and the end frame exceeds a preset first threshold or the shape change value exceeds a preset second threshold, respectively recognizing the finger part images in the finger images of the start frame and the end frame; drawing the gesture trajectory from the start frame to the end frame according to the position information of the finger part images of the start frame and the end frame within the image range; and invoking and executing the corresponding operation instruction according to the gesture trajectory.
The gesture operation method, apparatus, computer device, and non-volatile computer-readable storage medium proposed in this application can perform finger image recognition on the frame images in a video segment of a gesture video and extract contours, then obtain the area feature value and shape feature value of the contours to calculate the area change value and shape change value of the finger image contours of the two frames of images, which are used to determine whether a gesture trajectory is triggered; when it is determined that a gesture trajectory has been triggered, the finger part images in the finger images of the two frames of images are recognized and the gesture trajectory is drawn, and finally the operation instruction corresponding to the gesture trajectory is invoked and executed. The precision and accuracy of recognizing finger images in video images are thereby effectively improved.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an optional hardware architecture of the computer device of the present application;
FIG. 2 is a schematic diagram of the program modules of an embodiment of the gesture operation apparatus of the present application;
FIG. 3 is a schematic flowchart of an embodiment of the gesture operation method of the present application.
Detailed Description
Referring to FIG. 1, which is a schematic diagram of an optional hardware architecture of the computer device 1 of the present application.
In this embodiment, the computer device 1 may include, but is not limited to, a memory 11, a processor 12, and a network interface 13, which can be communicatively connected to each other through a system bus.
The computer device 1 connects to a network (not shown in FIG. 1) through the network interface 13, and through the network connects to other computer devices such as PC terminals and mobile terminals. The network may be a wireless or wired network such as an intranet, the Internet, a Global System for Mobile communications (GSM) network, a Wideband Code Division Multiple Access (WCDMA) network, a 4G network, a 5G network, Bluetooth, Wi-Fi, or a telephony network.
It should be noted that FIG. 1 only shows the computer device 1 with components 11-13, but it should be understood that not all of the illustrated components are required to be implemented, and more or fewer components may be implemented instead.
The memory 11 includes at least one type of volatile computer-readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, and the like. In some embodiments, the memory 11 may be an internal storage unit of the computer device 1, such as the hard disk or internal memory of the computer device 1. In other embodiments, the memory 11 may also be an external storage device of the computer device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the computer device 1. Of course, the memory 11 may also include both the internal storage unit of the computer device 1 and its external storage device. In this embodiment, the memory 11 is generally used to store the operating system and various application software installed on the computer device 1, such as the program code of the gesture operation apparatus 200. In addition, the memory 11 may also be used to temporarily store various types of data that have been output or are to be output.
When the memory 11 stores computer-readable instructions, the computer-readable instructions may be executed by at least one processor to cause the at least one processor to perform the steps of:
obtaining a gesture video, and dividing the gesture video into video segments of a preset number of frames; recognizing the finger image in each frame of image in the video segment according to a preset finger image recognition model; extracting the finger image contour in each frame of image, and sequentially obtaining the area feature value and shape feature value of the finger image contour of each frame of image; sequentially taking two frames of images from the video segment as a start frame and an end frame, and calculating the area change value and shape change value of the finger image contours of the start frame and the end frame according to the area feature values and shape feature values of the finger image contours of the start frame and the end frame; when the area change value of the finger image contours of the start frame and the end frame exceeds a preset first threshold or the shape change value exceeds a preset second threshold, respectively recognizing the finger part images in the finger images of the start frame and the end frame; drawing the gesture trajectory from the start frame to the end frame according to the position information of the finger part images of the start frame and the end frame within the image range; and invoking and executing the corresponding operation instruction according to the gesture trajectory.
In some embodiments, the processor 12 may be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 12 is generally used to control the overall operation of the computer device 1, such as performing control and processing related to data exchange or communication. In this embodiment, the processor 12 is used to run the program code stored in the memory 11 or to process data, for example, to run the gesture operation apparatus 200.
The network interface 13 may include a wireless network interface or a wired network interface, and is generally used to establish a communication connection between the computer device 1 and other computer devices such as PC terminals and mobile terminals.
In this embodiment, when the gesture operation apparatus 200 is installed and runs in the computer device 1, it can perform finger image recognition on the frame images in a video segment of the gesture video and extract contours, then obtain the area feature value and shape feature value of the contours to calculate the area change value and shape change value of the finger image contours of the two frames of images, which are used to determine whether a gesture trajectory is triggered; when it is determined that a gesture trajectory has been triggered, the finger part images in the finger images of the two frames of images are recognized and the gesture trajectory is drawn, and finally the operation instruction corresponding to the gesture trajectory is invoked and executed. The precision and accuracy of recognizing finger images in video images are thereby effectively improved.
So far, the application environment of the various embodiments of the present application and the hardware structure and functions of the related devices have been described in detail. Hereinafter, various embodiments of the present application will be proposed based on the above application environment and related devices.
First, the present application proposes a gesture operation apparatus 200.
Referring to FIG. 2, which is a program module diagram of an embodiment of the gesture operation apparatus 200 of the present application.
In this embodiment, the gesture operation apparatus 200 includes a series of computer-readable instructions stored in the memory 11; when these computer-readable instructions are executed by the processor 12, the gesture operations of the various embodiments of the present application can be implemented. In some embodiments, the gesture operation apparatus 200 may be divided into one or more modules based on the specific operations implemented by the respective parts of the computer-readable instructions. For example, in FIG. 2, the gesture operation apparatus 200 may be divided into an acquisition module 201, a recognition module 202, a calculation module 203, a drawing module 204, and an execution module 205. Among them:
The acquisition module 201 is configured to obtain a gesture video and divide the gesture video into video segments of a preset number of frames.
Specifically, when a user performs a gesture operation on the computer device 1, the computer device 1 invokes the camera unit to capture a gesture video within a preset window range, where the computer device 1 includes a PC terminal, a mobile terminal, and the like. The acquisition module 201 can thus obtain the gesture video and then segment it. For example, the frame rate of the video captured by the camera unit is not less than 24 frames per second, but since the user's gesture operations are not particularly fast, each video segment is preset to include 8 frames of gesture images.
The recognition module 202 is configured to recognize the finger image in each frame of image in the video segment according to a preset finger image recognition model.
Specifically, in this embodiment, a gesture image is an image captured by the camera unit facing the preset window position and therefore includes not only the finger part but also the palm or other background. Thus, after the acquisition module 201 obtains the gesture images and divides them into video segments, the recognition module 202 can sequentially recognize the finger image in each frame of the divided video segments according to the preset finger image recognition model. In this embodiment, the finger image recognition model is a neural-network-based deep learning model; trained on a large number of finger images, it can recognize the finger part well. Image recognition using neural-network-based deep learning models is a common existing technique and is not described in detail here.
The acquisition module 201 is further configured to extract the finger image contour in each frame of image and sequentially obtain the area feature value and shape feature value of the finger image contour of each frame of image.
Specifically, after the recognition module 202 recognizes the finger image in each frame of image in the video segment, the acquisition module 201 further extracts the finger contour in each frame of image. In this embodiment, the acquisition module 201 extracts the finger image contour of each frame of image using an edge-based method. Of course, in other embodiments, region-based or active-contour-based methods may also be used for contour extraction. After extracting the finger image contour in each frame of image, the acquisition module 201 sequentially obtains the area feature value and shape feature value of the finger image contour of each frame of image. In this embodiment, the area feature value of the finger image contour is expressed as the number of pixels occupied by the finger image contour of the finger image in the gesture video image; the shape feature value of the finger image contour is expressed as the distribution of the pixels occupied by the finger image contour of the finger image in the gesture video image. For example, the gesture video image is divided into blocks, and the number of pixels occupied by the gesture image contour in each block represents the shape feature value.
The calculation module 203 is configured to sequentially take two frames of images from the video segment as a start frame and an end frame, and to calculate the area change value and shape change value of the finger image contours of the start frame and the end frame according to the area feature values and shape feature values of the finger image contours of the start frame and the end frame.
Specifically, taking frames in sequence can be understood here as taking an earlier frame within the video segment and then a later frame, with an interval of 1 to 6 frames between them. For example, the start frame is the 1st frame and the end frame is each of the 2nd through 8th frames in turn; then the 2nd through 7th frames serve as start frames in turn, with subsequent frames as end frames. In this embodiment, the step of the calculation module 203 calculating the area change value of the finger image contours of the start frame and the end frame according to their area feature values includes: respectively obtaining the number of pixels included in the finger image contours of the start frame and the end frame; calculating the difference between the number of pixels included in the finger image contour of the start frame and the number of pixels included in the finger image contour of the end frame; and then dividing the pixel-count difference by the larger of the two pixel counts to obtain the area change value of the finger image contours of the start frame and the end frame. For example, if the finger image contour of the start frame includes 100 pixels (an area feature value of 100) and the finger image contour of the end frame includes 125 pixels (an area feature value of 125), then the area change value of the finger image contours of the start frame and the end frame is (125-100)/125 = 20%.
The step of the calculation module 203 calculating the shape change value of the finger image contours of the start frame and the end frame according to their shape feature values includes: dividing the start frame and the end frame into M*N blocks according to the same block pattern; respectively counting the number of block pixels occupied by the finger image contours of the start frame and the end frame in each block; calculating the difference between the block pixel count of each block of the start frame's finger image contour and the block pixel count of the corresponding block of the end frame's finger image contour; summing the pixel-count differences of all blocks of the finger image contours of the start frame and the end frame to obtain a total difference; and dividing the total difference by the larger of the two contours' pixel counts to obtain the shape change value of the finger image contours of the two frames of images. For example, the calculation module 203 divides each frame of image in the video segment into M*N blocks, with M*N being 3*2, and then obtains the number of pixels occupied by the finger image contour of each frame in the 6 blocks. Suppose the pixel counts of the finger image contours of the start frame and the end frame are the same in blocks 3-6, at 5, 6, 4, and 5 respectively, but the start frame has 5 pixels in block 1 and 4 pixels in block 2, while the end frame has 1 pixel in block 1 and 9 pixels in block 2; that is, the shape feature value of the start frame is (5, 4, 5, 6, 4, 5) and that of the end frame is (1, 9, 5, 6, 4, 5). Therefore, the block-1 pixel difference between the start frame and the end frame is 5-1=4 and the block-2 pixel difference is 9-4=5, so the total pixel difference is 4+5=9; the start frame's pixel total is 20+5+4=29 and the end frame's pixel total is 20+1+9=30; the pixel distribution difference is 9/30=30%, i.e., the shape change value is 30%.
The recognition module 202 is further configured to, when the area change value of the finger image contours of the start frame and the end frame exceeds a preset first threshold or the shape change value exceeds a preset second threshold, respectively recognize the finger part images in the finger images of the start frame and the end frame.
Specifically, since the user's fingers inevitably change position when performing gesture control, the gesture operation apparatus 200 determines whether the user has produced a valid gesture operation by judging whether the area change value of the finger image contours of the start frame and the end frame exceeds the preset first threshold or the shape change value exceeds the preset second threshold.
Therefore, in this embodiment, after the calculation module 203 calculates the area change value and shape change value of the finger image contours of the start frame and the end frame, the recognition module 202 further compares them with the preset first threshold and second threshold, respectively; when the area change value of the finger image contours of the start frame and the end frame exceeds the preset first threshold or the shape change value exceeds the preset second threshold, the finger part images in the finger images of the start frame and the end frame are respectively recognized. For example, the preset first threshold is 15% and the second threshold is 20%; the calculation module 203 calculates that the area change value of the finger image contours of the start frame and the end frame is 20%, which exceeds the first threshold of 15%, and that the shape change value is 30%, which exceeds the second threshold of 20%; the recognition module 202 therefore proceeds to recognize the finger part images in the finger images of the start frame and the end frame.
In this embodiment, the recognition module 202 recognizes the finger part image in the finger image of the start frame according to a preset keypoint detector model and marks it as a noise label, and trains the preset keypoint detector model according to the noise label to form a keypoint checker; the keypoint detector is then used to recognize the finger part image of the corresponding finger image in the end frame. The keypoint detector model may be a recognition model for finger part images with neural-network-based deep learning capability, which can train and optimize its own recognition model based on the recognized finger part image data and then continue to recognize images with the optimized model. In other words, the recognition module 202 can use the keypoint detector model to recognize each frame of image in the video segment, then optimize the keypoint detector, and then continue to recognize and optimize, thereby improving the accuracy with which the keypoint detector recognizes the finger part image within the finger image. Neural-network-based image recognition and model training are well-known techniques commonly used in this field and are not described in detail here.
The drawing module 204 is configured to draw the gesture trajectory from the start frame to the end frame according to the position information of the finger part images of the start frame and the end frame within the image range.
Specifically, the drawing module 204 draws the gesture trajectory from the start frame to the end frame mainly by drawing a vector from the position information occupied by the finger part image in the finger image of the start frame to the position information occupied by the finger part image in the finger image of the end frame, and then looking up the corresponding gesture trajectory in a preset vector-to-gesture-trajectory correspondence table. In this embodiment, the drawing module 204 points the position information of the outlier of the finger part image of the start frame to the position information of the outlier of the finger part image of the end frame to draw the vector; for example, the image is preset as a two-dimensional coordinate plane, a vector is drawn according to the coordinate information of the outliers of the finger part images in the finger images of the start frame and the end frame, and the corresponding gesture trajectory is then looked up in the preset vector-to-gesture-trajectory correspondence table. For example, a preset vector direction within 0-45 degrees of southeast corresponds to a rightward-slide gesture trajectory and a vector direction within 45-90 degrees of southeast corresponds to a downward-slide gesture trajectory; when the vector is at 30 degrees southeast, it is determined to be a rightward-slide gesture trajectory.
The execution module 205 is configured to invoke and execute the corresponding operation instruction according to the gesture trajectory.
Specifically, after the drawing module 204 has drawn the gesture trajectory from the start frame to the end frame, no further frames in the video segment are evaluated, because the segment length was preset taking into account the execution time of the user's gesture trajectory; the gesture trajectory drawn by the drawing module 204 represents the user operation for the video segment. The execution module 205 therefore directly invokes and executes the corresponding operation instruction according to the gesture trajectory and the preset correspondence table between gesture trajectories and operation instructions.
As can be seen from the above, the computer device 1 can perform finger image recognition on the frame images in a video segment of the gesture video and extract contours, then obtain the area feature value and shape feature value of the contours to calculate the area change value and shape change value of the finger image contours of the two frames of images, which are used to determine whether a gesture trajectory is triggered; when it is determined that a gesture trajectory has been triggered, the finger part images in the finger images of the two frames of images are recognized and the gesture trajectory is drawn, and finally the operation instruction corresponding to the gesture trajectory is invoked and executed. The precision and accuracy of recognizing finger images in video images are thereby effectively improved.
In addition, the present application also proposes a gesture operation method applied to a computer device.
Referring to FIG. 3, which is a schematic flowchart of an embodiment of the gesture operation method of the present application. In this embodiment, depending on different requirements, the execution order of the steps in the flowchart shown in FIG. 3 may be changed, and some steps may be omitted.
Step S500: Obtain a gesture video and divide the gesture video into video segments of a preset number of frames.
Specifically, when a user performs a gesture operation on the computer device 1, the computer device 1 invokes the camera unit to capture a gesture video within a preset window range, where the computer device 1 includes a PC terminal, a mobile terminal, and the like. The computer device 1 can thus obtain the gesture video and then segment it. For example, the frame rate of the video captured by the camera unit is not less than 24 frames per second, but since the user's gesture operations are not particularly fast, each video segment is preset to include 8 frames of gesture images.
Step S502: Recognize the finger image in each frame of image in the video segment according to a preset finger image recognition model.
Specifically, in this embodiment, a gesture image is an image captured by the camera unit facing the preset window position and therefore includes not only the finger part but also the palm or other background. Thus, after obtaining the gesture images and dividing them into video segments, the computer device 1 can sequentially recognize the finger image in each frame of the divided video segments according to the preset finger image recognition model. In this embodiment, the finger image recognition model is a neural-network-based deep learning model; trained on a large number of finger images, it can recognize the finger part well. Image recognition using neural-network-based deep learning models is a common existing technique and is not described in detail here.
Step S504: Extract the finger image contour in each frame of image, and sequentially obtain the area feature value and shape feature value of the finger image contour of each frame of image.
Specifically, after recognizing the finger image in each frame of image in the video segment, the computer device 1 further extracts the finger contour in each frame of image. In this embodiment, the computer device 1 extracts the finger image contour of each frame of image using an edge-based method. Of course, in other embodiments, region-based or active-contour-based methods may also be used for contour extraction. After extracting the finger image contour in each frame of image, the computer device 1 sequentially obtains the area feature value and shape feature value of the finger image contour of each frame of image. In this embodiment, the area feature value of the finger image contour is expressed as the number of pixels occupied by the finger image contour of the finger image in the gesture video image; the shape feature value of the finger image contour is expressed as the distribution of the pixels occupied by the finger image contour of the finger image in the gesture video image. For example, the gesture video image is divided into blocks, and the number of pixels occupied by the gesture image contour in each block represents the shape feature value.
Step S506: Sequentially take two frames of images from the video segment as a start frame and an end frame, and calculate the area change value and shape change value of the finger image contours of the start frame and the end frame according to the area feature values and shape feature values of the finger image contours of the start frame and the end frame.
Specifically, taking frames in sequence can be understood here as taking an earlier frame within the video segment and then a later frame, with an interval of 1 to 6 frames between them. For example, the start frame is the 1st frame and the end frame is each of the 2nd through 8th frames in turn; then the 2nd through 7th frames serve as start frames in turn, with subsequent frames as end frames. In this embodiment, the step of the computer device 1 calculating the area change value of the finger image contours of the start frame and the end frame according to their area feature values includes: respectively obtaining the number of pixels included in the finger image contours of the start frame and the end frame; calculating the difference between the number of pixels included in the finger image contour of the start frame and the number of pixels included in the finger image contour of the end frame; and then dividing the pixel-count difference by the larger of the two pixel counts to obtain the area change value of the finger image contours of the start frame and the end frame. For example, if the finger image contour of the start frame includes 100 pixels (an area feature value of 100) and the finger image contour of the end frame includes 125 pixels (an area feature value of 125), then the area change value of the finger image contours of the start frame and the end frame is (125-100)/125 = 20%.
The step of the computer device 1 calculating the shape change value of the finger image contours of the start frame and the end frame according to their shape feature values includes: dividing the start frame and the end frame into M*N blocks according to the same block pattern; respectively counting the number of block pixels occupied by the finger image contours of the start frame and the end frame in each block; calculating the difference between the block pixel count of each block of the start frame's finger image contour and the block pixel count of the corresponding block of the end frame's finger image contour; summing the pixel-count differences of all blocks of the finger image contours of the start frame and the end frame to obtain a total difference; and dividing the total difference by the larger of the two contours' pixel counts to obtain the shape change value of the finger image contours of the two frames of images. For example, the computer device 1 divides each frame of image in the video segment into M*N blocks, with M*N being 3*2, and then obtains the number of pixels occupied by the finger image contour of each frame in the 6 blocks. Suppose the pixel counts of the finger image contours of the start frame and the end frame are the same in blocks 3-6, at 5, 6, 4, and 5 respectively, but the start frame has 5 pixels in block 1 and 4 pixels in block 2, while the end frame has 1 pixel in block 1 and 9 pixels in block 2; that is, the shape feature value of the start frame is (5, 4, 5, 6, 4, 5) and that of the end frame is (1, 9, 5, 6, 4, 5). Therefore, the block-1 pixel difference between the start frame and the end frame is 5-1=4 and the block-2 pixel difference is 9-4=5, so the total pixel difference is 4+5=9; the start frame's pixel total is 20+5+4=29 and the end frame's pixel total is 20+1+9=30; the pixel distribution difference is 9/30=30%, i.e., the shape change value is 30%.
Step S508: When the area change value of the finger image contours of the start frame and the end frame exceeds a preset first threshold or the shape change value exceeds a preset second threshold, respectively recognize the finger part images in the finger images of the start frame and the end frame.
Specifically, since the user's fingers inevitably change position when performing gesture control, the computer device 1 determines whether the user has produced a valid gesture operation by judging whether the area change value of the finger image contours of the start frame and the end frame exceeds the preset first threshold or the shape change value exceeds the preset second threshold.
Therefore, in this embodiment, after calculating the area change value and shape change value of the finger image contours of the start frame and the end frame, the computer device 1 further compares them with the preset first threshold and second threshold, respectively; when the area change value of the finger image contours of the start frame and the end frame exceeds the preset first threshold or the shape change value exceeds the preset second threshold, the finger part images in the finger images of the start frame and the end frame are respectively recognized. For example, the preset first threshold is 15% and the second threshold is 20%; the computer device 1 calculates that the area change value of the finger image contours of the start frame and the end frame is 20%, which exceeds the first threshold of 15%, and that the shape change value is 30%, which exceeds the second threshold of 20%; the computer device 1 therefore proceeds to recognize the finger part images in the finger images of the start frame and the end frame.
In this embodiment, the computer device 1 recognizes the finger part image in the finger image of the start frame according to a preset keypoint detector model and marks it as a noise label, and trains the preset keypoint detector model according to the noise label to form a keypoint checker; the keypoint detector is then used to recognize the finger part image of the corresponding finger image in the end frame. The keypoint detector model may be a recognition model for finger part images with neural-network-based deep learning capability, which can train and optimize its own recognition model based on the recognized finger part image data and then continue to recognize images with the optimized model. In other words, the computer device 1 can use the keypoint detector model to recognize each frame of image in the video segment, then optimize the keypoint detector, and then continue to recognize and optimize, thereby improving the accuracy with which the keypoint detector recognizes the finger part image within the finger image. Neural-network-based image recognition and model training are well-known techniques commonly used in this field and are not described in detail here.
Step S510: Draw the gesture trajectory from the start frame to the end frame according to the position information of the finger part images of the start frame and the end frame within the image range.
Specifically, the computer device 1 draws the gesture trajectory from the start frame to the end frame mainly by drawing a vector from the position information occupied by the finger part image in the finger image of the start frame to the position information occupied by the finger part image in the finger image of the end frame, and then looking up the corresponding gesture trajectory in a preset vector-to-gesture-trajectory correspondence table. In this embodiment, the computer device 1 points the position information of the outlier of the finger part image of the start frame to the position information of the outlier of the finger part image of the end frame to draw the vector; for example, the image is preset as a two-dimensional coordinate plane, a vector is drawn according to the coordinate information of the outliers of the finger part images in the finger images of the start frame and the end frame, and the corresponding gesture trajectory is then looked up in the preset vector-to-gesture-trajectory correspondence table. For example, a preset vector direction within 0-45 degrees of southeast corresponds to a rightward-slide gesture trajectory and a vector direction within 45-90 degrees of southeast corresponds to a downward-slide gesture trajectory; when the vector is at 30 degrees southeast, it is determined to be a rightward-slide gesture trajectory.
Step S512: Invoke and execute the corresponding operation instruction according to the gesture trajectory.
Specifically, after the computer device 1 has drawn the gesture trajectory from the start frame to the end frame, no further frames in the video segment are evaluated, because the segment length was preset taking into account the execution time of the user's gesture trajectory; the gesture trajectory drawn by the computer device 1 represents the user operation for the video segment. The computer device 1 therefore directly invokes and executes the corresponding operation instruction according to the gesture trajectory and the preset correspondence table between gesture trajectories and operation instructions.
The gesture operation method proposed in this embodiment can perform finger image recognition on the frame images in a video segment of the gesture video and extract contours, then obtain the area feature value and shape feature value of the contours to calculate the area change value and shape change value of the finger image contours of the two frames of images, which are used to determine whether a gesture trajectory is triggered; when it is determined that a gesture trajectory has been triggered, the finger part images in the finger images of the two frames of images are recognized and the gesture trajectory is drawn, and finally the operation instruction corresponding to the gesture trajectory is invoked and executed. The precision and accuracy of recognizing finger images in video images are thereby effectively improved.
The above are only preferred embodiments of the present invention and do not thereby limit the patent scope of the present invention; any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (20)

  1. A gesture operation method applied to a computer device, the method comprising the steps of:
    obtaining a gesture video, and dividing the gesture video into video segments of a preset number of frames;
    recognizing the finger image in each frame of image in the video segment according to a preset finger image recognition model;
    extracting the finger image contour in each frame of image, and sequentially obtaining the area feature value and shape feature value of the finger image contour of each frame of image;
    sequentially taking two frames of images from the video segment as a start frame and an end frame, and calculating the area change value and shape change value of the finger image contours of the start frame and the end frame according to the area feature values and shape feature values of the finger image contours of the start frame and the end frame;
    when the area change value of the finger image contours of the start frame and the end frame exceeds a preset first threshold or the shape change value exceeds a preset second threshold, respectively recognizing the finger part images in the finger images of the start frame and the end frame;
    drawing the gesture trajectory from the start frame to the end frame according to the position information of the finger part images of the start frame and the end frame within the image range;
    invoking and executing the corresponding operation instruction according to the gesture trajectory.
  2. The gesture operation method of claim 1, wherein the area feature value of the finger image contour is expressed as the number of pixels occupied by the finger image contour of the finger image in the gesture video image.
  3. The gesture operation method of claim 1, wherein the shape feature value of the finger image contour is expressed as the distribution of the pixels occupied by the finger image contour of the finger image in the gesture video image.
  4. The gesture operation method of claim 2, wherein the step of "calculating the area change value of the finger image contours of the start frame and the end frame according to the area feature values of the finger image contours of the start frame and the end frame" comprises:
    respectively obtaining the number of pixels included in the finger image contours of the start frame and the end frame;
    calculating the difference between the number of pixels included in the finger image contour of the start frame and the number of pixels included in the finger image contour of the end frame, and then dividing the pixel-count difference by the larger of the two pixel counts to obtain the area change value of the finger image contours of the start frame and the end frame.
  5. The gesture operation method of claim 3, wherein the step of "calculating the shape change value of the finger image contours of the start frame and the end frame according to the shape feature values of the finger image contours of the start frame and the end frame" comprises:
    dividing the start frame and the end frame into M*N blocks according to the same block pattern;
    respectively counting the number of block pixels occupied by the finger image contours of the start frame and the end frame in each block;
    calculating the difference between the block pixel count of each block of the start frame's finger image contour and the block pixel count of the corresponding block of the end frame's finger image contour, then summing the pixel-count differences of all blocks of the finger image contours of the start frame and the end frame to obtain a total difference, and dividing the total difference by the larger of the two contours' pixel counts to obtain the shape change value of the finger image contours of the two frames of images.
  6. The gesture operation method of claim 1, wherein the step of "recognizing the finger part images in the finger images of the start frame and the end frame" comprises:
    recognizing the finger part image in the finger image of the start frame according to a preset keypoint detector model and marking it as a noise label, and training the preset keypoint detector model according to the noise label to form a keypoint checker; and using the keypoint detector to recognize the finger part image of the corresponding finger image in the end frame.
  7. The gesture operation method of claim 1, wherein drawing the gesture trajectory from the start frame to the end frame mainly comprises drawing a vector from the position information occupied by the finger part image in the finger image of the start frame to the position information occupied by the finger part image in the finger image of the end frame, and then looking up the corresponding gesture trajectory in a preset vector-to-gesture-trajectory correspondence table.
  8. A gesture operation apparatus, the apparatus comprising:
    an acquisition module configured to obtain a gesture video and divide the gesture video into video segments of a preset number of frames;
    a recognition module configured to recognize the finger image in each frame of image in the video segment according to a preset finger image recognition model;
    the acquisition module being further configured to extract the finger image contour in each frame of image and sequentially obtain the area feature value and shape feature value of the finger image contour of each frame of image;
    a calculation module configured to sequentially take two frames of images from the video segment as a start frame and an end frame, and calculate the area change value and shape change value of the finger image contours of the start frame and the end frame according to the area feature values and shape feature values of the finger image contours of the start frame and the end frame;
    the recognition module being further configured to, when the area change value of the finger image contours of the start frame and the end frame exceeds a preset first threshold or the shape change value exceeds a preset second threshold, respectively recognize the finger part images in the finger images of the start frame and the end frame;
    a drawing module configured to draw the gesture trajectory from the start frame to the end frame according to the position information of the finger part images of the start frame and the end frame within the image range;
    an execution module configured to invoke and execute the corresponding operation instruction according to the gesture trajectory.
  9. The gesture operation apparatus of claim 8, wherein the area feature value of the finger image contour is expressed as the number of pixels occupied by the finger image contour of the finger image in the gesture video image.
  10. The gesture operation apparatus of claim 8, wherein the shape feature value of the finger image contour is expressed as the distribution of the pixels occupied by the finger image contour of the finger image in the gesture video image.
  11. The gesture operation apparatus of claim 9, wherein the calculation module is further configured to:
    respectively obtain the number of pixels included in the finger image contours of the start frame and the end frame;
    calculate the difference between the number of pixels included in the finger image contour of the start frame and the number of pixels included in the finger image contour of the end frame, and then divide the pixel-count difference by the larger of the two pixel counts to obtain the area change value of the finger image contours of the start frame and the end frame.
  12. The gesture operation apparatus of claim 10, wherein the calculation module is further configured to:
    divide the start frame and the end frame into M*N blocks according to the same block pattern;
    respectively count the number of block pixels occupied by the finger image contours of the start frame and the end frame in each block;
    calculate the difference between the block pixel count of each block of the start frame's finger image contour and the block pixel count of the corresponding block of the end frame's finger image contour, then sum the pixel-count differences of all blocks of the finger image contours of the start frame and the end frame to obtain a total difference, and divide the total difference by the larger of the two contours' pixel counts to obtain the shape change value of the finger image contours of the two frames of images.
  13. The gesture operation apparatus of claim 8, wherein the recognition module is further configured to:
    recognize the finger part image in the finger image of the start frame according to a preset keypoint detector model and mark it as a noise label, and train the preset keypoint detector model according to the noise label to form a keypoint checker; and use the keypoint detector to recognize the finger part image of the corresponding finger image in the end frame.
  14. The gesture operation apparatus of claim 8, wherein drawing the gesture trajectory from the start frame to the end frame mainly comprises drawing a vector from the position information occupied by the finger part image in the finger image of the start frame to the position information occupied by the finger part image in the finger image of the end frame, and then looking up the corresponding gesture trajectory in a preset vector-to-gesture-trajectory correspondence table.
  15. A computer device comprising a memory and a processor, the memory storing computer-readable instructions executable on the processor, the computer-readable instructions, when executed by the processor, implementing the steps of:
    obtaining a gesture video, and dividing the gesture video into video segments of a preset number of frames;
    recognizing the finger image in each frame of image in the video segment according to a preset finger image recognition model;
    extracting the finger image contour in each frame of image, and sequentially obtaining the area feature value and shape feature value of the finger image contour of each frame of image;
    sequentially taking two frames of images from the video segment as a start frame and an end frame, and calculating the area change value and shape change value of the finger image contours of the start frame and the end frame according to the area feature values and shape feature values of the finger image contours of the start frame and the end frame;
    when the area change value of the finger image contours of the start frame and the end frame exceeds a preset first threshold or the shape change value exceeds a preset second threshold, respectively recognizing the finger part images in the finger images of the start frame and the end frame;
    drawing the gesture trajectory from the start frame to the end frame according to the position information of the finger part images of the start frame and the end frame within the image range;
    invoking and executing the corresponding operation instruction according to the gesture trajectory.
  16. The computer device of claim 15, wherein when the area feature value of the finger image contour is expressed as the number of pixels occupied by the finger image contour of the finger image in the gesture video image, the computer-readable instructions, when executed by the processor, further implement the steps of:
    respectively obtaining the number of pixels included in the finger image contours of the start frame and the end frame;
    calculating the difference between the number of pixels included in the finger image contour of the start frame and the number of pixels included in the finger image contour of the end frame, and then dividing the pixel-count difference by the larger of the two pixel counts to obtain the area change value of the finger image contours of the start frame and the end frame.
  17. The computer device of claim 15, wherein when the shape feature value of the finger image contour is expressed as the distribution of the pixels occupied by the finger image contour of the finger image in the gesture video image, the computer-readable instructions, when executed by the processor, further implement the steps of:
    dividing the start frame and the end frame into M*N blocks according to the same block pattern;
    respectively counting the number of block pixels occupied by the finger image contours of the start frame and the end frame in each block;
    calculating the difference between the block pixel count of each block of the start frame's finger image contour and the block pixel count of the corresponding block of the end frame's finger image contour, then summing the pixel-count differences of all blocks of the finger image contours of the start frame and the end frame to obtain a total difference, and dividing the total difference by the larger of the two contours' pixel counts to obtain the shape change value of the finger image contours of the two frames of images.
  18. A non-volatile computer-readable storage medium storing computer-readable instructions, the computer-readable instructions being executable by at least one processor to cause the at least one processor to perform the steps of:
    obtaining a gesture video, and dividing the gesture video into video segments of a preset number of frames;
    recognizing the finger image in each frame of image in the video segment according to a preset finger image recognition model;
    extracting the finger image contour in each frame of image, and sequentially obtaining the area feature value and shape feature value of the finger image contour of each frame of image;
    sequentially taking two frames of images from the video segment as a start frame and an end frame, and calculating the area change value and shape change value of the finger image contours of the start frame and the end frame according to the area feature values and shape feature values of the finger image contours of the start frame and the end frame;
    when the area change value of the finger image contours of the start frame and the end frame exceeds a preset first threshold or the shape change value exceeds a preset second threshold, respectively recognizing the finger part images in the finger images of the start frame and the end frame;
    drawing the gesture trajectory from the start frame to the end frame according to the position information of the finger part images of the start frame and the end frame within the image range;
    invoking and executing the corresponding operation instruction according to the gesture trajectory.
  19. The non-volatile computer-readable storage medium of claim 18, wherein when the area feature value of the finger image contour is expressed as the number of pixels occupied by the finger image contour of the finger image in the gesture video image, the computer-readable instructions, when executed by the processor, further implement the steps of:
    respectively obtaining the number of pixels included in the finger image contours of the start frame and the end frame;
    calculating the difference between the number of pixels included in the finger image contour of the start frame and the number of pixels included in the finger image contour of the end frame, and then dividing the pixel-count difference by the larger of the two pixel counts to obtain the area change value of the finger image contours of the start frame and the end frame.
  20. The non-volatile computer-readable storage medium of claim 18, wherein when the shape feature value of the finger image contour is expressed as the distribution of the pixels occupied by the finger image contour of the finger image in the gesture video image, the computer-readable instructions, when executed by the processor, further implement the steps of:
    dividing the start frame and the end frame into M*N blocks according to the same block pattern;
    respectively counting the number of block pixels occupied by the finger image contours of the start frame and the end frame in each block;
    calculating the difference between the block pixel count of each block of the start frame's finger image contour and the block pixel count of the corresponding block of the end frame's finger image contour, then summing the pixel-count differences of all blocks of the finger image contours of the start frame and the end frame to obtain a total difference, and dividing the total difference by the larger of the two contours' pixel counts to obtain the shape change value of the finger image contours of the two frames of images.
PCT/CN2019/117770 2019-07-19 2019-11-13 Gesture operation method and apparatus, and computer device WO2021012513A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910655568.X 2019-07-19
CN201910655568.XA CN110532863A (zh) 2019-07-19 2019-07-19 Gesture operation method and apparatus, and computer device

Publications (1)

Publication Number Publication Date
WO2021012513A1 true WO2021012513A1 (zh) 2021-01-28

Family

ID=68660428

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/117770 WO2021012513A1 (zh) 2019-07-19 2019-11-13 Gesture operation method and apparatus, and computer device

Country Status (2)

Country Link
CN (1) CN110532863A (zh)
WO (1) WO2021012513A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114564104A (zh) * 2022-02-17 2022-05-31 Xidian University Conference presentation system based on dynamic gesture control in video
CN116614666A (zh) * 2023-07-17 2023-08-18 微网优联科技(成都)有限公司 AI-camera-based feature extraction system and method

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112926364B (zh) * 2019-12-06 2024-04-19 北京四维图新科技股份有限公司 Head-pose recognition method and system, driving recorder, and intelligent cockpit

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101699469A (zh) * 2009-11-09 2010-04-28 Nanjing University of Posts and Telecommunications Automatic recognition method for teachers' blackboard-writing actions in classroom videos
CN102063618A (zh) * 2011-01-13 2011-05-18 中科芯集成电路股份有限公司 Dynamic gesture recognition method in an interactive system
CN102446032A (zh) * 2010-09-30 2012-05-09 China Mobile Communications Co., Ltd. Camera-based information input method and terminal
US20120242566A1 (en) * 2011-03-23 2012-09-27 Zhiwei Zhang Vision-Based User Interface and Related Method
CN104317385A (zh) * 2014-06-26 2015-01-28 Qingdao Hisense Electric Co., Ltd. Gesture recognition method and system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763429B (zh) * 2010-01-14 2012-01-25 Sun Yat-sen University Image retrieval method based on color and shape features
JP5264844B2 (ja) * 2010-09-06 2013-08-14 Nippon Telegraph And Telephone Corporation Gesture recognition apparatus and method
JP2013080433A (ja) * 2011-10-05 2013-05-02 Nippon Telegr & Teleph Corp <Ntt> Gesture recognition device and program therefor
CN103576848B (zh) * 2012-08-09 2016-07-13 Tencent Technology (Shenzhen) Co., Ltd. Gesture operation method and gesture operation apparatus
CN103679145A (zh) * 2013-12-06 2014-03-26 Hohai University Automatic gesture recognition method
CN104766038B (zh) * 2014-01-02 2018-05-18 Ricoh Co., Ltd. Palm opening and closing action recognition method and device
CN108351708B (zh) * 2016-10-14 2020-04-03 Huawei Technologies Co., Ltd. Three-dimensional gesture unlocking method, method for acquiring gesture images, and terminal device
CN109614922B (zh) * 2018-12-07 2023-05-02 Nanjing Fujitsu Nanda Software Technology Co., Ltd. Dynamic and static gesture recognition method and system


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114564104A (zh) * 2022-02-17 2022-05-31 Xidian University Conference presentation system based on dynamic gesture control in video
CN116614666A (zh) * 2023-07-17 2023-08-18 微网优联科技(成都)有限公司 AI-camera-based feature extraction system and method
CN116614666B (zh) * 2023-07-17 2023-10-20 微网优联科技(成都)有限公司 AI-camera-based feature extraction system and method

Also Published As

Publication number Publication date
CN110532863A (zh) 2019-12-03

Similar Documents

Publication Publication Date Title
CN110147717B (zh) Human body action recognition method and device
CN110738101B (zh) Behavior recognition method and apparatus, and computer-readable storage medium
CN108960163B (zh) Gesture recognition method, apparatus, device and storage medium
US10599914B2 (en) Method and apparatus for human face image processing
US9916012B2 (en) Image processing apparatus, image processing method, and program
RU2711029C2 (ru) Классификация касаний
CN106934333B (zh) Gesture recognition method and system
WO2021012513A1 (zh) Gesture operation method and apparatus, and computer device
CN107679448B (zh) Eyeball movement analysis method and device, and storage medium
CN110135246A (zh) Human body action recognition method and device
US8417026B2 (en) Gesture recognition methods and systems
CN106845384B (zh) Gesture recognition method based on a recursive model
US10489636B2 (en) Lip movement capturing method and device, and storage medium
CN107357414B (zh) Click action recognition method and click action recognition device
US10650234B2 (en) Eyeball movement capturing method and device, and storage medium
US10922535B2 (en) Method and device for identifying wrist, method for identifying gesture, electronic equipment and computer-readable storage medium
CN107004073A (zh) Face verification method and electronic device
CN111754391A (zh) Face frontalization method and device, and computer-readable storage medium
Vivek Veeriah et al. Robust hand gesture recognition algorithm for simple mouse control
Gharasuie et al. Real-time dynamic hand gesture recognition using hidden Markov models
CN111986229A (zh) Video object detection method and apparatus, and computer system
KR20130073934A (ko) Method, system and computer recording medium for gesture-based human-computer interaction
CN111368674B (zh) Image recognition method and device
CN116311526A (zh) Image region determination method and apparatus, electronic device, and storage medium
KR20190132885A (ko) Apparatus, method and computer program for detecting a hand from an image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19938889

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19938889

Country of ref document: EP

Kind code of ref document: A1