WO2017101496A1 - Gesture recognition method and apparatus - Google Patents

Gesture recognition method and apparatus

Info

Publication number
WO2017101496A1
WO2017101496A1 (PCT/CN2016/096485; priority CN 2016096485 W)
Authority
WO
WIPO (PCT)
Prior art keywords
center
gravity
coordinates
gravity point
projection
Prior art date
Application number
PCT/CN2016/096485
Other languages
English (en)
French (fr)
Inventor
李艳杰
Original Assignee
乐视控股(北京)有限公司
乐视致新电子科技(天津)有限公司
Priority date
Filing date
Publication date
Application filed by 乐视控股(北京)有限公司 and 乐视致新电子科技(天津)有限公司
Publication of WO2017101496A1 publication Critical patent/WO2017101496A1/zh

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 — Movements or behaviour, e.g. gesture recognition
    • G06V40/28 — Recognition of hand or arm movements, e.g. recognition of deaf sign language

Definitions

  • Embodiments of the present invention relate to the field of gesture recognition technologies, and in particular to a gesture recognition method and apparatus.
  • Embodiments of the present invention provide a gesture recognition method and device that address the prior-art defect of a low recognition rate when the human hand moves too fast.
  • The method proposed in this patent can quickly judge the horizontal and vertical movement directions of the hand, has strong anti-interference ability, and can recognize the gesture even when the hand moves quickly.
  • An embodiment of the present application provides a gesture recognition method, including the following steps:
  • a gesture recognition apparatus including:
  • an obtaining module configured to acquire a frame image and calculate the center of gravity of the human hand in the image;
  • a saving module configured to calculate the distance between the current center-of-gravity point and the last saved center-of-gravity point and determine whether the distance is greater than a preset distance threshold; when the determination result is yes, the center-of-gravity point is acquired and saved;
  • a determining module configured to determine whether the number of consecutive successful acquisitions of the center-of-gravity point exceeds a preset count threshold; when the determination result is yes, the direction of motion of the human hand is determined from the coordinates of the consecutively acquired center-of-gravity points.
  • the embodiment of the present application provides an electronic device, including: the above gesture recognition device.
  • An embodiment of the present application provides a non-transitory computer readable storage medium that can store a computer program which, when executed, implements some or all of the steps of the gesture recognition method described above.
  • An embodiment of the present application further provides an electronic device, including one or more processors and a memory, wherein the memory stores instructions executable by the one or more processors to enable them to perform the gesture recognition method of any of the above embodiments of the present application.
  • An embodiment of the present application provides a computer program product comprising a computer program stored on a non-transitory computer readable storage medium; the computer program comprises program instructions that, when executed by a computer, cause the computer to perform the gesture recognition method of any of the above embodiments of the present application.
  • The gesture recognition method and device provided by the embodiments of the present invention determine the movement direction of the human hand through the coordinates of the hand's center of gravity, overcoming the prior-art defect of a low recognition rate when the hand moves too fast; the movement direction of the hand can be judged quickly,
  • anti-interference is strong, recognition precision is high, and the movement direction can be recognized accurately even when the hand moves quickly, realizing gesture recognition.
  • FIG. 1 is a flowchart of a gesture recognition method in an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an x-axis and a y-axis in an image coordinate system according to an embodiment of the present application;
  • FIG. 3 is a schematic diagram of a fitting straight line based on a least squares method in the embodiment of the present application
  • FIG. 4 is a schematic diagram of a projection vector and an x-axis vector rotation angle in an image coordinate system according to an embodiment of the present application
  • FIG. 5 is a schematic diagram of an application process of a gesture recognition method according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an embodiment of a gesture recognition apparatus according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a hardware structure of a device for a gesture recognition method according to an embodiment of the present disclosure.
  • Dynamic gesture recognition methods in the prior art generally require that the human hand not move too fast; when it does, the recognition rate is low and dynamic gestures are hard to recognize. Users are thus limited by hand speed and the experience suffers. If gestures and the direction of hand movement could be recognized accurately even during fast motion, the user experience would improve greatly.
  • FIG. 1 is a flowchart of a gesture recognition method according to an embodiment of the present application.
  • a gesture recognition method according to Embodiment 1 of the present application includes the following steps:
  • Step S101 Acquire a frame image, and calculate a center of gravity of the human hand in the image
  • Acquire one frame of a multi-frame image stream and detect whether a human hand is present. The hand may be detected by an image segmentation method, or by examining the pixel values of the image.
  • The acquired image is a binary image, in which the pixel value of the hand region is 255 and the pixel value of the remaining area is 0.
  • By scanning the pixels one by one, obtaining each pixel value, and checking whether any pixel carries the hand-region value, it is determined whether a hand is present: if so, step S102 is performed; if not, another frame is acquired and the detection is repeated.
  • When a hand is detected in the binary image, calculate its center of gravity. Preferably, calculating the center of gravity of the hand in the image further includes computing it with the following formulas:

x_g = Σ_x Σ_y x · image(x, y) / Σ_x Σ_y image(x, y)

y_g = Σ_x Σ_y y · image(x, y) / Σ_x Σ_y image(x, y)

  • where image(x, y) is the pixel value at coordinates (x, y),
  • and x_g and y_g are the coordinates of the current center-of-gravity point in the x-axis and y-axis directions, respectively.
  • FIG. 2 is a schematic diagram of the x-axis and y-axis in the image coordinate system in the embodiment of the present application.
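The centroid computation above can be sketched in pure Python on a toy binary mask (the function name and the 4×4 mask are illustrative, not from the patent):

```python
def hand_centroid(image):
    """Center of gravity of the hand region in a binary image,
    where 255 marks hand pixels and 0 marks background."""
    total = sx = sy = 0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            total += v
            sx += x * v  # numerator of x_g
            sy += y * v  # numerator of y_g
    if total == 0:
        return None  # no hand pixels detected
    return sx / total, sy / total

# 4x4 mask with a 2x2 "hand" block in the middle:
mask = [[0,   0,   0, 0],
        [0, 255, 255, 0],
        [0, 255, 255, 0],
        [0,   0,   0, 0]]
print(hand_centroid(mask))  # → (1.5, 1.5)
```

Because every hand pixel carries the same value 255, weighting by image(x, y) is equivalent to averaging the coordinates of the hand pixels.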
  • Step S102: calculate the distance between the current center-of-gravity point and the last saved center-of-gravity point, and determine whether the distance is greater than a preset distance threshold; when the determination result is yes, acquire and save the center-of-gravity point.
  • Checking the distance against the threshold prevents false detections caused by hand movements that are too short. The distance threshold may be 10% of the height of the acquired image, or another user-defined value; it is not specifically limited here.
  • When the distance between the current point and the last saved point is greater than the preset threshold, the point is acquired and saved, the current center-of-gravity point is successfully acquired, and step S103 is performed.
  • If the distance is less than or equal to the preset threshold, step S101 is performed.
  • Step S103: determine whether the number of consecutive successful acquisitions of the center-of-gravity point exceeds a preset count threshold.
  • When the determination result is yes,
  • the motion direction of the human hand is determined from the coordinates of the consecutively acquired center-of-gravity points.
  • After a point is successfully acquired in step S102, checking the consecutive-success count ensures that the hand's movement is continuous and long enough, reducing false detections.
  • Consecutive acquisition of the center-of-gravity point can be achieved in two ways.
  • Preferably, consecutive successful acquisition includes: successfully obtaining the point in images of consecutive frames, or in images of preset frames within a preset period.
  • In the first way, multiple images with consecutive frame indices are acquired, and the hand's center of gravity is successfully obtained through step S102 in each of them;
  • the number of images acquired is the consecutive-success count. For example, if six consecutive frames are acquired and the center of gravity is obtained in all of them, the count
  • is six.
  • In the second way, the preset period contains multiple frames, from which images of preset frames are taken, and
  • the center of gravity is successfully acquired through step S102 in each; the number of images acquired is the consecutive-success count. For example, 0.5 seconds may contain 12 frames, of which the even-numbered frames are taken,
  • i.e., six images; if the center of gravity is successfully acquired in all of them, the consecutive-success count is six.
  • The preset count threshold in this embodiment includes but is not limited to six; it may also be user-defined and is not specifically limited here.
  • When the consecutive-success count exceeds the preset threshold, the motion direction of the hand is determined from the coordinates of the consecutively acquired center-of-gravity points.
  • The specific embodiment further includes the following sub-steps.
  • Preferably, determining the direction of motion of the hand from the coordinates of the consecutively acquired points further includes:
  • Sub-step 1: fit a straight line by the least squares method to the coordinates of the consecutively acquired center-of-gravity points.
  • FIG. 3 is a schematic diagram of a line fitted by the least squares method in the embodiment of the present application.
  • Sub-step 2: using the coordinates of the consecutively acquired center-of-gravity points and the fitted line, calculate the projection coordinates of each point on the fitted line, and form the projection vector connecting the projection of the first point to the projection of the last point, as well as the x-axis vector.
  • The coordinates of the points are known, and so is the equation of the line fitted to them by least squares, so the projection of each point onto the line can be computed. Preferably, calculating the projection coordinates of a point (x_0, y_0) on the fitted line Ax + By + C = 0 further includes using the following formulas:

x_t = (B²·x_0 − A·B·y_0 − A·C) / (A² + B²)
y_t = (A²·y_0 − A·B·x_0 − B·C) / (A² + B²)

  • After the projections of all points on the fitted line are computed, the projection point of the first acquired center-of-gravity point is connected to that of the last acquired one to form the projection vector V_h(x_h, y_h), where (x_h, y_h) are the coordinates of V_h.
  • The x-axis vector V_x(x_x, y_y) is also computed, where (x_x, y_y) are its coordinates; in this embodiment of the present application
  • the x-axis coordinates include but are not limited to (1, 0), and may be other user-defined coordinates, which are not specifically limited in this embodiment.
  • With the projection vector V_h and the x-axis vector V_x computed, sub-step 3 is performed.
  • Sub-step 3: calculate the rotation angle of the projection vector relative to the x-axis vector, and judge the movement direction of the hand from the angle.
  • Calculating the rotation angle further includes using the following formula:

θ = arccos( (V_h · V_x) / (|V_h|·|V_x|) )

  • where V_h is the projection vector connecting the projection of the first center-of-gravity point to that of the last, with coordinates (x_h, y_h), and V_x is the x-axis vector.
  • FIG. 4 is a schematic diagram of the projection vector and the x-axis vector in the image coordinate system in the embodiment of the present application.
  • Determining the movement direction from the angle further includes: judging the direction of the hand's motion from the sign and value range of the rotation angle θ. From the sign and value of the angle it can be determined whether the motion is horizontal or vertical; this embodiment includes, but is not limited to, the following angle ranges (given in the description): −35° < θ < 35° horizontally right; 145° < θ ≤ 180° or −180° ≤ θ < −145° horizontally left; 60° < θ < 150° vertically up; −150° < θ < −60° vertically down.
  • The user may also define other angle ranges for the judgment; this embodiment does not specifically limit them.
  • The angle range for the vertical direction is defined larger than that for the horizontal direction because it is relatively easier for people to make horizontal movements.
  • FIG. 5 is a schematic diagram of the application flow of the gesture recognition method in the embodiment of the present application, as shown in FIG. 5:
  • Start gesture recognition: acquire a frame of image and detect whether a human hand is present; when a hand is detected, calculate its center of gravity, otherwise return and acquire another frame;
  • fit a straight line by least squares to the center-of-gravity coordinates, compute the projection vector of the points on the fitted line from their coordinates and the line, compute the rotation angle of the projection vector
  • relative to the x-axis vector, and determine the direction of motion from the angle; otherwise return and reacquire a frame.
  • the non-transitory computer readable storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
  • FIG. 6 is a schematic structural diagram of an embodiment of a gesture recognition apparatus according to an embodiment of the present application. As shown in FIG. 6 , the embodiment of the present application further provides a gesture recognition apparatus, including:
  • obtaining module 1, configured to acquire a frame image and calculate the center of gravity of the human hand in the image;
  • saving module 2, configured to calculate the distance between the current center-of-gravity point and the last saved one and determine whether it is greater than the preset distance threshold; when the determination result is yes, the point is acquired and saved;
  • determining module 3, configured to determine whether the number of consecutive successful acquisitions exceeds the preset count threshold; when the determination result is yes, the motion direction of the hand is determined from the coordinates of the consecutively acquired points.
  • The obtaining module 1 is further configured to compute the center of gravity with the formulas

x_g = Σ_x Σ_y x · image(x, y) / Σ_x Σ_y image(x, y)
y_g = Σ_x Σ_y y · image(x, y) / Σ_x Σ_y image(x, y)

  • where image(x, y) is the pixel value at coordinates (x, y),
  • and x_g and y_g are the centers of gravity in the x-axis and y-axis directions, respectively.
  • the determining module 3 is configured to: successfully obtain the center of gravity point in the images of consecutive frames, or successfully obtain the center of gravity point in the image of the preset frame in the preset period.
  • The determining module 3 is further configured to:
  • fit a straight line by least squares to the coordinates of the consecutively acquired center-of-gravity points, compute the projections of the points on the fitted line, form the projection vector and the x-axis vector, calculate the rotation angle of the projection vector relative to the x-axis vector, and determine the direction of motion of the hand from the angle.
  • The determining module 3 is further configured to:
  • calculate the rotation angle of the projection vector relative to the x-axis vector with the formula θ = arccos( (V_h · V_x) / (|V_h|·|V_x|) ),
  • where V_h is the projection vector connecting the projection of the first center-of-gravity point to that of the last, with coordinates (x_h, y_h), and V_x is the x-axis vector;
  • in the image coordinate system, θ is negative if the projection vector is rotated counterclockwise relative to the x-axis and positive if rotated clockwise;
  • and determine the direction of motion of the hand from the sign and value range of the rotation angle θ.
  • The apparatus shown in FIG. 6 can perform the methods of the embodiments shown in FIG. 1 and FIG. 5; for the implementation principle and technical effects, refer to those embodiments, and details are not described here again.
  • The gesture recognition method and apparatus determine the movement direction of the human hand through the coordinates of the hand's center of gravity, overcoming the prior-art defect of low recognition rates during fast hand movement: the direction of movement can be judged quickly, anti-interference is strong, recognition accuracy is high, and the direction can be recognized accurately even when the hand moves quickly, realizing gesture recognition.
  • The embodiment of the present application further provides a non-transitory computer readable storage medium that can store a program which, when executed, can implement some or all of the steps of any implementation of the gesture recognition method provided by the foregoing embodiments.
  • FIG. 7 is a schematic structural diagram of hardware of an electronic device according to a gesture recognition method according to an embodiment of the present disclosure. As shown in FIG. 7, the device includes:
  • one or more processors 710 and a memory 720; one processor 710 is taken as an example in FIG. 7.
  • the apparatus for performing the gesture recognition method may further include: an input device 730 and an output device 740.
  • the processor 710, the memory 720, the input device 730, and the output device 740 may be connected by a bus or other means, as exemplified by a bus connection in FIG.
  • The memory 720, as a non-transitory computer readable storage medium, can be used to store non-volatile software programs, non-volatile computer executable programs, and modules, such as the program instructions/modules corresponding to the gesture recognition method in the embodiment of the present application.
  • the processor 710 executes various functional applications and data processing of the electronic device by executing non-volatile software programs, instructions, and modules stored in the memory 720, that is, the gesture recognition method of the above-described method embodiments.
  • The memory 720 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application required for at least one function, and the data storage area may store data created according to the use of the gesture recognition device, and the like.
  • memory 720 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
  • memory 720 can optionally include a memory remotely located relative to processor 710 that can be coupled to the gesture recognition device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • Input device 730 can receive input numeric or character information and generate key signal inputs related to user settings and function control of the gesture recognition device.
  • the output device 740 can include a display device such as a display screen.
  • the one or more modules are stored in the memory 720, and when executed by the one or more processors 710, perform the gesture recognition method in any of the above method embodiments.
  • the electronic device of the embodiment of the invention exists in various forms, including but not limited to:
  • Mobile communication devices: these devices are characterized by mobile communication functions and are mainly aimed at providing voice and data communication. Such terminals include smart phones (such as the iPhone), multimedia phones, feature phones, and low-end phones.
  • Ultra-mobile personal computer devices: this type of equipment belongs to the category of personal computers, has computing and processing functions, and generally also has mobile Internet access. Such terminals include PDA, MID, and UMPC devices, such as the iPad.
  • Portable entertainment devices: these devices can display and play multimedia content. They include audio and video players (such as the iPod), handheld game consoles, e-book readers, smart toys, and portable car navigation devices.
  • Servers: a server consists of a processor, hard disk, memory, system bus, and so on; its architecture is similar to that of a general-purpose computer, but because highly reliable services must be provided, the requirements on processing power, stability, reliability, security, scalability, and manageability are high.
  • The device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without creative effort.


Abstract

A gesture recognition method and apparatus. The method includes the following steps: acquiring a frame image and calculating the center-of-gravity point of the human hand in the image (S101); calculating the distance between the current center-of-gravity point and the last saved center-of-gravity point, determining whether the distance is greater than a preset distance threshold, and if so, acquiring and saving the center-of-gravity point (S102); determining whether the number of consecutive successful acquisitions of the center-of-gravity point exceeds a preset count threshold, and if so, determining the direction of motion of the human hand from the coordinates of the consecutively acquired center-of-gravity points (S103). The method can quickly judge the direction of motion of the human hand, has strong anti-interference and high recognition accuracy, and can accurately recognize the direction of motion even when the hand moves quickly.

Description

Gesture recognition method and apparatus
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on 2015-12-18, application No. 2015109648067, entitled "Gesture recognition method and apparatus", the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present application relate to the field of gesture recognition technology, and in particular to a gesture recognition method and apparatus.
Background
With the computerization of society, the influence of computers in modern life keeps growing, and human-computer interaction is no longer limited to the mouse and keyboard. Interaction based on gesture recognition is gradually becoming well known and widely applied; its intuitiveness makes human-computer interaction more flexible, direct, and convenient.
In current gesture recognition technology, methods can be divided, according to whether the hand's motion state is recognized, into static and dynamic gesture recognition: static gesture recognition only needs to recognize a static gesture from a single frame, while dynamic gesture recognition must recognize a dynamic gesture from multiple frames. However, prior-art dynamic gesture recognition methods generally require that the hand not move too fast; when it does, the recognition rate is low and dynamic gestures are hard to recognize.
Therefore, a new gesture recognition method and apparatus is urgently needed.
Summary of the Invention
Embodiments of the present application provide a gesture recognition method and apparatus to solve the prior-art defect that recognition rates are low when the human hand moves too fast. The method proposed in this patent can quickly judge the horizontal and vertical movement directions of the hand, has strong anti-interference ability, and can recognize gestures even when the hand moves very fast.
An embodiment of the present application provides a gesture recognition method, including the following steps:
acquiring a frame image and calculating the center-of-gravity point of the human hand in the image;
calculating the distance between the current center-of-gravity point and the last saved center-of-gravity point, and determining whether the distance is greater than a preset distance threshold; if so, acquiring and saving the center-of-gravity point;
determining whether the number of consecutive successful acquisitions of the center-of-gravity point exceeds a preset count threshold; if so, determining the direction of motion of the human hand from the coordinates of the consecutively acquired center-of-gravity points.
Correspondingly, an embodiment of the present application provides a gesture recognition apparatus, including:
an obtaining module for acquiring a frame image and calculating the center-of-gravity point of the human hand in the image;
a saving module for calculating the distance between the current center-of-gravity point and the last saved center-of-gravity point and determining whether the distance is greater than a preset distance threshold; if so, the center-of-gravity point is acquired and saved;
a determining module for determining whether the number of consecutive successful acquisitions of the center-of-gravity point exceeds a preset count threshold; if so, the direction of motion of the human hand is determined from the coordinates of the consecutively acquired center-of-gravity points.
An embodiment of the present application provides an electronic device including the above gesture recognition apparatus.
An embodiment of the present application provides a non-transitory computer readable storage medium that can store a computer program which, when executed, implements some or all of the steps of the gesture recognition method described above.
An embodiment of the present application further provides an electronic device, including one or more processors and a memory, wherein the memory stores instructions executable by the one or more processors to enable them to perform any of the gesture recognition methods of the present application described above.
An embodiment of the present application provides a computer program product comprising a computer program stored on a non-transitory computer readable storage medium; the computer program comprises program instructions that, when executed by a computer, cause the computer to perform any of the gesture recognition methods of the embodiments described above.
The gesture recognition method and apparatus provided by the embodiments of the present application determine the movement direction of the human hand through the coordinates of the hand's center of gravity, overcoming the prior-art defect of low recognition rates during fast hand movement: the direction of movement can be judged quickly, anti-interference is strong, recognition accuracy is high, and the direction of motion can be recognized accurately even when the hand moves quickly, realizing gesture recognition.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present application or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate some embodiments of the present application; those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of the gesture recognition method in an embodiment of the present application;
FIG. 2 is a schematic diagram of the x-axis and y-axis in the image coordinate system in an embodiment of the present application;
FIG. 3 is a schematic diagram of a line fitted by the least squares method in an embodiment of the present application;
FIG. 4 is a schematic diagram of the rotation angle between the projection vector and the x-axis vector in the image coordinate system in an embodiment of the present application;
FIG. 5 is a schematic diagram of the application flow of the gesture recognition method in an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an embodiment of the gesture recognition apparatus in an embodiment of the present application;
FIG. 7 is a schematic diagram of the hardware structure of a device for the gesture recognition method provided by an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
In current gesture recognition technology, methods can be divided, according to whether the hand's motion state is recognized, into static and dynamic gesture recognition: static gesture recognition only needs to recognize a static gesture from a single frame, while dynamic gesture recognition must recognize a dynamic gesture from multiple frames. However, prior-art dynamic gesture recognition methods generally require that the hand not move too fast; when it does, the recognition rate is low and dynamic gestures are hard to recognize. Users are limited by hand speed and the experience suffers; if gestures and the direction of hand motion could be recognized accurately even during fast movement, the user experience would improve greatly. To make the objectives, technical solutions, and advantages of this application clearer, the application is described in further detail below with reference to the drawings and specific embodiments.
Embodiment 1
FIG. 1 is a flowchart of the gesture recognition method in an embodiment of the present application. With reference to FIG. 1, a gesture recognition method according to Embodiment 1 of the present application includes the following steps:
Step S101: acquire a frame image and calculate the center-of-gravity point of the human hand in the image.
Acquire one frame of a multi-frame image stream and detect whether a human hand is present. The hand may be detected by an image segmentation method, or by examining the pixel values of the image. The acquired image is a binary image in which the pixel value of the hand region is 255 and that of the remaining area is 0. By scanning the pixels one by one, obtaining each pixel value, and checking whether any pixel carries the hand-region value, it is determined whether a hand is present: if so, step S102 is performed; if not, another frame is acquired and the detection is repeated.
When a hand is detected in the binary image, calculate its center-of-gravity point. Preferably, calculating the center of gravity of the hand in the image further includes computing it with the following formulas:

x_g = Σ_x Σ_y x · image(x, y) / Σ_x Σ_y image(x, y)

y_g = Σ_x Σ_y y · image(x, y) / Σ_x Σ_y image(x, y)

where image(x, y) is the pixel value at coordinates (x, y), and x_g and y_g are the coordinates of the current center-of-gravity point in the x-axis and y-axis directions, respectively. FIG. 2 is a schematic diagram of the x-axis and y-axis in the image coordinate system in the embodiment of the present application.
Step S102: calculate the distance between the current center-of-gravity point and the last saved center-of-gravity point, and determine whether the distance is greater than a preset distance threshold; if so, acquire and save the point.
After the current point's coordinates are computed in step S101, the distance to the last saved point is calculated and compared with the threshold; this check prevents false detections caused by hand movements that are too short. In this embodiment the distance threshold may be 10% of the height of the acquired image, or another user-defined value, and is not specifically limited here.
When the distance between the current point and the last saved point is greater than the preset threshold, the point is acquired and saved, the current center-of-gravity point is successfully acquired, and step S103 is performed.
If the distance is less than or equal to the preset threshold, step S101 is performed.
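The distance gate of step S102 can be sketched as follows (a minimal sketch; the function name, and accepting the very first centroid when nothing has been saved yet, are assumptions not spelled out in the patent):

```python
import math

DIST_RATIO = 0.10  # threshold: 10% of the acquired image's height

def accept_centroid(current, last_saved, image_height):
    """Return True iff the new centroid lies far enough from the last saved
    one, filtering out hand movements that are too short."""
    if last_saved is None:  # nothing saved yet: accept (assumption)
        return True
    dist = math.hypot(current[0] - last_saved[0], current[1] - last_saved[1])
    return dist > DIST_RATIO * image_height

# With a 480-pixel-high image the threshold is 48 pixels:
print(accept_centroid((100, 100), (100, 40), 480))  # 60 px apart → True
print(accept_centroid((100, 100), (100, 70), 480))  # 30 px apart → False
```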
Step S103: determine whether the number of consecutive successful acquisitions of the center-of-gravity point exceeds a preset count threshold; if so, determine the direction of motion of the human hand from the coordinates of the consecutively acquired points.
After a point is successfully acquired in step S102, the consecutive-success count is checked against the preset threshold; this ensures that the hand's movement is continuous and long enough, reducing false detections.
Consecutive acquisition can be achieved in two ways. Preferably, consecutive successful acquisition includes: successfully obtaining the point in images of consecutive frames, or in images of preset frames within a preset period. In the first way, multiple images with consecutive frame indices are acquired, and the hand's center of gravity is successfully obtained through step S102 in each of them; the number of images acquired is the consecutive-success count. For example, if six consecutive frames are acquired and the center of gravity is obtained in all of them, the count is six.
In the second way, the preset period contains multiple frames, from which images of preset frames are taken, and the center of gravity is successfully acquired through step S102 in each; the number of images acquired is the consecutive-success count. For example, 0.5 seconds may contain 12 frames, of which the even-numbered frames, i.e. six images, are taken; if the center of gravity is successfully acquired in all of them, the count is six.
The preset count threshold in this embodiment includes but is not limited to six; it may also be user-defined and is not specifically limited here.
When the consecutive-success count exceeds the preset threshold, the direction of motion of the hand is determined from the coordinates of the consecutively acquired center-of-gravity points.
For step S103, the specific implementation further includes the following sub-steps. Preferably, determining the direction of motion of the hand from the coordinates of the consecutively acquired center-of-gravity points further includes:
Sub-step 1: fit a straight line by the least squares method to the coordinates of the consecutively acquired center-of-gravity points.
Because detection of the hand's motion direction is disturbed by noise and other factors, a line fitted through several hand centroids reflects the motion direction more accurately. The line is fitted by the least squares method, which is fast to compute; its goal is to find the line minimizing the total distance from all points to the line. This embodiment uses the OpenCV function cvFitLine to implement this. After the line is fitted to the coordinates of the consecutively acquired points, sub-step 2 is performed. FIG. 3 is a schematic diagram of a line fitted by the least squares method in the embodiment of the present application.
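The patent implements the fit with OpenCV's cvFitLine; purely as an illustration, here is a plain ordinary-least-squares sketch (it minimizes vertical offsets rather than cvFitLine's perpendicular distances, and assumes the motion track is not perfectly vertical):

```python
def fit_line(points):
    """Fit y = k*x + b through the centroids by ordinary least squares and
    return it as coefficients (A, B, C) of the line A*x + B*y + C = 0."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    k = sxy / sxx          # slope; assumes sxx != 0 (not a vertical track)
    b = my - k * mx        # intercept
    return k, -1.0, b      # A, B, C with A*x + B*y + C = k*x - y + b

print(fit_line([(0, 1), (1, 3), (2, 5)]))  # exactly y = 2x + 1 → (2.0, -1.0, 1.0)
```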
Sub-step 2: using the coordinates of the consecutively acquired center-of-gravity points and the fitted line, calculate the projection coordinates of each point on the fitted line, and form the projection vector connecting the projection of the first point to the projection of the last point, as well as the x-axis vector.
The coordinates of the points are known, and so is the equation of the line fitted to them by least squares, so the projection of each point onto the line can be computed. For sub-step 2, preferably, calculating the projection coordinates of a center-of-gravity point on the fitted line further includes using the following formulas:

x_t = (B²·x_0 − A·B·y_0 − A·C) / (A² + B²)
y_t = (A²·y_0 − A·B·x_0 − B·C) / (A² + B²)

where (x_0, y_0) are the known coordinates of a center-of-gravity point, Ax + By + C = 0 is the equation of the fitted line, and (x_t, y_t) are the projection coordinates of the point on the fitted line.
With these formulas, the projections of all the points on the fitted line are computed. The projection point of the first acquired center-of-gravity point is connected to that of the last acquired one to form the projection vector V_h(x_h, y_h), where (x_h, y_h) are the coordinates of V_h. After the projection vector is obtained, the x-axis vector V_x(x_x, y_y) is also computed, where (x_x, y_y) are its coordinates; in this embodiment the x-axis coordinates include but are not limited to (1, 0) and may be other user-defined coordinates, which are not specifically limited here.
With the projection vector V_h and the x-axis vector V_x computed, sub-step 3 is performed.
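A sketch of the closed-form projection above and of building V_h from the first and last projected points (the function names are illustrative):

```python
def project(point, line):
    """Project a point onto the line A*x + B*y + C = 0 using the
    closed-form projection formulas of sub-step 2."""
    x0, y0 = point
    A, B, C = line
    d = A * A + B * B
    xt = (B * B * x0 - A * B * y0 - A * C) / d
    yt = (A * A * y0 - A * B * x0 - B * C) / d
    return xt, yt

def projection_vector(points, line):
    """V_h: vector from the projection of the first centroid to the
    projection of the last centroid."""
    x1, y1 = project(points[0], line)
    x2, y2 = project(points[-1], line)
    return x2 - x1, y2 - y1

# Projecting onto the x-axis (0*x + 1*y + 0 = 0) just drops the y coordinate:
print(project((3.0, 5.0), (0.0, 1.0, 0.0)))  # → (3.0, 0.0)
```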
Sub-step 3: Compute the rotation angle of the projection vector relative to the x-axis vector, and determine the direction of hand motion from that angle.
Computing the rotation angle of the projection vector relative to the x-axis vector further comprises using the following formula:
θ = arccos( (Vh · Vx) / (|Vh| · |Vx|) )
where θ is the rotation angle of the projection vector relative to the x-axis vector, Vh is the projection vector formed by connecting the projection of the first center of gravity to the projection of the last one, with coordinates (xh, yh), and Vx is the x-axis vector.
The sign of θ depends on the z component of the cross product Vh × Vx in three-dimensional space, computed as: z = xh·yx - yh·xx, with (xx, yx) denoting the coordinates of Vx.
Here z is the z component of Vh × Vx in three-dimensional space; if z is greater than zero, θ is positive, otherwise θ is negative.
In the image coordinate system, θ is negative when the projection vector is rotated counterclockwise relative to the x-axis, and positive when it is rotated clockwise. Figure 4 is a schematic diagram of the rotation angle between the projection vector and the x-axis vector in the image coordinate system of this embodiment; as shown in Figure 4, the arrows represent projection vectors.
After the rotation angle θ of the projection vector relative to the x-axis vector has been computed, determining the direction of hand motion from the angle further comprises: judging the direction of hand motion from the sign and value range of θ. From these, it can be determined whether the hand motion is horizontal or vertical. In this embodiment the direction is judged according to, but not limited to, the following angle ranges:
-35° < θ < 35°: horizontally to the right;
145° < θ ≤ 180° or -180° ≤ θ < -145°: horizontally to the left;
60° < θ < 150°: vertically upward;
-150° < θ < -60°: vertically downward.
The user may also define other angle ranges for the judgment; no specific limitation is imposed in this embodiment. The vertical ranges are defined wider than the horizontal ones because, comparatively, horizontal motions are easier for people to make.
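Putting sub-step 3 together, the angle computation and range test can be sketched as follows (image coordinates with y pointing down; Vx = (1, 0) and the angle ranges follow the examples in the text; the function name is ours):

```python
import math

def motion_direction(p_first, p_last):
    # Projection vector Vh from the first to the last projected centroid.
    xh, yh = p_last[0] - p_first[0], p_last[1] - p_first[1]
    xx, yx = 1.0, 0.0                     # x-axis vector Vx
    norm = math.hypot(xh, yh)
    if norm == 0:
        return "undetermined"             # no motion between projections
    # Unsigned angle from the dot-product formula (|Vx| = 1 here),
    # clamped to acos's domain against rounding error ...
    cosang = max(-1.0, min(1.0, (xh * xx + yh * yx) / norm))
    theta = math.degrees(math.acos(cosang))
    # ... signed by the z component of Vh x Vx.
    if xh * yx - yh * xx < 0:
        theta = -theta
    if -35 < theta < 35:
        return "right"
    if theta > 145 or theta < -145:
        return "left"
    if 60 < theta < 150:
        return "up"
    if -150 < theta < -60:
        return "down"
    return "undetermined"                 # falls in an unclassified band

print(motion_direction((0, 0), (10, 1)))   # right
print(motion_direction((0, 0), (0, -10)))  # up
```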
An example application of the gesture recognition method of this embodiment follows; note that this is only one application among many. Figure 5 is a schematic flowchart of the application, as shown in Figure 5:
Gesture recognition starts: a frame of image is captured and checked for the presence of a human hand; if a hand is detected, its center of gravity is computed, otherwise the flow returns to capture a new frame.
Whether the distance between the current center of gravity and the previously saved one exceeds 10% of the image height is then checked; if it does, whether the number of consecutive acquisitions exceeds six is checked, otherwise the flow returns to capture a new frame.
If the number of consecutive acquisitions exceeds six, a straight line is fitted to the center-of-gravity coordinates by least squares, the projection vector of the centers of gravity on the fitted line is computed from the coordinates and the fitted line, the rotation angle of the projection vector relative to the x-axis vector is computed, and the direction of hand motion is judged from that angle; otherwise the flow returns to capture a new frame.
Finally, it should be noted that a person of ordinary skill in the art will understand that all or part of the flow of the above method embodiments can be accomplished by a computer program instructing the relevant hardware. The program may be stored in a non-transitory computer-readable storage medium and, when executed, may include the flows of the above method embodiments. The non-transitory computer-readable storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Embodiment 2:
Figure 6 is a schematic structural diagram of an embodiment of the gesture recognition apparatus of this application. As shown in Figure 6, an embodiment of this application further provides a gesture recognition apparatus, comprising:
an acquisition module 1, configured to capture a frame of image and compute the center of gravity of the human hand in the image;
a saving module 2, configured to compute the distance between the current center of gravity and the previously saved one, determine whether the distance exceeds a preset distance threshold, and, if so, acquire and save the center of gravity;
a judgment module 3, configured to determine whether the number of consecutive successful acquisitions of the center of gravity exceeds a preset count threshold, and, if so, determine the direction of hand motion from the coordinates of the consecutively acquired centers of gravity.
Preferably, the acquisition module 1 is further configured to:
compute the center of gravity with the following formulas:
xg = ( Σ x·image(x, y) ) / ( Σ image(x, y) )
yg = ( Σ y·image(x, y) ) / ( Σ image(x, y) )
(both sums running over all pixel coordinates (x, y) of the image)
where image(x, y) is the pixel value at coordinates (x, y), and xg and yg are the x-axis and y-axis coordinates of the center of gravity, respectively.
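The moment formulas above can be sketched in a few lines (an illustrative sketch with our own names; image[y][x] holds the pixel value, e.g. of a binarized hand mask):

```python
def centroid(image):
    # Intensity-weighted centroid from image moments:
    # xg = m10 / m00, yg = m01 / m00.
    m00 = m10 = m01 = 0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            m00 += v          # zeroth moment: total mass
            m10 += x * v      # first moment about x
            m01 += y * v      # first moment about y
    if m00 == 0:
        return None           # no hand pixels detected
    return (m10 / m00, m01 / m00)

# A 3x3 mask with all mass at (x=2, y=1):
print(centroid([[0, 0, 0], [0, 0, 255], [0, 0, 0]]))  # (2.0, 1.0)
```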
Preferably, the judgment module 3 is configured to: successfully obtain the center of gravity in each image of a run of consecutive frames, or in each of a preset set of frames within a preset period.
Preferably, the judgment module 3 is further configured to:
fit a straight line to the coordinates of the consecutively acquired centers of gravity using the least-squares method;
from the coordinates of the consecutively acquired centers of gravity and the fitted line, compute the projection coordinates of each center of gravity on the fitted line, and compute the projection vector formed by connecting the projection of the first center of gravity to the projection of the last one, as well as the x-axis vector;
compute the rotation angle of the projection vector relative to the x-axis vector, and determine the direction of hand motion from that angle.
In addition, preferably, the judgment module 3 is further configured to:
compute the projection coordinates of a center of gravity on the fitted line with the following formulas:
xt = (B²x0 - ABy0 - AC) / (A² + B²)
yt = (A²y0 - ABx0 - BC) / (A² + B²)
where (x0, y0) are the known coordinates of the center of gravity, Ax + By + C = 0 is the equation of the fitted line, and (xt, yt) are the coordinates of the projection of the center of gravity onto the fitted line;
computing the rotation angle of the projection vector relative to the x-axis vector further comprises: computing the rotation angle with the following formula:
θ = arccos( (Vh · Vx) / (|Vh| · |Vx|) )
where θ is the rotation angle of the projection vector relative to the x-axis vector, Vh is the projection vector formed by connecting the projection of the first center of gravity to the projection of the last one, with coordinates (xh, yh), and Vx is the x-axis vector;
the value of z is computed with the following formula:
z = xh·yx - yh·xx, with (xx, yx) denoting the coordinates of Vx
where z is the z component of Vh × Vx in three-dimensional space; if z is greater than zero, θ is positive, otherwise θ is negative;
in the image coordinate system, θ is negative when the projection vector is rotated counterclockwise relative to the x-axis, and positive when it is rotated clockwise;
determining the direction of hand motion from the rotation angle further comprises: judging the direction of hand motion from the sign and value range of the rotation angle θ.
The apparatus shown in Figure 6 can perform the methods of the embodiments shown in Figure 1 and Figure 5; for the implementation principles and technical effects, refer to those embodiments, which are not repeated here.
In summary, with the gesture recognition method and apparatus provided by the embodiments of this application, the direction of hand motion is determined from the coordinates of the hand's center of gravity. This overcomes the prior-art defect of a low recognition rate when the hand moves too fast: the motion direction can be judged quickly, with strong resistance to interference and high recognition accuracy, so the direction of hand motion can be recognized accurately even during fast hand movement, realizing gesture recognition.
An embodiment of this application further provides a non-transitory computer-readable storage medium which may store a program that, when executed, can perform some or all of the steps of any implementation of the gesture recognition method provided by any of the foregoing embodiments.
Figure 7 is a schematic diagram of the hardware structure of an electronic device for performing the gesture recognition method provided by an embodiment of this application. As shown in Figure 7, the device comprises:
one or more processors 710 and a memory 720; one processor 710 is taken as an example in Figure 7.
The device performing the gesture recognition method may further comprise an input apparatus 730 and an output apparatus 740.
The processor 710, the memory 720, the input apparatus 730 and the output apparatus 740 may be connected by a bus or in other ways; connection by bus is taken as an example in Figure 7.
As a non-transitory computer-readable storage medium, the memory 720 can store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the gesture recognition method in the embodiments of this application. By running the non-volatile software programs, instructions and modules stored in the memory 720, the processor 710 executes the various functional applications and data processing of the electronic device, thereby implementing the gesture recognition method of the above method embodiments.
The memory 720 may comprise a program storage area and a data storage area, where the program storage area can store the operating system and the application programs required by at least one function, and the data storage area can store data created through use of the gesture recognition apparatus, and the like. In addition, the memory 720 may comprise high-speed random access memory and may also comprise non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 720 optionally comprises memory set remotely relative to the processor 710; such remote memory may be connected to the gesture recognition apparatus through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input apparatus 730 can receive input numeric or character information and generate key-signal inputs related to the user settings and function control of the gesture recognition apparatus. The output apparatus 740 may comprise a display device such as a display screen.
The one or more modules are stored in the memory 720 and, when executed by the one or more processors 710, perform the gesture recognition method of any of the above method embodiments.
The above product can perform the method provided by the embodiments of this application and possesses the functional modules and beneficial effects corresponding to performing the method. For technical details not described exhaustively in this embodiment, refer to the method provided by the embodiments of this application.
The electronic device of the embodiments of the present invention exists in multiple forms, including but not limited to:
(1) Mobile communication devices: characterized by mobile communication capability, with voice and data communication as the primary goal. This class of terminal includes smartphones (e.g. iPhone), multimedia phones, feature phones, and low-end phones.
(2) Ultra-mobile personal computer devices: belonging to the category of personal computers, with computing and processing functions, and generally also with mobile Internet access. This class of terminal includes PDA, MID and UMPC devices, e.g. iPad.
(3) Portable entertainment devices: capable of displaying and playing multimedia content. This class includes audio and video players (e.g. iPod), handheld game consoles, e-book readers, as well as smart toys and portable in-vehicle navigation devices.
(4) Servers: devices providing computing services, comprising a processor, hard disk, memory, system bus, and so on. Servers are architecturally similar to general-purpose computers, but because highly reliable services must be provided, the requirements on processing capacity, stability, reliability, security, scalability, manageability and the like are higher.
(5) Other electronic apparatuses with data interaction functions.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. A person of ordinary skill in the art can understand and implement this without creative effort.
From the description of the above embodiments, those skilled in the art can clearly understand that the embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware. Based on this understanding, the above technical solution in essence, or the part contributing to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk, or an optical disc, and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform the methods described in the embodiments or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be equivalently replaced, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of this application.

Claims (13)

  1. A gesture recognition method, applied to an electronic device, comprising the following steps:
    capturing a frame of image, and computing the center of gravity of a human hand in the image;
    computing the distance between the current center of gravity and the previously saved center of gravity, determining whether the distance exceeds a preset distance threshold, and, when the result is yes, acquiring and saving the center of gravity;
    determining whether the number of consecutive successful acquisitions of the center of gravity exceeds a preset count threshold, and, when the result is yes, determining the direction of motion of the human hand from the coordinates of the plurality of consecutively and successfully acquired centers of gravity.
  2. The method according to claim 1, wherein computing the center of gravity of the human hand in the image further comprises:
    computing the center of gravity with the following formulas:
    xg = ( Σ x·image(x, y) ) / ( Σ image(x, y) )
    yg = ( Σ y·image(x, y) ) / ( Σ image(x, y) )
    (both sums running over all pixel coordinates (x, y) of the image)
    where image(x, y) is the pixel value at the coordinates (x, y), and xg and yg are respectively the x-axis and y-axis coordinates of the current center of gravity.
  3. The method according to claim 1, wherein consecutively and successfully acquiring the center of gravity comprises: successfully obtaining the center of gravity in each image of a run of consecutive frames, or successfully obtaining the center of gravity in each of a preset set of frames within a preset period.
  4. The method according to claim 1, wherein determining the direction of motion of the human hand from the coordinates of the plurality of consecutively and successfully acquired centers of gravity further comprises:
    fitting a straight line to the coordinates of the plurality of consecutively and successfully acquired centers of gravity using the least-squares method;
    from the coordinates of the plurality of consecutively and successfully acquired centers of gravity and the fitted line, computing the projection coordinates of each center of gravity on the fitted line, and computing the projection vector formed by connecting the projection of the first of the plurality of centers of gravity to the projection of the last one, as well as the x-axis vector;
    computing the rotation angle of the projection vector relative to the x-axis vector, and determining the direction of hand motion from the rotation angle.
  5. The method according to claim 4, wherein computing the projection coordinates of a center of gravity on the fitted line further comprises:
    computing the projection coordinates of the center of gravity on the fitted line with the following formulas:
    xt = (B²x0 - ABy0 - AC) / (A² + B²)
    yt = (A²y0 - ABx0 - BC) / (A² + B²)
    where (x0, y0) are the known coordinates of the center of gravity, Ax + By + C = 0 is the equation of the fitted line, and (xt, yt) are the projection coordinates of the center of gravity on the fitted line;
    computing the rotation angle of the projection vector relative to the x-axis vector further comprises: computing the rotation angle with the following formula:
    θ = arccos( (Vh · Vx) / (|Vh| · |Vx|) )
    where θ is the rotation angle of the projection vector relative to the x-axis vector, Vh is the projection vector formed by connecting the projection of the first of the plurality of centers of gravity to the projection of the last one, the coordinates of the projection vector being (xh, yh), and Vx is the x-axis vector;
    the value of z is computed with the following formula:
    z = xh·yx - yh·xx, with (xx, yx) denoting the coordinates of Vx
    where z is the z component of Vh × Vx in three-dimensional space; if z is greater than zero, θ is positive, otherwise θ is negative;
    in the image coordinate system, θ is negative when the projection vector is rotated counterclockwise relative to the x-axis, and positive when it is rotated clockwise;
    determining the direction of hand motion from the rotation angle further comprises: judging the direction of hand motion from the sign and value range of the rotation angle θ.
  6. A gesture recognition apparatus, comprising:
    an acquisition module, configured to capture a frame of image and compute the center of gravity of a human hand in the image;
    a saving module, configured to compute the distance between the current center of gravity and the previously saved center of gravity, determine whether the distance exceeds a preset distance threshold, and, when the result is yes, acquire and save the center of gravity;
    a judgment module, configured to determine whether the number of consecutive successful acquisitions of the center of gravity exceeds a preset count threshold, and, when the result is yes, determine the direction of motion of the human hand from the coordinates of the plurality of consecutively and successfully acquired centers of gravity.
  7. The apparatus according to claim 6, wherein the acquisition module is further configured to:
    compute the center of gravity with the following formulas:
    xg = ( Σ x·image(x, y) ) / ( Σ image(x, y) )
    yg = ( Σ y·image(x, y) ) / ( Σ image(x, y) )
    (both sums running over all pixel coordinates (x, y) of the image)
    where image(x, y) is the pixel value at the coordinates (x, y), and xg and yg are respectively the x-axis and y-axis coordinates of the current center of gravity.
  8. The apparatus according to claim 6, wherein the judgment module is configured to: successfully obtain the center of gravity in each image of a run of consecutive frames, or successfully obtain the center of gravity in each of a preset set of frames within a preset period.
  9. The apparatus according to claim 6, wherein the judgment module is further configured to:
    fit a straight line to the coordinates of the plurality of consecutively and successfully acquired centers of gravity using the least-squares method;
    from the coordinates of the plurality of consecutively and successfully acquired centers of gravity and the fitted line, compute the projection coordinates of each center of gravity on the fitted line, and compute the projection vector formed by connecting the projection of the first of the plurality of centers of gravity to the projection of the last one, as well as the x-axis vector;
    compute the rotation angle of the projection vector relative to the x-axis vector, and determine the direction of hand motion from the rotation angle.
  10. The apparatus according to claim 9, wherein the judgment module is further configured to:
    compute the projection coordinates of a center of gravity on the fitted line with the following formulas:
    xt = (B²x0 - ABy0 - AC) / (A² + B²)
    yt = (A²y0 - ABx0 - BC) / (A² + B²)
    where (x0, y0) are the known coordinates of the center of gravity, Ax + By + C = 0 is the equation of the fitted line, and (xt, yt) are the projection coordinates of the center of gravity on the fitted line;
    computing the rotation angle of the projection vector relative to the x-axis vector further comprises: computing the rotation angle with the following formula:
    θ = arccos( (Vh · Vx) / (|Vh| · |Vx|) )
    where θ is the rotation angle of the projection vector relative to the x-axis vector, Vh is the projection vector formed by connecting the projection of the first of the plurality of centers of gravity to the projection of the last one, the coordinates of the projection vector being (xh, yh), and Vx is the x-axis vector;
    the value of z is computed with the following formula:
    z = xh·yx - yh·xx, with (xx, yx) denoting the coordinates of Vx
    where z is the z component of Vh × Vx in three-dimensional space; if z is greater than zero, θ is positive, otherwise θ is negative;
    in the image coordinate system, θ is negative when the projection vector is rotated counterclockwise relative to the x-axis, and positive when it is rotated clockwise;
    determining the direction of hand motion from the rotation angle further comprises: judging the direction of hand motion from the sign and value range of the rotation angle θ.
  11. A non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores a computer program for causing a computer to perform the method of any one of claims 1-5.
  12. An electronic device, comprising:
    one or more processors; and
    a memory; wherein
    the memory stores instructions executable by the one or more processors, the instructions being executed by the one or more processors to cause the one or more processors to:
    capture a frame of image, and compute the center of gravity of a human hand in the image;
    compute the distance between the current center of gravity and the previously saved center of gravity, determine whether the distance exceeds a preset distance threshold, and, when the result is yes, acquire and save the center of gravity;
    determine whether the number of consecutive successful acquisitions of the center of gravity exceeds a preset count threshold, and, when the result is yes, determine the direction of motion of the human hand from the coordinates of the plurality of consecutively and successfully acquired centers of gravity.
  13. A computer program product, comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the method of any one of claims 1-5.
PCT/CN2016/096485, filed 2016-08-24, priority date 2015-12-18: Gesture recognition method and apparatus, published as WO2017101496A1.

Priority: CN201510964806.7, filed 2015-12-18 (also published as CN105912974A).