WO2014048170A1 - Air gesture recognition method and device applied to a terminal - Google Patents

Air gesture recognition method and device applied to a terminal

Info

Publication number
WO2014048170A1
Authority
WO
WIPO (PCT)
Prior art keywords
gesture
instant
stroke
point
trajectory
Prior art date
Application number
PCT/CN2013/080717
Other languages
English (en)
French (fr)
Inventor
余方波
Original Assignee
炬才微电子(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 炬才微电子(深圳)有限公司
Publication of WO2014048170A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language

Definitions

  • the present invention relates to the field of electronics, and in particular to an air gesture recognition method and apparatus applied to a terminal. Background of the invention
  • many smart terminals such as mobile phones, smart TVs, tablets, game consoles and smart handles, can realize aerial gesture recognition.
  • these aerial gestures can be produced by waving an arm or by waving the smart terminal itself.
  • Air gestures can be used for a wide range of applications, such as operating a smart TV, playing games, and implementing air input methods.
  • air gestures are not only simple to operate but also vivid and interesting; how to recognize air gestures effectively has become a key technical challenge.
  • prior-art aerial gesture recognition either analyzes after-the-fact trajectory images produced by the gesture, or fixes a few preset gesture models and recognizes them through complicated algorithmic operations.
  • the present invention provides an air gesture recognition method applied to a terminal, which improves the recognition efficiency and accuracy of the air gesture.
  • the present invention also provides a gesture recognition control device applied to a terminal, which likewise improves the recognition efficiency and accuracy of air gestures.
  • An air gesture recognition method applied to a terminal, comprising: acquiring instant-point position information of the air gesture at a set sampling frequency; determining, from the instant-point position information, the instant trajectory segments and the gesture stroke and stroke number corresponding to each; obtaining the code corresponding to each gesture stroke from its stroke number and the displacement of the instant trajectory segment calculated from the instant-point position information; and combining, in order, the codes of the gesture strokes corresponding to all instant trajectory segments to obtain the code of the air gesture;
  • the air gesture is then recognized from its code, and the terminal is controlled.
  • the instant-point position information of the air gesture is acquired at the set sampling frequency as follows: the first instant point is taken as the starting point, and at the set sampling frequency the position of each instant point relative to the starting point is sampled as the instant-point position information.
  • the determining, from the instant-point position information, of the gesture stroke and stroke number corresponding to each instant track segment is as follows:
  • the curve connecting all instant-point position information in order is recorded as the trajectory of the air gesture; the trajectory is divided into multiple instant trajectory segments according to whether the angle attributes of adjacent sampling points are the same,
  • and the instant track segments are compared with the data stored in the database to obtain the gesture stroke and stroke number corresponding to each instant track segment.
  • recording the curve connecting all instant-point position information in order as the trajectory of the air gesture, and dividing the trajectory into multiple instant trajectory segments according to whether the angle attributes of adjacent sampling points are the same, so that each instant track segment corresponds to one gesture stroke, includes:
  • establishing a coordinate system with the starting point of the current instant track segment as the origin; tracking the instant-point coordinates of the motion track; calculating the angle α between the X axis and the line connecting the instant point to the starting point of the current segment, and the angle β between the X axis and the line connecting the instant point to the previous sampling point; deriving the moving direction from α and β and determining the gesture stroke of the current segment; when the α and β attributes change fundamentally, i.e. when the change in α and β exceeds a preset threshold, the current segment ends, the instant point being its end point and also the starting point of the next segment.
  • the code corresponding to the gesture stroke is obtained as follows: the displacement of the current instant track segment is calculated from its start and end points; long and short gestures are distinguished by comparing the displacement with a preset threshold;
  • the pre-stored stroke coding table is then queried with the stroke number and the displacement to obtain the gesture code corresponding to the current track segment.
  • combining, in order, the codes of the gesture strokes corresponding to all instant track segments contained in the air gesture to obtain the code of the air gesture is as follows:
  • after the trajectory is divided into instant track segments, each instant track segment corresponds to one gesture stroke code;
  • the gesture stroke codes are combined in order to obtain the code of the air gesture.
  • An air gesture recognition device applied to a terminal, the recognition device comprising: a sensor unit, configured to acquire instant-point position information of an air gesture at a set sampling frequency;
  • a judging unit, configured to determine, from the instant-point position information, the instant trajectory segments and the corresponding gesture strokes and stroke numbers;
  • a stroke code acquisition unit, configured to obtain the code corresponding to each gesture stroke from its stroke number and the displacement of the instant track segment calculated from the instant-point position information; a gesture code acquisition unit, configured to combine, in order, the gesture stroke codes corresponding to all instant track segments contained in the air gesture to obtain the code of the air gesture;
  • a gesture control unit configured to identify an air gesture according to the encoding of the air gesture, and control the terminal.
  • the sensor unit further includes:
  • the sampling module is configured to use the first instant point as a starting point, and sample the relative position information of the instant point relative to the starting point at the set sampling frequency.
  • the determining unit further includes:
  • the comparison module, for which the curve connecting all instant-point positions in order is the trajectory of the air gesture; it divides the trajectory into multiple instant trajectory segments according to whether the angle attributes of adjacent sampling points are the same, and compares the segments with data in the pre-stored database to obtain the gesture stroke and stroke number corresponding to each instant trajectory segment.
  • the comparison module further includes:
  • the angle judging sub-module is configured to establish a coordinate system with the starting point of the current instant track segment as the origin; track the instant-point coordinates of the motion track; calculate the angle α between the X axis and the line connecting the instant point to the starting point of the current segment, and the angle β between the X axis and the line connecting the instant point to the previous sampling point; and, from α and β, obtain the moving direction and determine the gesture stroke and stroke number corresponding to the current segment;
  • when the α and β attributes change fundamentally, the current track segment ends; the instant point is the end point of the current segment and also the starting point of the next instant track segment;
  • the α and β attributes are determined to have changed fundamentally when the change in α and β exceeds a preset threshold.
  • the stroke code acquisition unit further includes: a displacement judging module, configured to calculate the displacement of the current instant track segment from its starting and end points, and to distinguish long and short gestures by comparing the displacement with a preset displacement threshold;
  • a code query module, configured to query the pre-stored gesture stroke coding table with the stroke number and the displacement to obtain the gesture code corresponding to the current track segment.
  • the gesture code acquisition unit further includes:
  • a segmentation module, configured to divide the trajectory of the air gesture into individual instant trajectory segments, each corresponding to one gesture stroke code;
  • a combination module, configured to combine the gesture stroke codes in order to obtain the code of the air gesture.
  • a terminal characterized in that the terminal comprises the above identification device.
  • by recognizing the air gesture through its motion trajectory, the present invention fully takes into account that an air gesture is characterized by motion; at the same time, the present invention provides a simple and effective method for air gesture recognition, namely the encoding method for air gestures.
  • the air gesture can be recognized efficiently and accurately, and then the terminal is controlled by the air gesture.
  • FIG. 1 is a flowchart of an air gesture recognition method applied to a terminal according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of an air gesture recognition apparatus applied to a terminal according to an embodiment of the present invention
  • FIG. 3 shows a gesture stroke coding table provided by an embodiment of the present invention;
  • FIG. 4 is a schematic diagram 1 of a gesture stroke provided by an embodiment of the present invention.
  • FIG. 5 is a schematic diagram 2 of a gesture stroke provided by an embodiment of the present invention.
  • FIG. 6 is a schematic diagram showing an example of air gesture recognition provided by an embodiment of the present invention. Mode for carrying out the invention
  • the technical solution provided by the present invention addresses only the recognition of air gestures; recognition of contact touch gestures on the terminal is not involved.
  • the technical solution of the embodiments of the present invention applies not only to two-dimensional air gestures but also to three-dimensional air gestures.
  • FIG. 1 is a flow chart of an air gesture recognition method applied to a terminal according to an embodiment of the present invention, where specific steps are as follows:
  • Step 101 Obtain the instantaneous point position information of the air gesture at the set sampling frequency; in this step, the sampling frequency can be set as needed, and the embodiment provided by the present invention is not limited;
  • Step 102 Determine, according to the point-in-time location information, the instant track segment and the corresponding gesture stroke and the stroke number of the gesture stroke;
  • Step 103 Acquire a code corresponding to the gesture stroke according to the number of the gesture stroke and the calculated displacement of the instantaneous track segment;
  • Step 104 Combine the codes of the gesture strokes corresponding to all the instant track segments included in the air gesture in sequential order to obtain the code of the air gesture.
  • step 101 may specifically be implemented as follows: when the trajectory of the air gesture is acquired, instant-point position information is always obtained at a certain sampling frequency; as shown in FIG. 4, the air gesture is sampled at 7 points, A, B, C, D, E, F and G, where point A is the starting point of the air gesture and point G its end point; generally the first instant point, such as point A, is taken as the starting point, and with the starting point as reference the position of each instant point relative to it is used as the instant-point position information.
  • the method for implementing step 102 may specifically be:
  • the trajectory of the air gesture is the curve connecting all sampling points in order; the trajectory can be divided into instant trajectory segments according to whether the angle attributes of adjacent sampling points are the same, and the segments are each compared with the data stored in the database, so that each instant track segment corresponds to one gesture stroke; the data in the database can be added to or deleted as needed, and the database stores the correspondence between instant track segments and gesture strokes, together with the stroke code of each gesture stroke;
  • as shown in FIG. 4, a coordinate system is established with the starting point A of the current instant track segment as the origin; the instant-point position information of the air gesture's track, i.e. the instant-point coordinates such as the coordinates of point C, is tracked; the angle α between the X axis and the line connecting instant point C to starting point A is calculated, as is the angle β between the X axis and the line connecting instant point C to the previous sampling point B; from α and β the moving direction of the gesture is obtained and the instant track segment and its gesture stroke and stroke number are determined; when the α and β attributes change fundamentally, the current segment ends, point C being its end point and also the starting point of the next segment;
  • the α and β attributes change fundamentally when the change in α and β exceeds a threshold; the angle threshold is a range of angles that can be set as needed, for example -22.5° < α <= 22.5°, or -45° < β <= 45°;
  • the present invention does not limit the specific value of the threshold.
  • the specific process of step 103 is:
  • when an instant track segment ends, the displacement of the current segment is calculated taking the current instant point as the end point; long and short gestures are distinguished by comparing the displacement with the preset displacement threshold;
  • the pre-stored gesture stroke coding table is then queried with the gesture stroke number and the displacement to obtain the gesture code of the current track segment.
  • the specific process of step 104 is: after the air gesture track is divided into individual instant track segments, each segment corresponds to one gesture stroke code, and the gesture stroke codes are combined in order to obtain the code of the air gesture.
  • the above aerial gesture coding method relies on a set of predetermined gesture stroke codes; FIG. 3 shows the gesture stroke coding table of one embodiment. It is easy to see that each gesture stroke in the table is the smallest independent unit of an air gesture track, and each has an independent number.
  • the attributes of each gesture stroke, namely starting point, end point, moving direction and angle, are essentially different.
  • the gesture stroke codes are represented by letters, and each gesture stroke corresponds to two codes according to the magnitude of the displacement; that is, the system sets a displacement threshold, expressed as the Δ value, used to distinguish long gestures from short gestures: long gestures are coded in lowercase letters and short gestures in uppercase letters, though of course the opposite convention may equally be used.
  • the gesture strokes defined here can be better understood with reference to FIG. 5.
  • in FIG. 5, the dotted lines divide the whole circle by angle into 8 sectors, corresponding exactly to 8 gesture strokes; the small circle of radius Δ illustrates the difference between long and short gestures.
  • starting from the origin O, when the sampling points all fall in region I (inside the small circle), the gesture stroke number is 1 and the code is A; when they fall in region II (outside the circle, in the same angular range), the stroke number is also 1 but the code is a, and so on. Since the air gesture coding provided by the technical solution of the present invention subtly records the whole course of the air gesture's motion track as a sequence of gesture strokes, it helps improve the recognition accuracy of air gestures.
  • the air gesture code is a character code, which can be stored, and can also support intelligent matching. By matching with the pre-stored code, the meaning of the air gesture can be recognized.
  • An air gesture recognition device according to an embodiment of the present invention is an application of the above air gesture recognition method on a terminal. As shown in FIG. 2, the device includes:
  • the sensor unit 21 is configured to acquire the instantaneous point position information of the air gesture at the set sampling frequency
  • the determining unit 22 is configured to determine, according to the point-in-time location information, an instant track segment and a corresponding gesture stroke and a stroke number of the gesture stroke;
  • the stroke code acquisition unit 23 is configured to obtain the code of the gesture stroke from the stroke number and the displacement of the instant track segment calculated from the instant-point position information;
  • the gesture code acquisition unit 24 is configured to combine, in order, the gesture stroke codes corresponding to all instant track segments contained in the air gesture to obtain the code of the air gesture; the gesture control unit 25 is configured to recognize the air gesture from its code and control the terminal.
  • the sensor unit 21 further includes:
  • the sampling module 211 is configured to take the first instant point as the starting point and, at the set sampling frequency, sample the position of each instant point relative to the starting point as the instant-point position information.
  • the determining unit 22 further includes:
  • the comparison module 221, for which the curve connecting all instant-point positions in order is the trajectory of the air gesture; it divides the trajectory into multiple instant trajectory segments according to whether the angle attributes of adjacent sampling points are the same, and compares the segments with data in the pre-stored database to obtain the gesture stroke and stroke number corresponding to each instant track segment.
  • the comparison module 221 further includes:
  • the angle judging sub-module 2211 is configured to establish a coordinate system with the starting point of the current instant trajectory segment as the origin; track the instant-point coordinates of the motion trajectory; calculate the angle α between the X axis and the line connecting the instant point to the starting point of the current segment, and the angle β between the X axis and the line connecting the instant point to the previous sampling point; and, from α and β, obtain the moving direction and determine the gesture stroke and stroke number of the current segment; when the α and β attributes change fundamentally, the current segment ends, the instant point being its end point and also the starting point of the next segment;
  • the α and β attributes are determined to have changed fundamentally when the change in α and β exceeds a preset threshold.
  • the stroke code obtaining unit 23 further includes:
  • the displacement judging module 231 is configured to calculate the displacement of the current instant trajectory segment from its starting and end points, and to distinguish long and short gestures by comparing the displacement with a preset displacement threshold;
  • the code query module 232 is configured to query the pre-stored gesture stroke coding table with the stroke number and the displacement to obtain the gesture code corresponding to the current track segment.
  • the gesture code acquisition unit 24 further includes:
  • a segmentation module 241, configured to divide the trajectory of the air gesture into individual instant trajectory segments, each corresponding to one gesture stroke code;
  • the combination module 242 is configured to combine the gesture stroke codes in order to obtain the code of the air gesture.
  • as shown in FIG. 6, examples of aerial gestures are listed.
  • the dotted lines in the figure indicate the air gesture trajectories acquired at a certain sampling frequency, and the solid lines are the gesture stroke decomposition auxiliary lines.
  • FIG. 6 encodes the 10 aerial gesture examples. It should be noted that, with the encoding method described here, the following codes are produced dynamically while the air gesture is being made:
  • Gesture c (AhgfedcbA), whose starting point is on the y-axis, assuming that the displacements of the first and the last track segments are less than the Δ value;
  • Gesture f (aBHa), assuming that the displacements of the two middle track segments are less than the Δ value;
  • Gesture g (abha);
  • Gesture h (CAGE), assuming that the displacements of all track segments are less than the Δ value;
  • Gesture j (geca).
  • This embodiment can achieve some unexpected effects through some matching operations of aerial gesture coding.
  • Gesture a and gesture b are graphically the same triangle gesture, but because their starting points differ, gesture a and gesture b are encoded differently. If the terminal nevertheless wants to treat gesture a and gesture b as the same gesture, the terminal's gesture recognition module can use a ring matching operation, as follows:
  • the code of gesture a, after a cyclic shift (heb), is the same as the code of gesture b.
  • Gesture c and gesture d are, from a graphical point of view, both circle gestures, but their starting points differ, so gesture c and gesture d are encoded differently, and the code of gesture c produces two "A"s, at the beginning and at the end. Similarly, if the terminal wants gesture c and gesture d to be treated as the same gesture, the gesture recognition module of the air gesture recognition device can apply operations such as merging the duplicated end point, ring matching, and ignoring case, as follows:
  • case-insensitive matching can be performed between the converted code of gesture c (Ahgfedcb) and the code of gesture d (ahgfedcb).
  • the user's intention may be the same air gesture each time, namely gesture e, but in making the air gesture there is up-and-down jitter of different amplitudes; depending on whether the jitter amplitude exceeds the Δ value, the different codes of gesture f and gesture g result.
  • if the gesture recognition device wants to treat the three gestures as the same gesture, gesture e and gesture f can be merged through operations such as ignoring short-gesture codes in the matching, while gesture g may have to be handled as a separate gesture.
  • Gesture h and gesture i are, graphically, both rectangles, but because the gestures are made differently, gesture h and gesture i are encoded differently. If the terminal wants to treat gesture h and gesture i as the same gesture, the terminal gesture recognition module needs only a case-ignoring matching operation.
  • gesture j and gesture i differ in their starting points;
  • a ring matching operation can likewise treat them as the same gesture.
  • in the examples of the 10 air gestures the codes are all different, so strictly they can be regarded as 10 distinct air gestures; such strict identification, however, may fail to capture the intention of the user making the air gesture.
  • therefore, matching algorithms can be defined on the air gesture codes, so that not only can different gestures be recognized accurately, but recognition also becomes more humanized, recognizing the intention of the user who made the air gesture.
  • the identification method of the present invention is not limited to the air gesture strokes shown in this embodiment.
  • the modules or units described are divided only according to functional logic, but the division is not limited to the above, as long as the corresponding functions can be realized; in addition, the specific names of the functional modules serve only to distinguish them from one another and are not intended to limit the scope of protection of the present invention.
  • the embodiment of the present invention further provides a terminal, where the terminal includes the above-mentioned air gesture recognition device.
  • An embodiment of the present invention further provides a terminal, where the terminal includes: a sensor and a processor, where the sensor is connected to a processor;
  • the sensor is configured to acquire point-in-time location information of an air gesture at a set sampling frequency, and send the point-in-time location information to the processor;
  • the processor is configured to determine, from the instant-point position information, the instant track segments and the gesture stroke and stroke number corresponding to each; to obtain the code corresponding to each gesture stroke from its stroke number and the displacement of the instant track segment calculated from the instant-point position information; and to combine the codes of the gesture strokes in order to obtain the code of the air gesture.
  • the processor is specifically configured to establish a coordinate system from the starting point of the instant track segment, convert the received instant-point position information into coordinates in that system, and calculate from the instant-point coordinates the corresponding gesture stroke number;
  • when the gesture stroke number changes, the start and end point coordinates of the current track segment and the gesture stroke number are sent to the displacement judging module; the α and β attributes change fundamentally when, specifically, the change in α and β exceeds the threshold.
  • the processor is further configured to calculate a displacement of the current track segment according to the received start point and the end point coordinate, and the displacement together with the received gesture stroke number; query the pre-stored gesture stroke coding table according to the received gesture stroke number and displacement, and obtain The gesture code corresponding to the current track segment.
  • all or part of the steps of the method provided by the embodiments of the present invention can be completed by hardware under the control of program instructions.
  • the program can be stored on a readable storage medium such as a random access memory, a magnetic disk, an optical disk, or the like.
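The ring matching and case-ignoring matching operations described in the examples above can be sketched as small string operations on the gesture codes. A minimal sketch; the function names are mine, not the patent's, and the codes used below are illustrative:

```python
def ring_match(code_a, code_b):
    """True if code_b is a cyclic shift of code_a (same gesture, different starting point).

    The standard rotation test: a rotation of code_a is always a substring
    of code_a doubled, provided the lengths match.
    """
    return len(code_a) == len(code_b) and code_b in code_a + code_a


def loose_match(code_a, code_b):
    """Ring match after collapsing the long/short (case) distinction."""
    return ring_match(code_a.lower(), code_b.lower())
```

For example, treating the two triangle gestures as one reduces to `ring_match("heb", "ebh")`, and the rectangle gestures to a case-ignoring comparison.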

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the present invention discloses an air gesture recognition method and device applied to a terminal. The method includes: acquiring instant-point position information of the air gesture at a set sampling frequency; determining, from the instant-point position information, the instant trajectory segments and the gesture stroke and stroke number corresponding to each; obtaining the code corresponding to each gesture stroke from its stroke number and the calculated displacement of the instant trajectory segment; and combining, in order, the codes of the gesture strokes corresponding to all instant trajectory segments contained in the air gesture to obtain the code of the air gesture. The technical solution provided by the embodiments of the present invention has the advantages of fast recognition and high accuracy.

Description

Air gesture recognition method and device applied to a terminal
Technical Field
The present invention relates to the field of electronics, and in particular to an air gesture recognition method and device applied to a terminal.
Background of the Invention
With the wide application of various smart sensors, many smart terminals, such as mobile phones, smart TVs, tablets, game consoles and smart controllers, can implement air gesture recognition. These air gestures may be produced by waving an arm or by waving the smart terminal itself. Air gestures have a wide range of applications, such as operating a smart TV, playing games, and implementing air input methods.
Air gestures are not only simple to operate but also vivid and interesting; how to recognize air gestures effectively has become a key technical challenge.
Prior-art air gesture recognition either analyzes after-the-fact trajectory images produced by the air gesture, or fixes a few preset gesture models and recognizes them through complicated algorithmic operations.
Therefore, the prior-art solutions not only ignore the features produced by the motion of the air gesture, but are also computationally complex, consume many system resources, and adapt poorly to complex air gesture applications; moreover, the prior art cannot recognize air gestures accurately.
Summary of the Invention
In view of this, the present invention provides an air gesture recognition method applied to a terminal, which improves the recognition efficiency and accuracy of air gestures.
The present invention also provides a gesture recognition control device applied to a terminal, which likewise improves the recognition efficiency and accuracy of air gestures.
To achieve the objects of the invention, the technical solution of the present invention is as follows:
An air gesture recognition method applied to a terminal, the method comprising:
acquiring instant-point position information of the air gesture at a set sampling frequency;
determining, from the instant-point position information, the instant trajectory segments and the gesture stroke and stroke number corresponding to each;
obtaining the code corresponding to the gesture stroke from the stroke number and the displacement of the instant trajectory segment calculated from the instant-point position information;
combining, in order, the codes of the gesture strokes corresponding to all instant trajectory segments contained in the air gesture to obtain the code of the air gesture;
recognizing the air gesture from its code, and controlling the terminal. The acquiring of instant-point position information at the set sampling frequency is as follows: the first instant point is taken as the starting point, and at the set sampling frequency each instant point's position relative to the starting point is sampled as its instant-point position information.
The determining of the gesture stroke and stroke number corresponding to each instant trajectory segment from the instant-point position information is as follows:
the curve connecting all instant-point positions in order is recorded as the trajectory of the air gesture; the trajectory is divided into multiple instant trajectory segments according to whether the angle attributes of adjacent sampling points are the same, and the segments are compared with data stored in a database to obtain the gesture stroke and stroke number corresponding to each instant trajectory segment.
Recording the curve connecting all instant-point positions in order as the trajectory of the air gesture, and dividing that trajectory into multiple instant trajectory segments according to whether the angle attributes of adjacent sampling points are the same, so that each segment corresponds to one gesture stroke, includes:
establishing a coordinate system with the starting point of the current instant trajectory segment as the origin; tracking the instant-point coordinates of the motion trajectory; calculating the angle α between the X axis and the line connecting the instant point to the starting point of the current segment, and the angle β between the X axis and the line connecting the instant point to the previous sampling point; deriving the moving direction from α and β and determining the gesture stroke and stroke code of the current segment; when the α and β attributes change fundamentally, the current segment ends, the instant point being its end point and also the starting point of the next segment;
the α and β attributes change fundamentally when the change in α and β exceeds a preset threshold.
Obtaining the code corresponding to the gesture stroke from the stroke number and the displacement of the instant trajectory segment calculated from the instant-point position information is as follows:
the displacement of the current instant trajectory segment is calculated from its starting and end points; long and short gestures are distinguished by comparing the displacement with a preset displacement threshold; the pre-stored gesture stroke coding table is queried with the stroke number and the displacement to obtain the gesture code corresponding to the current trajectory segment.
Combining, in order, the codes of the gesture strokes corresponding to all instant trajectory segments contained in the air gesture to obtain the code of the air gesture is as follows:
after the trajectory of the air gesture is divided into individual instant trajectory segments, each segment corresponds to one gesture stroke code;
the gesture stroke codes are combined in order to obtain the code of the air gesture. An air gesture recognition device applied to a terminal, the recognition device comprising: a sensor unit, configured to acquire instant-point position information of the air gesture at a set sampling frequency;
a judging unit, configured to determine, from the instant-point position information, the instant trajectory segments and the gesture stroke and stroke number corresponding to each;
a stroke code acquisition unit, configured to obtain the code corresponding to each gesture stroke from its stroke number and the displacement of the instant trajectory segment calculated from the instant-point position information; a gesture code acquisition unit, configured to combine, in order, the gesture strokes corresponding to all instant trajectory segments contained in the air gesture to obtain the code of the air gesture;
a gesture control unit, configured to recognize the air gesture from its code and control the terminal.
The sensor unit further includes:
a sampling module, configured to take the first instant point as the starting point and, at the set sampling frequency, sample the position of each instant point relative to the starting point.
The judging unit further includes:
a comparison module, for which the curve connecting all instant-point positions in order is the trajectory of the air gesture; it divides the trajectory into multiple instant trajectory segments according to whether the angle attributes of adjacent sampling points are the same, and compares the segments with data in a pre-stored database to obtain the gesture stroke and stroke number corresponding to each segment.
The comparison module further includes:
an angle judging sub-module, configured to establish a coordinate system with the starting point of the current instant trajectory segment as the origin; track the instant-point coordinates of the motion trajectory; calculate the angle α between the X axis and the line connecting the instant point to the starting point of the current segment, and the angle β between the X axis and the line connecting the instant point to the previous sampling point; and, from α and β, obtain the moving direction and determine the gesture stroke and stroke number of the current segment; when the α and β attributes change fundamentally, the current segment ends, the instant point being its end point and also the starting point of the next segment;
the α and β attributes change fundamentally when, specifically, the change in α and β exceeds a preset threshold.
The stroke code acquisition unit further includes: a displacement judging module, configured to calculate the displacement of the current instant trajectory segment from its starting and end points, and to distinguish long and short gestures by comparing the displacement with a preset displacement threshold;
a code query module, configured to query the pre-stored gesture stroke coding table with the stroke number and the displacement to obtain the gesture code corresponding to the current trajectory segment.
The gesture code acquisition unit further includes:
a segmentation module, configured to divide the trajectory of the air gesture into individual instant trajectory segments, each corresponding to one gesture stroke code;
a combination module, configured to combine the gesture stroke codes in order to obtain the code of the air gesture.
A terminal, characterized in that the terminal comprises the above recognition device.
As can be seen from the above scheme, by recognizing air gestures through their motion trajectories the present invention fully takes into account that an air gesture is characterized by motion; at the same time, the present invention provides a simple and effective method for air gesture recognition, and with this encoding method air gestures can be recognized efficiently and accurately, so that the terminal can then be controlled by air gestures.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of an air gesture recognition method applied to a terminal according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an air gesture recognition device applied to a terminal according to an embodiment of the present invention; FIG. 3 shows a gesture stroke coding table provided by an embodiment of the present invention;
FIG. 4 is a first schematic diagram of gesture strokes provided by an embodiment of the present invention;
FIG. 5 is a second schematic diagram of gesture strokes provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of air gesture recognition examples provided by an embodiment of the present invention.
Mode for Carrying Out the Invention
To make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
The technical solution provided by the present invention addresses only the recognition of air gestures; recognition of contact touch gestures on the terminal is not involved. Moreover, the technical solution of the embodiments of the present invention applies not only to two-dimensional air gestures but also to three-dimensional air gestures.
FIG. 1 is a flowchart of an air gesture recognition method applied to a terminal according to an embodiment of the present invention; its specific steps are:
Step 101: acquire instant-point position information of the air gesture at a set sampling frequency; in this step the sampling frequency can be set as needed, and the embodiments of the present invention do not limit it;
Step 102: determine, from the instant-point position information, the instant trajectory segments and the gesture stroke and stroke number corresponding to each;
Step 103: obtain the code corresponding to the gesture stroke from the stroke number and the calculated displacement of the instant trajectory segment;
Step 104: combine, in order, the codes of the gesture strokes corresponding to all instant trajectory segments contained in the air gesture to obtain the code of the air gesture.
In the present invention, step 101 may specifically be implemented as follows: when the trajectory of the air gesture is acquired, instant-point position information is always obtained at a certain sampling frequency; as shown in FIG. 4, the air gesture is sampled at 7 points, A, B, C, D, E, F and G, where point A is the starting point of the air gesture and point G its end point; generally the first instant point, such as point A, is taken as the starting point, and with the starting point as reference the position of each instant point relative to it is sampled as the instant-point position information.
在本发明中, 实现步骤 102的方法具体可以为:
空中手势的轨迹为按先后次序将所有采样点连接起来的曲线; 一个 空中手势的轨迹又可按各相邻采样点的角度属性是否相同划分为相同 或不同的即时轨迹段, 将所述多个即时轨迹段分別与数据库中存储的数 据比对, 得到每个即时轨迹段对应一个手势笔画, 在数据库中的数据可 以根据需要进行增加或删除, 该数据库存储着即时轨迹段与手势笔画的 对应关系, 并存储着该手势笔画的笔画编码;
如图 4所示, 以当前即时轨迹段起点 A为原点建立坐标系; 跟踪空中手势的轨迹的即时点位置信息, 也就是即时点坐标, 比如点 C坐标, 计算即时点 C与起始点 A的连线和 X坐标轴之间的角度 α, 同时计算即时点 C与上一采样点 B的连线与 X坐标轴之间的角度 β; 根据 α和 β, 得出手势的移动方向, 确定即时轨迹段, 进一步确定当前即时轨迹段所对应的手势笔画及该手势笔画的笔画编号; 当 α和 β属性发生根本变化时, 则当前轨迹段结束, 该即时点 C为当前轨迹段的终点, 同时该即时点 C也为下一个即时轨迹段的起点, 如此类推; 上述 α和 β属性发生根本变化具体为: 当所述 α和 β的变化值超出阈值时, 确定所述 α和 β属性发生根本变化; 角度的阈值为角度的范围, 可以根据需要设定, 比如 -22.5° <α<=22.5° , 也可以设定为 -45° <β<=45° , 本发明并不限定阈值的具体值。 在本发明中, 步骤 103的具体过程为:
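上述 α、 β 的计算及轨迹段结束的判断, 可以用如下 Python 片段示意 (角度阈值取 22.5° 仅为举例, 函数的划分方式为说明用的假设):

```python
import math

def angle_deg(p, q):
    """点 q 相对点 p 的连线与 X 坐标轴之间的角度(度), 范围 (-180, 180]。"""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

def segment_ended(start, prev, cur, threshold=22.5):
    """alpha: 即时点与当前段起点连线和 X 轴的角度;
    beta: 即时点与上一采样点连线和 X 轴的角度。
    当两者之差超出阈值时, 认为角度属性发生根本变化, 当前轨迹段结束。"""
    alpha = angle_deg(start, cur)
    beta = angle_deg(prev, cur)
    diff = abs(alpha - beta)
    diff = min(diff, 360.0 - diff)  # 取环形角度差
    return diff > threshold
```

例如沿 X 轴继续平移时 α 与 β 一致, 段不结束; 而从水平运动突然转为竖直运动时, β 与 α 之差超出阈值, 当前段结束。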
当一个即时轨迹段结束时, 以当前即时点为终点计算当前即时轨迹段的位移; 根据该位移, 参照预先设置的位移阈值, 区分长手势和短手势; 根据手势笔画编号和该位移, 查询预存的手势笔画编码表, 获取当前轨迹段对应的手势编码。
在本发明中, 步骤 104的具体过程为: 将空中手势轨迹分成一个个即时轨迹段后, 每个即时轨迹段对应一 个手势笔画编码, 按先后顺序将这些手势笔画编码组合起来得到该空中 手势的编码。
上述空中手势编码方法依赖一套预先确定的手势笔画编码集合, 如图 3所示, 为一实施例的手势笔画编码表。 很容易发现, 表中的每个手势笔画都是空中手势轨迹的最小独立单元, 都有一个独立编号, 各手势笔画的属性: 起点、 终点、 移动方向和角度等有本质区别。
需要说明的是, 本手势笔画编码集合中, 手势笔画编码以字母表示, 每个手势笔画按位移的大小分别对应两个编码, 即, 系统设置一个位移阈值, 表示为 Δ值, 用来区分长手势和短手势, 长手势以小写字母编码, 短手势以大写字母编码; 当然也可以为长手势以大写字母编码, 短手势以小写字母编码。
结合图 5, 可以更好地理解本案中所定义的手势笔画。 在图 5中, 虚线按角度将整个圆周划分为 8个区间, 正好对应 8个手势笔画; 以 Δ值为半径的小圆则很好地说明了长手势和短手势的区别。 比如以原点 O为起点, 当采样点都落在区域 I (小圆以内)中时, 即: -22.5° <α<=22.5° , 且 -22.5° <β<=22.5° , 则该手势笔画编号为 1, 编码为 A; 如果采样点都落在区域 II (小圆以外的同一角度区间)中时, 角度条件相同, 则该手势笔画编号也为 1, 但编码为 a, 依次类推。 由于本发明的技术方案所提供的空中手势编码很巧妙地通过手势笔画序列将空中手势运动轨迹的过程完整地记录下来了, 因此有利于提高空中手势的识别准确度。
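按图 5 将圆周以 45° 为间隔划分为 8 个区间、 并结合位移阈值 Δ 区分长短手势的编码过程, 可以用如下 Python 片段示意 (编号 1..8 依次对应字母 A..H 的映射以及区间边界的开闭约定均为说明用的假设, 图 3 的实际编码表未在此重现):

```python
def stroke_number(alpha):
    """按 45° 间隔把整个圆周划分为 8 个区间, 返回手势笔画编号 1..8;
    编号 1 大致对应 -22.5° < alpha <= 22.5° (沿 X 轴正方向)。"""
    a = alpha % 360.0
    return int(((a + 22.5) % 360.0) // 45.0) + 1

def stroke_code(alpha, displacement, delta):
    """位移不超过 delta(Δ值)的为短手势, 以大写字母编码;
    超过 delta 的为长手势, 以小写字母编码。"""
    letter = "ABCDEFGH"[stroke_number(alpha) - 1]
    return letter if displacement <= delta else letter.lower()
```

例如沿 X 轴正方向的短位移笔画编码为 A, 同方向的长位移笔画编码为 a。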
空中手势编码为字符编码, 可以存储, 还可以支持智能匹配; 通过与预存的编码匹配, 可以很好地识别空中手势的含义。
本发明实施例提供的一种空中手势的识别装置, 是应用上述空中手势识别方法的装置, 如图 2所示, 包括:
21、 传感器单元;
22、 判断单元;
23、 笔画编码获取单元;
24、 手势编码获取单元;
25、 手势控制单元;
其中
传感器单元 21,用于以设定的采样频率获取空中手势的即时点位置 信息;
判断单元 22,用于根据所述即时点位置信息确定即时轨迹段及其所 对应的手势笔画及该手势笔画的笔画编号;
笔画编码获取单元 23 ,用于根据所述笔画编号和依据所述即时点位 置信息计算出的即时轨迹段的位移, 获取所述手势笔画的编码;
手势编码获取单元 24, 用于将空中手势中包含的所有即时轨迹段所对应的手势笔画编码按先后顺序组合起来得到该空中手势的编码; 手势控制单元 25, 用于根据所述空中手势的编码对空中手势进行识别, 并控制终端。
可选地, 上述传感器单元 21进一步包括:
采样模块 211, 用于将最先开始的即时点作为起点, 以设定的采样频率采样即时点相对起点的相对位置信息, 作为即时点位置信息。 可选地, 上述判断单元 22进一步包括:
比对模块 221, 用于将所有即时点位置信息按先后次序连接起来的曲线作为空中手势的轨迹; 按各相邻的采样点的角度属性是否相同将所述空中手势的轨迹划分为多个即时轨迹段, 将该多个即时轨迹段分别与预先存储的数据库中的数据进行比对, 得到每个即时轨迹段对应的一个手势笔画及该手势笔画的笔画编号。
可选地, 上述比对模块 221进一步包括:
角度判断子模块 2211, 用于以当前即时轨迹段起点为原点建立坐标系; 跟踪运动轨迹的即时点坐标, 计算即时点与当前即时轨迹段起点的连线和 X坐标轴之间的角度 α, 同时计算所述即时点与上一采样点的连线与 X坐标轴之间的角度 β; 根据 α和 β, 得出移动方向, 并进一步确定当前即时轨迹段所对应的手势笔画及其笔画编号; 当 α和 β属性发生根本变化时, 则当前轨迹段结束, 所述即时点为当前轨迹段的终点, 同时所述即时点也为下一个即时轨迹段的起点;
所述 α和 β属性发生根本变化具体为, 当所述 α和 β的变化值超出 预先设置的阈值时, 确定所述 α和 β属性发生根本变化。
可选地, 上述笔画编码获取单元 23进一步包括:
位移判断模块 231, 用于根据当前即时轨迹段的所述起点和终点计算当前即时轨迹段的位移; 根据所述位移, 参照预先设置的位移阈值区分长手势和短手势;
编码查询模块 232 , 用于根据手势笔画的笔画编号和位移查询预存 的手势笔画编码表获取当前轨迹段对应的手势编码。
可选的, 手势编码获取单元 24进一步包括:
分割模块 241 , 用于将空中手势的轨迹分成一个个即时轨迹段后, 每个即时轨迹段对应一个手势笔画编码;
组合模块 242, 用于按先后顺序将所述手势笔画编码组合起来得到空中手势的编码。
如图 6所示, 列出了 10个空中手势例子, 图中虚线表示以一定采样频率获取的空中手势轨迹, 实线表示手势笔画分解辅助线; 通过分析这些空中手势例子可以更深入地理解本发明所述的空中手势编码方法和终端的优点。
首先按本发明所述编码方法,对这 10个空中手势例子进行编码,需 要指出的是, 依照本案所述编码方法, 以下编码是在空中手势产生过程 中动态完成的:
手势 a: (bhe);
手势 b: (heb);
手势 c: (AhgfedcbA), 起始点在 y轴, 假设第一个轨迹段和最后一 个轨迹段的位移都小于 Δ值;
手势 d: (ahgfedcb);
手势 e: (a);
手势 f: (aBHa), 假设中间 2个轨迹段的位移小于 Δ值;
手势 g: (abha);
手势 h: (CAGE), 假设所有轨迹段的位移都小于 Δ值;
手势 i: (cage);
手势 j: (geca)。
该实施例可以通过一些空中手势编码的匹配运算, 达到一些意想不 到的效果。 如:
手势 a和手势 b, 从图形上看都是一个三角形手势, 但由于起点不同, 手势 a和手势 b的编码不同。 但是, 如果终端希望将手势 a和手势 b当作同一手势处理, 则终端手势识别模块可以通过环形匹配运算来实现, 步骤为:
(1) .手势 a的编码 (bhe),忽略起点和终点,进行移位,得到编码 (heb);
(2) . 手势 a移位后的编码 (heb)与手势 b的编码相同。
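上述环形匹配运算, 本质上是判断两个编码是否互为循环移位, 可以用如下 Python 片段示意 (函数名为说明用的假设):

```python
def ring_match(code_a, code_b):
    """环形匹配: 忽略起点差异, 若 code_a 经循环移位后与 code_b 相同,
    则将两者视为同一手势的编码。"""
    if len(code_a) != len(code_b):
        return False
    # code_b 是 code_a 的循环移位, 当且仅当 code_b 是 code_a+code_a 的子串
    return code_b in (code_a + code_a)
```

例如手势 a 的编码 (bhe) 与手势 b 的编码 (heb) 即可匹配为同一手势。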
手势 c和手势 d, 从图形上看, 都是一个圆形手势, 但同样由于起点不同, 手势 c和手势 d的编码不同, 而且手势 c的编码中产生了首尾两个 "A" 。 同样, 如果终端希望将手势 c和手势 d当作同一手势处理, 则空中手势识别装置的手势识别模块可以通过去舍合并、 环形匹配、 忽略大小写等运算来实现, 步骤为:
(1) .将手势 c的编码尾部 "A" 移到首位, 得到编码 (AAhgfedcb);
(2) .根据相邻相同编码去舍原则, 舍去第 2 个 "A" , 得到编码 (Ahgfedcb);
(3) . 再将手势 c转换后的编码 (Ahgfedcb)与手势 d的编码 (ahgfedcb)进行忽略大小写匹配即可。
手势 e、 f、 g, 从图形上看, 做出空中手势的意图可能是一致的, 即手势 e, 但是在做出空中手势的过程中, 出现了不同幅度的上下抖动, 根据抖动幅度是否大于 Δ值, 得到手势 f和手势 g不同的编码。 如果空中手势识别装置希望将这三个手势全部作为同一手势处理, 则是不允许的; 但从空中手势的产生看, 允许小幅度抖动是可以做到的, 即手势 g只能作为另一个手势来处理, 而手势 e和手势 f可以通过忽略小手势编码、 去舍合并等运算当作同一手势处理, 步骤为:
(1) .忽略手势 f中的小手势编码, 得到编码 (aa);
(2) . 根据相邻相同编码去舍原则, 舍去第 2个 "a" , 得到编码 (a);
(3) .以手势 f转换后的编码 (a)与手势 e的编码 (a)进行匹配。
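手势 e 与手势 f 匹配时用到的 "忽略小手势编码" 与 "相邻相同编码去舍" 两步, 可以用如下 Python 片段示意 (这里沿用短手势为大写、 长手势为小写的约定; 函数名为说明用的假设):

```python
def normalize(code, ignore_short=True):
    """先忽略短手势(大写)编码, 再按相邻相同编码去舍原则,
    把连续重复的编码合并为一个。"""
    kept = [c for c in code if not (ignore_short and c.isupper())]
    merged = []
    for c in kept:
        if not merged or merged[-1] != c:  # 与前一编码不同才保留
            merged.append(c)
    return "".join(merged)
```

例如手势 f 的编码 (aBHa) 经忽略小手势编码得到 (aa), 再经去舍合并得到 (a), 即与手势 e 的编码匹配; 仅做去舍合并时, 也可把手势 c 的 (AAhgfedcb) 合并为 (Ahgfedcb)。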
手势 h和手势 i, 从图形上看, 都是矩形, 但由于手势幅度不同, 手势 h和手势 i的编码不同。 如果终端希望将手势 h和手势 i作为同一手势处理, 则终端手势识别模块只需做忽略大小写匹配运算即可。
手势 j和手势 i起点不同, 同理, 也可以通过环形匹配运算来实现 当作同一手势来处理。
从以上例子来看, 10个空中手势例子的编码都不相同, 严格上可以当作 10个空中手势来识别; 但考虑到空中手势产生的特征, 完全严格的识别可能并不能体现做出空中手势的用户的本意。 因此, 可以在空中手势编码匹配运算上定义一些算法, 从而不但能准确识别不同手势, 而且能更人性化地识别出做出空中手势的用户的意图。
通过以上例子分析, 更体现出本发明空中手势识别方法的特点与优势。 但需要说明的是, 本发明所述识别方法不局限于本实施例中所示的空中手势笔画; 上述单元和系统实施例中, 所包括的各个模块或单元只是按照功能逻辑进行划分的, 但并不局限于上述的划分, 只要能够实现相应的功能即可; 另外, 各功能模块的具体名称也只是为了便于相互区分, 并不用于限制本发明的保护范围。
本发明实施例还提供一种终端, 该终端包括上述空中手势的识别装 置。
本发明实施例还提供一种终端, 所述终端包括: 传感器和处理器, 其中, 所述传感器与处理器连接;
所述传感器, 用于以设定的采样频率获取空中手势的即时点位置信 息, 并将即时点位置信息发送给所述处理器;
所述处理器, 用于通过所述即时点位置信息确定即时轨迹段及其所对应的手势笔画及所述手势笔画的笔画编号; 根据该手势笔画的编号和依据所述即时点位置信息计算出的即时轨迹段的位移, 获取该手势笔画对应的编码; 将手势笔画对应的编码按手势笔画的先后顺序组合起来得到该空中手势的编码。
可选地, 上述处理器具体用于以即时轨迹段起点为原点建立坐标系, 将接收到的即时点位置信息换算为该坐标系中的坐标, 并根据即时点的坐标, 算出即时点与坐标系原点的连线和 X坐标轴之间的角度 α, 算出即时点与上一采样点的连线与 X坐标轴之间的角度 β, 并根据这两个角度, 确定当前即时轨迹段所对应的手势笔画编号; 当手势笔画编号发生变化时, 将当前轨迹段的起点与终点坐标、 手势笔画编号发给位移判断模块; 所述 α和 β属性发生根本变化具体为, 当所述 α和 β的变化值超出阈值时, 确定所述 α和 β属性发生根本变化。
所述处理器还用于根据接收的起点和终点坐标, 计算当前轨迹段的位移; 再根据该位移和接收的手势笔画编号, 查询预存的手势笔画编码表, 获取当前轨迹段对应的手势编码。
本领域技术人员可以理解, 本发明实施例提供的空中手势识别方法中, 其全部或部分步骤是可以通过程序指令相关的硬件来完成的, 比如可以通过计算机运行程序来完成。 该程序可以存储在可读取存储介质中, 例如, 随机存储器、 磁盘、 光盘等。
以上所述仅为本发明的较佳实施例而已, 并不用以限制本发明, 凡 在本发明的精神和原则之内, 所做的任何修改、 等同替换、 改进等, 均 应包含在本发明保护的范围之内。

Claims

权利要求书
1、 一种应用于终端的空中手势识别方法, 其特征在于, 所述方法 包括:
以设定的采样频率获取空中手势的即时点位置信息;
根据所述即时点位置信息确定即时轨迹段及其所对应的手势笔画 及笔画编号;
根据所述笔画编号和依据所述即时点位置信息计算的即时轨迹段 的位移, 获取所述手势笔画对应的编码;
将空中手势中包含的所有即时轨迹段所对应的手势笔画的编码按 先后顺序组合起来得到该空中手势的编码;
根据所述空中手势的编码对空中手势进行识别, 控制所述终端。
2、 根据权利要求 1 所述的方法, 其特征在于, 所述以设定的采样 频率获取空中手势的即时点位置信息为:
将最开始的即时点作为起点, 以设定的采样频率采样即时点相对起 点的相对位置信息, 作为即时点位置信息。
3、 根据权利要求 1 所述的方法, 其特征在于, 所述根据即时点位 置信息确定即时轨迹段所对应的手势笔画及笔画编号为:
将所有即时点位置信息按先后次序连接起来的曲线记为空中手势的轨迹; 按各相邻的采样点的角度属性是否相同将所述空中手势的轨迹划分为多个即时轨迹段, 将该多个即时轨迹段与存储在数据库中的数据进行比对得到每个即时轨迹段对应的一个手势笔画及笔画编号。
4、 根据权利要求 3 所述的方法, 其特征在于, 所述将所有即时点位置信息按先后次序连接起来的曲线记为空中手势的轨迹; 按各相邻的采样点的角度属性是否相同将所述空中手势的轨迹划分为多个即时轨迹段, 使得每个即时轨迹段对应一个手势笔画包括: 以当前即时轨迹段起点为原点建立坐标系; 跟踪运动轨迹的即时点坐标, 计算即时点与当前即时轨迹段起点的连线和 X坐标轴之间的角度 α, 计算所述即时点与上一采样点的连线与 X坐标轴之间的角度 β; 根据 α和 β, 得出移动方向, 确定当前即时轨迹段所对应的手势笔画及手势笔画的编码; 当 α和 β属性发生根本变化时, 则当前轨迹段结束, 所述即时点为当前轨迹段的终点, 同时所述即时点也为下一个即时轨迹段的起点;
所述 α和 β属性发生根本变化为, 当所述 α和 β的变化值超出预先 设置的阈值时, 确定所述 α和 β属性发生根本变化。
5、 根据权利要求 1 所述的方法, 其特征在于, 所述根据笔画编号 和根据所述即时点位置信息计算出的即时轨迹段的位移, 获取所述手势 笔画对应的编码为:
根据当前即时轨迹段的所述起点和终点计算当前即时轨迹段的位移; 根据所述位移, 参照预先设置的位移阈值区分长手势和短手势; 根据手势笔画的笔画编号和位移查询预存的手势笔画编码表获取当前轨迹段对应的手势编码。
6、 根据权利要求 1 所述的方法, 其特征在于, 所述将空中手势中 包含的所有即时轨迹段所对应的手势笔画的编码按先后顺序组合起来 得到该空中手势的编码为:
将空中手势的轨迹分成一个个即时轨迹段后, 每个即时轨迹段对应 一个手势笔画编码;
按先后顺序将所述手势笔画编码组合起来得到空中手势的编码。
7、 一种应用于终端的空中手势识别装置, 其特征在于, 所述识别 装置包括:
传感器单元, 用于以设定的采样频率获取空中手势的即时点位置信 息;
判断单元, 用于根据所述即时点位置信息确定即时轨迹段及其所对 应的手势笔画及笔画编号;
笔画编码获取单元, 用于根据所述笔画编号和依据所述即时点位置信息计算出的即时轨迹段的位移, 获取所述手势笔画对应的编码; 手势编码获取单元, 用于将空中手势中包含的所有即时轨迹段所对应的手势笔画编码按先后顺序组合起来得到该空中手势的编码;
手势控制单元, 用于根据所述空中手势的编码对空中手势进行识别, 并控制终端。
8、 根据权利要求 7所述的识别装置, 其特征在于, 所述传感器单元进一步包括:
采样模块, 用于将第一个即时点作为起点, 以设定的采样频率采样 即时点相对起点的相对位置信息。
9、 根据权利要求 7所述的识别装置, 其特征在于, 所述判断单元 进一步包括:
比对模块, 用于将所有即时点位置信息按先后次序连接起来的曲线作为空中手势的轨迹; 按各相邻的采样点的角度属性是否相同将所述空中手势的轨迹划分为多个即时轨迹段, 将该多个即时轨迹段与预先存储的数据库中的数据进行比对得到每个即时轨迹段对应的一个手势笔画及笔画编号。
10、 根据权利要求 9所述的识别装置, 其特征在于, 所述比对模块进一步包括:
角度判断子模块, 用于以当前即时轨迹段起点为原点建立坐标系; 跟踪运动轨迹的即时点坐标, 计算即时点与当前即时轨迹段起点的连线和 X坐标轴之间的角度 α, 同时计算所述即时点与上一采样点的连线与 X坐标轴之间的角度 β; 根据 α和 β, 得出移动方向, 确定当前即时轨迹段所对应的手势笔画及笔画编号; 当 α和 β属性发生根本变化时, 则当前轨迹段结束, 所述即时点为当前轨迹段的终点, 同时所述即时点也为下一个即时轨迹段的起点;
所述 α和 β属性发生根本变化具体为, 当所述 α和 β的变化值超出 预先设置的阈值时, 确定所述 α和 β属性发生根本变化。
11、 根据权利要求 7所述的识别装置, 其特征在于, 所述笔画编码 获取单元进一步包括:
位移判断模块, 用于根据当前即时轨迹段的所述起点和终点计算当前即时轨迹段的位移; 根据所述位移, 参照预先设置的位移阈值区分长手势和短手势;
编码查询模块, 用于根据手势笔画的笔画编号和位移查询预存的手势笔画编码表获取当前轨迹段对应的手势编码。
12、 根据权利要求 7所述的识别装置, 其特征在于, 所述手势编码 获取单元进一步包括:
分割模块, 用于将空中手势的轨迹分成一个个即时轨迹段后, 每个 即时轨迹段对应一个手势笔画编码;
组合模块, 用于按先后顺序将所述手势笔画编码组合起来得到空中 手势的编码。
13、 一种终端, 其特征在于, 所述终端包括权利要求 7-12中任一项所述的识别装置。
PCT/CN2013/080717 2012-09-29 2013-08-02 应用于终端的空中手势识别方法及装置 WO2014048170A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210374890.3 2012-09-29
CN201210374890.3A CN103713730B (zh) 2012-09-29 2012-09-29 应用于智能终端的空中手势识别方法及装置

Publications (1)

Publication Number Publication Date
WO2014048170A1 true WO2014048170A1 (zh) 2014-04-03

Family

ID=50386948

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/080717 WO2014048170A1 (zh) 2012-09-29 2013-08-02 应用于终端的空中手势识别方法及装置

Country Status (2)

Country Link
CN (1) CN103713730B (zh)
WO (1) WO2014048170A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114327615A (zh) * 2022-03-09 2022-04-12 湖南云畅网络科技有限公司 一种基于大数据的接口文档生成方法及系统

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017021461A (ja) * 2015-07-08 2017-01-26 株式会社ソニー・インタラクティブエンタテインメント 操作入力装置および操作入力方法
CN106648039B (zh) * 2015-10-30 2019-07-16 富泰华工业(深圳)有限公司 手势控制系统及方法
CN107728775A (zh) * 2016-08-10 2018-02-23 富士通株式会社 动作识别方法及其装置
CN107633227B (zh) * 2017-09-15 2020-04-28 华中科技大学 一种基于csi的细粒度手势识别方法和系统
CN109189218B (zh) * 2018-08-20 2019-05-10 广州市三川田文化科技股份有限公司 一种手势识别的方法、装置、设备及计算机可读存储介质
CN109528121B (zh) * 2018-11-30 2021-02-26 佛山市顺德区美的洗涤电器制造有限公司 洗碗机和识别操作轨迹的方法、装置、设备及介质
CN112947836A (zh) * 2019-12-11 2021-06-11 北京集创北方科技股份有限公司 基于拐点特征的手势识别方法、系统、存储介质、触屏设备
CN112306242A (zh) * 2020-11-09 2021-02-02 幻境虚拟现实(广州)智能科技研究院有限公司 一种基于书空手势的交互方法和系统

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1173682A (zh) * 1996-06-03 1998-02-18 日本电气株式会社 利用标准笔划识别输入字符的在线字符识别系统
CN101739118A (zh) * 2008-11-06 2010-06-16 大同大学 视讯手写文字输入装置及其方法
CN101777250A (zh) * 2010-01-25 2010-07-14 中国科学技术大学 家用电器的通用遥控装置及方法

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5687254A (en) * 1994-06-06 1997-11-11 Xerox Corporation Searching and Matching unrecognized handwriting
CN101980107A (zh) * 2010-10-20 2011-02-23 陆钰明 一种基于直线基本手势的手势码的实现方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114327615A (zh) * 2022-03-09 2022-04-12 湖南云畅网络科技有限公司 一种基于大数据的接口文档生成方法及系统
CN114327615B (zh) * 2022-03-09 2022-06-28 湖南云畅网络科技有限公司 一种基于大数据的接口文档生成方法及系统

Also Published As

Publication number Publication date
CN103713730B (zh) 2018-03-20
CN103713730A (zh) 2014-04-09

Similar Documents

Publication Publication Date Title
WO2014048170A1 (zh) 应用于终端的空中手势识别方法及装置
CN110991319B (zh) 手部关键点检测方法、手势识别方法及相关装置
US10043308B2 (en) Image processing method and apparatus for three-dimensional reconstruction
CN103353935B (zh) 一种用于智能家居系统的3d动态手势识别方法
CN204463032U (zh) 一种3d场景中输入手势的系统和虚拟现实头戴设备
CN104731307B (zh) 一种体感动作识别方法及人机交互装置
CN102810008B (zh) 一种空中输入系统、方法及空中输入采集设备
US20110299737A1 (en) Vision-based hand movement recognition system and method thereof
US20150026646A1 (en) User interface apparatus based on hand gesture and method providing the same
CN103150018B (zh) 手势识别方法及装置
WO2012154832A2 (en) Object tracking
EP2996067A1 (en) Method and device for generating motion signature on the basis of motion signature information
WO2012163124A1 (zh) 基于空间运动的输入方法及终端
Ruan et al. Dynamic gesture recognition based on improved DTW algorithm
CN113378770A (zh) 手势识别方法、装置、设备、存储介质以及程序产品
She et al. A real-time hand gesture recognition approach based on motion features of feature points
CN103164696A (zh) 手势识别方法及装置
CN107292295B (zh) 手势分割方法及装置
CN110363811B (zh) 用于抓取设备的控制方法、装置、存储介质及电子设备
CN105867595A (zh) 联合语音信息与手势信息的人机交互方式以及实施装置
TWI717772B (zh) 呼叫目標功能的方法、裝置、行動終端及儲存媒體
AU2020294217A1 (en) Gesture recognition method and apparatus, electronic device, and storage medium
WO2024045454A1 (zh) 目标识别方法、存储介质及设备
KR20150075481A (ko) 손 자세 인식 기반 사용자 인터페이스 방법 및 시스템
CN109657440A (zh) 基于区块链的生物特征信息处理方法和装置、终端设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13842872

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 08/06/2015)

122 Ep: pct application non-entry in european phase

Ref document number: 13842872

Country of ref document: EP

Kind code of ref document: A1