WO2012152205A1 - A human-machine interaction device - Google Patents

A human-machine interaction device

Info

Publication number
WO2012152205A1
Authority
WO
WIPO (PCT)
Prior art keywords
friction
human
audio
machine interaction
microphone
Prior art date
Application number
PCT/CN2012/075069
Other languages
English (en)
French (fr)
Inventor
张博宁
钱跃良
王向东
Original Assignee
中国科学院计算技术研究所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院计算技术研究所
Publication of WO2012152205A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures

Definitions

  • The present invention relates to the field of computer application technologies, and in particular to a human-machine interaction device worn on a finger. Background Art
  • The multi-touch interaction mode is now widely used in all kinds of devices.
  • The principle of such a device is that when a person's finger touches the touch-screen element, the capacitance or resistance at that point changes accordingly; the touch position can be determined by sampling the values at different points.
  • The limitation of this type of device is that the operator must use it in front of the device and touch the screen; the device cannot be operated from a distance.
  • Somatosensory human-computer interaction includes interacting through an acceleration sensor inside a remote controller that senses the motion of the user's hand, or through a special camera that captures depth information, from which image processing reconstructs the user's skeleton for interaction.
  • The disadvantage of the somatosensory approach is that the user must make large movements in front of the device for them to be accurately recognized as interactive actions; minute movements are not accepted.
  • An interaction device based on muscle computing determines the movement of the human hand by collecting electromyographic signals from the muscles, and interacts accordingly.
  • This type of equipment is currently immature, cumbersome, and difficult to carry.
  • A data glove collects the movements of a person's hand.
  • The principle of the glove is that a bending sensor is attached to each finger, and the hand's movements are reproduced by sampling the changes in the bending sensors, finally completing the interaction.
  • Such equipment is extremely expensive to manufacture and relatively cumbersome to wear; especially in the hot summer months, it is unsuitable for ordinary users.
  • The object of the present invention is to provide a novel human-computer interaction device that supports a new interaction mode: any solid surface can serve as a touch plane, various gestures can be performed by touching that surface, and those gestures control a computer or smart device to carry out related operations.
  • To achieve this object, the present invention provides a human-machine interaction device comprising:
  • a motion position sensor, for collecting motion trajectories and postures;
  • a microphone, for collecting the friction audio between the device and other surfaces;
  • a processing module, for deriving a "drag" human-computer interaction command from the collected motion trajectory and the friction audio between the device and other surfaces, and for judging tapping actions from the collected motion position sensor data and the computed instantaneous power of the audio collected by the microphone, yielding a "tap" human-computer interaction command, thereby interacting with the corresponding device or computer.
  • The human-machine interaction device realizes human-computer interaction by connecting to other devices wirelessly.
  • The processing module includes:
  • a drag command module, for determining, from the collected motion trajectory, the user's gesture between the starting point and the end point of the movement, and for determining the start and end times of friction from the friction audio between the device and other surfaces, yielding a drag action;
  • a click command module, for computing the instantaneous power of the audio collected by the microphone and one axial linear acceleration from the motion position sensor, obtaining corresponding values that an algorithm processes to decide whether a click action occurred.
  • The drag command module includes:
  • an audio processing sub-module: when the device slides and rubs against another surface, the sound waves are conducted to the device's microphone, and an algorithm decides whether the sound is friction, thereby obtaining the starting point of the friction; and a displacement processing sub-module, for calculating the displacement trajectory of the device by modeling the motion position sensor data.
  • The audio processing sub-module computes the MFCC parameters, zero-crossing rate, and power of each sound segment, and uses a trained SVM classifier to decide whether the audio is friction, thereby determining the starting point.
  • The human-computer interaction device may be designed as a ring worn on a finger: a sensor for collecting finger motion and posture is arranged on top of the ring, a microphone with its sound hole facing the finger skin is mounted underneath the ring, and the ring is battery-powered and connects wirelessly to other devices.
  • The sensor is one or more of an acceleration sensor and a gyroscope.
  • In the ring embodiment, when the finger slides and rubs against another surface, the sound waves are conducted through the finger bones to the device's microphone, and the audio processing sub-module's algorithm decides whether the sound is friction, thereby obtaining the starting point of the friction.
  • The human-machine interaction device may also be designed as a pen, with the microphone and the motion position sensor installed at the pen tip.
  • The audio processing sub-module performs the following operations:
  • each obtained segment is processed: a Hamming window is applied, the MFCCs are computed, the segment power is computed, and the segment zero-crossing rate is computed as features; the feature is a multi-dimensional vector formed by concatenating the MFCCs with the segment power and zero-crossing rate;
  • the computed features are classified by an SVM, dividing the short segments into friction and non-friction; a transition from non-friction to friction marks the starting point of a friction event, and a transition from friction to non-friction marks its end.
  • The beneficial effects of the invention are:
  • the device of the invention is small (close to the volume of a ring), easy to wear, requires no dedicated operating surface, can recognize minute movements, and is comfortable to use.
  • FIG. 1 is a schematic structural diagram of the human-machine interaction device of the present invention;
  • FIG. 2 is a schematic diagram of an embodiment of the human-machine interaction device of the present invention;
  • FIG. 3 is a schematic illustration of the embodiment of FIG. 2 worn on a finger. Best Mode for Carrying Out the Invention
  • The human-machine interaction device of the present invention consists of a motion position sensor for collecting finger motion and posture (such as one or more of a linear acceleration sensor and a gyroscope sensor) and a microphone whose sound hole faces the finger skin.
  • FIG. 1 is a schematic structural diagram of the human-machine interaction device according to the present invention. As shown in FIG. 1, the device 1 includes: a motion position sensor 11, for collecting motion trajectories and postures;
  • a microphone 12, for collecting the friction audio between the device and other surfaces;
  • a processing module 13, configured to derive a "drag" human-computer interaction command from the collected motion trajectory and the friction audio between the device and other surfaces, and to judge tapping actions from the collected motion position sensor data and the computed instantaneous power of the audio collected by the microphone, yielding a "tap" human-computer interaction command, thereby interacting with the corresponding device or computer.
  • The processing module 13 includes:
  • a drag command module 131, configured to determine, from the collected motion trajectory, the user's gesture between the starting point and the end point of the movement, and to determine the start and end times of friction from the friction audio between the device and other surfaces, yielding a drag action;
  • The drag command module 131 includes:
  • an audio processing sub-module 1311, configured so that when the device slides and rubs against another surface, the sound waves conducted to the device's microphone are analyzed by an algorithm to decide whether they are a friction sound, thereby obtaining the starting point of the friction;
  • The audio processing sub-module computes the MFCC parameters, zero-crossing rate, and power of the sound and, after training, uses an SVM classifier to obtain the starting point of the friction. Specifically, audio is captured through the microphone and digitized, cut into short segments, and each segment is windowed and reduced to features for an SVM trained on labeled segments.
  • The feature is a multi-dimensional vector formed by concatenating the MFCCs with the segment power and zero-crossing rate.
  • a displacement processing sub-module 1312, configured to calculate the displacement trajectory of the device by modeling the motion position sensor data.
  • The displacement processing sub-module calculates the displacement trajectory of the device by filtering, then correcting, then integrating.
  • a click command module 132, for computing the instantaneous power of the audio collected by the microphone and one axial linear acceleration from the motion position sensor, obtaining corresponding values that are processed with a threshold method to decide whether a click action occurred.
  • The human-machine interaction device 1 performs human-computer interaction by connecting to other devices wirelessly.
  • The invention collects two different interaction commands: drag and click.
  • By computing on the above data, the device or computer obtains the corresponding interaction command, i.e., drag or click, for further interaction with the corresponding device or computer.
  • A drag in the present invention is defined as the device sliding and rubbing against another surface (a desktop, clothes, skin, another finger, etc.); a drag has a corresponding trajectory and direction. A drag has two aspects: one is its starting point, the other is the movement trajectory of the finger during the drag. When sliding friction occurs against another surface, the sound waves are conducted to the device's microphone. The device then uses the corresponding audio-processing algorithm to decide whether the sound is friction, thereby obtaining the starting point of the friction.
  • The drag trajectory is calculated by modeling the linear acceleration sensor and gyroscope on the device.
  • Detecting the starting point of friction in the invention is done algorithmically: the MFCC parameters, zero-crossing rate, and power of the sound are computed, and an SVM classifier obtained through training is applied.
  • This is not the only method; other existing techniques may also be used.
  • The drag trajectory in the present invention is obtained with existing mature technology, by filtering, then correcting, then integrating. All of the above are merely implementation methods; in practice, other similar methods can achieve these results.
  • A click in the present invention is defined as the device colliding with the surface of another object (extensible to single click, double click, etc.).
  • An algorithm is used to decide whether an event is a click action.
  • The instantaneous audio power and the computed acceleration value of one axis are collected and compared against a threshold (derived statistically) to determine whether the event is a click.
  • FIG. 2 is a schematic diagram of an embodiment of the human-machine interaction device of the present invention;
  • FIG. 3 is a schematic diagram of the embodiment of FIG. 2 worn on a finger. As shown in FIG. 2 and FIG. 3, in one implementable embodiment,
  • the overall device is designed as a ring worn on the finger, with a sensor for collecting finger motion and posture deployed on top of the ring (one or more of accelerometers, gyroscopes, etc.; one embodiment deploys a linear acceleration sensor).
  • The ring is battery-powered and connects wirelessly (Bluetooth, data transmission, etc.) to other devices.
  • The human-machine interaction device 2 includes:
  • a motion position sensor 21, configured to collect the motion and posture of the finger;
  • a microphone 22, whose sound hole faces the finger skin, for collecting the friction audio between the finger and other surfaces as well as the instantaneous power at the microphone;
  • a processing module 23, configured to determine the finger's posture between the starting point and the end point from the collected motion and posture, so as to obtain the user's gesture during the movement; when the finger slides and rubs against another surface, the sound waves are conducted through the finger bones to the device's microphone, and an algorithm decides whether the sound is friction, thereby obtaining the starting point of the friction and yielding a "drag" human-computer interaction command; tapping actions are judged from the collected motion position sensor data and the instantaneous power at the microphone, yielding a "tap" human-computer interaction command;
  • The human-machine interaction device 2 connects to other devices through a wireless link 24 (Bluetooth, data transmission, etc.) and performs human-computer interaction using the gestures of the finger's movement.
  • After the device is worn on a person's finger or arm, any solid surface can serve as a touch plane; various gestures are performed by touching that surface, and those gestures control a computer or smart device to carry out related operations.
  • The human-computer interaction device of the invention is mainly used in the field of human-computer interaction and can take many different shapes. As another implementable embodiment, it can be made in the shape of a pen; the difference is that in the pen form, the microphone and the sensor should be installed at the nib.
  • Compared with conventional somatosensory devices or remote-control devices, the present invention has many advantages:
  • the invention does not require large movements from the user during interaction; the interaction with the device can be completed easily with very small finger frictions and clicks;
  • the present invention can be made into a wearable interactive device, which makes interaction convenient anytime, anywhere;
  • the object of the present invention is to "turn every surface into a touchpad", which is very important for improving the interaction experience.
  • The invention provides a human-computer interaction device that is small (close to the volume of a ring), convenient to wear, requires no dedicated operating surface, can recognize minute movements, and is comfortable to use.

Abstract

The present invention discloses a human-machine interaction device, comprising: a motion position sensor, for collecting motion trajectories and postures; a microphone, for collecting the friction audio against other surfaces; and a processing module, for deriving a "drag" human-computer interaction command from the collected motion trajectory and the friction audio against other surfaces, and for judging tapping actions from the collected motion position sensor data and the computed instantaneous power of the audio collected by the microphone, yielding a "tap" human-computer interaction command, thereby interacting with a corresponding device or computer. The device can take any solid surface as a touch plane; various gestures are performed by touching that surface, and these gestures control a computer or smart device to carry out related operations.

Description

A Human-Machine Interaction Device

Technical Field

The present invention relates to the field of computer application technologies, and in particular to a human-machine interaction device worn on a finger.

Background Art
Modern novel human-computer interaction takes many forms, including multi-point touch screens, sensor-based somatosensory devices, interaction based on image recognition, interaction based on muscle computing, and so on. These interaction modes remedy many previously existing shortcomings, but they also bring some new problems and drawbacks.

The multi-touch interaction mode is now used very widely in all kinds of devices. Its principle is that when a person's finger touches the touch-screen element, the capacitance or resistance at that point changes accordingly, and the touch position can be determined by sampling the values at different points. The limitation of this type of device is that the operator must use it in front of the device and touch the screen; the device cannot be operated from a distance.

Somatosensory human-computer interaction includes interacting through an acceleration sensor inside a remote controller that senses the motion of the user's hand, or through a special camera that captures depth information, after which image processing reconstructs the user's skeleton for interaction. The disadvantage of the somatosensory approach is that the user must make large movements in front of the device for them to be accurately recognized as interactive actions; minute movements are not accepted.

In an interaction device based on muscle computing, the device determines the movement of the human hand by collecting electromyographic signals from the muscles and interacts accordingly. This type of equipment is currently immature, cumbersome, and hard to carry.

A data glove collects the movements of a person's hand. Its principle is that a bending sensor is attached to each finger, and the hand's movements are reproduced by sampling the changes in the bending sensors, finally completing the interaction. Such equipment is extremely expensive to manufacture and relatively cumbersome to wear; especially in the hot summer, it is unsuitable for ordinary users.

Disclosure of the Invention
The object of the present invention is to provide a novel human-computer interaction device that supports an entirely new interaction mode: any solid surface can serve as a touch plane, various gestures can be performed by touching that surface, and those gestures control a computer or smart device to carry out related operations.

To achieve the object of the present invention, a human-machine interaction device is provided, comprising:

a motion position sensor, for collecting motion trajectories and postures;

a microphone, for collecting the friction audio between the device and other surfaces;

a processing module, for deriving a "drag" human-computer interaction command from the collected motion trajectory and the friction audio between the device and other surfaces, and for judging tapping actions from the collected motion position sensor data and the computed instantaneous power of the audio collected by the microphone, yielding a "tap" human-computer interaction command, thereby interacting with the corresponding device or computer.
The human-machine interaction device realizes human-computer interaction by connecting to other devices wirelessly. The processing module includes:

a drag command module, for determining, from the collected motion trajectory, the user's gesture between the starting point and the end point of the movement, and for determining the start and end times of friction from the friction audio between the device and other surfaces, yielding a drag action;

a click command module, for computing the instantaneous power of the audio collected by the microphone and one axial linear acceleration from the motion position sensor, obtaining corresponding values that an algorithm processes to decide whether a click action occurred.
The drag command module includes:

an audio processing sub-module: when the device slides and rubs against another surface, the sound waves are conducted to the device's microphone, and an algorithm decides whether the sound is friction, thereby obtaining the starting point of the friction; and a displacement processing sub-module, for calculating the displacement trajectory of the device by modeling the motion position sensor data.

The audio processing sub-module computes the MFCC parameters, zero-crossing rate, and power of each sound segment and, after training, uses an SVM classifier to decide whether the segment is friction, thereby determining the starting point.
The human-machine interaction device may be designed as a ring worn on a finger: a sensor for collecting finger motion and posture is deployed on top of the ring, a microphone with its sound hole facing the finger skin is mounted underneath the ring, and the ring is battery-powered and connects wirelessly to other devices.

The sensor is one or more of an acceleration sensor and a gyroscope.

In the ring embodiment, when the finger slides and rubs against another surface, the sound waves are conducted through the finger bones to the device's microphone, and the audio processing sub-module's algorithm decides whether the sound is friction, thereby obtaining the starting point of the friction. The human-machine interaction device may also be designed as a pen, with the microphone and the motion position sensor installed at the pen tip.
The audio processing sub-module specifically performs the following operations:

audio is captured through the microphone and turned into a digitized signal, i.e., an audio file; the audio is cut into segments by truncation;

each segment is processed: a Hamming window is applied, the MFCCs are computed, the segment power is computed, and the segment zero-crossing rate is computed as features; the feature is a multi-dimensional vector formed by concatenating the MFCCs with the segment power and zero-crossing rate;

the obtained segments are labeled and an SVM model is trained;

the computed features are classified with the SVM, dividing the short segments into friction and non-friction. Clearly, a transition from non-friction to friction is the starting point of a friction event, and a transition from friction to non-friction is its end.
The beneficial effects of the invention are: the device of the invention is small (close to the volume of a ring), convenient to wear, requires no dedicated operating surface, can recognize minute movements, and is comfortable and unconstrained to use.

Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of the human-machine interaction device of the present invention;

FIG. 2 is a schematic diagram of an embodiment of the human-machine interaction device of the present invention;

FIG. 3 is a schematic diagram of the embodiment of FIG. 2 worn on a finger.

Best Mode for Carrying Out the Invention
To make the object, technical solution, and advantages of the present invention clearer, the human-machine interaction device of the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only explain the present invention and are not intended to limit it.

The human-machine interaction device of the present invention consists of a motion position sensor for collecting finger motion and posture (such as one or more of a linear acceleration sensor and a gyroscope sensor) together with a microphone whose sound hole faces the finger skin. The novel approach of detecting the starting point of friction solves the problem that traditional somatosensory devices have difficulty obtaining the starting data of an action. In addition, collecting bone-conducted audio by pointing the microphone at the finger skin greatly improves the signal-to-noise ratio of the friction sound.
The human-machine interaction device of the present invention is described in detail below. FIG. 1 is a schematic structural diagram of the human-machine interaction device of the present invention. As shown in FIG. 1, the device 1 includes: a motion position sensor 11, for collecting motion trajectories and postures;

a microphone 12, for collecting the friction audio between the device and other surfaces;

a processing module 13, for deriving a "drag" human-computer interaction command from the collected motion trajectory and the friction audio between the device and other surfaces, and for judging tapping actions from the collected motion position sensor data and the computed instantaneous power of the audio collected by the microphone, yielding a "tap" human-computer interaction command, thereby interacting with the corresponding device or computer.
The processing module 13 includes:

a drag command module 131, for determining, from the collected motion trajectory, the user's gesture between the starting point and the end point of the movement, and for determining the start and end times of friction from the friction audio between the device and other surfaces, yielding a drag action.

The drag command module 131 includes:

an audio processing sub-module 1311: when the device slides and rubs against another surface, the sound waves are conducted to the device's microphone, and an algorithm decides whether the sound is friction, thereby obtaining the starting point of the friction.
The audio processing sub-module computes the MFCC parameters, zero-crossing rate, and power of the sound and, after training, uses an SVM classifier to obtain the starting point of the friction. Specifically, it performs the following operations:

1. Audio is captured through the microphone and turned into a digitized signal.

2. The audio is cut into very short segments by truncation.

3. Each segment is processed: a window (Hamming) is applied, the MFCCs are computed, the segment power is computed, and the segment zero-crossing rate is computed as features; the feature is a multi-dimensional vector formed by concatenating the MFCCs with the segment power and zero-crossing rate.

4. The obtained segments are labeled and an SVM model is trained.

5. The computed features are classified with the SVM, dividing the short segments into friction and non-friction. Clearly, a transition from non-friction to friction is the starting point of a friction event, and a transition from friction to non-friction is its end.
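To make the pipeline concrete, the following is a minimal Python sketch of steps 1 to 5, assuming a 16 kHz sample rate, 13 MFCC coefficients, 512-sample segments with 50% overlap, and an RBF-kernel SVM; none of these parameter values are given in the patent, and librosa and scikit-learn are stand-ins for whatever implementation the device actually used.

    import numpy as np
    import librosa
    from sklearn.svm import SVC

    SR = 16000      # assumed sample rate (not specified in the patent)
    N_FFT = 512     # assumed segment length, about 32 ms at 16 kHz
    HOP = 256       # assumed hop, 50% overlap

    def extract_features(audio):
        """One feature row per short segment: 13 MFCCs concatenated with
        the segment power and zero-crossing rate (steps 2 and 3 above)."""
        mfcc = librosa.feature.mfcc(y=audio, sr=SR, n_mfcc=13,
                                    n_fft=N_FFT, hop_length=HOP,
                                    window="hamming")        # Hamming window
        power = librosa.feature.rms(y=audio, frame_length=N_FFT,
                                    hop_length=HOP) ** 2     # segment power
        zcr = librosa.feature.zero_crossing_rate(audio, frame_length=N_FFT,
                                                 hop_length=HOP)
        return np.vstack([mfcc, power, zcr]).T

    def train_classifier(clips, labels):
        """Step 4: fit an SVM on hand-labeled segments (1 = friction,
        0 = non-friction); `labels` holds one label array per clip."""
        X = np.vstack([extract_features(c) for c in clips])
        y = np.concatenate(labels)
        return SVC(kernel="rbf").fit(X, y)

    def friction_boundaries(clf, audio):
        """Step 5: classify segments; a 0->1 transition marks a friction
        start and a 1->0 transition marks its end (segment indices)."""
        pred = clf.predict(extract_features(audio)).astype(int)
        change = np.flatnonzero(np.diff(pred)) + 1
        starts = [i for i in change if pred[i] == 1]
        ends = [i for i in change if pred[i] == 0]
        return starts, ends

A segment index returned by friction_boundaries can be converted to seconds by multiplying by HOP / SR.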
a displacement processing sub-module 1312, for calculating the displacement trajectory of the device by modeling the motion position sensor data.

The displacement processing sub-module calculates the displacement trajectory of the device by filtering, then correcting, then integrating.
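The patent names only the approach (filter, then correct, then integrate) and leaves the details to existing mature techniques. The sketch below is one plausible reading in Python, with an assumed low-pass cutoff, an at-rest bias estimate, and a linear end-point velocity correction; none of these specifics come from the patent.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def displacement_trajectory(accel, fs):
        """Filter -> correct -> integrate. `accel` is an (N, 3) array of
        linear accelerations already rotated into the world frame (the
        rotation from gyroscope data is assumed to have been applied)."""
        dt = 1.0 / fs
        # Filter: zero-phase low-pass to suppress sensor noise.
        b, a = butter(2, 10.0 / (fs / 2.0), btype="low")
        acc = filtfilt(b, a, accel, axis=0)
        # Correct: remove the bias estimated from an initial at-rest interval.
        acc -= acc[: int(0.2 * fs)].mean(axis=0)
        # Integrate twice: acceleration -> velocity -> displacement.
        vel = np.cumsum(acc, axis=0) * dt
        # Correct again: the finger is at rest when the friction sound stops,
        # so force the final velocity back to zero to cancel linear drift.
        vel -= np.linspace(0.0, 1.0, len(vel))[:, None] * vel[-1]
        return np.cumsum(vel, axis=0) * dt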
a click command module 132, for computing the instantaneous power of the audio collected by the microphone and one axial linear acceleration from the motion position sensor, obtaining corresponding values that are processed with a threshold method to decide whether a click action occurred.
The human-machine interaction device 1 performs human-computer interaction by connecting to other devices wirelessly. The invention collects two different interaction commands in total: drag and click.

By computing on the above data, the device or computer obtains the corresponding interaction command, drag or click, for further interaction with the corresponding device or computer. A drag in the present invention is defined as the device sliding and rubbing against the surface of another object (a desktop, clothes, skin, another finger, etc.); a drag has a corresponding trajectory and direction. A drag has two aspects: one is its starting point, the other is the movement trajectory of the finger during the drag. When sliding friction occurs against another surface, the sound waves are conducted to the device's microphone. The device then uses the corresponding audio-processing algorithm to decide whether the sound is friction, thereby obtaining the starting point of the friction. The drag trajectory is calculated by modeling the linear acceleration sensor and gyroscope on the device.
Detecting the starting point of friction in the invention is done algorithmically: the MFCC parameters, zero-crossing rate, and power of the sound are computed, and an SVM classifier obtained through training is applied. Of course, this is not the only method; some other existing techniques may also be used.

The drag trajectory in the present invention is obtained with existing mature technology, by filtering, then correcting, then integrating. All of the above are merely implementation methods; in practice, other similar methods can achieve these results.
Click:

A click in the present invention is defined as the device colliding with the surface of another object (extensible to single click, double click, etc.). By computing the instantaneous power of the audio collected by the microphone and one axial linear acceleration from the linear acceleration sensor, corresponding values are obtained; processing these values with an algorithm decides whether a click action occurred. The instantaneous power and the computed value of one axis are collected, and a threshold (derived statistically) then determines whether the event is a click.
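As a minimal illustration of this threshold test, the Python sketch below checks both signals over one captured window; the threshold constants are placeholders, since the patent says only that the threshold is derived statistically.

    import numpy as np

    # Placeholder thresholds; real values would be fit from recorded taps.
    POWER_THRESHOLD = 1e-3     # instantaneous audio power
    ACCEL_THRESHOLD = 15.0     # m/s^2 spike along the chosen axis

    def is_click(audio_window, axial_accel_window):
        """A window counts as a tap when both the instantaneous audio power
        and the single-axis acceleration exceed their thresholds."""
        inst_power = float(np.max(np.asarray(audio_window, dtype=float) ** 2))
        accel_peak = float(np.max(np.abs(axial_accel_window)))
        return inst_power > POWER_THRESHOLD and accel_peak > ACCEL_THRESHOLD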
FIG. 2 is a schematic diagram of an embodiment of the human-machine interaction device of the present invention, and FIG. 3 is a schematic diagram of the embodiment of FIG. 2 worn on a finger. As shown in FIG. 2 and FIG. 3, in one implementable embodiment the overall device is designed as a ring worn on the finger. A sensor for collecting finger motion and posture is deployed on top of the ring (one or more of accelerometers, gyroscopes, etc. may be deployed; one embodiment deploys a linear acceleration sensor). A microphone with its sound hole facing the finger skin is mounted underneath the ring. The ring is battery-powered and connects wirelessly (Bluetooth, data transmission, etc.) to other devices. The human-machine interaction device 2 includes:
a motion position sensor 21, for collecting the motion and posture of the finger;

a microphone 22, whose sound hole faces the finger skin, for collecting the friction audio between the finger and other surfaces as well as the instantaneous power at the microphone;

a processing module 23, for determining the finger's posture between the starting point and the end point from the collected motion and posture, so as to obtain the user's gesture during the movement; when the finger slides and rubs against another surface, the sound waves are conducted through the finger bones to the device's microphone, and an algorithm decides whether the sound is friction, thereby obtaining the starting point of the friction and yielding a "drag" human-computer interaction command; tapping actions are judged from the collected motion position sensor data and the instantaneous power at the microphone, yielding a "tap" human-computer interaction command;

The human-machine interaction device 2 connects to other devices through a wireless link 24 (Bluetooth, data transmission, etc.) and performs human-computer interaction using the gestures of the finger's movement.
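Tying the pieces together, the sketch below shows how a host-side loop might turn one synchronized audio/IMU capture window into the two commands defined above. It builds on the previous sketches (friction_boundaries, the trained clf, displacement_trajectory, and is_click are assumed to be in scope); the windowing, the index-to-time mapping, and the command tuple format are illustrative assumptions, not anything specified in the patent.

    import numpy as np

    def process_window(clf, audio, accel, axial_accel,
                       fs_audio=16000, fs_imu=200, hop=256):
        """Turn one synchronized capture window into interaction commands:
        a 'drag' for each detected friction interval, a 'tap' when the
        click test fires. Returns a list of (command, payload) tuples."""
        commands = []
        starts, ends = friction_boundaries(clf, audio)
        for s, e in zip(starts, ends):
            # Map audio segment indices to IMU samples of the same interval.
            t0, t1 = s * hop / fs_audio, e * hop / fs_audio
            imu = accel[int(t0 * fs_imu):int(t1 * fs_imu)]
            if len(imu) > 10:               # enough samples to integrate
                commands.append(("drag",
                                 displacement_trajectory(imu, fs_imu)))
        if is_click(audio, axial_accel):
            commands.append(("tap", None))
        return commands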
After the human-machine interaction device 2 is worn on a person's finger or arm, any solid surface can serve as a touch plane; various gestures are performed by touching that surface, and those gestures control a computer or smart device to carry out related operations. The human-computer interaction device of the present invention is mainly used in the field of human-computer interaction and can take many different shapes. As another implementable embodiment, it can be made in the shape of a pen; the difference is that in the pen form, the microphone and the sensor should be installed at the nib.
Compared with conventional somatosensory devices or remote-control devices, the present invention has many advantages:

first, the invention does not require large movements from the user during interaction; the interaction with the device can be completed easily with very small finger frictions and clicks;

in addition, the present invention can be made into a wearable interactive device, which makes interaction convenient anytime, anywhere;

finally, the object of the present invention is to "turn every surface into a touchpad", which is very important for improving the interaction experience.
Other aspects and features of the present invention will become apparent to those skilled in the art from the description of its specific embodiments with reference to the accompanying drawings.

The specific embodiments of the present invention have been described and illustrated above. These embodiments should be regarded as merely exemplary and are not intended to limit the present invention, which should be interpreted according to the appended claims.

Industrial Applicability

By providing a human-machine interaction device, the present invention makes the device small (close to the volume of a ring), convenient to wear, requiring no dedicated operating surface, able to recognize minute movements, and comfortable to use.

Claims

Claims

1. A human-machine interaction device, characterized in that the device comprises:

a motion position sensor, for collecting motion trajectories and postures;

a microphone, for collecting the friction audio between the device and other surfaces;

a processing module, for deriving a "drag" human-computer interaction command from the collected motion trajectory and the friction audio against other surfaces, and for judging tapping actions from the collected motion position sensor data and the computed instantaneous power of the audio collected by the microphone, yielding a "tap" human-computer interaction command, thereby interacting with the corresponding device or computer.
2. The human-machine interaction device according to claim 1, characterized in that the device realizes human-computer interaction by connecting to other devices wirelessly.

3. The human-machine interaction device according to claim 1, characterized in that the processing module comprises:

a drag command module, for determining, from the collected motion trajectory, the user's gesture between the starting point and the end point of the movement, and for determining the start and end times of friction from the friction audio against other surfaces, yielding a drag action;

a click command module, for computing the instantaneous power of the audio collected by the microphone and one axial linear acceleration from the motion position sensor, obtaining corresponding values that an algorithm processes to decide whether a click action occurred.
4. The human-machine interaction device according to claim 3, characterized in that the drag command module comprises:

an audio processing sub-module: when sliding friction occurs against another surface, the sound waves are conducted to the device's microphone, and an algorithm decides whether the sound is friction, thereby obtaining the starting point of the friction;

a displacement processing sub-module, for calculating the displacement trajectory of the device by modeling the motion position sensor data.

5. The human-machine interaction device according to claim 4, characterized in that the audio processing sub-module computes the MFCC parameters, zero-crossing rate, and power of each sound segment as features, and uses a trained SVM classifier to obtain the starting point of the friction.
6. The human-machine interaction device according to claim 4, characterized in that the device is designed as a ring worn on a finger, with a sensor for collecting finger motion and posture deployed on top of the ring and a microphone with its sound hole facing the finger skin mounted underneath the ring, connecting wirelessly to other devices.

7. The human-machine interaction device according to claim 6, characterized in that the sensor is one or more of an acceleration sensor and a gyroscope.

8. The human-machine interaction device according to claim 6, characterized in that in the audio processing sub-module, when the finger slides and rubs against another surface, the sound waves are conducted through the finger bones to the device's microphone, and an algorithm decides whether the sound is friction, thereby obtaining the starting point of the friction.

9. The human-machine interaction device according to claim 1, characterized in that the device is designed as a pen, with the microphone and the motion position sensor installed at the pen tip.
10. The human-machine interaction device according to claim 5, characterized in that the audio processing sub-module specifically performs the following operations:

audio is captured through the microphone and turned into a digitized signal, i.e., an audio file; the audio is cut into segments by truncation;

each segment is processed: a Hamming window is applied, the segment's MFCCs are computed, the segment power is computed, and the segment zero-crossing rate is computed, forming a feature vector; the feature is a multi-dimensional vector formed by concatenating the MFCCs with the segment power and zero-crossing rate;

the features of the obtained segments are labeled and an SVM model is trained;

the computed segment features are classified with the SVM into friction and non-friction. Clearly, a transition from non-friction to friction is the starting point of a friction event, and a transition from friction to non-friction is its end.
PCT/CN2012/075069 2011-05-06 2012-05-04 A human-machine interaction device WO2012152205A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2011101176642A CN102184011B (zh) 2011-05-06 2011-05-06 A human-machine interaction device
CN201110117664.2 2011-05-06

Publications (1)

Publication Number Publication Date
WO2012152205A1 true WO2012152205A1 (zh) 2012-11-15

Family

ID=44570195

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/075069 WO2012152205A1 (zh) 2011-05-06 2012-05-04 A human-machine interaction device

Country Status (2)

Country Link
CN (1) CN102184011B (zh)
WO (1) WO2012152205A1 (zh)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184011B (zh) 2011-05-06 2013-03-27 中国科学院计算技术研究所 A human-machine interaction device
CN103034323A (zh) 2011-09-30 2013-04-10 德信互动科技(北京)有限公司 Human-machine interaction system and method
CN103186226A (zh) 2011-12-28 2013-07-03 北京德信互动网络技术有限公司 Human-machine interaction system and method
CN102662474B (zh) 2012-04-17 2015-12-02 华为终端有限公司 Method, apparatus and terminal for controlling a terminal
CN102866789B (zh) 2012-09-18 2015-12-09 中国科学院计算技术研究所 A human-machine interaction ring
CN103105945B (zh) 2012-12-17 2016-03-30 中国科学院计算技术研究所 A human-machine interaction ring supporting multi-touch gestures
CN104375625B (zh) 2013-08-12 2018-04-27 联想(北京)有限公司 A recognition method and electronic device
CN103729056A (zh) 2013-12-17 2014-04-16 张燕 System and method for controlling an electronic device by tapping
TW201539249A (zh) 2014-04-02 2015-10-16 zhi-hong Tang 3C smart ring
CN105437236A (zh) 2014-08-30 2016-03-30 赵德朝 A humanoid engineering robot
CN104503575B (zh) 2014-12-18 2017-06-23 大连理工大学 Design method for a low-power gesture recognition circuit device
CN106155274B (zh) 2015-03-25 2019-11-26 联想(北京)有限公司 Electronic device and information processing method
CN104834907A (zh) 2015-05-06 2015-08-12 江苏惠通集团有限责任公司 Gesture recognition method, apparatus and device, and gesture-based operation method
CN106201283A (zh) 2015-05-07 2016-12-07 阿里巴巴集团控股有限公司 Human-computer interaction method and apparatus for a smart terminal
CN105718557B (zh) 2016-01-20 2019-05-24 Oppo广东移动通信有限公司 An information search method and apparatus
CN106137244A (zh) 2016-08-05 2016-11-23 北京塞宾科技有限公司 A low-noise electronic stethoscope
CN107885311A (zh) 2016-09-29 2018-04-06 深圳纬目信息技术有限公司 Confirmation method, system and device for visual interaction
CN109783049A (zh) 2019-02-15 2019-05-21 广州视源电子科技股份有限公司 Operation control method, apparatus, device and storage medium
CN111580660B (zh) 2020-05-09 2022-03-18 清华大学 Operation triggering method, apparatus, device and readable storage medium
CN112129399A (zh) 2020-09-17 2020-12-25 江苏精微特电子股份有限公司 A tap sensor algorithm and tap sensor

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006068357A1 (en) * 2004-12-20 2006-06-29 Electronics And Telecommunications Research Institute System for wearable general-purpose 3-dimensional input
US20090157206A1 (en) * 2007-12-13 2009-06-18 Georgia Tech Research Corporation Detecting User Gestures with a Personal Mobile Communication Device
JP2010049583A (ja) * 2008-08-24 2010-03-04 Teruhiko Yagami 腕時計型電子メモ装置
CN102184011A (zh) 2011-05-06 2011-09-14 中国科学院计算技术研究所 A human-machine interaction device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1285994C (zh) * 2004-06-11 2006-11-22 上海大学 Human-machine interaction method and device
CN100460167C (zh) * 2007-03-22 2009-02-11 上海交通大学 Humanoid robot head system
CN101947182B (zh) * 2010-09-26 2012-06-13 东南大学 Intelligent blind-guidance human-machine interaction device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006068357A1 (en) * 2004-12-20 2006-06-29 Electronics And Telecommunications Research Institute System for wearable general-purpose 3-dimensional input
US20090157206A1 (en) * 2007-12-13 2009-06-18 Georgia Tech Research Corporation Detecting User Gestures with a Personal Mobile Communication Device
JP2010049583A (ja) * 2008-08-24 2010-03-04 Teruhiko Yagami 腕時計型電子メモ装置
CN102184011A (zh) 2011-05-06 2011-09-14 中国科学院计算技术研究所 A human-machine interaction device

Also Published As

Publication number Publication date
CN102184011A (zh) 2011-09-14
CN102184011B (zh) 2013-03-27

Similar Documents

Publication Publication Date Title
WO2012152205A1 (zh) A human-machine interaction device
US10649549B2 (en) Device, method, and system to recognize motion using gripped object
Lu et al. A hand gesture recognition framework and wearable gesture-based interaction prototype for mobile devices
CN103105945B (zh) A human-machine interaction ring supporting multi-touch gestures
WO2016058387A1 (zh) Touch interaction processing method, device and system
KR101413539B1 (ko) Control signal input device and control signal input method using posture recognition
Song et al. GaFinC: Gaze and Finger Control interface for 3D model manipulation in CAD application
WO2019024577A1 (zh) Natural human-machine interaction system based on multi-sensory data fusion
CN106695794A (zh) Mobile robotic arm system based on surface electromyographic signals and control method therefor
CN103513770A (zh) Human-machine interface device and interaction method based on a three-axis gyroscope
WO2016026365A1 (zh) Human-computer interaction method and system for contactless mouse control
KR102297473B1 (ko) Apparatus and method for providing touch input using the body
CN111290572A (zh) Driving device and driving method based on EOG signals and head posture
CN102866789B (zh) A human-machine interaction ring
CN106406544A (zh) Semantic natural human-machine interaction control method and system
CN104571837A (zh) Method and system for realizing human-computer interaction
CN109189219A (zh) Implementation method for a contactless virtual mouse based on gesture recognition
TWI721317B (zh) Control instruction input method and input device
US8749488B2 (en) Apparatus and method for providing contactless graphic user interface
US20160085311A1 (en) Control unit and method of interacting with a graphical user interface
CN102135794A (zh) 3D wireless mouse based on changing palm-finger interactions
CN103995585A (zh) Gesture control device and method for a large display wall
CN206162390U (zh) Gesture recognition device based on inertial sensors and haptic feedback
CN103425433A (zh) Intelligent human-machine interface system and control method therefor
CN203038229U (zh) A mouse dedicated to disabled persons

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12781563

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12781563

Country of ref document: EP

Kind code of ref document: A1