CN110727353A - Method and device for controlling control components defined based on two-dimensional intent - Google Patents

Method and device for controlling control components defined based on two-dimensional intent

Info

Publication number
CN110727353A
CN110727353A
Authority
CN
China
Prior art keywords
control
action
posture
human body
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911186185.9A
Other languages
Chinese (zh)
Inventor
李远清
肖景
翟军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Brain Control (guangdong) Intelligent Technology Co Ltd
Original Assignee
South China Brain Control (guangdong) Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Brain Control (guangdong) Intelligent Technology Co Ltd filed Critical South China Brain Control (guangdong) Intelligent Technology Co Ltd
Publication of CN110727353A publication Critical patent/CN110727353A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/011Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Dermatology (AREA)
  • Neurosurgery (AREA)
  • Neurology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Position Input By Displaying (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A control component control method and device defined based on two-dimensional intent, characterized in that it comprises a human body posture action for expressing a direction-control intent and a head expression action for expressing a confirmation intent; the direction-control intent controls the movement direction of the control component, and the confirmation intent confirms the current operation of the control component and its operation type. The beneficial technical effect is that the two-dimensional combination of human body posture actions with head expression actions or voice commands establishes not only a direction-control instruction mechanism adapted to body movement, but also an adapted yet independent "confirmation" instruction mechanism. This overcomes the lack of restraint of voice technology used alone and the limitations of EOG (electro-oculography) technology used alone; their effective cooperation solves the multi-dimensional control problem of human-computer interaction devices and the coordination problem between human body actions and control intent.

Description

Method and device for controlling control components defined based on two-dimensional intent

Technical Field

The present invention belongs to the technical field of human-computer interaction, and in particular relates to a control method and a control device for a control component defined based on two-dimensional intent, for example a control method and device based on EOG signals and head posture. EOG (electro-oculography) is the electrical recording of eye movements; it is a technique for detecting eye potentials.

Background Art

In the prior art, interaction between people and operating devices (such as head-mounted display control components, computers, mobile phones and other everyday devices) is mainly manual. For example, when interacting with a head-mounted display device, physical buttons can be used for operations such as raising the volume, playing or pausing; when interacting with a computer, the keyboard or mouse must be operated by hand to play or open content. But for users with disabilities, or users whose hands are temporarily occupied (for example while washing, cooking or eating), realizing human-computer interaction with traditional input devices (such as a mouse, keyboard or controller) is very difficult.

In the field of human-computer interaction, the eyes are another important information channel, and the line of sight reflects the direction of a person's attention; applying gaze to human-computer interaction is therefore natural, direct and interactive, and has attracted much attention. Chinese patent publication CN104866100B discloses an eye-control device, an eye-control method and an eye-control system. The eye-control device includes a gaze-point acquisition unit, a human-eye motion detection unit and a control-signal generation unit. The gaze-point acquisition unit obtains the position of the gaze point of the human eye on the device to be operated; the human-eye motion detection unit detects whether the eye makes a preset action and, when it does, instructs the gaze-point acquisition unit to send the current gaze-point position to the control-signal generation unit; the control-signal generation unit generates the control signal corresponding to the current position according to a pre-stored position-control correspondence table for the device to be operated, and sends the control signal to that device so that it performs the corresponding operation. This technical solution can effectively use eye movements to control an operated device, but the control method is too simple and single-dimensional, and is not suitable for human-computer interaction devices with multi-dimensional control requirements.

Summary of the Invention

To solve human-computer interaction problems easily in adverse living environments, people first thought of voice control: using commands given in human language itself, such as move up, down, left or right, or walk, to make the controlled device execute the meaning contained in the command. However, the latency of language and the limited granularity of its control, i.e. the lack of fine restraint, mean that voice technology alone cannot be used completely and reliably in human-computer interaction. People also thought of the eye-control technique mentioned in the prior art above, also called EOG (electro-oculography) technology; clearly this method is likewise a single control mode. When a human-computer interaction device requires multi-dimensional control, none of these techniques performs well.

To solve the multi-dimensional control problem of human-computer interaction devices while ensuring convenience and reliability of control, we have designed several control methods:

The first is a control component control method defined based on two-dimensional intent, characterized in that it comprises a human body posture action for expressing a direction-control intent and a head expression action for expressing a confirmation intent, wherein the direction-control intent controls the movement direction of the control component, and the confirmation intent confirms the current operation of the control component and its operation type;

the human body posture action is performed; posture feature values characterizing the action representation of the human body posture action itself are picked up; a posture fusion calculation module is provided; according to the picked-up posture feature values, the posture fusion calculation module identifies the movement-direction control instruction corresponding to the human body posture action, and the movement direction of the control component is controlled by that instruction;

the head expression action is performed; head bioelectric signal feature values associated with the head expression action are picked up; a bioelectric signal calculation module is provided; according to the picked-up bioelectric signal feature values, the bioelectric signal calculation module identifies the confirmation instruction corresponding to the head expression action, and the current operation of the control component and its operation type are confirmed by that instruction.
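The two-channel scheme of the first method can be sketched in Python. This is a minimal illustration under our own assumptions (function names, angle thresholds and amplitude thresholds are hypothetical, not taken from the patent): one channel maps picked-up posture features to a movement-direction instruction, while an independent channel maps a head bioelectric feature to a confirmation instruction.

```python
# Illustrative sketch of the two-dimensional intent scheme; all names and
# thresholds below are assumptions, not values from the patent.

def direction_from_posture(pitch_deg, roll_deg, threshold=15.0):
    """Map picked-up attitude features to a movement-direction instruction."""
    if pitch_deg > threshold:
        return "up"
    if pitch_deg < -threshold:
        return "down"
    if roll_deg > threshold:
        return "right"
    if roll_deg < -threshold:
        return "left"
    return None  # no directional intent expressed

def confirm_from_bioelectric(peak_uv, blink_threshold=200.0):
    """Treat a sufficiently large bioelectric peak (e.g. a deliberate blink)
    as the independent "confirm" instruction."""
    return "confirm" if peak_uv >= blink_threshold else None
```

Because the two channels are independent, a posture action cannot be mistaken for a confirmation, which is the point of the two-dimensional design.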

The present invention also provides a two-dimensional-intent-based control device implementing the first control method, characterized in that it comprises:

a posture feature value pickup device for picking up posture feature values characterizing the action representation of a human body posture action itself and outputting a corresponding posture feature signal, the human body posture action being used to express the intent to control the movement direction of the control component;

a head bioelectric signal feature value pickup device for picking up head bioelectric signal feature values associated with a head expression action and outputting a corresponding bioelectric feature signal, the head expression action being used to express the intent to confirm the current operation of the control component and its operation type;

a control module comprising a posture fusion calculation module and a bioelectric signal calculation module, the posture fusion calculation module receiving the posture feature signal and identifying the movement-direction control instruction corresponding to the human body posture action, the control module further controlling the movement direction of the control component according to that instruction; the bioelectric signal calculation module receiving the bioelectric feature signal and identifying the confirmation instruction corresponding to the head expression action, the control module confirming the current operation of the control component and its operation type according to that instruction.

The second is a control component control method defined based on two-dimensional intent, characterized in that it comprises a human body posture action for expressing a direction-control intent and a voice command for expressing a confirmation intent, wherein the direction-control intent controls the movement direction of the control component, and the confirmation intent confirms the current operation of the control component and its operation type;

the human body posture action is performed; posture feature values characterizing the action representation of the human body posture action itself are picked up; a posture fusion calculation module is provided; according to the picked-up posture feature values, the posture fusion calculation module identifies the movement-direction control instruction corresponding to the human body posture action, and the movement direction of the control component is controlled by that instruction;

a voice command is issued; voice signal feature values characterizing the voice command are picked up; a voice signal calculation module is provided; according to the picked-up voice signal feature values, the voice signal calculation module identifies the confirmation instruction corresponding to the voice command, and the current operation of the control component and its operation type are confirmed by that instruction.
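The second method's confirmation channel can be sketched similarly. The keyword vocabulary and function name below are our own assumptions (the patent does not specify a vocabulary), and the speech recognizer producing the text is assumed to be an off-the-shelf component:

```python
# Hypothetical confirmation vocabulary; the patent names no specific keywords.
CONFIRM_VOCABULARY = {"confirm", "ok", "stop"}

def confirm_from_voice(recognized_text):
    """Return the confirmation instruction when the recognized utterance
    matches a known confirmation keyword, otherwise None."""
    return "confirm" if recognized_text.strip().lower() in CONFIRM_VOCABULARY else None
```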

The present invention also provides a two-dimensional-intent-based control device implementing the second control method, characterized in that it comprises:

a posture feature value pickup device for picking up posture feature values characterizing the action representation of a human body posture action itself and outputting a corresponding posture feature signal, the human body posture action being used to express the intent to control the movement direction of the control component;

a voice command feature value pickup device for picking up voice signal feature values characterizing the voice command and outputting a corresponding voice feature signal, the voice command being used to express the intent to confirm the current operation of the control component and its operation type;

a control module comprising a posture fusion calculation module and a voice signal calculation module, the posture fusion calculation module receiving the posture feature signal and identifying the movement-direction control instruction corresponding to the human body posture action, the control module further controlling the movement direction of the control component according to that instruction; the voice signal calculation module receiving the voice feature signal and identifying the confirmation instruction corresponding to the voice command, the control module confirming the current operation of the control component and its operation type according to that instruction.

In the above solutions, the control component refers to a unit performing some function, either a real physical unit existing in the living environment or a virtual unit, such as a game character, existing in a virtual environment such as a display screen, AI or VR; driven by the control software of the control module, the control component can perform not only movement actions but also confirmation operations.

In the above solutions, the posture feature value pickup device and the head expression action or voice feature value pickup device may be combined into a single wearable device. The pickup devices may be connected to the control module by wire, or may communicate through wireless transceivers. In addition, the control module may be integrated into one control circuit, or distributed among the devices, including the posture feature value pickup device and the head expression action or voice feature value pickup device.

Compared with the prior art, the beneficial technical effect of the above technical solution is that the two-dimensional combination of human body posture actions with head expression actions or voice commands establishes not only a direction-control instruction mechanism adapted to body movement, but also an adapted yet independent "confirmation" instruction mechanism. This overcomes the lack of restraint of voice technology used alone and the limitations of EOG technology used alone; their effective cooperation solves the multi-dimensional control problem of human-computer interaction devices and the coordination problem between human body actions and control intent.

Since the present invention has the above features and advantages, it can be applied to human-computer interaction control components, devices and their control systems.

Description of the Drawings

FIG. 1 is a schematic flowchart of a first embodiment applying the technical solution of the present invention.

FIG. 2 is a schematic flowchart of a second embodiment applying the technical solution of the present invention.

Detailed Description of the Embodiments

The control method and control device defined based on two-dimensional intent applying the technical solution of the present invention are further described below with reference to the accompanying drawings.

Embodiment 1. As shown in FIG. 1, a control component control method defined based on two-dimensional intent is characterized as follows:

it comprises a human body posture action for expressing a direction-control intent and a head expression action for expressing a confirmation intent; the direction-control intent controls the movement direction of the control component 4, and the confirmation intent confirms the current operation of the control component 4 and its operation type;

the human body posture action is performed; posture feature values characterizing the action representation of the human body posture action itself are picked up; a posture fusion calculation module is provided; according to the picked-up posture feature values, the posture fusion calculation module identifies the movement-direction control instruction corresponding to the human body posture action, and the movement direction of the control component 4 is controlled by that instruction;

the head expression action is performed; head bioelectric signal feature values associated with the head expression action are picked up; a bioelectric signal calculation module 23 is provided; according to the picked-up bioelectric signal feature values, the bioelectric signal calculation module 23 identifies the confirmation instruction corresponding to the head expression action, and the current operation of the control component 4 and its operation type are confirmed by that instruction.

A two-dimensional-intent-based control device implementing the above method is characterized in that it comprises:

a posture feature value pickup device 21 for picking up posture feature values characterizing the action representation of a human body posture action itself and outputting a corresponding posture feature signal, the human body posture action being used to express the intent to control the movement direction of the control component 4;

a head bioelectric signal feature value pickup device 11 for picking up head bioelectric signal feature values associated with a head expression action and outputting a corresponding bioelectric feature signal, the head expression action being used to express the intent to confirm the current operation of the control component 4 and its operation type;

a control module 2 comprising a posture fusion calculation module 22 and a bioelectric signal calculation module 23, the posture fusion calculation module 22 receiving the posture feature signal and identifying the movement-direction control instruction corresponding to the human body posture action, the control module 2 further controlling the movement direction of the control component 4 according to that instruction; the bioelectric signal calculation module 23 receiving the bioelectric feature signal and identifying the confirmation instruction corresponding to the head expression action, the control module 2 confirming the current operation of the control component 4 and its operation type according to that instruction.

The control component is a hospital bed 4; the present invention is further described below, taking control of the hospital bed 4 as an example.

A control device based on EOG (electro-oculography) information and posture comprises a signal acquisition module, a wireless communication module, a control module 2 and a display screen 3. The wireless communication module comprises a wireless transmitting unit and a wireless receiving unit: the transmitting unit is arranged at the signal acquisition end of the signal acquisition module, and the receiving unit at the control-algorithm end of the control module 2, using the Wi-Fi wireless communication protocol. The control module 2 is installed on a desktop computer.

The method using eye-controlled EOG technology comprises human body posture actions for expressing direction-control intent, for example head-shaking actions, and eye actions producing EOG signals for expressing confirmation intent; the direction-control intent controls the movement direction of the control component 4, and the confirmation intent confirms the current operation of the control component 4 and its operation type. The head-shaking actions include four posture actions: tilting left, tilting right, raising the head and lowering the head.

The head-shaking action is performed; posture feature values characterizing the action representation of the head-shaking action itself, including the acceleration, angle and movement-direction feature values of the action, are picked up; a posture fusion calculation module 22 is provided; according to the picked-up posture feature values, the posture fusion calculation module 22 identifies the movement-direction control instruction corresponding to the head-shaking action, and the movement direction of the control component 4 is controlled by that instruction;
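The patent does not spell out what the posture fusion calculation module 22 computes beyond fusing acceleration, angle and direction features; one conventional way to obtain a stable head angle from such a nine-axis unit is a complementary filter, sketched below under our own assumptions (the coefficient 0.98 and the function names are illustrative, not the patent's):

```python
import math

def accel_pitch_deg(ax, ay, az):
    """Pitch (head raise/lower) angle estimated from a 3-axis accelerometer
    reading, in degrees; gravity is the implicit reference direction."""
    return math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))

def fuse_pitch(prev_pitch_deg, gyro_rate_dps, ax, ay, az, dt, alpha=0.98):
    """Complementary filter: trust the integrated gyro rate in the short
    term and the drift-free accelerometer tilt estimate in the long term."""
    gyro_est = prev_pitch_deg + gyro_rate_dps * dt
    return alpha * gyro_est + (1.0 - alpha) * accel_pitch_deg(ax, ay, az)
```

The fused angle can then be thresholded to decide among raising, lowering and tilting gestures.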

a blink expression action of the head is performed; EOG signal feature values associated with the head expression action are picked up; a bioelectric signal calculation module 23 is provided; according to the picked-up bioelectric signal feature values, the bioelectric signal calculation module 23 identifies the confirmation instruction corresponding to the head expression action, and the current operation of the control component 4 and its operation type are confirmed by that instruction.
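A blink-derived confirmation can be detected in the EOG signal as a short, large-amplitude deflection. The sketch below is our own illustration; the amplitude threshold and pulse-width limit are assumed values, not taken from the patent:

```python
def detect_blink(samples_uv, amp_threshold=200.0, max_width=30):
    """Return True if the sample window contains one short supra-threshold
    deflection, i.e. a plausible deliberate blink."""
    width = 0
    for value in samples_uv:
        if abs(value) >= amp_threshold:
            width += 1
            if width > max_width:  # too long: drift or artifact, not a blink
                return False
        elif width:
            return True            # a short pulse just ended -> blink
    return 0 < width <= max_width
```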

The signal acquisition module is a wearable device. It further comprises a head bioelectric signal feature value pickup device 11 for picking up the EOG electrical signals produced when user 1 blinks, and a posture feature value pickup device 21 for picking up the posture-action feature values of user 1; the signals they collect are transmitted to the control module 2 through the wireless communication module. The signal acquisition module also includes a microprocessor unit containing an STM32F103 chip, which is responsible for synchronizing and controlling all parts of the signal acquisition module.

In this embodiment, the posture feature value pickup device 21 is a wearable posture sensor unit worn on the head of user 1, comprising a PU9250 sensor chip connected to the microprocessor unit. This chip is a nine-axis posture sensor composed of a three-axis accelerometer, a three-axis gyroscope and a three-axis magnetometer, which respectively collect the acceleration, angle and movement-direction feature values of the head-shaking action of user 1.

In this embodiment, the head bioelectric signal feature value pickup device 11 is likewise a wearable EOG-signal-based eye-electric device worn on the head of user 1, comprising an electrode unit and an EOG signal amplifying unit; the electrode unit, the EOG signal amplifying unit and the microprocessor unit are connected in sequence. The electrode unit contains three conductive electrodes placed against different parts of the skin of user 1's head: one on the forehead and the other two behind the ears. The EOG signal amplifying unit is provided with an AD8232 integrated instrumentation amplifier chip, which integrates an instrumentation amplifier, a high-pass filter, a low-pass filter and a right-leg drive circuit.
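The AD8232 front end applies analog high-pass and low-pass filtering; a digital counterpart can be sketched with two first-order IIR stages. The cutoff frequencies and coefficients below are illustrative assumptions for an EOG-band signal, not values from the patent:

```python
import math

def iir_bandlimit(samples, fs_hz, hp_hz=0.5, lp_hz=30.0):
    """Band-limit a sample sequence: the low-pass stage suppresses
    high-frequency noise, the high-pass stage removes baseline drift."""
    w_lp = 2 * math.pi * lp_hz / fs_hz
    a_lp = w_lp / (1.0 + w_lp)                         # first-order low-pass coefficient
    a_hp = 1.0 / (1.0 + 2 * math.pi * hp_hz / fs_hz)   # first-order high-pass coefficient
    out, lp, hp_in, hp_out = [], 0.0, 0.0, 0.0
    for x in samples:
        lp = lp + a_lp * (x - lp)                      # low-pass stage
        hp_out = a_hp * (hp_out + lp - hp_in)          # high-pass stage
        hp_in = lp
        out.append(hp_out)
    return out
```

Fed a constant (DC) input, the high-pass stage drives the output back toward zero, which is exactly the drift removal the electrode signal needs.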

The control module 2 includes the posture fusion calculation module 22, which algorithmically recognizes the posture of the user 1's head (for example the four posture actions of tilting left, tilting right, raising the head and lowering the head) from the collected head-shaking posture signals and outputs corresponding posture feature signals. The recognition result is determined from the posture feature signals, and the control component 4 is then driven to move in the corresponding direction; for example, when the user raises his head, the control component 4 also rises, which facilitates control. The control module 2 also includes a bioelectric signal calculation module 23, which algorithmically recognizes the eye actions of the user 1 from the collected EOG signals corresponding to blinking actions and outputs corresponding bioelectric feature signals; the recognition result is determined from the bioelectric feature signal and a confirmation control instruction is then expressed, confirming the current operation type of the control component 4 (stop rising or stop falling). That is to say, the hospital bed 4 moves in response to the movement direction control instruction contained in the posture action, and completes the "confirmation" of the current operation and its operation type in response to the confirmation instruction contained in the head expression action.
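The division of labor described above, posture actions carrying movement-direction intent and blink actions carrying confirmation intent, can be sketched as a simple dispatch loop. This is only an illustrative sketch, not the patent's implementation; the posture labels and the `BedController` class are hypothetical names.

```python
# Illustrative sketch of the two-channel control scheme: posture events
# select a movement direction, a blink event confirms/stops the current
# operation. All names here are hypothetical, not from the patent.

DIRECTION_FOR_POSTURE = {
    "tilt_left": "left",
    "tilt_right": "right",
    "head_up": "up",
    "head_down": "down",
}

class BedController:
    """Minimal stand-in for control component 4 (e.g. a hospital bed)."""

    def __init__(self):
        self.direction = None  # current movement direction, None = idle

    def handle(self, event):
        """Dispatch one recognized event from the acquisition module."""
        if event in DIRECTION_FOR_POSTURE:
            # A posture action expresses a direction-control intention.
            self.direction = DIRECTION_FOR_POSTURE[event]
        elif event == "blink":
            # A head expression action expresses a confirmation intention:
            # here it confirms/stops the current operation.
            self.direction = None
        return self.direction

bed = BedController()
assert bed.handle("head_up") == "up"   # raising the head raises the bed
assert bed.handle("blink") is None     # blinking confirms "stop"
```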

In the above solution, in addition to a head-shaking action, the human body posture action can also be a whole-body leftward sway, rightward sway, chest-out movement or bending-over movement. These posture signals have the advantage that the individual actions are easy to distinguish from one another, and they unify the direction control intention with the posture direction of the human body itself, which facilitates expression and control; their action representations are easy to recognize and their feature values are easy to extract. A control scheme that fits people's habits is to make the result of the movement direction control basically consistent with the direction of these posture actions: for example, when you raise your head or push out your chest, the control module 2 preferably moves the cursor upward rather than to the left or right.

However, these human body posture actions are only expressions of a control intention; they are neither the signals that control the control component 4 nor the control signals to be expressed. Therefore, in order to express the control intention accurately, it is best to define standardized, normative posture actions in advance, so that the control intention can be displayed and expressed precisely and the control system does not have difficulty recognizing it and extracting feature values. A further technical solution is to establish a human body posture specification model for head shaking or body shaking. The human body posture model defines the type of human action that is effective for expressing the direction control intention (for example, head shaking or body shaking), as well as the shaking speed and postures such as maximum and minimum angles; by performing posture actions that conform to the definitions of the human body posture model, the user expresses the control intention for the movement direction of the control component 4. Once the human body posture model has been established, the user 1 can begin operating after understanding and learning the requirements of the standardized normative actions.
For example, when the controlled control component 4 is started in its initial state, the user 1 adjusts his posture to the set initial posture, including straightening the body and facing the display screen 3. For another example, the angle and speed with which the user 1 sways the body are required to conform to the specification.

In the above solution, the posture fusion calculation module 22 is a software operation module established on the basis of experience and data, used to recognize and interpret the control intention of the current human body posture action. In use, the posture fusion calculation module 22 can also collect feature values based on a normative posture action, i.e. the human body posture model, to establish reference feature values corresponding to the posture actions, and compare the currently picked-up posture feature values against them, recognizing the current posture action intention of the user 1 in real time.
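The reference-comparison step above can be sketched as a nearest-match classifier: each normative posture action contributes one reference feature vector, and the current pickup is assigned to the nearest reference within a tolerance. The feature values and the threshold below are invented for illustration; the patent does not specify the matching algorithm.

```python
import math

# Hypothetical reference feature vectors, one per normative posture action:
# (angle in degrees, direction angle in degrees, acceleration in g).
REFERENCE = {
    "tilt_left":  (-30.0,   0.0, 1.2),
    "tilt_right": ( 30.0,   0.0, 1.2),
    "head_up":    (  0.0,  20.0, 1.0),
    "head_down":  (  0.0, -20.0, 1.0),
}

def recognize(current, threshold=10.0):
    """Return the posture whose reference vector is nearest to `current`
    (Euclidean distance), or None if nothing is within `threshold`."""
    best, best_d = None, threshold
    for label, ref in REFERENCE.items():
        d = math.dist(current, ref)
        if d < best_d:
            best, best_d = label, d
    return best

assert recognize((-28.0, 1.0, 1.1)) == "tilt_left"
assert recognize((0.0, 0.0, 0.0)) is None  # too far from every reference
```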

Here, "picking up the posture feature values that characterize the human body posture action" defines the starting point of control as feature value pickup: the feature parameters of a three-dimensional behavioral action are picked up and provided to the control module 2 for subsequent use.

Here, the action representation of the human body posture action refers to the external performance characteristics of the posture action in three-dimensional space. Accordingly, picking up the posture feature values of the action representation itself of the human body posture action amounts in practice to picking up the trajectory that the posture action leaves in time and space.

Here, the current operation of the control component 4 refers to a confirmation operation performed on the control component 4; a single blink is similar to performing a left-click or right-click confirmation on an on-screen operating element with the mouse cursor. The operation type refers to the category of the confirmation operation being performed, for example starting or stopping the control component 4, affirming or negating, or accelerating or decelerating the control component 4; these operation categories can be set and adjusted according to the control needs of different control components 4.

In the above solution, it is also possible that, after a current posture action ends and is confirmed to be valid, the current posture feature values are taken as the new reference posture feature values. That is, to prevent the recognition errors and recognition delays caused by drift of the reference posture feature values, the reference feature values are appropriately refreshed and reset by updating them with the current posture feature values at the end of each action, or periodically.
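The refresh policy just described can be sketched in a few lines: whenever an action is confirmed valid, its feature values overwrite the stored reference so that slow drift does not accumulate. The class and field names are hypothetical.

```python
class ReferenceTracker:
    """Keeps a per-action reference feature vector and refreshes it with
    the most recent valid sample, so drift does not accumulate.
    Hypothetical sketch of the refresh policy described above."""

    def __init__(self, initial_refs):
        self.refs = dict(initial_refs)

    def confirm_valid(self, label, current_features):
        # After a posture action ends and is confirmed valid, the current
        # feature values become the new reference for that action.
        self.refs[label] = current_features

tracker = ReferenceTracker({"head_up": (0.0, 20.0, 1.0)})
tracker.confirm_valid("head_up", (1.5, 22.0, 1.1))  # user's habit shifted
assert tracker.refs["head_up"] == (1.5, 22.0, 1.1)
```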

In the above solution, it is also possible that the posture feature values include the angle, movement direction and acceleration of the human action that can be recognized from the action representation; that is, the posture feature values are defined in terms of the action's angle, direction and acceleration. For the extraction of the posture feature values, the invention uses a wearable rate gyroscope detector to measure the angle of the action and an acceleration sensor to measure its acceleration; in a further improved solution, a magnetometer can also provide the movement direction feature value among the posture feature values as it moves with the human body posture action. In other embodiments, a camera sampling device can instead capture images of the human body posture actions and obtain their posture feature values by means of image analysis techniques.
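One simple way to derive the three feature values named above from raw nine-axis samples is to integrate the rate-gyro readings for the angle, take the accelerometer vector magnitude for the acceleration, and compute a magnetometer heading for the movement direction. This is a hypothetical sketch under those assumptions, not the patent's fusion algorithm; all parameter names are invented.

```python
import math

def posture_features(gyro_rates_dps, accel_g, mag_xy, dt=0.01):
    """Derive (angle, acceleration, heading) from raw nine-axis samples.

    gyro_rates_dps : angular-rate samples about one axis, in deg/s
    accel_g        : one (ax, ay, az) accelerometer sample, in g
    mag_xy         : one (mx, my) horizontal magnetometer sample
    dt             : sample period in seconds
    """
    # Angle: integrate the rate-gyro readings over time.
    angle = sum(r * dt for r in gyro_rates_dps)
    # Acceleration: magnitude of the accelerometer vector.
    accel = math.sqrt(sum(a * a for a in accel_g))
    # Movement direction: heading from the magnetometer, in degrees.
    heading = math.degrees(math.atan2(mag_xy[1], mag_xy[0]))
    return angle, accel, heading

angle, accel, heading = posture_features(
    gyro_rates_dps=[100.0] * 30,   # 0.3 s at 100 deg/s -> 30 degrees
    accel_g=(0.0, 0.6, 0.8),
    mag_xy=(1.0, 0.0),
)
assert round(angle, 6) == 30.0
assert round(accel, 6) == 1.0
assert heading == 0.0
```

In practice a real implementation would fuse all three sensors (e.g. with a complementary or Kalman filter) rather than integrate the gyro alone, which drifts over time.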

In the above solution, the head expression action is a small muscle action or mental action based on the head. For this purpose, in addition to the electrooculographic EOG signal caused by blinking, other small muscle actions can also be used, such as eyeball rotation, facial twitching, jaw actions of clenching or grinding the teeth, or purely mental actions of conscious thought. When these expression actions are performed, corresponding bioelectric signals with different characteristics are generated in the head, commonly known as EEG signals, EMG signals and so on (for example EOG electrooculography and EMG electromyography signals); many methods and devices already exist to pick up these bioelectric signals. The feature values of these bioelectric signals mainly include the signal amplitude, the signal frequency and its spectral composition; picking up and distinguishing these feature values makes it possible to implement the confirmation instruction.
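The simplest of the feature values named above, signal amplitude, already suffices for a toy blink detector: count the rising edges of the EOG amplitude through a threshold. The sample values and threshold below are invented for illustration; a real EOG pipeline would also filter the signal and use the frequency/spectral features mentioned above.

```python
def detect_blinks(eog_samples, threshold=200.0):
    """Count blink events in a stream of EOG amplitudes (microvolts).
    A blink is a contiguous run of samples at or above `threshold`;
    each rising edge through the threshold counts as one blink.
    Hypothetical amplitude-only sketch."""
    blinks = 0
    above = False
    for v in eog_samples:
        if v >= threshold and not above:
            blinks += 1  # rising edge: a new blink begins
        above = v >= threshold
    return blinks

# Quiet baseline, one blink peak, quiet, another blink peak.
signal = [10, 12, 250, 320, 260, 15, 8, 240, 300, 20]
assert detect_blinks(signal) == 2
```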

Comparing the characteristics of the human body posture actions with those of the head expression actions, the difference between them is obvious: the former are limb actions of relatively large amplitude, which can therefore be used to directly express control intentions related to the movement direction, while the latter are only small muscle actions or mental actions of the head. Moreover, the head expression action can be performed fully in synchrony with the human body posture action, or performed as a quick response shortly before or after the human body posture action, thereby achieving rapid confirmation control; the two channels do not interfere with each other and are easy for ordinary users to operate.

For this reason, the present invention uses these expression actions to express or transmit a confirmation instruction. On the basis of this confirmation intention and instruction, the control program itself can either assign a single action to represent a confirmation meaning, like the traditional single mouse click, or two consecutive actions to represent a confirmation meaning, like the traditional double mouse click; that is, one action, or the repetition or combination of several actions.
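The single-versus-double scheme just described can be sketched exactly like click/double-click disambiguation: two blinks falling within a short time window form one "double" confirmation, otherwise each blink is a "single" confirmation. The window length is an invented illustrative value.

```python
def classify_confirmation(blink_times, double_window=0.5):
    """Interpret a sorted sequence of blink timestamps (seconds) as
    confirmation events, in analogy to single vs. double mouse clicks:
    two blinks within `double_window` seconds form one "double" event.
    Hypothetical sketch of the single/double action scheme above."""
    events = []
    i = 0
    while i < len(blink_times):
        if (i + 1 < len(blink_times)
                and blink_times[i + 1] - blink_times[i] <= double_window):
            events.append("double")
            i += 2
        else:
            events.append("single")
            i += 1
    return events

assert classify_confirmation([1.0]) == ["single"]
assert classify_confirmation([1.0, 1.3]) == ["double"]
assert classify_confirmation([1.0, 2.0, 2.2]) == ["single", "double"]
```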

Secondly, the head expression action is not only an independent action with its own signal; it is also defined such that the head expression action is mainly used to express the intention of confirming control. That is, performing this action is not intended to move the control component 4 (for example a mouse cursor, also called a specific marker) according to one's own wishes, but to express a confirmation meaning, similar to the start or stop confirmation of a mouse click.

In the above solution, it is also possible that, after a current expression action ends and is confirmed to be valid, the current bioelectric signal feature values are taken as the new reference bioelectric signal feature values. That is, to prevent the recognition errors caused by changes in the bioelectric signal characteristics of the user 1 over different periods, or by changes in the current bioelectric signal feature values of different participants (rather than by drift of the reference bioelectric signal feature values), the reference bioelectric signal feature values are appropriately refreshed and reset by updating them with the current feature values at the end of each action, or periodically.

Embodiment 2. As shown in Fig. 2, the present invention also provides a control method defined on the basis of two-dimensional intentions, characterized in that it includes human body posture actions for expressing a direction control intention and voice commands for expressing a confirmation intention; the direction control intention is aimed at controlling the movement direction of the control component 4, and the confirmation intention is aimed at confirming the current operation of the control component 4 and its operation type.

The human body posture action is performed and the posture feature values of the action representation itself that characterizes the human body posture action are picked up; a posture fusion calculation module 22 is provided, and on the basis of the picked-up posture feature values the posture fusion calculation module 22 recognizes the movement direction control instruction corresponding to the human body posture action; the movement direction of the control component 4 is controlled through the movement direction control instruction.

A voice command is issued and the voice signal feature values characterizing the voice command are picked up; a voice signal calculation module 24 is provided, and on the basis of the picked-up voice signal feature values the voice signal calculation module 24 recognizes the confirmation instruction corresponding to the voice command; the current operation of the control component 4 and its operation type are confirmed through the confirmation instruction.

Comparing the solution of the above Embodiment 2 with Embodiment 1, the main difference is that the head expression action expressing the confirmation intention in Embodiment 1 is replaced by a voice command issued by the user 1. Accordingly, in the above solution it is also possible for the voice signal calculation module to establish voice signal reference feature values corresponding to the voice commands and to compare the picked-up feature values of the current voice signal against them, recognizing the current voice command intention of the user 1 in real time.

In the above solution, it is also possible that, after a current voice signal ends and is confirmed to be valid, the voice signal calculation module takes the current voice signal feature values as the new reference feature values. In this way, even when the user 1 of the product is replaced or the accent characteristics of the user 1 change, the recognition capability can still be adjusted in time.

The present invention also provides a control device based on two-dimensional intentions, characterized in that it includes:

a posture feature value pickup device 21 for picking up the posture feature values of the action representation itself that characterizes a human body posture action and outputting a corresponding posture feature signal, the human body posture action being used to express the control intention of controlling the movement direction of the control component 4;

a voice command feature value pickup device 5 for picking up the voice signal feature values characterizing the voice command and outputting a corresponding voice feature signal, the voice command being used to express the intention of confirming the current operation of the control component 4 and its operation type; and

a control module 2 including a posture fusion calculation module 22 and a voice signal calculation module 24, the posture fusion calculation module 22 being used to receive the posture feature signal and recognize the movement direction control instruction corresponding to the human body posture action, the control module 2 being used to further control the movement direction of the control component 4 according to the movement direction control instruction; the voice signal calculation module 24 being used to receive the voice feature signal and recognize the confirmation instruction corresponding to the voice command, the control module 2 being used to confirm the current operation of the control component 4 and its operation type according to the confirmation instruction.

Claims (27)

1.基于二维意图所定义的控制部件控制方法,其特征在于,包括用于表达方向控制意图的人体姿态动作和用于表达确认意图的头部表情动作,所述方向控制意图旨在对控制部件的运动方向予以控制,所述确认意图旨在确认所述控制部件的当前操作及其操作类型;1. A control component control method defined based on a two-dimensional intention, characterized in that it includes a human body gesture action for expressing a direction control intention and a head expression action for expressing a confirmation intention, and the direction control intention is intended to control the control. the direction of movement of the component is controlled, and the confirmation is intended to confirm the current operation of the control component and its type of operation; 实施所述人体姿态动作,拾取表征所述人体姿态动作的动作表象本身的姿态特征值,设置姿态融合计算模块,根据所拾取的所述姿态特征值并基于所述姿态融合计算模块识别出对应于所述人体姿态动作的运动方向控制指令,通过所述运动方向控制指令控制所述控制部件的运动方向;Implement the human body posture action, pick up the posture feature value of the action representation itself that characterizes the human body posture action, set the posture fusion calculation module, and identify the corresponding posture according to the picked up posture feature value and based on the posture fusion calculation module. The movement direction control command of the human body posture action, and the movement direction of the control component is controlled by the movement direction control command; 实施所述头部表情动作,拾取关联于所述头部表情动作而导致的头部生物电信号特征值,设置生物电信号计算模块,根据所拾取的所述生物电信号特征值并基于所述生物电信号计算模块识别出对应于所述头部表情动作的确认指令,通过所述确认指令确认所述控制部件的当前操作及其操作类型。Implement the head expression action, pick up the bioelectric signal characteristic value of the head caused by the head expression action, set up a bioelectric signal calculation module, according to the picked up bioelectric signal characteristic value and based on the The bioelectric signal calculation module identifies a confirmation instruction corresponding to the head expression, and confirms the current operation of the control component and its operation type through the confirmation instruction. 2.根据权利要求1所述的控制方法,其特征在于,通过所述姿态融合计算模块建立对应于姿态动作的基准特征值并比较所拾取的当次姿态特征值,实时识别出用户当前的姿态动作意图。2. 
control method according to claim 1, is characterized in that, establishes the reference characteristic value corresponding to the gesture action and compares the current attitude characteristic value picked up by the described attitude fusion calculation module, recognizes the user's current attitude in real time Action intent. 3.根据权利要求2所述的控制方法,其特征在于,在一个当次的姿态动作结束并确认为有效后,将当次的姿态特征值替换为基准特征值。3 . The control method according to claim 2 , wherein after a current gesture action is completed and confirmed to be valid, the current gesture feature value is replaced with a reference feature value. 4 . 4.根据权利要求2所述的控制方法,其特征在于,所述姿态特征值包括有可以从动作表象上予以识别的人体动作的角度、方向和加速度。4 . The control method according to claim 2 , wherein the posture feature value includes the angle, direction and acceleration of the human action which can be identified from the action representation. 5 . 5.根据权利要求4所述的控制方法,其特征在于,建立人体姿态模型,所述人体姿态模型定义用于表达所述方向控制意图的人体动作类型,通过实施符合所述人体姿态模型所定义的人体姿态动作,表达对控制部件的运动方向的控制意图。5. The control method according to claim 4, wherein a human body posture model is established, and the human body posture model defines a human body action type for expressing the direction control intention, and is defined in accordance with the human body posture model by implementing It expresses the control intention of the movement direction of the control part. 6.根据权利要求1所述的控制方法,其特征在于,通过所述生物电信号计算模块建立对应于表情动作的生物电信号基准特征值并比较所拾取的当次生物电信号的特征值,实时识别出用户当前的表情动作意图。6. The control method according to claim 1, wherein the bioelectric signal calculation module is used to establish a reference characteristic value of the bioelectric signal corresponding to the facial expression and compare the characteristic value of the current bioelectric signal picked up, Real-time recognition of the user's current expression and action intention. 7.根据权利要求6所述的控制方法,其特征在于,在一个当次的表情动作结束并确认为有效后,将当次的生物电信号特征值替换为基准姿态特征值。7 . 
The control method according to claim 6 , wherein after a current facial expression is completed and confirmed to be valid, the current bioelectric signal feature value is replaced with a reference posture feature value. 8 . 8.根据权利要求6所述的控制方法,其特征在于,所述生物电信号特征值包括有信号幅度、信号频率。8 . The control method according to claim 6 , wherein the characteristic value of the bioelectric signal includes a signal amplitude and a signal frequency. 9 . 9.根据权利要求1所述的控制方法,其特征在于,所述操作类型包括是否予以启动或停止,或是否加速或减速。9 . The control method according to claim 1 , wherein the operation type includes whether to start or stop, or whether to accelerate or decelerate. 10 . 10.根据权利要求1到9任一所述的控制方法,其特征在于,所述人体姿态动作包括四种能够独立表达运动方向控制意图的姿态动作,每种所述姿态动作定义一种运动方向控制意图,四种所述姿态动作用于分别表达前、后、左、右四个方向移动的控制意图。10. The control method according to any one of claims 1 to 9, wherein the human body gesture actions include four kinds of gesture actions capable of independently expressing control intentions of movement directions, and each of the gesture actions defines a movement direction Control intent, the four gesture actions are used to express the control intent of moving in four directions: front, back, left, and right, respectively. 11.根据权利要求10所述的控制方法,其特征在于,所述人体姿态动作是身体的晃动动作。11 . The control method according to claim 10 , wherein the gesture action of the human body is a shaking action of the body. 12 . 12.根据权利要求11所述的控制方法,其特征在于,所述身体的晃动动作包括身体的左偏晃动、右偏晃动、挺胸移动和弯腰移动四种姿态动作。12 . The control method according to claim 11 , wherein the body shaking action includes four posture actions: leftward shaking of the body, rightward shaking, chest up movement and bending over movement. 13 . 13.根据权利要求10所述的控制方法,其特征在于,所述人体姿态动作是头部的摇动动作。13 . The control method according to claim 10 , wherein the gesture action of the human body is a shaking action of the head. 14 . 14.根据权利要求13所述的控制方法,其特征在于,所述头部的摇动动作包括头部的左偏晃动、右偏晃动、抬头移动和低头移动四种姿态动作。14 . 
The control method according to claim 13 , wherein the shaking action of the head includes four gesture actions of the head tilting to the left, tilting to the right, moving the head up and moving the head down. 15 . 15.根据权利要求1到9任一所述的控制方法,其特征在于,所述头部表情动作是能够导致头部产生可拾取生物电信号的头部肌肉动作或思维动作。15. The control method according to any one of claims 1 to 9, wherein the head expression action is a head muscle action or a thinking action that can cause the head to generate bioelectric signals that can be picked up. 16.根据权利要求15所述的控制方法,其特征在于,所述头部表情动作包括眼部动作、面部动作、下颚动作或脑意识动作中的一个动作或几个动作的重复或组合。16 . The control method according to claim 15 , wherein the head expression action comprises one action or a repetition or combination of several actions among eye action, facial action, jaw action or brain awareness action. 17 . 17.根据权利要求1到9任一所述的控制方法,其特征在于,所述控制部件是病床或轮椅,所述控制部件响应于所述姿态动作所蕴含的运动方向控制指令而移动,并响应于所述头部表情动作所蕴含的确认指令完成对当前操作及其操作类型的“确认”。17. The control method according to any one of claims 1 to 9, wherein the control component is a hospital bed or a wheelchair, and the control component moves in response to a movement direction control command contained in the gesture action, and The "confirmation" of the current operation and its operation type is completed in response to the confirmation instruction implied by the head expression action. 18.基于二维意图所定义的控制部件控制方法,其特征在于,包括用于表达方向控制意图的人体姿态动作和用于表达确认意图的语音命令,所述方向控制意图旨在对控制部件的运动方向予以控制,所述确认意图旨在确认所述控制部件的当前操作及其操作类型;18. A control part control method defined based on a two-dimensional intention, characterized in that it comprises a human body gesture action for expressing a direction control intention and a voice command for expressing a confirmation intention, the direction control intention being aimed at controlling the control part. 
The direction of movement is controlled, and said confirmation is intended to confirm the current operation of said control part and its type of operation; 实施所述人体姿态动作,拾取表征所述人体姿态动作的动作表象本身的姿态特征值,设置姿态融合计算模块,根据所拾取的所述姿态特征值并基于所述姿态融合计算模块识别出对应于所述人体姿态动作的运动方向控制指令,通过所述运动方向控制指令控制所述控制部件的运动方向;Implement the human body posture action, pick up the posture feature value of the action representation itself that characterizes the human body posture action, set the posture fusion calculation module, and identify the corresponding posture according to the picked up posture feature value and based on the posture fusion calculation module. The movement direction control command of the human body posture action, and the movement direction of the control component is controlled by the movement direction control command; 实施语音命令,拾取表征所述语音命令的语音信号特征值,设置语音信号计算模块,根据所拾取的所述语音信号特征值并基于所述语音信号计算模块识别出对应于所述语音命令的确认指令,通过所述确认指令确认所述控制部件的当前操作及其操作类型。Implement a voice command, pick up the voice signal feature value that characterizes the voice command, set a voice signal calculation module, identify a confirmation corresponding to the voice command according to the picked up voice signal feature value and based on the voice signal calculation module an instruction, and the current operation of the control part and its operation type are confirmed through the confirmation instruction. 19.根据权利要求18所述的控制方法,其特征在于,通过所述语音信号计算模块建立对应于语音命令的语音信号基准特征值并比较所拾取的当次语音信号的特征值,实时识别出用户当前的语音命令意图。19. The control method according to claim 18, characterized in that, establishing the reference feature value of the voice signal corresponding to the voice command by the voice signal calculation module and comparing the feature value of the current voice signal picked up, identifying in real time The user's current voice command intent. 20.根据权利要求19所述的控制方法,其特征在于,在一个当次的语音信号结束并确认为有效后,所述语音信号计算模块将当次的语音信号特征值替换为基准语音信号特征值。20. 
The control method according to claim 19, characterized in that, after a current voice signal ends and is confirmed to be valid, the voice signal calculation module replaces the current voice signal feature value with a reference voice signal feature value. 21.基于二维意图的控制装置,其特征在于,包括:21. A two-dimensional intent-based control device, characterized in that it comprises: 姿态特征值拾取装置,所述姿态特征值拾取装置用于拾取表征人体姿态动作的动作表象本身的姿态特征值并给出相应的姿态特征信号,所述人体姿态动作用于表达对控制部件的运动方向予以控制的控制意图;An attitude feature value pickup device, the attitude feature value pickup device is used to pick up the attitude feature value of the action representation itself that characterizes the posture action of the human body and give the corresponding posture feature signal, and the human body posture action is used to express the movement of the control part the control intention to control the direction; 头部生物电信号特征值拾取装置,所述头部生物电信号特征值拾取装置用于拾取关联于头部表情动作而导致的头部生物电信号特征值并给出相应的生物电特征信号,所述头部表情动作用于表达对控制部件的当前操作及其操作类型予以确认的意图;A head bioelectric signal characteristic value pickup device, the head bioelectric signal characteristic value pickup device is used to pick up the head bioelectric signal characteristic value caused by the movement of the head expression and give the corresponding bioelectric characteristic signal, The head expression action is used to express the intention to confirm the current operation of the control component and its operation type; 控制模块,包括姿态融合计算模块和生物电信号计算模块,所述姿态融合计算模块用于接收所述姿态特征信号并识别出对应于所述人体姿态动作的运动方向控制指令,所述控制模块用于根据所述运动方向控制指令进一步控制所述控制部件的运动方向;所述生物电信号计算模块用于接收所述生物电特征信号并识别出对应于所述头部表情动作的确认指令,所述控制模块用于根据所述确认指令确认所述控制部件的当前操作及其操作类型。The control module includes a posture fusion calculation module and a bioelectric signal calculation module, the posture fusion calculation module is used to receive the posture feature signal and identify the motion direction control instruction corresponding to the human body posture action, and the control module uses In order to further control the movement direction of the control 
part according to the movement direction control instruction; the bioelectric signal calculation module is used to receive the bioelectricity signal and identify the confirmation instruction corresponding to the head expression action, so The control module is used for confirming the current operation of the control component and the operation type thereof according to the confirmation instruction. 22.根据权利要求21所述的控制装置,其特征在于,所述控制部件是病床或轮椅,所述控制部件响应于所述姿态动作所蕴含的运动方向控制指令而移动,并响应于所述头部表情动作所蕴含的确认指令完成对当前操作及其操作类型的“确认”。22. The control device according to claim 21, wherein the control part is a hospital bed or a wheelchair, and the control part moves in response to a movement direction control command contained in the gesture action, and in response to the The confirmation command contained in the head expression action completes the "confirmation" of the current operation and its operation type. 23.根据权利要求21所述的控制装置,其特征在于,所述姿态特征值拾取装置包括速率陀螺仪检测器、加速度传感器,所述速率陀螺仪检测器用于随所述人体姿态动作移动时提供所述姿态特征值中的角速度变化特征值;所述加速度传感器用于随所述人体姿态动作移动时提供所述姿态特征值中的加速度变化特征值。23 . The control device according to claim 21 , wherein the attitude feature value pickup device comprises a rate gyroscope detector and an acceleration sensor, and the rate gyroscope detector is used to provide a rate gyroscope detector when moving with the human body attitude action. 24 . The angular velocity change characteristic value in the attitude characteristic value; the acceleration sensor is used to provide the acceleration change characteristic value in the attitude characteristic value when moving with the gesture action of the human body. 24.根据权利要求23所述的控制装置,其特征在于,所述姿态特征值拾取装置还包括磁力计,所述磁力计用于随所述人体姿态动作移动时提供所述姿态特征值中的运动方向特征值。24 . The control device according to claim 23 , wherein the device for picking up the posture characteristic value further comprises a magnetometer, which is used to provide the posture characteristic value when moving with the posture action of the human body. 25 . Motion direction eigenvalues. 
25. The control device according to claim 21, characterized in that the posture feature value pickup device is a camera sampling device that captures images of the human body posture action and obtains the posture feature values of the human body posture action by means of image analysis techniques.
26. The control device according to any one of claims 21 to 25, characterized in that the head bioelectric signal feature value pickup device is a wearable device based on EOG signals, comprising an electrode unit and an EOG signal amplification unit; the electrode unit contains three conductive electrodes held in close contact with the skin of the user's head, one placed on the user's forehead and the other two behind the ears; the EOG signal amplification unit is provided with an integrated instrumentation amplifier on which an instrumentation amplifier, a high-pass filter, a low-pass filter and a right-leg drive circuit are integrated.
27. A control device based on two-dimensional intent, characterized in that it comprises:
a posture feature value pickup device for picking up posture feature values that characterize the outward appearance of a human body posture action and outputting corresponding posture feature signals, the human body posture action being used to express the intent to control the movement direction of a control component;
a voice command feature value pickup device for picking up voice signal feature values that characterize a voice command and outputting corresponding voice feature signals, the voice command being used to express the intent to confirm the current operation of the control component and its operation type;
a control module comprising a posture fusion calculation module and a voice signal calculation module, the posture fusion calculation module being configured to receive the posture feature signals and identify the movement direction control instruction corresponding to the human body posture action, the control module further controlling the movement direction of the control component according to the movement direction control instruction; the voice signal calculation module being configured to receive the voice feature signals and identify the confirmation instruction corresponding to the voice command, the control module confirming the current operation of the control component and its operation type according to the confirmation instruction.
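Claim 26 recites a hardware band-pass stage (high-pass plus low-pass filter around an instrumentation amplifier) for the EOG signal, and the claims use a head expression action as the confirmation gesture. A minimal software analogue of that filter chain, followed by a threshold detector, might look as follows. The cut-off frequencies, the amplitude threshold, and the double-blink-as-confirmation rule are illustrative assumptions; the patent does not disclose these parameters or the specific expression action used.

```python
import math

def iir_lowpass(samples, fs, fc):
    """First-order low-pass: y[n] = y[n-1] + a*(x[n] - y[n-1])."""
    w = 2 * math.pi * fc / fs
    a = w / (w + 1)
    y, out = 0.0, []
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

def iir_highpass(samples, fs, fc):
    """First-order high-pass: the input minus its low-pass (drift) component."""
    low = iir_lowpass(samples, fs, fc)
    return [x - l for x, l in zip(samples, low)]

def detect_blinks(eog, fs, amp_thresh=100.0):
    """Band-pass the raw EOG (0.5-30 Hz here, an illustrative choice) and
    return the sample indices where it crosses the blink amplitude threshold."""
    filtered = iir_lowpass(iir_highpass(eog, fs, 0.5), fs, 30.0)
    blinks, above = [], False
    for i, v in enumerate(filtered):
        if v > amp_thresh and not above:
            blinks.append(i)
            above = True
        elif v <= amp_thresh:
            above = False
    return blinks

def is_confirmation(blink_indices, fs, max_gap_s=0.5):
    """Treat two blinks within max_gap_s seconds as a deliberate 'confirm',
    so that ordinary spontaneous blinks are not mistaken for commands."""
    return any(b2 - b1 <= max_gap_s * fs
               for b1, b2 in zip(blink_indices, blink_indices[1:]))
```

Separating the direction channel (posture) from the confirmation channel (EOG or voice) in this way is what the title's "two-dimensional intent" suggests: an accidental head movement alone cannot trigger an operation until the second, independent channel confirms it.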
CN201911186185.9A 2019-05-21 2019-11-28 Method and device for controlling control components defined based on two-dimensional intent Pending CN110727353A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910425565.7A CN110134245A (en) 2019-05-21 2019-05-21 Eye control device and eye control method based on EOG and attitude sensor
CN2019104255657 2019-05-21

Publications (1)

Publication Number Publication Date
CN110727353A true CN110727353A (en) 2020-01-24

Family

ID=67572108

Family Applications (5)

Application Number Title Priority Date Filing Date
CN201910425565.7A Pending CN110134245A (en) 2019-05-21 2019-05-21 Eye control device and eye control method based on EOG and attitude sensor
CN201911186185.9A Pending CN110727353A (en) 2019-05-21 2019-11-28 Method and device for controlling control components defined based on two-dimensional intent
CN201911186189.7A Pending CN111290572A (en) 2019-05-21 2019-11-28 Driving device and driving method based on EOG signal and head posture
CN201911186227.9A Pending CN110850987A (en) 2019-05-21 2019-11-28 Specific identification control method and device based on two-dimensional intention expressed by human body
CN202020852482.4U Active CN212112406U (en) 2019-05-21 2020-05-20 Driving device based on user EOG signal and head gesture

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201910425565.7A Pending CN110134245A (en) 2019-05-21 2019-05-21 Eye control device and eye control method based on EOG and attitude sensor

Family Applications After (3)

Application Number Title Priority Date Filing Date
CN201911186189.7A Pending CN111290572A (en) 2019-05-21 2019-11-28 Driving device and driving method based on EOG signal and head posture
CN201911186227.9A Pending CN110850987A (en) 2019-05-21 2019-11-28 Specific identification control method and device based on two-dimensional intention expressed by human body
CN202020852482.4U Active CN212112406U (en) 2019-05-21 2020-05-20 Driving device based on user EOG signal and head gesture

Country Status (1)

Country Link
CN (5) CN110134245A (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113520740A (en) * 2020-04-13 2021-10-22 广东博方众济医疗科技有限公司 Wheelchair bed control method and device, electronic equipment and storage medium
CN112751882A (en) * 2021-01-19 2021-05-04 华南理工大学 Real-time communication method based on hybrid brain-computer interface
CN112860073A (en) * 2021-03-17 2021-05-28 华南脑控(广东)智能科技有限公司 Man-machine interactive closed-loop mouse identification control system
CN113156861A (en) * 2021-04-21 2021-07-23 华南脑控(广东)智能科技有限公司 Intelligent wheelchair control system
CN113448435B (en) * 2021-06-11 2023-06-13 北京数易科技有限公司 Eye control cursor stabilization method based on Kalman filtering
CN115890655B (en) * 2022-10-11 2024-02-09 人工智能与数字经济广东省实验室(广州) Mechanical arm control method, device and medium based on head gesture and electrooculogram
CN115741670B (en) * 2022-10-11 2024-05-03 华南理工大学 Wheelchair mechanical arm system based on multi-mode signal and machine vision fusion control
CN116880700A (en) * 2023-09-07 2023-10-13 华南理工大学 Raspberry Pi smart car control method and system based on wearable brain-computer interface
CN117357351B (en) * 2023-12-05 2024-06-18 华南脑控(广东)智能科技有限公司 Multi-mode intelligent control method and device for electric sickbed and household appliances

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102419588A (en) * 2011-12-28 2012-04-18 许冰 Method and device for controlling target based on electroencephalogram signal and motion signal
CN104615243A (en) * 2015-01-15 2015-05-13 深圳市掌网立体时代视讯技术有限公司 Head-worn multi-channel interaction system and multi-channel interaction method
US20150222948A1 (en) * 2012-09-29 2015-08-06 Shenzhen Prtek Co. Ltd. Multimedia Device Voice Control System and Method, and Computer Storage Medium
CN105487674A (en) * 2016-01-17 2016-04-13 仲佳 Head control device and method thereof
CN106178538A (en) * 2016-09-13 2016-12-07 成都创慧科达科技有限公司 Intelligent toy control system and method based on attitude detection
CN107357311A (en) * 2017-07-28 2017-11-17 南京航空航天大学 Unmanned aerial vehicle reconnaissance system based on hybrid control technology

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080211768A1 (en) * 2006-12-07 2008-09-04 Randy Breen Inertial Sensor Input Device
CN101308400A (en) * 2007-05-18 2008-11-19 肖斌 Novel human-machine interaction device based on eye-motion and head motion detection
TW201028895A (en) * 2009-01-23 2010-08-01 Rui-Keng Chou Electro-oculogram control system
CN102622085A (en) * 2012-04-11 2012-08-01 北京航空航天大学 Multi-dimensional sensing human-machine interaction system and method
JP5888205B2 (en) * 2012-11-02 2016-03-16 ソニー株式会社 Image display device and information input device
JP2017049960A (en) * 2015-09-06 2017-03-09 株式会社ローレル・コード User interface program and device using sensors of hmd device
CN106933353A (en) * 2017-02-15 2017-07-07 南昌大学 Two-dimensional cursor motion control system and method based on motor imagery and code-modulated VEP
WO2019001360A1 (en) * 2017-06-29 2019-01-03 华南理工大学 Human-machine interaction method based on visual stimulations
CN108703760A (en) * 2018-06-15 2018-10-26 安徽中科智链信息科技有限公司 Human motion gesture recognition system and method based on nine-axis sensors


Also Published As

Publication number Publication date
CN110850987A (en) 2020-02-28
CN110134245A (en) 2019-08-16
CN111290572A (en) 2020-06-16
CN212112406U (en) 2020-12-08

Similar Documents

Publication Publication Date Title
CN110727353A (en) Method and device for controlling control components defined based on two-dimensional intent
CN110840666B An integrated wheelchair-manipulator system based on electrooculography and machine vision, and its control method
US11262851B2 (en) Target selection based on human gestures
US11409371B2 (en) Systems and methods for gesture-based control
CN102866775A (en) System and method for controlling brain computer interface (BCI) based on multimode fusion
CN108646915B (en) Method and system for controlling robotic arm to grasp objects by combining three-dimensional gaze tracking and brain-computer interface
Zhang et al. Study on robot grasping system of SSVEP-BCI based on augmented reality stimulus
WO2021154606A1 (en) Human-machine interface using biopotential sensor and location sensor
CN115890655B (en) Mechanical arm control method, device and medium based on head gesture and electrooculogram
Ghovanloo Wearable and non-invasive assistive technologies
Petrushin et al. Effect of a click-like feedback on motor imagery in EEG-BCI and eye-tracking hybrid control for telepresence
CN204748634U (en) Motion control system of robot
CN112860073A (en) Man-machine interactive closed-loop mouse identification control system
CN111522435A (en) Mechanical arm interaction method based on surface electromyogram signal
US12229345B2 (en) Systems and methods for gesture-based control
CN115590695B (en) Wheelchair control system based on electrooculogram and face recognition
Mamatha et al. Smart sensor design and analysis of brain machine interface using labview
RU193501U1 (en) Device for replacing lost human functions using feedback
Takahashi et al. Remarks on EOG and EMG gesture recognition in hands-free manipulation system
CN110315541B (en) Electroencephalogram and eye-movement computer control system for a multi-degree-of-freedom mechanical arm
CN115793854A (en) A human-computer interaction control system and control method based on eye myoelectric signals
CN115804695A (en) Multi-modal brain-computer interface wheelchair control system integrating dual attitude sensors
CN105739442B (en) A kind of bionic hand control system based on EEG signals
KR20230075079A (en) Real-time feedback system for controlling target based on brain-computer interface and method thereof
RS Novel Alternative Approaches for Developing Smart Wheelchairs: A Survey

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Li Yuanqing

Inventor after: Xiao Jing

Inventor after: Qu Jun

Inventor before: Li Yuanqing

Inventor before: Xiao Jing

Inventor before: Zhai Jun