WO2019228174A1 - Intelligent VR glasses based on facial expression recognition - Google Patents

Intelligent VR glasses based on facial expression recognition

Info

Publication number
WO2019228174A1
WO2019228174A1 (PCT/CN2019/086547)
Authority
WO
WIPO (PCT)
Prior art keywords
signal
unit
frame
display
facial expression
Prior art date
Application number
PCT/CN2019/086547
Other languages
English (en)
French (fr)
Inventor
谭朝予
Original Assignee
烟台市安特洛普网络科技有限公司
Priority date
Filing date
Publication date
Application filed by 烟台市安特洛普网络科技有限公司 filed Critical 烟台市安特洛普网络科技有限公司
Publication of WO2019228174A1

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B2027/0178Eyeglass type

Definitions

  • the present invention relates to the technical field of network appliances, and in particular, to intelligent VR glasses based on facial expression recognition.
  • Facial expression refers to the expression of various emotional states through changes in eye muscles, facial muscles, and mouth muscles. Facial expressions are a very important means of non-verbal communication.
  • the face uses the motion of dozens of muscles to accurately convey different mentalities and emotions. Any kind of facial expression is caused by the overall function of facial muscles, but the muscles in certain specific parts of the face have a more obvious effect on expressing some special emotions.
  • The mouth, cheeks, eyebrows, and forehead are the key regions for pleasure; the nose, cheeks, and mouth express disgust; the eyebrows, forehead, eyes, and eyelids express sadness; the eyes and eyelids express fear. Therefore, the state of facial muscle movement can be obtained by detecting the motion information of the facial muscles; since muscle movement generates electromyographic signals, facial expression characteristics can be obtained by measuring the facial EMG signal.
  • VR technology is emerging in the market.
  • the interaction between VR technology and virtual networks can bring rich life and entertainment experiences.
  • VR interactions are currently mostly at the stage of human-virtual world interaction.
  • Interaction between people is mostly limited to voice; it lacks imagery, and the experience is rather impoverished.
  • the purpose of the present invention is to overcome the shortcomings of the prior art, and to provide intelligent VR glasses based on facial expression recognition, which are applied to the interactive experience between VR users, and transmit the expressions to both users through images, thereby increasing the experience pleasure.
  • Intelligent VR glasses based on facial expression recognition include a frame, on which a VR electronic lens is provided to display electronic images, and further include an electromyographic (EMG) signal acquisition unit, which uses surface EMG sensors to collect changes in the surface charge signal of the facial muscles and passes them to the processing unit;
  • the processing unit converts the electrical signal collected by the surface EMG sensor into a digital signal; the converted digital signal is matched against the facial expression data in the database and transmitted to the interaction unit to generate a corresponding display signal;
  • the interaction unit transmits display signals to the outside or receives display signals from the outside, and after receiving an external display signal transmits it to the display unit;
  • the display unit displays the 3D expression image on the VR electronic lens according to the display signal transmitted from the interaction unit.
  • the signal acquisition unit is further provided with a differential mode amplifier to amplify the signal of the surface electromyography sensor and transmit the signal to the processing unit.
  • the processing unit is provided with an A/D converter, which converts the electrical signal collected by the surface EMG sensor into a digital signal for transmission in the circuit.
  • the frame has a built-in rechargeable battery, and the frame is provided with a charging hole.
  • the frame is provided with a USB port for data transmission.
  • the frame includes a rim and temples; there are multiple surface EMG sensors, distributed around the rim.
  • the processing unit and the interaction unit are both built into the rim.
  • Facial expression information is obtained through signal transmission between the interaction units of different users' devices, achieving the purpose of VR virtual expression interaction.
  • FIG. 1 is a schematic structural front view of the present invention.
  • FIG. 2 is a schematic structural side view of the present invention.
  • FIG. 3 is a schematic structural top view of the present invention.
  • FIG. 4 is a block diagram of the work flow of the present invention.
  • Facial expression refers to the expression of various emotional states through changes in eye muscles, facial muscles, and mouth muscles. Facial expressions are a very important means of non-verbal communication.
  • the face uses the motion of dozens of muscles to accurately convey different mentalities and emotions. Any kind of facial expression is caused by the overall function of facial muscles, but the muscles in certain specific parts of the face have a more obvious effect on expressing some special emotions.
  • The mouth, cheeks, eyebrows, and forehead are the key regions for pleasure; the nose, cheeks, and mouth express disgust; the eyebrows, forehead, eyes, and eyelids express sadness; the eyes and eyelids express fear. Therefore, the state of facial muscle movement can be obtained by detecting the motion information of the facial muscles; since muscle movement generates electromyographic signals, facial expression characteristics can be obtained by measuring the facial electromyographic (EMG) signal.
  • EMG: facial electromyographic signal
  • The myoelectric (EMG) signal is the superposition in time and space of the motor unit action potentials (MUAPs) of many muscle fibers.
  • SEMG: surface electromyographic signal
  • SEMG is the combined effect, at the skin surface, of superficial muscle EMG and the electrical activity of nerve trunks, and can reflect neuromuscular activity to a certain extent. Compared with needle-electrode EMG, SEMG is non-invasive, causes no trauma, and is simple to operate. SEMG therefore has important practical value in clinical medicine, ergonomics, rehabilitation medicine, and sports science.
  • Electromyographic signals are bioelectrical signals when the neuromuscular system is guided, amplified, displayed, and recorded from the surface of the skin through electrodes.
  • the signal waveform is highly random and unstable. It correlates to varying degrees with muscle activity and function, so it can reflect neuromuscular activity to a certain extent, and it is an important method for non-invasively detecting muscle activity at the body surface. Compared with needle-electrode EMG, SEMG is non-invasive, causes no trauma, and is simple to operate.
  • the surface muscle sensors of the present invention mainly monitor the corrugator supercilii and depressor supercilii of the human face (expressions such as worry and thinking make these muscle groups contract tightly and produce a knitted-brow expression), the zygomaticus major (the main factor producing a smiling expression), the orbicularis oculi (expressions such as closing the eyes and thinking affect the orbicularis oculi), and the ear cluster (tension and anger), among others.
  • convolutional neural network technology is used as the basis of the algorithm, supplemented by a noise-reduction algorithm and a feature-extraction algorithm, to analyze and determine facial expression changes.
  • Figure 1-3 is a schematic structural view of three views of smart VR glasses based on facial expression recognition.
  • the frame includes a rim 2 and temples 3, and a plurality of surface EMG sensors 1 are provided around the rim 2 to collect the EMG signals of the human face.
  • the frame has a built-in rechargeable battery.
  • the frame is provided with a charging hole 5 and a USB port 6 for data transmission.
  • the processing unit 7 and the interaction unit 10 are both built into the frame 2.
  • FIG. 4 is a block diagram of the work flow of the dual-user-based intelligent VR glasses based on facial expression recognition in this embodiment.
  • the EMG signal acquisition unit collects changes in the surface charge signal of the facial muscles through the surface EMG sensors 1, and the differential-mode amplifier 8 amplifies the signal of the surface EMG sensors 1 and transmits it to the processing unit 7;
  • the processing unit 7 is provided with an A/D converter 9, which converts the electrical signal collected by the surface EMG sensors 1 into a digital signal for transmission in the circuit; the converted digital signal is matched against the expression data in the database and then transmitted to the interaction unit 10 to generate a corresponding display signal;
  • the interaction unit 10 transmits display signals to, and receives them from, other users' devices; after receiving an external display signal it transmits the signal to the display unit 11. Transmission between the interaction units 10 of different devices goes through a cloud server.
  • the display unit 11 displays the 3D expression image on the VR electronic lens 4 according to the display signal transmitted from the interaction unit 10.
  • the display unit 11 includes a model library unit for processing expressions: the user selects any model in the model library as a carrier, and the transmitted expression is synthesized with the model and presented on the VR electronic lens 4.
  • FIG. 4 is a block diagram of a dual-user-based work flow in this embodiment. According to this working principle, it can be applied to multi-user implementation.
  • the base of the electrode pads of the surface EMG sensor 1 in this embodiment is made of copper with a silver-plated surface, in a bipolar configuration.
  • the EMG signal is detected by the two silver-plated copper electrodes and the two input signals are subtracted: the identical "common-mode" component is removed, and only the differing "differential-mode" component is amplified. Any noise source far from the detection point appears at the detection point as a "common-mode" signal, while the signal near the detection surface appears as a differential-mode signal and is amplified.
  • CMRR: common-mode rejection ratio
  • the sensor is an INA128 chip, which is used to measure the electrical signals generated by the muscle.
  • the chip has a high acquisition rate, up to 200 Hz, and can dynamically collect the movement of human muscles in real time.
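The 200 Hz acquisition rate above implies a concrete real-time budget. As an illustrative sketch (the 250 ms window length and the ring-buffer design are assumptions for illustration, not taken from the patent), a fixed-size buffer can hold the most recent samples for analysis:

```python
from collections import deque

class EmgRingBuffer:
    """Fixed-size buffer holding the most recent EMG samples.

    At a 200 Hz acquisition rate, a hypothetical 250 ms analysis
    window corresponds to 50 samples.
    """
    def __init__(self, fs_hz=200, window_ms=250):
        self.fs_hz = fs_hz
        self.n = int(fs_hz * window_ms / 1000)   # samples per window (50)
        self.buf = deque(maxlen=self.n)          # old samples fall off the front

    def push(self, sample):
        self.buf.append(sample)

    def window(self):
        """Return the current analysis window, or None until it is full."""
        if len(self.buf) < self.n:
            return None
        return list(self.buf)

rb = EmgRingBuffer()
for i in range(60):          # simulate 60 samples (300 ms) of input
    rb.push(float(i))
w = rb.window()
print(len(w), w[0], w[-1])   # 50 10.0 59.0 — only the newest 50 samples remain
```

The `deque(maxlen=...)` discards the oldest sample automatically, so each new sample yields an up-to-date sliding window without copying.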

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

Intelligent VR glasses based on facial expression recognition, comprising a frame on which a VR electronic lens (4) is provided to display electronic images, and further comprising an electromyographic (EMG) signal acquisition unit, which uses a surface EMG sensor (1) to collect changes in the surface charge signal of the facial muscles and passes them to a processing unit (7). The processing unit (7) converts the electrical signal collected by the surface EMG sensor (1) into a digital signal; the converted digital signal is matched against the expression data in a database and transmitted to an interaction unit (10) to generate a corresponding display signal. The interaction unit (10) transmits display signals to, and receives them from, the outside; after receiving an external display signal it forwards the signal to a display unit (11). The display unit (11) displays a 3D expression image on the VR electronic lens (4) according to the display signal from the interaction unit (10). The intelligent VR glasses are applied to the interactive experience between VR users, transmitting expressions to both users as images and increasing the pleasure of the experience.

Description

Intelligent VR Glasses Based on Facial Expression Recognition
Technical Field
The present invention relates to the technical field of networked appliances, and in particular to intelligent VR glasses based on facial expression recognition.
Background Art
Facial expression refers to the expression of various emotional states through changes in the eye, face, and mouth muscles, and is a very important means of non-verbal communication. The face uses the motion of dozens of muscles to accurately convey different states of mind and emotions. Any facial expression results from the overall action of the facial muscles, but the muscles of certain specific parts of the face play a more obvious role in expressing particular emotions. The mouth, cheeks, eyebrows, and forehead are the key regions for pleasure; the nose, cheeks, and mouth express disgust; the eyebrows, forehead, eyes, and eyelids express sadness; the eyes and eyelids express fear. The state of facial muscle movement can therefore be obtained by detecting the motion information of the facial muscles; since muscle movement generates electromyographic signals, facial expression characteristics can be obtained by measuring the facial EMG signal.
VR technology is now emerging in the market, and the interaction between VR technology and virtual networks can bring rich life and entertainment experiences. In the prior art, however, VR interaction mostly remains at the stage of interaction between a person and the virtual world; interaction between people is mostly limited to voice, lacks imagery, and the experience is rather impoverished.
Summary of the Invention
The purpose of the present invention is to overcome the shortcomings of the prior art and to provide intelligent VR glasses based on facial expression recognition, which are applied to the interactive experience between VR users and transmit expressions to both users as images, increasing the pleasure of the experience.
The purpose of the present invention is achieved by the following technical measures:
Intelligent VR glasses based on facial expression recognition include a frame on which a VR electronic lens is provided to display electronic images, and further include an EMG signal acquisition unit, which uses surface EMG sensors to collect changes in the surface charge signal of the facial muscles and passes them to a processing unit;
a processing unit, which converts the electrical signal collected by the surface EMG sensors into a digital signal; the converted digital signal is matched against the expression data in a database and transmitted to an interaction unit to generate a corresponding display signal;
an interaction unit, which transmits display signals to the outside or receives display signals from the outside, and after receiving an external display signal transmits the signal to the display unit;
a display unit, which displays a 3D expression image on the VR electronic lens according to the display signal from the interaction unit.
Further, the signal acquisition unit is also provided with a differential-mode amplifier that amplifies the signal of the surface EMG sensor before passing it to the processing unit.
Further, the processing unit is provided with an A/D converter, which converts the electrical signal collected by the surface EMG sensor into a digital signal for transmission in the circuit.
Further, the frame has a built-in rechargeable battery, and the frame is provided with a charging hole.
Further, the frame is provided with a USB port for data transmission.
Further, the frame includes a rim and temples; there are multiple surface EMG sensors, distributed around the rim.
Further, the processing unit and the interaction unit are both built into the rim.
Compared with the prior art, the beneficial effects of the present invention are:
facial expression information is obtained through signal transmission between the interaction units of different users' devices, achieving the purpose of VR virtual expression interaction;
environmental awareness and remote virtual interaction can be realized, the experience is good, the power supply is simple and fast, and the device is reusable.
The present invention is described in detail below with reference to the drawings and specific embodiments.
Brief Description of the Drawings
FIG. 1 is a schematic structural front view of the present invention.
FIG. 2 is a schematic structural side view of the present invention.
FIG. 3 is a schematic structural top view of the present invention.
FIG. 4 is a block diagram of the work flow of the present invention.
Reference numerals: 1. surface EMG sensor; 2. rim; 3. temple; 4. VR electronic lens; 5. charging hole; 6. USB port; 7. processing unit; 8. differential-mode amplifier; 9. A/D converter; 10. interaction unit; 11. display unit.
Detailed Description of the Embodiments
Facial expression refers to the expression of various emotional states through changes in the eye, face, and mouth muscles, and is a very important means of non-verbal communication. The face uses the motion of dozens of muscles to accurately convey different states of mind and emotions. Any facial expression results from the overall action of the facial muscles, but the muscles of certain specific parts of the face play a more obvious role in expressing particular emotions. The mouth, cheeks, eyebrows, and forehead are the key regions for pleasure; the nose, cheeks, and mouth express disgust; the eyebrows, forehead, eyes, and eyelids express sadness; the eyes and eyelids express fear. The state of facial muscle movement can therefore be obtained by detecting the motion information of the facial muscles; since muscle movement generates electromyographic signals, facial expression characteristics can be obtained by measuring the facial electromyographic (EMG) signal.
The electromyographic (EMG) signal is the superposition in time and space of the motor unit action potentials (MUAPs) of many muscle fibers. The surface electromyographic signal (SEMG) is the combined effect, at the skin surface, of superficial muscle EMG and the electrical activity of nerve trunks, and can reflect neuromuscular activity to a certain extent. Compared with needle-electrode EMG, SEMG is non-invasive, causes no trauma, and is simple to operate. SEMG therefore has important practical value in clinical medicine, ergonomics, rehabilitation medicine, and sports science.
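The picture of an SEMG trace as the temporal superposition of MUAPs from many motor units can be illustrated with a toy simulation. The biphasic waveform shape, firing intervals, and number of units below are invented for illustration only:

```python
import random

def muap_shape(k):
    """Toy biphasic motor-unit action potential, 8 samples long."""
    template = [0.0, 0.4, 1.0, 0.3, -0.6, -0.9, -0.2, 0.0]
    return template[k] if 0 <= k < len(template) else 0.0

def simulate_semg(n_samples=200, n_units=5, seed=0):
    """Sum randomly timed MUAP trains from several motor units."""
    rng = random.Random(seed)
    signal = [0.0] * n_samples
    for _ in range(n_units):
        gain = rng.uniform(0.5, 1.5)           # per-unit amplitude
        t = rng.randrange(0, 20)               # first firing time
        while t < n_samples:                   # periodic firing with jitter
            for k in range(8):
                if t + k < n_samples:
                    signal[t + k] += gain * muap_shape(k)
            t += rng.randrange(15, 30)
    return signal

sig = simulate_semg()
print(len(sig), max(sig) > 0.0, min(sig) < 0.0)
```

Overlapping firings from different units add sample-by-sample, which is exactly the time-and-space superposition the definition describes; the resulting trace looks irregular even though every constituent waveform is identical.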
The surface EMG signal is the bioelectrical signal of neuromuscular system activity that is picked up from the skin surface through electrodes, amplified, displayed, and recorded; its waveform is highly random and unstable. It correlates to varying degrees with the activity and functional state of the muscle, so it can reflect neuromuscular activity to a certain extent and is an important method for non-invasively detecting muscle activity at the body surface. Compared with needle-electrode EMG, SEMG is non-invasive, causes no trauma, and is simple to operate.
The surface muscle sensors of the present invention mainly monitor the corrugator supercilii and depressor supercilii of the human face (expressions such as worry and thinking make these muscle groups contract tightly and produce a knitted-brow expression), the zygomaticus major (the main factor producing a smiling expression), the orbicularis oculi (expressions such as closing the eyes and thinking affect the orbicularis oculi), and the ear cluster (tension and anger), among others. Based on the collected changes of the surface signals of these main muscles, convolutional neural network technology is used as the basis of the algorithm, supplemented by a noise-reduction algorithm and a feature-extraction algorithm, to analyze and determine facial expression changes.
FIGS. 1-3 are schematic structural views of three views of the intelligent VR glasses based on facial expression recognition. The frame includes a rim 2 and temples 3; a plurality of surface EMG sensors 1 are arranged around the rim 2 to collect the EMG signals of the human face. The frame has a built-in rechargeable battery and is provided with a charging hole 5 and a USB port 6 for data transmission. The processing unit 7 and the interaction unit 10 are both built into the rim 2.
FIG. 4 is a block diagram of the work flow of the dual-user intelligent VR glasses based on facial expression recognition in this embodiment. The EMG signal acquisition unit collects changes in the surface charge signal of the facial muscles through the surface EMG sensors 1; the differential-mode amplifier 8 amplifies the signal of the surface EMG sensors 1 and transmits it to the processing unit 7.
The processing unit 7 is provided with an A/D converter 9, which converts the electrical signal collected by the surface EMG sensors 1 into a digital signal for transmission in the circuit; the converted digital signal is matched against the expression data in the database and then transmitted to the interaction unit 10 to generate a corresponding display signal.
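The A/D conversion step can be modelled in a few lines. The 10-bit resolution and 3.3 V reference below are illustrative assumptions; the patent does not specify the converter's parameters:

```python
def adc_convert(voltage, v_ref=3.3, bits=10):
    """Model an A/D converter: map 0..v_ref volts to an integer code.

    v_ref and the bit depth are hypothetical; a 10-bit converter
    produces codes 0..1023.
    """
    levels = (1 << bits) - 1                    # 1023 for 10 bits
    clamped = min(max(voltage, 0.0), v_ref)     # out-of-range inputs saturate
    return round(clamped / v_ref * levels)

print(adc_convert(0.0))     # 0
print(adc_convert(3.3))     # 1023
print(adc_convert(1.65))    # mid-scale code
```

Each analog sample from the amplifier becomes one integer code; the stream of codes is what gets matched against the expression data in the database.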
The interaction unit 10 transmits display signals to, and receives them from, other users' devices; after receiving an external display signal it transmits the signal to the display unit 11. Transmission between the interaction units 10 of different devices goes through a cloud server.
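The cloud-server relay between interaction units can be sketched as an in-memory message broker. This class only mimics the message flow; the device IDs and the dictionary message format are invented, and a real deployment would use a networked service rather than a local object:

```python
class CloudRelay:
    """Minimal in-memory stand-in for the cloud server that forwards
    display signals between the interaction units of paired devices."""

    def __init__(self):
        self.inboxes = {}                  # device_id -> list of messages

    def register(self, device_id):
        self.inboxes[device_id] = []

    def send(self, sender, receiver, display_signal):
        if receiver not in self.inboxes:
            raise KeyError(f"unknown device: {receiver}")
        self.inboxes[receiver].append((sender, display_signal))

    def receive(self, device_id):
        """Drain and return all messages queued for a device."""
        msgs, self.inboxes[device_id] = self.inboxes[device_id], []
        return msgs

relay = CloudRelay()
relay.register("glasses-A")
relay.register("glasses-B")
relay.send("glasses-A", "glasses-B", {"expression": "smile", "model": 2})
print(relay.receive("glasses-B"))   # [('glasses-A', {'expression': 'smile', 'model': 2})]
```

Because every device only talks to the relay, the same scheme extends from the dual-user case to the multi-user case mentioned below: each additional device just registers another inbox.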
The display unit 11 displays a 3D expression image on the VR electronic lens 4 according to the display signal from the interaction unit 10. The display unit 11 includes a model library unit for processing expressions: the user selects any model in the model library as a carrier, and the transmitted expression is synthesized with the model and presented on the VR electronic lens 4.
FIG. 4 is a block diagram of the dual-user work flow in this embodiment; by the same working principle it can be applied to a multi-user implementation.
In this embodiment, the base of the electrode pads of the surface EMG sensor 1 is made of copper with a silver-plated surface, in a bipolar configuration. Between the two electrodes there is a comparative reference electrode (an indifferent electrode) used to reduce noise and improve the suppression of common-mode signals. Noise on the wires is reduced by differential amplification: the EMG signal is detected by the two silver-plated copper electrodes and the two input signals are subtracted, so the identical "common-mode" component is removed and only the differing "differential-mode" component is amplified. Any noise source far from the detection point appears at the detection point as a "common-mode" signal, while the signal near the detection surface appears as a differential-mode signal and is amplified. Thus power-line noise, which is relatively far away, is largely eliminated, while the relatively nearby EMG signal is amplified; the accuracy of this rejection is measured by the common-mode rejection ratio (CMRR). As EMG information propagates through body tissue (a volume conductor), it attenuates rapidly with distance, so the electrodes should be placed on the belly of the muscle where the EMG is strongest, to reduce interference (crosstalk) from adjacent muscles. Using smaller electrodes improves selectivity but introduces a larger skin-contact resistance. The sensor uses an INA128 chip to measure the electrical signals generated by the muscles; the chip has a high acquisition rate, up to 200 Hz, and can dynamically collect the movement of human muscles in real time.
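The common-mode rejection described in this paragraph can be demonstrated numerically. The gains below are arbitrary illustrative values, not INA128 specifications:

```python
def differential_amplifier(v_plus, v_minus, a_diff=1000.0, a_cm=0.1):
    """Idealized instrumentation amplifier: amplify the differential
    component, largely reject the common-mode component.

    With these illustrative gains, CMRR = a_diff / a_cm = 10000 (80 dB).
    """
    v_d = v_plus - v_minus              # differential ("EMG") component
    v_c = (v_plus + v_minus) / 2.0      # common-mode ("mains noise") component
    return a_diff * v_d + a_cm * v_c

emg = 0.001          # 1 mV EMG difference between the two electrodes
noise = 0.5          # 0.5 V of power-line noise common to both electrodes
out = differential_amplifier(noise + emg / 2, noise - emg / 2)
print(out)           # ~1.05 V: EMG amplified 1000x, noise contributes only 0.05 V
```

Even though the common-mode noise at the electrodes is 500 times larger than the EMG signal, it contributes only about 5% of the amplifier output, which is the behavior the CMRR figure quantifies.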

Claims (7)

  1. Intelligent VR glasses based on facial expression recognition, including a frame on which a VR electronic lens is provided to display electronic images, characterized by further including an EMG signal acquisition unit, which uses surface EMG sensors to collect changes in the surface charge signal of the facial muscles and passes them to a processing unit;
    a processing unit, which converts the electrical signal collected by the surface EMG sensors into a digital signal; the converted digital signal is matched against the expression data in a database and transmitted to an interaction unit to generate a corresponding display signal;
    an interaction unit, which transmits display signals to the outside or receives display signals from the outside, and after receiving an external display signal transmits the signal to the display unit;
    a display unit, which displays a 3D expression image on the VR electronic lens according to the display signal from the interaction unit.
  2. The intelligent VR glasses based on facial expression recognition according to claim 1, characterized in that the signal acquisition unit is also provided with a differential-mode amplifier that amplifies the signal of the surface EMG sensor before passing it to the processing unit.
  3. The intelligent VR glasses based on facial expression recognition according to claim 1, characterized in that the processing unit is provided with an A/D converter, which converts the electrical signal collected by the surface EMG sensor into a digital signal for transmission in the circuit.
  4. The intelligent VR glasses based on facial expression recognition according to claim 1, characterized in that the frame has a built-in rechargeable battery and is provided with a charging hole.
  5. The intelligent VR glasses based on facial expression recognition according to claim 1, characterized in that the frame is provided with a USB port for data transmission.
  6. The intelligent VR glasses based on facial expression recognition according to claim 1, characterized in that the frame includes a rim and temples, and there are multiple surface EMG sensors, distributed around the rim.
  7. The intelligent VR glasses based on facial expression recognition according to claim 6, characterized in that the processing unit and the interaction unit are both built into the rim.
PCT/CN2019/086547 2018-06-01 2019-05-13 Intelligent VR glasses based on facial expression recognition WO2019228174A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810554101.1 2018-06-01
CN201810554101.1A CN108549153A (zh) 2018-06-01 2018-09-18 Intelligent VR glasses based on facial expression recognition

Publications (1)

Publication Number Publication Date
WO2019228174A1 (zh) 2019-12-05

Family

ID=63511624

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/086547 WO2019228174A1 (zh) 2018-06-01 2019-05-13 一种基于面目表情识别的智能vr眼镜

Country Status (2)

Country Link
CN (1) CN108549153A (zh)
WO (1) WO2019228174A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022265653A1 (en) * 2021-06-18 2022-12-22 Hewlett-Packard Development Company, L.P. Automated capture of neutral facial expression

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108549153A (zh) * 2018-06-01 2018-09-18 烟台市安特洛普网络科技有限公司 Intelligent VR glasses based on facial expression recognition
US11768379B2 (en) 2020-03-17 2023-09-26 Apple Inc. Electronic device with facial sensors
CN113791692A (zh) * 2021-09-28 2021-12-14 歌尔光学科技有限公司 Interaction method, terminal device, and readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104142583A (zh) * 2014-07-18 2014-11-12 广州市香港科大霍英东研究院 Smart glasses with blink detection function and implementation method
CN105487676A (zh) * 2016-01-17 2016-04-13 仲佳 Virtual reality device with human-computer interaction function based on head bioelectrical signals
US20160217621A1 (en) * 2015-01-28 2016-07-28 Sony Computer Entertainment Europe Limited Image processing
CN106170083A (zh) * 2015-05-18 2016-11-30 三星电子株式会社 Image processing for head-mounted display devices
CN106390318A (zh) * 2016-10-18 2017-02-15 京东方科技集团股份有限公司 Smart mask and control method therefor
CN106599811A (zh) * 2016-11-29 2017-04-26 叶飞 Facial expression tracking method for a VR head-mounted display
CN108549153A (zh) * 2018-06-01 2018-09-18 烟台市安特洛普网络科技有限公司 Intelligent VR glasses based on facial expression recognition
CN208421417U (zh) * 2018-06-01 2019-01-22 烟台市安特洛普网络科技有限公司 Intelligent VR glasses based on facial expression recognition



Also Published As

Publication number Publication date
CN108549153A (zh) 2018-09-18

Similar Documents

Publication Publication Date Title
WO2019228174A1 (zh) Intelligent VR glasses based on facial expression recognition
WO2020119245A1 (zh) Emotion recognition system and method based on a wearable wristband
Strauss et al. The handwave bluetooth skin conductance sensor
CN103006187B (zh) Non-contact vital sign data monitoring system and monitoring method
EP2698112B1 (en) Real-time stress determination of an individual
Kanoh et al. Development of an eyewear to measure eye and body movements
CN106943258A (zh) Multifunctional wireless smart mattress and human physiological signal measurement method thereof
CN109743656A (zh) Smart sports headphones based on EEG intention, and implementation method and system thereof
CN104391569A (zh) Brain-computer interface system based on multimodal perception of cognitive and emotional states
CN108968952A (zh) Device for synchronous acquisition of EEG, EMG, and inertial information
CN108379713A (zh) Interactive meditation system based on virtual reality
CN109124655A (zh) Mental state analysis method, apparatus, device, computer medium, and multifunctional chair
JP2011143059A (ja) Facial motion estimation device and facial motion estimation method
CN106406534A (zh) Virtual reality game design technique with brainwave-assisted control
Wang et al. Developing an online steady-state visual evoked potential-based brain-computer interface system using EarEEG
CN114298089A (zh) Multimodal strength training assistance method and system
Kim et al. Interactive emotional content communications system using portable wireless biofeedback device
CN103690161B (zh) Wearable EEG sleep quality assessment instrument
Chen et al. Exgsense: Toward facial gesture sensing with a sparse near-eye sensor array
CN104793743B (zh) Virtual social system and control method thereof
Han Using adaptive wireless transmission of wearable sensor device for target heart rate monitoring of sports information
CN103690179B (zh) Wearable EEG relaxation training instrument
Tivatansakul et al. Healthcare system design focusing on emotional aspects using augmented reality—Relaxed service design
CN206147520U (zh) Data acquisition device for brain-computer interface control of virtual reality based on combined motor imagery and P300
CN208421417U (zh) Intelligent VR glasses based on facial expression recognition

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19810665

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19810665

Country of ref document: EP

Kind code of ref document: A1