WO2022141894A1 - Three-dimensional feature emotion analysis method integrating facial expressions and body movements - Google Patents

Three-dimensional feature emotion analysis method integrating facial expressions and body movements

Info

Publication number
WO2022141894A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
limb
different
analysis method
motion
Prior art date
Application number
PCT/CN2021/085070
Other languages
English (en)
French (fr)
Inventor
马勇
马渊
Original Assignee
苏州源想理念文化发展有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州源想理念文化发展有限公司
Publication of WO2022141894A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • The invention relates to the technical field of intelligent analysis, and in particular to a three-dimensional feature emotion analysis method integrating facial expressions and body movements.
  • Emotion recognition can be applied to many aspects of life: game makers can analyze players' emotions and interact with them according to different expressions, improving the gaming experience; camera makers can use the technology to capture human expressions, for example capturing the facial expression of the person being photographed and completing the shot quickly when a smiling or angry picture is wanted; governments or sociologists can install cameras in public places and analyze the expressions and body movements of whole social groups to understand people's life and work pressure; shopping malls can conduct market research on products based on customers' movements and expressions while shopping.
  • The invention overcomes the deficiencies of the prior art and provides a three-dimensional feature emotion analysis method integrating facial expressions and body movements.
  • The technical solution adopted by the present invention is a three-dimensional feature emotion analysis method integrating facial expressions and body movements, namely a method for collecting facial muscle and body movement data through VR-based virtual interaction, characterized in that the analysis method includes the following steps:
  • S1, attachment and wearing: attach infrared sensors to the facial muscle positions with a large range of expressive movement, attach optical inertial sensors to the limb muscle positions with a large range of motion, and wear the VR device on the head;
  • S2, VR virtualization: present various game scenes, audio, or text virtually in the VR device;
  • S3, expression and body movement: the infrared sensors detect the movement of each facial muscle, while the pupil detector built into the VR device detects pupil contraction; when the limbs move, the optical inertial sensors locate the joint nodes precisely and reflect light signals to the photographing equipment, so that the movements of the limb joints are captured in real time and limb joint motion parameter data are obtained; meanwhile, the photographing equipment records image information;
  • S4, export motion data: export the real-time facial muscle movement data, the real-time limb joint movement data, and the pupil contraction data, and aggregate them for analysis;
  • S5, immersion analysis: compare the exported data against the ranges of facial muscle movement data and limb joint movement data recorded in the database for the hyper-excited and excited states, and determine the degree to which the tested person is immersed in the VR game, audio, or text.
  • Tested persons of different age groups and genders are tested with different VR game scenes, audio, or text to obtain the degree of immersion of different age groups and genders in different VR games, audio, or text.
  • The optical inertial sensors are attached at least to the upper arms, elbows, wrists, hips, thighs, knees, calves, and ankles.
  • Different types of virtual games, audio, or text in the VR device are switched at preset time intervals to detect the degree to which the same tested person is immersed in each.
  • The limb joint motion parameter data include at least limb movement range, limb movement strength, limb movement duration, and body posture.
  • The excitement factor, combined with the pupil constriction range, further evaluates the probability that the tested person's degree of immersion falls into the low, moderate, high, or intense category.
  • The infrared sensors are attached at least on the forehead, eyebrows, eye sockets, cheeks, corners of the mouth, and chin of the face.
  • At least 26 infrared sensors and optical inertial sensors are arranged at the facial muscle positions with a large range of expressive movement and at the limb muscle positions with a large range of motion; they are arranged symmetrically about the central axis of the human body, or only on one side of the body.
  • The inertial positioning sensor includes at least one or more of an accelerometer, a fiber-optic gyroscope, or a vibration gyroscope, used to measure the person's limb movement parameters.
  • The database's real-time facial muscle expression data, real-time limb joint motion data, pupil contraction data, and corresponding static images include at least 100,000 sets of data recorded in the hyper-excited and excited states for different age groups, face shapes, genders, and intelligence levels.
  • The present invention remedies the defects in the background art and has the following beneficial effects:
  • The present invention provides a three-dimensional feature emotion analysis method that integrates expressions and body movements. The method is based on virtual reality technology: different virtual game scenes, audio, or text in VR stimulate the latent interest of the tested person, and hyper-excitement and excitement are analyzed and evaluated from real-time facial muscle movement data, real-time limb joint movement data, and pupil contraction data, so that the degree to which the tested person is immersed in different game scenes, audio, or text can be judged. This helps VR game makers and VR scene producers understand which types are more popular and make consumers more immersed.
  • The present invention takes the differences between the tested person's motion parameters for each facial muscle region and limb joint motion data and the corresponding data for the same scene in the database, averages the differences and then takes the variance, and thereby predicts the tested person's degree of immersion; the degree of immersion is further refined using the pupil contraction data.
  • Infrared sensors are attached to the facial muscle positions with a large range of expressive movement and optical inertial sensors to the limb muscle positions with a large range of motion, which allows subtle changes in the facial muscles and the range of limb movement to be reflected more accurately. Data on the tested person's age, gender, face shape, and intelligence are also collected; these data likewise factor into the immersion assessment, greatly improving the accuracy of the immersion analysis.
  • Motion capture realized with the optical inertial sensors of the present invention achieves sub-millimeter accuracy, a frame rate of up to 480 fps, and a latency of less than 1 ms, and positioning can be cascaded over a larger area, greatly improving the virtual reality experience.
  • FIG. 1 is a flow chart of the analysis method of the present invention.
  • The terms "installed", "connected", and "coupled" should be understood broadly: the connection may be fixed, detachable, or integral; it may be mechanical or electrical; and it may be direct, indirect through an intermediate medium, or an internal communication between two elements.
  • A three-dimensional feature emotion analysis method integrating expressions and body movements is a method for collecting facial muscle and body movement data through VR-based virtual interaction.
  • The analysis method is based on virtual reality technology: different virtual game scenes, audio, or text in VR stimulate the latent interest of the tested person, and hyper-excitement and excitement are analyzed and evaluated from real-time facial muscle movement data, real-time limb joint movement data, and pupil contraction data, so as to judge and evaluate the degree to which the tested person is immersed in different game scenes, audio, or text, helping VR game makers and VR scene producers understand which types are more popular and make consumers more immersed.
  • The three-dimensional feature emotion analysis method integrating expressions and body movements includes the following steps:
  • S1, attachment and wearing: attach infrared sensors to the facial muscle positions with a large range of expressive movement, attach optical inertial sensors to the limb muscle positions with a large range of motion, and wear the VR device on the head;
  • S2, VR virtualization: present various game scenes, audio, or text virtually in the VR device;
  • S3, expression and body movement: the infrared sensors detect the movement of each facial muscle, while the pupil detector built into the VR device detects pupil contraction; when the limbs move, the optical inertial sensors locate the joint nodes precisely and reflect light signals to the photographing equipment, so that the movements of the limb joints are captured in real time and limb joint motion parameter data are obtained; meanwhile, the photographing equipment records image information;
  • S4, export motion data: export the real-time facial muscle movement data, the real-time limb joint movement data, and the pupil contraction data, and aggregate them for analysis;
  • S5, immersion analysis: compare the exported data against the database ranges for the hyper-excited and excited states. The tested person's motion parameters for each facial muscle region and the limb joint motion data are each differenced against the values of a normal person for the same VR game scene, audio, or text; the differences are averaged and their variance is taken, yielding an excitement factor γ (γ < 2: low immersion; 2 ≤ γ < 5: moderate; 5 ≤ γ < 8: high; γ ≥ 8: intense). The excitement factor, combined with the pupil constriction range, further evaluates the probability that the tested person's degree of immersion falls into the low, moderate, high, or intense category (a minimal sketch of this computation appears at the end of this list).
  • Infrared sensors are attached at least on the forehead, eyebrows, eye sockets, cheeks, corners of the mouth, and chin; optical inertial sensors are attached at least to the upper arms, elbows, wrists, hips, thighs, knees, calves, and ankles. At least 26 infrared sensors and optical inertial sensors are installed at the facial muscle positions with a large range of expressive movement and at the limb muscle positions with a large range of motion, arranged symmetrically about the central axis of the human body, or installed on only one side of the body.
  • Different types of virtual games, audio, or text in the VR device are switched at preset time intervals to detect the degree to which the same tested person is immersed in each.
  • The limb joint motion parameter data of the present invention include at least limb movement range, limb movement strength, limb movement duration, and body posture.
  • The inertial positioning sensor of the present invention includes at least one or more of an accelerometer, a fiber-optic gyroscope, or a vibration gyroscope, used to measure the person's limb movement parameters; preferably an accelerometer and a fiber-optic gyroscope are used.
  • Motion capture realized with the optical inertial sensors of the present invention achieves sub-millimeter accuracy, a frame rate of up to 480 fps, and a latency of less than 1 ms, and positioning can be cascaded over a larger area, greatly improving the virtual reality experience.
  • The database of the present invention contains real-time facial muscle expression data, real-time limb joint motion data, pupil contraction data, and corresponding static images, including at least 100,000 sets of data recorded in the hyper-excited and excited states for different age groups, face shapes, genders, and intelligence levels.
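The excitement factor described in the steps above can be made concrete with a short sketch. This is a minimal illustration and not part of the application: the function names, dictionary layout, and reference lookup are hypothetical; only the difference/average/variance procedure and the γ thresholds (taken from the detailed description) follow the specification, and "average the differences, then take the variance" is read here as the variance of the pooled per-region differences, which is one plausible interpretation of the wording.

```python
# Minimal illustrative sketch of the excitement factor (S52) and the immersion
# categories (S52/S53). All names and the data layout are hypothetical; the
# gamma thresholds follow the detailed description.
from statistics import pvariance


def excitement_factor(face_params, face_reference, limb_params, limb_reference):
    """Excitement factor gamma for one tested person and one VR content item.

    Each argument maps a facial muscle region or a limb joint to a motion
    parameter; the *_reference dicts hold the database values of a normal
    person for the same VR game scene, audio, or text.
    """
    face_diffs = [face_params[k] - face_reference[k] for k in face_params]
    limb_diffs = [limb_params[k] - limb_reference[k] for k in limb_params]
    # "Average the differences, then take the variance": read here as the
    # variance of the pooled per-region differences about their mean.
    return pvariance(face_diffs + limb_diffs)


def immersion_category(gamma):
    """Map gamma to the four immersion categories given in the description."""
    if gamma < 2:
        return "low"
    if gamma < 5:
        return "moderate"
    if gamma < 8:
        return "high"
    return "intense"
```

The probability refinement of S53, which combines the excitement factor with the pupil constriction range, is not sketched because the specification gives no formula for it.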

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A three-dimensional feature emotion analysis method integrating facial expressions and body movements includes the following steps: S1, attach infrared sensors to the facial muscle positions with a large range of expressive movement and attach optical inertial sensors to the limb muscle positions with a large range of motion; S2, present various game scenes, audio, or text virtually in a VR device; S3, the infrared sensors detect the movement of each facial muscle while the pupil detector built into the VR device detects pupil contraction, and the optical inertial sensors capture the movements of the limb joints in real time to obtain limb joint motion parameter data; S4, export the real-time facial and limb joint motion data and the pupil contraction data; S5, compare these data against the data ranges recorded in a database for the hyper-excited and excited states and determine the degree to which the tested person is immersed. The VR content stimulates the latent interest of the tested person, so that the degree to which the tested person is immersed in different game scenes, audio, or text can be judged and evaluated.

Description

Three-dimensional feature emotion analysis method integrating facial expressions and body movements
Technical Field
The present invention relates to the technical field of intelligent analysis, and in particular to a three-dimensional feature emotion analysis method integrating facial expressions and body movements.
Background Art
With the progress of computer vision and multimedia technology, intelligent emotion recognition and analysis has become one of the most active research areas in computer vision. Its purpose is to detect, track, and recognize image sequences of humans so as to interpret human behavior more scientifically. Emotion recognition can be applied to many aspects of life: game makers can analyze players' emotions and interact with them according to different expressions, improving the gaming experience; camera makers can use this technology to capture human expressions, for example capturing the facial expression of the person being photographed and completing the shot quickly when a smiling or angry picture is wanted; governments or sociologists can install cameras in public places and analyze the expressions and body movements of whole social groups to understand people's life and work pressure; shopping malls can conduct market research on products based on video of customers' movements and expressions while shopping.
In practical applications, emotion recognition based on facial expressions alone has hit a bottleneck. On the one hand, frontal facial expression recognition under laboratory conditions has reached very high recognition rates, but the same algorithms achieve much lower rates when applied to facial expressions in natural settings. On the other hand, body movements are also one of the important cues people use to read social and emotional signals, and in many applications they can effectively support emotion recognition based on facial expressions. Research on emotion recognition that fuses facial expressions and body movements is therefore of great value for the future development of intelligent human emotion recognition applications.
Summary of the Invention
The present invention overcomes the deficiencies of the prior art and provides a three-dimensional feature emotion analysis method integrating facial expressions and body movements.
To achieve the above purpose, the technical solution adopted by the present invention is a three-dimensional feature emotion analysis method integrating facial expressions and body movements, being a method for collecting facial muscle data and body movement data through VR-based virtual interaction, characterized in that the analysis method includes the following steps:
S1, attachment and wearing: attach infrared sensors to the facial muscle positions with a large range of expressive movement, attach optical inertial sensors to the limb muscle positions with a large range of motion, and wear the VR device on the head;
S2, VR virtualization: present various game scenes, audio, or text virtually in the VR device;
S3, expression and body movement: the infrared sensors detect the movement of each facial muscle, while the pupil detector built into the VR device detects pupil contraction; when the limbs move, the optical inertial sensors locate the joint nodes precisely and reflect light signals to the photographing equipment, so that the movements of the limb joints are captured in real time and limb joint motion parameter data are obtained; meanwhile, the photographing equipment records image information;
S4, export motion data: export the real-time facial muscle movement data, the real-time limb joint movement data, and the pupil contraction data, and aggregate them for analysis;
S5, immersion analysis: compare the exported data against the ranges of facial muscle movement data and limb joint movement data recorded in the database for the hyper-excited and excited states, and determine the degree to which the tested person is immersed in the VR game, audio, or text.
In a preferred embodiment of the present invention, tested persons of different age groups and genders are tested with different VR game scenes, audio, or text to obtain the degree of immersion of different age groups and genders in different VR games, audio, or text.
In a preferred embodiment of the present invention, the optical inertial sensors are attached at least to the upper arms, elbows, wrists, hips, thighs, knees, calves, and ankles.
In a preferred embodiment of the present invention, different types of virtual games, audio, or text in the VR device are switched at preset time intervals to detect the degree to which the same tested person is immersed in different virtual games, audio, or text in the VR device.
In a preferred embodiment of the present invention, in S3 the limb joint motion parameter data include at least limb movement range, limb movement strength, limb movement duration, and body posture.
In a preferred embodiment of the present invention, S5 further includes the following steps:
S51, classify the tested person's motion parameters for each facial muscle region, limb joint motion data, and pupil contraction data according to the different VR game scenes, audio, or text;
S52, take the difference between these values and the motion parameter values of each facial muscle region of a normal person for the same VR game scene, audio, or text, and likewise take the difference for the limb joint motion data; average the two sets of differences and then take the variance to obtain an excitement factor γ; if γ < 2, the tested person's degree of immersion is low; if 2 ≤ γ < 5, the degree of immersion is moderate; if 5 ≤ γ < 8, the degree of immersion is high; if γ ≥ 8, the degree of immersion is intense;
S53, the excitement factor is combined with the pupil constriction range to further evaluate the probability that the tested person's degree of immersion falls into the low, moderate, high, or intense category.
In a preferred embodiment of the present invention, the infrared sensors are attached at least on the forehead, eyebrows, eye sockets, cheeks, corners of the mouth, and chin of the face.
In a preferred embodiment of the present invention, at least 26 of the infrared sensors and the optical inertial sensors are arranged at the facial muscle positions with a large range of expressive movement and at the limb muscle positions with a large range of motion; they are arranged symmetrically about the central axis of the human body, or the infrared sensors and the optical inertial sensors are arranged on only one side of the body.
In a preferred embodiment of the present invention, the inertial positioning sensor includes at least one or more of an accelerometer, a fiber-optic gyroscope, or a vibration gyroscope, used to measure the person's limb movement parameters.
In a preferred embodiment of the present invention, the database's real-time facial muscle expression data, real-time limb joint motion data, pupil contraction data, and corresponding static images include at least 100,000 sets of data recorded in the hyper-excited and excited states for different age groups, face shapes, genders, and intelligence levels.
The present invention remedies the defects in the background art and has the following beneficial effects:
(1) The present invention provides a three-dimensional feature emotion analysis method integrating expressions and body movements. The method is based on virtual reality technology: different virtual game scenes, audio, or text in VR stimulate the latent interest of the tested person, and hyper-excitement and excitement are analyzed and evaluated from real-time facial muscle movement data, real-time limb joint movement data, and pupil contraction data, so that the degree to which the tested person is immersed in different game scenes, audio, or text can be judged and evaluated. This helps VR game makers and VR scene producers understand which types are more popular and make consumers more immersed.
(2) The present invention takes the differences between the tested person's motion parameters for each facial muscle region and limb joint motion data and the corresponding data for the same scene in the database, averages the differences and then takes the variance, and thereby predicts the tested person's degree of immersion; the degree of immersion is further refined using the pupil contraction data.
(3) The present invention attaches infrared sensors to the facial muscle positions with a large range of expressive movement and optical inertial sensors to the limb muscle positions with a large range of motion, which allows subtle changes in the facial muscles and the range of limb movement to be reflected more accurately. Data on the tested person's age, gender, face shape, and intelligence are also collected; these data likewise factor into the immersion assessment, greatly improving the accuracy of the immersion analysis.
(4) Motion capture realized with the optical inertial sensors of the present invention achieves sub-millimeter accuracy, a frame rate of up to 480 fps, and a latency of less than 1 ms, and positioning can be cascaded over a larger area, greatly improving the virtual reality experience.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of the analysis method of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Many specific details are set forth in the following description to facilitate a full understanding of the present invention, but the present invention can also be implemented in ways other than those described here; therefore, the scope of protection of the present invention is not limited by the specific embodiments disclosed below.
In the description of this application, it should be understood that terms indicating orientation or positional relationships, such as "center", "longitudinal", "transverse", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer", are based on the orientations or positional relationships shown in the drawings, are used only to facilitate and simplify the description of this application, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they therefore cannot be understood as limiting the scope of protection of this application. In addition, the terms "first", "second", and the like are used only for descriptive purposes and cannot be understood as indicating or implying relative importance or implicitly specifying the number of technical features indicated; thus, a feature qualified by "first", "second", and the like may explicitly or implicitly include one or more such features. In the description of this invention, unless otherwise stated, "a plurality of" means two or more.
In the description of this application, it should be noted that, unless otherwise explicitly specified and limited, the terms "installed", "connected", and "coupled" should be understood broadly: the connection may be fixed, detachable, or integral; it may be mechanical or electrical; and it may be direct, indirect through an intermediate medium, or an internal communication between two elements. For a person of ordinary skill in the art, the specific meanings of the above terms in this application can be understood according to the specific circumstances.
As shown in FIG. 1, a three-dimensional feature emotion analysis method integrating expressions and body movements is a method for collecting facial muscle data and body movement data through VR-based virtual interaction. The analysis method is based on virtual reality technology: different virtual game scenes, audio, or text in VR stimulate the latent interest of the tested person, and hyper-excitement and excitement are analyzed and evaluated from real-time facial muscle movement data, real-time limb joint movement data, and pupil contraction data, so as to judge and evaluate the degree to which the tested person is immersed in different game scenes, audio, or text. This helps VR game makers and VR scene producers understand which types are more popular and make consumers more immersed.
The three-dimensional feature emotion analysis method integrating expressions and body movements includes the following steps:
S1, attachment and wearing: attach infrared sensors to the facial muscle positions with a large range of expressive movement, attach optical inertial sensors to the limb muscle positions with a large range of motion, and wear the VR device on the head;
S2, VR virtualization: present various game scenes, audio, or text virtually in the VR device;
S3, expression and body movement: the infrared sensors detect the movement of each facial muscle, while the pupil detector built into the VR device detects pupil contraction; when the limbs move, the optical inertial sensors locate the joint nodes precisely and reflect light signals to the photographing equipment, so that the movements of the limb joints are captured in real time and limb joint motion parameter data are obtained; meanwhile, the photographing equipment records image information;
S4, export motion data: export the real-time facial muscle movement data, the real-time limb joint movement data, and the pupil contraction data, and aggregate them for analysis;
S5, immersion analysis: compare the exported data against the ranges of facial muscle movement data and limb joint movement data recorded in the database for the hyper-excited and excited states, and determine the degree to which the tested person is immersed in the VR game, audio, or text.
S51, classify the tested person's motion parameters for each facial muscle region, limb joint motion data, and pupil contraction data according to the different VR game scenes, audio, or text;
S52, take the difference between these values and the motion parameter values of each facial muscle region of a normal person for the same VR game scene, audio, or text, and likewise take the difference for the limb joint motion data; average the two sets of differences and then take the variance to obtain an excitement factor γ; if γ < 2, the tested person's degree of immersion is low; if 2 ≤ γ < 5, the degree of immersion is moderate; if 5 ≤ γ < 8, the degree of immersion is high; if γ ≥ 8, the degree of immersion is intense;
S53, the excitement factor is combined with the pupil constriction range to further evaluate the probability that the tested person's degree of immersion falls into the low, moderate, high, or intense category.
In the present invention, infrared sensors are attached at least on the forehead, eyebrows, eye sockets, cheeks, corners of the mouth, and chin of the face; optical inertial sensors are attached at least to the upper arms, elbows, wrists, hips, thighs, knees, calves, and ankles. At least 26 infrared sensors and optical inertial sensors are arranged at the facial muscle positions with a large range of expressive movement and at the limb muscle positions with a large range of motion, either symmetrically about the central axis of the human body or on only one side of the body.
Tested persons of different age groups and genders are tested with different VR game scenes, audio, or text to obtain the degree of immersion of different age groups and genders in different VR games, audio, or text.
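As an illustration of how such per-group results might be tabulated, the following sketch (hypothetical record layout and function name, not part of the application) averages the excitement factor over tested persons grouped by age group, gender, and VR content.

```python
# Hypothetical aggregation of individual results by age group, gender, and VR
# content (game scene, audio, or text); the record keys are illustrative.
from collections import defaultdict


def immersion_by_group(results):
    """results: iterable of dicts with keys 'age_group', 'gender', 'content',
    and 'gamma' (the excitement factor measured for one tested person)."""
    buckets = defaultdict(list)
    for r in results:
        buckets[(r["age_group"], r["gender"], r["content"])].append(r["gamma"])
    # Mean excitement factor per (age group, gender, content) combination.
    return {group: sum(vals) / len(vals) for group, vals in buckets.items()}
```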
In the present invention, different types of virtual games, audio, or text in the VR device are switched at preset time intervals to detect the degree to which the same tested person is immersed in different virtual games, audio, or text in the VR device.
In the present invention, the limb joint motion parameter data include at least limb movement range, limb movement strength, limb movement duration, and body posture.
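One possible way to structure the exported per-frame data of S4, covering the limb parameters listed above together with the facial and pupil measurements, is sketched below; the class and field names are hypothetical and not taken from the application.

```python
# Hypothetical record layout for the data exported in S4. Only the listed limb
# parameters (movement range, strength, duration, body posture) come from the
# specification; the class and field names are illustrative.
from dataclasses import dataclass, field


@dataclass
class LimbJointSample:
    joint: str              # e.g. "left_elbow"
    movement_range: float   # amplitude of the joint movement
    strength: float         # intensity of the movement
    duration: float         # how long the movement lasted, in seconds
    posture: str            # coded body posture


@dataclass
class FacialMuscleSample:
    region: str             # e.g. "left_cheek"
    motion: float           # motion parameter reported by the infrared sensor


@dataclass
class FrameRecord:
    timestamp: float
    pupil_contraction: float
    facial: list = field(default_factory=list)   # FacialMuscleSample items
    limbs: list = field(default_factory=list)    # LimbJointSample items
```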
In the present invention, the inertial positioning sensor includes at least one or more of an accelerometer, a fiber-optic gyroscope, or a vibration gyroscope, used to measure the person's limb movement parameters; preferably an accelerometer and a fiber-optic gyroscope are used. Motion capture realized with the optical inertial sensors of the present invention achieves sub-millimeter accuracy, a frame rate of up to 480 fps, and a latency of less than 1 ms, and positioning can be cascaded over a larger area, greatly improving the virtual reality experience.
The database of the present invention contains real-time facial muscle expression data, real-time limb joint motion data, pupil contraction data, and corresponding static images, including at least 100,000 sets of data recorded in the hyper-excited and excited states for different age groups, face shapes, genders, and intelligence levels.
Guided by the above ideal embodiments of the present invention, a person skilled in the art can, on the basis of the above description, make various changes and modifications without departing from the technical concept of the invention. The technical scope of the invention is not limited to the contents of the specification and must be determined according to the scope of the claims.

Claims (10)

  1. A three-dimensional feature emotion analysis method integrating facial expressions and body movements, being a method for collecting facial muscle data and body movement data through VR-based virtual interaction, characterized in that the analysis method includes the following steps:
    S1, attachment and wearing: attach infrared sensors to the facial muscle positions with a large range of expressive movement, attach optical inertial sensors to the limb muscle positions with a large range of motion, and wear the VR device on the head;
    S2, VR virtualization: present various game scenes, audio, or text virtually in the VR device;
    S3, expression and body movement: the infrared sensors detect the movement of each facial muscle, while the pupil detector built into the VR device detects pupil contraction; when the limbs move, the optical inertial sensors locate the joint nodes precisely and reflect light signals to the photographing equipment, so that the movements of the limb joints are captured in real time and limb joint motion parameter data are obtained; meanwhile, the photographing equipment records image information;
    S4, export motion data: export the real-time facial muscle movement data, the real-time limb joint movement data, and the pupil contraction data, and aggregate them for analysis;
    S5, immersion analysis: compare the exported data against the ranges of facial muscle movement data and limb joint movement data recorded in the database for the hyper-excited and excited states, and determine the degree to which the tested person is immersed in the VR game, audio, or text.
  2. The three-dimensional feature emotion analysis method integrating expressions and body movements according to claim 1, characterized in that: tested persons of different age groups and genders are tested with different VR game scenes, audio, or text to obtain the degree of immersion of different age groups and genders in different VR games, audio, or text.
  3. The three-dimensional feature emotion analysis method integrating expressions and body movements according to claim 1, characterized in that: the optical inertial sensors are attached at least to the upper arms, elbows, wrists, hips, thighs, knees, calves, and ankles.
  4. The three-dimensional feature emotion analysis method integrating expressions and body movements according to claim 1, characterized in that: different types of virtual games, audio, or text in the VR device are switched at preset time intervals to detect the degree to which the same tested person is immersed in different virtual games, audio, or text in the VR device.
  5. The three-dimensional feature emotion analysis method integrating expressions and body movements according to claim 1, characterized in that: in S3, the limb joint motion parameter data include at least limb movement range, limb movement strength, limb movement duration, and body posture.
  6. The three-dimensional feature emotion analysis method integrating expressions and body movements according to claim 1, characterized in that S5 further includes the following steps:
    S51, classify the tested person's motion parameters for each facial muscle region, limb joint motion data, and pupil contraction data according to the different VR game scenes, audio, or text;
    S52, take the difference between these values and the motion parameter values of each facial muscle region of a normal person for the same VR game scene, audio, or text, and likewise take the difference for the limb joint motion data; average the two sets of differences and then take the variance to obtain an excitement factor γ; if γ < 2, the tested person's degree of immersion is low; if 2 ≤ γ < 5, the degree of immersion is moderate; if 5 ≤ γ < 8, the degree of immersion is high; if γ ≥ 8, the degree of immersion is intense;
    S53, the excitement factor is combined with the pupil constriction range to further evaluate the probability that the tested person's degree of immersion falls into the low, moderate, high, or intense category.
  7. The three-dimensional feature emotion analysis method integrating expressions and body movements according to claim 1, characterized in that: the infrared sensors are attached at least on the forehead, eyebrows, eye sockets, cheeks, corners of the mouth, and chin of the face.
  8. The three-dimensional feature emotion analysis method integrating expressions and body movements according to claim 1, characterized in that: at least 26 of the infrared sensors and the optical inertial sensors are arranged at the facial muscle positions with a large range of expressive movement and at the limb muscle positions with a large range of motion; they are arranged symmetrically about the central axis of the human body, or the infrared sensors and the optical inertial sensors are arranged on only one side of the body.
  9. The three-dimensional feature emotion analysis method integrating expressions and body movements according to claim 1, characterized in that: the inertial positioning sensor includes at least one or more of an accelerometer, a fiber-optic gyroscope, or a vibration gyroscope, used to measure the person's limb movement parameters.
  10. The three-dimensional feature emotion analysis method integrating expressions and body movements according to claim 1, characterized in that: the database's real-time facial muscle expression data, real-time limb joint motion data, pupil contraction data, and corresponding static images include at least 100,000 sets of data recorded in the hyper-excited and excited states for different age groups, face shapes, genders, and intelligence levels.
PCT/CN2021/085070 2020-12-31 2021-04-01 Three-dimensional feature emotion analysis method integrating facial expressions and body movements WO2022141894A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011625267.1 2020-12-31
CN202011625267 2020-12-31

Publications (1)

Publication Number Publication Date
WO2022141894A1 true WO2022141894A1 (zh) 2022-07-07

Family

ID=82260096

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/085070 WO2022141894A1 (zh) 2020-12-31 2021-04-01 Three-dimensional feature emotion analysis method integrating facial expressions and body movements

Country Status (1)

Country Link
WO (1) WO2022141894A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115880441A (zh) * 2023-02-06 2023-03-31 合肥孪生宇宙科技有限公司 3D visual simulated character generation method and system
CN116468826A (zh) * 2023-06-16 2023-07-21 北京百度网讯科技有限公司 Training method for an expression generation model, and expression generation method and apparatus
CN117409454A (zh) * 2023-08-25 2024-01-16 中国人民解放军空军军医大学 Dynamic emotion recognition method and apparatus based on facial muscle movement monitoring

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107123019A (zh) * 2017-03-28 2017-09-01 华南理工大学 VR shopping recommendation system and method based on physiological data and emotion recognition
WO2018141061A1 (en) * 2017-02-01 2018-08-09 Cerebian Inc. System and method for measuring perceptual experiences
CN111375196A (zh) * 2018-12-27 2020-07-07 电子技术公司 Perception-based dynamic game state configuration
CN111680550A (zh) * 2020-04-28 2020-09-18 平安科技(深圳)有限公司 Emotion information recognition method and apparatus, storage medium, and computer device
CN111742560A (zh) * 2017-09-29 2020-10-02 华纳兄弟娱乐公司 Production and control of cinematic content responsive to a user's emotional state

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018141061A1 (en) * 2017-02-01 2018-08-09 Cerebian Inc. System and method for measuring perceptual experiences
CN107123019A (zh) * 2017-03-28 2017-09-01 华南理工大学 VR shopping recommendation system and method based on physiological data and emotion recognition
CN111742560A (zh) * 2017-09-29 2020-10-02 华纳兄弟娱乐公司 Production and control of cinematic content responsive to a user's emotional state
CN111375196A (zh) * 2018-12-27 2020-07-07 电子技术公司 Perception-based dynamic game state configuration
CN111680550A (zh) * 2020-04-28 2020-09-18 平安科技(深圳)有限公司 Emotion information recognition method and apparatus, storage medium, and computer device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115880441A (zh) * 2023-02-06 2023-03-31 合肥孪生宇宙科技有限公司 3D visual simulated character generation method and system
CN115880441B (zh) * 2023-02-06 2023-05-09 合肥孪生宇宙科技有限公司 3D visual simulated character generation method and system
CN116468826A (zh) * 2023-06-16 2023-07-21 北京百度网讯科技有限公司 Training method for an expression generation model, and expression generation method and apparatus
CN116468826B (zh) * 2023-06-16 2023-10-27 北京百度网讯科技有限公司 Training method for an expression generation model, and expression generation method and apparatus
CN117409454A (zh) * 2023-08-25 2024-01-16 中国人民解放军空军军医大学 Dynamic emotion recognition method and apparatus based on facial muscle movement monitoring

Similar Documents

Publication Publication Date Title
WO2022141894A1 (zh) Three-dimensional feature emotion analysis method integrating facial expressions and body movements
Lamonaca et al. Health parameters monitoring by smartphone for quality of life improvement
CN110269587B (zh) Infant motion analysis system and motion-based infant vision analysis system
CN108030498A (zh) Psychological intervention system based on eye movement data
CN113837153B (zh) Real-time emotion recognition method and system fusing pupil data and facial expressions
Eichler et al. Non-invasive motion analysis for stroke rehabilitation using off the shelf 3d sensors
CN114999646A (zh) Neonatal motor development assessment system, method, apparatus, and storage medium
Wei et al. Using sensors and deep learning to enable on-demand balance evaluation for effective physical therapy
CN113768471B (zh) Auxiliary diagnosis system for Parkinson's disease based on gait analysis
TW202221621A (zh) Virtual environment training system for caregiving education
Masullo et al. CaloriNet: From silhouettes to calorie estimation in private environments
CN114052725B (zh) Gait analysis algorithm configuration method and apparatus based on human body keypoint detection
Mirabet-Herranz et al. LVT Face Database: A benchmark database for visible and hidden face biometrics
CN115553779A (zh) Emotion recognition method and apparatus, electronic device, and storage medium
Fayez et al. Vals: A leading visual and inertial dataset of squats
Peng et al. MVPD: A multimodal video physiology database for rPPG
CN113827242A (zh) Feature-based emotion detection method integrating facial muscle and limb activity
KR20140132864A (ko) Simple video-based method for measuring physical and psychological changes caused by stress, and healing service using the same
Nehra et al. Unobtrusive and non-invasive human activity recognition using Kinect sensor
Dhole et al. Review of Deep Learning Models for Mask Detection and Medical Sensors for IoT based Health Care System
Nahavandi et al. A low cost anthropometric body scanning system using depth cameras
Jaiswal et al. Color space analysis for improvement in rPPG
Krupicka et al. Motion camera system for measuring finger tapping in parkinson’s disease
Jobbagy et al. PAM: passive marker-based analyzer to test patients with neural diseases
Schiavone et al. Multimodal ecological technology: From child’s social behavior assessment to child-robot interaction improvement

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21912664

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21912664

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24.11.2023)