WO2022184133A1 - A vision-based facial expression recognition method - Google Patents

A vision-based facial expression recognition method

Info

Publication number
WO2022184133A1
Authority
WO
WIPO (PCT)
Prior art keywords
sample set
expression recognition
facial expression
classifier
recognition method
Prior art date
Application number
PCT/CN2022/079035
Other languages
English (en)
French (fr)
Inventor
赵雪专
裴利沈
李玲玲
赵中堂
薄树奎
马腾
杨勇
张湘熙
刘汉卿
Original Assignee
郑州航空工业管理学院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 郑州航空工业管理学院
Publication of WO2022184133A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Definitions

  • the invention relates to the field of information processing, in particular to a vision-based facial expression recognition method.
  • the purpose of the present invention is to provide a vision-based facial expression recognition method capable of recognizing the degree of a facial expression.
  • An embodiment of the present invention provides a vision-based facial expression recognition method, comprising the following steps:
  • the expression feature sample set is introduced into the expression recognition classifier for learning and training, and the trained expression recognition classifier is obtained;
  • test samples are put into the trained expression recognition classifier, and the expression recognition classifier evaluates the test samples.
  • obtaining the face image sample set above, and preprocessing the face image sample set, includes converting the color images of the face image sample set into grayscale images, where R, G, and B are the color components of each color-image pixel and g is the converted gray value.
  • the features of the preprocessed face image sample set are extracted above, and the expression feature samples obtained from the expression feature sample set are composed of face geometric features.
  • the aforementioned geometric features of the human face include the eyes, eyebrows, nose and mouth.
  • the change of the above-mentioned facial expression feature includes a change of the texture shape of the geometric feature of the human face.
  • the facial geometric features of the expression feature samples of the above-mentioned expression feature sample set are extracted via the spatiotemporal features of difference images.
  • the above-mentioned facial expression recognition classifier is a BP neural network classifier.
  • the above-mentioned BP neural network classifier takes the expression feature samples as input, performs linear combination in the BP neural network, and outputs through a nonlinear activation function in each neuron, and each neuron obtains a calculation result.
  • the working process of the above-mentioned BP neural network classifier includes two stages:
  • the embodiments of the present invention have at least the following advantages or beneficial effects:
  • a vision-based facial expression recognition method comprising the following steps:
  • the expression feature sample set is introduced into the expression recognition classifier for learning and training, and the trained expression recognition classifier is obtained;
  • test samples are put into the trained expression recognition classifier, and the expression recognition classifier evaluates the test samples.
  • a vision-based facial expression recognition method includes four steps in total.
  • the first step is to obtain a face image sample set and preprocess the face image sample set. The preprocessing method converts the color images in RGB space into grayscale images: the color of each pixel in a color image is determined by the three components R, G, and B, and each component has 256 possible values, so such a pixel can vary over more than 16 million colors.
  • this large amount of data places a burden on storage and computation, whereas the variation range of a pixel in a grayscale image is only 256 cases.
  • the second step is to extract the features of the preprocessed face image sample set and obtain the expression feature sample set. To identify the types and changes of facial expressions, using expression feature samples is a markedly effective strategy.
  • the third step is to introduce the expression feature sample set into the expression recognition classifier for learning and training, and obtain a trained expression recognition classifier.
  • the expression recognition classifier can use the BP neural network classifier; the BP neural network classifier is a learning model based on an artificial neural network structure, which can achieve the purpose of learning by setting up multiple hidden layers and continuously modifying the network weights by way of backpropagation.
  • 5,000 face pictures are used as the training sample set, and 5,000 face pictures are selected as the test sample set. To ensure training quality, the result of each training round is compared with that of the previous round: if the error increases, the weights are adjusted in the negative direction; if the error decreases, in the positive direction, so as to continuously improve recognition accuracy; when the predetermined number of training iterations or the preset convergence condition is reached, the training process is complete.
  • the last step is to put the test samples into the trained expression recognition classifier, and the expression recognition classifier evaluates the test samples.
  • the present invention distinguishes different degrees of the same expression more accurately for facial expression recognition, and adopts four steps to realize facial expression recognition
  • image preprocessing adopts the method of converting RGB space into a grayscale image
  • feature extraction adopts the texture-analysis technique of the gray-level co-occurrence matrix, together with spatiotemporal expression extraction based on difference images
  • the BP neural network method is used as the classifier.
  • the BP neural network has three layers; the feature vector is used as the input, and the finely classified expression is used as the coded output
  • through continuous learning and training, a learning model is established, and deep-learning technology is combined to build a convolutional neural network, which further improves the accuracy of expression recognition.
  • FIG. 1 is a flowchart of a visual-based facial expression recognition method according to an embodiment of the present invention.
  • a vision-based facial expression recognition method provided by an embodiment of the present invention includes the following steps:
  • the expression feature sample set is introduced into the expression recognition classifier for learning and training, and the trained expression recognition classifier is obtained;
  • test samples are put into the trained expression recognition classifier, and the expression recognition classifier evaluates the test samples.
  • a vision-based facial expression recognition method includes four steps in total. First, a face image sample set is acquired, and the face image sample set is preprocessed. Since other body parts such as the shoulders and neck are also captured while scanning the three-dimensional face, the face must be segmented out and the redundant non-face information removed. Each captured face differs in size and tilt angle owing to factors such as pose and distance; to reduce these two kinds of differences, the face needs to be normalized and aligned. To determine the normalized coordinate value of each point of the facial point cloud, the nose tip is selected as the feature reference point in the present embodiment, so the raw 3D point data are preprocessed in four steps: nose-tip localization, face normalization, pose correction, and depth-map extraction.
  • the nose tip is extracted by using its two characteristic features: it is the local highest point and has a pointed-cap shape.
  • for each point P, its adjacent points can be regarded as distributed on a sphere with that point as the center; the effective energy is defined as the inner product of (P_i − P) and N_p.
  • N_p denotes the normalized normal at P, the candidate point, whose three-dimensional coordinates and normal vector N_p are known; P_i is an adjacent point.
  • for a data point at a peak, the angle α between the normal and the vector to an adjacent point is greater than 90°, so the effective energy d_i is negative.
  • this constraint alone is not enough to detect the nose tip, because many points on the cheeks, chin, and other regions also satisfy it, but this step greatly reduces the search range.
  • the two classes are distinguished by finding the minimum value of the energy function.
  • with the kernel function, the category of a test vector x is judged from the decision function, where b* is the parameter obtained by optimizing the energy function and L is the number of support vectors obtained.
  • the length and width of the face surface are unequal; eigen-decomposition of the coefficient matrix C yields three eigenvalues of different magnitudes, λ1 ≥ λ2 ≥ λ3, and the corresponding eigenvectors.
  • taking O_v as the origin and the eigenvectors as the Y, X, and Z axes defines a pose coordinate system PCS, and the original point cloud is transformed into the new coordinate space, in which all faces have the same frontal pose.
  • the preprocessed 3D point cloud can be mapped onto the 2D plane through orthographic projection to obtain the depth information of the point cloud, and the corresponding color information is converted into a grayscale image.
  • the grayscale method converts the color image in RGB space into a grayscale image: the color of each pixel in the color image is determined by the three components R, G, and B, each with 256 possible values, so such a pixel can vary over more than 16 million colors; this data volume places a burden on storage and computation, whereas the variation range of a pixel in a grayscale image is only 256 cases, so converting the face image samples into grayscale images greatly reduces the subsequent image-processing computation.
  • the grayscale image can still reflect the distribution and characteristics of the global and local chroma and brightness levels of the entire image, consistent with the color image.
  • the use of expression feature samples is a significant and effective strategy.
  • the expression feature sample set is introduced into the expression recognition classifier for learning and training, and the trained expression recognition classifier is obtained.
  • the facial expression recognition classifier can use the BP neural network classifier.
  • the BP neural network classifier is a learning model based on an artificial neural network structure; it can continuously modify the network weights by setting up multiple hidden layers and using backpropagation, so as to achieve the purpose of learning.
  • 5,000 face pictures are used as the training sample set, and 5,000 face pictures are selected as the test sample set. To ensure training quality, the result of each training round is compared with that of the previous round: if the error increases, the weights are adjusted in the negative direction; if the error decreases, in the positive direction, so as to continuously improve recognition accuracy; when the predetermined number of training iterations or the preset convergence condition is reached, the training process is complete.
  • test samples are put into the trained expression recognition classifier, and the expression recognition classifier evaluates the test samples.
  • the present invention distinguishes different degrees of the same expression more accurately for facial expression recognition, and adopts four steps to realize facial expression recognition
  • image preprocessing adopts the method of converting RGB space into a grayscale image
  • feature extraction adopts the texture-analysis technique of the gray-level co-occurrence matrix, together with spatiotemporal expression extraction based on difference images
  • the BP neural network method is used as the classifier.
  • the BP neural network has three layers; the feature vector is used as the input, and the finely classified expression is used as the coded output
  • through continuous learning and training, a learning model is established, and deep-learning technology is combined to build a convolutional neural network, which further improves the accuracy of expression recognition.
  • acquiring a face image sample set, and preprocessing the face image sample set, includes converting the color images of the face image sample set into grayscale images, where R, G, and B are the color components of each color-image pixel and g is the converted gray value.
  • each pixel in the color image is determined by the three components R, G, and B, and each component has 256 possible values, so such a pixel can take more than 16 million color values.
  • this data volume places a burden on storage and computation, whereas the variation range of one pixel of a grayscale image is only 256 cases, so converting the face image samples into grayscale images greatly reduces the subsequent image-processing computation.
  • the grayscale image can still reflect the distribution and characteristics of the global and local chroma and brightness levels of the entire image, consistent with the color image.
  • the features of the preprocessed face image sample set are extracted, and the expression feature samples obtained from the expression feature sample set are composed of face geometric features.
  • the geometric features of the human face refer to the changes in the positions of the facial organs, such as the eyes, eyebrows, nose, and mouth, and also include changes in the positions of the eye corners, brow tips, and mouth corners.
  • the geometric features of the human face include eyes, eyebrows, nose and mouth.
  • the eyes, eyebrows, nose and mouth can express facial expressions significantly and effectively.
  • the change in the expression feature includes a change in the texture shape of the geometric feature of the human face.
  • the texture analysis adopts the method of the spatial gray-level co-occurrence matrix: a matrix is obtained by counting the number of times two gray levels are adjacent in the image along a certain direction.
  • the directions here generally include horizontal, 45°, 90°, and 135°.
  • taking the horizontal direction as an example, each element (i, j) in the gray-level co-occurrence matrix represents the number of times gray level i and gray level j are horizontally adjacent in the image.
  • the facial geometric features of the expression feature samples of the expression feature sample set are extracted via the spatiotemporal features of difference images.
  • for a facial expression video dataset, a feature vector can be extracted from the face image of each frame of the captured video
  • the geometric features used for expression recognition refer to the changes of several main facial organs
  • the difference operation is performed directly between each frame of the video sequence and the neutral-expression frame, using the facial-expression texture features extracted by the gray-level co-occurrence matrix method above: the gray-level co-occurrence matrices of the two frames are subtracted, and the new matrix is then expanded column by column into a vector representation, yielding the facial geometric feature vector.
  • the expression recognition classifier is a BP neural network classifier.
  • the BP neural network is a learning model based on an artificial neural network structure, and the network weights can be continuously modified by setting multiple hidden layers and backpropagating, so as to achieve the purpose of learning and training.
  • the BP neural network classifier takes the expression feature samples as input, performs linear combination in the BP neural network, and passes the result of each neuron through a nonlinear activation function, each neuron obtaining one computed result.
  • the working process of the BP neural network classifier includes two stages:
  • the BP neural network is a learning model based on an artificial neural network structure, which can achieve the purpose of learning and training by setting up multiple hidden layers and continuously modifying the network weights by means of back propagation.
  • the activation function is f(x) = 1 / (1 + e^(−x)), where x is the actual output.
  • an error function should be determined before learning and training.
  • it is represented by the sum of squared differences between the actual output x and the expected output y: E = Σ (x − y)².
  • the BP neural network continuously adjusts the network weights to reduce the error, so as to achieve the expected effect.
  • the present invention adopts a three-layer neural network structure, in which the input layer is set to 10 nodes, corresponding to one set of facial-expression feature input vectors; the hidden layer has 10 nodes; and the output layer has 9 nodes in total, representing the output results for 9 expressions.
  • a vision-based facial expression recognition method provided by an embodiment of the present invention includes the following steps:
  • the expression feature sample set is introduced into the expression recognition classifier for learning and training, and the trained expression recognition classifier is obtained;
  • test samples are put into the trained expression recognition classifier, and the expression recognition classifier evaluates the test samples.
  • a vision-based facial expression recognition method includes four steps in total.
  • the first step is to obtain a face image sample set and preprocess the face image sample set.
  • the preprocessing method of the face image sample set is to convert the color image in the RGB space into a grayscale image.
  • the color of each pixel in a color image is determined by the three components R, G, and B, and each component has 256 possible values; such a pixel can therefore vary over more than 16 million colors, and the data volume is too large.
  • the grayscale image can still reflect the distribution and characteristics of the global and local chroma and brightness levels of the entire image, consistent with the color image.
  • the second step is to extract the features of the preprocessed face image sample set to obtain the expression feature sample set.
  • the use of expression feature samples is a significant and effective strategy.
  • the third step is the step of introducing the expression feature sample set into the expression recognition classifier for learning and training, and obtaining a trained expression recognition classifier.
  • the facial expression recognition classifier can use the BP neural network classifier.
  • the BP neural network classifier is a learning model based on an artificial neural network structure; it can continuously modify the network weights by setting up multiple hidden layers and using backpropagation, so as to achieve the purpose of learning.
  • 5,000 face pictures are used as the training sample set, and 5,000 face pictures are selected as the test sample set. To ensure training quality, the result of each training round is compared with that of the previous round: if the error increases, the weights are adjusted in the negative direction; if the error decreases, in the positive direction, so as to continuously improve recognition accuracy; when the predetermined number of training iterations or the preset convergence condition is reached, the training process is complete.
  • the last step is to put the test samples into the trained expression recognition classifier, and the expression recognition classifier evaluates the test samples.
  • the present invention distinguishes different degrees of the same expression more accurately for facial expression recognition, and adopts four steps to realize facial expression recognition
  • image preprocessing adopts the method of converting RGB space into a grayscale image
  • feature extraction adopts the texture-analysis technique of the gray-level co-occurrence matrix, together with spatiotemporal expression extraction based on difference images
  • the BP neural network method is used as the classifier.
  • the BP neural network has three layers; the feature vector is used as the input, and the finely classified expression is used as the coded output
  • through continuous learning and training, a learning model is established, and deep-learning technology is combined to build a convolutional neural network, which further improves the accuracy of expression recognition.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a vision-based facial expression recognition method and relates to the field of information processing. The method includes the following steps: acquiring a face image sample set and preprocessing the face image sample set; extracting features of the preprocessed face image sample set to obtain an expression feature sample set; introducing the expression feature sample set into an expression recognition classifier for learning and training to obtain a trained expression recognition classifier; and putting test samples into the trained expression recognition classifier, the expression recognition classifier evaluating the test samples. The present invention can recognize the degree of a facial expression.

Description

A vision-based facial expression recognition method
This application claims priority to the Chinese patent application No. 202110234838.7, entitled "A vision-based facial expression recognition method" and filed with the China National Intellectual Property Administration on March 3, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of information processing, and in particular to a vision-based facial expression recognition method.
Background
In recent years, with the resurgence of artificial-intelligence technology and the great advances in computer technology, ever higher demands have been placed on human-computer interaction tasks. Meanwhile, in person-to-person communication, not only linguistic symbols but also facial expressions and other body language are components that convey information.
Human expressions come in different degrees: laughing, for instance, can be divided into laughing heartily, smiling, and so on, and crying into sobbing, wailing, and so on. Existing expression recognition systems, however, cannot recognize different degrees of the same expression.
Summary of the Invention
The purpose of the present invention is to provide a vision-based facial expression recognition method capable of recognizing the degree of a facial expression.
Embodiments of the present invention are realized as follows:
An embodiment of the present invention provides a vision-based facial expression recognition method, comprising the following steps:
acquiring a face image sample set, and preprocessing the face image sample set;
extracting features of the preprocessed face image sample set to obtain an expression feature sample set;
introducing the expression feature sample set into an expression recognition classifier for learning and training to obtain a trained expression recognition classifier;
putting test samples into the trained expression recognition classifier, the expression recognition classifier evaluating the test samples.
In some embodiments of the present invention, acquiring the face image sample set and preprocessing the face image sample set includes converting the color images of the face image sample set into grayscale images:

g = 0.299R + 0.587G + 0.114B

where R, G, and B are the color components of each pixel of the color image, and g is the converted gray value.
In some embodiments of the present invention, the expression feature samples of the expression feature sample set, obtained by extracting features of the preprocessed face image sample set, consist of facial geometric features.
In some embodiments of the present invention, the facial geometric features include the eyes, eyebrows, nose, and mouth.
In some embodiments of the present invention, changes in the expression features include changes in the texture shape of the facial geometric features.
In some embodiments of the present invention, the facial geometric features of the expression feature samples of the expression feature sample set are extracted via the spatiotemporal features of difference images.
In some embodiments of the present invention, the expression recognition classifier is a BP neural network classifier.
In some embodiments of the present invention, the BP neural network classifier takes the expression feature samples as input, performs linear combination in the BP neural network, and passes the result of each neuron through a nonlinear activation function, each neuron obtaining one computed result.
In some embodiments of the present invention, the working process of the BP neural network classifier includes two stages:
Forward propagation: the expression feature sample set is first fed into the input layer, weighted computation proceeds through the hidden layer, and finally the output layer produces the output; in the processing of each layer, the previous layer is equivalent to the input layer of the next.
Backpropagation: upon reaching the output layer, the result is compared with the given label to judge whether the convergence condition is met; if so, the training process ends; if not, the error is propagated backwards layer by layer and the weights are adjusted in turn until the convergence condition is satisfied.
In some embodiments of the present invention, the activation function is:

f(x) = 1 / (1 + e^(−x))

where x is the actual output.
Compared with the prior art, embodiments of the present invention have at least the following advantages or beneficial effects:
A vision-based facial expression recognition method, comprising the following steps:
acquiring a face image sample set, and preprocessing the face image sample set;
extracting features of the preprocessed face image sample set to obtain an expression feature sample set;
introducing the expression feature sample set into an expression recognition classifier for learning and training to obtain a trained expression recognition classifier;
putting test samples into the trained expression recognition classifier, the expression recognition classifier evaluating the test samples.
In the above embodiment, the vision-based facial expression recognition method includes four steps in total. The first step is to acquire the face image sample set and preprocess it; the preprocessing method converts the color images in RGB space into grayscale images. Since the color of each pixel of a color image is determined by the three components R, G, and B, each of which has 256 possible values, one pixel can vary over more than 16 million colors; such a data volume places a burden on storage and computation, whereas a pixel of a grayscale image varies over only 256 cases. Converting the face image samples into grayscale images therefore greatly reduces the subsequent image-processing computation, while the grayscale image still reflects the same global and local distributions of chroma and brightness levels of the whole image as the color image. The second step is to extract the features of the preprocessed face image sample set and obtain the expression feature sample set; to recognize the categories of facial expressions and their changes, using expression feature samples is a markedly effective strategy. The third step is to introduce the expression feature sample set into the expression recognition classifier for learning and training and obtain a trained expression recognition classifier. The classifier may be a BP neural network classifier, a learning model based on an artificial neural network structure that learns by setting up multiple hidden layers and continually revising the network weights through backpropagation. In the present invention, 5,000 face pictures are used as the training sample set and another 5,000 face pictures are selected as the test sample set; to ensure training quality, the result of each training round is compared with that of the previous round: if the error increases, the weights are adjusted in the negative direction, and if it decreases, in the positive direction, continually improving recognition accuracy. When the predetermined number of training iterations or the preset convergence condition is reached, the training process is complete. The last step is to put the test samples into the trained expression recognition classifier, which evaluates them.
In this embodiment, the present invention distinguishes different degrees of the same expression more accurately for facial expression recognition and realizes it in four steps: image preprocessing converts RGB space into grayscale images; feature extraction uses the texture-analysis technique of the gray-level co-occurrence matrix together with spatiotemporal expression extraction based on difference images; and the BP neural network method serves as the classifier. The BP network has three layers, taking the feature vector as input and the finely classified expression as the coded output; through continuous learning and training a learning model is established, and deep-learning techniques are combined to build a convolutional neural network, further improving the accuracy of expression recognition.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly introduced below. It should be understood that the following drawings show only some embodiments of the present invention and should not be regarded as limiting its scope; a person of ordinary skill in the art can obtain other related drawings from them without creative effort.
FIG. 1 is a flowchart of a vision-based facial expression recognition method according to an embodiment of the present invention.
Detailed Description of the Embodiments
To make the purpose, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments of the present invention generally described and shown in the drawings here may be arranged and designed in a variety of different configurations.
Therefore, the following detailed description of the embodiments of the present invention provided in the drawings is not intended to limit the scope of the claimed invention but merely represents selected embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Some embodiments of the present invention are described in detail below with reference to the drawings. Where no conflict arises, the following embodiments and the features within them may be combined with one another.
Embodiment
Referring to FIG. 1, a vision-based facial expression recognition method provided by an embodiment of the present invention includes the following steps:
acquiring a face image sample set, and preprocessing the face image sample set;
extracting features of the preprocessed face image sample set to obtain an expression feature sample set;
introducing the expression feature sample set into an expression recognition classifier for learning and training to obtain a trained expression recognition classifier;
putting test samples into the trained expression recognition classifier, the expression recognition classifier evaluating the test samples.
In the above embodiment, the vision-based facial expression recognition method includes four steps in total. First, a face image sample set is acquired and preprocessed. Since other body parts such as the shoulders and neck are also captured while scanning the three-dimensional face, the face must be segmented out and the redundant non-face information removed. Each captured face differs in size and tilt angle owing to factors such as pose and distance; to reduce these two kinds of differences, the face needs to be normalized and aligned. To determine the normalized coordinate value of every point of the facial point cloud, this embodiment selects the nose tip as the feature reference point, so the raw 3D point data are preprocessed in four steps: nose-tip localization, face normalization, pose correction, and depth-map extraction. The nose tip is extracted using its two characteristic features: it is a local highest point and has a pointed-cap shape. For each point P, its adjacent points can be regarded as distributed on a sphere centered at that point; the effective energy is defined as the inner product of (P_i − P) and N_p:

d_i = (P_i − P) · N_p = |P_i − P| cos α

where N_p denotes the normalized normal at the candidate point P, whose three-dimensional coordinates and normal vector N_p are known, and P_i is an adjacent point. For a data point at a peak, the angle α between the normal and the vector to an adjacent point is greater than 90°, so the effective energy d_i is negative. This constraint alone is not enough to detect the nose tip, because many points on the cheeks, chin, and other regions also satisfy it, but this step greatly reduces the search range. The statistical features of the energies d_i, the mean μ and the variance σ², are then computed:

μ = (1/n) Σ d_i

σ² = (1/n) Σ (d_i − μ)²
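As an illustration only, the following Python sketch computes the effective-energy statistics used to screen nose-tip candidates; the helper name, the k-nearest-neighbor neighborhood, and the use of numpy/scipy are assumptions, since the patent fixes none of them:

```python
import numpy as np
from scipy.spatial import cKDTree

def nose_tip_features(points, normals, k=20):
    """For each point P, compute the effective energies d_i = (P_i - P) . N_p
    over its k nearest neighbors P_i, returning the (mu, sigma^2) statistics
    and a peak mask (all d_i negative, i.e. alpha > 90 degrees)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)      # column 0 is the point itself
    stats = np.empty((len(points), 2))
    peak = np.empty(len(points), dtype=bool)
    for j in range(len(points)):
        d = (points[idx[j, 1:]] - points[j]) @ normals[j]
        stats[j] = d.mean(), d.var()          # statistical features mu, sigma^2
        peak[j] = np.all(d < 0)               # coarse local-peak filter
    return stats, peak
```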
All points are divided into two classes, nose-tip points and non-nose-tip points. Let {(x_i, y_i), i = 1, 2, …, l} be the training samples, where x_i is the two-dimensional feature vector

x_i = (μ_i, σ_i²)

and y_i ∈ {−1, 1} denotes the class. Using the support vector machine (SVM), the two classes are distinguished by finding the minimum of the energy function:

min (1/2)‖w‖² + C Σ ξ_i

where the kernel function is

K(x_i, x) = exp(−‖x_i − x‖² / (2σ²))

For a test vector x, the category is judged by the following formula:

f(x) = sgn( Σ_{i=1}^{L} α_i* y_i K(x_i, x) + b* )

where b* is the parameter obtained by optimizing the energy function, and L is the number of support vectors obtained.
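A minimal sketch of this two-class screening step could use scikit-learn's RBF-kernel SVC as a stand-in for the SVM above; the feature and label arrays are assumed to come from annotated training scans:

```python
from sklearn.svm import SVC

# stats_train: (n, 2) rows of (mu, sigma^2) from nose_tip_features()
# labels_train: +1 for nose-tip points, -1 for all other points
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(stats_train, labels_train)
pred = clf.predict(stats_test)   # sign of the decision function f(x)
```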
Once the nose tip has been detected, it is placed at the coordinate origin so that all 3D face models in the database are aligned. At the same time, with this point as the center, the size range of the face is determined according to the "three sections, five eyes" theory of facial proportions, and the face is normalized to a standard size of 130×100. Although subjects are asked to keep as still as possible while the 3D face data are captured, a face has many degrees of freedom in 3D space, and input poses at different angles increase the difficulty of recognition. Pose correction places all input 3D faces under the same coordinate system; on this basis, the pose-correction coordinates are constructed by the following steps:
Assume the facial point cloud V is a set of N vertices, V = {v_i ∈ R³ | 1 ≤ i ≤ N}.
Compute the centroid of all points of the face:

O_v = (1/N) Σ_{i=1}^{N} v_i

Construct the coefficient matrix of the vertex distribution of the point cloud:

C = (1/N) Σ_{i=1}^{N} (v_i − O_v)(v_i − O_v)^T

The length and width of the face surface are unequal; eigen-decomposition of the coefficient matrix C yields three eigenvalues of different magnitudes, λ1 ≥ λ2 ≥ λ3, and the corresponding eigenvectors e1, e2, e3. Taking O_v as the origin, e1 as the Y axis, e2 as the X axis, and e3 as the Z axis defines a pose coordinate system PCS, and the original point cloud is transformed into the new coordinate space, in which all faces have the same frontal pose:

v_i' = [e2, e1, e3]^T (v_i − O_v)
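A compact numpy sketch of this pose normalization follows; the assignment of eigenvectors to axes mirrors the Y/X/Z ordering stated above, which is an assumption where the original formula images are unreadable:

```python
import numpy as np

def pose_correct(V):
    """Transform a face point cloud V of shape (N, 3) into the pose
    coordinate system (PCS) defined by the eigenvectors of its
    coefficient (covariance) matrix."""
    O_v = V.mean(axis=0)                    # centroid of all points
    C = np.cov((V - O_v).T)                 # 3x3 coefficient matrix
    vals, vecs = np.linalg.eigh(C)          # eigenvalues in ascending order
    e3, e2, e1 = vecs.T                     # so lambda1 >= lambda2 >= lambda3
    R = np.column_stack([e2, e1, e3])       # X, Y, Z axes of the PCS
    return (V - O_v) @ R                    # every face in the same frontal pose
```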
The preprocessed 3D point cloud can then be mapped onto a 2D plane by orthographic projection to obtain the depth information of the point cloud, and the corresponding color information is converted into a grayscale image. The grayscale method converts the color image in RGB space into a grayscale image: since the color of each pixel of a color image is determined by the three components R, G, and B, each of which has 256 possible values, one pixel can vary over more than 16 million colors; such a data volume places a burden on storage and computation, whereas a pixel of a grayscale image varies over only 256 cases. Converting the face image samples into grayscale images therefore greatly reduces the subsequent image-processing computation, while the grayscale image still reflects the same global and local distributions of chroma and brightness levels of the whole image as the color image.
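One possible numpy rendering of the orthographic depth-map step (the grid resolution and the nearest-point rule are illustrative assumptions):

```python
import numpy as np

def depth_map(V, height=130, width=100):
    """Orthographically project a pose-corrected cloud (N, 3) onto the
    XY plane; each pixel keeps the depth (Z) of its nearest point."""
    x, y, z = V[:, 0], V[:, 1], V[:, 2]
    col = ((x - x.min()) / np.ptp(x) * (width - 1)).astype(int)
    row = ((y - y.min()) / np.ptp(y) * (height - 1)).astype(int)
    img = np.full((height, width), z.min())
    np.maximum.at(img, (row, col), z)       # larger z treated as closer
    return img
```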
The features of the preprocessed face image sample set are then extracted to obtain the expression feature sample set. To recognize the categories of facial expressions and their changes, using expression feature samples is a markedly effective strategy.
The expression feature sample set is introduced into the expression recognition classifier for learning and training, yielding a trained expression recognition classifier. The classifier may be a BP neural network classifier, a learning model based on an artificial neural network structure that learns by setting up multiple hidden layers and continually revising the network weights through backpropagation. In the present invention, 5,000 face pictures are used as the training sample set and another 5,000 face pictures are selected as the test sample set. To ensure training quality, the result of each training round is compared with that of the previous round: if the error increases, the weights are adjusted in the negative direction; if it decreases, in the positive direction, continually improving recognition accuracy. When the predetermined number of training iterations or the preset convergence condition is reached, the training process is complete.
Finally, the test samples are put into the trained expression recognition classifier, and the expression recognition classifier evaluates the test samples.
In this embodiment, the present invention distinguishes different degrees of the same expression more accurately for facial expression recognition and realizes it in four steps: image preprocessing converts RGB space into grayscale images; feature extraction uses the texture-analysis technique of the gray-level co-occurrence matrix together with spatiotemporal expression extraction based on difference images; and the BP neural network method serves as the classifier. The BP network has three layers, taking the feature vector as input and the finely classified expression as the coded output; through continuous learning and training a learning model is established, and deep-learning techniques are combined to build a convolutional neural network, further improving the accuracy of expression recognition.
In some embodiments of the present invention, acquiring the face image sample set and preprocessing the face image sample set includes converting the color images of the face image sample set into grayscale images:

g = 0.299R + 0.587G + 0.114B

where R, G, and B are the color components of each pixel of the color image, and g is the converted gray value.
In this embodiment, since the color of each pixel of a color image is determined by the three components R, G, and B, each with 256 possible values, one pixel can vary over more than 16 million colors; such a data volume places a burden on storage and computation, whereas a pixel of a grayscale image varies over only 256 cases. Converting the face image samples into grayscale images therefore greatly reduces the subsequent image-processing computation, while the grayscale image still reflects the same global and local distributions of chroma and brightness levels of the whole image as the color image.
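A one-line numpy version of this conversion; the weights shown are the standard ITU-R BT.601 luma coefficients, an assumption where the patent's formula image is not legible:

```python
import numpy as np

def to_gray(rgb):
    """Convert an (H, W, 3) RGB image to an (H, W) grayscale image g."""
    return rgb @ np.array([0.299, 0.587, 0.114])
```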
In some embodiments of the present invention, the expression feature samples of the expression feature sample set, obtained by extracting features of the preprocessed face image sample set, consist of facial geometric features.
In this embodiment, facial geometric features refer to changes in the positions of the facial organs, the eyes, eyebrows, nose, mouth, and so on, and also include changes in the positions of the eye corners, brow tips, and mouth corners.
In some embodiments of the present invention, the facial geometric features include the eyes, eyebrows, nose, and mouth.
In this embodiment, the eyes, eyebrows, nose, and mouth can express facial expressions markedly and effectively.
In some embodiments of the present invention, changes in the expression features include changes in the texture shape of the facial geometric features.
In this embodiment, the texture analysis uses the spatial gray-level co-occurrence matrix: a matrix is obtained by counting the number of times two gray levels are adjacent in the image along a given direction, generally horizontal, 45°, 90°, or 135°. Taking the horizontal direction as an example, each element (i, j) of the gray-level co-occurrence matrix represents the number of times gray level i and gray level j are horizontally adjacent in the image.
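A small numpy sketch of the horizontal co-occurrence count (the image is assumed to hold integer gray levels in [0, levels)):

```python
import numpy as np

def glcm_horizontal(img, levels=256):
    """M[i, j] = number of times gray level i appears immediately to
    the left of gray level j in the image (horizontal adjacency)."""
    left, right = img[:, :-1].ravel(), img[:, 1:].ravel()
    M = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(M, (left, right), 1)
    return M
```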
In some embodiments of the present invention, the facial geometric features of the expression feature samples of the expression feature sample set are extracted via the spatiotemporal features of difference images.
In this embodiment, for a facial expression video dataset, a feature vector can be extracted from the face image of each frame of the captured video. The geometric features used for expression recognition refer to the changes of the several main facial organs: each frame of the video sequence is differenced directly against the neutral-expression frame, using the facial-expression texture features extracted by the gray-level co-occurrence matrix method above. The difference operation subtracts the gray-level co-occurrence matrices of the two frames; the new matrix is then expanded column by column into a vector representation, giving the facial geometric feature vector.
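Continuing the sketch above, a per-frame feature vector might then be formed as follows (quantization to 64 levels is an illustrative assumption):

```python
def frame_feature(frame, neutral, levels=64):
    """Difference of the GLCMs of a frame and the neutral-expression
    frame, expanded column by column into one feature vector."""
    q = lambda im: im.astype(int) * levels // 256   # quantize gray levels
    diff = glcm_horizontal(q(frame), levels) - glcm_horizontal(q(neutral), levels)
    return diff.flatten(order="F")                  # column-major unroll
```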
In some embodiments of the present invention, the expression recognition classifier is a BP neural network classifier.
In this embodiment, the BP neural network is a learning model based on an artificial neural network structure; by setting up multiple hidden layers and continually revising the network weights through backpropagation, it achieves the purpose of learning and training.
In some embodiments of the present invention, the BP neural network classifier takes the expression feature samples as input, performs linear combination in the BP neural network, and passes the result of each neuron through a nonlinear activation function, each neuron obtaining one computed result.
In some embodiments of the present invention, the working process of the BP neural network classifier includes two stages:
Forward propagation: the expression feature sample set is first fed into the input layer, weighted computation proceeds through the hidden layer, and finally the output layer produces the output; in the processing of each layer, the previous layer is equivalent to the input layer of the next.
Backpropagation: upon reaching the output layer, the result is compared with the given label to judge whether the convergence condition is met; if so, the training process ends; if not, the error is propagated backwards layer by layer and the weights are adjusted in turn until the convergence condition is satisfied.
In this embodiment, the BP neural network is a learning model based on an artificial neural network structure that achieves learning and training by setting up multiple hidden layers and continually revising the network weights by means of backpropagation.
In some embodiments of the present invention, the activation function is:

f(x) = 1 / (1 + e^(−x))

where x is the actual output.
In this embodiment, in order to adjust the weights during the backpropagation pass, an error function must be determined before learning and training; the present invention represents it by the sum of squared differences between the actual output x and the expected output y:

E = Σ (x − y)²

After extensive learning and training, the BP neural network continually adjusts the network weights to reduce the error and thereby reach the expected performance. To guarantee accuracy, the present invention adopts a three-layer neural network structure: the input layer is set to 10 nodes, corresponding to one set of facial-expression feature input vectors; the hidden layer has 10 nodes; and the output layer has 9 nodes in total, representing the output results for 9 expressions.
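The following self-contained numpy sketch trains such a 10-10-9 sigmoid network with plain gradient descent; the layer sizes come from the text, while the learning rate, epoch count, and initialization are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BPClassifier:
    """Three-layer BP network: 10 input, 10 hidden, 9 output nodes."""
    def __init__(self, n_in=10, n_hid=10, n_out=9, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hid))
        self.W2 = rng.normal(0.0, 0.1, (n_hid, n_out))
        self.lr = lr

    def forward(self, X):
        self.h = sigmoid(X @ self.W1)            # hidden activations
        self.out = sigmoid(self.h @ self.W2)     # actual output x
        return self.out

    def backward(self, X, y):
        # error E = sum((x - y)^2); chain rule through the sigmoids
        d_out = 2.0 * (self.out - y) * self.out * (1.0 - self.out)
        d_hid = (d_out @ self.W2.T) * self.h * (1.0 - self.h)
        self.W2 -= self.lr * self.h.T @ d_out
        self.W1 -= self.lr * X.T @ d_hid

    def fit(self, X, y, epochs=1000):
        for _ in range(epochs):
            self.forward(X)
            self.backward(X, y)
        return self

# usage: X is (n, 10) feature vectors, y is (n, 9) one-hot expression codes
# model = BPClassifier().fit(X_train, y_train)
# pred = model.forward(X_test).argmax(axis=1)
```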
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may also be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the flowcharts and block diagrams in the drawings show the possible architectures, functions, and operations of apparatuses, methods, and computer program products according to multiple embodiments of the present invention.
In summary, an embodiment of the present invention provides a vision-based facial expression recognition method comprising the following steps:
acquiring a face image sample set, and preprocessing the face image sample set;
extracting features of the preprocessed face image sample set to obtain an expression feature sample set;
introducing the expression feature sample set into an expression recognition classifier for learning and training to obtain a trained expression recognition classifier;
putting test samples into the trained expression recognition classifier, the expression recognition classifier evaluating the test samples.
In the above embodiment, the vision-based facial expression recognition method includes four steps in total. The first step is to acquire the face image sample set and preprocess it, the preprocessing method being to convert the color images in RGB space into grayscale images: since the color of each pixel of a color image is determined by the three components R, G, and B, each with 256 possible values, one pixel can vary over more than 16 million colors; such a data volume places a burden on storage and computation, whereas a pixel of a grayscale image varies over only 256 cases. Converting the face image samples into grayscale images therefore greatly reduces the subsequent image-processing computation, while the grayscale image still reflects the same global and local distributions of chroma and brightness levels of the whole image as the color image.
The second step is to extract the features of the preprocessed face image sample set and obtain the expression feature sample set. To recognize the categories of facial expressions and their changes, using expression feature samples is a markedly effective strategy.
The third step is to introduce the expression feature sample set into the expression recognition classifier for learning and training and obtain a trained expression recognition classifier. The classifier may be a BP neural network classifier, a learning model based on an artificial neural network structure that learns by setting up multiple hidden layers and continually revising the network weights through backpropagation. In the present invention, 5,000 face pictures are used as the training sample set and another 5,000 face pictures are selected as the test sample set; to ensure training quality, the result of each training round is compared with that of the previous round: if the error increases, the weights are adjusted in the negative direction, and if it decreases, in the positive direction, continually improving recognition accuracy. When the predetermined number of training iterations or the preset convergence condition is reached, the training process is complete.
The last step is to put the test samples into the trained expression recognition classifier, the expression recognition classifier evaluating the test samples.
In this embodiment, the present invention distinguishes different degrees of the same expression more accurately for facial expression recognition and realizes it in four steps: image preprocessing converts RGB space into grayscale images; feature extraction uses the texture-analysis technique of the gray-level co-occurrence matrix together with spatiotemporal expression extraction based on difference images; and the BP neural network method serves as the classifier. The BP network has three layers, taking the feature vector as input and the finely classified expression as the coded output; through continuous learning and training a learning model is established, and deep-learning techniques are combined to build a convolutional neural network, further improving the accuracy of expression recognition.
The above are merely preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may be subject to various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within its scope of protection.
It is evident to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments and can be realized in other specific forms without departing from its spirit or essential characteristics. The embodiments should therefore be regarded in every respect as exemplary and non-restrictive, and the scope of the invention is defined by the appended claims rather than by the above description; all changes falling within the meaning and range of equivalency of the claims are therefore intended to be embraced in the invention. No reference sign in the claims shall be construed as limiting the claim concerned.

Claims (10)

  1. A vision-based facial expression recognition method, characterized by comprising the following steps:
    acquiring a face image sample set, and preprocessing the face image sample set;
    extracting features of the preprocessed face image sample set to obtain an expression feature sample set;
    introducing the expression feature sample set into an expression recognition classifier for learning and training to obtain a trained expression recognition classifier;
    putting test samples into the trained expression recognition classifier, the expression recognition classifier evaluating the test samples.
  2. The vision-based facial expression recognition method according to claim 1, characterized in that acquiring the face image sample set and preprocessing the face image sample set comprises: converting the color images of the face image sample set into grayscale images:
    g = 0.299R + 0.587G + 0.114B
    where R, G, and B are the color components of each pixel of the color image, and g is the converted gray value.
  3. The vision-based facial expression recognition method according to claim 1, characterized in that the expression feature samples of the expression feature sample set, obtained by extracting features of the preprocessed face image sample set, consist of facial geometric features.
  4. The vision-based facial expression recognition method according to claim 3, characterized in that the facial geometric features include the eyes, eyebrows, nose and mouth.
  5. The vision-based facial expression recognition method according to claim 3 or 4, characterized in that changes in the expression features include changes in the texture shape of the facial geometric features.
  6. The vision-based facial expression recognition method according to claim 3, characterized in that the facial geometric features of the expression feature samples of the expression feature sample set are extracted via the spatiotemporal features of difference images.
  7. The vision-based facial expression recognition method according to claim 1, characterized in that the expression recognition classifier is a BP neural network classifier.
  8. The vision-based facial expression recognition method according to claim 7, characterized in that the BP neural network classifier takes the expression feature samples as input, performs linear combination in the BP neural network, and passes the result of each neuron through a nonlinear activation function, each neuron obtaining one computed result.
  9. The vision-based facial expression recognition method according to claim 8, characterized in that the working process of the BP neural network classifier includes two stages:
    forward propagation: the expression feature sample set is first fed into the input layer, weighted computation proceeds through the hidden layer, and finally the output layer produces the output, wherein, in the processing of each layer, the previous layer is equivalent to the input layer of the next;
    backpropagation: upon reaching the output layer, the result is compared with the given label to judge whether the convergence condition is met; if so, the training process ends; if not, the error is propagated backwards layer by layer and the weights are adjusted in turn until the convergence condition is satisfied.
  10. The vision-based facial expression recognition method according to claim 8, characterized in that the activation function is:
    f(x) = 1 / (1 + e^(−x))
    where x is the actual output.
PCT/CN2022/079035 2021-03-03 2022-03-03 A vision-based facial expression recognition method WO2022184133A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110234838.7 2021-03-03
CN202110234838.7A CN112836680A (zh) 2021-03-03 2021-03-03 A vision-based facial expression recognition method

Publications (1)

Publication Number Publication Date
WO2022184133A1 (zh)

Family

ID=75934489

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/079035 WO2022184133A1 (zh) 2021-03-03 2022-03-03 A vision-based facial expression recognition method

Country Status (2)

Country Link
CN (1) CN112836680A (zh)
WO (1) WO2022184133A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116825365A (zh) * 2023-08-30 2023-09-29 安徽爱学堂教育科技有限公司 A mental-health analysis method based on multi-angle micro-expressions
CN117038055A (zh) * 2023-07-05 2023-11-10 广州市妇女儿童医疗中心 A pain assessment method, system, apparatus, and medium based on multiple expert models
CN117275070A (zh) * 2023-10-11 2023-12-22 中邮消费金融有限公司 A micro-expression-based video interview-signing method and system

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112836680A (zh) * 2021-03-03 2021-05-25 郑州航空工业管理学院 A vision-based facial expression recognition method
CN115331292B (zh) * 2022-08-17 2023-04-14 武汉元紫东科技有限公司 An emotion recognition method and apparatus based on facial images, and a computer storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180211102A1 (en) * 2017-01-25 2018-07-26 Imam Abdulrahman Bin Faisal University Facial expression recognition
CN110110653A (zh) * 2019-04-30 2019-08-09 上海迥灵信息技术有限公司 An emotion recognition method, apparatus, and storage medium based on multi-feature fusion
CN112836680A (zh) * 2021-03-03 2021-05-25 郑州航空工业管理学院 A vision-based facial expression recognition method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108304826A (zh) * 2018-03-01 2018-07-20 河海大学 A facial expression recognition method based on convolutional neural networks

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180211102A1 (en) * 2017-01-25 2018-07-26 Imam Abdulrahman Bin Faisal University Facial expression recognition
CN110110653A (zh) * 2019-04-30 2019-08-09 上海迥灵信息技术有限公司 An emotion recognition method, apparatus, and storage medium based on multi-feature fusion
CN112836680A (zh) * 2021-03-03 2021-05-25 郑州航空工业管理学院 A vision-based facial expression recognition method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIANG, RUIQI: "Facial Emotion Recognition Based on Neural Network", DIANZI ZHIZUO - PRACTICAL ELECTRONICS, SHIJIE ZHISHI CHUBANSHE, CN, no. 20, 31 October 2018 (2018-10-31), CN , pages 46 - 48, XP009539394, ISSN: 1006-5059 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117038055A (zh) * 2023-07-05 2023-11-10 广州市妇女儿童医疗中心 A pain assessment method, system, apparatus, and medium based on multiple expert models
CN117038055B (zh) * 2023-07-05 2024-04-02 广州市妇女儿童医疗中心 A pain assessment method, system, apparatus, and medium based on multiple expert models
CN116825365A (zh) * 2023-08-30 2023-09-29 安徽爱学堂教育科技有限公司 A mental-health analysis method based on multi-angle micro-expressions
CN116825365B (zh) * 2023-08-30 2023-11-28 安徽爱学堂教育科技有限公司 A mental-health analysis method based on multi-angle micro-expressions
CN117275070A (zh) * 2023-10-11 2023-12-22 中邮消费金融有限公司 A micro-expression-based video interview-signing method and system

Also Published As

Publication number Publication date
CN112836680A (zh) 2021-05-25

Similar Documents

Publication Publication Date Title
WO2022184133A1 (zh) A vision-based facial expression recognition method
US11341769B2 (en) Face pose analysis method, electronic device, and storage medium
CN109344693B (zh) A facial multi-region fusion expression recognition method based on deep learning
Rafique et al. Statistical multi-objects segmentation for indoor/outdoor scene detection and classification via depth images
CN106960202B (zh) A smile recognition method based on fusion of visible-light and infrared images
CN108229296B (zh) Face skin attribute recognition method and apparatus, electronic device, and storage medium
WO2018107979A1 (zh) A multi-pose facial landmark detection method based on cascade regression
WO2020103700A1 (zh) A micro-expression-based image recognition method and apparatus, and related devices
WO2017219391A1 (zh) A face recognition system based on three-dimensional data
CN107169455B (zh) A face attribute recognition method based on deep local features
CN108629336B (zh) A facial-attractiveness computation method based on facial landmark recognition
CN112766160A (zh) A face replacement method based on multi-level attribute encoders and an attention mechanism
US11900557B2 (en) Three-dimensional face model generation method and apparatus, device, and medium
CN112800903B (zh) A dynamic expression recognition method and system based on a spatiotemporal graph convolutional neural network
WO2021139557A1 (zh) A portrait line-drawing generation method and system, and a drawing robot
CN108830237B (zh) A facial expression recognition method
CN107832740B (zh) A teaching-quality assessment method and system for distance teaching
US20230044644A1 (en) Large-scale generation of photorealistic 3d models
KR102400609B1 (ko) A method and apparatus for synthesizing background and face using a deep-learning network
WO2021127916A1 (zh) A facial emotion recognition method, smart device, and computer-readable storage medium
CN107194364B (zh) A Huffman-LBP multi-pose face recognition method based on a divide-and-conquer strategy
CN109948569B (zh) A three-dimensional hybrid expression recognition method using a particle-filter framework
CN107895154B (zh) A method and system for forming a facial-expression intensity computation model
CN116681579A (zh) A real-time video face replacement method, medium, and system
CN108399358B (zh) A method and system for displaying expressions in video chat

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22762595

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22762595

Country of ref document: EP

Kind code of ref document: A1