WO2019085329A1 - Personality analysis method, device, and storage medium based on recurrent neural network - Google Patents


Info

Publication number
WO2019085329A1
WO2019085329A1 (PCT/CN2018/076120)
Authority
WO
WIPO (PCT)
Prior art keywords
sample
character
model
personality
neural network
Prior art date
Application number
PCT/CN2018/076120
Other languages
English (en)
French (fr)
Inventor
陈林
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2019085329A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Definitions

  • The present application relates to the field of computer vision technology, and in particular to a personality analysis method, device, and storage medium based on a recurrent neural network.
  • The present application provides a personality analysis method, device, and storage medium that can objectively and accurately determine a person's personality type by recognizing and analyzing a facial video of the person.
  • To this end, the present application provides a personality analysis method based on a recurrent neural network, the method comprising:
  • Sample preparation step: collecting facial videos of a predetermined duration from persons of different personality types as samples, and labeling each sample with a personality type;
  • Sample feature extraction step: extracting the feature vector of the image sequence of each sample;
  • Model construction step: constructing a recurrent neural network model with a Softmax classifier as the output layer;
  • Model training step: defining a Softmax loss function, using the personality annotation and the image-sequence feature vector of each sample as sample data to train the recurrent neural network model, outputting for each sample the probability value of each personality type, and updating the training parameters of the model at each iteration, so that the training parameters that minimize the Softmax loss function are taken as the final parameters, yielding a personality analysis model; and
  • Model application step: collecting a facial video of a predetermined duration of the object to be analyzed, analyzing this facial video with the personality analysis model to obtain the probability value of each personality type for the object, and taking the personality type with the largest probability value as the personality type of the object to be analyzed.
  • The present application also provides a computing device comprising a memory and a processor, the memory containing a personality analysis program.
  • The computing device is directly or indirectly connected to an imaging device, which transmits the captured facial video to the computing device.
  • When the processor of the computing device executes the personality analysis program in the memory, the following steps are implemented:
  • Sample preparation step: collecting facial videos of a predetermined duration from persons of different personality types as samples, and labeling each sample with a personality type;
  • Sample feature extraction step: extracting the feature vector of the image sequence of each sample;
  • Model construction step: constructing a recurrent neural network model with a Softmax classifier as the output layer;
  • Model training step: defining a Softmax loss function, using the personality annotation and the image-sequence feature vector of each sample as sample data to train the recurrent neural network model, outputting for each sample the probability value of each personality type, and updating the training parameters of the model at each iteration, so that the training parameters that minimize the Softmax loss function are taken as the final parameters, yielding a personality analysis model; and
  • Model application step: collecting a facial video of a predetermined duration of the object to be analyzed, analyzing this facial video with the personality analysis model to obtain the probability value of each personality type for the object, and taking the personality type with the largest probability value as the personality type of the object to be analyzed.
  • The present application further provides a computer-readable storage medium containing a personality analysis program which, when executed by a processor, implements the following steps:
  • Sample preparation step: collecting facial videos of a predetermined duration from persons of different personality types as samples, and labeling each sample with a personality type;
  • Sample feature extraction step: extracting the feature vector of the image sequence of each sample;
  • Model construction step: constructing a recurrent neural network model with a Softmax classifier as the output layer;
  • Model training step: defining a Softmax loss function, using the personality annotation and the image-sequence feature vector of each sample as sample data to train the recurrent neural network model, outputting for each sample the probability value of each personality type, and updating the training parameters of the model at each iteration, so that the training parameters that minimize the Softmax loss function are taken as the final parameters, yielding a personality analysis model; and
  • Model application step: collecting a facial video of a predetermined duration of the object to be analyzed, analyzing this facial video with the personality analysis model to obtain the probability value of each personality type for the object, and taking the personality type with the largest probability value as the personality type of the object to be analyzed.
  • The method, device, and storage medium for personality analysis based on a recurrent neural network provided by the present application train the recurrent neural network model on a large number of facial videos of persons of different personality types, updating the model's training parameters according to the Softmax loss function so that the parameters minimizing the loss function are taken as the final parameters, yielding a personality analysis model.
  • A facial video of the object to be analyzed is then collected, its feature vector extracted and input into the trained personality analysis model to obtain the probability value of each personality type for the object; the personality type with the largest probability value is taken as the personality type of the object.
  • By using this application, the personality type of a person can be analyzed objectively and effectively, while also reducing labor cost and saving time.
  • FIG. 1 is an application environment diagram of a first preferred embodiment of the recurrent-neural-network-based personality analysis method of the present application.
  • FIG. 2 is an application environment diagram of a second preferred embodiment of the recurrent-neural-network-based personality analysis method of the present application.
  • FIG. 3 is a block diagram of the modules of the personality analysis program of FIG. 1 and FIG. 2.
  • FIG. 4 is a flowchart of a preferred embodiment of the recurrent-neural-network-based personality analysis method of the present application.
  • Referring to FIG. 1, it is an application environment diagram of the first preferred embodiment of the recurrent-neural-network-based personality analysis method of the present application.
  • In this embodiment, the imaging device 3 is connected to the computing device 1 via the network 2; the imaging device 3 captures a facial video of a person and transmits it to the computing device 1 via the network 2. The computing device 1 analyzes the video using the personality analysis program 10 provided by the present application and outputs the person's probability value for each personality type, for reference.
  • The computing device 1 may be a terminal device having storage and computing functions, such as a server, smart phone, tablet computer, portable computer, or desktop computer.
  • the computing device 1 includes a memory 11, a processor 12, a network interface 13, and a communication bus 14.
  • The camera device 3 is installed in a specific place, such as a counseling room, office, or monitored area, to capture facial videos of a predetermined duration from persons of different personality types, and then transmits the captured video to the memory 11 through the network 2.
  • the network interface 13 may include a standard wired interface, a wireless interface (such as a WI-FI interface).
  • Communication bus 14 is used to implement connection communication between these components.
  • the memory 11 includes at least one type of readable storage medium.
  • the at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card type memory, or the like.
  • the readable storage medium may be an internal storage unit of the computing device 1, such as a hard disk of the computing device 1.
  • In other embodiments, the readable storage medium may also be an external memory of the computing device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the computing device 1.
  • In this embodiment, the memory 11 stores the program code of the personality analysis program 10, the video captured by the imaging device 3, the data used when the processor 12 executes that program code, and the final output data.
  • In some embodiments, the processor 12 may be a Central Processing Unit (CPU), a microprocessor, or another data processing chip.
  • FIG. 1 shows only the computing device 1 with components 11-14, but it should be understood that not all illustrated components are required; more or fewer components may be implemented instead.
  • Optionally, the computing device 1 may further include a user interface.
  • The user interface may include an input unit such as a keyboard, a voice input device such as a microphone or another device with voice recognition, and a voice output device such as a speaker or headphones.
  • Optionally, the user interface may also include standard wired and wireless interfaces.
  • the computing device 1 may also include a display.
  • In some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like.
  • the display is used to display information processed by the computing device 1 and a visualized user interface.
  • the computing device 1 further comprises a touch sensor.
  • the area provided by the touch sensor for the user to perform a touch operation is referred to as a touch area.
  • the touch sensor described herein may be a resistive touch sensor, a capacitive touch sensor, or the like.
  • the touch sensor includes not only a contact type touch sensor but also a proximity type touch sensor or the like.
  • the touch sensor may be a single sensor or a plurality of sensors arranged, for example, in an array.
  • a user such as a counselor, can initiate the character personality analysis program 10 by touch.
  • The computing device 1 may also include radio frequency (RF) circuits, sensors, audio circuits, and the like, which are not described in detail here.
  • Referring to FIG. 2, it is an application environment diagram of the second preferred embodiment of the recurrent-neural-network-based personality analysis method of the present application.
  • The object to be analyzed undergoes the personality analysis process through the terminal 3: the camera device 30 of the terminal 3 captures the facial video of the object to be analyzed and transmits it via the network 2 to the computing device 1. The processor 12 of the computing device 1 executes the program code of the personality analysis program 10 stored in the memory 11, analyzes the video, and outputs the object's probability value for each personality type, for reference by the object to be analyzed or by a counselor.
  • The terminal 3 can be a terminal device having storage and computing functions, such as a smart phone, tablet computer, portable computer, or desktop computer.
  • In some scenarios, the object to be analyzed may deliberately hide their true intentions, making it difficult to analyze their personality through questionnaires and the like; purely human observation also inevitably lacks objectivity.
  • In such cases, by analyzing a video composed of a large number of facial images of the object with the personality analysis program 10, fine-grained features can be captured and an objective result obtained for reference.
  • The personality analysis program 10 of FIG. 1 and FIG. 2, when executed by the processor 12, implements the following steps:
  • Sample preparation step: collecting facial videos of a predetermined duration from persons of different personality types as samples, and labeling each sample with a personality type;
  • Sample feature extraction step: extracting the feature vector of the image sequence of each sample;
  • Model construction step: constructing a recurrent neural network model with a Softmax classifier as the output layer;
  • Model training step: defining a Softmax loss function, using the personality annotation and the image-sequence feature vector of each sample as sample data to train the recurrent neural network model, outputting for each sample the probability value of each personality type, and updating the training parameters of the model at each iteration, so that the training parameters that minimize the Softmax loss function are taken as the final parameters, yielding a personality analysis model; and
  • Model application step: collecting a facial video of a predetermined duration of the object to be analyzed, analyzing this facial video with the personality analysis model to obtain the probability value of each personality type for the object, and taking the personality type with the largest probability value as the personality type of the object to be analyzed.
  • Referring to FIG. 3, it is a block diagram of the modules of the personality analysis program 10 in FIG. 1 and FIG. 2.
  • In this embodiment, the personality analysis program 10 is divided into a plurality of modules, which are stored in the memory 11 and executed by the processor 12 to carry out the present application.
  • A module, as referred to in this application, is a series of computer program instruction segments capable of performing a particular function.
  • The personality analysis program 10 can be divided into: an acquisition module 110, an extraction module 120, a training module 130, and an analysis module 140.
  • The acquisition module 110 is configured to acquire facial videos of a predetermined duration from persons of different personality types.
  • The facial video may be captured by the imaging device 3 of FIG. 1 or the camera device 30 of FIG. 2, or may be a facial video of a person with a distinctive personality selected from online sources or a video library.
  • Each sample video used for model training is labeled with a personality type, such as "lively", "introverted", or "easygoing", and the personality-type annotation is mapped to a one-hot vector.
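The mapping from a personality-type annotation to a one-hot vector can be sketched as follows (the label set is an illustrative example, not a fixed part of the method):

```python
import numpy as np

PERSONALITY_TYPES = ["lively", "introverted", "easygoing"]  # example labels

def to_one_hot(label, types=PERSONALITY_TYPES):
    """Map a personality-type annotation to a one-hot vector:
    the bit at the label's index is 1, all remaining bits are 0."""
    vec = np.zeros(len(types), dtype=np.float32)
    vec[types.index(label)] = 1.0
    return vec

print(to_one_hot("introverted"))
```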
  • The extraction module 120 is configured to extract the feature vectors of the facial video image sequence. The facial video acquired by the acquisition module 110 is converted into an image sequence; each frame is preprocessed (normalization, noise removal, etc.); then low-level features such as the HOG feature vector or the LBP feature vector of the preprocessed image sequence are extracted, or a convolutional neural network is used directly to extract feature vectors from the raw image sequence.
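As an illustrative sketch of one of the low-level features mentioned above, a basic 8-neighbor Local Binary Pattern descriptor can be computed per frame and summarized as a 256-bin histogram; this is a simplified implementation for illustration, not the patent's exact extractor:

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 8-neighbour Local Binary Pattern histogram of a grayscale frame.

    Each interior pixel is compared with its 8 neighbours; the comparison
    bits form a code in [0, 255], and the frame is summarized as the
    normalized 256-bin histogram of these codes."""
    g = np.asarray(gray, dtype=float)
    c = g[1:-1, 1:-1]                       # interior pixels
    # neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.int64)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy : g.shape[0] - 1 + dy, 1 + dx : g.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.int64) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()                # normalized 256-dim feature vector

frame = np.random.default_rng(0).integers(0, 256, size=(64, 64))
feat = lbp_histogram(frame)
print(feat.shape)
```

Concatenating such per-frame vectors over time yields the kind of image-sequence feature the recurrent model consumes.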
  • The training module 130 is configured to optimize the recurrent neural network model through iterative training.
  • The facial video image sequence consists of a series of single-frame images arranged in chronological order; therefore, the present application uses a Long Short-Term Memory (LSTM) model, a kind of recurrent neural network. Since the application uses the LSTM model to output the probability value of each personality type for the analyzed object, the LSTM model uses a Softmax classifier as its output layer.
  • When constructing the LSTM model, the network shape is first defined according to the length of the facial video image sequence acquired by the acquisition module 110 and the feature-vector dimension of each frame; the number of recurrent layers and the number of neurons in each layer are set, and the number of neurons of the Softmax classifier is set according to the number of personality types.
  • For example, assuming the predetermined duration of the facial video is 3 minutes and the number of frames displayed per minute is m, the image sequence length of each video is 3*m. Assuming the image feature vector has dimension k, the shape of the LSTM can be expressed with the code of the tflearn deep learning library as follows:
  • net=tflearn.input_data(shape=[None,3*m,k])
  • Next, the training parameters are set: the number of iterations is assumed to be 100, the gradient optimization algorithm is Adam, and the validation-set fraction is 0.1; the LSTM model training can likewise be expressed with the tflearn deep learning library.
  • The LSTM model is trained using the feature vectors of the sample image sequences and the one-hot vectors of the personality annotations; the training parameters of the LSTM model are updated at each iteration so that the parameters minimizing the Softmax loss function are taken as the final parameters, yielding the personality analysis model.
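The training loop above can be sketched in plain NumPy (a toy stand-in for the tflearn LSTM: a single linear layer plus Softmax, trained with the Adam optimizer to minimize the Softmax cross-entropy loss over 100 iterations; the data are random placeholders, not real facial-video features):

```python
import numpy as np

rng = np.random.default_rng(42)
n_types, dim = 3, 8                        # personality types, feature dimension
X = rng.normal(size=(60, dim))             # placeholder sample feature vectors
y = rng.integers(0, n_types, size=60)      # placeholder personality labels
Y = np.eye(n_types)[y]                     # one-hot personality annotations

W = np.zeros((dim, n_types))               # training parameters of the toy model
m, v = np.zeros_like(W), np.zeros_like(W)  # Adam moment estimates
beta1, beta2, lr, eps = 0.9, 0.999, 0.05, 1e-8

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def loss(W):
    p = softmax(X @ W)
    return -np.mean(np.sum(Y * np.log(p + 1e-12), axis=1))  # Softmax cross-entropy

first = loss(W)
for t in range(1, 101):                    # 100 iterations, as in the text
    grad = X.T @ (softmax(X @ W) - Y) / len(X)
    m = beta1 * m + (1 - beta1) * grad     # Adam update of the parameters
    v = beta2 * v + (1 - beta2) * grad**2
    W -= lr * (m / (1 - beta1**t)) / (np.sqrt(v / (1 - beta2**t)) + eps)
print(first, loss(W))                      # the loss decreases over training
```

The parameters reached when the loss stops improving play the role of the "final parameters" of the personality analysis model.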
  • The analysis module 140 is configured to analyze the probability value of each personality type and obtain the personality type of the object to be analyzed. The facial video of the predetermined duration of the object, obtained by the acquisition module 110, is converted into an image sequence; the feature vector of the image sequence is extracted and input into the trained personality analysis model, which outputs the probability value of each personality type for the object; the personality type with the largest probability value is taken as the personality type of the object to be analyzed.
  • Referring to FIG. 4, it is a flowchart of a preferred embodiment of the recurrent-neural-network-based personality analysis method of the present application.
  • the computing device 1 is started.
  • The processor 12 executes the personality analysis program 10 stored in the memory 11 to implement the following steps:
  • Step S10: the acquisition module 110 acquires facial videos of a predetermined duration from persons of different personality types and labels each with a personality type.
  • The facial video may be captured by the imaging device 3 of FIG. 1 or the camera device 30 of FIG. 2, or may be a facial video of a person with a distinctive personality selected from online sources or a video library.
  • The personality-type annotation is expressed as a one-hot vector, that is, the bit corresponding to the sample's type is 1 and the remaining bits are all 0.
  • Step S20: the feature vectors of the facial video image sequence are extracted by the extraction module 120.
  • The facial video is converted into an image sequence; the frames are preprocessed (normalization, noise removal, etc.); the features of each preprocessed frame are extracted and then filtered using a feature-selection algorithm. If a sample has n features, there are 2^n - 1 possible non-empty feature subsets; exhaustively evaluating all of them is computationally infeasible when n is large. Therefore, the selection of features is achieved by heuristic algorithms.
  • The feature-selection algorithm here may be forward/backward search, filter feature selection, or another available feature-selection algorithm.
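A greedy forward search of the kind mentioned avoids enumerating all 2^n - 1 subsets by adding, at each round, the single feature that most improves a scoring function. The feature names and the scoring function below are hypothetical toy stand-ins:

```python
def forward_select(features, score, k):
    """Greedy forward search: grow the selected subset one feature at a time,
    keeping the candidate that maximizes score(subset) at each round.
    Evaluates on the order of n*k subsets instead of all 2**n - 1."""
    selected = []
    remaining = list(features)
    for _ in range(k):
        best = max(remaining, key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break                          # no candidate improves the score
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy score: hypothetical per-feature utilities with a diminishing-returns penalty.
utility = {"hog": 0.5, "lbp": 0.4, "flow": 0.2, "color": 0.05}
toy_score = lambda subset: sum(utility[f] for f in subset) - 0.1 * len(subset) ** 2

print(forward_select(utility, toy_score, k=4))
```

The search stops as soon as no remaining feature improves the score, so it may return fewer than k features.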
  • This embodiment may extract low-level features such as the HOG feature vector or the LBP feature vector of the image sequence, or directly extract the feature vector of the raw image sequence using a convolutional neural network.
  • Step S30: the training module 130 constructs a recurrent neural network model according to the image sequence length, the feature-vector dimension, and the number of personality types.
  • The number of recurrent layers and the number of neurons in each layer are set, and the number of neurons of the Softmax classifier serving as the network output layer is set according to the number of personality types.
  • Step S40: the training module 130 optimizes the recurrent neural network model using the feature vectors of the image sequences and the personality annotations of the facial videos, obtaining a trained personality analysis model.
  • The feature vectors extracted by the extraction module 120 and the one-hot vectors mapped from the personality annotations of the facial videos acquired by the acquisition module 110 serve as sample data for training the recurrent neural network model, yielding a trained personality analysis model.
  • Step S50: the acquisition module 110 collects a facial video of a predetermined duration of the object to be analyzed. This facial video is captured by the imaging device 3 of FIG. 1 or the camera device 30 of FIG. 2.
  • Step S60: the feature vector of the facial video image sequence of the object to be analyzed is extracted by the extraction module 120.
  • the feature vector is one or more of a HOG feature vector, an LBP feature vector, and a feature vector extracted by a convolutional neural network.
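As an illustrative sketch of the HOG-style option among these feature vectors, a minimal global orientation histogram can be computed per frame (a simplified descriptor without the cell/block normalization of full HOG):

```python
import numpy as np

def orientation_histogram(gray, bins=9):
    """Minimal HOG-style descriptor: histogram of gradient orientations over
    the whole frame, weighted by gradient magnitude (no cells or blocks)."""
    g = np.asarray(gray, dtype=float)
    gy, gx = np.gradient(g)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned orientation in [0, pi)
    idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    hist = np.bincount(idx.ravel(), weights=mag.ravel(), minlength=bins)
    total = hist.sum()
    return hist / total if total > 0 else hist     # L1-normalized feature vector

frame = np.random.default_rng(1).integers(0, 256, size=(32, 32))
print(orientation_histogram(frame).shape)
```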
  • Step S70: the analysis module 140 obtains the personality type of the object to be analyzed from the extracted feature vectors.
  • The extracted feature vector of the facial video image sequence of the object to be analyzed is input into the trained personality analysis model, which outputs the probability value of each personality type for the object; the personality type with the largest probability value is taken as the personality type of the object to be analyzed.
  • The embodiment of the present application further provides a computer-readable storage medium, which may be any one of, or any combination of, a hard disk, multimedia card, SD card, flash memory card, SMC, read-only memory (ROM), erasable programmable read-only memory (EPROM), portable compact disc read-only memory (CD-ROM), USB memory, and the like.
  • The computer-readable storage medium includes sample videos, the recurrent neural network model, and the personality analysis program 10; when the personality analysis program 10 is executed by the processor, the following operations are performed:
  • Sample preparation step: collecting facial videos of a predetermined duration from persons of different personality types as samples, and labeling each sample with a personality type;
  • Sample feature extraction step: extracting the feature vector of the image sequence of each sample;
  • Model construction step: constructing a recurrent neural network model with a Softmax classifier as the output layer;
  • Model training step: defining a Softmax loss function, using the personality annotation and the image-sequence feature vector of each sample as sample data to train the recurrent neural network model, outputting for each sample the probability value of each personality type, and updating the training parameters of the model at each iteration, so that the training parameters that minimize the Softmax loss function are taken as the final parameters, yielding a personality analysis model; and
  • Model application step: collecting a facial video of a predetermined duration of the object to be analyzed, analyzing this facial video with the personality analysis model to obtain the probability value of each personality type for the object, and taking the personality type with the largest probability value as the personality type of the object to be analyzed.
  • The specific implementation of the computer-readable storage medium of the present application is substantially the same as the above embodiments of the recurrent-neural-network-based personality analysis method and of the computing device 1, and is not repeated here.
  • The technical solution may be embodied in a storage medium, such as a disk, including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods described in the various embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present application provides a personality analysis method, device, and computer-readable storage medium based on a recurrent neural network. The method includes the following steps: collecting sample videos and annotating personality types; extracting feature vectors of the sample video image sequences; constructing a recurrent neural network model with a Softmax classifier as the output layer; training the recurrent neural network model with the feature vectors and personality-type annotations, optimizing the training parameters to obtain a personality analysis model; collecting a facial video of the object to be analyzed and extracting the feature vector of its image sequence; and inputting the extracted feature vector into the personality analysis model to obtain the probability value of each personality type for the object, taking the personality type with the largest probability value as the personality type of the object to be analyzed. With this application, a person's personality can be analyzed objectively.

Description

Personality analysis method, device, and storage medium based on recurrent neural network
Claim of Priority
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on November 2, 2017, with application number 201711061207.X and titled "Personality analysis method, device, and storage medium based on recurrent neural network", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of computer vision technology, and in particular to a personality analysis method, device, and storage medium based on a recurrent neural network.
Background
Character is an important component of personality. Understanding a person's character can improve the efficiency of interpersonal communication, and it also helps people choose rational ways of thinking and form good personality traits.
At present, personality analysis is generally carried out through questionnaires or spoken question-and-answer sessions, which consume a great deal of time and human resources. If the respondent is affected by the environment or does not cooperate actively with the analysis process, the results are often neither accurate nor objective.
Summary of the Invention
In view of the above, the present application provides a personality analysis method, device, and storage medium that can objectively and accurately determine a person's personality type by recognizing and analyzing a facial video of the person.
To achieve the above object, the present application provides a personality analysis method based on a recurrent neural network, the method comprising:
Sample preparation step: collecting facial videos of a predetermined duration from persons of different personality types as samples, and labeling each sample with a personality type;
Sample feature extraction step: extracting the feature vector of the image sequence of each sample;
Model construction step: constructing a recurrent neural network model with a Softmax classifier as the output layer;
Model training step: defining a Softmax loss function, using the personality annotation and the image-sequence feature vector of each sample as sample data to train the recurrent neural network model, outputting for each sample the probability value of each personality type, and updating the training parameters of the model at each iteration, so that the training parameters that minimize the Softmax loss function are taken as the final parameters, yielding a personality analysis model; and
Model application step: collecting a facial video of a predetermined duration of the object to be analyzed, analyzing this facial video with the personality analysis model to obtain the probability value of each personality type for the object, and taking the personality type with the largest probability value as the personality type of the object to be analyzed.
The present application also provides a computing device comprising a memory and a processor, the memory containing a personality analysis program. The computing device is directly or indirectly connected to an imaging device, which transmits the captured facial video to the computing device. When the processor of the computing device executes the personality analysis program in the memory, the following steps are implemented:
Sample preparation step: collecting facial videos of a predetermined duration from persons of different personality types as samples, and labeling each sample with a personality type;
Sample feature extraction step: extracting the feature vector of the image sequence of each sample;
Model construction step: constructing a recurrent neural network model with a Softmax classifier as the output layer;
Model training step: defining a Softmax loss function, using the personality annotation and the image-sequence feature vector of each sample as sample data to train the recurrent neural network model, outputting for each sample the probability value of each personality type, and updating the training parameters of the model at each iteration, so that the training parameters that minimize the Softmax loss function are taken as the final parameters, yielding a personality analysis model; and
Model application step: collecting a facial video of a predetermined duration of the object to be analyzed, analyzing this facial video with the personality analysis model to obtain the probability value of each personality type for the object, and taking the personality type with the largest probability value as the personality type of the object to be analyzed.
In addition, to achieve the above object, the present application further provides a computer-readable storage medium containing a personality analysis program which, when executed by a processor, implements the following steps:
Sample preparation step: collecting facial videos of a predetermined duration from persons of different personality types as samples, and labeling each sample with a personality type;
Sample feature extraction step: extracting the feature vector of the image sequence of each sample;
Model construction step: constructing a recurrent neural network model with a Softmax classifier as the output layer;
Model training step: defining a Softmax loss function, using the personality annotation and the image-sequence feature vector of each sample as sample data to train the recurrent neural network model, outputting for each sample the probability value of each personality type, and updating the training parameters of the model at each iteration, so that the training parameters that minimize the Softmax loss function are taken as the final parameters, yielding a personality analysis model; and
Model application step: collecting a facial video of a predetermined duration of the object to be analyzed, analyzing this facial video with the personality analysis model to obtain the probability value of each personality type for the object, and taking the personality type with the largest probability value as the personality type of the object to be analyzed.
The personality analysis method, device, and storage medium based on a recurrent neural network provided by the present application train the recurrent neural network model on a large number of facial videos of persons of different personality types, updating the model's training parameters according to the Softmax loss function so that the parameters minimizing the loss function are taken as the final parameters, yielding a personality analysis model. Afterwards, a facial video of the object to be analyzed is collected, its feature vector extracted and input into the trained personality analysis model to obtain the probability value of each personality type for the object; the personality type with the largest probability value is taken as the personality type of the object. With this application, a person's personality type can be analyzed objectively and effectively, while also reducing labor cost and saving time.
Brief Description of the Drawings
FIG. 1 is an application environment diagram of a first preferred embodiment of the recurrent-neural-network-based personality analysis method of the present application.
FIG. 2 is an application environment diagram of a second preferred embodiment of the recurrent-neural-network-based personality analysis method of the present application.
FIG. 3 is a block diagram of the modules of the personality analysis program in FIG. 1 and FIG. 2.
FIG. 4 is a flowchart of a preferred embodiment of the recurrent-neural-network-based personality analysis method of the present application.
The realization of the objectives, functional features, and advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description of the Embodiments
The principles and spirit of the present application will be described below with reference to several specific embodiments. It should be understood that the specific embodiments described herein are only used to explain the present application and are not intended to limit it.
Referring to FIG. 1, it is an application environment diagram of the first preferred embodiment of the recurrent-neural-network-based personality analysis method of the present application. In this embodiment, the imaging device 3 is connected to the computing device 1 via the network 2; the imaging device 3 captures a facial video of a person and transmits it via the network 2 to the computing device 1, which analyzes the video using the personality analysis program 10 provided by the present application and outputs the person's probability value for each personality type, for reference.
The computing device 1 may be a terminal device having storage and computing functions, such as a server, smart phone, tablet computer, portable computer, or desktop computer.
The computing device 1 includes a memory 11, a processor 12, a network interface 13, and a communication bus 14.
The imaging device 3 is installed in a specific place, such as a counseling room, office, or monitored area, to capture facial videos of a predetermined duration from persons of different personality types, and then transmits the captured video to the memory 11 through the network 2. The network interface 13 may include a standard wired interface and a wireless interface (such as a WI-FI interface). The communication bus 14 is used to implement connection and communication between these components.
The memory 11 includes at least one type of readable storage medium, which may be a non-volatile storage medium such as a flash memory, hard disk, multimedia card, or card-type memory. In some embodiments, the readable storage medium may be an internal storage unit of the computing device 1, such as its hard disk. In other embodiments, it may also be an external memory of the computing device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the computing device 1.
In this embodiment, the memory 11 stores the program code of the personality analysis program 10, the video captured by the imaging device 3, the data used when the processor 12 executes that program code, and the final output data.
In some embodiments, the processor 12 may be a Central Processing Unit (CPU), a microprocessor, or another data processing chip.
FIG. 1 shows only the computing device 1 with components 11-14, but it should be understood that not all illustrated components are required; more or fewer components may be implemented instead.
Optionally, the computing device 1 may further include a user interface, which may include an input unit such as a keyboard, a voice input device such as a microphone or another device with voice recognition, and a voice output device such as a speaker or headphones; optionally, the user interface may also include standard wired and wireless interfaces.
Optionally, the computing device 1 may also include a display. In some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like. The display is used to display the information processed by the computing device 1 and a visualized user interface.
Optionally, the computing device 1 further includes a touch sensor. The area provided by the touch sensor for the user's touch operations is called the touch area. The touch sensor may be a resistive touch sensor, a capacitive touch sensor, or the like, and includes not only contact-type but also proximity-type touch sensors. The touch sensor may be a single sensor or a plurality of sensors arranged, for example, in an array. A user, such as a counselor, can launch the personality analysis program 10 by touch.
The computing device 1 may also include radio frequency (RF) circuits, sensors, audio circuits, and the like, which are not described in detail here.
Referring to FIG. 2, which is a diagram of the application environment of a second preferred embodiment of the recurrent-neural-network-based personality analysis method of the present application. The subject to be analyzed goes through the personality analysis process via a terminal 3: the camera device 30 of the terminal 3 captures a facial video of the subject and transmits it through the network 2 to the computing device 1; the processor 12 of the computing device 1 executes the program code of the personality analysis program 10 stored in the memory 11, analyzes the video, and outputs the subject's probability value for every personality type, for reference by the subject, a counselor or others.
For the components of the computing device 1 in FIG. 2, such as the memory 11, the processor 12, the network interface 13 and the communication bus 14 shown in the figure, as well as components not shown, please refer to the description of FIG. 1.
The terminal 3 may be a terminal device with storage and computing capabilities, such as a smartphone, a tablet computer, a portable computer or a desktop computer.
In some scenarios, in order to achieve some purpose (for example, gaining trust, misleading the other party, or refusing to accept objective facts), a subject will deliberately hide his or her real intentions, making it difficult to analyze the subject's personality through questionnaires and the like, while analysis and judgment by human observation alone is inevitably somewhat subjective. In such cases, analyzing a video composed of a large number of facial images of the subject with the personality analysis program 10 can capture subtle features and produce an objective result for people's reference.
When executed by the processor 12, the personality analysis program 10 in FIG. 1 and FIG. 2 implements the following steps:
Sample preparation step: collecting facial videos of a predetermined duration of people with different personality types as samples, and labeling each sample with a personality type;
Sample feature extraction step: extracting feature vectors of each sample's image sequence;
Model construction step: constructing a recurrent neural network model with a Softmax classifier as the output layer;
Model training step: defining a Softmax loss function; training the recurrent neural network model with each sample's personality label and the feature vectors of its image sequence as sample data, and outputting, for each sample, a probability value for every personality type; updating the training parameters of the recurrent neural network model at each training iteration, and taking the training parameters that minimize the Softmax loss function as the final parameters, to obtain a personality analysis model; and
Model application step: capturing a facial video of a predetermined duration of a subject to be analyzed, analyzing the facial video with the personality analysis model to obtain the subject's probability value for every personality type, and taking the personality type with the largest probability value as the subject's personality type.
For details of the above steps, please refer to the description of FIG. 3, the block diagram of the program modules of the personality analysis program 10, and FIG. 4, the flowchart of a preferred embodiment of the recurrent-neural-network-based personality analysis method, below.
Referring to FIG. 3, which is a block diagram of the program modules of the personality analysis program 10 in FIG. 1 and FIG. 2. In this embodiment, the personality analysis program 10 is divided into multiple modules that are stored in the memory 11 and executed by the processor 12 to accomplish the present application. A module, as referred to in the present application, is a series of computer program instruction segments capable of performing a specific function.
The personality analysis program 10 may be divided into: an acquisition module 110, an extraction module 120, a training module 130 and an analysis module 140.
The acquisition module 110 is used to acquire facial videos of a predetermined duration of people with different personality types. The facial videos may be captured by the camera device 3 of FIG. 1 or the camera device 30 of FIG. 2, or may be facial videos of people with distinct personalities selected from online sources or video archives. The sample videos used for model training are labeled with personality types, for example "lively", "introverted" or "easygoing", and the personality labels are mapped to one-hot vectors.
The extraction module 120 is used to extract feature vectors of facial video image sequences. The facial video acquired by the acquisition module 110 is converted into an image sequence, each frame is preprocessed (normalization, noise removal, etc.), and low-level features such as HOG feature vectors or LBP feature vectors are extracted from the preprocessed image sequence; alternatively, a convolutional neural network may be used directly to extract feature vectors from the original image sequence.
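As an illustration of the LBP features mentioned above, the basic 3x3 LBP operator can be sketched in pure Python. This is a simplified stand-in, not the extraction module's actual implementation; a production system would typically use an optimized image-processing library:

```python
def lbp_3x3(img):
    """Return the basic 3x3 LBP code map for the interior pixels of a
    grayscale image given as a list of rows of intensities."""
    h, w = len(img), len(img[0])
    # Neighbour offsets, clockwise from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                # Set the bit when the neighbour is at least as bright.
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit
            row.append(code)
        out.append(row)
    return out

def lbp_histogram(img, bins=256):
    """Flatten the LBP code map into a normalized histogram, a simple
    fixed-length feature vector for one frame."""
    hist = [0] * bins
    codes = [c for row in lbp_3x3(img) for c in row]
    for c in codes:
        hist[c] += 1
    total = len(codes)
    return [h / total for h in hist]
```

The per-frame histograms can then be stacked in temporal order to form the image-sequence feature vectors fed to the recurrent network.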
The training module 130 is used to optimize the recurrent neural network model through iterative training. Since a facial video image sequence consists of a series of single-frame images arranged in temporal order, the present application adopts the Long Short-Term Memory (LSTM) model among recurrent neural network models; and since the present application uses the LSTM model to output the subject's probability value for every personality type, the LSTM model takes a Softmax classifier as its output layer.
When constructing the LSTM model, the network shape is first defined according to the length of the facial video image sequences acquired by the acquisition module 110 and the feature-vector dimensionality of each frame; the number of recurrent network layers and the number of neurons per layer are set, and the number of neurons of the Softmax classifier is set according to the number of personality types. For example, suppose the predetermined duration of the facial videos is 3 minutes with m frames displayed per minute; then the image sequence length of each video is 3*m. Suppose the dimensionality of the image feature vectors is k; then the shape of the LSTM can be expressed with the tflearn deep learning library as follows:
net=tflearn.input_data(shape=[None,3*m,k])
Then two hidden layers of 128 neural units each are constructed, expressed with the tflearn deep learning library as follows:
net=tflearn.lstm(net,128)
net=tflearn.lstm(net,128)
Finally, the Softmax classifier is attached. For example, supposing personalities are divided into n types, the Softmax classifier is expressed with the tflearn deep learning library as follows:
net=tflearn.fully_connected(net,n,activation='softmax')
The Softmax loss function is defined by the following formula:
J(θ) = -(1/m) Σ_{j=1}^{m} log(y_j)
where θ denotes the training parameters of the recurrent neural network model, X_j denotes the j-th of the m samples, and y_j denotes the predicted probability of the personality type labeled for the j-th sample.
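As a numeric sketch of this loss (illustrative Python, not the tflearn internals used by the embodiment), the batch loss is the mean negative log of each sample's predicted probability for its labeled personality type:

```python
import math

def softmax(logits):
    """Numerically stable softmax over one sample's class scores."""
    peak = max(logits)
    exps = [math.exp(v - peak) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def softmax_loss(logits_batch, labels):
    """J(theta) = -(1/m) * sum_j log(y_j), where y_j is the predicted
    probability of sample j's labeled personality type."""
    loss = 0.0
    for logits, label in zip(logits_batch, labels):
        loss -= math.log(softmax(logits)[label])
    return loss / len(labels)
```

Minimizing this quantity over the training parameters is what the `categorical_crossentropy` loss in the tflearn training code accomplishes.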
After the LSTM model and the Softmax loss function are constructed, the training parameters are set. Supposing the number of iterations is 100, the gradient optimization algorithm is adam and the validation set ratio is 0.1, the LSTM model training is expressed with the tflearn deep learning library as follows:
net=tflearn.regression(net,optimizer='adam',loss='categorical_crossentropy',name='output1')
model=tflearn.DNN(net,tensorboard_verbose=2)
model.fit(X,Y,n_epoch=100,validation_set=0.1,snapshot_step=100)
The LSTM model is trained using the feature vectors of the sample image sequences and the one-hot vectors of the personality labels; the training parameters of the LSTM model are updated at each training iteration, and the training parameters that minimize the Softmax loss function are taken as the final parameters, yielding the personality analysis model.
The analysis module 140 is used to analyze a person's probability value for every personality type and obtain the subject's personality type. The facial video of a predetermined duration of the subject acquired by the acquisition module 110 is converted into an image sequence, the feature vectors of the image sequence are extracted and fed into the trained personality analysis model, which outputs the subject's probability value for every personality type; the personality type with the largest probability value is taken as the subject's personality type.
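The one-hot label encoding used here can be sketched as follows; the label set is illustrative, taken from the examples above:

```python
# Illustrative set of personality-type labels (not a fixed list from
# the application; any label set of size n works the same way).
PERSONALITY_TYPES = ["活泼", "内向", "随和"]

def one_hot(label, types=PERSONALITY_TYPES):
    """Map a personality label to its one-hot vector: the flag bit of
    the labeled type is 1 and all other bits are 0."""
    vec = [0] * len(types)
    vec[types.index(label)] = 1
    return vec
```

A training pair is then (feature vectors of one sample's image sequence, `one_hot(label)`).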
Referring to FIG. 4, which is a flowchart of a preferred embodiment of the recurrent-neural-network-based personality analysis method of the present application. Using the architecture shown in FIG. 1 or FIG. 2, the computing device 1 is started and the processor 12 executes the personality analysis program 10 stored in the memory 11, implementing the following steps:
Step S10: the acquisition module 110 acquires facial videos of a predetermined duration of people with different personality types and labels them with personality types. The facial videos may be captured by the camera device 3 of FIG. 1 or the camera device 30 of FIG. 2, or may be facial videos of people with distinct personalities selected from online sources or video archives. The personality labels are represented as one-hot vectors, i.e. the flag bit corresponding to each type is 1 and all other bits are 0.
Step S20: the extraction module 120 extracts feature vectors of the facial video image sequences. The facial video is converted into an image sequence, the image sequence is preprocessed (normalization, noise removal, etc.), the features of each preprocessed frame are extracted, and a feature selection algorithm is used to filter the features. Suppose a sample has n features; then there are 2^n - 1 possible non-empty feature subsets, and exhaustively enumerating all 2^n possible feature subsets is computationally prohibitive for large n. Feature selection can therefore be performed by suitable algorithms, such as forward/backward search, filter feature selection, or other available feature selection algorithms. Optionally, this embodiment may extract low-level features such as HOG feature vectors or LBP feature vectors of the image sequence, or directly use a convolutional neural network to extract feature vectors from the original image sequence.
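The forward search mentioned above can be sketched as a greedy wrapper method. Here `score` is a hypothetical evaluation function (for example, validation accuracy of a model trained on the candidate feature subset); the search adds one feature at a time instead of enumerating all 2^n subsets:

```python
def forward_search(n_features, score, max_k=None):
    """Greedy forward search over feature subsets.

    `score` maps a frozenset of feature indices to a validation score
    (higher is better); the search stops as soon as adding any single
    feature no longer improves the score."""
    selected = frozenset()
    best_score = score(selected)
    max_k = n_features if max_k is None else max_k
    while len(selected) < max_k:
        # All subsets reachable by adding one not-yet-selected feature.
        candidates = [selected | {f} for f in range(n_features)
                      if f not in selected]
        best_candidate = max(candidates, key=score)
        if score(best_candidate) <= best_score:
            break  # no single addition helps; stop early
        selected, best_score = best_candidate, score(best_candidate)
    return selected, best_score
```

With n candidate features this evaluates O(n^2) subsets rather than 2^n, at the cost of possibly missing the globally best subset.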
Step S30: the training module 130 constructs the recurrent neural network model according to the image sequence length, the feature-vector dimensionality and the number of personality types. The number of recurrent network layers and the number of neurons per layer are set according to the length of the facial video image sequences acquired by the acquisition module 110 and the feature-vector dimensionality of each frame, and the number of neurons of the Softmax classifier serving as the network's output layer is set according to the number of personality types.
Step S40: the training module 130 optimizes the recurrent neural network model according to the feature vectors of the image sequences and the personality labels of the facial videos, obtaining the trained personality analysis model. The recurrent neural network model is trained with the feature vectors extracted by the extraction module 120 and the one-hot vectors mapped from the personality labels of the facial videos acquired by the acquisition module 110 as sample data, yielding the trained personality analysis model.
Step S50: the acquisition module 110 captures a facial video of a predetermined duration of the subject to be analyzed. The facial video is captured by the camera device 3 of FIG. 1 or the camera device 30 of FIG. 2.
Step S60: the extraction module 120 extracts feature vectors of the image sequence of the subject's facial video. The feature vectors are one or more of HOG feature vectors, LBP feature vectors and feature vectors extracted by a convolutional neural network.
Step S70: the analysis module 140 obtains the subject's personality type from the extracted feature vectors. The extracted feature vectors of the image sequence of the subject's facial video are input into the trained personality analysis model, which outputs the subject's probability value for every personality type; the personality type with the largest probability value is taken as the subject's personality type.
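The decision rule of step S70 can be sketched as follows; the model's probability vector and the label set are illustrative inputs:

```python
def predict_personality(probabilities, types):
    """Return (type, probability) for the personality type with the
    largest probability value, as in step S70."""
    i = max(range(len(types)), key=lambda j: probabilities[j])
    return types[i], probabilities[i]
```

In the embodiment, `probabilities` would be the Softmax output of the trained personality analysis model for one subject.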
In addition, an embodiment of the present application further provides a computer-readable storage medium, which may be any one of, or any combination of, a hard disk, a multimedia card, an SD card, a flash card, an SMC, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a compact disc read-only memory (CD-ROM), a USB memory, and so on. The computer-readable storage medium includes sample videos, a recurrent neural network model and the personality analysis program 10, which, when executed by a processor, implements the following operations:
Sample preparation step: collecting facial videos of a predetermined duration of people with different personality types as samples, and labeling each sample with a personality type;
Sample feature extraction step: extracting feature vectors of each sample's image sequence;
Model construction step: constructing a recurrent neural network model with a Softmax classifier as the output layer;
Model training step: defining a Softmax loss function; training the recurrent neural network model with each sample's personality label and the feature vectors of its image sequence as sample data, and outputting, for each sample, a probability value for every personality type; updating the training parameters of the recurrent neural network model at each training iteration, and taking the training parameters that minimize the Softmax loss function as the final parameters, to obtain a personality analysis model; and
Model application step: capturing a facial video of a predetermined duration of a subject to be analyzed, analyzing the facial video with the personality analysis model to obtain the subject's probability value for every personality type, and taking the personality type with the largest probability value as the subject's personality type.
The specific implementation of the computer-readable storage medium of the present application is substantially the same as that of the above recurrent-neural-network-based personality analysis method and computing device 1, and is not repeated here.
It should be noted that, in this document, the terms "comprise" and "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, device, article or method that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, device, article or method. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, device, article or method that includes the element.
The serial numbers of the above embodiments of the present application are for description only and do not represent the relative merits of the embodiments. Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium as described above (such as ROM/RAM, a magnetic disk or an optical disc), including several instructions that cause a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to execute the methods described in the various embodiments of the present application.
The above are only preferred embodiments of the present application and do not thereby limit the patent scope of the present application; any equivalent structural or flow transformation made using the contents of the specification and drawings of the present application, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present application.

Claims (20)

  1. A personality analysis method based on a recurrent neural network, characterized in that the method comprises:
    a sample preparation step: collecting facial videos of a predetermined duration of people with different personality types as samples, and labeling each sample with a personality type;
    a sample feature extraction step: extracting feature vectors of each sample's image sequence;
    a model construction step: constructing a recurrent neural network model with a Softmax classifier as the output layer;
    a model training step: defining a Softmax loss function; training the recurrent neural network model with each sample's personality label and the feature vectors of its image sequence as sample data, and outputting, for each sample, a probability value for every personality type; updating the training parameters of the recurrent neural network model at each training iteration, and taking the training parameters that minimize the Softmax loss function as the final parameters, to obtain a personality analysis model; and
    a model application step: capturing a facial video of a predetermined duration of a subject to be analyzed, analyzing the facial video with the personality analysis model to obtain the subject's probability value for every personality type, and taking the personality type with the largest probability value as the subject's personality type.
  2. The personality analysis method according to claim 1, characterized in that, before the sample feature extraction step, the method further comprises the step of:
    converting the sample videos into image sequences.
  3. The personality analysis method according to claim 1, characterized in that the feature vectors are one or more of HOG feature vectors, LBP feature vectors and feature vectors extracted by a convolutional neural network.
  4. The personality analysis method according to claim 1, characterized in that the model construction step comprises:
    setting the number of recurrent network layers and the number of neurons per layer according to the length of the samples' image sequences and the feature-vector dimensionality of each frame;
    setting the number of neurons of the Softmax classifier according to the number of personality types.
  5. The personality analysis method according to claim 1, characterized in that the Softmax loss function is as follows:
    J(θ) = -(1/m) Σ_{j=1}^{m} log(y_j)
    where θ denotes the training parameters of the recurrent neural network model, X_j denotes the j-th of the m samples, and y_j denotes the predicted probability of the personality type labeled for the j-th sample.
  6. The personality analysis method according to claim 1, characterized in that the training parameters in the model training step include the number of iterations.
  7. The personality analysis method according to claim 1, characterized in that the model application step further comprises:
    converting the facial video of the subject to be analyzed into an image sequence;
    extracting feature vectors of the image sequence.
  8. A computing device, comprising a memory and a processor, characterized in that the memory includes a personality analysis program which, when executed by the processor, implements the following steps:
    a sample preparation step: collecting facial videos of a predetermined duration of people with different personality types as samples, and labeling each sample with a personality type;
    a sample feature extraction step: extracting feature vectors of each sample's image sequence;
    a model construction step: constructing a recurrent neural network model with a Softmax classifier as the output layer;
    a model training step: defining a Softmax loss function; training the recurrent neural network model with each sample's personality label and the feature vectors of its image sequence as sample data, and outputting, for each sample, a probability value for every personality type; updating the training parameters of the recurrent neural network model at each training iteration, and taking the training parameters that minimize the Softmax loss function as the final parameters, to obtain a personality analysis model; and
    a model application step: capturing a facial video of a predetermined duration of a subject to be analyzed, analyzing the facial video with the personality analysis model to obtain the subject's probability value for every personality type, and taking the personality type with the largest probability value as the subject's personality type.
  9. The computing device according to claim 8, characterized in that, before the sample feature extraction step, the steps further comprise:
    converting the sample videos into image sequences.
  10. The computing device according to claim 8, characterized in that the feature vectors are one or more of HOG feature vectors, LBP feature vectors and feature vectors extracted by a convolutional neural network.
  11. The computing device according to claim 8, characterized in that the model construction step comprises:
    setting the number of recurrent network layers and the number of neurons per layer according to the length of the samples' image sequences and the feature-vector dimensionality of each frame;
    setting the number of neurons of the Softmax classifier according to the number of personality types.
  12. The computing device according to claim 8, characterized in that the Softmax loss function is as follows:
    J(θ) = -(1/m) Σ_{j=1}^{m} log(y_j)
    where θ denotes the training parameters of the recurrent neural network model, X_j denotes the j-th of the m samples, and y_j denotes the predicted probability of the personality type labeled for the j-th sample.
  13. The computing device according to claim 8, characterized in that the training parameters in the model training step include the number of iterations.
  14. The computing device according to claim 8, characterized in that the model application step further comprises:
    converting the facial video of the subject to be analyzed into an image sequence;
    extracting feature vectors of the image sequence.
  15. A computer-readable storage medium, characterized in that the computer-readable storage medium includes a personality analysis program which, when executed by a processor, implements the following steps:
    a sample preparation step: collecting facial videos of a predetermined duration of people with different personality types as samples, and labeling each sample with a personality type;
    a sample feature extraction step: extracting feature vectors of each sample's image sequence;
    a model construction step: constructing a recurrent neural network model with a Softmax classifier as the output layer;
    a model training step: defining a Softmax loss function; training the recurrent neural network model with each sample's personality label and the feature vectors of its image sequence as sample data, and outputting, for each sample, a probability value for every personality type; updating the training parameters of the recurrent neural network model at each training iteration, and taking the training parameters that minimize the Softmax loss function as the final parameters, to obtain a personality analysis model; and
    a model application step: capturing a facial video of a predetermined duration of a subject to be analyzed, analyzing the facial video with the personality analysis model to obtain the subject's probability value for every personality type, and taking the personality type with the largest probability value as the subject's personality type.
  16. The medium according to claim 15, characterized in that, before the sample feature extraction step, the steps further comprise:
    converting the sample videos into image sequences.
  17. The medium according to claim 15, characterized in that the feature vectors are one or more of HOG feature vectors, LBP feature vectors and feature vectors extracted by a convolutional neural network.
  18. The medium according to claim 15, characterized in that the model construction step comprises:
    setting the number of recurrent network layers and the number of neurons per layer according to the length of the samples' image sequences and the feature-vector dimensionality of each frame;
    setting the number of neurons of the Softmax classifier according to the number of personality types.
  19. The medium according to claim 15, characterized in that the Softmax loss function is as follows:
    J(θ) = -(1/m) Σ_{j=1}^{m} log(y_j)
    where θ denotes the training parameters of the recurrent neural network model, X_j denotes the j-th of the m samples, and y_j denotes the predicted probability of the personality type labeled for the j-th sample.
  20. The medium according to claim 15, characterized in that the model application step further comprises:
    converting the facial video of the subject to be analyzed into an image sequence;
    extracting feature vectors of the image sequence.
PCT/CN2018/076120 2017-11-02 2018-02-10 Personality analysis method, device and storage medium based on a recurrent neural network WO2019085329A1 (zh)

Applications Claiming Priority (2)

CN201711061207.XA (filed 2017-11-02, priority 2017-11-02): Personality analysis method, device and storage medium based on a recurrent neural network
CN201711061207.X (priority 2017-11-02)

Publication: WO2019085329A1 (zh), published 2019-05-09


Also Published As: CN108038414A (zh), published 2018-05-15

