CN109171773A - Sentiment analysis method and system based on multi-channel data - Google Patents

Sentiment analysis method and system based on multi-channel data

Info

Publication number
CN109171773A
CN109171773A
Authority
CN
China
Prior art keywords
data
sentiment analysis
array
convolution
preset
Prior art date
Legal status
Granted
Application number
CN201811154954.2A
Other languages
Chinese (zh)
Other versions
CN109171773B (en)
Inventor
孙晓
洪涛
Current Assignee
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date
Filing date
Publication date
Application filed by Hefei University of Technology
Priority to CN201811154954.2A
Publication of CN109171773A
Application granted
Publication of CN109171773B
Legal status: Active
Anticipated expiration


Classifications

    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow
    • A61B5/0531 Measuring skin impedance
    • A61B5/7235 Details of waveform analysis
    • G06N3/045 Combinations of networks
    • G06V40/174 Facial expression recognition
    • G10L25/63 Speech or voice analysis techniques specially adapted for estimating an emotional state


Abstract

The present invention provides a sentiment analysis method and system based on multi-channel data, in the technical field of sentiment analysis. The method comprises: obtaining a facial expression picture, voice data, infrared pulse data, and skin resistance data of a person to be analyzed while the person watches a preset video; converting the voice data, infrared pulse data, and skin resistance data into corresponding spectrograms; inputting the facial expression picture and the spectrograms corresponding to the voice data, infrared pulse data, and skin resistance data into a preset convolutional neural network model to obtain a feature array for each channel, wherein each feature array contains a first preset quantity of feature data; and merging the feature arrays into one total feature array, which is input into a sentiment analysis model to obtain the proportion of each emotion type for the person to be analyzed. The present invention can improve the accuracy of sentiment analysis.

Description

Sentiment analysis method and system based on multi-channel data
Technical field
The present invention relates to the technical field of sentiment analysis, and in particular to a sentiment analysis method, system, computer device, computer-readable storage medium, and computer program based on multi-channel data.
Background technique
In the prior art, sentiment analysis is performed either by building an affect computation system based on facial expressions or by building one based on pulse signals. In either case, the data used are relatively simple, so the accuracy of the sentiment analysis is poor.
Summary of the invention
(1) the technical issues of solving
In view of the deficiencies of the prior art, the present invention provides a sentiment analysis method, system, computer device, computer-readable storage medium, and computer program based on multi-channel data, which can improve the accuracy of sentiment analysis.
(2) technical solution
In order to achieve the above object, the present invention is realized by the following technical solutions:
In a first aspect, the present invention provides a sentiment analysis method based on multi-channel data, the method comprising:
obtaining a facial expression picture, voice data, infrared pulse data, and skin resistance data of a person to be analyzed while the person watches a preset video;
converting the voice data, the infrared pulse data, and the skin resistance data into corresponding spectrograms;
inputting the facial expression picture, the spectrogram corresponding to the voice data, the spectrogram corresponding to the infrared pulse data, and the spectrogram corresponding to the skin resistance data into a preset convolutional neural network model to obtain a feature array for each channel, wherein each feature array contains a first preset quantity of feature data;
merging the feature arrays into one total feature array and inputting the total feature array into a sentiment analysis model to obtain the proportion of each emotion type for the person to be analyzed; wherein the sentiment analysis model comprises a pre-trained sentiment analysis function, a preset first fully connected layer, and a preset activation function; the sentiment analysis function is used to output first emotion data from the total feature array; the first fully connected layer is used to convert the first emotion data into a second preset quantity of second emotion data, the second preset quantity being the number of emotion types; and the activation function is used to determine, from the second preset quantity of second emotion data, the proportion of each emotion type for the person to be analyzed.
In a second aspect, the present invention provides a sentiment analysis system based on multi-channel data, the system comprising:
a data acquisition unit for obtaining a facial expression picture, voice data, infrared pulse data, and skin resistance data of a person to be analyzed while the person watches a preset video;
a data conversion unit for converting the voice data, the infrared pulse data, and the skin resistance data into corresponding spectrograms;
a feature determination unit for inputting the facial expression picture, the spectrogram corresponding to the voice data, the spectrogram corresponding to the infrared pulse data, and the spectrogram corresponding to the skin resistance data into a preset convolutional neural network model to obtain a feature array for each channel, wherein each feature array contains a first preset quantity of feature data;
an emotion determination unit for merging the feature arrays into one total feature array and inputting the total feature array into a sentiment analysis model to obtain the proportion of each emotion type for the person to be analyzed; wherein the sentiment analysis model comprises a pre-trained sentiment analysis function, a preset first fully connected layer, and a preset activation function; the sentiment analysis function is used to output first emotion data from the total feature array; the first fully connected layer is used to convert the first emotion data into a second preset quantity of second emotion data, the second preset quantity being the number of emotion types; and the activation function is used to determine, from the second preset quantity of second emotion data, the proportion of each emotion type for the person to be analyzed.
In a third aspect, the present invention provides a computer device comprising:
at least one processor;
and at least one memory, wherein:
the at least one memory is configured to store a computer program;
and the at least one processor is configured to call the computer program stored in the at least one memory to execute the above sentiment analysis method.
In a fourth aspect, the present invention provides a computer-readable storage medium having a computer program stored thereon; when executed by a processor, the computer program implements the above sentiment analysis method.
In a fifth aspect, the present invention provides a computer program comprising computer-executable instructions which, when executed, cause at least one processor to execute the above sentiment analysis method.
(3) Beneficial effects
Embodiments of the present invention provide a sentiment analysis method, system, computer device, computer-readable storage medium, and computer program based on multi-channel data. Multi-channel data of the person to be analyzed (a facial expression picture, voice data, infrared pulse data, and skin resistance data) are collected in a form that facilitates feature extraction by the convolutional neural network model, and a sentiment analysis model analyzes the proportion of each emotion type from those features. Because the method performs its analysis on multi-channel data, it overcomes the problem that single-channel data in the prior art cannot truly reflect emotion types, and thus improves the accuracy of sentiment analysis. Moreover, the infrared pulse data and skin resistance data among the multi-channel data are physiological signals that are not altered by conscious control, so they reflect the emotion of the person to be analyzed more truthfully.
Detailed description of the invention
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of the sentiment analysis method based on multi-channel data in an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of the convolutional neural network model in an embodiment of the present invention;
Fig. 3 is a structural block diagram of the sentiment analysis system based on multi-channel data in an embodiment of the present invention.
Specific embodiment
In order to make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below in conjunction with the drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
In a first aspect, the present invention provides a sentiment analysis method based on multi-channel data. As shown in Fig. 1, the sentiment analysis method includes:
S101: obtaining a facial expression picture, voice data, infrared pulse data, and skin resistance data of the person to be analyzed while the person watches a preset video;
It will be appreciated that the preset video may include videos of at least one of six types: sadness, anger, happiness, surprise, fear, and disgust.
It will be appreciated that the facial expression picture may be a photograph taken of the user while the person to be analyzed watches the preset video, or a frame selected from a video recorded of the user during that period.
In practical applications, a voice acquisition device may be arranged at the scene where the person to be analyzed watches the preset video, and the voice data may then be acquired with that device.
In practical applications, an infrared pulse acquisition device and a skin resistance acquisition device may be worn on the body of the person to be analyzed; the infrared pulse data of the person are then acquired with the infrared pulse acquisition device, and the skin resistance data with the skin resistance acquisition device.
It will be appreciated that the multi-channel data referred to in the title are the facial expression picture, the voice data, the infrared pulse data, and the skin resistance data.
S102: converting the voice data, the infrared pulse data, and the skin resistance data into corresponding spectrograms;
To facilitate subsequent data processing, the voice data, infrared pulse data, and skin resistance data are converted into spectrograms here, so that the data of every channel take graphic form.
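The patent does not specify how the spectrograms are produced. One common construction, shown here as a minimal sketch, is a short-time Fourier transform magnitude: the 1-D signal is cut into overlapping windowed frames and each frame's spectrum becomes one column of the image. The frame length, hop size, and test tone below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Split a 1-D signal into overlapping frames and take the magnitude
    of each frame's FFT, yielding a 2-D time-frequency image."""
    frames = np.array([signal[i:i + frame_len]
                       for i in range(0, len(signal) - frame_len + 1, hop)])
    window = np.hanning(frame_len)
    # One FFT magnitude spectrum per frame; stacking them gives the image.
    return np.abs(np.fft.rfft(frames * window, axis=1)).T

# A 1 kHz test tone sampled at 8 kHz stands in for a voice/pulse signal.
t = np.arange(8000) / 8000.0
img = spectrogram(np.sin(2 * np.pi * 1000 * t))
print(img.shape)  # (129, 61): frame_len // 2 + 1 frequency bins, 61 frames
```

The resulting 2-D array can then be saved or fed onward as an image, which is exactly the "graphic form" the method requires for each channel.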
S103: inputting the facial expression picture, the spectrogram corresponding to the voice data, the spectrogram corresponding to the infrared pulse data, and the spectrogram corresponding to the skin resistance data into a preset convolutional neural network model to obtain a feature array for each channel, wherein each feature array contains a first preset quantity of feature data;
It will be appreciated that the convolutional neural network model can perform feature extraction on the data of each channel, thereby obtaining a feature array corresponding to each channel.
In a specific implementation, the convolutional neural network model can adopt various structures. One such structure is introduced below with reference to Fig. 2: the convolutional neural network model includes five sequentially connected convolution units and a second fully connected layer connected to the output of the fifth convolution unit, wherein each convolution unit includes a convolutional layer and a down-sampling layer connected to the output of that convolutional layer, and the second fully connected layer converts the quantity of output data of the fifth convolution unit into the first preset quantity.
Each of the five convolution units may take many structures. For example, as shown in Fig. 2, the convolutional layer 301a in the first convolution unit includes 96 convolution kernels of size 11*11, and the down-sampling layer 301b in the first convolution unit has a sampling core of size 3*3 with a sampling step of 2. The convolutional layer 302a in the second convolution unit includes 128 convolution kernels of size 5*5, and the down-sampling layer 302b has a sampling core of size 3*3 with a step of 1. The convolutional layer 303a in the third convolution unit includes 192 convolution kernels of size 3*3, and the down-sampling layer 303b has a sampling core of size 3*3 with a step of 1. The convolutional layer 304a in the fourth convolution unit includes 192 convolution kernels of size 3*3, and the down-sampling layer 304b has a sampling core of size 3*3 with a step of 1. The convolutional layer 305a in the fifth convolution unit includes 128 convolution kernels of size 3*3, and the down-sampling layer 305b has a sampling core of size 3*3 with a step of 1.
For example, a facial expression picture is a color image with three color channels R, G, and B, so the picture is equivalent to a three-dimensional array, e.g. one of size 6*6*3, where the 3 represents the 3 color channels. Such an array can be understood as a stack of three layers of two-dimensional arrays, so the convolution processing of the facial expression picture by a convolutional layer can be performed on each layer of two-dimensional array, and the three processed layers are then stacked to form a three-dimensional array after one pass of convolution. The down-sampling process is handled similarly.
The principle of performing convolution on a two-dimensional array is described below:
As shown in Table 1, a two-dimensional array has size 5*5, and as shown in Table 2 below, the convolution kernel used is (1,0,1; 0,1,0; 1,0,1). The array formed by rows 1-3 and columns 1-3 of Table 1 is (1,1,1; 0,1,1; 0,0,1). The kernel is multiplied with the data at corresponding positions of this array, and the products are then added: 1*1+1*0+1*1+0*0+1*1+1*0+0*1+0*0+1*1=4, giving the first output value. Proceeding in the same way, an output matrix of size 3*3 is obtained.
Table 1
1 1 1 0 0
0 1 1 1 0
0 0 1 1 1
0 0 1 1 0
0 1 1 0 0
Table 2
1 0 1
0 1 0
1 0 1
After one pass of convolution, the size of the output matrix is N*N, where N=(W-F)/S+1, W*W is the size of the input matrix of the convolution, F*F is the size of the convolution kernel, and S is the step.
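The sliding-window computation above, including the output-size formula N=(W-F)/S+1, can be sketched directly. The code reproduces the Table 1 / Table 2 worked example, including the first output value of 4:

```python
import numpy as np

def conv2d(x, k, stride=1):
    """Valid convolution of 2-D array x with kernel k, per N = (W - F) / S + 1."""
    w, f = x.shape[0], k.shape[0]
    n = (w - f) // stride + 1
    out = np.empty((n, n), dtype=x.dtype)
    for i in range(n):
        for j in range(n):
            window = x[i*stride:i*stride+f, j*stride:j*stride+f]
            out[i, j] = np.sum(window * k)  # multiply positions, then add
    return out

table1 = np.array([[1, 1, 1, 0, 0],
                   [0, 1, 1, 1, 0],
                   [0, 0, 1, 1, 1],
                   [0, 0, 1, 1, 0],
                   [0, 1, 1, 0, 0]])
table2 = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 1]])
out = conv2d(table1, table2)
print(out[0, 0])   # 4, the first output value computed in the text
print(out.shape)   # (3, 3): N = (5 - 3) / 1 + 1 = 3
```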
The principle of performing down-sampling on a two-dimensional array is described below:
The three-dimensional array obtained after convolution is decomposed into three two-dimensional arrays. As shown in Table 3 below, a decomposed two-dimensional array has size 4*4, the core size of the down-sampling is 2*2, and the step is 2. The array formed by rows 1-2 and columns 1-2 of Table 3 is (1,1; 5,6), and the maximum value in this array is 6. Since the step is 2, the array formed by rows 1-2 and columns 3-4 is (2,4; 7,8), whose maximum value is 8. Continuing in this way yields the two-dimensional array shown in Table 4.
Table 3
1 1 2 4
5 6 7 8
3 2 1 0
1 2 3 4
Table 4
6 8
3 4
The size of the output matrix obtained after one pass of down-sampling is len*len, where len=(X-pool_size)/stride+1, X*X is the size of the input matrix of the down-sampling layer, pool_size is the size of the core of the down-sampling layer, and stride is the step.
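The max down-sampling step can be sketched the same way; the code reproduces Table 3 and verifies that the result matches Table 4:

```python
import numpy as np

def max_pool(x, pool_size=2, stride=2):
    """Max down-sampling of 2-D array x, per len = (X - pool_size) / stride + 1."""
    n = (x.shape[0] - pool_size) // stride + 1
    out = np.empty((n, n), dtype=x.dtype)
    for i in range(n):
        for j in range(n):
            window = x[i*stride:i*stride+pool_size, j*stride:j*stride+pool_size]
            out[i, j] = window.max()  # keep the maximum value in each window
    return out

table3 = np.array([[1, 1, 2, 4],
                   [5, 6, 7, 8],
                   [3, 2, 1, 0],
                   [1, 2, 3, 4]])
pooled = max_pool(table3)
print(pooled)  # [[6 8] [3 4]], matching Table 4
```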
For example, a facial expression picture of 237*237 pixels is input into the convolutional neural network model. After the picture enters the first convolution unit, since the convolutional layer in the first convolution unit includes 96 kernels of size 11*11, an array of dimension 55*55*96 is obtained after that convolutional layer; after down-sampling with a core of size 3*3 and a step of 2, a first three-dimensional array of dimension 27*27*96 is obtained. The first three-dimensional array is input into the second convolution unit of the above structure to obtain a second three-dimensional array; the second three-dimensional array is input into the third convolution unit of the above structure to obtain a third three-dimensional array; the third three-dimensional array is input into the fourth convolution unit of the above structure to obtain a fourth three-dimensional array; and the fourth three-dimensional array is input into the fifth convolution unit of the above structure to obtain a fifth three-dimensional array. The size of the fifth three-dimensional array is 6*6*256; flattening it gives an array of size 1*4096, i.e. 4096 data. Passing the 1*4096 array through a fully connected layer whose number of output data is 1000 yields 1000 data, i.e. an array of size 1*1000. This 1*1000 array is the feature array of the 237*237-pixel facial expression picture and contains 1000 feature data.
Similarly, the spectrogram corresponding to the voice data, the spectrogram corresponding to the infrared pulse data, and the spectrogram corresponding to the skin resistance data are each input into a convolutional neural network model of the above structure, and a feature array of size 1*1000 is obtained for each. That is, inputting the facial expression picture and the three spectrograms into convolutional neural network models of the above structure yields four feature arrays of size 1*1000.
S104: merging the feature arrays into one total feature array, and inputting the total feature array into the sentiment analysis model to obtain the proportion of each emotion type for the person to be analyzed;
For example, the four feature arrays of size 1*1000 above are merged into one total feature array of size 1*4000, which contains 4000 data.
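Merging the per-channel feature arrays amounts to concatenation along the feature axis. A minimal sketch (the random arrays are stand-ins for the CNN outputs):

```python
import numpy as np

# Four per-channel feature arrays of size 1*1000 (random stand-ins here).
rng = np.random.default_rng(0)
face, voice, pulse, skin = (rng.random((1, 1000)) for _ in range(4))

# Merging is a concatenation along the feature axis: 4 x 1*1000 -> 1*4000.
total = np.concatenate([face, voice, pulse, skin], axis=1)
print(total.shape)  # (1, 4000)
```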
The sentiment analysis model comprises a pre-trained sentiment analysis function, a preset first fully connected layer, and a preset activation function. The sentiment analysis function is used to output first emotion data from the total feature array; the first fully connected layer is used to convert the first emotion data into a second preset quantity of second emotion data, the second preset quantity being the number of emotion types; and the activation function is used to determine, from the second preset quantity of second emotion data, the proportion of each emotion type for the person to be analyzed.
It will be appreciated that the sentiment analysis function, the first fully connected layer, and the activation function are connected in sequence to form the sentiment analysis model.
For example, human emotions can generally be divided into six major classes: sadness, anger, happiness, surprise, fear, and disgust. For the analysis of these six emotions, the second preset quantity is 6.
In a specific implementation, the sentiment analysis function may be:
F = W*input + bias
where W is a weight coefficient, bias is an offset parameter, input is the total feature array, and F is the first emotion data output by the sentiment analysis function; W and bias are determined by training in advance.
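As a sketch, the function F = W*input + bias is an affine map. The patent does not state the dimension of the first emotion data F, so the 100-dimensional output below is an illustrative assumption; the 1*4000 input matches the merged total feature array:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4000, 100))     # weight coefficients (trained in advance)
bias = rng.standard_normal((1, 100))     # offset parameters (trained in advance)
total = rng.standard_normal((1, 4000))   # the 1*4000 total feature array

F = total @ W + bias  # F = W*input + bias, applied as a matrix product
print(F.shape)  # (1, 100): the assumed dimension of the first emotion data
```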
In a specific implementation, the activation function may be:
Si = e^(Vi) / (e^(V1) + e^(V2) + ... + e^(VC)), i = 1, ..., C
where Si is the proportion of the i-th emotion type for the person to be analyzed, Vi is the i-th second emotion data output by the first fully connected layer, and C is the second preset quantity.
It will be appreciated that for the six emotions, the value of C is 6.
It will be appreciated that, for the six emotion classes, S1 denotes the proportion of the first emotion type, S2 the proportion of the second emotion type, S3 the proportion of the third emotion type, S4 the proportion of the fourth emotion type, S5 the proportion of the fifth emotion type, and S6 the proportion of the sixth emotion type.
For example, with a second preset quantity of 6, the first emotion data output by the sentiment analysis function are input into the first fully connected layer, which converts them into 6 second emotion data, one per emotion type. After these 6 second emotion data are input into the above activation function, the proportion of each emotion type is obtained.
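The activation function above is the standard softmax, which turns the C second emotion data into proportions that are positive and sum to 1. A minimal sketch with illustrative values for the 6 second emotion data:

```python
import numpy as np

def softmax(v):
    """S_i = e^(V_i) / sum_j e^(V_j): converts the C second emotion data
    into proportions that are positive and sum to 1."""
    e = np.exp(v - v.max())  # subtracting the max is a standard stability trick
    return e / e.sum()

V = np.array([2.0, 1.0, 0.5, 0.5, 0.0, -1.0])  # 6 second emotion data (illustrative)
S = softmax(V)
print(S.sum())  # 1.0: the six proportions cover all emotion types
```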
In a specific implementation, the pre-training of the sentiment analysis function is the pre-training of W and bias in the sentiment analysis function, and specifically includes:
a. labeling the types of emotion produced by multiple training subjects while each watches the preset video;
It will be appreciated that the multiple training subjects are multiple test subjects.
In practical applications, the emotion type labels can be obtained by having the training subjects select the emotion type they experienced while watching the preset video.
It will be appreciated that the six emotion types can be labeled 0, 1, 2, 3, 4, and 5 respectively.
b. obtaining a facial expression picture, voice data, infrared pulse data, and skin resistance data of each of the multiple training subjects while the subject watches the preset video;
c. converting the voice data, infrared pulse data, and skin resistance data of each training subject into corresponding spectrograms;
d. inputting the facial expression picture, the spectrogram corresponding to the voice data, the spectrogram corresponding to the infrared pulse data, and the spectrogram corresponding to the skin resistance data of each training subject into the preset convolutional neural network model to obtain the corresponding feature arrays;
e. merging the feature arrays corresponding to each training subject to obtain that subject's total feature array;
f. training the sentiment analysis function with the total feature arrays of the multiple training subjects and the emotion types labeled for them, to obtain the trained sentiment analysis function.
In step f, the process of training the sentiment analysis function is in fact the process of determining the parameters W and bias in the sentiment analysis function.
It will be appreciated that the emotion type labeled for a training subject is the output value F of the above sentiment analysis function, and the subject's total feature array is the input. By training with a large number of training subjects' total feature data and labels, the parameters W and bias in the above formula can be determined.
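The patent does not specify the training algorithm for W and bias. One common choice, shown as a toy-sized sketch, is gradient descent on the squared error between F = W*input + bias and the labels; the dimensions and synthetic labels below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 8))          # 60 subjects' total feature arrays (toy size)
true_W, true_b = rng.standard_normal(8), 0.5
y = X @ true_W + true_b                   # stand-in labels generated for the demo

W, b = np.zeros(8), 0.0
lr = 0.05
for _ in range(2000):
    err = X @ W + b - y                   # prediction error of F = W*input + bias
    W -= lr * X.T @ err / len(y)          # gradient of mean squared error w.r.t. W
    b -= lr * err.mean()                  # gradient of mean squared error w.r.t. bias

print(np.allclose(W, true_W, atol=1e-2))  # the fitted parameters approach the targets
```

In practice the whole pipeline (CNN, sentiment analysis function, fully connected layer, and softmax) would more likely be trained end-to-end with a cross-entropy loss, but the sketch isolates the W/bias fitting that step f describes.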
It will be appreciated that steps b-e above are similar to steps S101-S104 above; for explanations, examples, and specific implementations of the relevant content, reference may be made to the corresponding parts of steps S101-S104.
It will be appreciated that an array size or dimension of a*b means the array has a rows and b columns, and a size or dimension of a*b*c means a rows, b columns, and c layers, which can also be understood as a length, width, and height of a, b, and c respectively. Elsewhere, "*" denotes multiplication.
The sentiment analysis method provided by the present invention collects multi-channel data of the person to be analyzed (a facial expression picture, voice data, infrared pulse data, and skin resistance data) in a form that facilitates feature extraction by the convolutional neural network model, and uses the sentiment analysis model to analyze the proportion of each emotion type from the features. Because the method performs its analysis on multi-channel data, it overcomes the problem that single-channel data in the prior art cannot truly reflect emotion types, and thus improves the accuracy of sentiment analysis. Moreover, the infrared pulse data and skin resistance data among the multi-channel data are physiological signals that are not altered by conscious control, so they reflect the emotion of the person to be analyzed more truthfully.
In a second aspect, the present invention provides a sentiment analysis system based on multi-channel data. As shown in Fig. 3, the system comprises:
a data acquisition unit, configured to obtain a facial expression picture, voice data, infrared pulse data, and skin resistance data of the person to be analyzed while the person watches a preset video;
a data conversion unit, configured to convert the voice data, the infrared pulse data, and the skin resistance data into corresponding spectrograms respectively;
a feature determination unit, configured to input the facial expression picture, the spectrogram corresponding to the voice data, the spectrogram corresponding to the infrared pulse data, and the spectrogram corresponding to the skin resistance data into a preset convolutional neural network model respectively, to obtain corresponding feature arrays, wherein each feature array contains a first preset quantity of feature data; and
an emotion determination unit, configured to merge the feature arrays to obtain a total feature array, and input the total feature array into a sentiment analysis model to obtain the proportion of each type of emotion of the person to be analyzed; wherein the sentiment analysis model comprises a pre-trained sentiment analysis function, a preset first fully connected layer, and a preset activation function; the sentiment analysis function is configured to output first emotion data according to the total feature array; the first fully connected layer is configured to convert the first emotion data into a second preset quantity of second emotion data, the second preset quantity being the number of emotion types; and the activation function is configured to determine, according to the second preset quantity of second emotion data, the proportion of each type of emotion of the person to be analyzed.
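For illustration only, the dataflow through the units above (merge the four channels' feature arrays, apply the sentiment analysis function, the first fully connected layer, and the activation function) can be sketched as follows. The feature-array size, the dimensions of the linear maps, six emotion types, and the random weights are all assumptions of this sketch, not values specified by the patent:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(v):
    """Activation function: proportion of each emotion type."""
    e = np.exp(v - v.max())
    return e / e.sum()

# Hypothetical first preset quantity of 256 feature data per channel.
first_qty, n_emotions = 256, 6
# Four channels: face picture, voice, infrared pulse, skin resistance.
channels = [rng.normal(size=first_qty) for _ in range(4)]

total_feature = np.concatenate(channels)   # merge into the total feature array

d = first_qty * 4
W = rng.normal(size=(d, d)) * 0.01
bias = np.zeros(d)
first_emotion = W @ total_feature + bias   # sentiment analysis function

fc = rng.normal(size=(n_emotions, d)) * 0.01
second_emotion = fc @ first_emotion        # first fully connected layer

ratios = softmax(second_emotion)           # proportion of each emotion type
```

The ratios vector has one entry per emotion type and sums to 1, which is the system's final output for the person to be analyzed.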
It will be appreciated that the sentiment analysis system provided in the second aspect corresponds to the sentiment analysis method provided in the first aspect; for the explanations, examples, specific embodiments, and beneficial effects of the related content, reference may be made to the corresponding content in the first aspect.
In a third aspect, the present invention provides a computer device, comprising:
at least one processor;
and at least one memory, wherein:
the at least one memory is configured to store a computer program; and
the at least one processor is configured to call the computer program stored in the at least one memory, to execute the sentiment analysis method provided in the first aspect.
It will be appreciated that each unit in the sentiment analysis system provided in the second aspect is a computer program module, and these computer program modules constitute the computer program stored in the above at least one memory.
In a fourth aspect, the present invention provides a computer-readable storage medium having a computer program stored thereon; when the computer program is executed by a processor, the sentiment analysis method provided in the first aspect is implemented.
In a fifth aspect, the present invention provides a computer program comprising computer-executable instructions which, when executed, cause at least one processor to execute the sentiment analysis method provided in the first aspect.
It will be appreciated that, for the computer device, computer-readable storage medium, and computer program provided in the third to fifth aspects, the explanations, specific embodiments, examples, and beneficial effects of the related content may refer to the corresponding portions of the first aspect.
It should be noted that, in this document, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Absent further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of the technical features therein may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A sentiment analysis method based on multi-channel data, characterized by comprising:
obtaining a facial expression picture, voice data, infrared pulse data, and skin resistance data of a person to be analyzed while the person watches a preset video;
converting the voice data, the infrared pulse data, and the skin resistance data into corresponding spectrograms respectively;
inputting the facial expression picture, the spectrogram corresponding to the voice data, the spectrogram corresponding to the infrared pulse data, and the spectrogram corresponding to the skin resistance data into a preset convolutional neural network model respectively, to obtain corresponding feature arrays, wherein each feature array contains a first preset quantity of feature data; and
merging the feature arrays to obtain a total feature array, and inputting the total feature array into a sentiment analysis model to obtain the proportion of each type of emotion of the person to be analyzed; wherein the sentiment analysis model comprises a pre-trained sentiment analysis function, a preset first fully connected layer, and a preset activation function; the sentiment analysis function is configured to output first emotion data according to the total feature array; the first fully connected layer is configured to convert the first emotion data into a second preset quantity of second emotion data, the second preset quantity being the number of emotion types; and the activation function is configured to determine, according to the second preset quantity of second emotion data, the proportion of each type of emotion of the person to be analyzed.
2. The sentiment analysis method according to claim 1, wherein the training process of the sentiment analysis function comprises:
labeling the emotion types respectively produced by multiple training objects while each watches the preset video;
obtaining a facial expression picture, voice data, infrared pulse data, and skin resistance data of each of the multiple training objects during watching of the preset video;
converting the voice data, the infrared pulse data, and the skin resistance data of each training object into corresponding spectrograms respectively;
inputting the facial expression picture, the spectrogram corresponding to the voice data, the spectrogram corresponding to the infrared pulse data, and the spectrogram corresponding to the skin resistance data of each training object into the preset convolutional neural network model respectively, to obtain corresponding feature arrays;
merging the feature arrays corresponding to each training object to obtain the total feature array of that training object; and
training the sentiment analysis function using the respective total feature arrays of the multiple training objects and the emotion types labeled for the multiple training objects, to obtain the sentiment analysis function.
3. The sentiment analysis method according to claim 1, wherein the structure of the convolutional neural network model comprises five convolution units connected in sequence and a second fully connected layer connected to the output of the fifth convolution unit; wherein each convolution unit comprises a convolutional layer and a down-sampling layer connected to the output of that convolutional layer, and the second fully connected layer is configured to convert the quantity of output data of the fifth convolution unit into the first preset quantity.
4. The sentiment analysis method according to claim 3, wherein:
the convolutional layer in the first of the five convolution units comprises 96 convolution kernels of size 11*11, and the down-sampling layer in the first convolution unit has a 3*3 sampling kernel and a sampling stride of 2; and/or
the convolutional layer in the second of the five convolution units comprises 128 convolution kernels of size 5*5, and the down-sampling layer in the second convolution unit has a 3*3 sampling kernel and a sampling stride of 1; and/or
the convolutional layer in the third of the five convolution units comprises 192 convolution kernels of size 3*3, and the down-sampling layer in the third convolution unit has a 3*3 sampling kernel and a sampling stride of 1; and/or
the convolutional layer in the fourth of the five convolution units comprises 192 convolution kernels of size 3*3, and the down-sampling layer in the fourth convolution unit has a 3*3 sampling kernel and a sampling stride of 1; and/or
the convolutional layer in the fifth of the five convolution units comprises 128 convolution kernels of size 3*3, and the down-sampling layer in the fifth convolution unit has a 3*3 sampling kernel and a sampling stride of 1.
5. The sentiment analysis method according to any one of claims 1 to 4, wherein the sentiment analysis function comprises:
F = W*input + bias
where W is the weight coefficient, bias is the offset parameter, input is the total feature array, and F is the first emotion data output by the sentiment analysis function.
6. The sentiment analysis method according to any one of claims 1 to 4, wherein the activation function comprises:
Si = e^(Vi) / Σ(j=1..C) e^(Vj)
where Si is the proportion of the i-th type of emotion of the person to be analyzed, Vi is the i-th second emotion datum output by the first fully connected layer, and C is the second preset quantity.
7. A sentiment analysis system based on multi-channel data, characterized by comprising:
a data acquisition unit, configured to obtain a facial expression picture, voice data, infrared pulse data, and skin resistance data of the person to be analyzed while the person watches a preset video;
a data conversion unit, configured to convert the voice data, the infrared pulse data, and the skin resistance data into corresponding spectrograms respectively;
a feature determination unit, configured to input the facial expression picture, the spectrogram corresponding to the voice data, the spectrogram corresponding to the infrared pulse data, and the spectrogram corresponding to the skin resistance data into a preset convolutional neural network model respectively, to obtain corresponding feature arrays, wherein each feature array contains a first preset quantity of feature data; and
an emotion determination unit, configured to merge the feature arrays to obtain a total feature array, and input the total feature array into a sentiment analysis model to obtain the proportion of each type of emotion of the person to be analyzed; wherein the sentiment analysis model comprises a pre-trained sentiment analysis function, a preset first fully connected layer, and a preset activation function; the sentiment analysis function is configured to output first emotion data according to the total feature array; the first fully connected layer is configured to convert the first emotion data into a second preset quantity of second emotion data, the second preset quantity being the number of emotion types; and the activation function is configured to determine, according to the second preset quantity of second emotion data, the proportion of each type of emotion of the person to be analyzed.
8. A computer device, characterized by comprising:
at least one processor;
and at least one memory, wherein:
the at least one memory is configured to store a computer program; and
the at least one processor is configured to call the computer program stored in the at least one memory, to execute the sentiment analysis method according to any one of claims 1 to 6.
9. A computer-readable storage medium having a computer program stored thereon, wherein when the computer program is executed by a processor, the sentiment analysis method according to any one of claims 1 to 6 is implemented.
10. A computer program, comprising computer-executable instructions which, when executed, cause at least one processor to execute the sentiment analysis method according to any one of claims 1 to 6.
CN201811154954.2A 2018-09-30 2018-09-30 Emotion analysis method and system based on multi-channel data Active CN109171773B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811154954.2A CN109171773B (en) 2018-09-30 2018-09-30 Emotion analysis method and system based on multi-channel data


Publications (2)

Publication Number Publication Date
CN109171773A true CN109171773A (en) 2019-01-11
CN109171773B CN109171773B (en) 2021-05-18

Family

ID=64907959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811154954.2A Active CN109171773B (en) 2018-09-30 2018-09-30 Emotion analysis method and system based on multi-channel data

Country Status (1)

Country Link
CN (1) CN109171773B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110638472A (en) * 2019-09-27 2020-01-03 新华网股份有限公司 Emotion recognition method and device, electronic equipment and computer readable storage medium
CN112836515A (en) * 2019-11-05 2021-05-25 阿里巴巴集团控股有限公司 Text analysis method, recommendation device, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106847309A (en) * 2017-01-09 2017-06-13 华南理工大学 A kind of speech-emotion recognition method
CN106878677A (en) * 2017-01-23 2017-06-20 西安电子科技大学 Student classroom Grasping level assessment system and method based on multisensor
CN107705806A (en) * 2017-08-22 2018-02-16 北京联合大学 A kind of method for carrying out speech emotion recognition using spectrogram and deep convolutional neural networks
CN107799165A (en) * 2017-09-18 2018-03-13 华南理工大学 A kind of psychological assessment method based on virtual reality technology
WO2018079106A1 (en) * 2016-10-28 2018-05-03 株式会社東芝 Emotion estimation device, emotion estimation method, storage medium, and emotion count system
CN108154096A (en) * 2017-12-19 2018-06-12 科大讯飞股份有限公司 A kind of checking method and device of hearing data
CN108597539A (en) * 2018-02-09 2018-09-28 桂林电子科技大学 Speech-emotion recognition method based on parameter migration and sound spectrograph



Also Published As

Publication number Publication date
CN109171773B (en) 2021-05-18


Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant