CN109559760A - Sentiment analysis method and system based on voice information - Google Patents

Sentiment analysis method and system based on voice information

Info

Publication number
CN109559760A
Authority
CN
China
Prior art keywords
sentiment analysis
voice
voice information
server
voice feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811647021.7A
Other languages
Chinese (zh)
Other versions
CN109559760B (en)
Inventor
李京徽 (Li Jinghui)
Original Assignee
Beijing Jinglanyu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jinglanyu Technology Co Ltd filed Critical Beijing Jinglanyu Technology Co Ltd
Priority to CN201811647021.7A priority Critical patent/CN109559760B/en
Publication of CN109559760A publication Critical patent/CN109559760A/en
Application granted granted Critical
Publication of CN109559760B publication Critical patent/CN109559760B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48 - Speech or voice analysis techniques specially adapted for particular use
    • G10L 25/51 - Speech or voice analysis techniques specially adapted for comparison or discrimination
    • G10L 25/63 - Speech or voice analysis techniques specially adapted for estimating an emotional state

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • General Health & Medical Sciences (AREA)
  • Child & Adolescent Psychology (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Telephonic Communication Services (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Embodiments of the present invention disclose a sentiment analysis method and system based on voice information, relating to the field of speech analysis technology. The method is applied to a scenario in which two parties hold a voice conversation and is executed by a server. The method comprises: receiving voice information collected by a voice acquisition device; generating an ad hoc audio file and extracting the voice features of the ad hoc audio file; performing sentiment analysis according to the voice features and producing a sentiment analysis result; and sending the sentiment analysis result to the user in different presentation modes. The present invention solves the problem that the emotions of both parties in a conversation cannot be well understood.

Description

Sentiment analysis method and system based on voice information
Technical field
Embodiments of the present invention relate to the field of speech analysis technology, and in particular to a sentiment analysis method and system based on voice information.
Background technique
With the rapid development of big data and artificial intelligence technology, multimedia information such as voice, text, images, and video is being mined intensively, providing users with intelligent experiences for the visual, auditory, and other senses. Among these types of multimedia information, voice is especially important. Whether in voice communication or audio-video transmission, speech emotion analysis is an important direction for analyzing the effectiveness of communication in depth.
Emotional problems are a major concern for most urban men and women. In social life, voice communication carries very important information: a person's attitude can be read from the tone, intonation, and response speed of the two parties in a conversation. However, when such voice information is judged only through personal subjective perception, the judgment is influenced by the language environment and by mood, so its accuracy is not high and many opportunities for communication may be missed.
Therefore, it is necessary to use machine learning and big data to help people better understand the emotion and attitude of the other party in a conversation, so as to correctly judge the effectiveness of the communication between the two parties.
Summary of the invention
To this end, embodiments of the present invention provide a sentiment analysis method and system based on voice information, to solve the problem that the emotions of both parties in a conversation cannot be well understood.
To achieve the above object, embodiments of the present invention provide the following technical solution: a sentiment analysis method based on voice information. The method is applied to a scenario in which two parties hold a voice conversation and is executed by a server. The method comprises: receiving voice information collected by a voice acquisition device; generating an ad hoc audio file and extracting the voice features of the ad hoc audio file; performing sentiment analysis according to the voice features and producing a sentiment analysis result; and sending the sentiment analysis result to the user in different presentation modes.
Preferably, the method by which the voice acquisition device collects voice information comprises: the user holds a voice call with another user through a smart voice device or communication terminal; the smart voice device or communication terminal establishes a communication connection with the server; and the voice call is recorded to generate voice information, which is then sent to the server.
Preferably, the voice features extracted by the server comprise: the volume and tone of the voice, the intonation of the sentences contained in the voice, the intervals between sentences, the duration of the call, and the continuity and emotional changes of the call.
Preferably, the method by which the server performs sentiment analysis according to the voice features comprises: after analyzing the voice features, the server obtains a mutual-pleasure degree from the combined values of the voice and tone with which the other party speaks in the conversation, and obtains an attention degree according to the voice features.
Preferably, the method by which the server produces the sentiment analysis result comprises: the server applies a psychological analysis method to the mutual-pleasure degree and the attention degree to obtain an emotion index, matches the emotion index against the emotion results in a database, and finally outputs the corresponding sentiment analysis result.
Preferably, the server sends the sentiment analysis result to the user as text or as a voice broadcast, and the user selects the query mode through a client.
Preferably, the ways in which the user consults the sentiment analysis result through the client comprise: querying in a default order automatically generated by the client, querying by the other party's phone number, or querying by the registered account of the smart voice device or communication terminal.
A sentiment analysis system based on voice information, the system comprising: a voice acquisition device, including a smart voice device or communication terminal, configured to collect the voice information of both parties in a conversation and send it to a server; a server, configured to receive the voice information, produce a sentiment analysis result, and feed it back to the user in different presentation modes; and a client, configured to receive the sentiment analysis result generated by the server for the user to query.
Preferably, the server comprises: a receiving unit, configured to receive the voice information sent by the voice acquisition device; an audio processing unit, configured to generate an ad hoc audio file and extract the voice features of the ad hoc audio file; a sentiment analysis unit, configured to perform sentiment analysis according to the voice features and produce a sentiment analysis result; and a transmission unit, configured to send the sentiment analysis result to the user in different presentation modes.
According to the embodiments of the present invention, the present invention has the following advantages: the server analyzes the voice information of both parties in the call, derives the mutual-pleasure degree and the attention degree, and compares them against a database to obtain the sentiment analysis result, so the other party's real feelings can be analyzed more accurately; and the server provides different presentation modes for the user to choose from, which offers higher operability.
Detailed description of the invention
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are merely exemplary; those of ordinary skill in the art can derive other implementation drawings from the provided drawings without creative effort.
Fig. 1 is a flowchart of a sentiment analysis method based on voice information provided by an embodiment of the present invention;
Fig. 2 is a structural diagram of a sentiment analysis system based on voice information provided by an embodiment of the present invention;
In the figures: voice acquisition device 1, server 2, client 3, receiving unit 4, audio processing unit 5, sentiment analysis unit 6, transmission unit 7.
Specific embodiment
The embodiments of the present invention are described below through specific examples, and those skilled in the art can easily understand other advantages and effects of the present invention from the content disclosed in this specification. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
With reference to Fig. 1, this embodiment provides a sentiment analysis method based on voice information, applied to a scenario in which two parties hold a voice conversation. The method is executed by the server 2 and comprises:
S1, receiving the voice information collected by the voice acquisition device 1;
S2, generating an ad hoc audio file and extracting the voice features of the ad hoc audio file;
S3, performing sentiment analysis according to the voice features and producing a sentiment analysis result;
S4, sending the sentiment analysis result to the user in different presentation modes.
Further, in step S1, the method by which the voice acquisition device 1 collects voice information comprises: the user holds a voice call with another user through a smart voice device or communication terminal; the smart voice device or communication terminal establishes a communication connection with the server 2; and the voice call is recorded to generate voice information, which is then sent to the server 2. Specifically, the user converses through a smart voice device or communication terminal, for example a smart voice phone together with an application (APP): the user dials the other party directly from the smart voice phone, and the call content is recorded and sent to the server 2; alternatively, after adding the other party as a friend through the APP, the call is made through the APP's friend-call function, and the APP uploads the call content to the cloud platform of the server 2. The voice acquisition device 1 can also be a voice collector with a recording function that can establish a connection with the server 2.
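As an illustration of this upload step, the sketch below shows one way a client could send a recorded call to the server's cloud platform. The endpoint URL, the field names, and the use of the `requests` library are assumptions made for this example; the patent only states that the recording is sent to the server 2.

```python
# Sketch only (not from the patent): hypothetical upload of a recorded call.
import requests

def upload_call_recording(audio_path: str, user_phone: str, peer_phone: str) -> bool:
    """Send a recorded call and the two phone numbers to the server's cloud platform."""
    with open(audio_path, "rb") as f:
        resp = requests.post(
            "https://example-cloud-platform/api/upload",  # hypothetical endpoint
            data={"user_phone": user_phone, "peer_phone": peer_phone},
            files={"recording": f},
            timeout=30,
        )
    return resp.ok
```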
In step S2, the voice features extracted by the server 2 comprise: the volume and tone of the voice, the intonation of the sentences contained in the voice, the intervals between sentences, the duration of the call, and the continuity and emotional changes of the call, that is, features able to reflect the speaker's mood. The server 2 generates an ad hoc audio file, which avoids occupying excessive memory on the server 2.
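By way of illustration, the sketch below extracts rough counterparts of these features (loudness, pitch, pauses between utterances, and total duration) from the ad hoc audio file using the librosa library. The specific feature set, the thresholds, and the library choice are assumptions; the patent does not specify how the features are computed.

```python
# Sketch only: extract call-level voice features from the ad hoc audio file.
import numpy as np
import librosa

def extract_voice_features(audio_path: str, sr: int = 16000) -> dict:
    y, sr = librosa.load(audio_path, sr=sr)

    # Loudness: frame-level RMS energy.
    rms = librosa.feature.rms(y=y)[0]

    # Pitch (fundamental frequency) as a rough proxy for tone and intonation.
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)
    f0 = f0[np.isfinite(f0)]

    # Non-silent intervals; the gaps between them approximate pauses between sentences.
    intervals = librosa.effects.split(y, top_db=30)
    gaps = [(start2 - end1) / sr
            for (_, end1), (start2, _) in zip(intervals[:-1], intervals[1:])]

    return {
        "duration_s": librosa.get_duration(y=y, sr=sr),
        "mean_volume": float(np.mean(rms)),
        "volume_variation": float(np.std(rms)),
        "mean_pitch_hz": float(np.mean(f0)) if f0.size else 0.0,
        "pitch_variation": float(np.std(f0)) if f0.size else 0.0,
        "mean_pause_s": float(np.mean(gaps)) if gaps else 0.0,
        "num_utterances": int(len(intervals)),
    }
```

The returned dictionary is reused by the scoring sketch given for step S3 below.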
In step S3, the method by which the server 2 performs sentiment analysis according to the voice features comprises: after analyzing the voice features, the server 2 obtains a mutual-pleasure degree from the combined values of the voice and tone with which the other party speaks during the conversation, and obtains an attention degree according to the voice features, for example a mutual-pleasure degree of 78% and an attention degree of 82%. The level of the mutual-pleasure degree is determined by statistics over the combined values of the other party's voice and tone in the conversation, from which the other party's degree of fondness toward you can be inferred, because a person's speech usually reveals his emotional changes, betraying his inner activity and his emotional inclination toward another person. The attention degree depends on the emotional changes of both parties during the call and on how much attention the caller pays to the call itself. Of course, the level of the mutual-pleasure degree and the topic itself have a direct influence on the numerical result of the attention degree. Research in the psychology of emotion shows that the values of the mutual-pleasure degree and the attention degree represent a person's attitude toward the other party, and also reflect the level of attention and degree of psychological involvement the other party maintains during the communication. A very accurate sentiment analysis result can therefore be derived from the mutual-pleasure degree and the attention degree.
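The patent does not disclose the formulas behind these two percentages, so the sketch below only illustrates how such scores could be combined from the extracted voice features; the weights and normalization constants are assumptions invented for this example.

```python
# Sketch only: assumed combination of voice features into the two scores.
def score_call(features: dict) -> dict:
    def clamp(x: float) -> float:
        return max(0.0, min(1.0, x))

    # Mutual-pleasure degree: driven by the other party's voice and tone
    # (here: louder, livelier, more varied pitch -> higher score).
    pleasure = clamp(
        0.4 * clamp(features["mean_volume"] / 0.1)
        + 0.3 * clamp(features["pitch_variation"] / 50.0)
        + 0.3 * clamp(features["mean_pitch_hz"] / 300.0)
    )

    # Attention degree: driven by responsiveness and continuity of the call
    # (shorter pauses and more utterances -> higher score).
    attention = clamp(
        0.5 * (1.0 - clamp(features["mean_pause_s"] / 2.0))
        + 0.5 * clamp(features["num_utterances"] / 50.0)
    )

    return {"mutual_pleasure": round(pleasure * 100), "attention": round(attention * 100)}
```

Under these assumed weights, `score_call(extract_voice_features(path))` would yield percentages comparable in form to the 78% and 82% mentioned above.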
The server 2 then applies a psychological analysis method to the mutual-pleasure degree and the attention degree to obtain an emotion index, matches the emotion index against the emotion results in a database, and finally outputs the corresponding sentiment analysis result. The sentiment analysis result takes the form of a sentiment analysis plus a blessing, for example: "Do you feel as though you had met before, like old friends reunited after a long separation? It looks as if you greatly enjoy each other's company. You care for each other and trust each other ... Finally, here is a poem that best represents your emotional state at this moment, as a blessing for you!"
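As an illustration of this matching step, the sketch below combines the two scores into an emotion index and looks it up in a small hypothetical table of emotion results; the combination rule, the thresholds, and the result texts are all assumptions, since the patent only describes matching against a database.

```python
# Sketch only: hypothetical emotion-index computation and database matching.
EMOTION_DB = [
    (80, "You enjoy each other's company very much; you care for and trust each other."),
    (60, "The conversation is warm and attentive; the relationship is developing well."),
    (40, "The conversation is polite but reserved; more engagement may help."),
    (0,  "The other party seems distracted or distant during this call."),
]

def sentiment_result(mutual_pleasure: int, attention: int) -> str:
    emotion_index = (mutual_pleasure + attention) / 2  # assumed combination rule
    for threshold, text in EMOTION_DB:                 # thresholds are descending
        if emotion_index >= threshold:
            return f"Emotion index {emotion_index:.0f}: {text}"
    return "No matching emotion result."
```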
In step S4, the server 2 sends the sentiment analysis result to the user as text or as a voice broadcast, and the user selects the query mode through the client 3. The query modes include: querying in a default order automatically generated by the client 3, querying by the other party's phone number, or querying by the APP's registered account. This method can accommodate the different habits of different users and is therefore widely applicable.
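The sketch below shows one possible client-side lookup implementing the three query modes listed above; the in-memory result store and its field names ("phone", "account", "text") are assumptions for illustration.

```python
# Sketch only (not from the patent): hypothetical client-side result queries.
class ResultClient:
    def __init__(self):
        # results as pushed by the server, newest first (the "default order")
        self.results: list[dict] = []

    def query_default(self) -> list[dict]:
        """Return all results in the automatically generated default order."""
        return list(self.results)

    def query_by_phone(self, phone: str) -> list[dict]:
        """Return results for calls with the given counterpart phone number."""
        return [r for r in self.results if r.get("phone") == phone]

    def query_by_account(self, account: str) -> list[dict]:
        """Return results associated with the given registered APP account."""
        return [r for r in self.results if r.get("account") == account]
```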
With reference to Fig. 2, this embodiment provides a sentiment analysis system based on voice information, comprising:
a voice acquisition device 1, including a smart voice device or communication terminal such as a smart voice phone or an APP, configured to collect the voice information of both parties in a conversation and send it to the server 2; a server 2, configured to receive the voice information, produce a sentiment analysis result, and feed it back to the user in different presentation modes; and a client 3, configured to receive the sentiment analysis result generated by the server 2 for the user to query. The voice acquisition device 1, the server 2, and the client 3 are communicatively connected.
Further, the server 2 comprises: a receiving unit 4, configured to receive the voice information sent by the voice acquisition device 1; an audio processing unit 5, configured to generate an ad hoc audio file and extract the voice features of the ad hoc audio file; a sentiment analysis unit 6, configured to perform sentiment analysis according to the voice features and produce a sentiment analysis result; and a transmission unit 7, configured to send the sentiment analysis result to the user in different presentation modes. The receiving unit 4, the audio processing unit 5, the sentiment analysis unit 6, and the transmission unit 7 are connected in sequence.
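The sketch below shows one possible way to compose the four units into the processing chain of the server 2, reusing the helper sketches given earlier in this description (extract_voice_features, score_call, sentiment_result). The class layout and method names are assumptions, not the patented implementation.

```python
# Sketch only: assumed composition of receiving unit 4 -> audio processing
# unit 5 -> sentiment analysis unit 6 -> transmission unit 7.
import tempfile

class Server:
    def handle_call_recording(self, audio_bytes: bytes, user_phone: str) -> str:
        audio_path = self.receive(audio_bytes)          # receiving unit 4
        features = extract_voice_features(audio_path)   # audio processing unit 5
        scores = score_call(features)                   # sentiment analysis unit 6
        result = sentiment_result(scores["mutual_pleasure"], scores["attention"])
        self.transmit(user_phone, result)               # transmission unit 7
        return result

    def receive(self, audio_bytes: bytes) -> str:
        # write the incoming recording to an ad hoc audio file on disk
        with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as f:
            f.write(audio_bytes)
            return f.name

    def transmit(self, user_phone: str, result: str) -> None:
        # placeholder delivery: the real system sends text or a voice broadcast
        print(f"To {user_phone}: {result}")
```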
Although the present invention has been described in detail above with general descriptions and specific embodiments, modifications or improvements can be made on the basis of the present invention, which will be obvious to those skilled in the art. Therefore, such modifications or improvements made without departing from the spirit of the present invention all fall within the scope claimed by the present invention.

Claims (9)

1. A sentiment analysis method based on voice information, characterized in that the method is applied to a scenario in which two parties hold a voice conversation and is executed by a server, the method comprising:
receiving voice information collected by a voice acquisition device;
generating an ad hoc audio file and extracting voice features of the ad hoc audio file;
performing sentiment analysis according to the voice features and producing a sentiment analysis result;
sending the sentiment analysis result to a user in different presentation modes.
2. The sentiment analysis method based on voice information according to claim 1, characterized in that the method by which the voice acquisition device collects voice information comprises: a user holds a voice call with another user through a smart voice device or communication terminal; the smart voice device or communication terminal establishes a communication connection with the server; and the voice call is recorded to generate voice information, which is then sent to the server.
3. The sentiment analysis method based on voice information according to claim 1, characterized in that the voice features extracted by the server comprise: the volume and tone of the voice, the intonation of the sentences contained in the voice, the intervals between sentences, the duration of the call, and the continuity and emotional changes of the call.
4. The sentiment analysis method based on voice information according to claim 1, characterized in that the method by which the server performs sentiment analysis according to the voice features comprises: after the server analyzes the voice features, obtaining a mutual-pleasure degree from the combined values of the voice and tone with which the other party speaks in the conversation, and obtaining an attention degree according to the voice features.
5. The sentiment analysis method based on voice information according to claim 1 or 4, characterized in that the method by which the server produces the sentiment analysis result comprises: the server applies a psychological analysis method to the mutual-pleasure degree and the attention degree to obtain an emotion index, matches the emotion index against the emotion results in a database, and finally outputs the corresponding sentiment analysis result.
6. The sentiment analysis method based on voice information according to claim 5, characterized in that the server sends the sentiment analysis result to the user as text or as a voice broadcast, and the user selects a query mode through a client.
7. The sentiment analysis method based on voice information according to claim 6, characterized in that the ways in which the user consults the sentiment analysis result through the client comprise: querying in a default order automatically generated by the client, querying by the other party's phone number, or querying by the registered account of the smart voice device or communication terminal.
8. A sentiment analysis system based on voice information, characterized in that the system comprises:
a voice acquisition device, including a smart voice device or communication terminal, configured to collect voice information of both parties in a conversation and send it to a server;
a server, configured to receive the voice information, produce a sentiment analysis result, and feed it back to a user in different presentation modes;
a client, configured to receive the sentiment analysis result generated by the server for the user to query.
9. The sentiment analysis system based on voice information according to claim 8, characterized in that the server comprises:
a receiving unit, configured to receive the voice information sent by the voice acquisition device;
an audio processing unit, configured to generate an ad hoc audio file and extract the voice features of the ad hoc audio file;
a sentiment analysis unit, configured to perform sentiment analysis according to the voice features and produce a sentiment analysis result;
a transmission unit, configured to send the sentiment analysis result to the user in different presentation modes.
CN201811647021.7A 2018-12-29 2018-12-29 Emotion analysis method and system based on voice information Active CN109559760B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811647021.7A CN109559760B (en) 2018-12-29 2018-12-29 Emotion analysis method and system based on voice information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811647021.7A CN109559760B (en) 2018-12-29 2018-12-29 Emotion analysis method and system based on voice information

Publications (2)

Publication Number Publication Date
CN109559760A (en) 2019-04-02
CN109559760B (en) 2021-11-16

Family

ID=65872144

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811647021.7A Active CN109559760B (en) 2018-12-29 2018-12-29 Emotion analysis method and system based on voice information

Country Status (1)

Country Link
CN (1) CN109559760B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103546503A (en) * 2012-07-10 2014-01-29 百度在线网络技术(北京)有限公司 Voice-based cloud social system, voice-based cloud social method and cloud analysis server
CN103093752A (en) * 2013-01-16 2013-05-08 华南理工大学 Sentiment analytical method based on mobile phone voices and sentiment analytical system based on mobile phone voices
CN103634472A (en) * 2013-12-06 2014-03-12 惠州Tcl移动通信有限公司 Method, system and mobile phone for judging mood and character of user according to call voice
CN104538043A (en) * 2015-01-16 2015-04-22 北京邮电大学 Real-time emotion reminder for call
US20170310820A1 (en) * 2016-04-26 2017-10-26 Fmr Llc Determining customer service quality through digitized voice characteristic measurement and filtering
CN106847310A (en) * 2017-02-17 2017-06-13 安徽金猫数字科技有限公司 A kind of sentiment analysis system based on speech recognition
CN108259686A (en) * 2017-12-28 2018-07-06 合肥凯捷技术有限公司 A kind of customer service system based on speech analysis

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110211563A (en) * 2019-06-19 2019-09-06 平安科技(深圳)有限公司 Chinese speech synthesis method, apparatus and storage medium towards scene and emotion
WO2020253509A1 (en) * 2019-06-19 2020-12-24 平安科技(深圳)有限公司 Situation- and emotion-oriented chinese speech synthesis method, device, and storage medium
CN110211563B (en) * 2019-06-19 2024-05-24 平安科技(深圳)有限公司 Chinese speech synthesis method, device and storage medium for scenes and emotion

Also Published As

Publication number Publication date
CN109559760B (en) 2021-11-16

Similar Documents

Publication Publication Date Title
Jenks Social interaction in second language chat rooms
US20080240379A1 (en) Automatic retrieval and presentation of information relevant to the context of a user's conversation
US20130144619A1 (en) Enhanced voice conferencing
CN111294471B (en) Intelligent telephone answering method and system
JP2006005945A (en) Method of communicating and disclosing feelings of mobile terminal user and communication system thereof
US11803579B2 (en) Apparatus, systems and methods for providing conversational assistance
US11699043B2 (en) Determination of transcription accuracy
KR20100129122A (en) Animation system for reproducing text base data by animation
CN113194203A (en) Communication system, answering and dialing method and communication system for hearing-impaired people
CN111063346A (en) Cross-media star emotion accompany interaction system based on machine learning
CN109559760A (en) A kind of sentiment analysis method and system based on voice messaging
US11790887B2 (en) System with post-conversation representation, electronic device, and related methods
KR102605178B1 (en) Device, method and computer program for generating voice data based on family relationship
Michalsky et al. Effects of perceived attractiveness and likability on global aspects of fundamental frequency
US20220172711A1 (en) System with speaker representation, electronic device and related methods
KR20200083905A (en) System and method to interpret and transmit speech information
US20240153398A1 (en) Virtual meeting coaching with dynamically extracted content
Cromartie et al. Evaluating communication technologies for the deaf and hard of hearing
US20240153397A1 (en) Virtual meeting coaching with content-based evaluation
Agelfors et al. User evaluation of the SYNFACE talking head telephone
KR20180034927A (en) Communication terminal for analyzing call speech
US20220351727A1 (en) Conversaton method, conversation system, conversation apparatus, and program
WO2024102289A1 (en) Virtual meeting coaching with dynamically extracted content
Senkoyuncu et al. Do You Hear What I Hear? Long-Distance Relationships and the Power of a Loved One’s Voice
Shirley et al. VoIPText: Voice chat for deaf and hard of hearing people

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20190528

Address after: Room 804, Building 9, Wannian Huacheng Wanfangyuan District 2, Fengtai District, Beijing

Applicant after: Li Jinghui

Address before: 100022 Block B, Jiqing Lijiahui Center, Chaoyang Menwai Street, Chaoyang District, Beijing

Applicant before: Beijing Jinglanyu Technology Co., Ltd.

GR01 Patent grant