KR20140126485A - Method of Emotion Reactive Type Mobile Private Secretary Service - Google Patents
- Publication number
- KR20140126485A
- Authority
- KR
- South Korea
- Prior art keywords
- response
- emotion
- command
- user
- module
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Abstract
The emotion reactive type mobile personal assistant service method of the present invention comprises the steps of: (a) receiving a voice command of a user from a voice receiving module; (b) analyzing the voice command in a characterization module and converting the analyzed voice command into a text command; (c) analyzing the pitch and pace of the sound of the voice command in an emotion extraction module, and analyzing the words of the text command to extract the emotion of the user; (d) providing a response to the text command in a response module; (e) determining, in an emotion response module, a response emotion corresponding to the emotion of the user extracted in the step (c), and constructing a response sentence using the response prepared in the step (d); and (f) reproducing the response sentence constructed in the step (e).
Description
The present invention relates to an emotion reactive type mobile personal assistant service method, and more particularly, to a mobile personal assistant service method which recognizes a user's voice on a mobile device, processes a command based on the voice, and responds in a manner that reflects the user's emotion.
Mobile personal assistant services, such as Siri, which process tasks such as search, e-mail sending, and schedule registration on a mobile device when the user issues commands by voice and then inform the user of the result, have recently been put to practical use.
A conventional personal assistant service generally recognizes a user's voice command as a text command using various voice recognition techniques and processes the user's command according to the recognition result. Korean Unexamined Patent Application Publication No. 2003-0033890 discloses a system for providing a personal assistant service using such a speech recognition technology.
Such a conventional personal assistant service converts the voice command into text based on the meaning of the words contained in the user's voice command and recognizes only the informational content as a command; it does not recognize the user's emotion. As a result, the response of the mobile personal assistant service is the same regardless of whether the user feels sadness, anger, joy, or the like.
Such a conventional mobile personal assistant service can therefore feel lifeless to the user, and the user may quickly lose interest in using it. As a result, the frequency of use decreases, and the perceived need for the service decreases as well.
Accordingly, the present invention has been made to solve the above problems, and it is an object of the present invention to provide an emotion reactive type mobile personal assistant service method which recognizes the user's emotion by analyzing the user's voice command and responds in accordance with that emotion.
According to another aspect of the present invention, there is provided an emotion reactive type mobile personal assistant service method comprising the steps of: (a) receiving a voice command of a user from a voice receiving module; (b) analyzing the voice command in a characterization module and converting the analyzed voice command into a text command; (c) analyzing the pitch and pace of the sound of the voice command in an emotion extraction module, and analyzing the words of the text command to extract the emotion of the user; (d) providing a response to the text command in a response module; (e) determining, in an emotion response module, a response emotion corresponding to the emotion of the user extracted in the step (c), and constructing a response sentence using the response prepared in the step (d); and (f) reproducing the response sentence constructed in the step (e).
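Read as a processing pipeline, steps (a) through (f) can be sketched as follows. This is only an illustrative outline: every function below is a hypothetical stand-in for the corresponding module, since the patent does not prescribe any concrete implementation.

```python
# Illustrative pipeline for steps (a)-(f). All functions are hypothetical
# stand-ins for the patent's modules; the returned values are placeholders.
from dataclasses import dataclass

@dataclass
class Emotion:
    harmony: float  # pleasant (+1) .. unpleasant (-1), from the text analysis
    tension: float  # aroused (+1) .. relaxed (-1), from the acoustic analysis

def to_text(audio: bytes) -> str:                          # (b) characterization module
    return "what is today's weather"

def extract_emotion(audio: bytes, text: str) -> Emotion:   # (c) emotion extraction module
    return Emotion(harmony=-0.2, tension=0.6)

def prepare_response(text: str) -> str:                    # (d) response module
    return "It will rain today."

def choose_response_emotion(user: Emotion) -> Emotion:     # (e) emotion response module
    return Emotion(harmony=max(user.harmony, 0.0), tension=0.5 * user.tension)

def compose_sentence(answer: str, emotion: Emotion) -> str:
    return ("Unfortunately, " if emotion.harmony < 0 else "") + answer

def reproduce(sentence: str, emotion: Emotion) -> str:     # (f) audio reproduction module
    return f"[pitch x{1 + 0.2 * emotion.tension:.2f}] {sentence}"

def assistant_turn(audio: bytes) -> str:                   # (a) voice command already received
    text_command = to_text(audio)
    user_emotion = extract_emotion(audio, text_command)
    answer = prepare_response(text_command)
    response_emotion = choose_response_emotion(user_emotion)
    sentence = compose_sentence(answer, response_emotion)
    return reproduce(sentence, response_emotion)

print(assistant_turn(b""))  # -> "[pitch x1.06] It will rain today."
```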
The emotion reactive type mobile personal assistant service method of the present invention analyzes the emotion contained in the user's voice command and adjusts the emotion of the mobile personal assistant's response according to the result.
FIG. 1 is a block diagram illustrating an emotional response type mobile personal assistant service method according to the present invention.
FIG. 2 illustrates an emotional plane for explaining the emotional responsive mobile personal assistant service method according to the present invention.
Hereinafter, the emotion reactive type mobile personal assistant service method according to the present invention will be described in detail with reference to a preferred embodiment.
When the user speaks a voice command to the mobile device, the voice receiving module receives the voice command (step (a)).
The voice command received by the voice receiving module is analyzed in the characterization module and converted into a text command (step (b)).
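A minimal sketch of the conversion in step (b), assuming the open-source SpeechRecognition package and its Google Web Speech backend as a stand-in; the patent does not name any particular speech recognition engine, and the language code is an assumption.

```python
# Step (b) sketch: convert a recorded voice command to a text command.
# Assumes the third-party SpeechRecognition package (pip install SpeechRecognition);
# the patent itself does not specify a recognizer.
import speech_recognition as sr

def voice_to_text(wav_path: str, language: str = "ko-KR") -> str:
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)  # read the entire voice command
    # Uses the Google Web Speech API; raises sr.UnknownValueError if nothing is recognized.
    return recognizer.recognize_google(audio, language=language)
```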
The emotion extraction module then analyzes the pitch and pace of the sound of the voice command and the words of the text command to extract the emotion of the user (step (c)). The extracted emotion is characterized by two values: a degree of harmony and a degree of tension.
The degree of harmony is a value that quantifies how pleasant or unpleasant the user's feeling is, as shown in FIG. 2. The emotion extraction module quantifies it by analyzing the degree to which negative or positive morphemes and negative or positive vocabulary are included in the text command.
The degree of tension is a value that quantifies how tense or excited the user is, as shown in FIG. 2. When the tension is high, the user is in a surprised, awakened state; when it is low, the user is in a calm, relaxed state. The emotion extraction module quantifies it by determining whether the pitch and pace of the sound of the voice command are higher and faster, or lower and slower, than a predetermined acoustic reference.
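A minimal sketch of how the two scores of step (c) might be quantified. The word lists, the acoustic reference values, and the weighting are assumptions made for illustration; the patent only specifies that positive or negative morphemes and vocabulary drive the degree of harmony and that pitch and pace above or below a predetermined reference drive the degree of tension.

```python
# Step (c) sketch: quantify the degree of harmony from the text command and the
# degree of tension from the acoustics. All constants and word lists are assumptions.
POSITIVE_WORDS = {"good", "great", "happy", "thanks", "nice"}
NEGATIVE_WORDS = {"bad", "terrible", "sad", "angry", "awful"}

REF_PITCH_HZ = 180.0  # assumed neutral pitch reference
REF_RATE_SPS = 4.0    # assumed neutral speaking rate, syllables per second

def degree_of_harmony(text_command: str) -> float:
    """Pleasant (+1) .. unpleasant (-1), from positive/negative words in the text."""
    words = text_command.lower().split()
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def degree_of_tension(mean_pitch_hz: float, rate_sps: float) -> float:
    """Aroused (+1) .. relaxed (-1), from pitch and pace relative to the reference."""
    pitch_term = (mean_pitch_hz - REF_PITCH_HZ) / REF_PITCH_HZ
    rate_term = (rate_sps - REF_RATE_SPS) / REF_RATE_SPS
    return max(-1.0, min(1.0, 0.5 * pitch_term + 0.5 * rate_term))

print(degree_of_harmony("this terrible weather makes me sad"))  # -> -1.0
print(degree_of_tension(mean_pitch_hz=230.0, rate_sps=5.0))     # -> ~0.26
```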
The emotion extraction module forms a two-dimensional emotional plane whose first axis represents the degree of harmony and whose second axis represents the degree of tension, and classifies the emotion of the user according to its position on this plane.
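As a rough illustration of the emotional plane of FIG. 2, the quadrant labels below are assumptions consistent with the description (awakened states at high tension, relaxed states at low tension, pleasant versus unpleasant along the harmony axis).

```python
# Emotional-plane sketch: classify the user's emotion by its quadrant.
# The label names are illustrative assumptions, not terms fixed by the patent.
def classify_emotion(harmony: float, tension: float) -> str:
    if tension >= 0:
        return "joy / excitement" if harmony >= 0 else "anger / surprise"
    return "contentment / relaxation" if harmony >= 0 else "sadness / gloom"

print(classify_emotion(harmony=-1.0, tension=0.26))  # -> "anger / surprise"
```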
Meanwhile, the response module prepares a response to the text command (step (d)).
For example, if the voice command of the user is "What is today's weather?", the response module searches for today's weather and prepares an answer such as "Today is going to rain."
The emotion response module then determines a response emotion corresponding to the extracted emotion of the user (step (e)).
The process of determining the response emotion in the emotion response module sets a position on the emotional plane for the response emotion according to the position on the emotional plane of the emotion extracted from the voice command.
When the response emotion is determined as described above, a response sentence is constructed according to the determined emotion. The degree of tension is expressed by adjusting the pitch, pace, and volume of the sound of the response sentence, and the degree of harmony is expressed by adjusting the wording of the response sentence, for example by adding a sympathizing or upbeat phrase. The result is a response sentence that reflects the response emotion, such as "Unfortunately, today is rainy," "Today is cool," or "Today is going to rain."
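A sketch of how step (e) might render the response emotion: tension as prosody (pitch, pace, volume) and harmony as a sympathizing or upbeat lead-in. The scaling factors, thresholds, and phrases are illustrative assumptions.

```python
# Step (e)/(f) sketch: build the response sentence and prosody settings.
# Scaling factors, thresholds and lead-in phrases are assumptions for illustration.
def compose_and_render(answer: str, harmony: float, tension: float) -> dict:
    if harmony < -0.3:
        sentence = "Unfortunately, " + answer
    elif harmony > 0.3:
        sentence = "Good news: " + answer
    else:
        sentence = answer
    return {
        "text": sentence,
        "pitch_scale": 1.0 + 0.20 * tension,   # higher pitch when tense
        "rate_scale": 1.0 + 0.15 * tension,    # faster speech when tense
        "volume_scale": 1.0 + 0.10 * tension,  # louder when tense
    }

# A rainy-day answer delivered to a displeased, slightly tense user.
print(compose_and_render("today is rainy", harmony=-0.5, tension=0.4))
```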
When generation of the response sentence is completed as described above, the audio reproduction module reproduces the response sentence as speech (step (f)).
On the other hand, as described above, the personality of the personal secretary service can be defined by the relationship that associates a position on the emotional plane of the voice command with a position on the emotional plane of the response emotion. For example, depending on whether the personality of the personal secretary service is set to a male in his fifties or a female in her thirties, the position of the response emotion on the emotional plane can change, and the form of the response sentence may also differ according to the determined personality.
Therefore, to make effective use of these personality-dependent differences in the response sentence, predetermined types of personal secretary personality can be presented to the user, and the user selects one of them (step (g)). As described above, the personality types may distinguish, for example, a male voice from a female voice, or a cheerful personality from a cunning one, and the user chooses among them.
Once the personality type of the personal secretary service has been determined in this way, the step (e) determines the response emotion and constructs the response sentence in consideration of the personality type selected in the step (g). That is, the position on the emotional plane of the response emotion that corresponds to a given position on the emotional plane of the voice command is set differently depending on the personality type of the personal secretary service.
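As a sketch of how the personality selected in step (g) could change the mapping from the user's position on the emotional plane to the response emotion's position, consider the following; the personality presets and gain values are purely illustrative assumptions.

```python
# Step (g) sketch: the selected personality changes how the user's emotional-plane
# position maps to the response emotion's position. Presets are assumptions.
PERSONALITIES = {
    "calm_50s_male":       {"harmony_gain": 0.5, "tension_gain": 0.3, "harmony_bias": 0.1},
    "cheerful_30s_female": {"harmony_gain": 0.8, "tension_gain": 0.7, "harmony_bias": 0.3},
}

def response_emotion(user_harmony: float, user_tension: float, personality: str) -> tuple:
    p = PERSONALITIES[personality]
    clamp = lambda x: max(-1.0, min(1.0, x))
    return (clamp(p["harmony_gain"] * user_harmony + p["harmony_bias"]),
            clamp(p["tension_gain"] * user_tension))

# The same user emotion yields a different response emotion for each personality.
print(response_emotion(-0.5, 0.6, "calm_50s_male"))        # -> approx. (-0.15, 0.18)
print(response_emotion(-0.5, 0.6, "cheerful_30s_female"))  # -> approx. (-0.10, 0.42)
```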
110: voice receiving module 120: characterization module
130: emotion extraction module 140: response module
150: emotion response module 160: audio reproduction module
Claims (8)
An emotion reactive type mobile personal assistant service method comprising the steps of:
(a) receiving a voice command of a user from a voice receiving module;
(b) analyzing the voice command in a characterization module and converting the analyzed voice command into a text command;
(c) analyzing the pitch and pace of the sound of the voice command in the emotion extraction module, and analyzing the words of the text command to extract the emotion of the user;
(d) providing a response to the text command in a response module;
(e) determining, in the emotion response module, a response emotion corresponding to the emotion of the user extracted in the step (c), and constructing a response sentence using the response prepared in the step (d); and
(f) reproducing the response sentence constructed in the step (e).
Wherein the step (c) recognizes the user as being in an awakened state when the sound of the voice command is higher and faster than a predetermined acoustic reference, and as being in a relaxed state when it is lower and slower than the acoustic reference.
Wherein the step (c) analyzes the words of the text command to determine the degree to which negative or positive morphemes and negative or positive vocabulary are included in the text command, and thereby quantifies the degree of unpleasantness or pleasantness.
Wherein the step (c) comprises forming an emotional plane having a first axis representing the degree of unpleasantness and pleasantness according to the analysis result of the text command and a second axis representing the degree of tension according to the analysis result of the voice command, and classifying and extracting the emotion of the user according to a position on the two-dimensional emotional plane.
Wherein the step (e) comprises setting a position on the emotional plane for the response emotion according to the position on the emotional plane of the voice command recognized in the step (c), and constructing the response sentence according to that position on the emotional plane.
Wherein the step (e) comprises arranging the response sentence in the form of an audio file by adjusting the volume, the pitch, and the pace of the sound of the response sentence according to the position of the response emotion on the emotional plane.
Wherein the step (e) comprises constructing the response sentence by adjusting the morphemes, vocabulary, and endings of the response sentence according to the position of the response emotion on the emotional plane.
(g) providing the user with predetermined types of personal secretary service personality and receiving from the user a selection of one of the types of personality,
Wherein the step (e) comprises determining the response emotion in consideration of the type of personality of the personal secretary service selected in the step (g), and composing the response sentence.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20130044708A KR20140126485A (en) | 2013-04-23 | 2013-04-23 | Method of Emotion Reactive Type Mobile Private Secretary Service |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20130044708A KR20140126485A (en) | 2013-04-23 | 2013-04-23 | Method of Emotion Reactive Type Mobile Private Secretary Service |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20140126485A (en) | 2014-10-31 |
Family
ID=51995735
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR20130044708A KR20140126485A (en) | 2013-04-23 | 2013-04-23 | Method of Emotion Reactive Type Mobile Private Secretary Service |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20140126485A (en) |
- 2013-04-23: KR KR20130044708A patent/KR20140126485A/en not_active Application Discontinuation
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020067710A1 (en) * | 2018-09-27 | 2020-04-02 | 삼성전자 주식회사 | Method and system for providing interactive interface |
CN111226194A (en) * | 2018-09-27 | 2020-06-02 | 三星电子株式会社 | Method and system for providing interactive interface |
US11423895B2 (en) | 2018-09-27 | 2022-08-23 | Samsung Electronics Co., Ltd. | Method and system for providing an interactive interface |
WO2020197166A1 (en) * | 2019-03-22 | 2020-10-01 | Samsung Electronics Co., Ltd. | Electronic device providing response and method of operating same |
US11430438B2 (en) | 2019-03-22 | 2022-08-30 | Samsung Electronics Co., Ltd. | Electronic device providing response corresponding to user conversation style and emotion and method of operating same |
CN111370030A (en) * | 2020-04-03 | 2020-07-03 | 龙马智芯(珠海横琴)科技有限公司 | Voice emotion detection method and device, storage medium and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11727914B2 (en) | Intent recognition and emotional text-to-speech learning | |
CN110288077B (en) | Method and related device for synthesizing speaking expression based on artificial intelligence | |
KR102582291B1 (en) | Emotion information-based voice synthesis method and device | |
US10402500B2 (en) | Device and method for voice translation | |
US10621968B2 (en) | Method and apparatus to synthesize voice based on facial structures | |
CN106503646B (en) | Multi-mode emotion recognition system and method | |
US20140303958A1 (en) | Control method of interpretation apparatus, control method of interpretation server, control method of interpretation system and user terminal | |
WO2016150001A1 (en) | Speech recognition method, device and computer storage medium | |
CN105940407A (en) | Systems and methods for evaluating strength of an audio password | |
TW201606760A (en) | Real-time emotion recognition from audio signals | |
CN110910903B (en) | Speech emotion recognition method, device, equipment and computer readable storage medium | |
KR102345625B1 (en) | Caption generation method and apparatus for performing the same | |
KR20140126485A (en) | Method of Emotion Reactive Type Mobile Private Secretary Service | |
WO2020073839A1 (en) | Voice wake-up method, apparatus and system, and electronic device | |
US20230148275A1 (en) | Speech synthesis device and speech synthesis method | |
KR102622350B1 (en) | Electronic apparatus and control method thereof | |
KR102138132B1 (en) | System for providing animation dubbing service for learning language | |
KR20210098250A (en) | Electronic device and Method for controlling the electronic device thereof | |
US20190019497A1 (en) | Expressive control of text-to-speech content | |
KR102457822B1 (en) | apparatus and method for automatic speech interpretation | |
EP4350690A1 (en) | Artificial intelligence device and operating method thereof | |
OUKAS et al. | ArabAlg: A new Dataset for Arabic Speech Commands Recognition for Machine Learning Purposes | |
CN117174067A (en) | Speech processing method, device, electronic equipment and computer readable medium | |
KR20220116660A (en) | Tumbler device with artificial intelligence speaker function | |
CN117352000A (en) | Speech classification method, device, electronic equipment and computer readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E601 | Decision to refuse application |