KR20130083035A - A method for communication of emotion by conversational speech in cyber world - Google Patents

A method for communication of emotion by conversational speech in cyber world

Info

Publication number
KR20130083035A
Authority
KR
South Korea
Prior art keywords
emotion
character
voice
cyber world
recognized
Prior art date
Application number
KR1020110144086A
Other languages
Korean (ko)
Inventor
공병구
Original Assignee
공병구
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 공병구 filed Critical 공병구
Priority to KR1020110144086A priority Critical patent/KR20130083035A/en
Publication of KR20130083035A publication Critical patent/KR20130083035A/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/011 Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Abstract

PURPOSE: Provided are a method for recognizing emotional expressions such as abuse and praise, and their degree, in conversational speech in the cyber world, and a method for expressing their influence on a character in the cyber world, so that emotion is delivered to the other party through voice recognition and expressed as a change in the character rather than remaining as text. CONSTITUTION: When a voice is input and transmitted through the microphone of a terminal, the voice is recognized and classified as either a general command or an emotional expression. A general command is recognized through a continuous speech recognition engine. An emotional expression is passed to an emotion recognition engine unit, which recognizes the kind and degree of the emotion. The influence on the other party's character is then expressed according to the result. [Reference numerals] (AA) Terminal (user); (BB) Network or line; (CC) Voice DB; (DD) Praise DB; (EE) Voice emotion recognition engine; (FF) Abuse DB; (GG) Character management control unit; (HH) Character DB; (II) Sound effect DB

Description

Recognizing emotional expressions such as abuse and praise and their degree, and expressing their influence on a character in cyberspace {A method for communication of emotion by conversational speech in cyber world}

IT (HCI)

Conversational speech recognition technology, emotion recognition technology

In the prior art, emotion in cyberspace is expressed one-sidedly through text such as comments, so exchanges remain at the level of information transfer without any real interaction of emotional communication.

A "conversational emotion recognition engine" is developed by collecting the conversational expressions people direct at others, covering praise, encouragement, comfort, and support as well as abuse, slander, and ridicule, and applying continuous speech recognition and semantic-structure analysis to them. Using this engine, the emotion a user conveys toward the other party in natural conversational speech is recognized, and the result is immediately reflected as a change in the character in cyberspace. The user thus sees an immediate reaction to his or her words, which demonstrates interaction rather than the one-way emotional expression of text comments in cyberspace.
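The routing this paragraph describes can be illustrated with a small sketch. The phrase lists, weights, and function name below are placeholders chosen for illustration, not the databases or engine of this application; they only show how a recognized utterance might be split into a general command versus an emotional expression with a kind and a degree.

```python
# NOTE: illustrative assumptions only -- the phrase lists, weights, and names
# below are placeholders, not the DBs or engine described in this application.

PRAISE_TERMS = {"well done": 1, "great job": 2, "you are the best": 3}
ABUSE_TERMS = {"idiot": 1, "you are useless": 2}
COMMAND_TERMS = ("load character", "start", "quit")


def route_utterance(text: str):
    """Classify a recognized utterance as a general command or an emotion with a kind and degree."""
    lowered = text.lower()
    if any(cmd in lowered for cmd in COMMAND_TERMS):
        return ("command", text)
    praise = max((lvl for phrase, lvl in PRAISE_TERMS.items() if phrase in lowered), default=0)
    abuse = max((lvl for phrase, lvl in ABUSE_TERMS.items() if phrase in lowered), default=0)
    if praise == 0 and abuse == 0:
        return ("command", text)  # nothing emotional found: hand over to the command recognizer
    kind, level = ("praise", praise) if praise >= abuse else ("abuse", abuse)
    return ("emotion", kind, level)


print(route_utterance("great job, you are the best"))  # ('emotion', 'praise', 3)
```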

The system responds immediately to the user's natural expressions in cyberspace.

Communication by text such as comments leaves the content intact and therefore inflicts greater harm on the other party; here only the degree to which the character is damaged remains, so it does not inflict as much psychological harm on the other party.

Swearing allows the user a cathartic (purifying) release.

Using such abusive language simultaneously lowers the personality score that sustains the user's own character, showing that one's own character can be hurt at the same time.

Praising the other party raises one's own personality score, which in turn raises the level of slander or abuse from outside that one can withstand.
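This personality-score mechanic can be summarized as a small sketch; the class name, starting score, and resistance formula below are illustrative assumptions, since the application only states that praise raises the score, abuse lowers it, and the score also serves as a defense.

```python
# NOTE: the class, starting score, and resistance formula are illustrative
# assumptions; only the direction of the score changes comes from the text above.

class PersonalityScore:
    def __init__(self, score: int = 50):
        self.score = score

    def give_praise(self, level: int) -> None:
        self.score += level                      # praising the other party raises the speaker's own score

    def use_abuse(self, level: int) -> None:
        self.score -= level                      # abusing others costs the speaker as well

    def receive_abuse(self, level: int) -> int:
        resistance = self.score // 10            # a higher score withstands stronger abuse
        damage = max(0, level - resistance)
        self.score -= damage
        return damage


me = PersonalityScore()
me.give_praise(3)                                # score 53
print(me.receive_abuse(7))                       # resistance 5 -> 2 points of damage land
```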

In this way, the aim is an effect that goes beyond the mere transfer of information to genuine emotional interaction.

FIG. 1 is an overall configuration diagram of a method in which a user calls up a character by conversational voice on a terminal, speaks emotional expressions to that character, and the character responds to the user's emotions.
FIG. 2 is a configuration diagram of the voice emotion recognition engine unit, consisting of a speech recognition engine unit that determines whether the user's utterance is a general command or an emotional expression and recognizes general commands, and an emotion recognition engine unit that recognizes the kind and degree of the emotion.
FIG. 3 shows the character management engine unit, which retrieves a specific character from its stored location in response to a general command, and the character state control unit, which applies step-by-step changes to that character according to the emotion recognition result and generates the result together with a sound effect.

Continuous speech recognition engine: a continuous speech recognition engine built on HMM technology. An acoustic model is trained from a continuous-speech voice DB, and a language model is built from a conversational-language text DB to form the engine.
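As a rough illustration of the language-model half of such an engine, the sketch below trains a toy bigram model on a conversational corpus of the kind that would be paired with an HMM acoustic model; the corpus, add-alpha smoothing, and function names are assumptions for this sketch only.

```python
# NOTE: toy illustration of the language-model side only; a real engine would
# pair a much larger conversational DB with an HMM acoustic model.

from collections import Counter, defaultdict

corpus = [
    "you did really well today",
    "you are the best",
    "that was a terrible thing to say",
]

unigrams = Counter()
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = ["<s>"] + sentence.split() + ["</s>"]
    unigrams.update(words)
    for prev, cur in zip(words, words[1:]):
        bigrams[prev][cur] += 1


def bigram_prob(prev: str, cur: str, alpha: float = 0.1) -> float:
    """Add-alpha smoothed P(cur | prev) over the toy vocabulary."""
    vocab = len(unigrams)
    return (bigrams[prev][cur] + alpha) / (unigrams[prev] + alpha * vocab)


print(round(bigram_prob("you", "are"), 3))   # a seen continuation scores higher than an unseen one
```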

Emotion recognition engine: it has a structure similar to the continuous speech recognition engine, but because the acoustics of emotional expression differ from ordinary conversation, it is built from a separate emotion-expression voice DB. Abusive speech in particular is not ordinary vocalization, so voice data carrying real emotion must be collected. Likewise, the language model must be a separate model suited to emotional expressions rather than the grammatical language model generally applied.
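One possible acoustic cue behind the difference mentioned here is signal energy; the sketch below maps loudness relative to a neutral baseline onto a degree scale. The RMS heuristic, baseline value, and five-level scale are assumptions for illustration, not the method of this application.

```python
# NOTE: illustrative assumption only -- the application states that emotional
# speech differs acoustically from neutral conversation, not that RMS energy is
# the feature used; baseline and scale below are placeholders.

import numpy as np


def emotion_level_from_audio(samples: np.ndarray, neutral_rms: float = 0.05) -> int:
    """Map signal energy relative to a neutral baseline onto a 1-5 degree scale."""
    rms = float(np.sqrt(np.mean(samples ** 2)))
    return int(np.clip(round(rms / neutral_rms), 1, 5))


# a loud one-second tone standing in for shouted speech at 16 kHz
loud = 0.3 * np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16000))
print(emotion_level_from_audio(loud))   # -> 4
```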

Character management engine: a variety of standard character forms pre-produced by experts are created and stored in advance, and these characters can then be modified in detail to suit the user's needs. For example, characters of a man in his forties are stored as seven type groups, so the user can easily produce a specific character by partially modifying the most similar one.
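A minimal sketch of this template-plus-modification idea follows; the attributes, the seven stored templates, and the distance metric are placeholders chosen for illustration only.

```python
# NOTE: attributes, the seven templates, and the distance metric are placeholders;
# the application only describes expert-made standard characters adjusted by
# partial modification.

from dataclasses import dataclass, replace


@dataclass
class Character:
    age_group: int      # e.g. 40 for "a man in his forties"
    build: int          # 1 (slim) .. 7 (heavy)
    hair: int           # 1 .. 7, hairstyle index


TEMPLATES = {f"male40_type{i}": Character(40, i, i) for i in range(1, 8)}


def closest_template(desired: Character) -> str:
    """Pick the stored standard character nearest to the requested attributes."""
    return min(
        TEMPLATES,
        key=lambda name: abs(TEMPLATES[name].build - desired.build)
        + abs(TEMPLATES[name].hair - desired.hair),
    )


def customize(desired: Character) -> Character:
    base = TEMPLATES[closest_template(desired)]
    return replace(base, hair=desired.hair)   # partial modification of the closest template


print(customize(Character(40, 3, 6)))         # Character(age_group=40, build=3, hair=6)
```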

Character state control: information about the currently stored character is read from the accompanying state DB, and the character's initial state is displayed. When a state-change request arrives, that is, when an emotional expression directed at the character is delivered from the emotion recognition engine, the new state is stored accordingly and the changed state is displayed again. For example, as praise accumulates the character gradually moves toward a relaxed, smiling state, and as abuse accumulates the character gradually shifts into a state of shock, which is stored and output.
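This state update can be sketched as a small state machine; the five-step state scale, the step size, and the in-memory dictionary standing in for the state DB are illustrative assumptions.

```python
# NOTE: the state scale, step size, and in-memory stand-in for the state DB are
# illustrative assumptions.

STATES = ["shocked", "hurt", "neutral", "pleased", "smiling"]
state_db = {"char_001": "neutral"}            # stands in for the persistent state DB


def apply_emotion(char_id: str, kind: str, level: int) -> str:
    """Load the current state, move it by the received emotion, store it, and return it."""
    idx = STATES.index(state_db[char_id])
    step = level if kind == "praise" else -level
    idx = max(0, min(len(STATES) - 1, idx + step))
    state_db[char_id] = STATES[idx]           # persist the changed state
    return state_db[char_id]                  # re-output, e.g. together with a sound effect


print(apply_emotion("char_001", "praise", 1))   # -> 'pleased'
print(apply_emotion("char_001", "abuse", 3))    # -> 'shocked'
```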

Character: a symbol capable of expressing emotion, such as a caricature or avatar, used in cyberspace.
Terminal: a device that can connect to a network or an ordinary line and is equipped with a voice input device (microphone), an output device, and a display, such as a smartphone, PDA, tablet, mobile phone, desktop PC, or laptop.

Claims (4)

1. A method in which, when a user expresses emotion by voice at a terminal, the emotion is recognized and the reaction is shown through a character or the like in cyberspace.
2. The method of claim 1, wherein a praise DB and an abuse DB are used as the means of recognizing the emotional expression from the voice.
3. The method of claim 1, wherein the other party's character responds to the emotional expression.
4. The method wherein, among emotional expressions, praise raises the personality score and abuse lowers it, and the personality score is also used for the user's own defense.
KR1020110144086A 2011-12-28 2011-12-28 A method for communication of emotion by conversational speech in cyber world KR20130083035A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020110144086A KR20130083035A (en) 2011-12-28 2011-12-28 A method for communication of emotion by conversational speech in cyber world

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020110144086A KR20130083035A (en) 2011-12-28 2011-12-28 A method for communication of emotion by conversational speech in cyber world

Publications (1)

Publication Number Publication Date
KR20130083035A (en) 2013-07-22

Family

ID=48994199

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020110144086A KR20130083035A (en) 2011-12-28 2011-12-28 A method for communication of emotion by conversational speech in cyber world

Country Status (1)

Country Link
KR (1) KR20130083035A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101715291B1 (en) * 2015-09-16 2017-03-13 주식회사 라스퍼트 Server and User equipment for providing voice data
US9640056B2 (en) 2014-02-19 2017-05-02 Dongwon YI Method for sensing emergency in response to physical and mental shock and apparatus for sensing emergency using the method
KR101962421B1 (en) * 2018-09-05 2019-07-17 넷마블 주식회사 Apparatus and method of feedback on chatting and operation method of terminal


Similar Documents

Publication Publication Date Title
CN110998725B (en) Generating a response in a dialog
Lamarre Bilingual winks and bilingual wordplay in Montreal's linguistic landscape
US11123873B2 (en) Method and server for controlling interaction robot
JP2001188784A (en) Device and method for processing conversation and recording medium
CN100339885C (en) Intelligent personal assistants
CN107003823B (en) Head-mounted display device and operation method thereof
JP2001188787A (en) Device and method for processing conversation and recording medium
CN109461435B (en) Intelligent robot-oriented voice synthesis method and device
JP2018008316A (en) Learning type robot, learning type robot system, and program for learning type robot
US20160071302A1 (en) Systems and methods for cinematic direction and dynamic character control via natural language output
CN114821744A (en) Expression recognition-based virtual character driving method, device and equipment
KR20130083035A (en) A method for communication of emotion by conversational speech in cyber world
CN116188642A (en) Interaction method, device, robot and storage medium
JP2023156489A (en) Natural language processing system, natural language processing method, and natural language processing program
JP2006068489A (en) Interactive pet robot
JP2008125815A (en) Conversation robot system
Rennick et al. Improving video game conversations with trope-informed design
US10835822B2 (en) Application control method and terminal device
CN111557001A (en) Method, computer device and computer readable storage medium for providing natural language dialog by providing instant responsive language response
KR102063389B1 (en) Character display device based the artificial intelligent and the display method thereof
CN117033587A (en) Man-machine interaction method and device, electronic equipment and medium
JP2018181018A (en) Conversation providing device, conversation providing method, and program
JP6562711B2 (en) Robot motion generation apparatus and program
CN1783736A (en) Radio short distance friend making tool
KR102012664B1 (en) Server and system for providing game using conversation with artificial intelligence

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E601 Decision to refuse application