WO2019156536A1 - Method and computing device for building or updating a knowledge base model for an interactive AI agent system by labeling identifiable but non-learnable data among learning data, and computer-readable recording medium - Google Patents
- Publication number
- WO2019156536A1 (PCT/KR2019/001693)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- conversation
- present disclosure
- entity information
- user
- interactive
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6245—Protecting personal data, e.g. for financial or medical purposes
- G06F21/6254—Protecting personal data, e.g. for financial or medical purposes by anonymising data, e.g. decorrelating personal data from the owner's identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/02—Knowledge representation; Symbolic representation
- G06N5/022—Knowledge engineering; Knowledge acquisition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
Definitions
- The present disclosure relates to learning of an interactive AI agent system and, more particularly, to a method of processing learning data for an interactive AI agent system and training the system using the processed learning data.
- The interactive AI agent system builds and utilizes various knowledge base models to understand the user's intent from the user's input and to conduct the appropriate dialogue accordingly.
- Demand is increasing for the interactive AI agent system to provide more complex domain services based on free-speech voice input. The system therefore needs to learn continuously in order to construct and update various knowledge base models, and the learning data are recorded as conversation logs of conversations between real people.
- Conversation logs may come from recordings, for example a recording of a conversation between a customer and a counselor at a counseling center.
- A recording of a conversation between real people may include a variety of entity information, in particular various personal information such as names, social security numbers, user IDs, telephone numbers, email addresses, and home addresses.
- Such information may need to be de-identified under the requirements of the Personal Information Protection Act and various other laws, or for various other reasons.
- Moreover, this information usually does not by itself aid learning for the interactive AI agent system.
- a method performed by a computer device is provided.
- The method of the present disclosure is for automatically building or updating a knowledge base model for an interactive AI agent system and comprises: receiving a series of conversation logs associated with each other; identifying, from each of the received conversation logs, entity information determined to need de-identification according to predetermined criteria, the entity information including an entity type and a value; replacing the value of each piece of identified entity information with a corresponding label, the label being an identifier that identifies the entity type of the corresponding entity information, such that entity information of the same type with the same value is replaced with the same label throughout the conversation logs; and building or updating a knowledge base model for the interactive AI agent system by learning from the conversation logs in which the values of the identified entity information have been replaced with the corresponding labels.
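A minimal sketch of the claimed labeling step, under assumptions the disclosure leaves open: the `phone` and `email` regular-expression rules and type names below are illustrative placeholders, not part of the claims. The key property is that the same (type, value) pair always maps to the same label across the whole series of logs, while distinct values of the same type receive distinct labels.

```python
import re
from collections import defaultdict

# Hypothetical detection rules; the actual predetermined criteria
# are left open by the disclosure.
ENTITY_PATTERNS = {
    "phone": re.compile(r"\b\d{2,3}-\d{3,4}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def label_conversation_logs(logs):
    """Replace each detected entity value with a type-indexed label.

    The same (type, value) pair is always replaced by the same label
    throughout the logs; distinct values of the same type get distinct
    labels (phone 1, phone 2, ...).
    """
    label_for = {}               # (entity_type, value) -> label
    counters = defaultdict(int)  # entity_type -> next label index

    def replace(entity_type):
        def _sub(match):
            key = (entity_type, match.group(0))
            if key not in label_for:
                counters[entity_type] += 1
                label_for[key] = f"{entity_type} {counters[entity_type]}"
            return label_for[key]
        return _sub

    labeled = []
    for utterance in logs:
        for entity_type, pattern in ENTITY_PATTERNS.items():
            utterance = pattern.sub(replace(entity_type), utterance)
        labeled.append(utterance)
    return labeled
```

Because the mapping is kept for the whole series of logs, a value that recurs in later utterances is replaced consistently, which preserves the conversational structure the model needs to learn.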
- The entity information determined to need de-identification includes personal information, which may include at least one of a name, social security number, date of birth, address, age, telephone number, ID, and email address.
- Identifying the entity information may include identifying entity information determined to need de-identification based on the description format of each piece of personal information.
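Identification based on description format can be expressed as a set of format rules. The patterns below (a Korean resident-registration-number style, a phone number, an email address) are assumptions for illustration only; real criteria would follow the applicable regulations.

```python
import re

# Illustrative description-format rules, not the disclosure's actual criteria.
FORMAT_RULES = [
    ("social_security_number", re.compile(r"\b\d{6}-\d{7}\b")),
    ("phone",                  re.compile(r"\b0\d{1,2}-\d{3,4}-\d{4}\b")),
    ("email",                  re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b")),
]

def find_entities(utterance):
    """Return (entity_type, value) pairs whose values match a known format."""
    found = []
    for entity_type, pattern in FORMAT_RULES:
        for match in pattern.finditer(utterance):
            found.append((entity_type, match.group(0)))
    return found
```

Format-based rules work for entity types with a fixed written form; free-form entities such as names would need a different detector.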
- Replacing the value of each piece of identified entity information with a corresponding label may include replacing entity information of the same type but with different values, included in the conversation logs, with different labels.
- Receiving the conversation logs may include receiving them from a recording of a conversation between a customer and an agent of a customer center.
- the knowledgebase model may include a knowledgebase model for conversation understanding and conversation management.
- A computer-readable recording medium is provided, including one or more instructions that, when executed by a computer, cause the computer to perform any one of the methods described above.
- A computing device for automatically building or updating a knowledge base model for an interactive AI agent system comprises: a conversation log collecting unit configured to collect and store a series of conversation logs associated with each other; and a knowledge base model building/updating unit.
- The knowledge base model building/updating unit of the present disclosure receives the series of conversation logs from the conversation log collecting unit and identifies, from each received log, entity information determined to need de-identification according to a predetermined criterion.
- It replaces entity information of the same type with the same value with the same label and is configured to build or update a knowledge base model for the interactive AI agent system by learning from the conversation logs in which the values of the identified entity information have been replaced with the corresponding labels.
- In this way, information on a recording that must be de-identified for legal or other reasons can be de-identified to meet those needs without compromising the purpose of learning for the interactive AI agent system.
- FIG. 1 is a diagram schematically illustrating a system environment in which an interactive AI agent system may be implemented, according to an embodiment of the present disclosure.
- FIG. 2 is a functional block diagram schematically illustrating the functional configuration of the user terminal 102 of FIG. 1, according to an embodiment of the disclosure.
- FIG. 3 is a functional block diagram schematically illustrating the functional configuration of the interactive AI agent server 106 of FIG. 1, in accordance with an embodiment of the present disclosure.
- FIG. 4 is a functional block diagram schematically illustrating a functional configuration of the conversation / task processing unit 304 of FIG. 3, according to an embodiment of the present disclosure.
- FIG. 5 is an exemplary operational flow diagram performed by the knowledgebase model build / update unit 308 of FIG. 3, in accordance with an embodiment of the present disclosure.
- A module or unit is a functional part that performs at least one function or operation and may be implemented in hardware, in software, or in a combination of the two. A plurality of 'modules' or 'units' may be integrated into at least one software module and implemented by at least one processor, except for 'modules' or 'units' that need to be implemented by specific hardware.
- The interactive AI agent system may receive natural language input in the form of voice and/or text from a user through conversational interaction, understand the user's intent based on that input, and provide a corresponding operation result.
- The conversation response provided by the interactive AI agent system may be in visual, auditory, and/or tactile form and may include, for example, voice, sound, text, video, images, symbols, emoticons, hyperlinks, animations, various notifications, motion, haptic feedback, and the like, but is not limited thereto.
- Tasks performed by the interactive AI agent system in embodiments of the present disclosure may include various types of tasks such as, for example, retrieving information, proceeding with a payment, creating a message, creating an email, making a phone call, playing music, taking a picture, searching for the user's location, and providing map/navigation services.
- An interactive AI agent system may be a chatbot system based on a messenger platform, for example a chatbot system that exchanges messages with a user on a messenger and provides various information the user wants or performs tasks, but it is to be understood that the present disclosure is not limited thereto.
- FIG. 1 is a diagram schematically illustrating a system environment 100 in which an interactive AI agent system may be implemented, in accordance with an embodiment of the present disclosure.
- the system environment 100 includes a plurality of user terminals 102a-102n, a communication network 104, an interactive AI agent server 106, and an external service server 108.
- each of the plurality of user terminals 102a-102n may be any user electronic device having a wired or wireless communication function.
- Each of the user terminals 102a-102n may be any of various wired or wireless communication terminals, including, for example, a smartphone, tablet PC, music player, smart speaker, desktop, laptop, PDA, game console, digital TV, set-top box, and the like, and it should be understood that they are not limited to any particular form.
- each of the user terminals 102a-102n may communicate with the interactive AI agent server 106, that is, send and receive necessary information, through the communication network 104.
- each of the user terminals 102a-102n may communicate with the external service server 108, that is, transmit and receive necessary information through the communication network 104.
- Each of the user terminals 102a-102n may receive user input in the form of voice and/or text from the outside and may provide the user with an operation result corresponding to that input (e.g., provision of a specific conversation response and/or performance of a specific task) obtained through communication with the interactive AI agent server 106 and/or the external service server 108 via the communication network 104 (and/or through processing within the user terminals 102a-102n).
- Each of the user terminals 102a-102n may provide the conversation response resulting from the operation corresponding to a user input in visual, auditory, and/or tactile form, for example voice, sound, text, video, images, symbols, emoticons, hyperlinks, animations, various notifications, motion, haptic feedback, and the like, but is not limited thereto.
- Performing a task as an operation corresponding to a user input may include various types of tasks, including, but not limited to, searching for information, proceeding with a payment, writing a message, writing an email, making a phone call, playing music, taking a picture, searching for the user's location, and performing map/navigation services.
- communication network 104 may include any wired or wireless communication network, such as a TCP / IP communication network.
- the communication network 104 may include, for example, a Wi-Fi network, a LAN network, a WAN network, an Internet network, and the like, but the present disclosure is not limited thereto.
- The communication network 104 may be implemented using, for example, Ethernet, GSM, Enhanced Data GSM Environment (EDGE), CDMA, TDMA, OFDM, Bluetooth, VoIP, Wi-MAX, Wibro, or any of various other wired or wireless communication protocols.
- The interactive AI agent server 106 may communicate with the user terminals 102a-102n via the communication network 104. According to an embodiment of the present disclosure, the interactive AI agent server 106 transmits and receives the necessary information with the user terminals 102a-102n through the communication network 104 and thereby provides each user with an operation result that corresponds to the user input received on the terminal, that is, to the user's intent. According to an embodiment of the present disclosure, the interactive AI agent server 106 may receive a user's natural language input in voice and/or text form from the user terminals 102a-102n via the communication network 104 and process the received input based on a knowledge base model prepared in advance to determine the user's intent.
- the interactive AI agent server 106 may perform an operation corresponding to the user intent determined above based on a knowledge base model for conversation flow management prepared in advance.
- The interactive AI agent server 106 may collect conversation logs obtained through various paths from conversations between real people (e.g., transcripts of user and/or system utterance recordings) and may learn from the collected conversation logs to build and update various knowledge base models for determining the intent of user input and/or managing the conversation flow.
- the interactive AI agent server 106 may generate a specific conversation response that corresponds to, for example, a user intent, and send it to the user terminals 102a-102n.
- Based on the user intent determined above, the interactive AI agent server 106 may generate a corresponding conversation response in voice and/or text form and deliver the generated response to the user terminals 102a-102n via the communication network 104.
- The conversation response generated by the interactive AI agent server 106 may include, along with the natural language response in voice and/or text form described above, other visual elements such as images, videos, symbols, and emoticons, other acoustic elements such as sound, or other tactile elements.
- Depending on the type of user input received on the user terminals 102a-102n (e.g., voice or text input), a response of the same type may be generated on the interactive AI agent server 106 (e.g., a voice response for voice input and a text response for text input), but the present disclosure is not so limited; responses in voice and/or text form may be generated and provided regardless of the form of the user input.
- the interactive AI agent server 106 may communicate with the external service server 108 via the communication network 104, as mentioned above.
- the external service server 108 may be, for example, a messaging service server, an online consultation center server, an online shopping mall server, an information retrieval server, a map service server, a navigation service server, and the like, but the present disclosure is not limited thereto.
- It should be understood that a conversation response based on user intent delivered from the interactive AI agent server 106 to the user terminals 102a-102n may include data content retrieved and obtained from, for example, the external service server 108.
- the interactive AI agent server 106 is shown as a separate physical server configured to communicate with the external service server 108 via the communication network 104, but the present disclosure is not limited thereto. According to another embodiment of the present disclosure, it should be understood that the interactive AI agent server 106 may be included as part of various service servers such as an online consultation center server or an online shopping mall server.
- The user terminal 102 includes a user input receiving module 202, a sensor module 204, a program memory module 206, a processing module 208, a communication module 210, and a response output module 212.
- The user input receiving module 202 may receive various types of input from a user, for example natural language input such as voice input and/or text input (and, additionally, other types of input such as touch input).
- the user input receiving module 202 may include, for example, a microphone and an audio circuit, and may acquire a user voice input signal through the microphone and convert the obtained signal into audio data.
- The user input receiving module 202 may also include various input devices such as a mouse, joystick, trackball, keyboard, touch panel, touch screen, and stylus, and may acquire text input and/or touch input signals entered by the user through these devices.
- The user input received by the user input receiving module 202 may be associated with performing a predetermined task, for example executing a predetermined application or retrieving predetermined information, but the present disclosure is not limited thereto.
- the user input received by the user input receiving module 202 may require only a simple conversation response regardless of execution of a predetermined application or retrieval of information.
- The sensor module 204 includes one or more sensors of different types, through which status information of the user terminal 102 may be obtained, such as the physical state of the terminal, its software and/or hardware state, or information about the state of the environment around it.
- the sensor module 204 may include, for example, an optical sensor, and detect an ambient light state of the corresponding user terminal 102 through the optical sensor.
- the sensor module 204 may include, for example, a movement sensor, and detect whether the corresponding user terminal 102 is moved through the movement sensor.
- the sensor module 204 may include, for example, a speed sensor and a GPS sensor, and may detect a position and / or orientation state of the corresponding user terminal 102 through these sensors. According to another embodiment of the present disclosure, it should be appreciated that the sensor module 204 may include other various types of sensors, including temperature sensors, image sensors, pressure sensors, contact sensors, and the like.
- the program memory module 206 may be any storage medium in which various programs, for example, various application programs and related data, which may be executed on the user terminal 102, are stored.
- The program memory module 206 may store various application programs, including, for example, a dialing application, an email application, an instant messaging application, a camera application, a music playback application, a video playback application, an image management application, a map application, and a browser application, together with data associated with the execution of these programs.
- The program memory module 206 may be configured to include various types of volatile or nonvolatile memory such as DRAM, SRAM, DDR RAM, ROM, magnetic disks, optical disks, and flash memory.
- The processing module 208 may communicate with each component module of the user terminal 102 and perform various operations on the user terminal 102. According to one embodiment of the present disclosure, the processing module 208 may drive and execute various application programs in the program memory module 206. According to one embodiment of the present disclosure, the processing module 208 may receive the signals acquired by the user input receiving module 202 and the sensor module 204 and perform appropriate processing on these signals as needed. According to one embodiment of the present disclosure, the processing module 208 may likewise perform appropriate processing on signals received from the outside through the communication module 210.
- the communication module 210 is configured such that the user terminal 102 communicates with the interactive AI agent server 106 and / or the external service server 108 via the communication network 104 of FIG. 1.
- The communication module 210 is configured so that, for example, signals acquired by the user input receiving module 202 and the sensor module 204 may be sent to the interactive AI agent server 106 and/or the external service server 108 through the communication network 104 according to a predetermined protocol.
- The communication module 210 may also receive, from the interactive AI agent server 106 and/or the external service server 108 via the communication network 104, various signals, such as a response signal including a natural language response in voice and/or text form or various control signals, and may perform appropriate processing according to a predetermined protocol.
- the response output module 212 may output a response corresponding to a user input in various forms such as visual, auditory, and / or tactile.
- The response output module 212 may include various display devices, such as touch screens based on technologies such as LCD, LED, OLED, and QLED, and may present visual responses corresponding to user input, such as text, symbols, videos, images, hyperlinks, animations, and various notifications, to the user through these display devices.
- The response output module 212 may include, for example, a speaker or a headset and may provide an audible response corresponding to a user input, such as a voice and/or acoustic response, to the user through the speaker or headset.
- the response output module 212 may include a motion / haptic feedback generator, thereby providing a tactile response, eg, motion / haptic feedback, to a user.
- the response output module 212 can simultaneously provide any two or more combinations of text response, voice response, and motion / haptic feedback corresponding to user input.
- The interactive AI agent server 106 includes a communication module 302, a conversation/task processing unit 304, a conversation log collecting unit 306, and a knowledge base model building/updating unit 308.
- The communication module 302 is configured so that the interactive AI agent server 106 can communicate with the user terminal 102 and/or the external service server 108 via the communication network 104 according to a predetermined wired or wireless communication protocol. According to one embodiment of the present disclosure, the communication module 302 may receive, through the communication network 104, the voice input and/or text input from the user that was transmitted from the user terminal 102. According to one embodiment of the present disclosure, the communication module 302 may also receive, through the communication network 104, together with or separately from that input, status information of the user terminal 102 transmitted from the terminal.
- The status information may include, for example, various status information related to the user terminal 102 at the time of the voice and/or text input, such as the physical state of the user terminal 102, its software and/or hardware status, and information about the environment around the user terminal 102.
- The communication module 302 may also transmit, as necessary, the conversation response generated by the interactive AI agent server 106 in response to the received user input (e.g., a natural language response in voice and/or text form) and/or control signals to the user terminal 102 via the communication network 104.
- The conversation/task processing unit 304 receives the user's natural language input from the user terminal 102 through the communication module 302 and processes it based on a predetermined knowledge base model prepared in advance to determine the user intent corresponding to the input. According to one embodiment of the present disclosure, the conversation/task processing unit 304 may also provide an operation corresponding to the determined user intent, such as an appropriate conversation response and/or task performance.
- The conversation log collecting unit 306 may receive and store conversation logs collected by any of various methods.
- The conversation logs received and stored in the conversation log collecting unit 306 may be records of recorded conversations between real people, such as recordings of actual conversations between a customer and an agent of a counseling center.
- Each of the conversation logs collected and stored in the conversation log collecting unit 306 may contain various entity information, in particular various personal information such as names, social security numbers, user IDs, telephone numbers, email addresses, and home addresses, that requires de-identification for legal or various other reasons.
- The knowledge base model building/updating unit 308 may build various knowledge base models for the interactive AI agent server 106 (e.g., various knowledge base models for intent determination or conversation flow management) through learning using the respective conversation logs held by the conversation log collecting unit 306.
- each conversation log on the conversation log collecting unit 306 may include various entity information that needs to be de-identified.
- The content of such entity information itself usually does not help the learning.
- If such entity information were simply deleted at random from the conversation logs used for this learning, however, it might become impossible to derive the intent from the conversation log itself.
- The knowledge base model building/updating unit 308 may therefore perform preprocessing on the entity information included in the conversation logs to be learned, prior to learning using those logs.
- In this preprocessing, instead of randomly deleting each piece of entity information on the conversation log that has low learning value and needs de-identification, the knowledge base model building/updating unit 308 replaces its value with another identifier. For example, when the names of different people appear in a series of conversation logs obtained from conversations between people, the unit may perform the preprocessing by substituting respective identifiers (e.g., labels such as name 1, name 2, name 3) in place of each name value itself (e.g., Kim Chul-soo, Lee Young-hee, Hong Gil-dong).
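The name-substitution example above can be sketched as follows. Since names have no fixed description format, the sketch assumes the name values are supplied by a hypothetical upstream detector and shows only the consistent label substitution.

```python
def pseudonymize_names(logs, detected_names):
    """Replace each distinct detected name with a stable 'name N' label.

    `detected_names` is assumed to come from an upstream named-entity
    detector (hypothetical here); this sketch only performs the
    consistent label substitution across the whole series of logs.
    """
    labels = {}
    for name in detected_names:
        if name not in labels:
            labels[name] = f"name {len(labels) + 1}"
    out = []
    for utterance in logs:
        for name, label in labels.items():
            utterance = utterance.replace(name, label)
        out.append(utterance)
    return out
```

Each person keeps the same label wherever they reappear, so "who said what to whom" survives even though the names themselves are gone.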
- In this way, the knowledge base model building/updating unit 308 can de-identify the entity information in the conversation logs to an extent that does not harm the purpose of learning.
- The knowledge base model building/updating unit 308 may then learn using the preprocessed conversation logs and build and/or update knowledge base models based on the result.
- The conversation/task processing unit 304 may include a speech-to-text (STT) module 402, a natural language understanding (NLU) module 404, a user database 406, a conversation understanding knowledge base model 408, a conversation management module 410, a conversation flow management knowledge base model 412, a conversation generation module 414, and a text-to-speech (TTS) module 416.
- The STT module 402 may receive the voice input among the user inputs received through the communication module 302 and convert the received voice input into text data based on pattern matching or the like. According to one embodiment of the present disclosure, the STT module 402 may generate a feature vector sequence by extracting features from the user's voice input. According to an embodiment of the present disclosure, the STT module 402 may generate a text recognition result, such as a sequence of words, based on any of various statistical models, such as dynamic time warping (DTW), hidden Markov models (HMM), Gaussian mixture models (GMM), deep neural network models, and n-gram models. According to an embodiment of the present disclosure, when converting the received voice input into text data based on pattern matching, the STT module 402 may refer to the per-user characteristic data in the user database 406 described below.
- the NLU module 404 may receive a text input from the communication module 302 or the STT module 402.
- the text input received at the NLU module 404 may be, for example, a user text input that was received at the communication module 302 from the user terminal 102 via the communication network 104.
- the NLU module 404 may receive, along with or after the text input, state information associated with the user input, such as state information of the user terminal 102 at the time of the user input.
- the status information may include, for example, various status information related to the corresponding user terminal 102 at the time of the user's voice input and / or text input (e.g., the physical state and software and / or hardware status of the user terminal 102, environmental status information around the user terminal 102, and the like).
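As a purely hypothetical sketch, such state information might be carried alongside a user input as a structured payload; every field name below is an illustrative assumption, not something specified in the disclosure:

```python
# Hypothetical state-information payload accompanying a user input.
# The keys and values are illustrative assumptions only.
state_info = {
    "device": {
        "battery_level": 0.82,        # physical state of the terminal
        "os_version": "Android 8.0",  # software status
        "network": "wifi",            # hardware / connectivity status
    },
    "environment": {
        "location": "37.5665,126.9780",            # surroundings of the terminal
        "local_time": "2019-02-12T09:30:00+09:00", # time of the user input
    },
}
```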
- the NLU module 404 may map the received text input to one or more user intents based on the conversation understanding knowledge base model 408 described below.
- the user intent here may be associated with a series of action(s) that can be understood and performed by the interactive AI agent server 106 in accordance with that user intent.
- the NLU module 404 may refer to the aforementioned state information in mapping the received text input to one or more user intents.
- the NLU module 404 may refer to each user characteristic data of the user database 406 described below in mapping the received text input to one or more user intents.
- the user database 406 may be a database that stores and manages characteristic data for each user.
- the user database 406 may include various user characteristic information, for example, the user's previous conversation records, pronunciation characteristic information, vocabulary preferences, location, language setting, contact / friend list, and the like.
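A minimal sketch of such a per-user characteristic store follows; the class name, method names, and schema are assumptions made for illustration, not the disclosure's implementation:

```python
class UserDatabase:
    """Minimal sketch of a store for per-user characteristic data."""

    def __init__(self):
        self._profiles = {}  # user_id -> dict of characteristic data

    def update(self, user_id, **traits):
        # Merge new characteristic data (e.g. language, location, contacts)
        # into the user's existing profile.
        self._profiles.setdefault(user_id, {}).update(traits)

    def get(self, user_id):
        # Return the user's characteristic data, or an empty profile
        # for an unknown user.
        return self._profiles.get(user_id, {})
```

Modules such as the STT or NLU module could consult `get()` for, e.g., pronunciation characteristics or vocabulary preferences when processing that user's input.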
- by referring to each user's characteristic data in the user database 406, for example the pronunciation characteristics of each user, when converting a voice input into text data, the STT module 402 can obtain more accurate text data.
- the NLU module 404 can determine the user intent more accurately by referring to each user's characteristic data in the user database 406, such as each user's specific features or context, when determining the user intent.
- a user database 406 for storing and managing characteristic data for each user is illustrated as being disposed in the interactive AI agent server 106, but the present disclosure is not so limited. According to another embodiment of the present disclosure, a user database for storing and managing characteristic data for each user may exist, for example, in the user terminal 102, or may be distributed between the user terminal 102 and the interactive AI agent server 106.
- the conversation management module 410 may generate a series of operation flows corresponding thereto according to the user intent determined by the NLU module 404.
- the conversation management module 410 may determine, based on the conversation flow management knowledge base model 412, which action to take in response to a user intent received from the NLU module 404, e.g., whether to perform a conversation response and / or perform a task, and generate a detailed operation flow accordingly.
- the conversation understanding knowledge base model 408 may include, for example, a predefined ontology model.
- the ontology model may be represented, for example, as a hierarchical structure between nodes, where each node is either an "intent" node corresponding to a user intent or an "attribute" node linked to an "intent" node (an "attribute" node linked directly to an "intent" node, or linked in turn to another "attribute" node of an "intent" node).
- an "intent" node and “attribute” nodes that are directly or indirectly linked to the "intent” node may constitute one domain, and the ontology may consist of a collection of such domains.
- the conversation understanding knowledge base model 408 may be configured to include, for example, domains corresponding to all intents that the interactive AI agent system can understand and act upon.
- the ontology model may be dynamically changed by adding or deleting nodes, or modifying relationships between nodes.
- intent nodes and attribute nodes of each domain in the ontology model may be associated with words and / or phrases related to corresponding user intents or attributes, respectively.
- the dialogue understanding knowledge base model 408 may include an ontology model consisting of a hierarchy of nodes and a set of words and / or phrases associated with each node, implemented, for example, in the form of a lexical dictionary, and the NLU module 404 may determine the user intent based on the ontology model implemented in the form of a lexical dictionary.
- upon receiving a text input or a word sequence, the NLU module 404 may determine which nodes in which domain of the ontology model each word in the sequence is associated with, and based on such a determination, determine the corresponding domain, i.e., the user intent.
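A toy sketch of this lexical-dictionary matching follows; the two domains, their vocabularies, and the scoring rule are assumptions for illustration, far simpler than a real NLU module:

```python
# Hypothetical lexical dictionary: each domain (user intent) maps to the
# words and phrases associated with its "intent" and "attribute" nodes.
LEXICON = {
    "weather_query": {"weather", "forecast", "rain", "temperature"},
    "restaurant_booking": {"restaurant", "reserve", "table", "dinner"},
}

def determine_intent(words):
    """Score each domain by how many input words appear in its lexicon,
    and return the best-matching domain, i.e. the user intent."""
    scores = {intent: sum(w in vocab for w in words)
              for intent, vocab in LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```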
- the conversation flow management knowledge base model 412 may include a model of the sequential flow of conversations or actions required to provide a service corresponding to the intent of a user input.
- the conversation flow management knowledge base model 412 may include a library of conversation patterns corresponding to each intent.
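Such a library of conversation patterns might be sketched as follows; the pattern names and the step-selection helper are hypothetical, intended only to show how a stored pattern could drive a conversation flow:

```python
# Hypothetical library of conversation patterns keyed by intent; the
# conversation management module could select the pattern matching the
# determined user intent and walk through its steps in order.
CONVERSATION_PATTERNS = {
    "restaurant_booking": ["ask_date", "ask_party_size", "confirm", "book"],
    "weather_query": ["ask_location_if_missing", "answer_forecast"],
}

def next_step(intent, completed):
    """Return the next uncompleted step in the intent's conversation flow,
    or None when the flow is finished (or the intent is unknown)."""
    for step in CONVERSATION_PATTERNS.get(intent, []):
        if step not in completed:
            return step
    return None
```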
- the conversation generation module 414 may generate a necessary conversation response based on the conversation / action flow generated by the conversation management module 410.
- when generating a conversation response, the conversation generating module 414 may refer to the user characteristic data of the user database 406 described above (e.g., the user's previous conversation records, pronunciation characteristic information, vocabulary preferences, location, language setting, contact / friend list, and the like).
- the TTS module 416 may receive the conversation response generated by the conversation generation module 414 to be sent to the user terminal 102.
- the conversation response received at the TTS module 416 may be a natural language response or a sequence of words in text form.
- the TTS module 416 may convert the received text-form input into speech form according to any of various types of algorithms.
- the interactive AI agent system has been described above based on a client-server model between the user terminal 102 and the interactive AI agent server 106, in which, in particular, the client provides only user input / output functions.
- it should be appreciated that the functions of the interactive AI agent system may instead be implemented by being distributed between the user terminal and the server, or alternatively may be implemented as a standalone application installed on the user terminal.
- in addition, when the functions of the interactive AI agent system are distributed between the user terminal and the server according to an embodiment of the present disclosure, it should be appreciated that the distribution of each function between the client and the server may be implemented differently from embodiment to embodiment.
- in the above description, a specific module has been described as performing certain operations, but the present disclosure is not limited thereto. According to another embodiment of the present disclosure, the operations described as being performed by any particular module may instead be performed by separate, different modules.
- FIG. 5 illustrates an exemplary operational flow performed by the knowledge base model build / update unit 308 of FIG. 3, in accordance with an embodiment of the present disclosure.
- the knowledge base model build / update unit 308 may receive a group of conversation logs collected in any of a variety of ways.
- entity information that requires preprocessing may be identified.
- entity information requiring preprocessing may be any type of information that needs to be de-identified, such as personal information.
- the personal information that needs to be de-identified may be, for example, information such as name, social security number, date of birth, address, age, telephone number, ID, email address, and the like, but is not limited thereto.
- such information may be identified based on, for example, a notation pattern characteristic of each of these pieces of information.
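For illustration, pattern-based identification of a few such types might be sketched with regular expressions; the patterns below are deliberately simple assumptions, and a real system would need locale-specific, far more robust rules:

```python
import re

# Illustrative notation patterns for a few de-identification targets.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{2,3}-\d{3,4}-\d{4}\b"),
    "date_of_birth": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
}

def find_pii(text):
    """Return (type, value) pairs for entity information whose value
    matches one of the notation patterns."""
    found = []
    for pii_type, pattern in PII_PATTERNS.items():
        for match in pattern.findall(text):
            found.append((pii_type, match))
    return found
```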
- the type of each piece of entity information identified above, as indicated in the conversation logs, can be determined, and the respective entity information can be replaced using a different label for each type.
- the same name value should be given the same label (i.e., if "Name 1" is assigned to any one of the name values appearing in a given conversation log, e.g., "Hong Gil-dong", then "Name 1" should also be assigned to the same name value "Hong Gil-dong" wherever it appears in that conversation log and in the other conversation logs of the group).
- in step 508, the knowledge base model building / updater 308 uses the conversation logs processed in step 506 (i.e., with the identified entity information replaced by labels) for learning, and accordingly builds or updates the various knowledge base models for the interactive AI agent system.
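The per-type, group-wide consistent labeling described above can be sketched as follows; the function and the shape of the `entities` input are illustrative assumptions:

```python
from collections import defaultdict

def label_entities(conversation_logs, entities):
    """Replace entity values with per-type labels, consistently across all
    logs in the group: the same (type, value) pair always maps to the
    same label, e.g. every occurrence of one name becomes 'name 1'.

    conversation_logs: list of log strings belonging to one group.
    entities: list of (entity_type, value) pairs identified in the logs.
    """
    counters = defaultdict(int)  # per-type label counter
    label_for = {}               # (type, value) -> label
    for entity_type, value in entities:
        if (entity_type, value) not in label_for:
            counters[entity_type] += 1
            label_for[(entity_type, value)] = f"{entity_type} {counters[entity_type]}"
    labeled = []
    for log in conversation_logs:
        for (entity_type, value), label in label_for.items():
            log = log.replace(value, label)
        labeled.append(log)
    return labeled
```

Because the mapping is built once for the whole group, "Hong Gil-dong" receives the same label in every log in which it appears, as step 506 requires.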
- a computer program may be implemented in a form stored in various types of storage media readable by a computer processor or the like, including nonvolatile memory such as EPROM, EEPROM, and flash memory devices, magnetic disks such as internal hard disks and removable disks, magneto-optical disks, and CD-ROM disks.
- the program code(s) may be implemented in assembly or machine language.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Bioethics (AREA)
- Medical Informatics (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Evolutionary Computation (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computational Linguistics (AREA)
- Databases & Information Systems (AREA)
- Computer Hardware Design (AREA)
- Computer Security & Cryptography (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present disclosure relates to a method implemented by a computing device. The method is for automatically building or updating a knowledge base model for an interactive AI agent system and comprises the steps of: receiving a series of conversation logs related to one another; identifying, from each of the received conversation logs, entity information determined as requiring de-identification according to a predetermined criterion, the entity information comprising an entity type and a value; replacing, with a corresponding label, the value of each piece of identified entity information, the label being an identifier for identifying the entity type of the corresponding entity information, wherein pieces of entity information having the same type and the same value, included across the conversation logs, are replaced with the same label; and building or updating the knowledge base model for the interactive AI agent system by learning using the conversation logs in which the values of the identified entity information have been replaced with each corresponding label.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2018-0016712 | 2018-02-12 | ||
KR1020180016712A KR101950387B1 (ko) | 2018-02-12 | 2018-02-12 | 학습 데이터 중 식별 가능하지만 학습 가능성이 없는 데이터의 레이블화를 통한, 대화형 ai 에이전트 시스템을 위한 지식베이스 모델의 구축 또는 갱신 방법, 컴퓨터 장치, 및 컴퓨터 판독 가능 기록 매체 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019156536A1 true WO2019156536A1 (fr) | 2019-08-15 |
Family
ID=65562265
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2019/001693 WO2019156536A1 (fr) | 2018-02-12 | 2019-02-12 | Procédé et dispositif informatique pour construire ou mettre à jour un modèle de base de connaissances pour un système d'agent ia interactif en marquant des données identifiables mais non apprenables, parmi des données d'apprentissage, et support d'enregistrement lisible par ordinateur |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR101950387B1 (fr) |
WO (1) | WO2019156536A1 (fr) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110990637A (zh) * | 2019-10-14 | 2020-04-10 | 平安银行股份有限公司 | 网络图谱的构建方法及其装置 |
CN114297207A (zh) * | 2021-12-07 | 2022-04-08 | 腾讯数码(天津)有限公司 | 实体库更新方法、装置、计算机设备和存储介质 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102388974B1 (ko) | 2019-11-22 | 2022-04-21 | 아시아나아이디티 주식회사 | 객실 승무원의 업무정보 관리를 위한 방법 및 컴퓨터 판독가능 기록매체 |
CN113535980A (zh) * | 2021-07-20 | 2021-10-22 | 南京市栖霞区民政事务服务中心 | 一种基于人工智能的智慧社区知识库体系的快速建立方法 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016040730A (ja) * | 2015-10-13 | 2016-03-24 | 洋彰 宮崎 | 言語入力により自律的に知識体系を構築する人工知能装置 |
KR20160062668A (ko) * | 2014-11-25 | 2016-06-02 | 한국전자통신연구원 | 개방형 건강 관리 장치 및 방법 |
KR101730600B1 (ko) * | 2015-12-22 | 2017-04-26 | 한양대학교 산학협력단 | 거짓 개인정보를 이용한 개인정보 유출 탐지 장치 및 방법 |
KR101827320B1 (ko) * | 2017-06-08 | 2018-02-09 | 윤준호 | 인공지능 콜센터 서버 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20180003417A (ko) | 2017-04-21 | 2018-01-09 | 주식회사 엔터플 | 챗봇을 이용한 콘텐트 제공 방법 및 장치 |
-
2018
- 2018-02-12 KR KR1020180016712A patent/KR101950387B1/ko active IP Right Grant
-
2019
- 2019-02-12 WO PCT/KR2019/001693 patent/WO2019156536A1/fr active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20160062668A (ko) * | 2014-11-25 | 2016-06-02 | 한국전자통신연구원 | 개방형 건강 관리 장치 및 방법 |
JP2016040730A (ja) * | 2015-10-13 | 2016-03-24 | 洋彰 宮崎 | 言語入力により自律的に知識体系を構築する人工知能装置 |
KR101730600B1 (ko) * | 2015-12-22 | 2017-04-26 | 한양대학교 산학협력단 | 거짓 개인정보를 이용한 개인정보 유출 탐지 장치 및 방법 |
KR101827320B1 (ko) * | 2017-06-08 | 2018-02-09 | 윤준호 | 인공지능 콜센터 서버 |
Non-Patent Citations (1)
Title |
---|
LEE, HYEON SEUNG ET AL.: "A Research on De-identification Technique for Personal Identifiable Information.", SPRI, August 2016 (2016-08-01), pages 1 - 64, XP055629543, Retrieved from the Internet <URL:https://spri.kr/posts/view/18382?code=research> [retrieved on 20190523] * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110990637A (zh) * | 2019-10-14 | 2020-04-10 | 平安银行股份有限公司 | 网络图谱的构建方法及其装置 |
CN110990637B (zh) * | 2019-10-14 | 2022-09-20 | 平安银行股份有限公司 | 网络图谱的构建方法及其装置 |
CN114297207A (zh) * | 2021-12-07 | 2022-04-08 | 腾讯数码(天津)有限公司 | 实体库更新方法、装置、计算机设备和存储介质 |
Also Published As
Publication number | Publication date |
---|---|
KR101950387B1 (ko) | 2019-02-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019124647A1 (fr) | Procédé et appareil informatique permettant de construire ou de mettre à jour automatiquement un modèle hiérarchique de gestion de flux de conversations destiné à un système d'agent ai interactif et support d'enregistrement lisible par ordinateur | |
WO2019156536A1 (fr) | Procédé et dispositif informatique pour construire ou mettre à jour un modèle de base de connaissances pour un système d'agent ia interactif en marquant des données identifiables mais non apprenables, parmi des données d'apprentissage, et support d'enregistrement lisible par ordinateur | |
WO2019088384A1 (fr) | Procédé de fourniture de conversation en langage naturel à expression riche par modification de réponse, dispositif informatique et support d'enregistrement lisible par ordinateur | |
WO2019132135A1 (fr) | Système d'agent intelligent interactif, procédé et support d'enregistrement lisible par ordinateur pour la surveillance active et l'intervention dans une session de dialogue entre des utilisateurs | |
WO2019147039A1 (fr) | Procédé de détermination d'un motif optimal de conversation pour la réalisation d'un objectif à un instant particulier pendant une session de conversation associée à un système de service d'ia de compréhension de conversation, procédé de détermination de probabilité de prédiction d'accomplissement d'objectif et support d'enregistrement lisible par ordinateur | |
KR102120751B1 (ko) | 대화 이해 ai 시스템에 의하여, 머신러닝을 대화 관리 기술에 적용한 하이브리드 계층적 대화 흐름 모델을 기초로 답변을 제공하는 방법 및 컴퓨터 판독가능 기록 매체 | |
KR102104294B1 (ko) | 디스플레이 장치로 읽을 수 있는 저장매체에 저장된 수화 영상 챗봇 애플리케이션 | |
KR20190103951A (ko) | 학습 데이터 중 식별 가능하지만 학습 가능성이 없는 데이터의 레이블화를 통한, 대화형 ai 에이전트 시스템을 위한 지식베이스 모델의 구축 또는 갱신 방법, 컴퓨터 장치, 및 컴퓨터 판독 가능 기록 매체 | |
KR101959292B1 (ko) | 문맥 기반으로 음성 인식의 성능을 향상하기 위한 방법, 컴퓨터 장치 및 컴퓨터 판독가능 기록 매체 | |
WO2019143170A1 (fr) | Procédé de génération de modèle de conversation pour système de service ai de compréhension de conversation ayant un but prédéterminé, et support d'enregistrement lisible par ordinateur | |
WO2019156537A1 (fr) | Système d'agent ai interactif et procédé pour fournir activement un service lié à la sécurité et similaire par l'intermédiaire d'une session de dialogue ou d'une session séparée sur la base d'une surveillance de session de dialogue entre des utilisateurs, et support d'enregistrement lisible par ordinateur | |
WO2019168235A1 (fr) | Procédé et système d'agent d'ia interactif permettant de fournir une détermination d'intention en fonction de l'analyse du même type de multiples informations d'entité, et support d'enregistrement lisible par ordinateur | |
JP2019185737A (ja) | 検索方法及びそれを用いた電子機器 | |
WO2019088638A1 (fr) | Procédé, dispositif informatique et support d'enregistrement lisible par ordinateur permettant de fournir une conversation en langage naturel par la fourniture en temps opportun d'une réponse substantielle | |
WO2019088383A1 (fr) | Procédé et dispositif informatique de fourniture de conversation en langage naturel en fournissant une réponse d'interjection en temps opportun, et support d'enregistrement lisible par ordinateur | |
KR20190094087A (ko) | 머신러닝 기반의 대화형 ai 에이전트 시스템과 연관된, 사용자 맞춤형 학습 모델을 포함하는 사용자 단말 및 사용자 맞춤형 학습 모델이 기록된 컴퓨터 판독가능 기록 매체 | |
KR101927050B1 (ko) | 서버에 대한 액세스 없이, 개인화 데이터를 이용하여 학습 가능하도록 구성된 사용자 맞춤형 학습 모델을 포함하는 사용자 단말 및 컴퓨터 판독가능 기록매체 | |
KR102017544B1 (ko) | 메신저 플랫폼에 관계없이 복수의 메신저를 이용하는 사용자간 다양한 형식의 채팅 서비스를 제공하는 대화형 ai 에이전트 시스템, 방법 및 컴퓨터 판독가능 기록 매체 | |
WO2019103569A1 (fr) | Procédé d'amélioration de la performance de reconnaissance vocale sur la base d'un contexte, appareil informatique et support d'enregistrement lisible par ordinateur | |
WO2019066132A1 (fr) | Procédé d'authentification basée sur un contexte d'utilisateur ayant une sécurité améliorée, système d'agent ai interactif et support d'enregistrement lisible par ordinateur | |
KR20210045704A (ko) | 엔티티 정보의 분석에 기초한 인텐트 결정을 제공하는 방법 및 대화형 ai 에이전트 시스템, 및 컴퓨터 판독가능 기록 매체 | |
WO2019098638A1 (fr) | Procédé, système d'agent ai interactif et support d'enregistrement lisible par ordinateur pour fournir une authentification d'empreinte vocale d'utilisateur sans sémantique ayant une sécurité améliorée | |
WO2019156535A1 (fr) | Système d'agent ia interactif et procédé pour fournir activement un service de commande ou de réservation sur la base d'une surveillance de session de dialogue entre des utilisateurs à l'aide d'informations d'historique précédentes dans une session de dialogue et support d'enregistrement lisible par ordinateur | |
WO2019143141A1 (fr) | Procédé de visualisation d'une base de connaissances, destiné à un système d'agent ai interactif, et support d'enregistrement lisible par ordinateur | |
KR20190094081A (ko) | 대화형 ai 에이전트 시스템을 위한 지식베이스의 시각화 방법 및 컴퓨터 판독가능 기록 매체 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19751559 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19751559 Country of ref document: EP Kind code of ref document: A1 |