WO2019227505A1 - Systems and methods for training and using a chatbot
- Publication number: WO2019227505A1 (application PCT/CN2018/089689)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- sentiment
- message
- machine
- chatbot
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
Definitions
- the present disclosure generally relates to the field of computer technologies, and in particular, to a sentimental appeasement chatbot system and method.
- chatbots, also known as dialogue systems, have many advantages, such as 24/7 availability with instant responses and low labor costs, and are therefore leveraged in many business scenarios, e.g., Microsoft Xiaoice, Facebook Messenger Bots, and AliMe from Alibaba.
- customers communicate with customer service to address problems, but in many cases, they also want to vent negative sentiments, e.g., mass dissatisfaction with a flight delay, or anger at the great pressure resulting from working overtime.
- Negative sentiments not only affect the customers themselves but also have a severe impact on customer service personnel, which in turn affects the quality of the whole customer service. Therefore, it is desirable to provide systems and methods for training and using a chatbot that can appease the customer’s negative emotions.
- An aspect of the present disclosure includes a system for training a sentimental appeasement chatbot model, comprising a computer-readable storage medium storing executable instructions for training the sentimental appeasement chatbot model and at least one processor in communication with the computer-readable storage medium.
- the at least one processor may be directed to cause the system to: obtain a corpus, wherein the corpus includes a plurality of message exchange pairs, wherein at least one of the plurality of message exchange pairs includes an input message and a responsive message; apply one or more machine-learning processes to the corpus to train a chatbot model to obtain a machine-learned chatbot model, wherein, once fed an input message, the chatbot model generates a first responsive message; apply one or more machine-learning processes to the corpus to train a sentiment predictor model to obtain a machine-learned sentiment predictor model, wherein, once fed a message exchange pair, the sentiment predictor model generates a sentiment state determination; and apply one or more machine-learning processes to the corpus to train the sentimental appeasement chatbot model to obtain a machine-learned sentimental appeasement chatbot model, wherein the sentimental appeasement chatbot model is constructed based on the machine-learned sentiment predictor model and the machine-learned chatbot model.
- the chatbot model may be constructed based on a sequence-to-sequence model and an attention model.
- the sentiment predictor model may be constructed based on a dual-RNN model.
- a format of the input message may include at least one of text, image, sound and video.
- the at least one processor may be further directed to cause the system to: for each input message in the plurality of message exchange pairs, generate a sentiment state determination for the input message indicating a sentiment estimation of the input message, to obtain a labeled corpus; and apply the one or more machine-learning processes to the labeled corpus to train the sentiment predictor model to obtain the machine-learned sentiment predictor model.
- the at least one processor may be further directed to cause the system to: generate the sentiment state determination for the input message using a sentiment annotator model.
- the sentiment annotator model may be a fused model constructed based on a plurality of sentiment estimation models. Once fed the input message, the sentiment annotator model may generate the sentiment state determination of the input message.
- the plurality of sentiment estimation models may include at least one of a Bayesian model and a Dictionary based model.
- the Dictionary based model may be configured to: classify a plurality of target words associated with sentiment into a plurality of categories representing different types of sentiments; filter the input message to obtain one or more words that are included in the target words; and determine the sentiment state determination of the input message based on one or more types of sentiments corresponding to the one or more words.
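For concreteness, the following Python sketch shows one way such a Dictionary based model could work; the word lists, the neutral default of 0.5, and the averaging rule are illustrative assumptions, not dictionaries or rules taken from the disclosure.

```python
# A minimal sketch of a dictionary-based sentiment annotator. The word
# lists are illustrative; scores lie in [0, 1], with 1 = most positive.
POSITIVE_WORDS = {"great", "thanks", "good", "helpful"}
NEGATIVE_WORDS = {"damn", "unsatisfied", "angry", "delayed"}

def dictionary_sentiment(message: str) -> float:
    """Filter the message to dictionary words, then score by category."""
    words = [w.strip(",.!?") for w in message.lower().split()]
    hits = [w for w in words if w in POSITIVE_WORDS | NEGATIVE_WORDS]
    if not hits:
        return 0.5  # neutral when no target word is present
    positive = sum(w in POSITIVE_WORDS for w in hits)
    return positive / len(hits)

print(dictionary_sentiment("my flight is delayed and I am angry"))  # 0.0
print(dictionary_sentiment("thanks, that was helpful"))             # 1.0
```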
- the at least one processor may be further directed to cause the system to: for each message exchange pair in the plurality of message exchange pairs: obtain a prediction sentiment state determination of a next input message relative to the message exchange pair by feeding the message exchange pair into the sentiment predictor model; and obtain a real sentiment state determination of the next input message based on the labeled corpus.
- the at least one processor may be further directed to cause the system to obtain the machine-learned sentiment predictor model by adjusting parameters of the sentiment predictor model to minimize a difference between the plurality of prediction sentiment state determinations and the plurality of real sentiment state determinations.
- the second responsive message may include appeasement elements reacting upon emotional elements of the input message.
- the at least one processor may be further directed to cause the system to: for each message exchange pair in the plurality of message exchange pairs: generate a temporary responsive message by feeding an input message of the message exchange pair into the machine-learned chatbot model; generate a temporary prediction sentiment state determination of a next input message relative to the message exchange pair by feeding the input message of the message exchange pair and the temporary responsive message into the machine-learned sentiment predictor model; determine a first difference between the temporary responsive message and a real responsive message included in the message exchange pair; determine a second difference between the temporary prediction sentiment state determination and a target sentiment state determination; and determine a combined difference based on the first difference and the second difference.
- the at least one processor may be further directed to cause the system to obtain the machine-learned sentimental appeasement chatbot model by adjusting parameters of the machine-learned chatbot model to minimize a sum of the combined differences determined for the plurality of message exchange pairs.
- the at least one processor may be further directed to cause the system to: combine the first difference and the second difference according to a predetermined proportion to obtain the combined difference.
- a method for training a sentimental appeasement chatbot model may include: obtaining a corpus, wherein the corpus includes a plurality of message exchange pairs, wherein at least one of the plurality of message exchange pairs includes an input message and a responsive message; applying one or more machine-learning processes to the corpus to train a chatbot model to obtain a machine-learned chatbot model, wherein, once fed an input message, the chatbot model generates a first responsive message; applying one or more machine-learning processes to the corpus to train a sentiment predictor model to obtain a machine-learned sentiment predictor model, wherein, once fed a message exchange pair, the sentiment predictor model generates a sentiment state determination; and applying one or more machine-learning processes to the corpus to train the sentimental appeasement chatbot model to obtain a machine-learned sentimental appeasement chatbot model, wherein the sentimental appeasement chatbot model is constructed based on the machine-learned sentiment predictor model and the machine-learned chatbot model.
- a non-transitory computer readable medium may comprise at least one set of instructions for training a sentimental appeasement chatbot model.
- the at least one set of instructions may direct the at least one processor to perform acts of: obtaining a corpus, wherein the corpus includes a plurality of message exchange pairs, wherein at least one of the plurality of message exchange pairs includes an input message and a responsive message; applying one or more machine-learning processes to the corpus to train a chatbot model to obtain a machine-learned chatbot model, wherein, once fed an input message, the chatbot model generates a first responsive message; applying one or more machine-learning processes to the corpus to train a sentiment predictor model to obtain a machine-learned sentiment predictor model, wherein, once fed a message exchange pair, the sentiment predictor model generates a sentiment state determination; and applying one or more machine-learning processes to the corpus to train the sentimental appeasement chatbot model to obtain a machine-learned sentimental appeasement chatbot model.
- a chatbot system may include a computer-readable storage medium storing executable instructions, and at least one processor in communication with the computer-readable storage medium.
- the at least one processor may be directed to cause the system to: receive an input message from an input device, wherein the input message includes sentimental elements indicating a level of negative emotion of a user of the input device; apply a sentimental appeasement chatbot model to the input message to generate a responsive message based on the sentimental elements, wherein the responsive message includes appeasement elements reacting upon emotional elements of the input message; and transmit the responsive message to an output device.
- a method may include receiving an input message from an input device, wherein the input message includes sentimental elements indicating a level of negative emotion of a user of the input device; applying a sentimental appeasement chatbot model to the input message to generate a responsive message based on the sentimental elements, wherein the responsive message includes appeasement elements reacting upon emotional elements of the input message; and transmitting the responsive message to an output device.
- a chatbot system may include a computer-readable storage medium storing executable instructions, and at least one processor in communication with the computer-readable storage medium.
- the at least one processor may be directed to cause the system to: transmit an input message to a processor, wherein the input message includes sentimental elements indicating a level of negative emotion of a user; and receive a responsive message from the processor, wherein the responsive message is generated by applying a sentimental appeasement chatbot model to the input message based on the sentimental elements, and wherein the responsive message includes appeasement elements reacting upon emotional elements of the input message.
- a method may include: transmitting an input message to a processor, wherein the input message includes sentimental elements indicating a level of negative emotion of a user; and receiving a responsive message from the processor, wherein the responsive message is generated by applying a sentimental appeasement chatbot model to the input message based on the sentimental elements, and wherein the responsive message includes appeasement elements reacting upon emotional elements of the input message.
- FIG. 1 is a schematic diagram of an exemplary sentimental appeasement chatbot (SAC) system according to some embodiments of the present disclosure
- FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of a computing device on which the processing engine 112 may be implemented according to some embodiments of the present disclosure
- FIG. 3 is a schematic diagram illustrating exemplary hardware and/or software components of a mobile device on which the user terminal 130 may be implemented according to some embodiments of the present disclosure
- FIG. 4 is a block diagram illustrating an exemplary processing engine of the server according to some embodiments of the present disclosure
- FIGs. 5A-5D are schematic diagrams illustrating exemplary models used in the present disclosure according to some embodiments of the present disclosure.
- FIG. 6 is a flowchart illustrating an exemplary process for training an SAC model according to some embodiments of the present disclosure
- FIG. 7 is a schematic diagram illustrating an exemplary architecture of recurrent neural networks according to some embodiments of the present disclosure.
- FIG. 8 illustrates an exemplary chatbot model according to some embodiments of the present disclosure
- FIG. 9 illustrates an exemplary architecture for combining the machine-learned chatbot model and the machine-learned SP model
- FIG. 10 is a flowchart illustrating an exemplary process for training the SP model according to some embodiments of the present disclosure
- FIG. 11 illustrates an exemplary architecture of the SP model according to some embodiments of the present disclosure
- FIG. 12 is a flowchart illustrating an exemplary process for operating an SAC model according to some embodiments of the present disclosure.
- FIG. 13 is a flowchart illustrating an exemplary process for operating an SAC model according to some embodiments of the present disclosure.
- the flowcharts used in the present disclosure illustrate operations that systems implement according to some embodiments of the present disclosure. It is to be expressly understood that the operations of the flowcharts may not be implemented in order. Conversely, the operations may be implemented in inverted order, or simultaneously. Moreover, one or more other operations may be added to the flowcharts, and one or more operations may be removed from the flowcharts.
- although the system and method in the present disclosure are described primarily regarding training and using a sentimental appeasement chatbot (SAC) model in a customer service scenario, it should be understood that this is only one exemplary embodiment.
- the system and method in the present disclosure may be applied to any other scenarios which may need to use the sentimental appeasement chatbot (SAC) model.
- the system and method of the present disclosure may be applied to different scenarios including help desk, website navigation, guided selling, technical support, or the like, or any combination thereof.
- Customer service may relate to responding to customers' questions about products and services, e.g., answering a question enquiring about the price of a mobile phone.
- Help desk may relate to responding to internal employee questions, e.g., responding to HR questions.
- Website navigation may relate to guiding customers to relevant portions of complex websites.
- Guided selling may relate to providing answers and guidance in the sales process, particularly for complex products being sold to novice customers.
- Technical support responds to technical problems, such as diagnosing a problem with a device.
- the sentimental appeasement chatbot (SAC) model may take the customer’s sentiment into consideration.
- the SAC model may both consider the accuracy of the responsive message relative to the customer message, and consider the feeling of the customer when he or she reads the responsive message.
- a processor of the system may obtain a corpus.
- the processor may further apply one or more machine-learning processes to the corpus to train a chatbot model and a sentiment predictor (SP) model, to obtain a machine-learned chatbot model and a machine-learned SP model.
- the processor may construct the SAC model based on the machine-learned chatbot model and the machine-learned SP model, and then apply one or more machine-learning processes to the corpus to train the SAC model.
- the processor or a processor of another system may interact with a user terminal including an input device and an output device using the machine-learned SAC model.
- the processor may acquire an input message directly or indirectly through an input device.
- the processor may operate the machine-learned SAC model to generate a responsive message based on the input message.
- the responsive message may include appeasement elements that may react upon emotional elements of the input message.
- the processor may further transmit the responsive message, directly or indirectly, to an output device.
- FIG. 1 is a schematic diagram of an exemplary sentimental appeasement chatbot (SAC) system according to some embodiments of the present disclosure.
- the SAC system 100 may include a server 110, a network 120, a user terminal 130, and a storage 160.
- the server 110 may include a processing engine 112.
- the server 110 may be a single server, or a server group.
- the server group may be centralized, or distributed (e.g., server 110 may be a distributed system) .
- the server 110 may be local or remote.
- the server 110 may access information and/or data stored in the user terminal 130, and/or the storage 160 via the network 120.
- the server 110 may connect the user terminal 130, and/or the storage 160 to access stored information and/or data.
- the server 110 may be implemented on a cloud platform.
- the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
- the server 110 may be implemented on a computing device 200 having one or more components illustrated in FIG. 2 in the present disclosure.
- the server 110 may include a processing engine 112.
- the processing engine 112 may process information and/or data relating to the input message to perform one or more functions described in the present disclosure. For example, the processing engine 112 may generate a responsive message based on the input message.
- the processing engine 112 may include one or more processing engines (e.g., single-core processing engine (s) or multi-core processor (s) ) .
- the processing engine 112 may include one or more hardware processors, such as a central processing unit (CPU) , an application-specific integrated circuit (ASIC) , an application-specific instruction-set processor (ASIP) , a graphics processing unit (GPU) , a physics processing unit (PPU) , a digital signal processor (DSP) , a field programmable gate array (FPGA) , a programmable logic device (PLD) , a controller, a microcontroller unit, a reduced instruction-set computer (RISC) , a microprocessor, or the like, or any combination thereof.
- the network 120 may facilitate exchange of information and/or data.
- one or more components of the SAC system 100 (e.g., the server 110, the user terminal 130, and the storage 160) may exchange information and/or data with each other via the network 120.
- the server 110 may receive the input message from the user terminal 130 via the network 120.
- the network 120 may be any type of wired or wireless network, or combination thereof.
- the network 120 may include a cable network, a wireline network, an optical fiber network, a telecommunications network, an intranet, the Internet, a local area network (LAN) , a wide area network (WAN) , a wireless local area network (WLAN) , a metropolitan area network (MAN) , a public telephone switched network (PSTN) , a Bluetooth network, a ZigBee network, a near field communication (NFC) network, or the like, or any combination thereof.
- the network 120 may include one or more network access points.
- the network 120 may include wired or wireless network access points such as base stations and/or internet exchange points 120-1, 120-2, ..., through which one or more components of the SAC system 100 may be connected to the network 120 to exchange data and/or information between them.
- the user terminal 130 may include a mobile device 130-1, a tablet computer 130-2, a laptop computer 130-3, a personal computer (PC) 130-4, or the like, or any combination thereof.
- the user terminal 130 may include an input device and an output device.
- the user terminal may interact with the processing engine 112.
- the input device of the user terminal 130 may transmit a message to the processing engine 112, and the output device of the user terminal 130 may receive a responsive message from the processing engine 112.
- the mobile device 130-1 may include a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof.
- the wearable device may include a smart bracelet, a smart footgear, a smart glass, a smart helmet, a smart watch, a smart clothing, a smart backpack, a smart accessory, or the like, or any combination thereof.
- the smart mobile device may include a smartphone, a personal digital assistance (PDA) , a gaming device, a navigation device, a point of sale (POS) device, or the like, or any combination thereof.
- the virtual reality device and/or the augmented reality device may include a virtual reality helmet, a virtual reality glass, a virtual reality patch, an augmented reality helmet, an augmented reality glass, an augmented reality patch, or the like, or any combination thereof.
- the virtual reality device and/or the augmented reality device may include a Google GlassTM, a RiftConTM, a FragmentsTM, a Gear VRTM, etc.
- the storage 160 may store data and/or instructions. In some embodiments, the storage 160 may store data obtained from the user terminal 130. In some embodiments, the storage 160 may store data and/or instructions that the server 110 may execute or use to perform exemplary methods described in the present disclosure. In some embodiments, the storage 160 may include a mass storage, a removable storage, a volatile read-and-write memory, a read-only memory (ROM) , or the like, or any combination thereof. Exemplary mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc. Exemplary removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc.
- Exemplary volatile read-and-write memory may include a random access memory (RAM) .
- RAM may include a dynamic RAM (DRAM) , a double data rate synchronous dynamic RAM (DDR SDRAM) , a static RAM (SRAM) , a thyristor RAM (T-RAM) , and a zero-capacitor RAM (Z-RAM) , etc.
- Exemplary ROM may include a mask ROM (MROM) , a programmable ROM (PROM) , an erasable programmable ROM (EPROM) , an electrically erasable programmable ROM (EEPROM) , a compact disk ROM (CD-ROM) , and a digital versatile disk ROM, etc.
- the storage 160 may be implemented on a cloud platform.
- the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
- the storage 160 may be connected to the network 120 to communicate with one or more components of the SAC system 100 (e.g., the server 110, the user terminal 130) .
- One or more components in the SAC system 100 may access the data or instructions stored in the storage 160 via the network 120.
- the storage 160 may be directly connected to or communicate with one or more components in the SAC system 100 (e.g., the server 110, the user terminal 130) .
- the storage 160 may be part of the server 110.
- one or more components of the SAC system 100 may access the storage 160.
- one or more components of the SAC system 100 may read and/or modify information relating to the user, and/or the public when one or more conditions are met.
- the server 110 may read and/or modify one or more users’ information during a conversation.
- an element of the SAC system 100 may perform its functions through electrical signals and/or electromagnetic signals.
- the user terminal 130 may operate logic circuits in its processor to process such a task.
- a processor of the user terminal 130 may generate electrical signals encoding the user message.
- the processor of the user terminal 130 may then send the electrical signals to an output port. If the user terminal 130 communicates with the server 110 via a wired network, the output port may be physically connected to a cable, which may further transmit the electrical signals to an input port of the server 110.
- the output port of the user terminal 130 may be one or more antennas, which may convert the electrical signals to electromagnetic signals.
- within an electronic device, such as the user terminal 130 and/or the server 110, when the processor retrieves or saves data from a storage medium (e.g., the storage 160) , it may send out electrical signals to a read/write device of the storage medium, which may read or write structured data in the storage medium.
- the structured data may be transmitted to the processor in the form of electrical signals via a bus of the electronic device.
- an electrical signal may refer to one electrical signal, a series of electrical signals, and/or a plurality of discrete electrical signals.
- FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of a computing device on which the processing engine 112 may be implemented according to some embodiments of the present disclosure.
- the computing device 200 may include a processor 210, a storage 220, an input/output (I/O) 230, and a communication port 240.
- the processor 210 may execute computer instructions (e.g., program code) and perform functions of the processing engine 112 in accordance with techniques described herein.
- the processor 210 may include interface circuits 210-a and processing circuits 210-b therein.
- the interface circuits may be configured to receive electronic signals from a bus (not shown in FIG. 2) , wherein the electronic signals encode structured data and/or instructions for the processing circuits to process.
- the processing circuits may conduct logic calculations, and then determine a conclusion, a result, and/or an instruction encoded as electronic signals. Then the interface circuits may send out the electronic signals from the processing circuits via the bus.
- the computer instructions may include, for example, routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions described herein.
- the processor 210 may include one or more hardware processors, such as a microcontroller, a microprocessor, a reduced instruction set computer (RISC) , an application-specific integrated circuit (ASIC) , an application-specific instruction-set processor (ASIP) , a central processing unit (CPU) , a graphics processing unit (GPU) , a physics processing unit (PPU) , a microcontroller unit, a digital signal processor (DSP) , a field programmable gate array (FPGA) , an advanced RISC machine (ARM) , a programmable logic device (PLD) , any circuit or processor capable of executing one or more functions, or the like, or any combinations thereof.
- the computing device 200 may also include multiple processors; thus operations and/or method steps that are described in the present disclosure as performed by one processor may also be jointly or separately performed by the multiple processors.
- for example, if the processor of the computing device 200 executes both step A and step B, it should be understood that step A and step B may also be performed by two or more different processors jointly or separately in the computing device 200 (e.g., a first processor executes step A and a second processor executes step B, or the first and second processors jointly execute steps A and B) .
- the storage 220 may store data/information obtained from the user terminal 130, the storage 160, and/or any other component of the SAC system 100.
- the storage 220 may include a mass storage, a removable storage, a volatile read-and-write memory, a read-only memory (ROM) , or the like, or any combination thereof.
- the mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc.
- the removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc.
- the volatile read-and-write memory may include a random access memory (RAM) .
- the RAM may include a dynamic RAM (DRAM) , a double data rate synchronous dynamic RAM (DDR SDRAM) , a static RAM (SRAM) , a thyristor RAM (T-RAM) , and a zero-capacitor RAM (Z-RAM) , etc.
- the ROM may include a mask ROM (MROM) , a programmable ROM (PROM) , an erasable programmable ROM (EPROM) , an electrically erasable programmable ROM (EEPROM) , a compact disk ROM (CD-ROM) , and a digital versatile disk ROM, etc.
- the storage 220 may store one or more programs and/or instructions to perform exemplary methods described in the present disclosure.
- the storage 220 may store a program for the processing engine 112 for training and using the SAC model.
- the I/O 230 may input and/or output signals, data, information, etc. In some embodiments, the I/O 230 may enable a user interaction with the processing engine 112. In some embodiments, the I/O 230 may include an input device and an output device. Examples of the input device may include a keyboard, a mouse, a touch screen, a microphone, or the like, or a combination thereof. Examples of the output device may include a display device, a loudspeaker, a printer, a projector, or the like, or a combination thereof.
- Examples of the display device may include a liquid crystal display (LCD) , a light-emitting diode (LED) -based display, a flat panel display, a curved screen, a television device, a cathode ray tube (CRT) , a touch screen, or the like, or a combination thereof.
- the communication port 240 may be connected to a network (e.g., the network 120) to facilitate data communications.
- the communication port 240 may establish connections between the processing engine 112, the user terminal 130, or the storage 160.
- the connection may be a wired connection, a wireless connection, any other communication connection that can enable data transmission and/or reception, and/or any combination of these connections.
- the wired connection may include, for example, an electrical cable, an optical cable, a telephone wire, or the like, or any combination thereof.
- the wireless connection may include, for example, a Bluetooth TM link, a Wi-Fi TM link, a WiMax TM link, a WLAN link, a ZigBee link, a mobile network link (e.g., 3G, 4G, 5G, etc. ) , or the like, or a combination thereof.
- the communication port 240 may be and/or include a standardized communication port, such as RS232, RS485, etc.
- FIG. 3 is a schematic diagram illustrating exemplary hardware and/or software components of a mobile device on which the user terminal 130 may be implemented according to some embodiments of the present disclosure.
- the mobile device 300 may include a communication platform 310, a display 320, a graphic processing unit (GPU) 330, a central processing unit (CPU) 340, an I/O 350, a memory 360, and a storage 390.
- any other suitable component including but not limited to a system bus or a controller (not shown) , may also be included in the mobile device 300.
- a mobile operating system 370 (e.g., iOS TM , Android TM , Windows Phone TM , etc. ) and the applications 380 may be loaded into the memory 360 from the storage 390 in order to be executed by the CPU 340.
- the applications 380 may include a browser or any other suitable mobile apps for receiving a response message from the server 110.
- User interactions with the information stream may be achieved via the I/O 350 and provided to the processing engine 112 and/or other components of the SAC system 100 via the network 120.
- computer hardware platforms may be used as the hardware platform (s) for one or more of the elements described herein.
- a computer with user interface elements may be used to implement a personal computer (PC) or any other type of work station or terminal device.
- a computer may also act as a server if appropriately programmed.
- FIG. 4 is a block diagram illustrating an exemplary processing engine of the server according to some embodiments of the present disclosure.
- the processing engine 112 may include a data acquisition module 410, a chatbot module 420, a sentiment annotator module 430, a sentiment prediction module 440, an SAC module 450, and a response delivery module 460.
- the modules may also be implemented as an application or set of instructions read and executed by the processing engine 112. Further, the modules may be any combination of the hardware circuits and the application/instructions. For example, the modules may be the part of the processing engine 112 when the processing engine 112 is executing the application/set of instructions.
- the data acquisition module 410 may obtain data from one or more components in the system 100 (e.g., the user terminal 130 or the storage 160) .
- the data acquisition module 410 may obtain a corpus from the storage 160.
- the data acquisition module 410 may obtain an input message transmitted from the user terminal 130 or other input devices.
- the chatbot module 420 may apply one or more machine-learning processes to the corpus to train a chatbot model to obtain a machine-learned chatbot model.
- the chatbot model may be stored in the storage 160, and may be invoked by the chatbot module 420 when needed. Once fed an input message, the chatbot model may generate a responsive message based on the input message. Details about training the chatbot model may be found elsewhere in the present disclosure (e.g., operation 604 in FIG. 6) .
- the sentiment annotator module 430 may generate a sentiment state determination based on an input message.
- the sentiment state determination of an input message may be generated based on a sentiment annotator model.
- once fed an input message, the sentiment annotator model may generate a sentiment state determination of the input message. Details about obtaining the labeled corpus by the sentiment annotator module 430 may be found elsewhere in the present disclosure (e.g., operation 1002 in FIG. 10) .
- the sentiment prediction module 440 may apply one or more machine-learning processes to the corpus to train a sentiment predictor (SP) model to obtain a machine-learned SP model.
- the corpus used to train the SP model may be a pre-labeled corpus that is different from the corpus used to train the chatbot model.
- the SP model may be stored in the storage 160, and may be invoked by the sentiment prediction module 440 when needed. For example, once fed a message exchange pair including an input message and a responsive message, the sentiment predictor model may generate a sentiment state determination of a next input message.
- the SP model may include parameters denoted as φ.
- the sentiment prediction module 440 may adjust the parameters φ of the SP model to obtain the machine-learned SP model. Details about training the SP model by the sentiment prediction module may be found elsewhere in the present disclosure (e.g., operation 606 in FIG. 6, and operation 1004 in FIG. 10) .
- the SAC module 450 may apply one or more machine-learning processes to the corpus to train the SAC model to obtain a machine-learned SAC model.
- the SAC model is constructed based on the machine-learned chatbot model and the machine-learned SP model. Once fed an input message, the machine-learned SAC model may generate a responsive message taking the user’s sentiment into consideration. Details about training the SAC model by the SAC module may be found elsewhere in the present disclosure (e.g., operation 608 in FIG. 6) .
- the SAC module 450 may be configured to apply the machine-learned SAC model to an input message from the input device to generate a sentiment appeasement responsive message. Details about generating the sentiment appeasement responsive message may be found elsewhere in the present disclosure (e.g., operation 1204 in FIG. 12) .
- the response delivery module 460 may be configured to transmit the sentiment appeasement responsive message generated by the SAC module 450 to the user terminal 130 or other output devices.
- the system 100 may be a local system
- the processing engine 112 may receive an input message from an input device of the user terminal 130. After generating a responsive message, the processing engine 112 may transmit it to an output device of the user terminal.
- processing engine 112 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure.
- the processing engine 112 may further include a storage module facilitating data storage.
- those variations and modifications do not depart from the scope of the present disclosure.
- FIGs. 5A-5D are schematic diagrams illustrating exemplary models used in the present disclosure according to some embodiments of the present disclosure.
- the proposed technique builds on four basic models: a chatbot model, a sentiment annotator model, a sentiment predictor model, and a sentimental appeasement chatbot (SAC) model.
- Each of the four types of basic models may be constructed based on various architectures, e.g., recurrent neural network (RNN) model, convolutional neural network (CNN) model, or the like, or a combination thereof.
- the following descriptions of the four basic models may propose one or more possible architectures, such as the RNN, for illustration purposes. It should be noted that any other architecture that can achieve the same functions of the four basic models may also be included in the present disclosure.
- FIG. 5A illustrates a chatbot model.
- the chatbot model may be stored in the storage 160, and may be invoked by the processing engine 112 when needed. For example, when an input device transmits a message to the processing engine 112, the processing engine 112 may invoke the chatbot model to generate a responsive message based on the input message, and further deliver it to an output device. Once fed an input message, the chatbot model may generate a responsive message based on the input message. If the chatbot is well trained, the responsive message may have a strong correlation with the input message. For example, if the input message is a question, the responsive message may be an answer to the question. As another example, if the input message is a request, the responsive message may be a confirmation message accepting the request.
- FIG. 5B illustrates a sentiment annotator model.
- the sentiment annotator model may be stored in the storage 160, and may be invoked by the processing engine 112 when needed.
- a corpus including a plurality of dialogues may be required in some scenarios, e.g., the training of the sentiment predictor model.
- Messages included in the plurality of dialogues may need to be labeled with certain sentiment state determinations.
- the sentiment state determination may refer to a sentiment estimation of the input message which may reflect a sentimental condition or prediction of the message.
- a sentiment state determination may be a number, a symbol, a description, or any other form that can be used to differentiate sentiment states from each other.
- the sentiment state determination may be a value in [0, 1] , where 1 refers to the most positive emotion and 0 refers to the most negative emotion.
- the sentiment state determination may be a text description relating to sentiment.
- a positive emotion may correspond to a text of “good”
- a negative emotion may correspond to a text of “bad” .
- the sentiment annotator model may be used to generate sentiment state determinations for the messages in the plurality of dialogues. For example, once fed an input message, the sentiment annotator model may generate a sentiment state determination of the input message.
- the sentiment annotator model may be a fused model constructed based on a plurality of emotion estimation models (e.g., a Bayesian model, a Dictionary based model, etc. ) . Details about the sentiment annotator model may be found elsewhere in the present disclosure (e.g., FIG. 10, and the descriptions thereof) .
- FIG. 5C illustrates a sentiment predictor (SP) model.
- the SP model may be stored in the storage 160, and may be invoked by the processing engine 112 when needed.
- the sentiment predictor model is used to predict a sentiment state determination of a future input message that the user may send. For example, once fed a message exchange pair including an input message and a responsive message, the sentiment predictor model may generate a prediction sentiment state determination of a next input message.
- the responsive message may be sent by a service provider in response to receiving the customer’s message.
- the prediction sentiment state determination may reflect whether the customer is satisfied with the responsive message from the service provider to some extent. In other words, for the service provider, the prediction sentiment state determination may be used to test the quality of the responsive message, which may be important for the service provider to improve customer satisfaction.
- FIG. 5D illustrates a sentimental appeasement chatbot (SAC) model.
- the SAC model may be stored in the storage 160, and may be invoked by the processing engine 112 when needed.
- the SAC model may be constructed based on the chatbot model and the sentiment predictor model. Once fed an input message, the SAC model may generate a responsive message. However, compared to the chatbot model, the responsive message generated by the SAC model may take the user’s sentiment into consideration since the sentiment predictor model is included in the SAC model.
- the input message may include sentimental elements indicating a level of negative emotion of the user.
- the SAC model may generate the responsive message based on the input message and the sentimental elements. In this case, the responsive message may appease the negative emotion of the user.
- the sentimental elements may include some characteristic words that can describe the user’s emotion (e.g., damn, great, unsatisfied, etc. ) .
- the appeasement elements may include some characteristic words or phrases that may be used to appease emotion (e.g., sorry, apology, thank you, etc. ) .
- the responsive message generated by the SAC model may be translated into a different language relative to the input message. For example, when a foreign customer who is not familiar with Chinese sends a complaint message in Chinese, the SAC model may generate the responsive message and translate it into the native language of the customer. Details about the SAC model may be found elsewhere in the present disclosure (e.g., FIG. 6 and FIG. 9, and the descriptions thereof) .
- FIG. 6 is a flowchart illustrating an exemplary process for training an SAC model according to some embodiments of the present disclosure.
- the process 600 may be executed by the SAC system 100.
- the process 600 may be implemented as a set of instructions (e.g., an application) stored in a storage device (e.g., the storage device 160) .
- the processor 210 and/or the modules in FIG. 4 may execute the set of instructions and, when executing the instructions, the processor 210 and/or the modules may be configured to perform the process 600.
- the operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 600 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order of the operations of the process as illustrated in FIG. 6 and described below is not intended to be limiting.
- the processing engine 112 may obtain a corpus.
- the processing engine 112 may access the storage 160 via the network 120 to obtain the corpus.
- the corpus may be used to train the SAC model.
- the corpus may include a plurality of dialogues.
- the corpus may include M dialogues D[1], ..., D[M] between customers and service providers.
- Each dialogue D[m] may include a plurality of message exchange pairs.
- a message exchange pair refers to a turn of dialogue between the customer and the service provider.
- a dialogue D[m] may include N_m turns of message exchange pairs, which may be denoted as <x_n, y_n>, n = 1, ..., N_m.
- x refers to an input message
- y refers to a responsive message relative to the input message.
- the input message or responsive message may have various data formats (e.g., text, image, video, sound, symbol, etc. ) .
- the input message may include a plurality of sentences.
- a customer may complain to the service provider by sending several messages to the processor of the service provider. The several messages may be included in a single input message.
- x = (x_1, ..., x_{T_x}) may be used to represent the input message with a length of T_x, where x_t denotes a token of the input message at time t.
- y = (y_1, ..., y_{T_y}) denotes the responsive message from the service provider.
- the responsive messages from the service provider may be generated by a chatbot or a human. Based on the above description, the total number of message exchange pairs in the corpus may be N = N_1 + N_2 + ... + N_M.
- the corpus may be obtained from a database.
- the database may include redundant data that may not be used in the corpus. Therefore, the processing engine 112 may perform a filter process on the database to obtain the corpus.
- an exemplary filter process may include the following five steps; a code sketch of the resulting pipeline appears after the steps.
- the processing engine 112 may filter out all non-sense or sensitive information, e.g., cell numbers, birthdates, etc., and remove all corrupted or non-text dialogues.
- the processing engine 112 may combine and drop the repeated and redundant messages from both customers’ and customer service’s sides.
- the processing engine 112 may process each dialogue into <x_n, y_n> pair format and merge consecutive posts (also referred to as input messages) or responsive messages (posted by the same person) into a single input message or responsive message.
- the processing engine 112 may remove dialogues in which the total number of <x_n, y_n> pairs (i.e., turns) is less than 2.
- the processing engine 112 may segment all the messages into words.
- an exemplary segmentation algorithm for Chinese characters is “Jieba” , which provides three segmentation modes: accurate mode, full mode, and search engine mode. A brief usage example follows the mode descriptions.
- the accurate mode attempts to cut the sentence into the most accurate segmentations, which is suitable for text analysis.
- the full mode gets all the possible words from the sentence.
- the search engine mode, based on the accurate mode, attempts to cut long words into several short words, which can raise the recall rate.
- the “Jieba” may achieve efficient word graph scanning based on a prefix dictionary structure.
- “Jieba” may build a directed acyclic graph (DAG) for all possible word combinations.
- “Jieba” may use dynamic programming to find the most probable combination based on the word frequency. For unknown words, an HMM-based model is used with the Viterbi algorithm.
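The three modes map onto Jieba's public API (jieba.cut with the cut_all flag, and jieba.cut_for_search); a minimal usage example with an arbitrary sample sentence:

```python
import jieba

sentence = "我来到北京清华大学"  # arbitrary sample sentence

# Accurate mode (default): cut the sentence into the most accurate segmentation.
print("/".join(jieba.cut(sentence, cut_all=False)))

# Full mode: get all possible words from the sentence.
print("/".join(jieba.cut(sentence, cut_all=True)))

# Search engine mode: accurate mode, then re-cut long words into short ones.
print("/".join(jieba.cut_for_search(sentence)))
```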
- the processing engine 112 may count the frequencies of each word and segment rare words (frequency < 3) into characters.
- the processing engine 112 may further recalculate the frequencies of all tokens (i.e., words and chars) , and replace the rare tokens (frequency < 3) with a RARE tag.
- the processing engine 112 may obtain the corpus.
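As promised above, a rough Python sketch of steps three through five of the filter process. The data layout (speaker-tagged token lists, customer speaking first) and the exact merging rule are illustrative assumptions:

```python
from collections import Counter
from itertools import groupby

def build_corpus(dialogues, min_turns=2, min_freq=3):
    """Sketch of steps three through five: merge consecutive messages from
    the same speaker, form <x_n, y_n> pairs, drop dialogues with fewer than
    `min_turns` pairs, and replace rare tokens with a RARE tag.
    Each dialogue is assumed to be a list of (speaker, token_list) tuples,
    already cleaned and segmented, starting with a customer message."""
    corpus = []
    for dialogue in dialogues:
        # Merge consecutive posts by the same person into a single message.
        merged = [sum((tokens for _, tokens in group), [])
                  for _, group in groupby(dialogue, key=lambda m: m[0])]
        # Pair each customer message x with the following service reply y.
        pairs = [(merged[i], merged[i + 1])
                 for i in range(0, len(merged) - 1, 2)]
        if len(pairs) >= min_turns:  # remove dialogues with < 2 turns
            corpus.append(pairs)
    # Replace rare tokens (frequency < min_freq) with a RARE tag.
    freq = Counter(t for d in corpus for x, y in d for t in x + y)
    keep = lambda t: t if freq[t] >= min_freq else "<RARE>"
    return [[([keep(t) for t in x], [keep(t) for t in y]) for x, y in d]
            for d in corpus]
```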
- example statistics about a corpus of Chinese characters are shown in Table 1.
- the corpus may be shuffled and segmented into several groups for training, validating, and testing respectively.
- the processing engine 112 may apply one or more machine-learning processes to the corpus to train a chatbot model to obtain a machine-learned chatbot model.
- at least a portion of the corpus obtained in the operation 602 may be used to train the chatbot model.
- the chatbot model may be built on top of a sequence-to-sequence (seq2seq) model and an attention model.
- the exemplary seq2seq model and attention model that may be used to construct the chatbot model are disclosed for illustration purposes and are not intended to limit the scope of the present disclosure.
- the chatbot model may be built on top of a seq2seq model which contains an encoder-decoder architecture.
- a basic sequence-to-sequence model consists of two recurrent neural networks (RNNs) : an encoder that processes the input and a decoder that generates the output. This basic architecture is shown in FIG. 7, where each circle represents a cell of the RNN. The encoder and decoder can share weights or use different sets of parameters.
- a length of an input message x may be denoted as T_x.
- a length of a responsive message y may be denoted as T_y.
- the encoder may map the input message to a context vector c.
- the responsive message may be generated from the context vector c by the decoder.
- the seq2seq model may model a conditional probability p(y|x) of generating the responsive message y given the input message x.
- the encoder of the seq2seq model may be a recurrent neural network (RNN) .
- An RNN can learn a probability distribution over a sequence by being trained to predict the next symbol in the sequence. In that case, the output at each time step t is the conditional distribution p(x_t | x_{t-1}, ..., x_1) .
- the decoder may also be an RNN model.
- the conditional distribution of decoding each y_t may be parameterized based on Equation (2) as follows:
p(y_t | y_{t-1}, ..., y_1, c) = g(h_t, y_{t-1}, c) ,     (2)
- where g is a non-linear activation function, e.g., a softmax function, and h_t is the hidden state of the decoder at time step t.
- the encoder and the decoder may be jointly trained to maximize the conditional log-likelihood based on Equation (3) as follows:
max_θ (1/N) Σ_{n=1}^{N} log p_θ(y_n | x_n) ,     (3)
- N denotes a total number of message exchange pairs in the corpus.
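A minimal PyTorch sketch of such an encoder-decoder, assuming GRU cells and token-ID inputs; the dimensions and class name are illustrative, not the patent's implementation:

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal encoder-decoder: the encoder maps the input message x to a
    context vector c; the decoder generates the response conditioned on c."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x, y_in):
        _, c = self.encoder(self.embed(x))        # final state = context vector c
        h, _ = self.decoder(self.embed(y_in), c)  # decode, seeded by the context
        return self.out(h)                        # per-step logits over the vocab

# Example shapes: a batch of 2 input messages (length 7) and targets (length 5).
model = Seq2Seq(vocab_size=1000)
logits = model(torch.randint(0, 1000, (2, 7)), torch.randint(0, 1000, (2, 5)))
print(logits.shape)  # torch.Size([2, 5, 1000])
```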
- a stacking RNN architecture may also be used to construct the chatbot model.
- the stacking RNN may include long short-term memory (LSTM) units (or blocks) , which are building units for layers of a recurrent neural network (RNN) .
- the stacking RNN may include two different LSTMs: one for the input sequence and another for the output sequence, which may increase the number of model parameters at negligible computational cost and make it natural to train the LSTM on multiple language pairs simultaneously.
- a gated recurrent unit (GRU) may be used for constructing the encoder and the decoder.
- a GRU is related to an LSTM (long short-term memory) unit, but uses a different gating mechanism; both are designed to prevent the long-distance dependency problem.
- a GRU exposes its full hidden content without any control.
- a GRU has two gates, a reset gate r and an update gate z. Intuitively, the reset gate determines how to combine the new input with the previous memory, and the update gate defines how much of the previous memory to keep.
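In one standard formulation (consistent with the gate roles just described, though the disclosure does not give the equations here), the GRU update can be written as:

```latex
\begin{aligned}
z_t &= \sigma(W_z x_t + U_z h_{t-1}) && \text{update gate}\\
r_t &= \sigma(W_r x_t + U_r h_{t-1}) && \text{reset gate}\\
\tilde{h}_t &= \tanh\big(W x_t + U\,(r_t \odot h_{t-1})\big) && \text{candidate state}\\
h_t &= z_t \odot h_{t-1} + (1 - z_t) \odot \tilde{h}_t && \text{final state}
\end{aligned}
```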
- an attention model may be integrated into the seq2seq model to address the issues of alignments in the encoder-decoder architecture.
- the attention model may link the current decoding time step to the most relevant portions of the input message x for the current decoding state. For example, h̄_{t'} may denote the hidden states from the encoder, and h_t may denote the decoding hidden states.
- the attention model may link the current decoding state h_t with every input state h̄_{t'} through a weight vector a_{tt'} .
- the weight vector a_{tt'} may be derived based on various scoring functions (e.g., global attentional model, local attentional model, etc. ) .
- a dot product between the two vectors, i.e., h_t · h̄_{t'} , may be used as the scoring function, and the weight vector may be determined as a_{tt'} = exp(h_t · h̄_{t'}) / Σ_{t''} exp(h_t · h̄_{t''}) . Given the weight vector a_{tt'} , the attention vector c_t for decoding at step t is determined as the weighted average along all input hidden states: c_t = Σ_{t'} a_{tt'} h̄_{t'} .
- the attentional hidden state may be produced by h̃_t = tanh(W_c [h_t, c_t]) , where [h_t, c_t] is the concatenating operation on the current decoding hidden state and the attention vector. Then h̃_t may be fed into a softmax function to get the distribution of the prediction as p(y_t | y_{<t}, x) = softmax(W_s h̃_t) .
- the attention mechanism may be used to boost the performance of the seq2seq model.
- the decoder decides which parts of the source sentence to pay attention to.
- the encoder may be relieved from the burden of having to encode all information in the source sentence into a fixed length vector, which may boost the performance of the seq2seq model.
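A sketch of one decoding step of the dot-product attention described above, assuming precomputed encoder states; the shapes and names are illustrative:

```python
import torch
import torch.nn.functional as F

def dot_attention(h_t, enc_states, W_c):
    """One decoding step of dot-product attention.
    h_t:        (batch, hidden)       current decoding hidden state
    enc_states: (batch, T_x, hidden)  encoder hidden states (h bar)
    W_c:        (2 * hidden, hidden)  projection for the attentional state
    """
    scores = torch.bmm(enc_states, h_t.unsqueeze(2)).squeeze(2)  # (B, T_x)
    a = F.softmax(scores, dim=1)                                 # weights a_tt'
    c_t = torch.bmm(a.unsqueeze(1), enc_states).squeeze(1)       # attention vector
    h_tilde = torch.tanh(torch.cat([h_t, c_t], dim=1) @ W_c)     # attentional state
    return h_tilde, a
```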
- FIG. 8 illustrates an exemplary chatbot model according to some embodiments of the present disclosure.
- the encoder may convert the input message x to a continuous representation vector c.
- the decoder may decode c to generate the responsive message y.
- the encoder and decoder may both adopt a 2-layer GRU architecture.
- the 2-layer GRU may be developed through adding a second GRU layer that captures higher-level feature interactions between different time steps.
- the attention model may be the dot product scoring attention model.
- the chatbot model may estimate the conditional probability p_θ(y|x) .
- the chatbot model may generate a prediction responsive message once fed an input message.
- the prediction response may be denoted as ŷ = (ŷ_1, ..., ŷ_{T_y}) , where ŷ_t denotes a prediction word.
- a distribution of the prediction word ŷ_t may be denoted as ô_t , while a distribution of a real word y_t may be denoted as o_t .
- the loss function of a message exchange pair <x, y> may be a cross entropy between ô_t and o_t , which may be parameterized as Equation (4) as follows:
L(x, y; θ) = - Σ_{t=1}^{T_y} o_t · log ô_t ,     (4)
- where θ denotes the parameters of the chatbot model.
- a total loss of the corpus may be a sum of the losses of all message exchange pairs included in the corpus, shown as Equation (5) as follows:
L(θ) = Σ_{n=1}^{N} L(x_n, y_n; θ) .     (5)
- a teacher forcing algorithm may be used during the training.
- the teacher forcing algorithm is a method for quickly and efficiently training recurrent neural network models. Teacher forcing works by using the actual or expected output from the training dataset at the current time step y (t) as input in the next time step x (t+1) , rather than the output generated by the network. It is a network training method critical to the development of deep learning language models used in machine translation, text summarization, and image captioning, among many other applications.
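A hedged sketch of one teacher-forced training step, assuming a model with the Seq2Seq-style interface sketched earlier (logits over the vocabulary for each target position); the <bos> handling is an illustrative convention:

```python
import torch
import torch.nn.functional as F

def teacher_forcing_step(model, optimizer, x, y, bos_id=1):
    """One training step: the decoder is fed the ground-truth previous token
    (y shifted right, starting from <bos>) instead of its own prediction."""
    bos = torch.full_like(y[:, :1], bos_id)
    decoder_input = torch.cat([bos, y[:, :-1]], dim=1)  # <bos>, y_1, ..., y_{T-1}
    logits = model(x, decoder_input)                    # (B, T_y, vocab)
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           y.reshape(-1))               # Equation (4), per token
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```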
- the processing engine 112 may adjust the parameters θ of the chatbot model to minimize the total loss of the corpus to obtain the machine-learned chatbot model.
- the machine-learned chatbot model may be used as a baseline model for the SAC model.
- the beam search algorithm is a heuristic search algorithm that explores a graph by expanding the most promising node in a limited set. Beam search is an optimization of best-first search that reduces its memory requirements. Best-first search is a graph search which orders all partial solutions (states) according to some heuristic which attempts to predict how close a partial solution is to a complete solution (goal state) . But in beam search, only a predetermined number of best partial solutions are kept as candidates. It is thus a greedy algorithm.
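A generic beam search sketch; the disclosure does not fix the decoding procedure here, so the step_fn interface (returning scored next-token continuations, e.g., the top-k of the decoder's softmax) is an assumption for illustration:

```python
def beam_search(step_fn, bos_id, eos_id, beam_width=4, max_len=30):
    """Generic beam search. `step_fn(prefix)` is assumed to return a list of
    (token_id, log_prob) continuations for a token-ID prefix."""
    beams = [([bos_id], 0.0)]                 # (prefix, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            if prefix[-1] == eos_id:          # finished hypotheses carry over
                candidates.append((prefix, score))
                continue
            for token, log_prob in step_fn(prefix):
                candidates.append((prefix + [token], score + log_prob))
        # Keep only the best `beam_width` partial solutions (greedy pruning).
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]
        if all(prefix[-1] == eos_id for prefix, _ in beams):
            break
    return beams[0][0]                        # highest-scoring hypothesis
```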
- the processing engine 112 may apply one or more machine-learning processes to the corpus to train a sentiment predictor (SP) model to obtain a machine-learned SP model.
- at least a portion of the corpus obtained in the operation 602 may be used to train the SP model.
- the corpus used to train the SP model may be different from the corpus used to train the chatbot model (e.g., a pre-labeled corpus) .
- the SP model may include parameters denoted as φ.
- the processing engine 112 may adjust the parameters φ of the SP model to obtain the machine-learned SP model. After the SP model is trained, once fed a message exchange pair <x, y> , the SP model may generate a prediction sentiment state determination of a next input message.
- the prediction sentiment state determination may indicate whether the responsive message in the message exchange pair can appease the sentiment of the customer.
- the SAC model, which includes the SP model, may be able to generate responsive messages that are not only accurate but also considerate of the customer's sentiment. Details about training the SP model may be found elsewhere in the present disclosure (e.g., FIG. 10 and FIG. 11, and the descriptions thereof).
- the processing engine 112 may apply one or more machine-learning processes to the corpus to train the SAC model to obtain a machine-learned SAC model.
- at least a portion of the corpus obtained in the operation 602 may be used to train the SAC model.
- the corpus used to train the SAC model may be different from the corpus used to train the chatbot model and the SP model.
- the SAC model is constructed based on the machine-learned chatbot model and the machine-learned SP model.
- the responsive message generated by the SAC model may be at least partly generated based on the sentiment state determination generated by the SP model. By including the chatbot model and the SP model, once given an input message, the sentiment state determination of the responsive message generated by the SAC model may be improved.
- FIG. 9 illustrates an exemplary architecture for combining the machine-learned chatbot model and the machine-learned SP model.
- the input message may be fed into the machine-learned chatbot model to obtain a temporal responsive message $\hat{y}$.
- the processing engine 112 may then feed the input message and the temporal responsive message $\hat{y}$ into the machine-learned SP model to obtain a temporal prediction sentiment state determination $\hat{s}_{n+1}$ of a next input message.
- the processing engine 112 may adjust the parameters of the machine-learned chatbot model to promote the temporal prediction sentiment state determination $\hat{s}_{n+1}$.
- the prediction sentiment state determination lies in the range [0, 1], where 1 refers to the most positive sentiment and 0 refers to the most negative sentiment. Therefore, to promote the temporal prediction sentiment state determination, the processing engine 112 may adjust the parameters of the chatbot model to minimize a mean square error (MSE) between the temporal prediction sentiment state determination and 1 (the most positive score).
- the processing engine 112 may combine the MSE loss with the chatbot objective function of Equation (4) to obtain a total loss function of the SAC model shown as Equation (6) as follows: $L_{SAC}(\theta) = L_{\langle x, y \rangle}(\theta) + \lambda\,(\hat{s}_{n+1} - 1)^2$ (6), where $\lambda$ denotes a hyperparameter indicating how much impact the SP model contributes to the total loss function.
- the processing engine 112 may adjust the parameters $\theta$ of the machine-learned chatbot model to minimize the total loss function to obtain the machine-learned SAC model.
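- For illustration, the combined objective of Equation (6) may be sketched as follows, assuming PyTorch; the function and argument names are illustrative, not part of the original disclosure.

```python
# Illustrative sketch of Eq. (6): chatbot cross entropy plus an MSE term
# pushing the predicted sentiment of the next input message toward 1
# (the most positive score), weighted by the hyperparameter lambda.
import torch
import torch.nn.functional as F

def sac_loss(logits, targets, predicted_sentiment, lam, pad_id=0):
    ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                         targets.reshape(-1), ignore_index=pad_id)
    mse = F.mse_loss(predicted_sentiment,
                     torch.ones_like(predicted_sentiment))
    return ce + lam * mse
```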
- operation 606 of training the SP model may be performed before operation 604 of training the chatbot model.
- FIG. 10 is a flowchart illustrating an exemplary process for training the SP model according to some embodiments of the present disclosure.
- the process 1000 may be executed by the SAC system 100.
- the process 1000 may be implemented as a set of instructions (e.g., an application) stored in a storage device (e.g., the storage device 160) .
- the processor 210 and/or the modules in FIG. 4 may execute the set of instructions and, when executing the instructions, the processor 210 and/or the modules may be configured to perform the process 1000.
- the operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 1000 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed.
- the corpus may need to be a labeled corpus in which at least a part of the input messages correspond to sentiment state determinations.
- the corpus may include only text messages that are not labeled with sentiment state determinations.
- the processing engine 112 may generate a sentiment state determination for each input message in the corpus to obtain a labeled corpus.
- the labeled corpus may be obtained manually, which may increase labor costs.
- the labeled corpus may be obtained based on a sentiment annotator (SA) model.
- the SA model may generate a sentiment state determination of the input message.
- the corpus may include multiple languages, and the SA model may recognize the language of the input message and further generate a corresponding sentiment state determination.
- the sentiment state determination s may lie in the range [0, 1], where 1 refers to the most positive sentiment and 0 refers to the most negative sentiment.
- the processing engine 112 may operate the SA model to generate the sentiment state determination for each input message in the corpus.
- the SA model may be constructed based on a plurality of sentiment estimation models.
- Exemplary sentiment estimation models may include a Bayesian model, a Dictionary based model, or the like, or a combination thereof.
- the SA model may be a fusion model combining the Bayesian model and the Dictionary based model, shown as Equation (7) as follows: $s = \alpha\, s_{bayes} + (1 - \alpha)\, s_{dict}$ (7), where $s_{bayes}$ denotes an output sentiment state determination by the Bayesian model, $s_{dict}$ denotes an output sentiment state determination by the Dictionary based model, and $\alpha$ denotes a combination coefficient between 0 and 1.
- the Bayesian model may be a pre-trained Bayesian sentiment classifier from the SnowNLP package.
- SnowNLP is a Python class library for processing Chinese text content. It may support multiple functions including Chinese word segmentation (character-based generative model), part-of-speech tagging, sentiment analysis, text classification (naive Bayes), conversion to pinyin, conversion between traditional and simplified Chinese, keyword extraction, abstract extraction, tf-idf, tokenization, text similarity, and Python 3 support.
- the output from SnowNLP is between 0 and 1, from the most negative to the most positive.
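- For illustration, the fusion of Equation (7) may be sketched as follows, assuming the SnowNLP package for the Bayesian score; dict_score is a hypothetical stand-in for the Dictionary based model described below.

```python
# Illustrative sketch of the fusion model of Eq. (7).
from snownlp import SnowNLP

def annotate(message, dict_score, alpha=0.5):
    s_bayes = SnowNLP(message).sentiments   # in [0, 1], negative -> positive
    s_dict = dict_score(message)            # hypothetical dictionary score in [0, 1]
    # linear fusion with combination coefficient alpha in [0, 1]
    return alpha * s_bayes + (1 - alpha) * s_dict
```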
- the Dictionary based model may include a sentiment polarity dictionary (e.g., positive dictionary, negative dictionary) , a sentiment degree dictionary, a stop word dictionary, or the like, or a combination thereof.
- the Dictionary based model may be applied to various languages since it may contain various types of dictionaries.
- the Dictionary based model may be configured to classify a plurality of target words associated with sentiment into a plurality of categories representing different types of sentiments. Once fed into an input message, the Dictionary based model may filter the input message to obtain one or more words that are included in the target words. Then the Dictionary based model may generate a sentiment state determination of the input message based on one or more types of sentiments corresponding to the one or more words.
- the Dictionary based model may include a plurality of electronic dictionaries.
- Exemplary electronic dictionaries may include HowNet, NTUSD, BosonNLP, or the like, or a combination thereof.
- HowNet is an online common-sense knowledge base unveiling inter-conceptual relations and inter-attribute relations of concepts as connoted in lexicons of Chinese and their English equivalents.
- a notable characteristic of HowNet is that synonyms, antonyms and converse relations can be generated by the users themselves based on the rules for synonym relations, the List of Antonym Relations, and the List of Converse Relations, instead of coding each of them overtly on every concept as WordNet does.
- NTUSD is a sentiment dictionary. It provides 11,088 sentiment words containing both positive words and negative words.
- NTUSD provides useful polarity information which can serve as seeds to learn sentiment of other words, sentences and even documents.
- BosonNLP is an ensemble approach for word segmentation and POS Tagging including three steps: pre-processing, statistical modeling, and post-processing.
- in the pre-processing step, the training data are given in a format using 5-tag labeling.
- the 5-tag labeling {B, C, M, E, S} for word segmentation indicates the beginning, second character, inside, ending, and isolation of a word, respectively.
- in the statistical modeling step, linear-chain conditional random fields (CRF) may be used, and both character-level features and dictionary features are extracted to produce accurate predictions.
- a word included in the target words may be treated as positive or negative if it is included in the sentiment polarity dictionary, and may be assigned a base sentiment state determination of 1 or -1, respectively. If a word is not included in the target words, it may be considered sentiment-neutral and have a base score of 0.
- the target words may be classified into 7 groups with different degree levels and weights, as shown in Table 2.
- the sentiment state determination may be determined in the following steps. First, the processing engine 112 may filter out the stop words from x, leaving $Q_x$ words after this step.
- Second, for each remaining word, the processing engine 112 may determine its individual sentiment state determination by multiplying its base sentiment state determination with the degree weights of all of its related sentiment degree words. The processing engine 112 may then add up all the individual sentiment state determinations to get an unnormalized sentiment state determination of the input message x. Last, to remedy the effects of the length of the input message, the processing engine 112 may divide the unnormalized score by the square root of $Q_x$ and apply a sigmoid function $\sigma(\cdot)$ on it. Therefore, the sentiment state determination generated by the Dictionary based model may be defined as $s_{dict} = \sigma\left(\frac{1}{\sqrt{Q_x}} \sum_i s_i\right)$, where $s_i$ denotes the individual sentiment state determination of the i-th word.
- Table 2 Sentiment degree levels, weights, and example words in Chinese.
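- A minimal sketch of the dictionary-based scoring just described is given below for illustration; the word lists and degree weights are toy stand-ins for the dictionaries of Table 2, not the actual resources.

```python
# Illustrative sketch: filter stop words, score sentiment words scaled by
# preceding degree words, normalize by sqrt(Q_x), and squash with a sigmoid.
import math

POSITIVE = {"great", "good"}           # base score +1 (toy positive dictionary)
NEGATIVE = {"damn", "unsatisfied"}     # base score -1 (toy negative dictionary)
DEGREE = {"very": 2.0, "slightly": 0.5}
STOPWORDS = {"the", "a", "is", "i", "am"}

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dict_score(message):
    words = [w for w in message.lower().split() if w not in STOPWORDS]
    q = len(words)                     # Q_x, the word count after filtering
    total, weight = 0.0, 1.0
    for w in words:
        if w in DEGREE:                # degree word scales the next sentiment word
            weight *= DEGREE[w]
            continue
        base = 1 if w in POSITIVE else (-1 if w in NEGATIVE else 0)
        total += base * weight         # individual sentiment state determination
        weight = 1.0
    return sigmoid(total / math.sqrt(q)) if q else 0.5
```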
- the processing engine 112 may train the SP model based on the labeled corpus to obtain the machine-learned SP model.
- the processing engine 112 may obtain a prediction sentiment state determination of a next input message relative to the message exchange pair by feeding the message exchange pair into the SP model, and obtain a real sentiment state determination of the next input message based on the labeled corpus. Then, the processing engine 112 may obtain the machine-learned SP model by adjusting parameters of the SP model to minimize a difference between the plurality of prediction sentiment state determinations and the plurality of real sentiment state determinations.
- the processing engine 112 may train the SP model based on the following steps. First, for each dialogue in the plurality of dialogues, and for each message exchange pair except the last one in the dialogue content, the processing engine 112 may obtain a prediction sentiment state determination of a next input message relative to the message exchange pair by feeding the message exchange pair into the SP model. The processing engine 112 may then obtain a real sentiment state determination of the next input message based on the labeled corpus. To train the SP model, the processing engine 112 may further adjust the parameters $\phi$ of the SP model to minimize a difference between the plurality of prediction sentiment state determinations and the plurality of real sentiment state determinations.
- an exemplary architecture of SP model is provided for illustration as shown in FIG. 11.
- the SP model may be a dual-RNN model combined with embedding layers at the bottom, and attentional and dense layers on the top.
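- For illustration, a minimal sketch of such a dual-RNN structure is given below, assuming PyTorch; the attention pooling, layer sizes, and shared embedding are simplified assumptions rather than the disclosed architecture.

```python
# Illustrative dual-RNN sketch: one GRU per side of the message exchange
# pair, attentional pooling over time, and a dense layer on top.
import torch
import torch.nn as nn

class SPModel(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)   # embedding layer at the bottom
        self.rnn_x = nn.GRU(emb_dim, hidden, num_layers=2, batch_first=True)
        self.rnn_y = nn.GRU(emb_dim, hidden, num_layers=2, batch_first=True)
        self.attn_x = nn.Linear(hidden, 1)             # attentional layers
        self.attn_y = nn.Linear(hidden, 1)
        self.dense = nn.Linear(hidden * 2, 1)          # dense layer on the top

    @staticmethod
    def pool(out, attn):
        w = torch.softmax(attn(out), dim=1)            # (B, T, 1) attention weights
        return (w * out).sum(dim=1)                    # weighted sum over time steps

    def forward(self, x, y):
        out_x, _ = self.rnn_x(self.emb(x))
        out_y, _ = self.rnn_y(self.emb(y))
        feat = torch.cat([self.pool(out_x, self.attn_x),
                          self.pool(out_y, self.attn_y)], dim=-1)
        return torch.sigmoid(self.dense(feat)).squeeze(-1)   # score in [0, 1]
```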
- the SP model may generate a prediction sentiment state determination $\hat{s}_{n+1}$ of a next input message.
- the SP model may be trained based on the labeled corpus annotated by the SA model disclosed above.
- the processing engine 112 may obtain a real sentiment state determination $s_{n+1}$ of the next input message by applying the SA model on the next input message. The processing engine 112 may further obtain a prediction sentiment state determination $\hat{s}_{n+1}$ of the next input message by feeding the message exchange pair into the SP model.
- the processing engine 112 may train the SP model by minimizing the mean squared error (MSE) between $\hat{s}_{n+1}$ and $s_{n+1}$ based on Equation (8) as follows: $L_{SP}(\phi) = \frac{1}{N} \sum_{n} (\hat{s}_{n+1} - s_{n+1})^2$ (8), where $L_{SP}(\phi)$ denotes a total loss function of the SP model and $N$ denotes the number of training pairs.
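- For illustration, one SP training step minimizing the MSE of Equation (8) may be sketched as follows, assuming PyTorch; sp_model and its input format are illustrative assumptions that reuse the dual-RNN sketch above.

```python
# Illustrative sketch of one SP training step under Eq. (8).
import torch.nn.functional as F

def sp_train_step(sp_model, optimizer, x, y, real_scores):
    optimizer.zero_grad()
    pred = sp_model(x, y)                    # predicted sentiment in [0, 1]
    loss = F.mse_loss(pred, real_scores)     # L_SP(phi), Eq. (8)
    loss.backward()
    optimizer.step()
    return loss.item()
```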
- the machine-learned SP model may be used to construct the SAC model.
- FIG. 12 is a flowchart illustrating an exemplary process for operating a SAC model according to some embodiments of the present disclosure.
- the process 1200 may be executed by the SAC system 100.
- the process 1200 may be implemented as a set of instructions (e.g., an application) stored in a storage device (e.g., the storage device 160) .
- the processor 210 and/or the modules in FIG. 4 may execute the set of instructions and, when executing the instructions, the processor 210 and/or the modules may be configured to perform the process 1200.
- the operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 1200 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed.
- the processing engine 112 may receive an input message from an input device.
- the input message may include sentimental elements including a level of negative emotion of a user of the input device.
- the level of negative emotion may refer to a judgement of whether the user is in good humor.
- for example, if the user is not satisfied with the service provider, the user may edit a complaint message on his or her mobile phone (the user terminal 130) and further transmit the complaint message to the processing engine 112 of the server 110.
- the complaint message may include such sentimental elements.
- the sentimental elements may include some characteristic words that can describe the user’s emotion (e.g., damn, great, unsatisfied, etc. ) .
- the processing engine 112 may apply a sentimental appeasement chatbot (SAC) model to the input message to generate a sentiment appeasement responsive message based on the sentimental elements.
- the sentiment appeasement responsive message may include appeasement elements reacting upon emotional elements of the input message.
- the SAC model may be machine-learned based on the proposed methods and systems disclosed in the present disclosure (e.g., FIG. 6, and the descriptions thereof) .
- the processing engine 112 may invoke the machine-learned SAC model stored in the storage 160 to generate the sentiment appeasement responsive message based on the sentimental elements.
- the sentimental elements included in the input message may relate to a complaint about the service.
- the sentiment appeasement responsive message generated by the machine-learned SAC model may include an apology utterance to say sorry to the user.
- the processing engine 112 may transmit the sentiment appeasement responsive message to the output device.
- FIG. 13 is a flowchart illustrating an exemplary process for operating an SAC model according to some embodiments of the present disclosure.
- the process 1300 may be executed by the SAC system 100.
- the process 1300 may be implemented as a set of instructions (e.g., an application) stored in a storage device (e.g., the storage device 160) .
- the processor 210 and/or the modules in FIG. 4 may execute the set of instructions and, when executing the instructions, the processor 210 and/or the modules may be configured to perform the process 1300.
- the operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 1300 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed.
- the process 1300 may be implemented on a user terminal 130.
- the input device of the user terminal 130 may transmit an input message to a processor.
- the input message may include sentimental elements indicating a level of negative emotion of a user. For example, if the user is not satisfied with the service provider, the user may edit a complaint message on his or her mobile phone (the user terminal 130) and further transmit the complaint message to the processing engine 112 of the server 110.
- the complaint message may include such sentimental elements.
- the sentimental elements may include some characteristic words that can describe the user’s emotion (e.g., damn, great, unsatisfied, etc. ) .
- the output device of the user terminal 130 may receive a sentiment appeasement responsive message from the processor.
- the sentiment appeasement responsive message may be generated by applying a sentimental appeasement chatbot (SAC) model to the input message based on the sentimental elements.
- the sentiment appeasement responsive message may include appeasement elements reacting upon emotional elements of the input message.
- the processing engine 112 may invoke a trained SAC model stored in the storage 160 to generate the sentiment appeasement responsive message.
- the trained SAC model may be obtained based on the proposed methods and systems in the present disclosure (e.g., FIG. 6, and the descriptions thereof) .
- the processing engine 112 may then transmit the sentiment appeasement responsive message to the output device.
- the chatbot model is constructed based on the seq2seq framework, the encoder and the decoder are both 2-layer stacking GRU, and the dense layers and the attention layers are combined on top of the seq2seq framework.
- the SP model has dual-RNN structure, and both RNNs in the SP model are 2-layer stacking GRU as well.
- a pre-trained word2vec model may be used to prepare the embedding layers.
- the word2vec model may be configured to take a text corpus as input and produce word vectors as output. It first constructs a vocabulary from the training text data and then learns vector representations of words. The resulting word vector file can be used as features in many natural language processing and machine learning applications.
- Gensim is a robust open-source vector space modeling and topic modeling toolkit implemented in Python. It uses NumPy, SciPy and optionally Cython for performance. Gensim is specifically designed to handle large text collections, using data streaming and efficient incremental algorithms, which differentiates it from most other scientific software packages that only target batch and in-memory processing.
- a skip-gram model with negative sampling may be used to train the word2vec model, and the output dimension of the embedded vector is 128.
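- For illustration, preparing such embeddings with Gensim may look like the following sketch (assuming Gensim 4.x); the toy sentences are illustrative placeholders for the tokenized corpus.

```python
# Illustrative sketch: skip-gram with negative sampling, 128-dimensional output.
from gensim.models import Word2Vec

sentences = [["flight", "delayed", "again"],
             ["sorry", "for", "the", "delay"]]
model = Word2Vec(
    sentences,
    vector_size=128,   # output dimension of the embedded vector
    sg=1,              # skip-gram (sg=0 would be CBOW)
    negative=5,        # negative sampling with 5 noise words
    window=5,
    min_count=1,
)
vec = model.wv["delay"]   # a 128-dimensional word vector
```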
- the skip-gram model may include a corpus of words w and their contexts c.
- the conditional probabilities of the skip-gram model are denoted as $p(c \mid w; \theta)$.
- the goal is to set the parameters $\theta$ of $p(c \mid w; \theta)$ so as to maximize the corpus probability $\prod_{(w, c) \in D} p(c \mid w; \theta)$, where $D$ denotes the set of all word-context pairs in the corpus.
- the negative-sampling approach is a more efficient way of deriving word embedding. While negative-sampling is based on the skip-gram model, it is in fact optimizing a different objective.
- the Adam algorithm is a method for efficient stochastic optimization that only requires first-order gradients with little memory requirement.
- the method computes individual adaptive learning rates for different parameters from estimates of first and second moments of the gradients.
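- A compact sketch of the Adam update described above is given below for illustration; the default hyperparameters follow commonly published values and are not taken from the disclosure.

```python
# Illustrative sketch: per-parameter adaptive learning rates from
# bias-corrected first- and second-moment estimates of the gradients.
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad             # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2        # second-moment estimate
    m_hat = m / (1 - b1 ** t)                # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```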
- the SP model is evaluated by comparing the performance of the SP model with other baseline models.
- the SP model and the baseline models are trained based on a training data set.
- the parameters of the models are tuned to achieve the best performance based on a validation data set.
- Exemplary baseline models used to compare with the SP model include a Dual-RNN-Attn-Char model, a Dual-RNN model, an MLP model, an RR model, and an LR model.
- the structure of the Dual-RNN-Attn-Char model may be the same as that of the SP model.
- the input of the Dual-RNN-Attn-Char model is characters instead of words used in the SP model.
- the Dual-RNN model is constructed based on the SP model by removing the attentional layers. Therefore, the contribution of the attentional layers to the performance of the SP model may be demonstrated.
- the MLP model is constructed by applying dense layers to make the sentiment state determination predictions instead of using the dual-RNN structure.
- the RR model is constructed based on the Ridge Regression algorithm from the scikit-learn library.
- a max pooling layer is applied on top of the embedded representations from the message exchange pairs.
- the sentiment state determination may be predicted based on the Ridge Regression algorithm.
- the Ridge Regression algorithm is a technique for analyzing multiple regression data that suffer from multicollinearity. When multicollinearity occurs, least squares estimates are unbiased, but their variances are large so they may be far from the true value. By adding a degree of bias to the regression estimates, ridge regression reduces the standard errors.
- Scikit-learn (formerly scikits.learn) is a free software machine learning library for the Python programming language. It features various classification, regression and clustering algorithms including support vector machines, random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy.
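- For illustration, the RR baseline may be sketched as follows with scikit-learn; the random embeddings and scores are placeholders for the real embedded message exchange pairs.

```python
# Illustrative sketch of the RR baseline: max pooling over embedded tokens,
# then Ridge Regression to predict sentiment state determinations.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_embed = rng.normal(size=(1000, 50, 128))   # (pairs, tokens, embedding dim)
X = X_embed.max(axis=1)                      # max pooling layer over tokens
y = rng.uniform(size=1000)                   # sentiment scores in [0, 1]

rr = Ridge(alpha=1.0)   # the L2 penalty adds bias but reduces variance
rr.fit(X, y)
pred = rr.predict(X)
```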
- the LR model is similar to the RR model.
- Linear Regression is used instead of Ridge Regression in the LR model.
- Table 3 shows the results of the models in terms of mean squared error (MSE) . As shown in Table 3, the SP model outperforms the other baseline models, which indicates that it has the ability to make reasonable sentiment state determination predictions.
- the SAC model is evaluated.
- a corpus including 1,000 samples of customers' input messages $x_n$ that have the most negative sentiments is incorporated in the evaluation.
- the corresponding responsive messages $y_n$ may also be obtained.
- the evaluation of the SAC model is performed in the following steps.
- Step one: all the corresponding annotated sentiment state determinations $s_n$ of the input messages $x_n$ are obtained (e.g., based on the SA model) for the selected group.
- Step two: all the corresponding annotated sentiment state determinations $s_{n+1}$ of the next input messages are obtained (e.g., based on the SA model) for the selected group.
- Step three: prediction sentiment state determinations are obtained based on the message exchange pairs $\langle x_n, y_n \rangle$ in the selected corpus.
- Step four: temporal sentiment state determinations are obtained for each message exchange pair based on the SP model, where $\hat{y}_n$ represents a responsive message generated by the SAC model trained with different values of $\lambda$.
- Step five: five standard appeasing utterances (SAUs) which are frequently used in daily customer service are proposed in the example.
- the five SAUs are shown in Table 4.
- each of the SAUs is paired with a selected input message $x_n$ and processed by the SP model to generate its prediction sentiment state determination $\hat{s}_{n+1}$.
- Table 5 shows the results of the sentiment scores.
- the 1,000 samples of input messages are annotated by the SA model (Sentiment score of $x_n$).
- the next input messages from the customers are also annotated (Sentiment score of $\langle x_n, y_n \rangle$).
- the samples of input messages paired with their responsive messages among the 5 SAUs are also fed into the SP model to obtain the prediction sentiment state determinations (Prediction score of $\langle x_n, y_n \rangle$).
- the responsive messages generated by the SAC model perform better than the SAUs in appeasing customer sentiments.
- the responsive messages generated by the SAC model may have a better performance in sentiment appeasement.
- when the value of $\lambda$ is 0, which means that the SAC model equals the chatbot model, the accuracy and perplexity of responsive messages may be better than with other values of $\lambda$.
- Table 7 Three examples of customers' input messages and the corresponding responsive messages generated by different SAC models.
- $s_n$ is the sentiment state determination of each input message $x_n$, and $\hat{s}_{n+1}$ is the predicted sentiment state determination of the generated responsive message.
- in some cases, the responsive message generated by the SAC model may have a negative effect in sentiment appeasement; in other cases, it may have a positive effect in sentiment appeasement.
- aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in an implementation combining software and hardware that may all generally be referred to herein as a "block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
- a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electro-magnetic, optical, or the like, or any suitable combination thereof.
- a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing.
- Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python or the like, conventional procedural programming languages, such as the "C" programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN) , or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider) or in a cloud computing environment or offered as a service such as a software as a service (SaaS) .
Abstract
The invention provides a system and method for training a sentimental appeasement chatbot model. The method may include: obtaining a corpus; applying one or more machine-learning processes to the corpus to train a chatbot model to obtain a machine-learned chatbot model; applying one or more machine-learning processes to the corpus to train a sentiment predictor model to obtain a machine-learned sentiment predictor model; and applying one or more machine-learning processes to the corpus to train the sentimental appeasement chatbot model to obtain a machine-learned sentimental appeasement chatbot model. The sentimental appeasement chatbot model may be constructed based on the machine-learned sentiment predictor model and the machine-learned chatbot model.