WO2022227211A1 - Bert-based chapter multi-intent recognition method, device, and readable storage medium - Google Patents

Bert-based chapter multi-intent recognition method, device, and readable storage medium

Info

Publication number
WO2022227211A1
Authority
WO (WIPO/PCT)
Prior art keywords
chapter, semantic, intent, recognition, recognition unit
Application number
PCT/CN2021/097234
Other languages
English (en)
French (fr)
Inventor
梁子敬
Original Assignee
平安科技(深圳)有限公司
Application filed by 平安科技(深圳)有限公司
Publication of WO2022227211A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/205 Parsing
    • G06F 40/211 Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • G06F 40/30 Semantic analysis
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/25 Fusion techniques
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods

Definitions

  • the present application relates to the technical field of artificial intelligence, and in particular, to a Bert-based multi-intent recognition method, apparatus, electronic device, and computer-readable storage medium.
  • Intent recognition is often required in real-world scenarios involving multiple rounds of conversation.
  • In human-machine conversations, problems that the intelligent customer service cannot solve are prone to occur, or users may need upgraded services.
  • The above process involves extracting the user's intents from the question-and-answer exchange with the robot (that is, identifying multiple intents across multiple rounds of dialogue), so that these intents can be assigned to human customer service agents familiar with the corresponding business for processing.
  • The present application provides a Bert-based chapter multi-intent recognition method, device, electronic device, and computer-readable storage medium, the main purpose of which is to solve the problems of chapter-level understanding and multi-intent recognition through the Bert model and the lstm model.
  • The Bert-based chapter multi-intent recognition method provided by the application is applied to an electronic device, and the method includes:
  • the semantic vector of each recognition unit is input into the fusion classification recognition model, and the intent information of the to-be-recognized chapter is obtained.
  • The present application also provides a Bert-based chapter multi-intent recognition device, the device comprising:
  • a to-be-recognized chapter acquisition module, configured to acquire a to-be-recognized chapter according to user interaction content, wherein the to-be-recognized chapter is divided into at least two recognition units according to preset rules;
  • a preprocessing module, configured to perform element splicing preprocessing on the recognition unit sentences;
  • a semantic vector acquisition module, configured to input the preprocessed recognition units into the Bert model for training and obtain the semantic vector of each recognition unit;
  • an intent information acquisition module, configured to input the semantic vector of each recognition unit into the fusion classification recognition model and acquire the intent information of the to-be-recognized chapter.
  • the present application also provides an electronic device, the electronic device includes:
  • the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the steps of the above-mentioned Bert-based chapter multi-intent recognition method.
  • The present application further provides a computer-readable storage medium, where at least one instruction is stored, and the at least one instruction is executed by a processor in an electronic device to implement the steps of the above-mentioned Bert-based chapter multi-intent recognition method.
  • The embodiment of the present application acquires the to-be-recognized chapter according to the user interaction content, and divides the to-be-recognized chapter into at least two recognition units according to preset rules; element splicing preprocessing is performed on the recognition unit sentences;
  • the preprocessed recognition units are input into the Bert model for training, and the semantic vector of each recognition unit is obtained;
  • the semantic vector of each recognition unit is input into the fusion classification recognition model, and all intent information contained in the recognition units of the to-be-recognized chapter is obtained.
  • the main purpose of this application is to solve the problems of chapter-level understanding and multi-intent recognition through the Bert model and the lstm model.
  • FIG. 1 is a schematic flowchart of a Bert-based chapter multi-intent recognition method provided by an embodiment of the present application;
  • FIG. 2 is a schematic block diagram of a multi-intent recognition device based on a Bert chapter provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of the internal structure of an electronic device that implements a Bert-based chapter multi-intent identification method provided by an embodiment of the present application;
  • The present application provides a Bert-based chapter multi-intent recognition method.
  • FIG. 1 is a schematic flowchart of a Bert-based chapter multi-intent recognition method provided by an embodiment of the present application. The method may be performed by an apparatus, which may be implemented in software and/or hardware.
  • The Bert-based chapter multi-intent recognition method includes:
  • S1: Acquire the to-be-recognized chapter according to the user interaction content, wherein the to-be-recognized chapter is divided into at least two recognition units according to preset rules;
  • S2: Perform element splicing preprocessing on the recognition unit sentences;
  • S3: Input the preprocessed recognition units into the Bert model for training, and obtain the semantic vector of each recognition unit;
  • S4: Input the semantic vector of each recognition unit into the fusion classification recognition model, and obtain the intent information of the to-be-recognized chapter.
  • In step S1, the questions and descriptive texts generated between the user and the intelligent customer service over multiple rounds of interaction are combined into a chapter, so that the user's intent can be understood at the level of the overall semantics.
  • the acquisition of the to-be-identified chapter according to the user interaction content includes the following steps:
  • dividing the to-be-recognized chapter into at least two identification units according to a preset rule includes the following steps:
  • the to-be-identified chapter is segmented by sentence segmentation symbols; wherein, the preset rules include sentence segmentation symbols, and the sentence segmentation symbols include periods, semicolons, exclamation marks and question marks;
  • a sentence or question formed by segmenting the to-be-recognized chapter is determined as a recognition unit.
  • The to-be-recognized chapter is divided into at least two recognition units according to preset rules, and a punctuation mark representing a complete sentence may be used as one of the rules for dividing the recognition units; for example, the to-be-recognized chapter is divided into several sentences according to the sentence segmentation symbols.
  • the sentence segmentation symbols may include periods, question marks, exclamation marks, etc.
  • the sentences segmented according to these sentence segmentation symbols are identification units.
  • the preset rules include sentence segmentation symbols, etc.
  • the sentence segmentation symbols include periods, semicolons, exclamation marks, question marks, etc.
  • The recognition unit includes sentences and questions containing one sentence segmentation symbol, etc.
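  • The sentence-segmentation rule above can be sketched in Python; the helper name and the exact symbol set are illustrative assumptions, not part of the application:

```python
import re

# Sentence segmentation symbols from the preset rules: period, semicolon,
# exclamation mark, and question mark (full-width and half-width forms).
# The exact symbol set here is an assumption for illustration.
SEGMENT_SYMBOLS = "。；！？.;!?"
SEGMENT_PATTERN = re.compile(
    f"[^{re.escape(SEGMENT_SYMBOLS)}]+[{re.escape(SEGMENT_SYMBOLS)}]?"
)

def split_into_recognition_units(chapter):
    """Split a to-be-recognized chapter into recognition units.

    Each sentence or question formed by cutting at a segmentation
    symbol becomes one recognition unit."""
    units = [m.group().strip() for m in SEGMENT_PATTERN.finditer(chapter)]
    return [u for u in units if u]

chapter = "我想查询账单。为什么扣款失败？请帮我转人工！"
units = split_into_recognition_units(chapter)
# Three recognition units, one per sentence/question.
```

A chapter with no segmentation symbols at all simply yields a single recognition unit.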
  • In step S2, the element splicing preprocessing performed on the recognition unit includes the following steps:
  • S21: Splice at least two pieces of intent information at the starting position of each recognition unit;
  • S22: Splice a hyperparameter at the end position of each recognition unit;
  • S23: Determine the semantic symbol sequence of the recognition unit according to the intent information and the hyperparameter.
  • Some input adjustments are made to the recognition unit.
  • The following takes sentences as an example for explanation, making input adjustments for sentences 1 to n.
  • In step S21, three symbols [CLS], [unused1], and [unused2] are spliced at the head of the original sentence, so that during training the Bert model learns to encode three pieces of information into [CLS], [unused1], and [unused2]: the main intent, the second intent, and the third intent, thereby preparing the input for the subsequent chapter-level intents.
  • The intents in each sentence are not limited to the above three pieces of intent information.
  • In step S22, in order to meet the input requirements of Bert model training, a padding operation to the same length is required for the sentences to be input into Bert; that is, a hyperparameter max_len is spliced at the end of each sentence (the size agreed here is 128), and max_len is used as the maximum length of a single input sentence for the Bert model.
  • In step S23, when the length of the sequence [[cls], [unused1], [unused2], [sentence], [SEP]] does not exceed max_len, there are two cases:
  • the semantic symbol sequence is:
  • the value of i ranges from 1 to n, indicating that the current input sequence is the i-th; this information is input into the Bert model cyclically, and semantic information is output for each sentence.
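  • The splicing and padding of steps S21 to S23 can be sketched as follows; the [PAD] token, the truncation policy, and the function name are assumptions for illustration:

```python
MAX_LEN = 128  # the hyperparameter max_len agreed in the embodiment

def build_symbol_sequence(sentence_tokens, max_len=MAX_LEN):
    """Build the semantic symbol sequence
    [[CLS], [unused1], [unused2], sentence..., [SEP]] padded to max_len.

    [CLS], [unused1], [unused2] are the slots for the main, second, and
    third intent; the [PAD] filler and truncation rule are assumptions."""
    seq = ["[CLS]", "[unused1]", "[unused2]"] + list(sentence_tokens) + ["[SEP]"]
    if len(seq) > max_len:
        # Truncate the sentence body so the sequence fits, keeping [SEP] last.
        seq = seq[: max_len - 1] + ["[SEP]"]
    else:
        seq = seq + ["[PAD]"] * (max_len - len(seq))
    return seq

seq = build_symbol_sequence("查询账单")
# len(seq) == 128; the first three positions carry the intent slots.
```

Each sentence in the chapter is passed through this function in turn, producing the fixed-length inputs that are fed to the Bert model cyclically.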
  • In step S3, the preprocessed recognition units are input into the Bert model for training, and the semantic vector of each recognition unit is obtained, including the following steps:
  • S31 Input the semantic symbol sequence into the Bert model, and obtain the semantic representation vector corresponding to the position of each semantic symbol in the semantic symbol sequence;
  • S32 Determine the overall semantic vector of the to-be-identified chapter according to the acquired semantic representation vector.
  • After training, the semantic information of each sentence is obtained, and this semantic information is summarized into the three information representations [CLS], [unused1], and [unused2].
  • Here doc_num denotes the number of sentences in the chapter to be recognized, and the concatenated representation has dimension 3*hidden_size.
  • Specifically, the first three Token positions of sequence_output are taken; the information contained in these three Tokens is [cls], [unused1], and [unused2].
  • step S32 the overall semantic vector of the to-be-recognized chapter is obtained by the following formula:
  • hidden_output_i represents the deep semantic information of each sentence after dropout filtering (in a text summarization task, this semantic information can be understood as the representation of the current sentence).
  • Dropout denotes the dropout layer, which temporarily discards input neural network units from the network with a certain probability.
  • pooled_output_i denotes the output after the current sentence is input into bert; if the current sentence is sentence sent_i, then this output equals bert_output[sent_i].
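  • Step S3's vector extraction can be sketched with numpy, using random arrays in place of a real Bert output; the toy sizes (hidden_size=8, max_len=16) and variable names are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_size = 8   # toy size for illustration; Bert-base uses 768
doc_num = 3       # number of sentences in the to-be-recognized chapter
max_len = 16      # toy stand-in for the max_len hyperparameter

# Stand-in for Bert's sequence_output: one row of token vectors per sentence.
sequence_output = rng.normal(size=(doc_num, max_len, hidden_size))

# Take the first three token positions ([cls], [unused1], [unused2]) of each
# sentence and concatenate them into one vector of dimension 3*hidden_size.
pooled_output = sequence_output[:, :3, :].reshape(doc_num, 3 * hidden_size)

def dropout(x, p=0.1, training=False):
    """Dropout layer: during training each unit is temporarily discarded
    with probability p (survivors rescaled); at inference it is the identity."""
    if not training:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

# hidden_output_i: the deep semantic information of each sentence
# after dropout filtering.
hidden_output = dropout(pooled_output)
```

The resulting (doc_num, 3*hidden_size) matrix is the per-sentence semantic vector sequence consumed by the fusion classification model in step S4.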
  • step S4 the semantic vector of each recognition unit is input into the fusion classification recognition model, and all intent information contained in the recognition unit of the to-be-recognized chapter is obtained, including the following steps:
  • S41 input the semantic vector of each recognition unit into the lstm model for training, and obtain the semantic information of the to-be-recognized chapter, wherein the semantic information includes the intent summary information of each recognition unit;
  • In step S41, the vector sequence whose length equals the number of sentences is input into the lstm to obtain the semantic information of the whole chapter (the main intent, the second intent, and the third intent), that is, a summary of the three intents at the level of each sentence.
  • lstm_output represents: the summary semantics after understanding the entire text sequence.
  • lstm means: time series network structure, which will understand the input time series and make a summary output.
  • cat(hidden_output i ) represents: the deep semantic information of each sentence after dropout filtering.
  • step S42 after obtaining the corresponding lstm_output, the information is input into a fully connected network and sigmoid structure for multi-classification of intentions.
  • the formula of this process is as follows.
  • the vector output by formula 1 obtains the final intention through the sigmoid function (formula 4).
  • Formula 3 represents a fully connected network structure, that is, a pure MLP network unit, where the dimension of w_i is (3*bert_hidden_size)*intent_class, and intent_class represents the number of intent classes.
  • h represents the output of the network structure, which is a further understanding of the text chapter, that is, the text-level understanding before the intents are extracted.
  • w i means: the weight of each neuron in the network structure is the parameter optimized by the model during the training process.
  • b i means: the bias of each neuron in the network structure is the parameter optimized by the model during the training process.
  • cls means performing unified processing on the extracted summary information to finally obtain, for each sentence, whether it can currently serve as a summary sentence.
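  • Step S4 (lstm summary, then a fully connected layer plus sigmoid for multi-label intent classification) can be sketched in numpy; the single-layer lstm, toy dimensions, random weights, and the 0.5 threshold are illustrative assumptions, not the application's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_hid, intent_class = 24, 16, 5  # toy sizes; intent_class = number of intent labels

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_summary(xs, W, U, b, d_hid):
    """Minimal single-layer lstm over the sentence vectors; the last hidden
    state serves as the chapter-level summary (lstm_output)."""
    h = np.zeros(d_hid)
    c = np.zeros(d_hid)
    for x in xs:                          # one time step per sentence vector
        z = W @ x + U @ h + b             # input/forget/output/candidate gates, packed
        i = sigmoid(z[0 * d_hid:1 * d_hid])
        f = sigmoid(z[1 * d_hid:2 * d_hid])
        o = sigmoid(z[2 * d_hid:3 * d_hid])
        g = np.tanh(z[3 * d_hid:])
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

# Per-sentence semantic vectors (hidden_output_i), stacked over the chapter.
xs = rng.normal(size=(3, d_in))
W = rng.normal(size=(4 * d_hid, d_in)) * 0.1
U = rng.normal(size=(4 * d_hid, d_hid)) * 0.1
b = np.zeros(4 * d_hid)
lstm_output = lstm_summary(xs, W, U, b, d_hid)

# Fully connected layer + sigmoid: one independent probability per intent,
# so several intents can be active at once (multi-label classification).
w = rng.normal(size=(intent_class, d_hid)) * 0.1
bias = np.zeros(intent_class)
probs = sigmoid(w @ lstm_output + bias)
predicted_intents = np.flatnonzero(probs > 0.5)  # all intents above threshold
```

Using sigmoid rather than softmax is what makes this multi-intent: each intent probability is independent, so a chapter can activate zero, one, or several intents at once.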
  • The embodiment of the present application acquires the to-be-recognized chapter according to the user interaction content, wherein the to-be-recognized chapter is divided into at least two recognition units according to preset rules; element splicing preprocessing is performed on the recognition units;
  • the preprocessed recognition units are input into the Bert model for training, and the semantic vector of each recognition unit is obtained; the semantic vector of each recognition unit is input into the fusion classification recognition model, and the intent information of the to-be-recognized chapter is obtained.
  • the main purpose of this application is to solve the problems of chapter-level understanding and multi-intent recognition through the Bert model and the lstm model.
  • FIG. 2 is a functional block diagram of the Bert-based chapter multi-intent recognition device.
  • The Bert-based chapter multi-intent recognition apparatus 100 described in this application can be installed in an electronic device.
  • the Bert-based multi-intent recognition device for chapters may include: a chapter to be recognized acquisition module 101 , a preprocessing module 102 , a semantic vector acquisition module 103 , and an intent information acquisition module 104 .
  • the modules described in this application may also be referred to as units, which refer to a series of computer program segments that can be executed by the processor of an electronic device and can perform fixed functions, and are stored in the memory of the electronic device.
  • each module/unit is as follows:
  • the to-be-recognized chapter acquiring module 101 is configured to acquire the to-be-recognized chapter according to the user interaction content, wherein the to-be-recognized chapter is divided into at least two identification units according to preset rules;
  • a preprocessing module 102 configured to perform element splicing preprocessing on the recognition unit sentence
  • the semantic vector acquisition module 103 is used to input the preprocessed recognition unit into the Bert model for training, and obtain the semantic vector of each recognition unit;
  • the intent information acquisition module 104 is configured to input the semantic vector of each recognition unit into the fusion classification recognition model, and acquire all the intent information contained in the recognition unit of the to-be-recognized chapter.
  • The acquisition of the to-be-recognized chapter according to the user interaction content by the acquisition module 101 includes:
  • the question text acquisition module is used to obtain the questions and descriptive texts generated by the user and the intelligent customer service in multiple rounds of interactions;
  • the to-be-recognized chapter forming module is used to combine the questions and the descriptive texts to form the to-be-recognized chapter.
  • the to-be-recognized chapter is divided into at least two recognition units according to a preset rule, and the punctuation mark representing a complete sentence can be used as one of the rules for dividing the recognition unit, for example:
  • the to-be-identified chapter is divided into several sentences according to sentence segmentation symbols.
  • the sentence segmentation symbols may include periods, question marks, exclamation marks, etc.
  • the sentences segmented according to these sentence segmentation symbols are identification units.
  • the preset rules include sentence segmentation symbols, etc.
  • the sentence segmentation symbols include periods, semicolons, exclamation marks, question marks, etc.
  • the recognition unit includes complete sentences, questions, and the like.
  • performing element splicing preprocessing on the identification unit includes:
  • an intent information splicing module, used for splicing at least two pieces of intent information of the recognition unit at the starting position of each recognition unit;
  • a hyperparameter splicing module, used for splicing a hyperparameter at the end position of each recognition unit;
  • a semantic symbol sequence determination module configured to determine the semantic symbol sequence of the recognition unit according to the intention information and the hyperparameters.
  • Some input adjustments are made to the recognition unit.
  • The following takes sentences as an example for explanation, making input adjustments for sentences 1 to n.
  • In the intent information splicing module, the three symbols [CLS], [unused1], and [unused2] are spliced at the head of the original sentence, so that during training the Bert model learns to encode three pieces of information into [CLS], [unused1], and [unused2]: the main intent, the second intent, and the third intent, thereby preparing the input for the subsequent chapter-level intents.
  • The intents in each sentence are not limited to the above three pieces of intent information.
  • A padding operation to the same length is required for the sentences to be input into Bert; that is, a hyperparameter max_len is spliced at the end of each sentence (the size agreed here is 128), and max_len is used as the maximum length of a single input sentence for the Bert model.
  • As in step S23, when the length of the sequence [[cls], [unused1], [unused2], [sentence], [SEP]] does not exceed max_len, there are two cases:
  • the semantic symbol sequence is:
  • the value of i ranges from 1 to n, indicating that the current input sequence is the i-th; this information is input into the Bert model cyclically, and semantic information is output for each sentence.
  • the preprocessed recognition unit is input into the Bert model for training, and the semantic vector of each recognition unit is obtained, including:
  • The semantic representation vector acquisition module is used to input the semantic symbol sequence into the Bert model and obtain the semantic representation vector corresponding to the position of each semantic symbol in the semantic symbol sequence;
  • the overall semantic vector obtaining module is configured to determine the overall semantic vector of the to-be-recognized chapter according to the obtained semantic representation vector.
  • After training, the semantic information of each sentence is obtained, and this semantic information is summarized into the three information representations [CLS], [unused1], and [unused2].
  • Here doc_num denotes the number of sentences in the chapter to be recognized, and the concatenated representation has dimension 3*hidden_size.
  • Specifically, the first three Token positions of sequence_output are taken; the information contained in these three Tokens is [cls], [unused1], and [unused2].
  • the overall semantic vector of the to-be-recognized chapter is obtained by the following formula:
  • hidden_output_i represents the deep semantic information of each sentence after dropout filtering (in a text summarization task, this semantic information can be understood as the representation of the current sentence).
  • Dropout denotes the dropout layer, which temporarily discards input neural network units from the network with a certain probability.
  • pooled_output_i denotes the output after the current sentence is input into bert; if the current sentence is sentence sent_i, then this output equals bert_output[sent_i].
  • the semantic vector of each recognition unit is input into the fusion classification recognition model, and all intent information contained in the recognition unit of the to-be-recognized chapter is acquired, including:
  • a semantic information acquisition module, configured to input the semantic vector of each recognition unit into the lstm model for training and acquire the semantic information of the to-be-recognized chapter, wherein the semantic information includes the intent summary information of each recognition unit;
  • a linear transformation processing module configured to perform a linear transformation process on the intent summary information to obtain all intent information contained in the identification unit of the chapter to be identified.
  • The vector sequence whose length equals the number of sentences is input into the lstm to obtain the semantic information of the whole chapter (the main intent, the second intent, and the third intent), that is, a summary of the three intents at the level of each sentence.
  • lstm_output represents: the summary semantics after understanding the entire text sequence.
  • lstm means: time series network structure, which will understand the input time series and make a summary output.
  • cat(hidden_output i ) represents: the deep semantic information of each sentence after dropout filtering.
  • In the linear transformation processing module, after the corresponding lstm_output is obtained, the information is input into a fully connected network and sigmoid structure for multi-classification of intents.
  • the formula of this process is as follows.
  • the vector output by formula 1 obtains the final intention through the sigmoid function (formula 4).
  • Formula 3 represents a fully connected network structure, that is, a pure MLP network unit, where the dimension of w_i is (3*bert_hidden_size)*intent_class, and intent_class represents the number of intent classes.
  • h represents the output of the network structure, which is a further understanding of the text chapter, that is, the text-level understanding before the intents are extracted.
  • w i means: the weight of each neuron in the network structure is the parameter optimized by the model during the training process.
  • b i means: the bias of each neuron in the network structure is the parameter optimized by the model during the training process.
  • cls means performing unified processing on the extracted summary information to finally obtain, for each sentence, whether it can currently serve as a summary sentence.
  • the to-be-recognized chapter is acquired according to the user interaction content, and the to-be-recognized chapter is divided into at least two recognition units according to preset rules; element splicing preprocessing is performed on the sentence of the recognition unit;
  • the processed recognition unit is input into the Bert model for training, and the semantic vector of each recognition unit is obtained;
  • the semantic vector of each recognition unit is input into the fusion classification recognition model to obtain the intent information of the to-be-recognized chapter.
  • the main purpose of this application is to solve the problems of chapter-level understanding and multi-intent recognition through the Bert model and the lstm model.
  • FIG. 3 it is a schematic structural diagram of an electronic device implementing the Bert-based multi-intent recognition method for chapters of the present application.
  • The electronic device 1 may include a processor 10, a memory 11 and a bus, and may also include a computer program stored in the memory 11 and executable on the processor 10, such as a Bert-based chapter multi-intent recognition program 12.
  • the memory 11 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, mobile hard disk, multimedia card, card-type memory (for example: SD or DX memory, etc.), magnetic memory, magnetic disk, CD etc.
  • the memory 11 may be an internal storage unit of the electronic device 1 in some embodiments, such as a mobile hard disk of the electronic device 1 .
  • The memory 11 may also be an external storage device of the electronic device 1, such as a pluggable mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), etc. equipped on the electronic device 1.
  • the memory 11 may also include both an internal storage unit of the electronic device 1 and an external storage device.
  • the memory 11 can not only be used to store application software installed in the electronic device 1 and various data, such as the code of a data audit program, etc., but also can be used to temporarily store data that has been output or will be output.
  • the memory may store content that may be displayed by the electronic device or sent to other devices (eg, headphones) for display or playback by the other devices.
  • the memory can also store content received from other devices. This content from other devices may be displayed, played, or used by the electronic device to perform any necessary tasks or operations that may be implemented by a computer processor or other components in the electronic device and/or wireless access point.
  • The processor 10 may be composed of integrated circuits, for example, a single packaged integrated circuit, or multiple packaged integrated circuits with the same or different functions, including one or more central processing units (CPU), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like.
  • The processor 10 is the control core (Control Unit) of the electronic device; it connects the various components of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device 1 and processes data by running or executing the programs or modules stored in the memory 11 (such as the data audit program, etc.) and calling the data stored in the memory 11.
  • The electronic device may also include a chipset (not shown) for controlling communications between the one or more processors and one or more of the other components of the user equipment.
  • The electronic device may be based on a particular processor architecture, and the processor and chipset may come from a corresponding processor and chipset family.
  • the one or more processors 104 may also include one or more application specific integrated circuits (ASICs) or application specific standard products (ASSPs) for handling specific data processing functions or tasks.
  • The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • the bus can be divided into address bus, data bus, control bus and so on.
  • the bus is configured to implement connection communication between the memory 11 and at least one processor 10 and the like.
  • the network and I/O interfaces may include one or more communication interfaces or network interface devices to provide data transfer between the electronic device and other devices (eg, network servers) via a network (not shown).
  • Communication interfaces may include, but are not limited to, a body area network (BAN), a personal area network (PAN), a wired local area network (LAN), a wireless local area network (WLAN), a wireless wide area network (WWAN), and the like.
  • User equipment 102 may be coupled to the network via a wired connection.
  • The wireless system interface may include hardware or software to broadcast and receive messages using the Wi-Fi Direct standard and/or the IEEE 802.11 wireless standard, the Bluetooth standard, the Bluetooth Low Energy standard, the WiGig standard, and/or any other wireless standards and/or combinations thereof.
  • Wireless systems may include transmitters and receivers or transceivers capable of operating within a wide range of operating frequencies governed by the IEEE 802.11 wireless standard.
  • Communication interfaces may utilize acoustic, radio frequency, optical, or other signals to exchange data between electronic devices and other devices such as access points, hosts, servers, routers, reading devices, and the like.
  • the network 118 may include, but is not limited to, the Internet, a private network, a virtual private network, a wireless wide area network, a local area network, a metropolitan area network, a telephone network, and the like.
  • Displays may include, but are not limited to, liquid crystal displays, light emitting diode displays, or E-InkTM displays manufactured by E Ink Corp. of Cambridge, Massachusetts, USA.
  • the display can be used to display content to the user in the form of text, images, or video.
  • the display may also operate as a touch screen display, which may enable a user to initiate commands or operations by touching the screen with certain fingers or gestures.
  • FIG. 3 only shows an electronic device with certain components. Those skilled in the art can understand that the structure shown in FIG. 3 does not constitute a limitation on the electronic device 1, which may include fewer or more components than those shown in the figure, a combination of certain components, or a different arrangement of components.
  • The electronic device 1 may also include a power supply (such as a battery) for powering the various components. Preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that the power management device implements functions such as charge management, discharge management, and power consumption management.
  • The power source may also include one or more DC or AC power sources, recharging devices, power failure detection circuits, power converters or inverters, power status indicators, and any other components.
  • The electronic device 1 may further include various sensors, Bluetooth modules, Wi-Fi modules, etc., which will not be repeated here.
  • The electronic device 1 may also include a network interface; optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a Bluetooth interface, etc.), which is typically used to establish a communication connection between the electronic device 1 and other electronic devices.
  • The electronic device 1 may further include a user interface, which may be a display (Display) or an input unit (e.g., a keyboard (Keyboard)); optionally, the user interface may also be a standard wired interface or a wireless interface.
  • The display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, and the like.
  • The display may also appropriately be called a display screen or a display unit, and is used for displaying information processed in the electronic device 1 and for displaying a visualized user interface.
  • The Bert-based chapter multi-intent recognition program 12 stored in the memory 11 of the electronic device 1 is a combination of multiple instructions which, when run on the processor 10, can implement: obtaining a chapter to be recognized according to user interaction content, and splitting the chapter into at least two recognition units according to preset rules; performing element-splicing preprocessing on the recognition units; inputting the preprocessed recognition units into a Bert model for training to obtain the semantic vector of each recognition unit; and inputting the semantic vector of each recognition unit into the fusion classification recognition model to obtain the intent information of the chapter to be recognized.
  • The modules/units integrated in the electronic device 1 may be stored in a computer-readable storage medium.
  • The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a read-only memory (ROM, Read-Only Memory).
  • The computer-readable storage medium may be non-volatile or volatile.
  • A computer-readable storage medium stores at least one instruction, and the at least one instruction is executed by a processor in an electronic device to implement the steps of the above-mentioned Bert-based chapter multi-intent recognition method: obtaining a chapter to be recognized according to user interaction content, and splitting the chapter into at least two recognition units according to preset rules; performing element-splicing preprocessing on the recognition units; inputting the preprocessed recognition units into a Bert model for training to obtain the semantic vector of each recognition unit; and inputting the semantic vector of each recognition unit into the fusion classification recognition model to obtain the intent information of the chapter to be recognized.
  • Modules described as separate components may or may not be physically separated, and components shown as modules may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution in this embodiment.

Abstract

The present application relates to artificial intelligence, and provides a Bert-based chapter multi-intent recognition method, apparatus, electronic device, and computer-readable storage medium. The method includes: obtaining a chapter to be recognized according to user interaction content, and splitting the chapter into at least two recognition units according to preset rules; performing element-splicing preprocessing on the recognition units; inputting the preprocessed recognition units into a Bert model for training to obtain the semantic vector of each recognition unit; and inputting the semantic vector of each recognition unit into a fusion classification recognition model to obtain the intent information of the chapter to be recognized. The main purpose of the application is to solve chapter-level understanding and multi-intent recognition by means of a Bert model and an LSTM model.

Description

基于Bert的篇章的多意图识别方法、设备及可读存储介质
本申请要求于2021年04月30日提交中国专利局、申请号为202110480025.6,发明名称为“基于Bert的篇章的多意图识别方法、设备及可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及人工智能技术领域,尤其涉及一种基于Bert的篇章的多意图识别方法、装置、电子设备及计算机可读存储介质。
背景技术
在一些多轮交谈的实际场景中常常需要进行意图识别,在人机对话中容易出现一些智能客服无法解决的问题,或者用户需要一些升级服务,在上述过程中会涉及到提炼用户在与机器人问答过程中的意图(即:在多轮对话中,识别多个意图);以便将这些意图分配给对应业务熟悉的人工客服去处理。
其中,发明人意识到为了解决用户与智能客服在多轮交互中产生的问题或者描述性文字,将这些问题和描述性文字联合起来作为一个篇章,以便从整个语意层面来理解用户的意图,该过程需要解决两个难题,即:篇章级别的理解和多意图识别模型;但是目前业内并没有方法解决篇章级别的多意图识别的问题。
为了解决上述问题,亟需一种能够解决篇章级别的多意图识别的问题的识别方案。
发明内容
本申请提供一种基于Bert的篇章的多意图识别方法、装置、电子设备及计算机可读存储介质,其主要目的在于通过Bert模型和lstm模型,解决篇章级别的理解和多意图识别的问题。
为实现上述目的,本申请提供的基于Bert的篇章的多意图识别方法,应用于电子设备,所述方法包括:
根据用户交互内容获取待识别篇章，并按照预设规则将所述待识别篇章切分为至少两个识别单元；
对所述识别单元进行要素拼接预处理;
将预处理后的识别单元输入到Bert模型进行训练,获取每个识别单元的语义向量;
将所述每个识别单元的语义向量输入到融合分类识别模型中,获取所述待识别篇章的意图信息。
为了解决上述问题,本申请还提供一种基于Bert的篇章的多意图识别装置,所述装置包括:
待识别篇章获取模块，用于根据用户交互内容获取待识别篇章，并按照预设规则将所述待识别篇章切分为至少两个识别单元；
预处理模块，用于对所述识别单元进行要素拼接预处理；
语义向量获取模块,用于将预处理后的识别单元输入到Bert模型进行训练,获取每个识别单元的语义向量;
意图信息获取模块,用于将所述每个识别单元的语义向量输入到融合分类识别模型中,获取所述待识别篇章的意图信息。
为了解决上述问题,本申请还提供一种电子设备,所述电子设备包括:
至少一个处理器;以及,
与所述至少一个处理器通信连接的存储器;其中,
所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行上述的基于Bert的篇章的多意图识别方法的步骤。
为了解决上述问题,本申请还提供一种计算机可读存储介质,所述计算机可读存储介质中存储有至少一个指令,所述至少一个指令被电子设备中的处理器执行以实现上述所述的基于Bert的篇章的多意图识别方法的步骤。
本申请实施例根据用户交互内容获取待识别篇章，并按照预设规则将所述待识别篇章切分为至少两个识别单元；对所述识别单元进行要素拼接预处理；将预处理的识别单元输入到Bert模型进行训练，获取每个识别单元的语义向量；将所述每个识别单元的语义向量输入到融合分类识别模型中，获取所述待识别篇章的识别单元中包含的所有意图信息。本申请的主要目的在于通过Bert模型和lstm模型，解决篇章级别理解和多意图识别的问题。
附图说明
图1为本申请一实施例提供的基于Bert的篇章的多意图识别方法的流程示意图;
图2为本申请一实施例提供的基于Bert的篇章的多意图识别装置的模块示意图;
图3为本申请一实施例提供的实现基于Bert的篇章的多意图识别方法的电子设备的内部结构示意图;
本申请目的的实现、功能特点及优点将结合实施例,参照附图做进一步说明。
具体实施方式
为解决上述问题,本申请提供一种基于Bert的篇章的多意图识别方法。参照图1所示,为本申请一实施例提供的基于Bert的篇章的多意图识别方法的流程示意图。该方法可以由一个装置执行,该装置可以由软件和/或硬件实现。
在本实施例中,基于Bert的篇章的多意图识别方法包括:
S1:根据用户交互内容获取待识别篇章，并按照预设规则将所述待识别篇章切分为至少两个识别单元；
S2:对所述识别单元进行要素拼接预处理;
S3:将预处理后的识别单元输入到Bert模型进行训练,获取每个识别单元的语义向量;
S4:将所述每个识别单元的语义向量输入到融合分类识别模型中,获取所述待识别篇章的意图信息。
上述即本申请基于人工智能的、基于Bert的篇章的多意图识别方法。在步骤S1中，为了处理用户与智能客服在多轮交互中产生的问题或者描述性文字，我们可以将这些问题和描述性文字联合起来作为一个篇章，以便从整个语意层面来理解用户的意图。其中，所述根据用户交互内容获取待识别篇章，包括如下步骤：
S11:获取用户与智能客服在多轮交互中产生的问题和表述性文字;
S12:将所述问题和所述表述性文字相互联合,形成待识别篇章。
在本申请的一个实施方式中,所述按照预设规则将所述待识别篇章切分为至少两个识别单元,包括如下步骤:
通过句子切分符号对所述待识别篇章进行切分处理;其中,所述预设规则包括句子切分符号,所述句子切分符号包括句号、分号、感叹号以及问号;
将所述待识别篇章切分形成的句子或者问题确定为识别单元。
具体地，按照预设规则将所述待识别篇章切分为至少两个识别单元时，可以将表示一个完整句子的标点符号作为切分识别单元的规则之一，比如：将所述待识别篇章按照句子切分符号分割为若干个句子。其中，所述句子切分符号可以包括句号、问号、感叹号等，按照这些句子切分符号切分出的一个个句子即为一个个识别单元。也就是说，所述预设规则包括句子切分符号等，所述句子切分符号包括句号、分号、感叹号以及问号等；所述识别单元包括含有一个所述句子切分符号的句子和问题等。
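A minimal sketch of the splitting rule described above — cutting the chapter to be recognized at sentence-ending punctuation so each resulting sentence or question becomes one recognition unit. The function name and the exact delimiter set are illustrative assumptions, not from the original:

```python
import re

def split_chapter(chapter: str) -> list[str]:
    """Split a chapter into recognition units at sentence-ending
    punctuation, keeping each delimiter attached to its sentence."""
    # Preset rule: full-width period, semicolon, exclamation and question marks.
    parts = re.split(r"(?<=[。；！？])", chapter)
    return [p for p in (s.strip() for s in parts) if p]
```

Each element of the returned list would then go through the element-splicing preprocessing of step S2.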
在步骤S2中,所述对所述识别单元进行要素拼接预处理,包括如下步骤:
S21:在每个识别单元的起始位置拼接本识别单元的至少两个意图信息;
S22:在每个识别单元的末端位置拼接一个超参;
S23:根据所述意图信息和所述超参,确定所述识别单元的语义符号序列。
在本申请的实施例中，对所述识别单元(句子或者问题)做一些输入上的调整，以下将以句子为例进行说明，即对句子1到句子n做一些输入上的调整。首先，在步骤S21中，在原始句子的头部拼接[CLS]、[unused1]、[unused2]三个信息，这样做是期望使得Bert模型训练获知[CLS]、[unused1]和[unused2]这三个信息分别含有主意图、第二意图、第三意图这样三个信息，从而为后续得到篇章级别的意图做好输入准备。其中，需要说明的是，每个句子并不限于只拼接三个意图信息，可以根据需要拼接合适数量的意图信息。
在步骤S22中,为了满足输入到Bert模型训练的需求,对要输入到Bert中的句子需要做相同长度上的padding操作,即:在每个句子的末端位置拼接一个超参max_len(这里约定大小为128),max_len作为Bert模型单次最大输入句子长度。
在步骤S23中，当[[cls],[unused1],[unused2],[sentence],[SEP]]序列的长度不超过max_len时，分为两种情况：
其中,若[[cls],[unused1],[unused2],[sentence],[SEP]]序列中长度正好等于max_len时,则不需要在原序列中补充[PAD]字符;
其中,若[[cls],[unused1],[unused2],[sentence],[SEP]]小于max_len个WordPiece时,则补[PAD]字符,直到序列的个数正好为max_len为止。
语义符号序列为:
input_i=[[cls_i],[unused1_i],[unused2_i],[sentence_i],[SEP_i]]
其中,i的取值范围从1到n,表示当前输入序列为第i个,这些信息将循环输入到Bert模型中,传入的序列将传入Bert中,得到Bert每个句子的输出,取每个句子输出的语意信息。
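The input adjustment above can be sketched as follows: prepend the [CLS], [unused1], [unused2] intent placeholders, append [SEP], and pad with [PAD] until the sequence is exactly max_len (here 128) tokens long. The truncation branch is an assumption; the original text only covers sequences that fit within max_len:

```python
MAX_LEN = 128  # the hyperparameter max_len agreed on in the text

def build_input_tokens(sentence_tokens: list[str], max_len: int = MAX_LEN) -> list[str]:
    """Build input_i = [[cls], [unused1], [unused2], [sentence], [SEP]],
    padded with [PAD] so its length is exactly max_len."""
    seq = ["[CLS]", "[unused1]", "[unused2]"] + list(sentence_tokens) + ["[SEP]"]
    if len(seq) > max_len:
        # Assumed handling: truncate the sentence body, keep [SEP] last.
        seq = seq[: max_len - 1] + ["[SEP]"]
    return seq + ["[PAD]"] * (max_len - len(seq))
```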
在步骤S3中,所述将预处理的识别单元输入到Bert模型进行训练,获取每个识别单元的语义向量,包括如下步骤:
S31:将所述语义符号序列输入所述Bert模型中,得到所述语义符号序列中每个语义符号的位置所对应的语义表示向量;
S32:根据获取的语义表示向量,确定所述待识别篇章的整体语义向量。
在本申请的实施例中，在训练的过程中，将获得每句话的语义信息，这些语义信息将汇总到[CLS]、[unused1]、[unused2]三个信息表示中，将取得三个信息对应的切片位置，然后经过Dropout防止过拟合。每个篇章将得到长度为doc_num(待识别篇章的句子的个数)、维度为3*hidden_size的语义表示。
此外,[CLS]、[unused1]和[unused2]这三个信息是模型做梯度下降时自动学会的,也可以理解为,在模型训练的过程中,模型构建的截取对应位置信息做融合再去做分类预测,就是使得模型具有将意图汇总至[CLS]、[unused1]、[unused2]三个token上的能力。
具体地,将每个句子输出的语义向量经过一次dropout层(参考公式一)后取sequence_output的前三个Token信息,这三个Token中包含的信息有[cls],[unused1],[unused2],
其维度为batch_size*max_seq_len*(3*bert_hidden_size),通过循环的Bert处理后将获得n个信息后作为一个序列输入LSTM网络结构,获得最后一层隐层状态(维度为:batch_size*3*bert_hidden_size)。
在步骤S32中，通过如下公式获取所述待识别篇章的整体语义向量：
hidden_output_i=dropout(pooled_output_i),i=(1,2,…,n)……公式一
其中，hidden_output_i表示:每个句子经过dropout过滤处理后的深层语意信息(该语意信息可以理解为当前句子在本任务中的深层语义表示)。
dropout表示:dropout层，对输入的神经网络单元，按照一定的概率将其暂时从网络中丢弃。
pooled_output_i表示:当前句子输入Bert后的输出，其中，若当前为第sent_i个句子，那么该输出与bert_output[sent_i]含义相同。
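Formula 1 and the token slicing can be illustrated with NumPy: for each sentence, take the Bert outputs at the [CLS], [unused1], [unused2] positions, flatten them into a 3*hidden_size vector, and apply dropout during training. The function is a hypothetical stand-in for the real Bert forward pass; inverted dropout is an implementation assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def sentence_semantic_vector(sequence_output: np.ndarray,
                             drop_prob: float = 0.1,
                             train: bool = False) -> np.ndarray:
    """sequence_output: (max_len, hidden_size) Bert output for one unit.
    Returns hidden_output_i of shape (3*hidden_size,) per Formula 1."""
    sliced = sequence_output[:3].reshape(-1)  # [CLS], [unused1], [unused2]
    if train:
        # Inverted dropout: zero units with prob drop_prob, rescale the rest.
        mask = rng.random(sliced.shape) >= drop_prob
        sliced = sliced * mask / (1.0 - drop_prob)
    return sliced
```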
在步骤S4中,所述将所述每个识别单元的语义向量输入到融合分类识别模型中,获取所述待识别篇章的识别单元中包含的所有意图信息,包括如下步骤:
S41:将所述每个识别单元的语义向量输入lstm模型进行训练,获取所述待识别篇章的语义信息,其中,所述语义信息包括每个识别单元的意图汇总信息;
S42:将所述意图汇总信息进行一次线性变换处理,获取所述待识别篇章的识别单元中包含的所有意图信息。
在步骤S41中，将句子个数长度的向量输入lstm中，获得整个篇章上的语义信息(总的主意图、总的第二意图、总的第三意图)，即包含了每个句子层面上的三个意图汇总。
在lstm模型中,融合每个句子中的[cls],[unused1],[unused2]信息,使多个句子(即hidden_output_1,hidden_output_2,……,hidden_output_n)信息融合成为一个维度为batch_size*(3*bert_hidden_size)的向量lstm_output(如公式二),在步骤S41中,该过程可以用公式描述如下:
lstm_output=lstm(cat(hidden_output_i)),i=(1,2,…,n)……公式二
其中,lstm_output表示:对整个篇章序列理解后的汇总语意。
lstm表示:时序网络结构,将对输入的时序进行理解做汇总输出。
cat(hidden_output_i)表示:将每个句子经过dropout过滤处理后的深层语意信息按序拼接所得的序列。
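To make Formula 2 concrete, here is a toy single-layer LSTM (random, untrained weights — purely illustrative) that consumes the per-sentence vectors hidden_output_1 … hidden_output_n in order and returns the final hidden state as lstm_output:

```python
import numpy as np

def lstm_fuse(sentence_vecs: np.ndarray, hidden_dim: int, seed: int = 0) -> np.ndarray:
    """sentence_vecs: (n, 3*hidden_size) stacked hidden_output_i.
    Returns the last hidden state h_n as lstm_output (Formula 2)."""
    rng = np.random.default_rng(seed)
    d = sentence_vecs.shape[1]
    W = rng.normal(scale=0.1, size=(4 * hidden_dim, d + hidden_dim))  # gate weights
    b = np.zeros(4 * hidden_dim)
    h = np.zeros(hidden_dim)
    c = np.zeros(hidden_dim)
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    for x in sentence_vecs:  # one LSTM step per recognition unit
        z = W @ np.concatenate([x, h]) + b
        i, f, g, o = np.split(z, 4)  # input, forget, cell, output gates
        c = sig(f) * c + sig(i) * np.tanh(g)
        h = sig(o) * np.tanh(c)
    return h
```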
在步骤S42中，在获得对应的lstm_output后，将信息输入到一个全连接网络和Sigmoid结构进行意图的多分类。该过程的公式如下：公式三输出的向量通过sigmoid函数(公式四)得到最后意图；公式三表示一个全连接网络结构，即一个纯粹的MLP网络单元，其中，w_i维度为(3*bert_hidden_size)*intent_class，intent_class表示意图类别。
h=w_i*lstm_output+b_i………公式(三)
其中，h表示:该网络结构的输出，是对文本篇章的进一步理解。w_i表示:该网络结构中各个神经元的权重，是模型在训练过程中优化的参数。b_i表示:该网络结构中各个神经元的偏置，是模型在训练过程中优化的参数。
cls=sigmoid(h)………公式(四)
其中，cls表示:对h的各个意图类别作统一的sigmoid处理，最后获得所述待识别篇章所包含的各个意图。
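Formulas 3 and 4 together are a linear layer followed by an element-wise sigmoid, i.e. multi-label classification over intent_class categories. A sketch with illustrative shapes; the 0.5 decision threshold is an assumption, since the original does not state one:

```python
import numpy as np

def classify_intents(lstm_output: np.ndarray, W: np.ndarray, b: np.ndarray,
                     threshold: float = 0.5):
    """Formula 3: h = W @ lstm_output + b, W of shape (intent_class, 3*hidden).
    Formula 4: sigmoid(h) gives one probability per intent class."""
    h = W @ lstm_output + b
    probs = 1.0 / (1.0 + np.exp(-h))
    predicted = [k for k, p in enumerate(probs) if p >= threshold]
    return probs, predicted
```

All classes whose probability clears the threshold are returned, which is how one chapter can yield several intents at once.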
在本申请的实施例中,解决智能问答中的多意图的识别问题,对于多轮任务型对话,首先需要理解用户主要说哪些内容,如何表述的以及对话的逻辑流程,并借助于对业务的理解以及对话文本的数据分析,抽象出对用户发言的语义理解定义,因此需要进行意图的识别。为了解决智能问答中的多意图的识别问题,将多轮问答的输入作为篇章信息,然后逐句融合到Bert模型中进行语意理解,再经过lstm模型进行语意融合,从而获得对应的语意信息,再将数据做一次线性变换,从而得到整个篇章的多个意图。
本申请实施例根据用户交互内容获取待识别篇章，并按照预设规则将所述待识别篇章切分为至少两个识别单元；对所述识别单元进行要素拼接预处理；将预处理后的识别单元输入到Bert模型进行训练，获取每个识别单元的语义向量；将所述每个识别单元的语义向量输入到融合分类识别模型中，获取所述待识别篇章的意图信息。本申请的主要目的在于通过Bert模型和lstm模型，解决篇章级别的理解和多意图识别的问题。
如图2所示,是本申请基于Bert的篇章的多意图识别装置的功能模块图。
本申请所述基于Bert的篇章的多意图识别装置100可以安装于电子设备中。根据实现的功能,所述基于Bert的篇章的多意图识别装置可以包括:待识别篇章获取模块101、预处理模块102、语义向量获取模块103、和意图信息获取模块104。本申请所述模块也可以称之为单元,是指一种能够被电子设备处理器所执行,并且能够完成固定功能的一系列计算机程序段,其存储在电子设备的存储器中。
在本实施例中,关于各模块/单元的功能如下:
待识别篇章获取模块101，用于根据用户交互内容获取待识别篇章，并按照预设规则将所述待识别篇章切分为至少两个识别单元；
预处理模块102，用于对所述识别单元进行要素拼接预处理；
语义向量获取模块103,用于将预处理后的识别单元输入到Bert模型进行训练,获取每个识别单元的语义向量;
意图信息获取模块104,用于将所述每个识别单元的语义向量输入到融合分类识别模型中,获取所述待识别篇章的识别单元中包含的所有意图信息。
在待识别篇章获取模块101中,为了解决用户与智能客服在多轮交互中产生的问题或者描述性文字,我们可以将这些问题和描述性文字联合起来作为一个篇章,以便从整个语意层面来理解用户的意图。其中,待识别篇章获取模块101所述根据用户交互内容获取待识别篇章,包括:
问题文字获取模块,用于获取用户与智能客服在多轮交互中产生的问题和表述性文字;
待识别篇章形成模块,用于将所述问题和所述表述性文字相互联合,形成待识别篇章。
在本申请的一个实施方式中，按照预设规则将所述待识别篇章切分为至少两个识别单元时，可以将表示一个完整句子的标点符号作为切分识别单元的规则之一，比如：将所述待识别篇章按照句子切分符号分割为若干个句子。其中，所述句子切分符号可以包括句号、问号、感叹号等，按照这些句子切分符号切分出的一个个句子即为一个个识别单元。也就是说，所述预设规则包括句子切分符号等，所述句子切分符号包括句号、分号、感叹号以及问号等；所述识别单元包括完整的句子和问题等。
在预处理模块102中,所述对所述识别单元进行要素拼接预处理,包括:
意图信息拼接模块,用于在每个识别单元的起始位置拼接本识别单元的至少两个意图信息;
超参拼接模块,用于在每个识别单元的末端位置拼接一个超参;
语义符号序列确定模块,用于根据所述意图信息和所述超参,确定所述识别单元的语义符号序列。
在本申请的实施例中，对所述识别单元(句子或者问题)做一些输入上的调整，以下将以句子为例进行说明，即对句子1到句子n做一些输入上的调整。首先，在意图信息拼接模块中，在原始句子的头部拼接[CLS]、[unused1]、[unused2]三个信息，这样做是期望使得Bert模型训练获知[CLS]、[unused1]和[unused2]这三个信息分别含有主意图、第二意图、第三意图这样三个信息，从而为后续得到篇章级别的意图做好输入准备。其中，需要说明的是，每个句子并不限于只拼接三个意图信息，可以根据需要拼接合适数量的意图信息。
在超参拼接模块中,为了满足输入到Bert模型训练的需求,对要输入到Bert中的句子需要做相同长度上的padding操作,即:在每个句子的末端位置拼接一个超参max_len(这里约定大小为128),max_len作为Bert模型单次最大输入句子长度。
在语义符号序列确定模块中，当[[cls],[unused1],[unused2],[sentence],[SEP]]序列的长度不超过max_len时，分为两种情况：
其中,若[[cls],[unused1],[unused2],[sentence],[SEP]]序列中长度正好等于max_len时,则不需要在原序列中补充[PAD]字符;
其中,若[[cls],[unused1],[unused2],[sentence],[SEP]]小于max_len个WordPiece时,则补[PAD]字符,直到序列的个数正好为max_len为止。
语义符号序列为:
input_i=[[cls_i],[unused1_i],[unused2_i],[sentence_i],[SEP_i]]
其中,i的取值范围从1到n,表示当前输入序列为第i个,这些信息将循环输入到Bert模型中,传入的序列将传入Bert中,得到Bert每个句子的输出,取每个句子输出的语意信息。
在语义向量获取模块103中，所述将预处理的识别单元输入到Bert模型进行训练，获取每个识别单元的语义向量，包括：
语义表示向量获取模块，用于将所述语义符号序列输入所述Bert模型中，得到所述语义符号序列中每个语义符号的位置所对应的语义表示向量；
整体语义向量获取模块,用于根据获取的语义表示向量,确定所述待识别篇章的整体语义向量。
在本申请的实施例中，在训练的过程中，将获得每句话的语义信息，这些语义信息将汇总到[CLS]、[unused1]、[unused2]三个信息表示中，将取得三个信息对应的切片位置，然后经过Dropout防止过拟合。每个篇章将得到长度为doc_num(待识别篇章的句子的个数)、维度为3*hidden_size的语义表示。
此外,[CLS]、[unused1]和[unused2]这三个信息是模型做梯度下降时自动学会的,也可以理解为,在模型训练的过程中,模型构建的截取对应位置信息做融合再去做分类预测,就是使得模型具有将意图汇总至[CLS]、[unused1]、[unused2]三个token上的能力。
具体地,将每个句子输出的语义向量经过一次dropout层(参考公式一)后取sequence_output的前三个Token信息,这三个Token中包含的信息有[cls],[unused1],[unused2],
其维度为batch_size*max_seq_len*(3*bert_hidden_size),通过循环的Bert处理后将获得n个信息后作为一个序列输入LSTM网络结构,获得最后一层隐层状态(维度为:batch_size*3*bert_hidden_size)。
在整体语义向量获取模块中，通过如下公式获取所述待识别篇章的整体语义向量：
hidden_output_i=dropout(pooled_output_i),i=(1,2,…,n)……公式一
其中，hidden_output_i表示:每个句子经过dropout过滤处理后的深层语意信息(该语意信息可以理解为当前句子在本任务中的深层语义表示)。
dropout表示:dropout层，对输入的神经网络单元，按照一定的概率将其暂时从网络中丢弃。
pooled_output_i表示:当前句子输入Bert后的输出，其中，若当前为第sent_i个句子，那么该输出与bert_output[sent_i]含义相同。
在意图信息获取模块104中,所述将所述每个识别单元的语义向量输入到融合分类识别模型中,获取所述待识别篇章的识别单元中包含的所有意图信息,包括:
语义信息获取模块，用于将所述每个识别单元的语义向量输入lstm模型进行训练，获取所述待识别篇章的语义信息，其中，所述语义信息包括每个识别单元的意图汇总信息；
线性变换处理模块,用于将所述意图汇总信息进行一次线性变换处理,获取所述待识别篇章的识别单元中包含的所有意图信息。
在语义信息获取模块中，将句子个数长度的向量输入lstm中，获得整个篇章上的语义信息(总的主意图、总的第二意图、总的第三意图)，即包含了每个句子层面上的三个意图汇总。
在LSTM模型中，融合每个句子中的[cls],[unused1],[unused2]信息，使多个句子(即hidden_output_1,hidden_output_2,……,hidden_output_n)的信息融合成为一个维度为batch_size*(3*bert_hidden_size)的向量lstm_output(如公式二)，该过程可以用公式描述如下：
lstm_output=lstm(cat(hidden_output_i)),i=(1,2,…,n)……公式二
其中,lstm_output表示:对整个篇章序列理解后的汇总语意。
lstm表示:时序网络结构,将对输入的时序进行理解做汇总输出。
cat(hidden_output_i)表示:将每个句子经过dropout过滤处理后的深层语意信息按序拼接所得的序列。
在线性变换处理模块中，在获得对应的lstm_output后，将信息输入到一个全连接网络和Sigmoid结构进行意图的多分类。该过程的公式如下：公式三输出的向量通过sigmoid函数(公式四)得到最后意图；公式三表示一个全连接网络结构，即一个纯粹的MLP网络单元，其中，w_i维度为(3*bert_hidden_size)*intent_class，intent_class表示意图类别。
h=w_i*lstm_output+b_i………公式(三)
其中，h表示:该网络结构的输出，是对文本篇章的进一步理解。w_i表示:该网络结构中各个神经元的权重，是模型在训练过程中优化的参数。b_i表示:该网络结构中各个神经元的偏置，是模型在训练过程中优化的参数。
cls=sigmoid(h)………公式(四)
其中，cls表示:对h的各个意图类别作统一的sigmoid处理，最后获得所述待识别篇章所包含的各个意图。
在本申请的实施例中，根据用户交互内容获取待识别篇章，并按照预设规则将所述待识别篇章切分为至少两个识别单元；对所述识别单元进行要素拼接预处理；将预处理后的识别单元输入到Bert模型进行训练，获取每个识别单元的语义向量；将所述每个识别单元的语义向量输入到融合分类识别模型中，获取所述待识别篇章的意图信息。本申请的主要目的在于通过Bert模型和lstm模型，解决篇章级别理解和多意图识别的问题。
如图3所示,是本申请实现基于Bert的篇章的多意图识别方法的电子设备的结构示意图。
所述电子设备1可以包括处理器10、存储器11和总线,还可以包括存储在所述存储器11中并可在所述处理器10上运行的计算机程序,如基于Bert的篇章的多意图识别程序12。
其中，所述存储器11至少包括一种类型的可读存储介质，所述可读存储介质包括闪存、移动硬盘、多媒体卡、卡型存储器(例如:SD或DX存储器等)、磁性存储器、磁盘、光盘等。所述存储器11在一些实施例中可以是电子设备1的内部存储单元，例如该电子设备1的移动硬盘。所述存储器11在另一些实施例中也可以是电子设备1的外部存储设备，例如电子设备1上配备的插接式移动硬盘、智能存储卡(Smart Media Card,SMC)、安全数字(Secure Digital,SD)卡、闪存卡(Flash Card)等。进一步地，所述存储器11还可以既包括电子设备1的内部存储单元也包括外部存储设备。所述存储器11不仅可以用于存储安装于电子设备1的应用软件及各类数据，例如数据稽核程序的代码等，还可以用于暂时地存储已经输出或者将要输出的数据。存储器可以存储内容，该内容可由电子设备显示或被发送到其他设备(例如，耳机)以由其他设备来显示或播放。存储器还可以存储从其他设备接收的内容。该来自其他设备的内容可由电子设备显示、播放、或使用，以执行任何必要的可由电子设备和/或无线接入点中的计算机处理器或其他组件实现的任务或操作。
所述处理器10在一些实施例中可以由集成电路组成，例如可以由单个封装的集成电路所组成，也可以是由多个相同功能或不同功能封装的集成电路所组成，包括一个或者多个中央处理器(Central Processing unit,CPU)、微处理器、数字处理芯片、图形处理器及各种控制芯片的组合等。所述处理器10是所述电子设备的控制核心(Control Unit)，利用各种接口和线路连接整个电子设备的各个部件，通过运行或执行存储在所述存储器11内的程序或者模块(例如数据稽核程序等)，以及调用存储在所述存储器11内的数据，以执行电子设备1的各种功能和处理数据。电子设备还可包括芯片组(未示出)，其用于控制一个或多个处理器与用户设备的其他组件中的一个或多个之间的通信。在特定的实施例中，电子设备可基于
Figure PCTCN2021097234-appb-000003架构或Figure PCTCN2021097234-appb-000004架构，并且处理器和芯片集可来自Figure PCTCN2021097234-appb-000005处理器和芯片集家族。该一个或多个处理器104还可包括一个或多个专用集成电路(ASIC)或专用标准产品(ASSP)，其用于处理特定的数据处理功能或任务。
所述总线可以是外设部件互连标准(peripheral component interconnect,简称PCI)总线或扩展工业标准结构(extended industry standard architecture,简称EISA)总线等。该总线可以分为地址总线、数据总线、控制总线等。所述总线被设置为实现所述存储器11以及至少一个处理器10等之间的连接通信。
此外,网络和I/O接口可包括一个或多个通信接口或网络接口设备,以提供经由网络(未示出)在电子设备和其他设备(例如,网络服务器)之间的数据传输。通信接口可包括但不限于:人体区域网络(BAN)、个人区域网络(PAN)、有线局域网(LAN)、无线局域网(WLAN)、无线广域网(WWAN)、等等。用户设备102可以经由有线连接耦合到网络。然而,无线系统接口可包括硬件或软件以广播和接收消息,其使用Wi-Fi直连标准和/或IEEE 802.11无线标准、蓝牙标准、蓝牙低耗能标准、Wi-Gig标准、和/或任何其他无线标准和/或它们的组合。
无线系统可包括发射器和接收器或能够在由IEEE 802.11无线标准所支配的操作频率的广泛范围内操作的收发器。通信接口可以利用声波、射频、光学、或其他信号来在电子设备与其他设备(诸如接入点、主机、服务器、路由器、读取设备、和类似物)之间交换数据。网络118可包括但不限于:因特网、专用网络、虚拟专用网络、无线广域网、局域网、城域网、电话网络、等等。
显示器可包括但不限于液晶显示器、发光二极管显示器、或由在美国马萨诸塞州剑桥城的E Ink公司(E Ink Corp.of Cambridge,Massachusetts)所制造的E-InkTM显示器。该显示器可用于将内容以文本、图像、或视频的形式显示给用户。在特定的实例中，该显示器还可以作为触控屏显示器操作，其可以使得用户能够藉由使用某些手指或手势来触摸屏幕以启动命令或操作。
图3仅示出了具有部件的电子设备,本领域技术人员可以理解的是,图2示出的结构并不构成对所述电子设备1的限定,可以包括比图示更少或者更多的部件,或者组合某些部件,或者不同的部件布置。
例如,尽管未示出,所述电子设备1还可以包括给各个部件供电的电源(比如电池),优选地,电源可以通过电源管理装置与所述至少一个处理器10逻辑相连,从而通过电源管理装置实现充电管理、放电管理、以及功耗管理等功能。电源还可以包括一个或一个以上的直流或交流电源、再充电装置、电源故障检测电路、电源转换器或者逆变器、电源状态指示器等任意组件。所述电子设备1还可以包括多种传感器、蓝牙模块、Wi-Fi模块等,在此不再赘述。
进一步地,所述电子设备1还可以包括网络接口,可选地,所述网络接口可以包括有线接口和/或无线接口(如WI-FI接口、蓝牙接口等),通常用于在该电子设备1与其他电子设备之间建立通信连接。
可选地,该电子设备1还可以包括用户接口,用户接口可以是显示器(Display)、输入单元(比如键盘(Keyboard)),可选地,用户接口还可以是标准的有线接口、无线接口。可选地,在一些实施例中,显示器可以是LED显示器、液晶显示器、触控式液晶显示器以及OLED(Organic Light-Emitting Diode,有机发光二极管)触摸器等。其中,显示器也可以适当的称为显示屏或显示单元,用于显示在电子设备1中处理的信息以及用于显示可视化的用户界面。
应该了解,所述实施例仅为说明之用,在专利申请范围上并不受此结构的限制。
所述电子设备1中的所述存储器11存储的基于Bert的篇章的多意图识别程序12是多个指令的组合,在所述处理器10中运行时,可以实现:
根据用户交互内容获取待识别篇章，并按照预设规则将所述待识别篇章切分为至少两个识别单元；
对所述识别单元进行要素拼接预处理;
将预处理后的识别单元输入到Bert模型进行训练,获取每个识别单元的语义向量;
将所述每个识别单元的语义向量输入到融合分类识别模型中,获取所述待识别篇章的意图信息。
具体地,所述处理器10对上述指令的具体实现方法可参考图1对应实施例中相关步骤的描述,在此不赘述。
进一步地，所述电子设备1集成的模块/单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时，可以存储在一个计算机可读取存储介质中。所述计算机可读介质可以包括：能够携带所述计算机程序代码的任何实体或装置、记录介质、U盘、移动硬盘、磁碟、光盘、计算机存储器、只读存储器(ROM,Read-Only Memory)。所述计算机可读存储介质可以是非易失性，也可以是易失性。
在本申请的实施例中,计算机可读存储介质,所述计算机可读存储介质中存储有至少一个指令,所述至少一个指令被电子设备中的处理器执行以实现上述所述的基于Bert的篇章的多意图识别方法的步骤,具体方法如下:
根据用户交互内容获取待识别篇章，并按照预设规则将所述待识别篇章切分为至少两个识别单元；
对所述识别单元进行要素拼接预处理;
将预处理后的识别单元输入到Bert模型进行训练,获取每个识别单元的语义向量;
将所述每个识别单元的语义向量输入到融合分类识别模型中,获取所述待识别篇章的意图信息。
在本申请所提供的几个实施例中,应该理解到,所揭露的设备,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
所述作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。
虽然本申请的某些实施例已经结合目前被认为是最实用的且各式各样的实施例进行了描述,但应当理解,本申请并不限于所公开的实施例,而是意在覆盖包含在所附权利要求书的范围之内的各种修改和等价布置。虽然本文采用了特定的术语,但它们仅以一般性和描述性的意义使用,而不是用于限制的目的。
对于本领域技术人员而言,显然本申请不限于上述示范性实施例的细节,而且在不背离本申请的精神或基本特征的情况下,能够以其他的具体形式实现本申请。
因此,无论从哪一点来看,均应将实施例看作是示范性的,而且是非限制性的,本申请的范围由所附权利要求而不是上述说明限定,因此旨在将落在权利要求的等同要件的含义和范围内的所有变化涵括在本申请内。不应将权利要求中的任何附关联图标记视为限制所涉及的权利要求。
最后应说明的是,以上实施例仅用以说明本申请的技术方案而非限制,尽管参照较佳实施例对本申请进行了详细说明,本领域的普通技术人员应当理解,可以对本申请的技术方案进行修改或等同替换,而不脱离本申请技术方案的精神和范围。

Claims (20)

  1. 一种基于Bert的篇章的多意图识别方法,应用于电子设备,其中,所述方法包括:
    根据用户交互内容获取待识别篇章，并按照预设规则将所述待识别篇章切分为至少两个识别单元；
    对所述识别单元进行要素拼接预处理;
    将预处理后的识别单元输入到Bert模型进行训练,获取每个识别单元的语义向量;
    将所述每个识别单元的语义向量输入到融合分类识别模型中,获取所述待识别篇章的意图信息。
  2. 如权利要求1所述的基于Bert的篇章的多意图识别方法,其中,所述根据用户交互内容获取待识别篇章,包括如下步骤:
    获取用户与智能客服在多轮交互中产生的问题和表述性文字;
    将所述问题和所述表述性文字相互联合,形成待识别篇章。
  3. 如权利要求1所述的基于Bert的篇章的多意图识别方法,其中,所述按照预设规则将所述待识别篇章切分为至少两个识别单元,包括如下步骤:
    通过句子切分符号对所述待识别篇章进行切分处理;其中,所述预设规则包括句子切分符号,所述句子切分符号包括句号、分号、感叹号以及问号;
    将所述待识别篇章切分形成的句子或者问题确定为识别单元。
  4. 如权利要求1所述的基于Bert的篇章的多意图识别方法,其中,所述对所述识别单元进行要素拼接预处理,包括如下步骤:
    在所述每个识别单元的起始位置拼接本识别单元的至少两个意图信息;
    在所述每个识别单元的末端位置拼接一个超参;
    根据所述意图信息和所述超参,确定所述识别单元的语义符号序列。
  5. 如权利要求1所述的基于Bert的篇章的多意图识别方法,其中,所述将预处理的识别单元输入到Bert模型进行训练,获取每个识别单元的语义向量,包括如下步骤:
    将所述语义符号序列输入所述Bert模型中,获取所述语义符号序列中每个语义符号的位置所对应的语义表示向量;
    根据所获取的语义表示向量,确定所述待识别篇章的整体语义向量;其中,通过如下公式获取所述待识别篇章的整体语义向量:
    hidden_output_i=dropout(pooled_output_i),i=(1,2,…,n)
    其中，hidden_output_i表示：每个句子经过dropout层过滤处理后的深层语意信息；
    dropout表示：dropout层，对输入的神经网络单元按照一定的概率将其暂时从网络中丢弃；
    pooled_output_i表示：当前句子输入Bert模型后的输出。
  6. 如权利要求1所述的基于Bert的篇章的多意图识别方法，其中，所述将所述每个识别单元的语义向量输入到融合分类识别模型中，获取所述待识别篇章的识别单元中包含的所有意图信息，包括如下步骤：
    将所述每个识别单元的语义向量输入lstm模型进行训练,获取所述待识别篇章的语义信息,其中,所述语义信息包括每个识别单元的意图汇总信息;
    将所述意图汇总信息进行一次线性变换处理,获取所述待识别篇章的每个识别单元中包含的所有意图信息。
  7. 如权利要求1所述的基于Bert的篇章的多意图识别方法,其中,采用如下公式对所述意图汇总信息进行一次线性变换处理:
    h=w_i*lstm_output+b_i
    其中，h表示：对待识别篇章做抽取式摘要前的文本理解；
    w_i表示：lstm模型在训练过程中优化的参数；
    lstm_output表示：对待识别篇章序列理解后的意图汇总信息；
    b_i表示：lstm模型在训练过程中优化的参数。
  8. 一种基于Bert的篇章的多意图识别装置，其中，所述装置包括：待识别篇章获取模块，用于根据用户交互内容获取待识别篇章，并按照预设规则将所述待识别篇章切分为至少两个识别单元；
    预处理模块，用于对所述识别单元进行要素拼接预处理；
    语义向量获取模块,用于将预处理后的识别单元输入到Bert模型进行训练,获取每个识别单元的语义向量;
    所有意图信息获取模块,用于将所述每个识别单元的语义向量输入到融合分类识别模型中,获取所述待识别篇章的意图信息。
  9. 一种电子设备,其中,所述电子设备包括:
    至少一个处理器;以及,
    与所述至少一个处理器通信连接的存储器;其中,
    所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行基于Bert的篇章的多意图识别方法的步骤,其中,
    所述基于Bert的篇章的多意图识别方法包括:
    根据用户交互内容获取待识别篇章，并按照预设规则将所述待识别篇章切分为至少两个识别单元；
    对所述识别单元进行要素拼接预处理;
    将预处理后的识别单元输入到Bert模型进行训练,获取每个识别单元的语义向量;
    将所述每个识别单元的语义向量输入到融合分类识别模型中,获取所述待识别篇章的意图信息。
  10. 如权利要求9所述的电子设备,其中,所述根据用户交互内容获取待识别篇章, 包括如下步骤:
    获取用户与智能客服在多轮交互中产生的问题和表述性文字;
    将所述问题和所述表述性文字相互联合,形成待识别篇章。
  11. 如权利要求9所述的电子设备,其中,所述按照预设规则将所述待识别篇章切分为至少两个识别单元,包括如下步骤:
    通过句子切分符号对所述待识别篇章进行切分处理;其中,所述预设规则包括句子切分符号,所述句子切分符号包括句号、分号、感叹号以及问号;
    将所述待识别篇章切分形成的句子或者问题确定为识别单元。
  12. 如权利要求9所述的电子设备,其中,所述对所述识别单元进行要素拼接预处理,包括如下步骤:
    在所述每个识别单元的起始位置拼接本识别单元的至少两个意图信息;
    在所述每个识别单元的末端位置拼接一个超参;
    根据所述意图信息和所述超参,确定所述识别单元的语义符号序列。
  13. 如权利要求9所述的电子设备,其中,所述将预处理的识别单元输入到Bert模型进行训练,获取每个识别单元的语义向量,包括如下步骤:
    将所述语义符号序列输入所述Bert模型中,获取所述语义符号序列中每个语义符号的位置所对应的语义表示向量;
    根据所获取的语义表示向量,确定所述待识别篇章的整体语义向量;其中,通过如下公式获取所述待识别篇章的整体语义向量:
    hidden_output_i=dropout(pooled_output_i),i=(1,2,…,n)
    其中，hidden_output_i表示：每个句子经过dropout层过滤处理后的深层语意信息；
    dropout表示：dropout层，对输入的神经网络单元按照一定的概率将其暂时从网络中丢弃；
    pooled_output_i表示：当前句子输入Bert模型后的输出。
  14. 如权利要求9所述的电子设备,其中,所述将所述每个识别单元的语义向量输入到融合分类识别模型中,获取所述待识别篇章的识别单元中包含的所有意图信息,包括如下步骤:
    将所述每个识别单元的语义向量输入lstm模型进行训练,获取所述待识别篇章的语义信息,其中,所述语义信息包括每个识别单元的意图汇总信息;
    将所述意图汇总信息进行一次线性变换处理,获取所述待识别篇章的每个识别单元中包含的所有意图信息。
  15. 如权利要求9所述的电子设备,其中,采用如下公式对所述意图汇总信息进行一次线性变换处理:
    h=w_i*lstm_output+b_i
    其中，h表示：对待识别篇章做抽取式摘要前的文本理解；
    w_i表示：lstm模型在训练过程中优化的参数；
    lstm_output表示：对待识别篇章序列理解后的意图汇总信息；
    b_i表示：lstm模型在训练过程中优化的参数。
  16. 一种计算机可读存储介质,存储有计算机程序,其中,所述计算机程序被处理器执行时实现基于Bert的篇章的多意图识别方法,其中,
    所述基于Bert的篇章的多意图识别方法包括:
    根据用户交互内容获取待识别篇章，并按照预设规则将所述待识别篇章切分为至少两个识别单元；
    对所述识别单元进行要素拼接预处理;
    将预处理后的识别单元输入到Bert模型进行训练,获取每个识别单元的语义向量;
    将所述每个识别单元的语义向量输入到融合分类识别模型中,获取所述待识别篇章的意图信息。
  17. 如权利要求16所述的计算机可读存储介质,其中,所述根据用户交互内容获取待识别篇章,包括如下步骤:
    获取用户与智能客服在多轮交互中产生的问题和表述性文字;
    将所述问题和所述表述性文字相互联合,形成待识别篇章。
  18. 如权利要求16所述的计算机可读存储介质,其中,所述按照预设规则将所述待识别篇章切分为至少两个识别单元,包括如下步骤:
    通过句子切分符号对所述待识别篇章进行切分处理;其中,所述预设规则包括句子切分符号,所述句子切分符号包括句号、分号、感叹号以及问号;
    将所述待识别篇章切分形成的句子或者问题确定为识别单元。
  19. 如权利要求16所述的计算机可读存储介质,其中,所述对所述识别单元进行要素拼接预处理,包括如下步骤:
    在所述每个识别单元的起始位置拼接本识别单元的至少两个意图信息;
    在所述每个识别单元的末端位置拼接一个超参;
    根据所述意图信息和所述超参,确定所述识别单元的语义符号序列。
  20. 如权利要求16所述的计算机可读存储介质,其中,所述将预处理的识别单元输入到Bert模型进行训练,获取每个识别单元的语义向量,包括如下步骤:
    将所述语义符号序列输入所述Bert模型中,获取所述语义符号序列中每个语义符号的位置所对应的语义表示向量;
    根据所获取的语义表示向量,确定所述待识别篇章的整体语义向量;其中,通过如下公式获取所述待识别篇章的整体语义向量:
    hidden_output_i=dropout(pooled_output_i),i=(1,2,…,n)
    其中，hidden_output_i表示：每个句子经过dropout层过滤处理后的深层语意信息；
    dropout表示：dropout层，对输入的神经网络单元按照一定的概率将其暂时从网络中丢弃；
    pooled_output_i表示：当前句子输入Bert模型后的输出。
PCT/CN2021/097234 2021-04-30 2021-05-31 基于Bert的篇章的多意图识别方法、设备及可读存储介质 WO2022227211A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110480025.6A CN112989800A (zh) 2021-04-30 2021-04-30 基于Bert的篇章的多意图识别方法、设备及可读存储介质
CN202110480025.6 2021-04-30

Publications (1)

Publication Number Publication Date
WO2022227211A1 true WO2022227211A1 (zh) 2022-11-03

Family

ID=76336874

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/097234 WO2022227211A1 (zh) 2021-04-30 2021-05-31 基于Bert的篇章的多意图识别方法、设备及可读存储介质

Country Status (2)

Country Link
CN (1) CN112989800A (zh)
WO (1) WO2022227211A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115687934A (zh) * 2022-12-30 2023-02-03 智慧眼科技股份有限公司 意图识别方法、装置、计算机设备及存储介质
CN116108187A (zh) * 2023-04-14 2023-05-12 华东交通大学 一种集成多粒度信息的方面级情感分类方法
CN116384382A (zh) * 2023-01-04 2023-07-04 深圳擎盾信息科技有限公司 一种基于多轮交互的自动化长篇合同要素识别方法及装置
CN116882398A (zh) * 2023-09-06 2023-10-13 华东交通大学 基于短语交互的隐式篇章关系识别方法和系统
CN117807215A (zh) * 2024-03-01 2024-04-02 青岛海尔科技有限公司 一种基于模型的语句多意图识别方法、装置及设备

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113836285A (zh) * 2021-09-26 2021-12-24 平安科技(深圳)有限公司 意图信息预测方法、装置、设备及介质
CN115658891B (zh) * 2022-10-18 2023-07-25 支付宝(杭州)信息技术有限公司 一种意图识别的方法、装置、存储介质及电子设备

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107169035A (zh) * 2017-04-19 2017-09-15 华南理工大学 一种混合长短期记忆网络和卷积神经网络的文本分类方法
CN111062220A (zh) * 2020-03-13 2020-04-24 成都晓多科技有限公司 一种基于记忆遗忘装置的端到端意图识别系统和方法
CN111767371A (zh) * 2020-06-28 2020-10-13 微医云(杭州)控股有限公司 一种智能问答方法、装置、设备及介质
US20200335095A1 (en) * 2019-04-22 2020-10-22 International Business Machines Corporation Intent recognition model creation from randomized intent vector proximities
CN112183061A (zh) * 2020-09-28 2021-01-05 云知声智能科技股份有限公司 一种多意图口语理解方法、电子设备和存储介质
CN112270187A (zh) * 2020-11-05 2021-01-26 中山大学 一种基于bert-lstm的谣言检测模型
CN112364664A (zh) * 2020-11-19 2021-02-12 北京京东尚科信息技术有限公司 意图识别模型的训练及意图识别方法、装置、存储介质

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10916242B1 (en) * 2019-08-07 2021-02-09 Nanjing Silicon Intelligence Technology Co., Ltd. Intent recognition method based on deep learning network
CN111159332A (zh) * 2019-12-03 2020-05-15 厦门快商通科技股份有限公司 一种基于bert的文本多意图识别方法
CN110968671A (zh) * 2019-12-03 2020-04-07 北京声智科技有限公司 一种基于Bert的意图确定方法及装置


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115687934A (zh) * 2022-12-30 2023-02-03 智慧眼科技股份有限公司 意图识别方法、装置、计算机设备及存储介质
CN116384382A (zh) * 2023-01-04 2023-07-04 深圳擎盾信息科技有限公司 一种基于多轮交互的自动化长篇合同要素识别方法及装置
CN116384382B (zh) * 2023-01-04 2024-03-22 深圳擎盾信息科技有限公司 一种基于多轮交互的自动化长篇合同要素识别方法及装置
CN116108187A (zh) * 2023-04-14 2023-05-12 华东交通大学 一种集成多粒度信息的方面级情感分类方法
CN116882398A (zh) * 2023-09-06 2023-10-13 华东交通大学 基于短语交互的隐式篇章关系识别方法和系统
CN116882398B (zh) * 2023-09-06 2023-12-08 华东交通大学 基于短语交互的隐式篇章关系识别方法和系统
CN117807215A (zh) * 2024-03-01 2024-04-02 青岛海尔科技有限公司 一种基于模型的语句多意图识别方法、装置及设备

Also Published As

Publication number Publication date
CN112989800A (zh) 2021-06-18

Similar Documents

Publication Publication Date Title
WO2022227211A1 (zh) 基于Bert的篇章的多意图识别方法、设备及可读存储介质
US11455981B2 (en) Method, apparatus, and system for conflict detection and resolution for competing intent classifiers in modular conversation system
US11151175B2 (en) On-demand relation extraction from text
CN107870974B (zh) 使用设备上模型的智能回复
US9881082B2 (en) System and method for automatic, unsupervised contextualized content summarization of single and multiple documents
US20170364586A1 (en) Contextual Content Graph for Automatic, Unsupervised Summarization of Content
CN112334889A (zh) 用于用户与助理系统交互的个性化手势识别
CN113836333A (zh) 图文匹配模型的训练方法、实现图文检索的方法、装置
US10169466B2 (en) Persona-based conversation
CN111712834A (zh) 用于推断现实意图的人工智能系统
CN112507706B (zh) 知识预训练模型的训练方法、装置和电子设备
US11928985B2 (en) Content pre-personalization using biometric data
WO2021063089A1 (zh) 规则匹配方法、规则匹配装置、存储介质及电子设备
US20190155954A1 (en) Cognitive Chat Conversation Discovery
US11954173B2 (en) Data processing method, electronic device and computer program product
CN114631094A (zh) 智能电子邮件标题行建议和重制
US11100160B2 (en) Intelligent image note processing
WO2022105237A1 (zh) 带格式文本的信息抽取方法和装置
US20160098576A1 (en) Cognitive Digital Security Assistant
WO2022141867A1 (zh) 语音识别方法、装置、电子设备及可读存储介质
CN113850078A (zh) 基于机器学习的多意图识别方法、设备及可读存储介质
CN112748828A (zh) 一种信息处理方法、装置、终端设备及介质
US20220261554A1 (en) Electronic device and controlling method of electronic device
US20190122668A1 (en) Hierarchical intimacy for cognitive assistants
US11689482B2 (en) Dynamically generating a typing feedback indicator for recipient to provide context of message to be received by recipient

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21938681

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21938681

Country of ref document: EP

Kind code of ref document: A1