CN116933800B - Template-based generation type intention recognition method and device - Google Patents

Template-based generation type intention recognition method and device

Info

Publication number
CN116933800B
CN116933800B (application CN202311168587.2A)
Authority
CN
China
Prior art keywords
target
text
determining
intention
generation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311168587.2A
Other languages
Chinese (zh)
Other versions
CN116933800A (en)
Inventor
武文杰 (Wu Wenjie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xumi Yuntu Space Technology Co Ltd
Original Assignee
Shenzhen Xumi Yuntu Space Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xumi Yuntu Space Technology Co Ltd
Priority to CN202311168587.2A
Publication of CN116933800A
Application granted
Publication of CN116933800B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/10 Text processing
    • G06F40/166 Editing, e.g. inserting or deleting
    • G06F40/186 Templates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/01 Customer relationship services
    • G06Q30/015 Providing customer assistance, e.g. assisting a customer within a business location or via helpdesk
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Finance (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Machine Translation (AREA)

Abstract

The application provides a template-based generative intention recognition method and device. The method comprises the following steps: determining target key elements in a target scene based on an element generation model; determining question text and intention text in the target scene; determining an intention generation template according to the target key elements, the question text and the intention text; and inputting the intention generation template and the user question into an intention generation model to output a target intention. In the method and device, the intention generation template is built from the target key elements, the question text and the intention text, and the template is interacted with the user question and input into the intention generation model to output the target intention. This solves the problem that conventional intention recognition algorithms based on semantic retrieval cannot attend to fine-grained information: through the interaction between the template and the question, fine-grained semantic information can be attended to, and the accuracy of intention recognition in different scenes is improved.

Description

Template-based generation type intention recognition method and device
Technical Field
The present disclosure relates to the field of intention recognition technologies, and in particular, to a template-based generative intention recognition method and apparatus.
Background
With the development of artificial intelligence and machine learning, intelligent customer service systems are increasingly widely used in various fields. Speech recognition, speech synthesis and natural language understanding are the core technologies of intelligent customer service systems, and their progress has in turn driven the development of such systems.
Intelligent customer service is generally implemented as a question-answering system built on a knowledge base. The questions and corresponding answers in the knowledge-base question-answering system are edited manually in advance, and the real intention expressed by the user is output by recognizing and processing the question the user expresses.
However, in a knowledge-base-based intelligent customer service scenario, the same question may need to map to different intentions in different scenes, and different intentions may differ only slightly in their literal expression, so an incorrect real intention is easily output, which affects the service quality of the intelligent customer service.
In the prior art, semantic recall based on a unified model is generally used to match the real intention. However, because semantic recall involves no interaction between the question and the candidate intentions, subtle differences between questions are difficult to attend to, which leads to incorrect intention output.
Disclosure of Invention
In view of this, the embodiments of the present application provide a template-based generative intention recognition method and apparatus, so as to solve the problem in the prior art that subtle differences between questions are difficult to attend to, resulting in incorrect intention output.
In a first aspect of the embodiments of the present application, a template-based generative intention recognition method is provided, where the method includes:
determining target key elements in a target scene based on an element generation model;
determining question text and intention text in the target scene;
determining an intention generation template according to the target key elements, the question text and the intention text;
inputting the intention generation template and a user question into an intention generation model to output a target intention.
In a second aspect of the embodiments of the present application, there is provided a template-based generation type intention recognition apparatus, including:
the target key element determining module is used for determining target key elements in a target scene based on the element generation model;
the text determining module is used for determining question text and intention text in a target scene;
the template generation module is used for determining an intention generation template according to the target key elements, the question text and the intention text;
the target intention output module is used for inputting the intention generation template and the user question into the intention generation model so as to output the target intention.
In a third aspect of the embodiments of the present application, there is provided an electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above method when executing the computer program.
In a fourth aspect of the embodiments of the present application, there is provided a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above method.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: the intention generation template is built according to the target key elements, the question text and the intention text, and the intention generation template is interactively input together with the user question into the intention generation model to output the target intention. This solves the problem that traditional intention recognition algorithms based on semantic retrieval cannot attend to fine-grained information; through the interaction between the template and the question, fine-grained semantic information can be attended to, and the accuracy of intention recognition in different scenes is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application, and a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic view of an application scenario according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of a template-based generative intention recognition method provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a template-based generating intent recognition device provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
With the development of artificial intelligence and machine learning, intelligent customer service systems are increasingly widely used in various fields. Speech recognition, speech synthesis and natural language understanding are the core technologies of intelligent customer service systems, and their progress has in turn driven the development of such systems.
Intelligent customer service is generally implemented as a question-answering system built on a knowledge base. The questions and corresponding answers in the knowledge-base question-answering system are edited manually in advance, and the real intention expressed by the user is identified and output by recognizing and processing the question the user expresses.
Intention recognition belongs to the field of natural language understanding and refers to outputting the true intention of the user's expression by recognizing and processing that expression. An intelligent dialogue system can organize correct responses according to the user's intention, so that the dialogue proceeds normally. Whatever intention recognition technique is adopted, the process involves selecting a final intention from a plurality of candidate intentions, and the correctness of the finally selected intention directly determines whether the dialogue can proceed correctly.
However, in a knowledge-base-based intelligent customer service scenario, the same question may need to map to different intentions in different scenes, and different intentions may differ only slightly in their literal expression, so an incorrect real intention is easily output, which affects the service quality of the intelligent customer service.
In the prior art, semantic recall based on a unified model is generally used to match the real intention. However, because semantic recall involves no interaction between the question and the candidate intentions, subtle differences between questions are difficult to attend to, which leads to incorrect intention output.
In view of the above problems in the prior art, the embodiments of the present application provide a novel template-based generative intention recognition method, which builds an intention generation template according to target key elements, question text and intention text, and interactively inputs the intention generation template together with the user question into an intention generation model to output the target intention. This solves the problem that traditional intention recognition algorithms based on semantic retrieval cannot attend to fine-grained information; through the interaction between the template and the question, fine-grained semantic information can be attended to, and the accuracy of intention recognition in different scenes is improved.
A template-based generative intention recognition method and apparatus according to embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic view of an application scenario according to an embodiment of the present application. The application scenario may include terminal devices 101, 102 and 103, server 104, network 105.
The terminal devices 101, 102, and 103 may be hardware or software. When they are hardware, they may be various electronic devices having a display screen and supporting communication with the server 104, including but not limited to smartphones, tablets, laptop and desktop computers, and the like; when they are software, they may be installed in the electronic devices described above. The terminal devices 101, 102 and 103 may be implemented as a plurality of software programs or software modules, or as a single software program or software module, which is not limited in the embodiments of the present application. Further, various applications may be installed on the terminal devices 101, 102, and 103, such as data processing applications, instant messaging tools, social platform software, search applications, shopping applications, and the like.
The server 104 may be a server that provides various services, for example, a background server that receives a request transmitted from a terminal device with which a communication connection has been established; the background server may receive and analyze the request transmitted from the terminal device and generate a processing result. The server 104 may be a single server, a server cluster formed by a plurality of servers, or a cloud computing service center, which is not limited in the embodiments of the present application.
The server 104 may be hardware or software. When the server 104 is hardware, it may be various electronic devices that provide various services to the terminal devices 101, 102, and 103. When the server 104 is software, it may be a plurality of software or software modules providing various services to the terminal devices 101, 102, and 103, or may be a single software or software module providing various services to the terminal devices 101, 102, and 103, which is not limited in the embodiment of the present application.
The network 105 may be a wired network using coaxial cable, twisted pair and optical fiber connection, or may be a wireless network that can implement interconnection of various communication devices without wiring, for example, bluetooth (Bluetooth), near field communication (Near Field Communication, NFC), infrared (Infrared), etc., which is not limited in the embodiment of the present application.
The user can establish a communication connection with the server 104 via the network 105 through the terminal devices 101, 102, and 103 to receive or transmit information and the like. Specifically, the server 104 determines target key elements in the target scene based on the element generation model; the server 104 determines the question text and the intention text in the target scene, and determines an intention generation template according to the target key elements, the question text and the intention text; the server 104 then inputs the intention generation template and the user question into the intention generation model to output the target intention.
It should be noted that the specific types, numbers and combinations of the terminal devices 101, 102 and 103, the server 104 and the network 105 may be adjusted according to the actual requirements of the application scenario, which is not limited in the embodiment of the present application.
Fig. 2 is a schematic flow chart of a template-based generative intention recognition method according to an embodiment of the present application. The method of fig. 2 may be performed by the terminal device or the server of fig. 1. As shown in fig. 2, the template-based generative intention recognition method includes:
s201, determining target key elements in a target scene based on an element generation model;
s202, determining a problem text and an intention text in a target scene;
s203, determining an intention generation template according to the target key elements, the question text and the intention text;
s204, inputting the intention generation template and the user questions to an intention generation model to output a target intention.
Specifically, in an intelligent customer service scenario based on a knowledge base, the same question may need to correspond to different intentions in different scenes, and different intentions may differ only slightly in their literal expression, such as "find a technician to help me get the water connected" versus "find a technician to help me get the electricity connected". Because the intelligent customer service cannot recognize such slight differences between intentions, errors easily occur during intention recognition and an incorrect answer may be given to the customer. This problem can be solved by creating an accurate intention generation template and interacting the intention generation template with the user question; finer-grained semantic information within the intention text can then be found, and a suitable intention can be provided so that an answer matching the user question is output.
Accordingly, this embodiment addresses the above problem from two angles: creating an accurate intention generation template, and interacting the intention generation template with the user question to output the target intention.
Specifically, the purpose of the intention generation template is to quickly and accurately recognize the user question and match the corresponding intention; the template contains the mapping between questions and intentions in the target scene. Building an accurate intention generation template requires a specific structural form. In general, the intention generation template is composed of the target scene, the user question, and the real intention; that is, the main function of the intention generation template is to recognize the user question in the target scene and match it, through this mapping, to the real intention. The target scene is the scene corresponding to the target key elements in the intention generation template. A target key element is a representation of the target scene and may be a keyword in a given scene. For example, for the request "write an essay of at least 500 words about a sunset", the target scene may be essay writing, and the target key elements may be "essay", "sunset", "word-count limit", and so on. The target key elements are determined by an element generation model, which uses a neural network structure to evaluate all key elements in the target scene and then selects the target key elements with the best effect from among them.
Further, because the intention generation template contains the mapping between questions and intentions in the target scene, establishing the template requires determining the question text and the intention text in the target scene. All question texts in the target scene, and the intention texts corresponding to them, can be determined based on the text data in the knowledge base.
Further, once the target key elements, the question text and the intention text are determined, the components of the intention generation template are essentially fixed. Taking a question-answering system applied to intelligent customer service as an example, based on the one-question-one-answer characteristic of intelligent customer service, the basic form of the intention generation template can be designed as: "The user asks question S2 in scene S1, where the key elements of scene S1 are W1, W2, W3, ..., WN; the question the user really asks is [MASK]". Here S1 is the name of the current service scene, i.e. the target scene, such as "premium membership benefits", determined by where the user enters the intelligent customer service during online service; S2 is the text of the question actually input by the user; W1...WN are the key elements in the target scene; N is a preset number; and [MASK] is the user's real intention, to be generated by the model. An intention generation template established in this way can accurately express the mapping between the question text and the intention text in the target scene.
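The template above is a fixed natural-language string with slots for the scene name, the key elements and the user question. As a minimal sketch (in Python, with illustrative function and field names that are not part of this application), the template could be assembled as follows:

```python
def build_intent_template(scene_name, user_question, key_elements, mask_token="[MASK]"):
    """Assemble the intention generation template described above.

    scene_name    -- S1, the name of the current service scene (target scene)
    user_question -- S2, the question text actually input by the user
    key_elements  -- W1..WN, the preset number of target key elements of the scene
    """
    elements = ", ".join(key_elements)
    return (
        f"The user asks the question '{user_question}' in the '{scene_name}' scene, "
        f"where the key elements of the scene are {elements}; "
        f"the question the user really asks is {mask_token}"
    )

# Example for the 'premium membership benefits' scene mentioned above (values are illustrative):
template = build_intent_template(
    scene_name="premium membership benefits",
    user_question="How do I use my coupon?",
    key_elements=["membership", "coupon", "benefit validity"],
)
```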
Further, the intention generation template and the user question are input into an intention generation model to output the target intention. The intention generation model may adopt an encoder-decoder structure, i.e. a model for sequence-to-sequence (seq2seq) problems: an output sequence y is generated from an input sequence x. In the intelligent customer service answering system, the input sequence is the question posed by the user (embedded in the template) and the output sequence, i.e. the prediction result, is the target intention. Because the intention generation template interacts with the user question when the prediction is generated, fine-grained semantic information can be attended to; moreover, the template provides additional information for the prediction of the intention generation model, such as scene information, so the model can generate different target intentions for the same user question in different scenes, avoiding the situation where similar scenes produce the same intention for one user question.
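As a minimal inference sketch, the assembled template could be fed to a pretrained encoder-decoder model, for example via the Hugging Face transformers library; the application does not name a specific model, so the checkpoint and generation settings below are assumptions for illustration only:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/mt5-small"  # illustrative checkpoint, not specified by this application

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def generate_target_intent(template: str, max_new_tokens: int = 32) -> str:
    # The template already embeds the user question (field S2), so the question and
    # the scene information interact inside a single input sequence fed to the encoder.
    inputs = tokenizer(template, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

One reasonable setup (an assumption here, not stated in the application) is to first fine-tune such a model on (template, real intention) pairs built from the knowledge base, so that the decoder learns to fill the [MASK] slot with an intention text rather than free-form text.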
According to the technical solution provided by the embodiments of the present application, the intention generation template is built according to the target key elements, the question text and the intention text, and the intention generation template is interactively input together with the user question into the intention generation model to output the target intention. This solves the problem that traditional intention recognition algorithms based on semantic retrieval cannot attend to fine-grained information; through the interaction between the template and the question, fine-grained semantic information can be attended to, and the accuracy of intention recognition in different scenes is improved.
In some embodiments, further comprising:
acquiring a historical consultation log of a user in each initial scene;
determining initial key elements according to the historical consultation log and text data in a knowledge base;
determining a target weight value of an initial key element according to the generation time of the historical consultation log;
determining candidate key elements from the initial key elements according to the target weight value;
and establishing an element generation model based on the candidate key elements.
Specifically, in order for the element generation model to evaluate key elements accurately, it must be trained with candidate key elements as a training set when the model is built. The candidate key elements can be determined as follows. First, a historical consultation log of users in each initial scene is obtained; the log records the questions users asked in each scene, the time of each question, the answers given, and so on, and the initial scenes include the target scene. The historical consultation log is then combined with the text data in the knowledge base, which better reflects the relation between questions and answers in each scene, and a keyword extraction operation is performed on the combined text to determine the initial key elements. The text data here is the knowledge and answer text in the knowledge base, and the keyword extraction operation may use a keyword extraction function, including but not limited to the TF-IDF (term frequency-inverse document frequency) algorithm; the text data is input into the keyword extraction function, which outputs the initial key elements. At this point the initial key elements all have the same weight, with no emphasis, so training the element generation model directly on them would yield poor results; weights are therefore assigned to the initial key elements.
Further, considering that the key elements of a scene change over time, the weight of each extracted key element is attenuated according to time. In general, the weighting method includes, but is not limited to, slicing by a fixed period (such as one day), so the target weight value of an initial key element is determined according to the generation time of the historical consultation log. To prevent overfitting, a weight penalty is applied using the reciprocal of the interval between the occurrence time of the historical consultation log and the current date. Note that for knowledge-base data, which has no occurrence time, the weight can be set directly to 1; the final value after the weight penalty is the target weight value. For example, if the number of key elements in a given scene of the template to be generated is set to N, the top 2N initial key elements ranked by target weight value are taken as candidate key elements to form the training set, and the element generation model is trained for that single scene until a preset qualification threshold is reached; the threshold may be a number of training iterations or a pass rate. The trained model is the element generation model.
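A sketch of this candidate-element selection step under the assumptions just stated (time-decayed weights, knowledge-base items weighted 1, top 2N elements retained as the training set); the data structures and function name are illustrative:

```python
from datetime import date

def select_candidate_elements(scored_elements, n_per_scene, today=None):
    """Select candidate key elements for one scene.

    scored_elements -- list of (element, initial_weight, log_date_or_None), where
                       initial_weight is e.g. a TF-IDF score and log_date is the
                       generation date of the historical consultation log entry
                       (None for knowledge-base text, which has no occurrence time).
    n_per_scene     -- N, the preset number of key elements for the scene.
    """
    today = today or date.today()
    weighted = []
    for element, initial_weight, log_date in scored_elements:
        if log_date is None:
            penalty = 1.0                      # knowledge-base data: weight factor set to 1
        else:
            days = max((today - log_date).days, 1)
            penalty = 1.0 / days               # reciprocal of the interval, to limit overfitting
        weighted.append((element, initial_weight * penalty))
    weighted.sort(key=lambda item: item[1], reverse=True)
    return [element for element, _ in weighted[: 2 * n_per_scene]]   # top 2N by target weight
```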
In some embodiments, determining the initial key element from the historical consultation log and the text data in the knowledge base includes:
combining the historical consultation log with text data in a knowledge base to determine a text to be extracted;
inputting the text to be extracted to the key element extraction function to output the initial key element.
Specifically, the text to be extracted contains both the historical consultation log and the text data, so that the fixed and the changing key elements in the initial scene interact and the extracted initial key elements are more accurate and comprehensive. The key element extraction function includes, but is not limited to, the TF-IDF (term frequency-inverse document frequency) algorithm. An initial key element can be a keyword in the intelligent customer service question-answering system; the key element extraction function measures how important a given keyword in the text to be extracted is to that text as a whole. The higher the importance, the larger the value the function outputs, so the top-ranked words are taken as the initial key elements of the text to be extracted.
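A minimal sketch of such a key element extraction function using a TF-IDF scorer; the scikit-learn usage and the top-k cut-off are assumptions (the application only names TF-IDF as one possible choice), and for Chinese text a word segmenter would additionally be needed before vectorization:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def extract_initial_key_elements(texts_to_extract, top_k=10):
    """Return the top_k highest-scoring terms of each text as its initial key elements.

    texts_to_extract -- list of strings, each the combination of a historical
                        consultation log with the matching knowledge-base text data.
    """
    vectorizer = TfidfVectorizer()
    tfidf_matrix = vectorizer.fit_transform(texts_to_extract)
    vocabulary = vectorizer.get_feature_names_out()
    key_elements = []
    for row in tfidf_matrix.toarray():
        ranked = sorted(zip(vocabulary, row), key=lambda pair: pair[1], reverse=True)
        key_elements.append([term for term, score in ranked[:top_k] if score > 0])
    return key_elements
```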
In some embodiments, determining the target weight value for the initial key element based on the generation time of the historical consultation log includes:
determining an initial weight value and a weight penalty value of the initial key element according to the generation time of the historical consultation log;
and determining a target weight value according to the initial weight value and the weight penalty value.
Specifically, considering that the key elements of a scene change over time, the weight of each initial key element is attenuated according to time when it is extracted. In general, the closer an event is to the current time, the higher its value, and therefore the higher the weight of the corresponding initial key element; the weighting method includes, but is not limited to, slicing by a fixed period (such as one day). However, weighting each initial key element only by its initial weight value can cause overfitting, that is, the handling of the original samples is neglected because too much importance is attached to the loss introduced by adjusting the weights. For example, if key element recognition requires that the keywords "water" and "switch" must be present, this condition makes the model or function focus on "water" and "switch" and pay less attention to whether "electricity" is present. Therefore, to prevent overfitting, a weight penalty is applied to the initial key elements: each initial key element is penalized according to the reciprocal of the interval since the occurrence time of the historical consultation log, and the product of the resulting weight penalty value and the initial weight value is the target weight value. The target weight value balances the importance of the initial key elements well, making them more effective.
In some embodiments, determining target key elements in the target scene based on the element generation model includes:
training the element generation model under the target scene according to the candidate key elements to output an element evaluation set;
and when the number of training iterations reaches a target threshold, determining the target key elements from the element evaluation set according to the training effect.
Specifically, the element generation model is trained under the target scene with the candidate key elements, and each training round outputs a training result, namely an element evaluation set. The element evaluation set contains the evaluation results of the candidate key elements. When the number of training iterations reaches the target threshold, the element generation model has memorized the features of each candidate key element and its expressive power is at its best, so the element evaluation set then contains a relatively accurate evaluation of each candidate key element. The first N candidate key elements with the best effect can therefore be selected from the element evaluation set according to the training effect and added to the intention generation template as target key elements, which can better represent the target scene.
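A sketch of how the target key elements might be picked from the element evaluation set once training stops; the application does not define how the evaluation set encodes the training effect, so the dictionary-of-scores interface below is an assumption:

```python
def pick_target_key_elements(element_evaluation_set, n_targets):
    """element_evaluation_set -- dict mapping each candidate key element to the
    effect score assigned by the element generation model after the final round.
    Returns the N best-scoring candidates as the target key elements that will be
    written into the intention generation template."""
    ranked = sorted(element_evaluation_set.items(), key=lambda item: item[1], reverse=True)
    return [element for element, _score in ranked[:n_targets]]
```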
In some embodiments, determining the intent generation template from the target key element, the question text, and the intent text includes:
presetting the target number of target key elements;
determining a composition structure of the template to be generated;
based on the composition structure, intent generation templates are defined with a target number of target key elements, question text, and intent text.
Specifically, intention recognition in different fields corresponds to different types of templates; this embodiment is aimed at intention recognition in an intelligent customer service answering system, so the intention generation template has a specific composition structure. In general, the composition structure is determined by the target key elements, the question text, the intention text, and the target number of key elements. Thus, based on the one-question-one-answer characteristic of intelligent customer service, the basic form of the intention generation template can be designed as: "The user asks question S2 in scene S1, where the key elements of scene S1 are W1, W2, W3, ..., WN; the question the user really asks is [MASK]". Here S1 is the name of the current service scene, i.e. the target scene, such as "premium membership benefits", determined by where the user enters the intelligent customer service during online service; S2 is the text of the question actually input by the user; W1...WN are the key elements in the target scene; N is the preset target number; and [MASK] is the user's real intention, to be generated by the model. An intention generation template established in this way can accurately express the mapping between the question text and the intention text in the target scene.
In some embodiments, inputting the intent generation template and the user question into the intent generation model to output the target intent comprises:
interacting the intent generation template with the user question to determine an interaction text;
the interactive text is input to the intent generation model to output the target intent.
Specifically, the interaction text in this embodiment includes both the knowledge the intention generation model needs for reasoning (the intention generation template) and the related background knowledge (the user question). By interacting the intention generation template with the user question, fine-grained semantic information can be attended to, so the intention generation model can better predict on the interaction text based on that fine-grained semantic information and output a more accurate target intention. The intention generation template provides additional information for model prediction, makes full use of the understanding and reasoning capability of the intention generation model, reduces factual errors made by the model during inference, and enables it to generate different target intentions for the same user question in different target scenes.
Any combination of the above optional solutions may be adopted to form an optional embodiment of the present application, which is not described herein in detail.
The following are device embodiments of the present application, which may be used to perform method embodiments of the present application. For details not disclosed in the device embodiments of the present application, please refer to the method embodiments of the present application.
Fig. 3 is a schematic diagram of a template-based generative intent recognition device according to an embodiment of the present application. As shown in fig. 3, the template-based generation type intention recognition apparatus includes:
a target key element determination module 301 configured to determine a target key element in a target scene based on the element generation model;
a text determination module 302 configured to determine a question text and an intention text in a target scene;
a template generation module 303 configured to determine an intent generation template from the target key elements, the question text, and the intent text;
the target intent output module 304 is configured to input an intent generation template and user questions to an intent generation model to output a target intent.
In some embodiments, the target key element determination module 301 of fig. 3 further comprises:
acquiring a historical consultation log of a user in each initial scene;
determining initial key elements according to the historical consultation log and text data in a knowledge base;
determining a target weight value of an initial key element according to the generation time of the historical consultation log;
determining candidate key elements from the initial key elements according to the target weight value;
and establishing an element generation model based on the candidate key elements.
In some embodiments, the target key element determination module 301 of fig. 3 includes:
combining the historical consultation log with text data in a knowledge base to determine a text to be extracted;
inputting the text to be extracted to the key element extraction function to output the initial key element.
In some embodiments, the target key element determination module 301 of fig. 3 includes:
determining an initial weight value and a weight penalty value of the initial key element according to the generation time of the historical consultation log;
and determining a target weight value according to the initial weight value and the weight penalty value.
In some embodiments, the target key element determination module 301 of fig. 3 includes:
training the element generation model under a target scene according to the candidate key elements to output an element evaluation set;
and when the number of training iterations reaches a target threshold, determining target key elements from the element evaluation set according to the training effect.
In some embodiments, the template generation module 303 of fig. 3 includes:
presetting the target number of target key elements;
determining a composition structure of the template to be generated;
based on the composition structure, intent generation templates are defined with a target number of target key elements, question text, and intent text.
In some embodiments, the target intent output module 304 of fig. 3 includes:
interacting the intent generation template with the user question to determine an interaction text;
the interactive text is input to the intent generation model to output the target intent.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Fig. 4 is a schematic diagram of an electronic device 4 provided in an embodiment of the present application. As shown in fig. 4, the electronic apparatus 4 of this embodiment includes: a processor 401, a memory 402 and a computer program 403 stored in the memory 402 and executable on the processor 401. The steps of the various method embodiments described above are implemented by processor 401 when executing computer program 403. Alternatively, the processor 401, when executing the computer program 403, performs the functions of the modules/units in the above-described apparatus embodiments.
The electronic device 4 may be a desktop computer, a notebook computer, a palm computer, a cloud server, or the like. The electronic device 4 may include, but is not limited to, a processor 401 and a memory 402. It will be appreciated by those skilled in the art that fig. 4 is merely an example of the electronic device 4 and is not limiting of the electronic device 4 and may include more or fewer components than shown, or different components.
The processor 401 may be a central processing unit (Central Processing Unit, CPU) or other general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like.
The memory 402 may be an internal storage unit of the electronic device 4, for example, a hard disk or memory of the electronic device 4. The memory 402 may also be an external storage device of the electronic device 4, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the electronic device 4. The memory 402 may also include both an internal storage unit and an external storage device of the electronic device 4. The memory 402 is used to store computer programs and other programs and data required by the electronic device.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the methods of the above embodiments of the present application may also be completed by instructing related hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor, it may implement the steps of the respective method embodiments described above. The computer program may comprise computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer readable medium can be appropriately increased or decreased according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, the computer readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting thereof; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (7)

1. A template-based generative intent recognition method, the method comprising:
determining target key elements in a target scene based on the element generation model;
determining a question text and an intention text in a target scene;
determining an intention generation template according to the target key elements, the question text and the intention text;
inputting the intent generation template and the user questions to an intent generation model to output a target intent;
further comprises:
acquiring a historical consultation log of a user in each initial scene;
determining initial key elements according to the historical consultation log and text data in a knowledge base;
determining a target weight value of the initial key element according to the generation time of the historical consultation log;
determining candidate key elements from the initial key elements according to the target weight value;
establishing an element generation model based on the candidate key elements;
the determining initial key elements according to the historical consultation log and text data in a knowledge base comprises:
combining the historical consultation log with text data in the knowledge base to determine a text to be extracted;
inputting the text to be extracted into a key element extraction function to output an initial key element;
the determining the target weight value of the initial key element according to the generation time of the historical consultation log comprises the following steps:
determining an initial weight value and a weight penalty value of the initial key element according to the generation time of the historical consultation log;
and determining the target weight value according to the initial weight value and the weight penalty value.
2. The method of claim 1, wherein determining target key elements in a target scene based on an element generation model comprises:
training the element generation model under the target scene according to the candidate key elements to output an element evaluation set;
and when the number of training iterations reaches a target threshold, determining the target key element from the element evaluation set according to the training effect.
3. The method of claim 1, wherein the determining an intent generation template from the target key element, question text, and intent text comprises:
presetting the target number of the target key elements;
determining a composition structure of the intent generation template;
the intent generation template is defined with the target number of target key elements, the question text, and the intent text based on the composition structure.
4. The method of claim 1, wherein the inputting the intent generation template and user question to an intent generation model to output a target intent comprises:
interacting the intent generation template with the user question to determine an interaction text;
inputting the interactive text to an intention generation model to output the target intention.
5. A template-based generation type intention recognition apparatus, comprising:
the target key element determining module is used for determining target key elements in a target scene based on the element generation model;
the text determining module is used for determining a question text and an intention text in a target scene;
the template generation module is used for determining an intention generation template according to the target key elements, the question text and the intention text;
the target intention output module is used for inputting the intention generation template and the user question into an intention generation model so as to output a target intention;
the target key element determining module is further configured to: acquiring a historical consultation log of a user in each initial scene; determining initial key elements according to the historical consultation log and text data in a knowledge base; determining a target weight value of the initial key element according to the generation time of the historical consultation log; determining candidate key elements from the initial key elements according to the target weight value; establishing an element generation model based on the candidate key elements;
wherein the determining initial key elements according to the historical consultation log and text data in the knowledge base comprises: combining the historical consultation log with text data in the knowledge base to determine a text to be extracted; inputting the text to be extracted into a key element extraction function to output an initial key element;
the determining the target weight value of the initial key element according to the generation time of the historical consultation log comprises the following steps: determining an initial weight value and a weight penalty value of the initial key element according to the generation time of the historical consultation log; and determining the target weight value according to the initial weight value and the weight penalty value.
6. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 4 when the computer program is executed.
7. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 4.
CN202311168587.2A 2023-09-12 2023-09-12 Template-based generation type intention recognition method and device Active CN116933800B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311168587.2A CN116933800B (en) 2023-09-12 2023-09-12 Template-based generation type intention recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311168587.2A CN116933800B (en) 2023-09-12 2023-09-12 Template-based generation type intention recognition method and device

Publications (2)

Publication Number Publication Date
CN116933800A CN116933800A (en) 2023-10-24
CN116933800B true CN116933800B (en) 2024-01-05

Family

ID=88375578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311168587.2A Active CN116933800B (en) 2023-09-12 2023-09-12 Template-based generation type intention recognition method and device

Country Status (1)

Country Link
CN (1) CN116933800B (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6745168B1 (en) * 1998-01-28 2004-06-01 Fujitsu Limited Intention achievement information processing apparatus
CN111782776A (en) * 2019-09-26 2020-10-16 北京沃东天骏信息技术有限公司 Method and device for realizing intention identification through slot filling
WO2021196981A1 (en) * 2020-03-31 2021-10-07 华为技术有限公司 Voice interaction method and apparatus, and terminal device
CN112069828A (en) * 2020-07-31 2020-12-11 飞诺门阵(北京)科技有限公司 Text intention identification method and device
WO2022041728A1 (en) * 2020-08-28 2022-03-03 康键信息技术(深圳)有限公司 Medical field intention recognition method, apparatus, device and storage medium
CN115345177A (en) * 2021-05-13 2022-11-15 海信集团控股股份有限公司 Intention recognition model training method and dialogue method and device
CN116167355A (en) * 2021-11-25 2023-05-26 中移(杭州)信息技术有限公司 Intention recognition method, device, equipment and storage medium
CN114186061A (en) * 2021-12-13 2022-03-15 深圳壹账通智能科技有限公司 Statement intention prediction method, device, storage medium and computer equipment
CN114880480A (en) * 2022-04-08 2022-08-09 北京捷通华声科技股份有限公司 Question-answering method and device based on knowledge graph
CN115204156A (en) * 2022-07-14 2022-10-18 北京金山数字娱乐科技有限公司 Keyword extraction method and device
CN115422944A (en) * 2022-09-01 2022-12-02 深圳市人马互动科技有限公司 Semantic recognition method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Knowledge graph question answering method based on question generation; Qiao Zhenhao, Che Wanxiang, Liu Ting; Intelligent Computer and Applications (Issue 05); pp. 11-15 *

Also Published As

Publication number Publication date
CN116933800A (en) 2023-10-24

Similar Documents

Publication Publication Date Title
JP2021089705A (en) Method and device for evaluating translation quality
CN109514586B (en) Method and system for realizing intelligent customer service robot
WO2022095380A1 (en) Ai-based virtual interaction model generation method and apparatus, computer device and storage medium
CN111428010B (en) Man-machine intelligent question-answering method and device
EP4177812A1 (en) Recommendation reason generation method and apparatus, and device and storage medium
US9697057B2 (en) Automated transfer of user data between applications utilizing different interaction modes
CN108268450B (en) Method and apparatus for generating information
CN111666416A (en) Method and apparatus for generating semantic matching model
CN112559865A (en) Information processing system, computer-readable storage medium, and electronic device
CN114969352B (en) Text processing method, system, storage medium and electronic equipment
CN110377733A (en) A kind of text based Emotion identification method, terminal device and medium
CN108681871B (en) Information prompting method, terminal equipment and computer readable storage medium
CN116204714A (en) Recommendation method, recommendation device, electronic equipment and storage medium
CN113592315A (en) Method and device for processing dispute order
CN117370512A (en) Method, device, equipment and storage medium for replying to dialogue
CN116701593A (en) Chinese question-answering model training method based on GraphQL and related equipment thereof
CN116933800B (en) Template-based generation type intention recognition method and device
CN110705308A (en) Method and device for recognizing field of voice information, storage medium and electronic equipment
CN114943590A (en) Object recommendation method and device based on double-tower model
CN116911313B (en) Semantic drift text recognition method and device
CN112131378A (en) Method and device for identifying categories of civil problems and electronic equipment
CN111556096A (en) Information pushing method, device, medium and electronic equipment
CN110990528A (en) Question answering method and device and electronic equipment
US20230342553A1 (en) Attribute and rating co-extraction
Ali et al. Intelligent Agents in Educational Institutions: AEdBOT–A Chatbot for Administrative Assistance using Deep Learning Hybrid Model Approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant