CN115238143A - Query statement generation method and device, model training method, equipment and medium - Google Patents


Info

Publication number
CN115238143A
Authority
CN
China
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210906294.9A
Other languages
Chinese (zh)
Inventor
于凤英 (Yu Fengying)
王健宗 (Wang Jianzong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Application filed by Ping An Technology Shenzhen Co Ltd
Priority claimed from CN202210906294.9A
Publication of CN115238143A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 — Details of database functions independent of the retrieved data types
    • G06F16/903 — Querying
    • G06F16/9032 — Query formulation
    • G06F16/90332 — Natural language query formulation or dialogue systems
    • G06F16/90335 — Query processing
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/08 — Learning methods

Abstract

The embodiment relates to the technical field of artificial intelligence, and in particular to a query statement generation method and apparatus, a model training method, a device, and a medium. The query statement generation method comprises the following steps: obtaining a conversation record and database information, wherein the conversation record comprises first dialogue information and second dialogue information, the first dialogue information comprising the most recently sent dialogue information in the conversation record and the second dialogue information comprising the dialogue information in the conversation record other than the first dialogue information; obtaining a statement generating model, wherein the statement generating model comprises a coding module and a generating module; inputting the first dialogue information and the database information into the coding module for coding to obtain target sequence information; inputting the second dialogue information and the database information into the coding module for coding to obtain historical sequence information; and inputting the conversation record, the target sequence information and the historical sequence information into the generating module for statement generation to obtain a target query statement, thereby improving the accuracy of query statement generation.

Description

Query statement generation method and device, model training method, equipment and medium
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a query statement generation method and device, a model training method, equipment and a medium.
Background
With the development of big data, interacting with databases through natural language has become a new technical hotspot. At present, keywords are extracted from a user's natural language and a query statement is generated from those keywords, so that the content the user needs can be quickly retrieved from the massive data in a database. However, this approach easily loses complex semantic information, so the accuracy of the generated query statements is low.
Disclosure of Invention
The embodiment of the application mainly aims to provide a query statement generation method and device, a model training method, equipment and a medium, and the accuracy of query statement generation can be improved.
In order to achieve the above object, a first aspect of an embodiment of the present application provides a query statement generating method, where the method includes:
obtaining a conversation record and database information, wherein the conversation record comprises first dialogue information and second dialogue information, the first dialogue information comprising the most recently sent dialogue information in the conversation record and the second dialogue information comprising the dialogue information in the conversation record other than the first dialogue information; obtaining a statement generating model, wherein the statement generating model comprises a coding module and a generation module; inputting the first dialogue information and the database information into the coding module for coding to obtain target sequence information; inputting the second dialogue information and the database information into the coding module for coding to obtain historical sequence information; and inputting the conversation record, the target sequence information and the historical sequence information into the generation module for statement generation to obtain a target query statement.
In some embodiments, the encoding module includes an encoding network, a first attention network, and a sequence generation network; the inputting the first dialogue information and the database information into the coding module for coding to obtain the target sequence information includes:
inputting the first dialogue information and the database information into the coding network for word vector coding to obtain a first word vector corresponding to the first dialogue information and a second word vector corresponding to the database information; inputting the first word vector and the second word vector into the first attention network for processing to obtain a first attention result; and inputting the first word vector and the first attention result into the sequence generation network for processing to obtain target sequence information.
In some embodiments, the database information includes a table name of at least one data table and data item information of the data table, the data item information is used for determining a data item included in the data table, and the second word vector includes a first sub-vector corresponding to the data table and a second sub-vector corresponding to the data item information; the inputting the first word vector and the second word vector into the first attention network for processing to obtain a first attention result, including:
inputting the first word vector and the first sub-vector into the first attention network for processing to obtain a first sub-attention result; inputting the first word vector and the second sub-vector into the first attention network for processing to obtain a second sub-attention result; and determining the first sub-attention result and the second sub-attention result together as the first attention result.
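As a concrete illustration of this two-part attention, the following is a minimal pure-Python sketch, not the patent's actual network: scaled dot-product attention is applied to the first word vector against the table-name sub-vectors and the data-item sub-vectors separately, and the two sub-results together form the first attention result. The function names, and the simplification of using keys as values, are assumptions for illustration.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query_vecs, key_vecs):
    """Scaled dot-product attention: for each query vector, return a
    weighted average of the key vectors (keys double as values here)."""
    d = len(key_vecs[0])
    out = []
    for q in query_vecs:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in key_vecs]
        w = softmax(scores)
        out.append([sum(wj * k[i] for wj, k in zip(w, key_vecs))
                    for i in range(d)])
    return out

def first_attention(word_vecs, table_vecs, column_vecs):
    # Attend over table-name sub-vectors and data-item sub-vectors
    # separately, then pair the two sub-results as the combined result.
    return attend(word_vecs, table_vecs), attend(word_vecs, column_vecs)
```

Because the two attention passes are independent, the model can weigh "which table is relevant" and "which column is relevant" separately before the sequence generation network consumes both.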
In some embodiments, the inputting the first dialogue information and the database information into the coding network for word vector coding to obtain a first word vector corresponding to the first dialogue information and a second word vector corresponding to the database information includes:
performing word segmentation processing on the first dialogue information to obtain a word segmentation set; obtaining a table name of at least one data table and data item information of the data table from the database information, wherein the data item information is used for determining data items included in the data table; constructing a table set according to all the table names; constructing a data item set according to the data item information of the data table; splicing the word segmentation set, the table set and the data item set into an input sequence; and inputting the input sequence into the coding network for word vector coding to obtain a first word vector corresponding to the first dialogue information and a second word vector corresponding to the database information.
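The splicing of the word-segmentation set, the table set and the data-item set into one input sequence can be sketched as follows. The BERT-style [CLS]/[SEP] separator scheme and all names are assumptions, since the embodiment does not fix a concrete delimiter.

```python
def build_input_sequence(utterance_tokens, table_names, data_items):
    """Splice the word-segmentation set, the table-name set and the
    data-item set into one flat sequence for the coding network."""
    seq = ["[CLS]"] + list(utterance_tokens) + ["[SEP]"]
    for table in table_names:
        seq += [table, "[SEP]"]      # table set
    for item in data_items:
        seq += [item, "[SEP]"]       # data-item set (column/row names)
    return seq

# Hypothetical example: one utterance, one table, two columns.
seq = build_input_sequence(["query", "product", "A"],
                           ["orders"], ["age", "product_no"])
```

Feeding one joint sequence (rather than encoding the utterance and schema separately) lets the coding network's self-attention relate question words directly to table and column names.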
In some embodiments, the generation module comprises a second attention network, a splicing module, and a decoding module; the inputting the dialogue record, the target sequence information and the historical sequence information into the generation module for statement generation to obtain a target query statement includes:
inputting the target sequence information into the second attention network for processing to obtain first weight information corresponding to the target sequence information; splicing the first weight information and the dialogue record through the splicing module to obtain a statement feature vector; and inputting the statement feature vector into the decoding module for decoding to obtain a target query statement.
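A minimal stand-in for this weighting-and-splicing step, assuming the second attention network reduces to a softmax over per-position scores and the splicing module to simple concatenation (both simplifications, plus all names, are illustrative):

```python
import math

def sequence_weights(target_seq_scores):
    # Second-attention stand-in: turn per-position scores on the target
    # sequence information into normalized first weight information.
    m = max(target_seq_scores)
    exps = [math.exp(s - m) for s in target_seq_scores]
    total = sum(exps)
    return [e / total for e in exps]

def splice_features(weight_info, dialog_vec):
    # Splicing-module stand-in: concatenate the first weight information
    # with an (already vectorized) dialogue record into one statement
    # feature vector for the decoding module.
    return list(weight_info) + list(dialog_vec)
```
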
In some embodiments, the splicing, by the splicing module, the first weight information and the dialogue record to obtain a sentence feature vector includes:
obtaining historical question information and historical answer information from the second dialogue information, wherein the historical answer information is used for answering the historical question information; analyzing feedback information corresponding to the historical answer information according to the historical question information and the first dialogue information, wherein the feedback information is used for representing the matching degree between the historical answer information and the historical question information; determining second weight information of the historical reply information according to the feedback information; and splicing the first weight information, the second weight information and the dialogue record through the splicing module to obtain a statement feature vector.
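The feedback analysis is not spelled out in the embodiment; one crude stand-in is lexical overlap between a historical question and its historical answer, scaled into a second weight. Every formula below is an illustrative assumption, not the patent's method.

```python
def matching_degree(question_tokens, answer_tokens):
    """Feedback-signal stand-in: Jaccard overlap between a historical
    question and its historical answer (0.0 = no match, 1.0 = identical)."""
    q, a = set(question_tokens), set(answer_tokens)
    if not q or not a:
        return 0.0
    return len(q & a) / len(q | a)

def second_weight(question_tokens, answer_tokens, base=0.5):
    # Scale a base weight by the feedback: well-matched historical answers
    # contribute more to the spliced statement feature vector.
    return base * (1.0 + matching_degree(question_tokens, answer_tokens))
```
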
To achieve the above object, a second aspect of the embodiments of the present application proposes a model training method for training a sentence generation model according to the first aspect of the embodiments of the present application, the method including:
obtaining a conversation sample and an initial query statement corresponding to the conversation sample, wherein the conversation sample comprises at least two pieces of conversation information; acquiring database information, and determining a query grammar rule according to the database information; converting the initial query statement into a reference query statement according to the query grammar rule, wherein the reference query statement meets a grammar structure corresponding to the query grammar rule; and inputting the dialogue sample, the database information and the reference query sentence into a preset generation model for training to obtain a sentence generation model.
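As an illustration of converting an initial query statement into a reference query statement that satisfies a grammar rule, the sketch below normalizes only keyword casing and whitespace. Real query grammar rules derived from the database information would also cover clause order, nesting and join syntax; the keyword list is an assumption.

```python
import re

# Toy "query grammar rule": canonical keyword casing and single spacing.
KEYWORDS = {"select", "from", "where", "and", "or", "join", "on",
            "group", "by", "order", "limit"}

def to_reference_query(initial_query):
    tokens = re.split(r"\s+", initial_query.strip())
    out = [t.upper() if t.lower() in KEYWORDS else t for t in tokens]
    return " ".join(out)
```

Training against such normalized reference statements means the generation model never has to learn cosmetic variation, only the mapping from dialogue to query structure.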
In order to achieve the above object, a third aspect of the embodiments of the present application provides a query statement generation apparatus, including:
a first acquisition module, configured to obtain a conversation record and database information, wherein the conversation record comprises first dialogue information and second dialogue information, the first dialogue information comprising the most recently sent dialogue information in the conversation record and the second dialogue information comprising the dialogue information in the conversation record other than the first dialogue information;
a second acquisition module, configured to obtain a statement generating model, wherein the statement generating model comprises a coding module and a generating module;
the first coding module is used for inputting the first dialogue information and the database information into the coding module for coding to obtain target sequence information;
the second coding module is used for inputting the second dialogue information and the database information into the coding module for coding to obtain historical sequence information;
and the generation module is used for inputting the conversation record, the target sequence information and the historical sequence information into the generation module to generate sentences so as to obtain target query sentences.
To achieve the above object, a fourth aspect of embodiments of the present application proposes an electronic device, comprising at least one memory;
at least one processor;
at least one computer program;
the computer programs are stored in the memory, and the processor executes the at least one computer program to implement:
a query statement generation method as claimed in any one of the embodiments of the first aspect; or
A method of model training as described in an embodiment of the second aspect.
To achieve the above object, a fifth aspect of embodiments of the present application further provides a computer-readable storage medium storing computer-executable instructions for causing a computer to perform:
a query statement generation method as claimed in any one of the embodiments of the first aspect; or
A method of model training as described in an embodiment of the second aspect.
According to the query statement generation method and apparatus, the model training method, the device and the medium, a statement generating model comprising a coding module and a generation module is trained in advance. The first dialogue information and the second dialogue information can each be input into the coding module together with the database information for coding, yielding the target sequence information and the historical sequence information respectively; the conversation record, the target sequence information and the historical sequence information are then input into the generation module for statement generation to obtain the target query statement. In a dialogue interaction scenario, the association between each piece of dialogue information and the database information can thus be deeply mined in combination with the context of the dialogue, so that complex semantic information is not lost; moreover, complex database query structures such as nesting and multi-table joins can be better handled based on the database information, improving the accuracy of query statement generation.
Drawings
Fig. 1 is a flowchart of a query statement generation method provided in an embodiment of the present application;
FIG. 2 is a detailed flowchart of step S103 in FIG. 1;
FIG. 3 is a schematic diagram of an application of a sentence generation model in an embodiment of the present application;
FIG. 4 is a detailed flowchart of step S105 in FIG. 1;
FIG. 5 is a flow chart of a model training method provided by an embodiment of the present application;
fig. 6 is a block diagram of modules of a query statement generating apparatus according to an embodiment of the present application;
FIG. 7 is a block diagram of a model training apparatus provided in an embodiment of the present application;
fig. 8 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It is noted that while functional block divisions are provided in device diagrams and logical sequences are shown in flowcharts, in some cases, steps shown or described may be performed in sequences other than block divisions within devices or flowcharts. The terms first, second and the like in the description and in the claims, and the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
First, several terms used in this application are explained:
Artificial Intelligence (AI): a technical science that researches and develops theories, methods, technologies and application systems for simulating, extending and expanding human intelligence. Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce intelligent machines that can react in a manner similar to human intelligence; research in this field includes robotics, speech recognition, image recognition, natural language processing and expert systems, among others. Artificial intelligence can simulate the information processes of human consciousness and thinking. It is also a theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use that knowledge to obtain the best results.
Natural Language Processing (NLP): NLP uses computers to process, understand and apply human languages (such as Chinese and English). It is a branch of artificial intelligence and an interdisciplinary field between computer science and linguistics, often called computational linguistics. Natural language processing includes parsing, semantic analysis, discourse understanding, and so on. It is commonly used in machine translation, recognition of handwritten and printed characters, speech recognition and text-to-speech conversion, information retrieval, information extraction and filtering, text classification and clustering, and public-opinion analysis and opinion mining, and involves data mining, machine learning, knowledge acquisition, knowledge engineering, artificial intelligence research, and linguistic research related to language computation.
BERT (Bidirectional Encoder Representations from Transformers) model: built on the Transformer, the BERT model further increases the generalization capability of word vector models, fully describing character-level, word-level, sentence-level and even inter-sentence relational features. BERT uses three embeddings: Token Embeddings, Segment Embeddings and Position Embeddings. Token Embeddings are the word vectors; the first token is the [CLS] mark, which can be used for downstream classification tasks. Segment Embeddings distinguish two sentences, because pre-training involves not only language modeling but also classification tasks that take two sentences as input. Position Embeddings are not the trigonometric (sinusoidal) position encodings of the original Transformer; instead, BERT directly trains a position embedding to retain position information, randomly initializing a vector at each position and updating it during model training to obtain an embedding containing position information. The position embeddings are finally combined with the token embeddings by element-wise addition.
With the development of big data, interacting with databases through natural language has become a new technical hotspot. At present, keywords are extracted from a user's natural language and a query statement is generated from those keywords, so that the content the user needs can be quickly retrieved from the massive data in a database. However, this approach easily loses complex semantic information, so the accuracy of the generated query statements is low.
Based on this, the embodiments of the present application provide a query statement generation method and apparatus, a model training method, a device, and a medium, which can improve the accuracy of generating a query statement.
The embodiments of the present application provide a query statement generation method and apparatus, a model training method, a device, and a medium, which are specifically described in the following embodiments, and first, a query statement generation method in the embodiments of the present application is described.
The embodiment of the application can acquire and process related data based on an artificial intelligence technology. The artificial intelligence is a theory, a method, a technology and an application system which simulate, extend and expand human intelligence by using a digital computer or a machine controlled by the digital computer, sense the environment, acquire knowledge and use the knowledge to obtain the best result.
The artificial intelligence base technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
The embodiments of the present application provide a query statement generation method and a model training method, which relate to the technical field of artificial intelligence and in particular to the technical field of data processing. The query statement generation method or the model training method provided by the embodiments of the present application can be applied to a terminal, to a server, or to software running on a terminal or server. In some embodiments, the terminal may be a smartphone, tablet, laptop, desktop computer, smart watch, or the like; the server may be an independent server, or a cloud server providing basic cloud-computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain-name services, security services, a Content Delivery Network (CDN), and big-data and artificial-intelligence platforms; the software may be, but is not limited to, an application that implements the query statement generation method or the model training method.
The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The query statement generation method and the model training method provided in the embodiments of the present application are described below with a terminal as an example.
In a first aspect, please refer to fig. 1, fig. 1 is a flowchart of a query statement generation method provided in an embodiment of the present application, where the query statement generation method includes steps S101 to S105, and it should be understood that the query statement generation method in the embodiment of the present application includes, but is not limited to, steps S101 to S105, which is described in detail below with reference to fig. 1.
Step S101: obtaining a conversation record and database information, wherein the conversation record comprises first conversation information and second conversation information.
The embodiments of the present application are applicable to various conversation-interaction scenarios such as online conversation (for example, at least two users conversing through social software) and human-machine conversation (for example, online customer service, automatic question answering, robot training, and the like). The conversation record may comprise all the dialogue information exchanged by at least two conversation objects during the interaction, or may comprise the N most recently sent pieces of dialogue information, or N pieces that meet specified screening conditions, where N is a manually set positive integer; this is not specifically limited. Illustratively, each conversation object can be a user of a different terminal, which is suitable for applications such as private chat or multi-user group chat; alternatively, the at least two conversation objects include a first conversation object and a second conversation object, where the first conversation object may be a user and the second conversation object may be a terminal, which is suitable for human-machine conversation applications.
Based on this, the conversation record is divided into first dialogue information and second dialogue information: the first dialogue information comprises the most recently sent dialogue information in the conversation record, and the second dialogue information comprises the dialogue information in the conversation record other than the first dialogue information; that is, the second dialogue information was sent earlier than the first dialogue information. The manner of obtaining the conversation record may include, but is not limited to:
1. The terminal is provided with software or a plug-in with a conversational communication function (such as a customer-service robot, an outbound robot, a voice assistant, or other third-party social communication software), so that the terminal can acquire the dialogue information input by the user through the front end of the software (or plug-in) and receive the dialogue information sent by other terminals.
2. The terminal responds to a statement generation instruction and directly acquires the information content corresponding to the instruction as the conversation record. The statement generation instruction may be triggered by, but not limited to, a text operation, a picture operation, or another preset operation, for example, the user selecting a piece of text in the terminal's operation interface.
3. The terminal acquires a conversation record file input by the user and parses the conversation record from it. The conversation record file may be a file exported from third-party social communication software that records the dialogue information in a specified format (such as a text format or a database file).
Wherein, the conversation record may be recorded with identification information, and the identification information may include, but is not limited to, a speaking account, a separator, a timestamp, etc., then the terminal may determine the first conversation information and the second conversation information from the conversation record according to the indication of the identification information. For example, the session information corresponding to the latest timestamp is used as the first session information, and the session information in the session record except the first session information is used as the second session information.
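Splitting a conversation record by timestamp, as described above, can be sketched as follows; the message dictionary keys (`account`, `ts`, `text`) are hypothetical, since the embodiment only requires that some identification information be recorded.

```python
def split_dialog_record(record):
    """Split a conversation record into first dialogue information (the
    most recently sent message) and second dialogue information (all
    earlier messages), using the timestamp as identification information.

    `record` is a list of dicts with hypothetical keys 'account', 'ts',
    'text'."""
    ordered = sorted(record, key=lambda m: m["ts"])
    return ordered[-1], ordered[:-1]
```
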
In this embodiment of the present application, the database information may include a database schema, where the database schema is used to describe a logical structure and characteristics of all data in the database, the database information may further include a table name of a data table and data item information of the data table, and the like, and the data item information is used to determine a data item included in the data table, and the data item may refer to row data or column data included in the data table, and the like, and the data item information may include a data row name or a data column name, and the like, which are not specifically limited.
Step S102: and acquiring a statement generating model, wherein the statement generating model comprises a coding module and a generating module.
In the embodiment of the application, the coding module is used for coding the dialogue information together with the database information to obtain sequence information, so that the semantic features of the dialogue information, the schema features of the database, and the association features between them are fully mined; the generating module is used for generating a query statement from the sequence information. The coding module may adopt a pre-trained language model such as a Transformer model or a BERT model, and the generating module may adopt a generative adversarial network (GAN) model or a Seq2Seq model; neither is specifically limited.
Step S103: and inputting the first dialogue information and the database information into a coding module for coding to obtain target sequence information.
Step S104: and inputting the second dialogue information and the database information into a coding module for coding to obtain historical sequence information.
Step S105: and inputting the conversation record, the target sequence information and the historical sequence information into a generating module for sentence generation to obtain a target query sentence.
In the embodiment of the present application, the target query statement may be statement information that conforms to the query syntax rules of the database. The database may be MySQL, Hive, SQL Server, Redis, Kafka, or the like; correspondingly, the target query statement may be Structured Query Language (SQL) or another database language, and may be adjusted according to the type of the database, which is not specifically limited. In practical application, the terminal can directly perform the query operation on the database by executing the target query statement. For example, assuming the database is MySQL, if the conversation records "please help me query for product A for the 60-year-old population", the target query statement may be "SELECT product FROM database WHERE age >= 60 AND product_no = 'A'".
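To make the example concrete, the sketch below executes a query of the same shape against an in-memory SQLite database. The table name, column names and rows are invented for illustration; the patent itself lists MySQL-style databases, for which the same SQL shape applies.

```python
import sqlite3

def run_target_query(rows, query):
    """Execute a generated target query against an in-memory database.
    The table layout (name, columns) is invented for illustration."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE products (product TEXT, age INTEGER, product_no TEXT)")
    con.executemany("INSERT INTO products VALUES (?, ?, ?)", rows)
    result = con.execute(query).fetchall()
    con.close()
    return result

rows = [("vitamins", 65, "A"), ("game pass", 25, "A"), ("vitamins", 70, "B")]
hits = run_target_query(
    rows, "SELECT product FROM products WHERE age >= 60 AND product_no = 'A'")
```
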
Therefore, the query statement generation method provided by the embodiment of the application combines the context information of a dialogue in an interactive setting to deeply mine the association between each piece of dialogue information and the database information, so that complex semantic information is not lost; complex database query structures such as nesting and multi-table joins can also be better handled based on the database information, thereby improving the accuracy of query statement generation.
In step S103 of some embodiments, the encoding module may include an encoding network, a first attention network, and a sequence generation network. Referring to fig. 2, fig. 2 is a schematic flowchart of step S103 in fig. 1. As shown in fig. 2, step S103 may include, but is not limited to, the following steps S201 to S203.
Step S201: and inputting the first dialogue information and the database information into a coding network for word vector coding to obtain a first word vector corresponding to the first dialogue information and a second word vector corresponding to the database information.
In particular, the coding network may be a BERT network. Taking fig. 3 as an example, fig. 3 is an application schematic diagram of the statement generation model in the embodiment of the present application. As shown in fig. 3, the first dialogue information and the database information are input into the coding network to obtain the first word vector H_q1 and the second word vector H_d1 output by the coding network.
In step S201 of some embodiments, step S201 may include, but is not limited to, the following steps:
firstly, performing word segmentation processing on the first dialogue information to obtain a word segmentation set. The word segmentation set comprises at least two words extracted from the first dialogue information, such as the set Q = {q_1, q_2, q_3, …, q_n}, where n is a positive integer. Word segmentation processing means may include, but are not limited to: using word segmentation tools such as Jieba or Baidu NLP; adopting a dictionary-based word segmentation method, such as the forward maximum matching method or the shortest path method; or adopting a statistics-based word segmentation method, such as hidden Markov models and N-grams.
And acquiring the table name of at least one data table and the data item information of the data table from the database information, thereby constructing a table set according to all the table names and constructing a data item set according to the data item information of the data tables. Table set T = {t_1, t_2, t_3, …, t_x}, data item set C = {c_1^1, c_2^1, …, c_m^1, …, c_y^x}, where x is the total number of data tables, c_m^1 is the mth data item information in the first data table, c_y^x is the yth data item information in the xth data table, and x, m and y are positive integers.
And then, splicing the word segmentation set, the table set and the data item set into an input sequence. Specifically, the word segmentation set Q, the table set T, and the data item set C may be spliced using a preset separator to obtain the input sequence X = [CLS] Q [SEP] T C [SEP], where [CLS] is the start marker of the input sequence and [SEP] is a preset separator used to separate different types of information and ensure the accuracy of the splicing process.
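A minimal sketch of this splicing step, assuming Q, T and C are already available as token lists (the example tokens are hypothetical):

```python
def build_input_sequence(segments, tables, items, cls="[CLS]", sep="[SEP]"):
    """Splice the word segmentation set Q, table set T and data item set C
    into the sequence [CLS] Q [SEP] T C [SEP] described above."""
    return [cls] + segments + [sep] + tables + items + [sep]

# Hypothetical tokens for illustration.
seq = build_input_sequence(["query", "product", "A"],
                           ["products"],
                           ["product_no", "age"])
print(seq)
```

The resulting sequence is what is fed to the coding network for word vector coding.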
And finally, inputting the input sequence into a coding network for word vector coding to obtain a first word vector corresponding to the first dialogue information and a second word vector corresponding to the database information.
Therefore, the vocabulary in the first dialogue information and the data tables and data item information in the database information are spliced into one input sequence, which facilitates generating a deep bidirectional language representation.
Step S202: and inputting the first word vector and the second word vector into the first attention network for processing to obtain a first attention result.
In particular, the first attention network may employ attention mechanisms including, but not limited to, key-value-pair attention mechanisms, multi-head attention mechanisms, and self-attention mechanisms. The first attention network is used to focus the associated features of the first word vector and the second word vector and to ignore the insignificant parts. The first attention result may be used to represent an attention weight assigned to the first word vector.
In step S202 of some embodiments, the database information includes a table name of at least one data table and data item information of the data table, and the second word vector includes a first sub-vector corresponding to the data table and a second sub-vector corresponding to the data item information. Then, step S202 may include, but is not limited to, the following steps: and inputting the first word vector and the first sub-vector into a first attention network for processing to obtain a first processing result. And inputting the first word vector and the second subvector into the first attention network for processing to obtain a second processing result. And splicing the first processing result and the second processing result into a first attention result.
In some alternative implementations, if the first attention network employs a multi-head attention mechanism, then when the first word vector and the first sub-vector are input into the first attention network for processing, the jth first head attention head_j^T is calculated as follows:

head_j^T = Attention(H_Q·W_q, H_T·W_k, H_T·W_v)

Attention(Q, K, V) = softmax(Q·K^T / √d_k)·V

wherein j is a positive integer and j ∈ [1, n]; W_q, W_k and W_v are three weight matrices of the first attention network; H_Q is the first word vector and H_T is the first sub-vector; d_k denotes the dimension of W_q and W_k and plays a normalizing role: it prevents the partial derivative from approaching 0 when the attention value is too large, and keeps the data distribution close to an expectation of 0 and a variance of 1, so that the variance of the vector dot product is stabilized at 1. Based on this, the first processing result is obtained by splicing all the first head attentions.
Similarly, when the first word vector and the second sub-vector are input into the first attention network for processing, the jth second head attention head_j^C is calculated as follows:

head_j^C = Attention(H_Q·W_q, H_C·W_k, H_C·W_v)

wherein H_C is the second sub-vector. Based on this, the second processing result is obtained by splicing all the second head attentions.
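Assuming the Attention function takes the standard scaled dot-product form, a single head and the splicing of all head attentions can be sketched as follows (all dimensions, weights and inputs are illustrative, not values from the application):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def head_attention(h_q, h_t, w_q, w_k, w_v):
    """One head: Attention(H_Q·W_q, H_T·W_k, H_T·W_v)
    with Attention(Q, K, V) = softmax(Q·K^T / sqrt(d_k))·V."""
    q, k, v = h_q @ w_q, h_t @ w_k, h_t @ w_v
    d_k = w_k.shape[1]                       # normalizing dimension
    return softmax(q @ k.T / np.sqrt(d_k)) @ v

rng = np.random.default_rng(0)
h_q = rng.normal(size=(5, 16))   # first word vector (5 dialogue tokens)
h_t = rng.normal(size=(3, 16))   # first sub-vector (3 table-name tokens)

# Four heads with independent weight matrices; splice them into the
# first processing result.
heads = [head_attention(h_q, h_t,
                        rng.normal(size=(16, 8)),
                        rng.normal(size=(16, 8)),
                        rng.normal(size=(16, 8)))
         for _ in range(4)]
first_processing_result = np.concatenate(heads, axis=-1)
print(first_processing_result.shape)
```

The same computation with H_C in place of H_T yields the second processing result.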
Step S203: and inputting the first word vector and the first attention result into a sequence generation network for processing to obtain target sequence information.
Specifically, in step S203, the first word vector and the first attention result may first be spliced, and the spliced result is then input into the sequence generation network for processing, so as to obtain a question representation that captures the interaction between the dialogue information and the database information. The sequence generation network may adopt a recurrent neural network (RNN), a bidirectional long short-term memory network (BiLSTM), a convolutional neural network (CNN), or the like, and is not particularly limited.
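A minimal sketch of this splice-then-encode step, with a plain recurrent pass standing in for the RNN/BiLSTM/CNN options (dimensions and weights are illustrative assumptions):

```python
import numpy as np

def simple_rnn(inputs, w_x, w_h, b):
    """Minimal recurrent pass over the spliced representation; a stand-in
    for the sequence generation network of step S203."""
    h = np.zeros(w_h.shape[0])
    states = []
    for x in inputs:
        h = np.tanh(x @ w_x + h @ w_h + b)
        states.append(h)
    return np.stack(states)

rng = np.random.default_rng(1)
word_vec = rng.normal(size=(5, 16))      # first word vector
attn_result = rng.normal(size=(5, 32))   # first attention result

# Splice the first word vector and the first attention result, then encode.
spliced = np.concatenate([word_vec, attn_result], axis=-1)
target_sequence = simple_rnn(spliced,
                             rng.normal(size=(48, 24)),
                             rng.normal(size=(24, 24)),
                             np.zeros(24))
print(target_sequence.shape)
```

Each output row is the hidden state for one input position, i.e. the target sequence information.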
It can be seen that, in the above steps S201 to S203, the key association features between the dialogue information and the database information are mined by the attention mechanism and combined with the dialogue information to generate the sequence, which further improves the accuracy of subsequent query statement generation.
It is understood that, similar to the implementation of step S103, step S104 in some embodiments may include, but is not limited to, the following steps: inputting the second dialogue information and the database information into the coding network for word vector coding to obtain a third word vector corresponding to the second dialogue information and a fourth word vector corresponding to the database information, such as the third word vector H_q2 and the fourth word vector H_d2 shown in fig. 3; inputting the third word vector and the fourth word vector into the first attention network for processing to obtain a second attention result; and inputting the third word vector and the second attention result into the sequence generation network for processing to obtain historical sequence information. It can be understood that, if the second dialogue information includes a plurality of pieces of dialogue information, the historical sequence information includes the sequence information obtained after encoding each piece of dialogue information; that is, each piece of dialogue information included in the second dialogue information is input, together with the database information, into the coding network for word vector coding.
Further, the fourth word vector includes a third sub-vector corresponding to the data table and a fourth sub-vector corresponding to the data item information. Similar to the implementation manner of step S202, inputting the third word vector and the fourth word vector into the first attention network for processing, and obtaining the second attention result, may include but is not limited to the following steps: and inputting the third word vector and the third sub-vector into the first attention network for processing to obtain a third processing result. And inputting the third word vector and the fourth sub-vector into the first attention network for processing to obtain a fourth processing result. And splicing the third processing result and the fourth processing result into a second attention result.
In step S105 of some embodiments, the generating module may include a second attention network, a stitching module, and a decoding module. Referring to fig. 4, fig. 4 is a schematic flowchart illustrating an embodiment of step S105 in fig. 1. As shown in fig. 4, step S105 may include, but is not limited to, the following steps S401 to S403.
Step S401: and inputting the target sequence information and the historical sequence information into a second attention network for processing to obtain first weight information.
The second attention network is configured to perform attention processing on the correlation characteristics between the target sequence information and the historical sequence information, and the second attention network may specifically refer to the description of the first attention network, which is not described herein again.
Step S402: and splicing the first weight information and the dialogue records through a splicing module to obtain the sentence characteristic vector.
Through steps S401 and S402, the sequence information of the latest dialogue is processed using the sequence information of the historical dialogue, so that the dialogue context information can be aligned and supplemented. In one implementation, the first weight information includes weight information corresponding to the first dialogue information and weight information corresponding to the second dialogue information. The first dialogue information is weighted using its corresponding weight information to obtain a first weighting result, and the second dialogue information is weighted using its corresponding weight information to obtain a second weighting result. The first weighting result and the second weighting result are then spliced into the statement feature vector, so that the first dialogue information and the second dialogue information are each weighted according to the relevance between them before being spliced. In another implementation, the first weight information and the first dialogue information may be spliced into the statement feature vector, so that the second attention network directly weights the first dialogue information according to the relevance between the first dialogue information and the second dialogue information, thereby highlighting the enhancement of the first dialogue information.
In step S402 of some embodiments, step S402 may include, but is not limited to, the following steps:
first, history question information and history answer information are acquired from the second dialogue information, and the history answer information is used for answering the history question information.
Optionally, the dialogue object corresponding to each piece of dialogue information in the second dialogue information may be obtained; that is, each piece of dialogue information is sent by a corresponding dialogue object. The dialogue information whose dialogue object is a first preset object is added to the historical question information, and the dialogue information whose dialogue object is a second preset object is added to the historical answer information. The first preset object and the second preset object are different and can be set and adjusted manually; for example, in a man-machine conversation scenario, the first preset object is the user and the second preset object is the terminal.
For example, the content of one dialog interaction is as follows:
Q1: Please help me search for products suitable for 60-year-olds.
A1: A list of product names with an applicable age of 60, such as class A products and class B products, is returned.
Q2: What are the specific use, price and shelf life of class A products?
A2: A data sheet covering the use, price and shelf life of the class A products is returned.
Q3: How do class B products differ from this?
Then, in the above multiple rounds of dialogue interaction, Q1 and Q2 are historical question information, A1 and A2 are historical answer information, and Q3 is the first dialogue information.
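A sketch of separating such a record into first dialogue information, historical question information and historical answer information, assuming each utterance is tagged with its dialogue object (the "user"/"terminal" labels follow the man-machine example above):

```python
def split_dialogue(record):
    """Partition a dialogue record into the first dialogue information (the
    most recently sent utterance) plus historical questions and answers,
    by dialogue object."""
    *history, first = record
    questions = [text for speaker, text in history if speaker == "user"]
    answers = [text for speaker, text in history if speaker == "terminal"]
    return first[1], questions, answers

record = [("user", "Q1"), ("terminal", "A1"), ("user", "Q2"),
          ("terminal", "A2"), ("user", "Q3")]
print(split_dialogue(record))
```

Here Q3 becomes the first dialogue information while Q1/Q2 and A1/A2 form the historical question and answer information.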
And then, analyzing feedback information corresponding to the historical answer information according to the historical question information and the first dialogue information, wherein the feedback information is used for indicating the matching degree between the historical answer information and the historical question information, and therefore second weight information of the historical answer information is determined according to the feedback information.
Alternatively, evaluation keywords may be extracted from the historical question information and the first dialogue information, such as "yes" or "no", and the like. Matching values set for the different evaluation keywords are acquired, and the second weight information is calculated according to the matching values; for example, second weight information = (sum of the matching values of all the evaluation keywords) ÷ F, where F represents a preset total matching value determined according to the number and/or type of all the evaluation keywords.
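A minimal sketch of this keyword-based computation, with hypothetical matching values and a hypothetical total matching value F:

```python
def second_weight(keywords, matching_values, total_value):
    """Second weight information = sum of matching values of all evaluation
    keywords divided by the preset total matching value F."""
    return sum(matching_values.get(k, 0.0) for k in keywords) / total_value

# Illustrative matching values; the application does not fix these numbers.
matching_values = {"yes": 1.0, "no": 0.0, "incorrect": -0.5}
print(second_weight(["yes", "yes"], matching_values, 2.0))
```

Positive feedback keywords thus raise the weight of the historical answer information, while negative ones lower it.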
Optionally, the historical answer information, the historical question information, and the first dialogue information may be input into a deep learning model built from a deep Q-network (DQN) and the Bellman equation to obtain the second weight information, where the deep learning model is used to calculate matching values between the historical answer information and the historical question information.
And finally, splicing the first weight information, the second weight information and the dialogue records through a splicing module to obtain the statement feature vector. Specifically, the first weighting information may be used to perform weighting processing on the dialogue records to obtain a third weighting result, and the second weighting information may be used to perform weighting processing on the second dialogue information in the dialogue records to obtain a fourth weighting result, so that the third weighting result and the fourth weighting result are spliced into the sentence feature vector.
It can be seen that by introducing the first weight information, the context information of multiple rounds of dialogue can be associated to complete the content of the first dialogue information, for example by resolving the association between "this" in Q3 and "class A products". By introducing the second weight information, the feedback effect of the historical answer information can be effectively analyzed, so that the weight of the historical answer information in generating the statement feature vector is adaptively adjusted according to that feedback; the generated query statement can thus fully take into account the actual answering effect of the second dialogue information, improving the accuracy of the query statement.
Step S403: and inputting the sentence characteristic vector into a decoding module for decoding to obtain a target query sentence.
The decoding module may be a decoder constructed on a neural network in combination with the query grammar rules of the database, and is trained and used in cooperation with the coding module.
In a second aspect, please refer to fig. 5, fig. 5 is a flowchart of a model training method provided in an embodiment of the present application, where the model training method is used to train a sentence generation model as shown in the foregoing method embodiment, and the model training method includes, but is not limited to, steps S501 to S504. As described in detail below in conjunction with fig. 5.
Step S501: and acquiring a conversation sample and an initial query statement corresponding to the conversation sample.
Wherein the dialog sample includes at least two pieces of dialog information.
Step S502: and acquiring database information, and determining a query grammar rule according to the database information.
It will be appreciated that the query grammar rules are specifically related to the database included in the database information and its database type.
Step S503: and converting the initial query statement into a reference query statement according to the query grammar rule.
The initial query statement is used for querying the target data indicated by the dialogue sample, and the reference query statement conforms to the query grammar structure of the database. For example, if the initial query statement is "query class A products", it is subjected to SQL conversion to obtain the reference query statement.
Step S504: and inputting the dialogue sample, the database information and the reference query sentence into a preset generation model for training to obtain a sentence generation model.
Specifically, in step S504, the dialogue sample and the database information are input into the preset generation model for statement generation to obtain an output query statement. The similarity between the output query statement and the reference query statement is calculated through the loss function of the preset generation model; the loss function is optimized according to the similarity, the model loss is back-propagated, and the model parameters are continuously adjusted until the similarity is greater than or equal to a similarity threshold, at which point optimization of the preset generation model stops and a statement generation model meeting the requirements is obtained.
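A toy sketch of this similarity-driven training loop, with element-wise multiplication standing in for the preset generation model and cosine similarity as the similarity measure (every concrete choice below is an illustrative assumption, not the application's model):

```python
import numpy as np

def similarity(output_vec, reference_vec):
    """Cosine similarity between the output and reference query statements,
    assumed here to be embedded as vectors."""
    return float(output_vec @ reference_vec /
                 (np.linalg.norm(output_vec) * np.linalg.norm(reference_vec)))

def train(params, sample_vec, reference_vec, lr=0.1, threshold=0.99, max_steps=500):
    """Adjust parameters until the similarity between the generated and
    reference statements reaches the threshold (the stopping rule of S504)."""
    for _ in range(max_steps):
        output_vec = params * sample_vec            # stand-in "generation"
        if similarity(output_vec, reference_vec) >= threshold:
            break
        grad = output_vec - reference_vec           # gradient of 0.5*||out - ref||^2
        params -= lr * grad * sample_vec            # back-propagate and adjust
    return params

params = np.ones(4)
sample = np.array([1.0, 2.0, 1.0, 0.5])
reference = np.array([2.0, 1.0, 3.0, 1.0])
trained = train(params, sample, reference)
print(similarity(trained * sample, reference) >= 0.99)
```

The loop stops once the similarity threshold is met, mirroring the described stopping condition for optimizing the preset generation model.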
Referring to fig. 6, fig. 6 is a block diagram of a query statement generating device according to an embodiment of the present disclosure. In some embodiments, the query statement generating device includes a first obtaining module 601, a first encoding module 602, a second encoding module 603, and a generating module 604.
A first obtaining module 601, configured to obtain a dialogue record and database information, where the dialogue record includes first dialogue information and second dialogue information, the first dialogue information includes the dialogue information sent most recently in the dialogue record, and the second dialogue information includes the dialogue information in the dialogue record other than the first dialogue information; and to obtain a statement generation model, where the statement generation model includes a coding module and a generating module.
The first encoding module 602 is configured to input the first dialogue information and the database information into the encoding module for encoding, so as to obtain target sequence information.
And a second encoding module 603, configured to input the second dialogue information and the database information into the encoding module for encoding, so as to obtain historical sequence information.
The generating module 604 is configured to input the session record, the target sequence information, and the historical sequence information into the generating module to generate a statement, so as to obtain a target query statement.
It should be noted that the query statement generation apparatus in the embodiment of the present application corresponds to the foregoing query statement generation method, and for a specific training process, reference is made to the foregoing query statement generation method, which is not described herein again.
Referring to fig. 7, fig. 7 is a block diagram of a model training apparatus according to an embodiment of the present disclosure. In some embodiments, the model training apparatus comprises a second obtaining module 701, a determining module 702, a converting module 703 and a training module 704.
A second obtaining module 701, configured to obtain a dialog sample and an initial query statement corresponding to the dialog sample, where the dialog sample includes at least two pieces of dialog information; and obtaining database information.
A determining module 702, configured to determine a query grammar rule according to the database information.
The conversion module 703 is configured to convert the initial query statement into a reference query statement according to the query grammar rule, where the reference query statement satisfies a grammar structure corresponding to the query grammar rule.
And the training module 704 is used for inputting the dialogue sample, the database information and the reference query statement into a preset generation model for training to obtain a statement generation model.
It should be noted that the model training apparatus in the embodiment of the present application corresponds to the aforementioned model training method, and for the specific model training step, reference is made to the aforementioned model training method, which is not repeated herein.
An embodiment of the present application further provides an electronic device, including:
at least one memory;
at least one processor;
at least one program;
the program is stored in the memory, and the processor executes the at least one program to implement the query statement generation method or the model training method described above. The electronic device can be any intelligent terminal, including a mobile phone, a tablet computer, a personal digital assistant (PDA), a vehicle-mounted computer, and the like.
The electronic device according to the embodiment of the present application will be described in detail below with reference to the drawings.
Referring to fig. 8, fig. 8 illustrates a hardware structure of an electronic device according to another embodiment, where the electronic device includes:
the processor 801 may be implemented by a general Central Processing Unit (CPU), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute a relevant program to implement the technical solution provided in the embodiment of the present application;
the memory 802 may be implemented in the form of a Read Only Memory (ROM), a static memory device, a dynamic memory device, or a Random Access Memory (RAM). The memory 802 may store an operating system and other application programs, and when the technical solution provided by the embodiments of the present disclosure is implemented by software or firmware, the relevant program codes are stored in the memory 802, and the processor 801 calls to execute the query statement generation method or the model training method according to the embodiments of the present disclosure;
an input/output interface 803 for realizing input and output of information;
the communication interface 804 is used for realizing communication interaction between the device and other devices, and can realize communication in a wired manner (such as USB, network cable, and the like) or in a wireless manner (such as mobile network, WIFI, bluetooth, and the like);
a bus 805 that transfers information between the various components of the device (e.g., the processor 801, memory 802, input/output interface 803, and communications interface 804);
wherein the processor 801, the memory 802, the input/output interface 803 and the communication interface 804 are communicatively connected to each other within the device via a bus 805.
The embodiment of the present application further provides a storage medium, which is a computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and the computer-executable instructions are used to enable a computer to execute the query statement generation method or the model training method.
The memory, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer-executable programs. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and these remote memories may be connected to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiments described in the embodiments of the present application are for more clearly illustrating the technical solutions of the embodiments of the present application, and do not constitute limitations on the technical solutions provided in the embodiments of the present application, and it is obvious to those skilled in the art that the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems with the evolution of technologies and the emergence of new application scenarios.
It will be understood by those skilled in the art that the embodiments shown in the figures are not limiting, and may include more or fewer steps than those shown, or some of the steps may be combined, or different steps.
The above-described embodiments of the apparatus are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may also be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, and functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the application and the above-described figures, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be implemented in sequences other than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" is used to describe the association relationship of the associated object, indicating that there may be three relationships, for example, "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b and c may be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is only a logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the present application, which are essential or part of the technical solutions contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product, which is stored in a storage medium and includes multiple instructions for enabling an electronic device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing programs, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The preferred embodiments of this application have been described above with reference to the accompanying drawings, but the scope of the claims of the embodiments of this application is not limited thereto. Any modifications, equivalent substitutions, and improvements made by those skilled in the art without departing from the scope and spirit of the embodiments of this application shall fall within the scope of the claims of the embodiments of this application.

Claims (10)

1. A query statement generation method, the method comprising:
obtaining a conversation record and database information, wherein the conversation record comprises first conversation information and second conversation information, the first conversation information comprises the latest conversation information sent in the conversation record, and the second conversation information comprises the conversation information in the conversation record other than the first conversation information;
obtaining a statement generation model, wherein the statement generation model comprises a coding module and a generation module;
inputting the first dialogue information and the database information into the coding module for coding to obtain target sequence information;
inputting the second dialogue information and the database information into the coding module for coding to obtain historical sequence information;
and inputting the conversation record, the target sequence information and the historical sequence information into the generation module for sentence generation to obtain a target query sentence.
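The end-to-end flow recited in claim 1 can be sketched as follows. The encoder and generator below are toy stand-ins: the function names, the `tbl_` prefix convention, and the whitespace tokenization are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical sketch of the claim-1 pipeline: encode the latest utterance
# together with the database schema, encode the remaining history the same
# way, then feed the record plus both encodings to a generator.

def encode(dialogue_info: str, database_info: list[str]) -> list[str]:
    """Toy coding module: tokenize the utterance and splice in schema tokens."""
    return dialogue_info.lower().split() + [t.lower() for t in database_info]

def generate(record: list[str], target_seq: list[str], history_seq: list[str]) -> str:
    """Toy generation module: pick a table mentioned in the target sequence."""
    tables = [t for t in target_seq if t.startswith("tbl_")]
    table = tables[0] if tables else "unknown"
    return f"SELECT * FROM {table};"

record = ["which employees joined in 2021?", "show me the departments"]
first_info = record[-1]              # first conversation information (latest utterance)
second_info = record[:-1]            # second conversation information (the rest)
db_info = ["tbl_department", "tbl_employee"]

target_seq = encode(first_info, db_info)
history_seq = encode(" ".join(second_info), db_info)
query = generate(record, target_seq, history_seq)
print(query)  # SELECT * FROM tbl_department;
```

A real statement generation model would replace both toy functions with trained networks, but the data flow between the coding module and the generation module is the one claimed.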
2. The method of claim 1, wherein the coding module comprises a coding network, a first attention network, and a sequence generation network; and the inputting the first dialogue information and the database information into the coding module for coding to obtain the target sequence information comprises:
inputting the first dialogue information and the database information into the coding network for word vector coding to obtain a first word vector corresponding to the first dialogue information and a second word vector corresponding to the database information;
inputting the first word vector and the second word vector into the first attention network for processing to obtain a first attention result;
and inputting the first word vector and the first attention result into the sequence generation network for processing to obtain target sequence information.
3. The method according to claim 2, wherein the database information includes a table name of at least one data table and data item information of the data table, the data item information is used for determining a data item included in the data table, and the second word vector includes a first sub-vector corresponding to the data table and a second sub-vector corresponding to the data item information; the inputting the first word vector and the second word vector into the first attention network for processing to obtain a first attention result, including:
inputting the first word vector and the first sub-vector into the first attention network for processing to obtain a first processing result;
inputting the first word vector and the second sub-vector into the first attention network for processing to obtain a second processing result;
and splicing the first processing result and the second processing result into a first attention result.
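Claim 3's two attention passes, one over the table-name sub-vectors and one over the data-item sub-vectors, followed by splicing of the two results, can be illustrated with plain scaled dot-product attention. The attention form and the vector dimensions are assumptions; the patent does not fix them.

```python
import numpy as np

def attention(query_vecs, key_vecs):
    """Scaled dot-product attention of query vectors over key vectors."""
    scores = query_vecs @ key_vecs.T / np.sqrt(key_vecs.shape[1])
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return weights @ key_vecs

rng = np.random.default_rng(0)
first_word_vec = rng.normal(size=(4, 8))   # utterance token vectors
table_sub_vec = rng.normal(size=(2, 8))    # first sub-vector: table-name embeddings
item_sub_vec = rng.normal(size=(5, 8))     # second sub-vector: data-item embeddings

result_tables = attention(first_word_vec, table_sub_vec)  # first processing result
result_items = attention(first_word_vec, item_sub_vec)    # second processing result
first_attention_result = np.concatenate([result_tables, result_items], axis=-1)
print(first_attention_result.shape)  # (4, 16)
```

Splitting the schema attention into a table pass and a data-item pass lets each token attend to coarse (table) and fine (column) structure separately before the results are spliced.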
4. The method of claim 2, wherein the inputting the first dialog information and the database information into the coding network for word vector coding to obtain a first word vector corresponding to the first dialog information and a second word vector corresponding to the database information comprises:
performing word segmentation processing on the first dialogue information to obtain a word segmentation set;
obtaining a table name of at least one data table and data item information of the data table from the database information, wherein the data item information is used for determining data items included in the data table;
constructing a table set according to all the table names;
constructing a data item set according to the data item information of the data table;
splicing the word segmentation set, the table set and the data item set into an input sequence;
and inputting the input sequence into the coding network for word vector coding to obtain a first word vector corresponding to the first dialogue information and a second word vector corresponding to the database information.
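The sequence construction in claim 4, word segmentation of the utterance, a table set from all table names, a data-item set from the data-item information, all spliced into one input sequence, might look like the following. The `[SEP]` separator and the whitespace segmentation are assumptions for illustration.

```python
def build_input_sequence(first_dialogue, database_info):
    """Splice utterance tokens, table names, and data items into one sequence.
    The [SEP]-style separator token is an assumed convention."""
    tokens = first_dialogue.lower().replace("?", "").split()  # naive word segmentation
    tables = sorted(database_info)                            # table set from all table names
    items = [f"{t}.{c}" for t in tables for c in database_info[t]]  # data-item set
    return tokens + ["[SEP]"] + tables + ["[SEP]"] + items

db = {"employee": ["id", "name"], "department": ["id"]}
seq = build_input_sequence("List employee names?", db)
print(seq)
```

The spliced sequence is then fed to the coding network, which emits the first word vector (utterance positions) and the second word vector (schema positions) in one pass.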
5. The method of any one of claims 1 to 4, wherein the generation module comprises a second attention network, a splicing module, and a decoding module; and the inputting the dialogue record, the target sequence information, and the historical sequence information into the generation module for sentence generation to obtain a target query sentence comprises:
inputting the target sequence information and the historical sequence information into the second attention network for processing to obtain first weight information;
splicing the first weight information and the dialogue record through the splicing module to obtain a statement feature vector;
and inputting the statement feature vector into the decoding module for decoding to obtain a target query statement.
6. The method according to claim 5, wherein the splicing the first weight information and the dialogue record through the splicing module to obtain a statement feature vector comprises:
obtaining historical question information and historical answer information from the second dialogue information, wherein the historical answer information is used for answering the historical question information;
analyzing feedback information corresponding to the historical answer information according to the historical question information and the first dialogue information, wherein the feedback information is used for representing the matching degree between the historical answer information and the historical question information;
determining second weight information of the historical answer information according to the feedback information;
and splicing the first weight information, the second weight information and the dialogue records through the splicing module to obtain a statement feature vector.
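Claim 6's feedback signal, a matching degree between a historical answer and its question that is used to weight that answer, could be approximated by a heuristic such as the following. Both the overlap score and the linear weight mapping are illustrative assumptions, not the patent's formula.

```python
def feedback_score(history_question, history_answer, first_dialogue):
    """Toy matching degree: token overlap between question and answer,
    reduced if the new utterance repeats the question (suggesting the
    answer did not satisfy the user)."""
    q, a, f = (set(s.lower().split()) for s in (history_question, history_answer, first_dialogue))
    overlap = len(q & a) / max(len(q), 1)          # answer relevance
    repeat_penalty = len(q & f) / max(len(q), 1)   # user re-asking -> low match
    return max(overlap - repeat_penalty, 0.0)

def second_weight(feedback):
    """Map matching degree to a weight on the historical answer.
    The linear scaling into [0.5, 1.0] is an assumption."""
    return 0.5 + 0.5 * feedback

fb = feedback_score("show sales by region", "sales per region table", "now filter to 2021")
w = second_weight(fb)
print(round(w, 2))  # 0.75
```

Because the new utterance shares no tokens with the old question, the answer is treated as well matched and receives a correspondingly high weight in the splicing step.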
7. A model training method for training the statement generation model according to any one of claims 1 to 6, the method comprising:
obtaining a conversation sample and an initial query statement corresponding to the conversation sample, wherein the conversation sample comprises at least two pieces of conversation information;
acquiring database information, and determining a query grammar rule according to the database information;
converting the initial query statement into a reference query statement according to the query grammar rule;
and inputting the dialogue sample, the database information, and the reference query statement into a preset generation model for training to obtain a statement generation model.
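The grammar-rule conversion step of claim 7, rewriting an initial query statement into a canonical reference query statement before training, can be sketched with a toy normalization pass. The specific rules here (uppercasing keywords, restoring canonical table-name casing from the database information) are assumed examples of a "query grammar rule":

```python
def normalize_query(initial_query, table_names):
    """Convert an initial query statement into a reference query statement
    under toy grammar rules: uppercase SQL keywords, canonical table casing."""
    keywords = {"select", "from", "where", "and", "or"}
    canonical = {t.lower(): t for t in table_names}
    out = []
    for tok in initial_query.split():
        low = tok.lower()
        if low in keywords:
            out.append(low.upper())       # rule 1: keywords uppercased
        elif low in canonical:
            out.append(canonical[low])    # rule 2: table names canonicalized
        else:
            out.append(tok)
    return " ".join(out)

ref = normalize_query("select * from EMPLOYEE where id = 1", ["Employee"])
print(ref)  # SELECT * FROM Employee WHERE id = 1
```

Normalizing the supervision targets this way means the preset generation model never has to learn cosmetic variation, only the mapping from dialogue to query structure.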
8. An apparatus for generating a query statement, the apparatus comprising:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the conversation record comprises first conversation information and second conversation information, the first conversation information comprises the latest conversation information sent out in the conversation record, and the second conversation information comprises the conversation information except the first conversation information in the conversation record; obtaining a statement generating model, wherein the statement generating model comprises a coding module and a generating module;
the first coding module is used for inputting the first dialogue information and the database information into the coding module for coding processing to obtain target sequence information;
the second coding module is used for inputting the second dialogue information and the database information into the coding module for coding to obtain historical sequence information;
and the statement generation module is used for inputting the conversation record, the target sequence information, and the historical sequence information into the generation module of the statement generation model for sentence generation to obtain a target query sentence.
9. An electronic device, comprising:
at least one memory;
at least one processor;
at least one computer program;
wherein the at least one computer program is stored in the at least one memory, and the at least one processor executes the at least one computer program to implement:
the query statement generation method according to any one of claims 1 to 6; or
The model training method of claim 7.
10. A computer-readable storage medium having computer-executable instructions stored thereon for causing a computer to perform:
the query statement generation method according to any one of claims 1 to 6; or
The model training method of claim 7.
CN202210906294.9A 2022-07-29 2022-07-29 Query statement generation method and device, model training method, equipment and medium Pending CN115238143A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210906294.9A CN115238143A (en) 2022-07-29 2022-07-29 Query statement generation method and device, model training method, equipment and medium

Publications (1)

Publication Number Publication Date
CN115238143A 2022-10-25

Family

ID=83676907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210906294.9A Pending CN115238143A (en) 2022-07-29 2022-07-29 Query statement generation method and device, model training method, equipment and medium

Country Status (1)

Country Link
CN (1) CN115238143A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116775848A (en) * 2023-08-23 2023-09-19 宁波吉利汽车研究开发有限公司 Control method, device, computing equipment and storage medium for generating dialogue information
CN116775848B (en) * 2023-08-23 2023-11-07 宁波吉利汽车研究开发有限公司 Control method, device, computing equipment and storage medium for generating dialogue information

Similar Documents

Publication Publication Date Title
CN113792818B (en) Intention classification method and device, electronic equipment and computer readable storage medium
CN107798140B (en) Dialog system construction method, semantic controlled response method and device
CN111931517B (en) Text translation method, device, electronic equipment and storage medium
CN109740158B (en) Text semantic parsing method and device
CN112417102A (en) Voice query method, device, server and readable storage medium
CN113887215A (en) Text similarity calculation method and device, electronic equipment and storage medium
CN115309877B (en) Dialogue generation method, dialogue model training method and device
CN113239169A (en) Artificial intelligence-based answer generation method, device, equipment and storage medium
CN116561538A (en) Question-answer scoring method, question-answer scoring device, electronic equipment and storage medium
CN111026840A (en) Text processing method, device, server and storage medium
CN116541493A (en) Interactive response method, device, equipment and storage medium based on intention recognition
CN115510232A (en) Text sentence classification method and classification device, electronic equipment and storage medium
CN115272540A (en) Processing method and device based on virtual customer service image, equipment and medium
CN115497477A (en) Voice interaction method, voice interaction device, electronic equipment and storage medium
CN113221553A (en) Text processing method, device and equipment and readable storage medium
CN114492661A (en) Text data classification method and device, computer equipment and storage medium
CN115238143A (en) Query statement generation method and device, model training method, equipment and medium
CN113901838A (en) Dialog detection method and device, electronic equipment and storage medium
CN110287396B (en) Text matching method and device
CN117033796A (en) Intelligent reply method, device, equipment and medium based on user expression preference
CN116312463A (en) Speech synthesis method, speech synthesis device, electronic device, and storage medium
CN115795007A (en) Intelligent question-answering method, intelligent question-answering device, electronic equipment and storage medium
CN115796141A (en) Text data enhancement method and device, electronic equipment and storage medium
CN115292495A (en) Emotion analysis method and device, electronic equipment and storage medium
CN114611529A (en) Intention recognition method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination