CN112559713A - Text relevance judgment method and device, model, electronic equipment and readable medium - Google Patents


Info

Publication number
CN112559713A
Authority
CN
China
Prior art keywords
text
vector
dependency relationship
semantic role
semantic
Prior art date
Legal status
Granted
Application number
CN202011548198.9A
Other languages
Chinese (zh)
Other versions
CN112559713B (en)
Inventor
张文君
詹俊峰
庞海龙
薛璐影
施鹏
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202011548198.9A
Publication of CN112559713A
Application granted
Publication of CN112559713B
Legal status: Active

Classifications

    • G06F16/3329 Natural language query formulation or dialogue systems
    • G06F16/3344 Query execution using natural language analysis
    • G06F16/35 Clustering; Classification
    • G06F18/25 Fusion techniques
    • G06F40/30 Semantic analysis
    • G06N3/045 Combinations of networks

Abstract

The disclosure provides a text relevance judgment method and apparatus, a model, an electronic device, and a readable medium, relating to the technical fields of computers and artificial intelligence. The specific implementation scheme is as follows: determine a semantic role vector and a dependency relationship vector for a first text and for a second text; fuse the semantic role vector and the dependency relationship vector of each text separately, then fuse the two per-text results again to obtain a fused vector of the first text and the second text; and classify the fused vector through a preset neural network, determining the relevance between the first text and the second text from the classification result. With this scheme, irrelevant answers provided in a question-answering service can be identified.

Description

Text relevance judgment method and device, model, electronic equipment and readable medium
Technical Field
The present disclosure relates to the field of computer and artificial intelligence technologies, and in particular, to a text relevance determination method and apparatus, a model, an electronic device, and a readable medium.
Background
With the development of network technology, more and more users obtain information through question-and-answer services provided over the network. For example, a user poses a question on a platform providing such a service, and other users on the platform provide answers.
Due to the openness of the network, the quality of answers provided through question-and-answer services varies greatly. Some answers help the questioner obtain information, while others fail to meet the questioner's needs, i.e., they are irrelevant to the question. It is therefore necessary to identify irrelevant answer content in question-and-answer services.
Disclosure of Invention
A text relevance judgment method and device, a model, an electronic device and a readable medium are provided.
According to a first aspect, there is provided a relevance determination method, including: determining a semantic role vector and a dependency relationship vector in a first text and a semantic role vector and a dependency relationship vector in a second text; performing vector fusion on the semantic role vector and the dependency relationship vector within the first text and, separately, within the second text, and performing vector fusion again on the two results to obtain a fused vector of the first text and the second text; and classifying the fused vector through a preset neural network and determining the relevance between the first text and the second text according to the classification result.
According to a second aspect, there is provided a relevance determination apparatus, comprising: a vector determination module for determining a semantic role vector and a dependency relationship vector in a first text and in a second text; a vector fusion module for performing vector fusion on the semantic role vector and the dependency relationship vector within each text and fusing the two per-text results again to obtain a fused vector of the first text and the second text; and a classification determination module for classifying the fused vector through a preset neural network and determining the relevance between the first text and the second text according to the classification result.
According to a third aspect, a text relevance determination model is provided, which is configured to execute any one of the above relevance determination methods according to a received first text and a received second text, so as to obtain a relevance determination result of the first text and the second text.
According to a fourth aspect, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute any one of the above relevance determination methods.
According to a fifth aspect, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform any one of the above relevance determination methods.
According to a sixth aspect, there is provided a computer program product comprising a computer program which, when executed by a processor, implements any one of the above relevance determination methods.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram of a scenario provided by an embodiment of the present disclosure;
fig. 2 is a flowchart of a text relevance determination method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a model principle of a text relevance determination method according to an embodiment of the present disclosure;
Fig. 4 is a flowchart illustrating a text relevance determination method according to an exemplary embodiment of the disclosure;
fig. 5 is a schematic structural diagram of a text relevance determination apparatus according to an embodiment of the present disclosure;
fig. 6 is a block diagram of an electronic device for implementing a text relevance determination method according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
In implementations of the present disclosure, the question-and-answer service may be, for example, one provided by a community Q&A platform, a website Q&A platform, an interactive Q&A platform, or a similar platform. Taking the community Q&A platform as an example, it allows users to pose questions according to their own needs and allows other users to collaboratively edit and answer those questions, forming an interactive community with a knowledge attribute.
This interactive community model gives users a channel for acquiring knowledge, but because of the community's openness, answer quality varies widely: some answers help questioners obtain information, while others fail to meet their needs, i.e., they are irrelevant to the question. The variation in answer quality is thus a main problem to be solved in community question answering.
Low-quality answers are generally identified in one of two ways: based on a similarity algorithm, or based on a word list.
When answer content is identified with a similarity algorithm, the question and answer content is converted into corresponding feature vectors, the distance between the question vector and the answer vector is computed via cosine similarity, and the semantic relevance of the question and answer is determined from that distance, so that answer content with low semantic relevance can be found. However, this approach has a limited range of application: it only recognizes question-answer pairs with low relevance and cannot correctly handle cases where the question and answer are semantically close. For example, when the question is "Are apples good to eat?" and the answer is "An apple is a kind of fruit", both sentences are about apples, so the similarity computed by the algorithm is extremely high and the evasive answer cannot be identified.
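The cosine-similarity comparison described above can be sketched in a few lines of plain Python (a hedged illustration only; the bag-of-words vectors below are made up, and real systems would typically compute similarity over learned embeddings):

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two feature vectors: 1.0 means the
    # vectors point the same way, 0.0 means they are orthogonal.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical bag-of-words vectors for the apple example: the question and
# answer share most terms, so the similarity comes out high even though the
# answer dodges the question.
question_vec = [1, 1, 1, 0]
answer_vec = [1, 1, 0, 1]
similarity = cosine_similarity(question_vec, answer_vec)  # about 0.67
```

This is exactly the failure mode noted above: a high similarity score despite an evasive answer.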
When recognition is based on a word list, whether an answer is irrelevant is generally judged by checking whether the answer hits a preset low-quality vocabulary, so the judgment effect depends heavily on the size of that vocabulary. The word-list approach can therefore only catch low-quality content containing the preset feature words, provides no judgment capability for content it misses, and compiling the low-quality word list involves considerable manual effort.
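A minimal sketch of the word-list check just described (the terms in the list are invented for illustration; a production list would be hand-curated and far larger):

```python
# Hypothetical low-quality term list; real lists require heavy manual curation.
LOW_QUALITY_TERMS = {"dunno", "whatever", "lol"}

def hits_low_quality_list(answer, vocab=LOW_QUALITY_TERMS):
    # Flag the answer if it contains any preset low-quality term.
    return any(term in answer for term in vocab)
```

An answer containing none of the preset terms passes unflagged no matter how evasive it is, which is the coverage gap noted above.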
Fig. 1 is a schematic diagram of a scenario according to an embodiment of the disclosure. The scenario shown in fig. 1 includes: a terminal 11, a question-and-answer service platform 12, a network 13, and a text processing device 14.
The terminal 11 may include, but is not limited to: a personal computer, a smartphone, a tablet, a personal digital assistant, a server, and the like. A user can input question content into the question-and-answer service platform 12 through the terminal 11.
The question-and-answer service platform 12 may be implemented as a network platform or as an application providing the question-and-answer service; if implemented as an application, it may be installed on the terminal 11. The platform 12 may search the network 13 according to the received question content to obtain answer content; alternatively, it may obtain answer content by collecting edits and answers to the question content contributed by other users on the network 13.
Network 13 is the medium used to provide communications links between various platforms and electronic devices. In particular, network 13 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
As shown in fig. 1, in one embodiment, the question-and-answer service platform 12 may input the received question content and the obtained answer content to the text processing device 14; the text processing device 14 executes the relevance determination method of the embodiments of the present disclosure on the question content and answer content received from the platform 12 to determine whether the answer content is relevant to the question.
In another embodiment, after receiving the answer content provided by the platform 12, the terminal 11 may itself input the question content and the received answer content to the text processing device 14; the text processing device 14 then executes the relevance determination method on the question content and answer content received from the terminal 11 to determine whether the answer content answers the question or dodges it.
It should be understood that the number of devices in fig. 1 is merely illustrative and can be flexibly adjusted according to actual application needs. For example, the text processing device 14 may be a single service device or a server cluster comprising a plurality of service devices; the configuration is not limited in this respect.
Fig. 2 is a flowchart of a text relevance determination method according to an embodiment of the present disclosure.
In a first aspect, referring to fig. 2, an embodiment of the present disclosure provides a text relevance determination method, which may include the following steps.
S110, determining a semantic role vector and a dependency relationship vector in the first text and a semantic role vector and a dependency relationship vector in the second text.
And S120, respectively carrying out vector fusion on the semantic role vector and the dependency relationship vector in the first text and the semantic role vector and the dependency relationship vector in the second text, and carrying out vector fusion again on the vector fusion results obtained respectively to obtain a fused vector of the first text and the second text.
S130, classifying the fused vectors through a preset neural network, and determining the correlation between the first text and the second text according to the classification result.
According to the text relevance determination method of the embodiments of the disclosure, the semantic role vector and the dependency relationship vector of the first text are fused, the semantic role vector and the dependency relationship vector of the second text are fused, and the two fusion results are fused again to obtain a fused vector of the first text and the second text; the fused vector is then classified with a preset neural network, and the relevance between the two texts is determined from the classification result.
When the first text and the second text are a question text and an answer text, the relevance of the question and the answer can be determined, so that answer content that is irrelevant to the question can be identified.
Compared with word-list-based low-quality recognition, the method is not limited by preset word-list content and needs no manual mining and curation of word lists, saving labor cost and applying more broadly.
Compared with semantic-relevance judgment, the semantic role vector and dependency relationship vector of a text represent its syntactic structure, so the vectors of the first and second texts are obtained from the perspective of syntax; this removes the influence of similar word meanings on the judgment result and makes the result stable. Moreover, because the space of possible syntactic structures is far smaller than the space of possible word meanings, the method has wider universality; and because the feature space corresponding to the syntactic structure is small, the neural network modules needed to process the feature vectors occupy little space, process quickly, and recognize efficiently.
In some embodiments, the first text and the second text are texts or character strings having a length less than a predetermined length threshold.
In this embodiment, the shorter the text, the fewer semantic roles it contains and the simpler its dependency relationships, so the more accurate the determination result.
In some embodiments, the first text and the second text are question text and answer text.
In this embodiment, when the first text and the second text are a question text and an answer text, the semantic role vector and dependency relationship vector of the question text, and those of the answer text, are each fused from the perspective of syntactic structure, and the fusion result of the question text and the fusion result of the answer text are fused again into one vector; a preset neural network classifies this fused vector, and the relevance between the question text and the answer text is determined from the classification result. The question and answer are thus matched entirely on syntactic structure, so the relevance result is unaffected by word meaning and is stable. Because the space of syntactic structures is far smaller than the space of word meanings, the method has wider universality for question-answer relevance judgment; and because the syntactic feature space of the question and answer texts is small, the neural network modules needed to process the feature vectors occupy little space and run quickly.
In the embodiments of the present disclosure, the first text and the second text may include, but are not limited to, a question text and an answer text; any two texts requiring relevance determination may be processed by the text relevance determination method of the embodiments of the present disclosure, which improves the diversity and flexibility of the data that relevance determination can handle.
In some embodiments, prior to step S110, the method further comprises: and S11, analyzing the semantic role and the dependency relationship of the acquired first text and the acquired second text respectively to obtain the semantic role and the dependency relationship of the first text and the semantic role and the dependency relationship of the second text.
Illustratively, semantic roles include one or more of: subject, object, indirect object, predicate verb, manner, time, and non-semantic roles in the text; dependency relationships include one or more of: subject-predicate relationship, coordination relationship, and verb-object relationship.
In this embodiment, the semantic roles and dependency relationships of the first text and of the second text are obtained through parsing, providing a data basis for the subsequent syntax-based text relevance determination.
In some embodiments, step S110 may specifically include the following steps.
S21, the semantic roles and the dependency relations in the first text and the semantic roles and the dependency relations in the second text are respectively encoded, and sparse feature codes corresponding to the semantic roles and the dependency relations in the first text and sparse feature codes corresponding to the semantic roles and the dependency relations in the second text are respectively obtained.
And S22, embedding the corresponding sparse feature codes respectively to obtain dense feature codes corresponding to the semantic roles and the dependency relationships in the first text and dense feature codes corresponding to the semantic roles and the dependency relationships in the second text.
And S23, performing convolution and pooling on the dense feature codes respectively corresponding to the dense feature codes to obtain a semantic role vector and a dependency relationship vector in the first text and a semantic role vector and a dependency relationship vector in the second text.
In this embodiment, the neural network information compression processing steps of S21-S23 are performed for the semantic role and dependency relationship in the first text and the semantic role and dependency relationship in the second text, respectively, to obtain the semantic role vector and dependency relationship vector in the first text and the semantic role vector and dependency relationship vector in the second text.
In step S21, the purpose of the encoding is to convert the semantic roles and dependency relationships in each of the first and second texts into codes that a computer can recognize. Illustratively, the encoding manner may include, but is not limited to, any one of one-hot encoding, binarization encoding, histogram encoding, and count encoding.
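As a sketch of step S21, one-hot encoding over a small label vocabulary can be written as follows (the role tag set is hypothetical, chosen only to illustrate the sparse codes; the patent does not fix a vocabulary):

```python
# Hypothetical tag set for semantic roles.
ROLE_VOCAB = ["subject", "object", "predicate", "time", "none"]

def one_hot(labels, vocab):
    # Each label becomes a sparse indicator vector over the vocabulary.
    index = {label: i for i, label in enumerate(vocab)}
    return [[1 if j == index[label] else 0 for j in range(len(vocab))]
            for label in labels]

sparse_codes = one_hot(["subject", "predicate"], ROLE_VOCAB)
# sparse_codes[0] -> [1, 0, 0, 0, 0]; sparse_codes[1] -> [0, 0, 1, 0, 0]
```

The same encoding would be applied to the dependency-relationship labels with their own vocabulary.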
By the feature coding, sparse feature codes corresponding to the semantic roles and the dependency relationships in the first text and sparse feature codes corresponding to the semantic roles and the dependency relationships in the second text can be obtained.
In step S22, each sparse feature code generated in step S21 may be converted into a corresponding dense feature code by an embedding process, thereby reducing the dimensionality of the feature space, the complexity of feature construction, and the amount of computation.
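Step S22's embedding can be pictured as a table lookup: each one-hot row selects one dense row of an embedding table, which is equivalent to multiplying the sparse matrix by the table. A sketch with made-up dimensions (in practice the table entries are learned parameters, not hand-picked numbers):

```python
def embed(sparse_codes, table):
    # Replace each one-hot row with the dense embedding row it selects.
    return [table[row.index(1)] for row in sparse_codes]

# Hypothetical 3-label vocabulary embedded into 2 dimensions.
embedding_table = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
dense_codes = embed([[0, 1, 0], [1, 0, 0]], embedding_table)
# dense_codes -> [[0.3, 0.4], [0.1, 0.2]]
```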
In step S23, each dense feature code generated in step S22 is subjected to ordered information mixing by convolution, and redundant information is removed by pooling, leaving the key information. After convolution and pooling, the semantic role vector and dependency relationship vector of the first text and those of the second text are obtained; the original character-level expression of the semantic roles and dependency relationships in the two texts is thereby converted into comprehensive feature vectors containing richer information.
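Step S23's convolution-then-pooling can be sketched on a one-dimensional sequence (the kernel weights and inputs are invented for illustration; a real model learns multi-channel kernels):

```python
def conv1d(seq, kernel):
    # Valid one-dimensional convolution: mixes each window of neighbouring
    # values into a single activation.
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

def global_max_pool(seq):
    # Pooling discards redundant activations and keeps the strongest one.
    return max(seq)

activations = conv1d([1.0, 2.0, 3.0, 4.0], [0.5, 0.5])  # [1.5, 2.5, 3.5]
pooled = global_max_pool(activations)                    # 3.5
```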
In some embodiments, step S120 may specifically include the following steps.
S31, performing vector addition on the semantic role vector and the dependency relationship vector in the first text to obtain a question vector;
S32, performing vector addition on the semantic role vector and the dependency relationship vector in the second text to obtain an answer vector;
and S33, carrying out vector splicing on the question vector and the answer vector to obtain a fused vector of the first text and the second text.
In these steps, vector addition is performed on the semantic role vector and the dependency relationship vector within the same text to obtain that text's summed vector; the summed vectors of the two texts are then spliced to obtain a new vector fused by concatenation.
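Steps S31-S33 reduce to element-wise addition within a text and concatenation across texts; a sketch with invented 2-dimensional vectors:

```python
def add_vectors(role_vec, dep_vec):
    # Element-wise addition fuses the two views (roles, dependencies) of one text.
    return [r + d for r, d in zip(role_vec, dep_vec)]

def splice(question_vec, answer_vec):
    # Concatenation fuses the two texts into one joint vector.
    return question_vec + answer_vec

question_vec = add_vectors([1.0, 2.0], [3.0, 4.0])  # [4.0, 6.0]
answer_vec = add_vectors([5.0, 5.0], [0.0, 1.0])    # [5.0, 6.0]
fused = splice(question_vec, answer_vec)            # [4.0, 6.0, 5.0, 6.0]
```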
In some embodiments, the predetermined neural network comprises a fully connected layer; step S130 may specifically include: s41, classifying the fused vectors through the full connection layer to obtain the classification result of the fused vectors; and S42, determining that the first text is related to the second text when the classification result is a preset first value, and determining that the first text is not related to the second text when the classification result is a preset second value.
In this embodiment, the fused vector of the first text and the second text is input to a neural network containing a fully connected layer, which acts as the classifier, yielding a classification result for the fused vector; the relevance and degree of matching between the first text and the second text are determined from the value of that result.
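A single fully connected unit followed by a threshold is enough to sketch steps S41-S42 (the weights, bias, and threshold below are invented; in the patent's scheme these are learned by the network):

```python
import math

def fully_connected(x, weights, bias):
    # One fully connected unit with a sigmoid, acting as the classifier head.
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def classify(score, threshold=0.5):
    # Score above the threshold -> first value (related), else second (unrelated).
    return 1 if score > threshold else 0

score = fully_connected([4.0, 6.0, 5.0, 6.0], [0.2, 0.2, -0.1, -0.1], 0.1)
label = classify(score)  # 1 -> the two texts are judged related
```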
In the text relevance determination method of the embodiments of the disclosure, whether the first text and the second text are relevant can be judged from syntactic structure: the text is decomposed according to its syntactic structure via semantic roles such as subject and predicate and their mutual dependency relationships, and the decomposed information is input into a neural network for classification and recognition. For example, when the question is "Are apples good to eat?" and the answer is "An apple is a kind of fruit", the method removes the influence of word meaning on the relevance judgment and obtains a more accurate result.
For a better understanding of the present disclosure, a text relevance determination flow of an exemplary embodiment of the present disclosure is described below with reference to fig. 3 and 4.
Fig. 3 is a schematic diagram of the model principle of a text relevance determination method according to an embodiment of the present disclosure. As shown in fig. 3, the model structure of the embodiment may include: an input module 310, a parsing module 320, an encoding module 330, an embedding module 340, a convolution and pooling module 350, a merge (concat) module 360, a fusion module 370, a fully connected module 380, and an output module 390.
In fig. 3, the input module 310 is configured to receive a first text and a second text; the first text is, for example, the question text "Is the sky blue?", and the second text is, for example, the answer text "It is not blue".
The parsing module 320 is configured to parse the question and the answer for semantic roles and dependency relationships, producing for each a list of semantic roles (verbs, nouns, adjectives, and the like) and a list of dependency relationships (such as subject-predicate relationships, verb-object relationships, and coordination structures).
The encoding module 330 is configured to encode the semantic role list and the dependency relationship list of the first text, and encode the semantic role list and the dependency relationship list of the second text to obtain a sparse feature code corresponding to the semantic role and the dependency relationship in the first text and a sparse feature code corresponding to the semantic role and the dependency relationship in the second text.
The embedding processing module 340 is configured to perform embedding processing on the sparse feature codes corresponding to the semantic roles and the dependency relationships in the first text and the sparse feature codes corresponding to the semantic roles and the dependency relationships in the second text, respectively, to obtain dense feature codes corresponding to the semantic roles and the dependency relationships in the first text and dense feature codes corresponding to the semantic roles and the dependency relationships in the second text.
Illustratively, when one-hot is adopted for feature coding, the sparse matrix formed by one-hot can be compressed into a dense matrix through an embedding operation.
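The compression described above can be illustrated in numpy: multiplying a one-hot row by an embedding table simply selects the corresponding dense row, turning the sparse matrix into a dense one. The table values and sizes below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_dim = 4, 3  # illustrative sizes

# Embedding table: one learned dense row per one-hot dimension (random stand-in here).
embedding = rng.normal(size=(vocab_size, embed_dim)).astype(np.float32)

# Sparse one-hot codes for two labels, as produced by the encoding module.
one_hot = np.zeros((2, vocab_size), dtype=np.float32)
one_hot[0, 1] = 1.0
one_hot[1, 3] = 1.0

# Multiplying a one-hot row by the table just selects a row of the table:
dense = one_hot @ embedding  # shape (2, embed_dim) — the dense feature codes
assert np.allclose(dense[0], embedding[1])
```

In practice the embedding table is a trained parameter of the network; the lookup itself is exactly this row selection.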
The convolution and pooling module 350 is configured to compress the dense feature codes through one or more one-dimensional convolution and one-dimensional pooling operations, generating the semantic role vector and dependency relationship vector of the first text and the semantic role vector and dependency relationship vector of the second text.
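A minimal numpy sketch of this compression, assuming valid (un-padded) one-dimensional convolution followed by global max pooling over the time axis; the filter count and widths are illustrative, and real filter weights would be learned:

```python
import numpy as np

def conv1d(seq, kernels):
    """Valid 1-D convolution: seq (T, D), kernels (K, W, D) -> features (T-W+1, K)."""
    K, W, D = kernels.shape
    T = seq.shape[0]
    out = np.empty((T - W + 1, K), dtype=np.float32)
    for t in range(T - W + 1):
        window = seq[t:t + W]  # (W, D) slice of the sequence
        # Each filter produces one scalar per window position.
        out[t] = np.tensordot(kernels, window, axes=([1, 2], [0, 1]))
    return out

def max_pool(features):
    """Global max pooling over time: collapse to one fixed-length vector of K features."""
    return features.max(axis=0)

rng = np.random.default_rng(1)
dense_codes = rng.normal(size=(5, 3)).astype(np.float32)  # sequence of 5 dense codes
kernels = rng.normal(size=(4, 2, 3)).astype(np.float32)   # 4 filters of width 2
vector = max_pool(conv1d(dense_codes, kernels))           # fixed-length vector, shape (4,)
```

Whatever the input sequence length, the pooled output has one entry per filter, which is what makes the result usable as a fixed-size semantic role or dependency relationship vector.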
The merging module 360 is configured to perform vector addition on the semantic role vector and the dependency relationship vector of the first text to obtain the vector of the first text, that is, the question vector; and is further configured to perform vector addition on the semantic role vector and the dependency relationship vector of the second text to obtain the vector of the second text, that is, the answer vector.
The fusion module 370 is configured to perform vector splicing on the vector of the first text and the vector of the second text to obtain a fused vector of the two, which may be denoted, for example, as a question & answer vector.
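The merge and fusion steps amount to element-wise addition followed by concatenation, as this toy numpy example shows (all vector values are made up for illustration):

```python
import numpy as np

# Hypothetical outputs of the convolution and pooling module.
q_role, q_dep = np.array([0.1, 0.2]), np.array([0.3, 0.4])  # question text
a_role, a_dep = np.array([0.5, 0.6]), np.array([0.7, 0.8])  # answer text

# Merge step: element-wise vector addition within each text.
question_vec = q_role + q_dep
answer_vec = a_role + a_dep

# Fusion step: splice (concatenate) the two text vectors into one.
fused = np.concatenate([question_vec, answer_vec])  # the question & answer vector
```

Addition keeps the per-text vector at its original dimension, while concatenation doubles it so the classifier sees both texts at once.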
The fully connected module 380 is configured to process the fused vector of the first text and the second text to obtain a classification result of the fused vector.
The output module 390 is configured to output the classification result of the fused vector.
Through the model structure of the neural network in the embodiment of the present disclosure, the syntactic structures of the input first text and second text can be parsed, and the respective parsing results are separately encoded and vectorized to obtain the semantic role vector and dependency relationship vector of the first text and those of the second text. The semantic roles and dependency relationships of the two texts are then fused through vector addition and vector splicing, and the fused vector is classified by a fully connected layer to obtain a classification result, from which the relevance between the first text and the second text is determined.
It should be understood that the neural network in the embodiment of the present disclosure may be a recurrent neural network (RNN) or a convolutional neural network (CNN), or may be another neural network other than an RNN or CNN; a user may select a suitable neural network according to actual needs, which is not specifically limited in the embodiment of the present disclosure.
Fig. 4 is a flowchart illustrating a text relevance determination method according to an exemplary embodiment of the present disclosure. As shown in fig. 4, in the case where the first text and the second text are the question text and the answer text, the text relevance determination method may include the following steps.
S410, obtaining the semantic roles and dependency relationships of the question text and the semantic roles and dependency relationships of the answer text through syntactic structure parsing.
S420, performing the neural network information compression processing of the embodiment of the present disclosure on the semantic roles and dependency relationships of the question text and on those of the answer text, respectively, to obtain the semantic role vector of the question text, the dependency relationship vector of the question text, the semantic role vector of the answer text, and the dependency relationship vector of the answer text.
In this step, the neural network information compression processing of the embodiment of the present disclosure may include, for example, feature coding, embedding processing, convolution processing, and pooling processing. These operations are performed on the semantic roles and dependency relationships in the first text and in the second text, respectively, to obtain the semantic role vector and dependency relationship vector of the first text and those of the second text.
S430, performing semantic and dependency relationship fusion processing on the semantic role vector and the dependency relationship vector of the question text, and performing the same fusion processing on the semantic role vector and the dependency relationship vector of the answer text.
In this step, the semantic role vector and the dependency relationship vector of the first text are added to obtain the question vector, and the semantic role vector and the dependency relationship vector of the second text are added to obtain the answer vector.
S440, fusing the question vector and the answer vector.
In this step, the question vector and the answer vector are spliced to obtain a new spliced vector, i.e., a fused vector of the question vector and the answer vector shown in fig. 4.
S450, processing the fused vector by a full connection layer in a neural network to obtain a classification result.
S460, determining whether the question text is related to the answer text according to the classification result.
As an example, the classification result is a preset classification value, for example 0 or 1: a classification value of 0 indicates that the texts are not related, and a classification value of 1 indicates that they are related. It can thus be determined whether the question text and the answer text are related, that is, whether the answer actually answers the question.
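A sketch of this final step: a single fully connected layer maps the fused vector to two class scores, and the argmax gives the preset classification value 0 (not related) or 1 (related). The weights below are random stand-ins for learned parameters:

```python
import numpy as np

def fully_connected_classify(fused, weights, bias):
    """One fully connected layer scoring class 0 ('not related') vs class 1 ('related')."""
    logits = fused @ weights + bias  # shape (2,): one score per class
    return int(np.argmax(logits))    # preset classification value: 0 or 1

rng = np.random.default_rng(2)
fused = np.array([0.4, 0.6, 1.2, 1.4], dtype=np.float32)  # fused question & answer vector
weights = rng.normal(size=(4, 2)).astype(np.float32)       # stand-in for trained weights
bias = np.zeros(2, dtype=np.float32)

label = fully_connected_classify(fused, weights, bias)
related = (label == 1)  # True iff the classifier deems the answer relevant
```

With trained weights, the same decision rule implements the 0/1 judgment described above.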
Fig. 5 is a schematic structural diagram of a text relevance determination apparatus according to an embodiment of the disclosure.
In a second aspect, referring to fig. 5, an embodiment of the present disclosure provides a text relevance determination apparatus 500, which may include the following modules.
A vector determination module 510 for determining a semantic role vector and a dependency vector in the first text and a semantic role vector and a dependency vector in the second text.
The vector fusion module 520 is configured to perform vector fusion on the semantic role vector and the dependency relationship vector in the first text, and perform vector fusion on the semantic role vector and the dependency relationship vector in the second text, and perform vector fusion again on the vector fusion results obtained respectively to obtain a fused vector of the first text and the second text.
And a classification determining module 530, configured to classify the fused vector through a preset neural network, and determine a correlation between the first text and the second text according to a classification result.
In some embodiments, the first text and the second text are question text and answer text.
In some embodiments, the first text and the second text are texts or character strings having a length less than a predetermined threshold.
In some embodiments, the text relevance determination apparatus 500 further includes: and the analysis module is used for respectively analyzing the semantic role and the dependency relationship of the acquired first text and the acquired second text before determining the semantic role vector and the dependency relationship vector in the first text and the semantic role vector and the dependency relationship vector in the second text to obtain the semantic role and the dependency relationship of the first text and the semantic role and the dependency relationship of the second text.
In some embodiments, the vector determination module 510 comprises: the encoding unit is used for encoding the semantic roles and the dependency relationships in the first text and the semantic roles and the dependency relationships in the second text respectively to obtain sparse feature codes corresponding to the semantic roles and the dependency relationships in the first text and sparse feature codes corresponding to the semantic roles and the dependency relationships in the second text respectively; the embedding processing unit is used for embedding the corresponding sparse feature codes respectively to obtain dense feature codes corresponding to the semantic roles and the dependency relationships in the first text and dense feature codes corresponding to the semantic roles and the dependency relationships in the second text; and the convolution and pooling processing unit is used for performing convolution and pooling processing on the dense feature codes respectively corresponding to the dense feature codes to obtain the semantic role vector and the dependency relationship vector in the first text and the semantic role vector and the dependency relationship vector in the second text.
In some embodiments, the vector fusion module 520 includes: a first vector addition unit, configured to perform vector addition on the semantic role vector and the dependency relationship vector in the first text to obtain a question vector; a second vector addition unit, configured to perform vector addition on the semantic role vector and the dependency relationship vector in the second text to obtain an answer vector; and a vector splicing unit, configured to perform vector splicing on the question vector and the answer vector to obtain a fused vector of the first text and the second text.
In some embodiments, the predetermined neural network comprises a fully-connected layer; the classification determination module 530 includes: the vector classification unit is used for classifying the fused vectors through the full-connection layer to obtain a classification result of the fused vectors; the classification determining module 530 is further configured to determine that the first text is related to the second text if the classification result is a preset first value, and determine that the first text is not related to the second text if the classification result is a preset second value.
In some embodiments, the first text and the second text are question text and answer text.
In some embodiments, the first text and the second text are texts or character strings having a length less than a predetermined threshold.
According to the text relevance determination apparatus of the embodiment of the present disclosure, vector fusion can be performed on the semantic role vector and the dependency relationship vector of the first text and on the semantic role vector and the dependency relationship vector of the second text, the fused vector is classified through a preset neural network, and the relevance between the first text and the second text is determined according to the classification result.
The embodiment of the present disclosure is not limited by the content of a preset word list, and no word list needs to be mined and curated manually, so labor cost is saved and the method has wider applicability. Because the semantic role vector and the dependency relationship vector of a text embody its grammatical structure, the embodiment of the present disclosure can obtain the vectors of the first text and the second text from the perspective of grammatical structure and remove the influence of near-synonymous words on the similarity judgment, so the judgment result is stable. Moreover, because the space of possible grammatical structures is far smaller than the space of possible word meanings, the text similarity judgment method has wider universality. Furthermore, the feature space corresponding to the syntactic structure is small, so when the feature vectors are processed through the neural network, the required modules occupy little space and processing is fast.
In a third aspect, an embodiment of the present disclosure further provides a text relevance determination model, where the text relevance determination model is configured to execute any one of the text relevance determination methods described in conjunction with the foregoing fig. 2 to fig. 4 according to the received first text and the received second text.
In one embodiment, the model structure of the text relevance determination model may refer to the model structure described in conjunction with fig. 3, and the processing flow of the text relevance determination model for the acquired first text and second text may refer to the processing flow of the module structure shown in fig. 3 and the processing flow of the text relevance determination method described in conjunction with figs. 2 to 4.
It should be clear that, for convenience and brevity of description, detailed description of known methods is omitted here, and for specific working processes of the above-described systems, modules and units, reference may be made to corresponding processes in the foregoing method embodiments, which are not described herein again.
The present disclosure also provides an electronic device and a readable storage medium according to an embodiment of the present disclosure.
FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or a computer program loaded from a storage unit 608 into a random access memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 can also be stored. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604. A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606, such as a keyboard, a mouse, or the like; an output unit 607, such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, an optical disk, or the like; and a communication unit 609, such as a network card, a modem, a wireless communication transceiver, or the like. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 601 executes the respective methods and processes described above, such as the text relevance determination method. For example, in some embodiments, the text relevance determination method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM602 and/or the communication unit 609. When the computer program is loaded into the RAM603 and executed by the computing unit 601, one or more steps of the text relevance determination method described above may be performed. Alternatively, in other embodiments, the calculation unit 601 may be configured to perform the text relevance determination method in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chips (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to an embodiment of the present disclosure, there is also provided a computer program product comprising a computer program which, when executed by a processor, implements any of the above-described text relevance determination methods.
Artificial intelligence is the discipline that studies making computers simulate certain human thought processes and intelligent behaviors (e.g., learning, reasoning, planning), covering both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies include computer vision technology, speech recognition technology, natural language processing technology, machine learning/deep learning, big data processing technology, knowledge graph technology, and the like.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (18)

1. A text relevance determination method is characterized by comprising the following steps:
determining a semantic role vector and a dependency relationship vector in a first text and a semantic role vector and a dependency relationship vector in a second text;
respectively carrying out vector fusion on the semantic role vector and the dependency relationship vector in the first text and the semantic role vector and the dependency relationship vector in the second text, and carrying out vector fusion again on the vector fusion results obtained respectively to obtain a fused vector of the first text and the second text;
classifying the fused vector through a preset neural network, and determining the correlation between the first text and the second text according to a classification result.
2. The method of claim 1, wherein prior to determining the semantic role vector and dependency vector in the first text and the semantic role vector and dependency vector in the second text, the method further comprises:
and analyzing the semantic role and the dependency relationship of the acquired first text and the acquired second text respectively to obtain the semantic role and the dependency relationship of the first text and the semantic role and the dependency relationship of the second text.
3. The method of claim 1, wherein determining the semantic role vector and the dependency vector in the first text and the semantic role vector and the dependency vector in the second text comprises:
respectively coding the semantic roles and the dependency relations in the first text and the semantic roles and the dependency relations in the second text to respectively obtain sparse feature codes corresponding to the semantic roles and the dependency relations in the first text and sparse feature codes corresponding to the semantic roles and the dependency relations in the second text;
embedding the corresponding sparse feature codes respectively to obtain dense feature codes corresponding to the semantic roles and the dependency relationships in the first text and dense feature codes corresponding to the semantic roles and the dependency relationships in the second text;
and performing convolution and pooling on the respectively corresponding dense feature codes to obtain a semantic role vector and a dependency relationship vector in the first text and a semantic role vector and a dependency relationship vector in the second text.
4. The method according to claim 1, wherein the vector fusion is performed on the semantic role vector and the dependency relationship vector in the first text and the semantic role vector and the dependency relationship vector in the second text, and the vector fusion results obtained by the vector fusion are vector fused again to obtain a fused vector of the first text and the second text, and the method comprises:
performing vector addition on the semantic role vector and the dependency relationship vector in the first text to obtain a question vector;
vector addition is carried out on the semantic role vector and the dependency relationship vector in the second text to obtain an answer vector;
and carrying out vector splicing on the question vector and the answer vector to obtain a fused vector of the first text and the second text.
5. The method of claim 1, wherein the pre-defined neural network comprises a fully-connected layer; the classifying the fused vector through a preset neural network, and determining the correlation between the first text and the second text according to the classification result, including:
classifying the fused vector through the full connection layer to obtain a classification result of the fused vector;
and determining that the first text is related to the second text when the classification result is a preset first value, and determining that the first text is not related to the second text when the classification result is a preset second value.
6. The method according to any one of claims 1 to 5,
the first text and the second text are question text and answer text.
7. The method according to any one of claims 1 to 5,
the first text and the second text are texts or character strings with lengths smaller than a preset threshold value.
8. A correlation determination device, comprising:
the vector determination module is used for determining a semantic role vector and a dependency relationship vector in the first text and a semantic role vector and a dependency relationship vector in the second text;
the vector fusion module is used for respectively carrying out vector fusion on the semantic role vector and the dependency relationship vector in the first text and the semantic role vector and the dependency relationship vector in the second text, and carrying out vector fusion again on the vector fusion results obtained respectively to obtain a fused vector of the first text and the second text;
and the classification determining module is used for classifying the fused vector through a preset neural network and determining the correlation between the first text and the second text according to a classification result.
9. The apparatus of claim 8, further comprising:
and the analysis module is used for respectively analyzing the semantic role and the dependency relationship of the acquired first text and the acquired second text before determining the semantic role vector and the dependency relationship vector in the first text and the semantic role vector and the dependency relationship vector in the second text to obtain the semantic role and the dependency relationship of the first text and the semantic role and the dependency relationship of the second text.
10. The apparatus of claim 8, wherein the vector determination module comprises:
the encoding unit is used for encoding the semantic roles and the dependency relationships in the first text and the semantic roles and the dependency relationships in the second text respectively to obtain sparse feature codes corresponding to the semantic roles and the dependency relationships in the first text and sparse feature codes corresponding to the semantic roles and the dependency relationships in the second text respectively;
the embedding processing unit is used for embedding the corresponding sparse feature codes respectively to obtain dense feature codes corresponding to the semantic roles and the dependency relationships in the first text and dense feature codes corresponding to the semantic roles and the dependency relationships in the second text;
and the convolution and pooling processing unit is used for performing convolution and pooling processing on the dense feature codes respectively corresponding to the dense feature codes to obtain the semantic role vector and the dependency relationship vector in the first text and the semantic role vector and the dependency relationship vector in the second text.
11. The apparatus of claim 8, wherein the vector fusion module comprises:
the first vector addition unit is used for carrying out vector addition on the semantic role vector and the dependency relationship vector in the first text to obtain a question vector;
the second vector addition unit is used for carrying out vector addition on the semantic role vector and the dependency relationship vector in the second text to obtain an answer vector;
and the vector splicing unit is used for carrying out vector splicing on the question vector and the answer vector to obtain a fused vector of the first text and the second text.
12. The apparatus of claim 8, wherein the pre-defined neural network comprises a fully-connected layer; the classification determination module includes:
the vector classification unit is used for classifying the fused vectors through the full-connection layer to obtain a classification result of the fused vectors;
the classification determining module is further configured to determine that the first text is related to the second text when the classification result is a preset first value, and determine that the first text is not related to the second text when the classification result is a preset second value.
13. The apparatus according to any one of claims 8-12,
the first text and the second text are question text and answer text.
14. The apparatus according to any one of claims 8-12,
the first text and the second text are texts or character strings with lengths smaller than a preset threshold value.
15. A text relevance determination model is characterized in that,
the text relevance determination model is used for executing the method of any one of claims 1 to 7 according to the received first text and second text to obtain a relevance determination result of the first text and the second text.
16. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
17. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
18. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.
CN202011548198.9A 2020-12-24 2020-12-24 Text relevance judging method and device, model, electronic equipment and readable medium Active CN112559713B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011548198.9A CN112559713B (en) 2020-12-24 2020-12-24 Text relevance judging method and device, model, electronic equipment and readable medium

Publications (2)

Publication Number Publication Date
CN112559713A true CN112559713A (en) 2021-03-26
CN112559713B CN112559713B (en) 2023-12-01

Family

ID=75033129

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011548198.9A Active CN112559713B (en) 2020-12-24 2020-12-24 Text relevance judging method and device, model, electronic equipment and readable medium

Country Status (1)

Country Link
CN (1) CN112559713B (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104298658A (en) * 2014-10-29 2015-01-21 百度在线网络技术(北京)有限公司 Method and device for acquiring search result
CN104462327A (en) * 2014-12-02 2015-03-25 百度在线网络技术(北京)有限公司 Computing method, search processing method, computing device and search processing device for sentence similarity
CN106202010A (en) * 2016-07-12 2016-12-07 重庆兆光科技股份有限公司 The method and apparatus building Law Text syntax tree based on deep neural network
CN107818081A (en) * 2017-09-25 2018-03-20 沈阳航空航天大学 Sentence similarity appraisal procedure based on deep semantic model and semantic character labeling
CN110969023A (en) * 2018-09-29 2020-04-07 北京国双科技有限公司 Text similarity determination method and device
CN111353306A (en) * 2020-02-22 2020-06-30 杭州电子科技大学 Entity relationship and dependency Tree-LSTM-based combined event extraction method
US20200293921A1 (en) * 2019-03-12 2020-09-17 Beijing Baidu Netcom Science And Technology Co., Ltd. Visual question answering model, electronic device and storage medium
CN111930914A (en) * 2020-08-14 2020-11-13 工银科技有限公司 Question generation method and device, electronic equipment and computer-readable storage medium
CN112036189A (en) * 2020-08-10 2020-12-04 中国人民大学 Method and system for recognizing gold semantic


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG MIAOMIAO; LIU MINGTONG; ZHANG YUJIE; XU JIN'AN; CHEN YUFENG: "Chinese Semantic Role Labeling Fusing a Gate Filtering Mechanism with Deep Bi-LSTM-CRF", Technology Intelligence Engineering (情报工程), no. 02, pages 46 - 54 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113268566A (en) * 2021-05-28 2021-08-17 平安国际智慧城市科技股份有限公司 Question and answer pair quality evaluation method, device, equipment and storage medium
CN113268566B (en) * 2021-05-28 2022-06-14 平安国际智慧城市科技股份有限公司 Question and answer pair quality evaluation method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN112559713B (en) 2023-12-01

Similar Documents

Publication Publication Date Title
CN111625635A (en) Question-answer processing method, language model training method, device, equipment and storage medium
CN112560496A (en) Training method and device of semantic analysis model, electronic equipment and storage medium
CN112148881B (en) Method and device for outputting information
CN107862058B (en) Method and apparatus for generating information
CN112580328A (en) Event information extraction method and device, storage medium and electronic equipment
CN111241285A (en) Method, device, equipment and storage medium for identifying question answer types
CN115099239B (en) Resource identification method, device, equipment and storage medium
CN113553412A (en) Question and answer processing method and device, electronic equipment and storage medium
CN114003682A (en) Text classification method, device, equipment and storage medium
CN112926308A (en) Method, apparatus, device, storage medium and program product for matching text
CN114417878B (en) Semantic recognition method and device, electronic equipment and storage medium
CN112528146B (en) Content resource recommendation method and device, electronic equipment and storage medium
CN113139043B (en) Question-answer sample generation method and device, electronic equipment and storage medium
CN113919424A (en) Training of text processing model, text processing method, device, equipment and medium
CN113705192A (en) Text processing method, device and storage medium
CN112559713B (en) Text relevance judging method and device, model, electronic equipment and readable medium
CN114647739B (en) Entity chain finger method, device, electronic equipment and storage medium
CN116680386A (en) Answer prediction method and device based on multi-round dialogue, equipment and storage medium
CN114118049B (en) Information acquisition method, device, electronic equipment and storage medium
CN115168544A (en) Information extraction method, electronic device and storage medium
CN113886543A (en) Method, apparatus, medium, and program product for generating an intent recognition model
CN110502741B (en) Chinese text recognition method and device
CN114970666A (en) Spoken language processing method and device, electronic equipment and storage medium
CN113190679A (en) Relationship determination method, relationship determination device, electronic equipment and storage medium
CN115048523B (en) Text classification method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant