CN116932723A - Man-machine interaction system and method based on natural language processing - Google Patents

Man-machine interaction system and method based on natural language processing

Info

Publication number
CN116932723A
CN116932723A (application number CN202310939997.6A)
Authority
CN
China
Prior art keywords
sequence
training
problem description
word
speech
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310939997.6A
Other languages
Chinese (zh)
Inventor
张青辉
王英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
4u Beijing Technology Co ltd
Original Assignee
4u Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 4u Beijing Technology Co ltd filed Critical 4u Beijing Technology Co ltd
Priority to CN202310939997.6A priority Critical patent/CN116932723A/en
Publication of CN116932723A publication Critical patent/CN116932723A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3344Query execution using natural language analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis

Abstract

A man-machine interaction system based on natural language processing and a method thereof are disclosed. First, a question description input by a user is acquired; then, semantic understanding is performed on the question description to obtain a question description semantic understanding feature vector; and a recommended answer text is generated based on the question description semantic understanding feature vector. In this way, semantic understanding and answer text generation can be performed on the question input by the user based on a deep learning semantic understanding model and an AIGC model, respectively, so that the intelligence and interest of man-machine interaction are improved.

Description

Man-machine interaction system and method based on natural language processing
Technical Field
The present disclosure relates to the field of human-computer interaction, and more particularly, to a human-computer interaction system based on natural language processing and a method thereof.
Background
Man-machine interaction refers to the exchange of information and the modes of operation between a person and a computer. Current man-machine interaction systems are typically built on rules or predefined dialog flows and lack real flexibility and context awareness. As a result, conventional man-machine interaction systems cannot cope with complicated dialog scenes, and handling intention shifts or complex questions becomes extremely difficult. In addition, existing man-machine interaction systems based on natural language processing tend to be mechanical and tedious when talking to a user, lacking intelligence and interest. This limits the interactive experience between the interaction system and the user and makes it difficult to establish a genuine emotional connection with the user.
Accordingly, an optimized natural language processing based human-machine interaction system is desired.
Disclosure of Invention
In view of this, the present disclosure provides a man-machine interaction system and method based on natural language processing, which can perform semantic understanding and answer text generation on a question input by a user based on a deep learning semantic understanding model and an AIGC model, respectively, so as to improve the intelligence and interestingness of man-machine interaction.
According to an aspect of the present disclosure, there is provided a man-machine interaction system based on natural language processing, including:
a question description acquisition module for acquiring a question description input by a user;
the problem semantic understanding module is used for carrying out semantic understanding on the problem description to obtain a problem description semantic understanding feature vector; and
the answer text generation module is used for generating a recommended answer text based on the question description semantic understanding feature vector.
According to another aspect of the present disclosure, there is provided a man-machine interaction method based on natural language processing, including:
acquiring a question description input by a user;
carrying out semantic understanding on the problem description to obtain a problem description semantic understanding feature vector; and
generating a recommended answer text based on the question description semantic understanding feature vector.
According to an embodiment of the present disclosure, a question description input by a user is first acquired, then the question description is semantically understood to obtain a question description semantically understood feature vector, and then a recommended answer text is generated based on the question description semantically understood feature vector. Therefore, semantic understanding and answer text generation can be respectively carried out on the questions input by the user based on the deep learning semantic understanding model and the AIGC model, so that the intelligence and the interestingness of man-machine interaction are improved.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 illustrates a block diagram of a natural language processing based human-machine interaction system, according to an embodiment of the present disclosure.
FIG. 2 illustrates a block diagram of the problem semantic understanding module in a natural language processing based human-machine interaction system according to an embodiment of the present disclosure.
Fig. 3 shows a block diagram of the question-part-of-speech semantic association coding unit in a natural language processing based human-computer interaction system according to an embodiment of the present disclosure.
Fig. 4 illustrates a block diagram of a training module further included in a natural language processing based human-machine interaction system, according to an embodiment of the present disclosure.
Fig. 5 illustrates a flow chart of a natural language processing based human-computer interaction method according to an embodiment of the present disclosure.
Fig. 6 shows an architectural diagram of a natural language processing based human-computer interaction method according to an embodiment of the present disclosure.
Fig. 7 illustrates an application scenario diagram of a natural language processing based human-machine interaction system according to an embodiment of the present disclosure.
Detailed Description
The following description of the embodiments of the present disclosure will be made clearly and fully with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some, but not all embodiments of the disclosure. All other embodiments, which can be made by one of ordinary skill in the art without undue burden based on the embodiments of the present disclosure, are also within the scope of the present disclosure.
As used in this disclosure and in the claims, the terms "a," "an," and "the" do not denote the singular and may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may include other steps or elements.
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
In view of the above problems, the technical concept of the present disclosure is to improve the intelligence and interest of human-computer interaction by adopting a deep-learning-based semantic understanding model and an AIGC model to respectively perform semantic understanding and answer text generation on the question input by the user. It should be appreciated that AIGC (AI-generated content) is an approach that generates content using artificial intelligence techniques and may be used to produce diverse and interesting responses, thereby increasing the interest of human-machine interaction. In particular, by learning a large amount of linguistic data and patterns, AIGC can generate creative and interesting responses that make conversations more lively and engaging. In this way, the interaction experience between the user and the system can be enhanced, so that an emotional connection with the user is better established.
Fig. 1 shows a block diagram schematic of a natural language processing based human-machine interaction system in accordance with an embodiment of the present disclosure. As shown in fig. 1, a natural language processing based man-machine interaction system 100 according to an embodiment of the present disclosure includes: a question description acquisition module 110 for acquiring a question description input by a user; the problem semantic understanding module 120 is configured to perform semantic understanding on the problem description to obtain a problem description semantic understanding feature vector; and an answer text generation module 130 for generating a recommended answer text based on the question description semantic understanding feature vector.
Specifically, in the technical scheme of the present disclosure, a question description input by a user is first acquired. It should be understood that, because each user has different behavior habits and ways of speaking, the input question descriptions may yield different semantic understanding features, which can introduce ambiguity or errors and make subsequent semantic understanding and generation of answer text related to the question description difficult. Therefore, word segmentation needs to be performed on the question description before semantic understanding to obtain a sequence of question description words, thereby avoiding confusion of word order and improving the accuracy of semantic understanding.
It should be understood that word segmentation is the process of dividing a sentence or text into individual words or tokens. The segmentation may be performed by algorithms such as the maximum matching method, the forward maximum matching method, the reverse maximum matching method, or statistics-based methods (e.g., hidden Markov models, conditional random fields). The word segmentation algorithm divides the question description into individual words or tokens to form the sequence of question description words.
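Purely as an illustration of the forward maximum matching strategy mentioned above, the following Python sketch segments a toy Chinese sentence against a hypothetical dictionary; the dictionary, the sample text, and the maximum word length are assumptions for the example and are not taken from the disclosure.

```python
# Minimal forward-maximum-matching segmenter (illustrative only).
def forward_max_match(text, vocab, max_len=4):
    """At each position, take the longest dictionary word that matches;
    fall back to a single character when no dictionary word matches."""
    tokens, i = [], 0
    while i < len(text):
        match = text[i]  # single-character fallback
        for length in range(min(max_len, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if candidate in vocab:
                match = candidate
                break
        tokens.append(match)
        i += len(match)
    return tokens

# Toy dictionary and question description (hypothetical).
vocab = {"如何", "重置", "我的", "密码"}
print(forward_max_match("如何重置我的密码", vocab))  # ['如何', '重置', '我的', '密码']
```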
Accordingly, in one example of the present disclosure, as shown in fig. 2, the problem semantic understanding module 120 includes: a question segmentation unit 121, configured to perform a segmentation process on the question description to obtain a sequence of question descriptors; a question part-of-speech extraction unit 122, configured to extract part-of-speech information of each question descriptor in the sequence of question descriptors to obtain a sequence of part-of-speech information of the question descriptor; and a question-part-of-speech semantic association encoding unit 123, configured to perform semantic association encoding on the sequence of the question descriptor and the sequence of the part-of-speech information of the question descriptor to obtain the question description semantic understanding feature vector.
Next, it is considered that the part of speech is attached to each word obtained after segmentation and carries the grammatical role that the word plays in the sentence, together with other semantic information. Therefore, when performing natural language processing on the question description input by the user, using the part-of-speech information of each word better reflects the meaning carried by each word in the question description, so that semantic understanding of the user's question description can be performed more accurately. Based on this, in the technical solution of the present disclosure, the part-of-speech information of each question description word in the sequence of question description words is further extracted to obtain the sequence of part-of-speech information of the question description words. By extracting the part-of-speech information of each word in the question description, the semantic expression capability of each question description word can be enhanced, thereby improving the accuracy of semantic understanding of the question description and facilitating subsequent answer text generation.
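As one possible realization of the segmentation and part-of-speech extraction steps, the sketch below uses the jieba library's part-of-speech mode; jieba is an assumed tool chosen for a Chinese-language example (the disclosure itself only names taggers such as NLTK and spaCy), and the sample question is hypothetical.

```python
# Segment a question description and extract aligned part-of-speech tags (illustrative).
import jieba.posseg as pseg

question = "如何重置我的密码"
words, pos_tags = [], []
for pair in pseg.cut(question):   # each pair exposes .word and .flag (the POS tag)
    words.append(pair.word)
    pos_tags.append(pair.flag)

print(words)     # sequence of question description words
print(pos_tags)  # sequence of question description word part-of-speech information
```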
Further, in the process of realizing man-machine interaction based on natural language processing, each word in the sequence of the problem description words needs to be converted into a word vector representation, so that the sequence of the problem description words is converted into a sequence of problem description word embedded vectors through a word embedding layer. By means of the word vector conversion mode, semantic information of each word in the text can be converted into a group of real number vectors, and semantic understanding of user input questions and generation of answer text can be facilitated.
At the same time, the part of speech of each word in the sequence of question description words is also very important information, so the part of speech of each question description word needs to be converted into vector form to facilitate subsequent processing and fusion of semantic information. Specifically, in the technical scheme of the present disclosure, the sequence of part-of-speech information of the question description words is converted into a sequence of question description word part-of-speech vectors through a one-hot encoder. In this way, the part-of-speech information of each word of the user input question can be added into the text vector, improving the semantic feature expression capability of the text vector and better reflecting the textual semantic information of the question description.
Then, in order to more accurately express the meaning of each problem descriptor in the problem description, it is necessary to comprehensively consider semantic information and part-of-speech information of each problem descriptor. Therefore, the sequence of the embedding vector of the question descriptor and the sequence of the part-of-speech vector of the question descriptor are further fused to obtain the sequence of the question descriptor-part-of-speech spliced vector, so that the semantic information and the part-of-speech information of each word in the question description are fused, and the semantic understanding accuracy of the subsequent question description is improved.
Further, a two-way long-short term memory neural network model is used for semantic understanding of the question description. Specifically, the sequence of question description word-part-of-speech splice vectors is passed through the two-way long-short term memory neural network model, so that the fused information of each word and its part of speech in the question description is extracted based on short- and medium-range context-dependent semantic association features, that is, the semantic understanding feature information of the question description, thereby obtaining the question description semantic understanding feature vector.
Accordingly, in one example of the present disclosure, as shown in fig. 3, the question-part-of-speech semantic association encoding unit 123 includes: a word embedding subunit 1231, configured to convert, through a word embedding layer, the sequence of question description words into a sequence of question description word embedding vectors; a part-of-speech vectorization subunit 1232, configured to convert, through a one-hot encoder, the sequence of part-of-speech information of the question description words into a sequence of question description word part-of-speech vectors; a word-part-of-speech concatenation subunit 1233, configured to fuse the sequence of question description word embedding vectors and the sequence of question description word part-of-speech vectors to obtain a sequence of question description word-part-of-speech splice vectors; and a semantic coding subunit 1234, configured to perform semantic understanding on the sequence of question description word-part-of-speech splice vectors by using a semantic encoder based on a deep neural network model to obtain the question description semantic understanding feature vector. In one specific example, the deep neural network model is a two-way long-short term memory neural network model.
It is worth mentioning that word embedding is a technique in natural language processing that maps words into a low-dimensional real vector space. These vectors are designed to capture the semantic and grammatical information of words, so that similar words are closer in the vector space and dissimilar words are farther apart. The word embedding layer converts discrete words into a continuous vector representation so that words can be processed and compared more conveniently in a computer, which helps the model understand and infer relationships between words, such as word-sense similarity and context. The word embedding layer is generally used as the input layer of the model, converting word sequences into vector sequences so that subsequent models can better understand and process the text data; through the word embedding layer, the model can learn richer semantic information from the distributed representation of words, thereby improving its performance on various tasks.
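A minimal word-embedding-layer sketch is given below in PyTorch; the framework, vocabulary size, and embedding dimension are assumptions for illustration, not the disclosure's implementation.

```python
# Map discrete word indices to dense, trainable embedding vectors (illustrative).
import torch
import torch.nn as nn

vocab_size, embed_dim = 10000, 128            # assumed sizes
word_embedding = nn.Embedding(vocab_size, embed_dim)

token_ids = torch.tensor([[12, 57, 903, 4]])  # a toy sequence of question word indices
word_vectors = word_embedding(token_ids)      # shape: (1, 4, 128)
print(word_vectors.shape)
```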
It is worth mentioning that a one-hot encoder (One-Hot Encoder) is an encoding technique for representing discrete features as binary vectors. It maps each feature value into a unique binary vector in which only one element is 1 and the remaining elements are 0, and the position of that element indicates the position of the feature value among all possible values. The function of the one-hot encoder is to convert discrete features into a numerical form that machine learning algorithms can process. This has the advantage of removing any magnitude relation between the feature values by converting them into equidistant vector representations, which is necessary for certain machine learning algorithms (e.g., logistic regression, support vector machines) that require numerical input. In addition, one-hot encoding also handles the unordered nature of feature values. For example, in some classification tasks, if the classes are represented by integer encodings (e.g., 0, 1, 2, 3, ...), an algorithm may incorrectly assume that there is a magnitude relationship between the classes, whereas with one-hot encoding each class is represented as an independent vector and the algorithm does not misinterpret the relationship between them as one of magnitude. The one-hot encoder can therefore convert discrete features into a numerical form that machine learning algorithms can process while eliminating spurious magnitude and ordering relations between feature values, improving the performance and accuracy of the model.
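The sketch below shows one-hot encoding of part-of-speech tags; the tag inventory is a hypothetical example, and PyTorch is an assumed framework.

```python
# One-hot encode a sequence of part-of-speech tags (illustrative).
import torch
import torch.nn.functional as F

pos_vocab = {"n": 0, "v": 1, "r": 2, "d": 3}             # hypothetical tag set
pos_tags = ["v", "v", "r", "n"]                          # tags for one toy question
pos_ids = torch.tensor([pos_vocab[t] for t in pos_tags])
pos_vectors = F.one_hot(pos_ids, num_classes=len(pos_vocab)).float()
print(pos_vectors)  # each row has a single 1 marking the tag; all other entries are 0
```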
It is worth mentioning that, in another example of the present disclosure, the following steps may be adopted to fuse the sequence of question description word embedding vectors and the sequence of question description word part-of-speech vectors to obtain the sequence of question description word-part-of-speech splice vectors: 1. perform word embedding on each word in the question description to convert it into a continuous vector representation; a pre-trained word embedding model (such as Word2Vec or GloVe) may be used, or a custom word embedding model may be trained, to obtain the vector representation of each word; 2. perform part-of-speech tagging on each word in the question description and convert the tag into a corresponding part-of-speech vector representation; a trained part-of-speech tagger (such as NLTK or spaCy) may be used to obtain the part-of-speech tag of each word and convert it into the corresponding part-of-speech vector; 3. splice the sequence of word embedding vectors and the sequence of part-of-speech vectors at corresponding positions to obtain the sequence of question description word-part-of-speech splice vectors; for example, the word embedding vector and the part-of-speech vector of the i-th word in the question description are concatenated to form a new vector representation; 4. the resulting sequence of question description word-part-of-speech splice vectors can then be used as input to subsequent models, such as the two-way long-short term memory neural network model or the question-answering model based on the AIGC model. By splicing the word embedding vector and the part-of-speech vector, the semantic information and part-of-speech information of each word are considered simultaneously, so that the characteristics of the question description are represented more comprehensively, and the spliced vector sequence provides richer input information, which helps improve semantic understanding and feature extraction for the question description.
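A minimal sketch of step 3 above follows: the word embedding vector and the part-of-speech vector of each question word are concatenated position by position. The tensor shapes and toy values are assumptions for illustration.

```python
# Splice word-embedding vectors with one-hot part-of-speech vectors (illustrative).
import torch

seq_len, embed_dim, num_pos = 4, 128, 4
word_vectors = torch.randn(seq_len, embed_dim)   # word embedding vectors (step 1)
pos_vectors = torch.eye(num_pos)[[1, 1, 2, 0]]   # one-hot POS vectors (step 2)

# Step 3: concatenate the two representations of the i-th word along the feature axis.
spliced = torch.cat([word_vectors, pos_vectors], dim=-1)
print(spliced.shape)  # torch.Size([4, 132]) -- sequence of word-part-of-speech splice vectors
```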
It is worth mentioning that the two-way long-short term memory neural network (Bidirectional Long Short-Term Memory, BiLSTM) is a variant of the recurrent neural network (Recurrent Neural Network, RNN) that can take into account both past and future information when processing sequence data. Conventional recurrent neural networks only consider past information in the sequence and ignore future information. By introducing a reverse LSTM layer, BiLSTM is able to capture contextual information from both directions (forward and reverse) of the sequence simultaneously. The principal advantage of BiLSTM is its ability to understand and model the context in sequence data more fully: it captures the context both before and after the current position, and thus better captures dependencies, including long-term dependencies, in the sequence. By using BiLSTM, the model can better understand the context information when processing sequence data, thereby improving its performance and accuracy.
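The following PyTorch sketch encodes the spliced sequence with a BiLSTM and pools the bidirectional outputs into a single semantic understanding feature vector; the mean-pooling choice, the dimensions, and the framework are assumptions, since the disclosure only specifies a two-way LSTM encoder.

```python
# BiLSTM semantic encoder over the spliced word-part-of-speech sequence (illustrative).
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    def __init__(self, input_dim=132, hidden_dim=256):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim,
                            batch_first=True, bidirectional=True)

    def forward(self, spliced_sequence):          # (batch, seq_len, input_dim)
        outputs, _ = self.lstm(spliced_sequence)  # (batch, seq_len, 2 * hidden_dim)
        return outputs.mean(dim=1)                # pooled semantic understanding feature vector

encoder = BiLSTMEncoder()
feature_vector = encoder(torch.randn(1, 4, 132))
print(feature_vector.shape)  # torch.Size([1, 512])
```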
Further, after the semantic understanding feature information of the question description is obtained, the question description semantic understanding feature vector is passed through a question-answer model based on an AIGC model to generate the recommended answer text. It should be understood that the AIGC model can perform context awareness on the question description, that is, it can understand the semantics and context information in the question description, so as to generate a recommended answer text matched with the question description according to its specific content and background, thereby improving the accuracy and relevance of the answer. In addition, the AIGC model can be trained on large-scale training data with deep learning algorithms, giving it a degree of flexibility and adaptability. By learning question-and-answer patterns of different types and fields, it can adjust and generate creative and interesting answer texts according to the specific situation, making conversations more vivid and engaging.
Accordingly, in one example, the answer text generation module 130 is further configured to input the question description semantic understanding feature vector into an AIGC model-based question-answer model to obtain the recommended answer text.
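As a highly simplified sketch of such a question-answer model, the code below conditions a small recurrent decoder on the semantic understanding feature vector and emits vocabulary logits per answer token. The prefix-projection design, the GRU decoder, and all sizes are assumptions for illustration and are not the AIGC model described in the disclosure.

```python
# Condition a toy generative decoder on the question feature vector (illustrative).
import torch
import torch.nn as nn

class AnswerGenerator(nn.Module):
    def __init__(self, feature_dim=512, vocab_size=10000, hidden_dim=256):
        super().__init__()
        self.project = nn.Linear(feature_dim, hidden_dim)   # feature vector -> initial state
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.decoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.output = nn.Linear(hidden_dim, vocab_size)

    def forward(self, feature_vector, answer_token_ids):
        h0 = self.project(feature_vector).unsqueeze(0)       # (1, batch, hidden_dim)
        decoded, _ = self.decoder(self.embed(answer_token_ids), h0)
        return self.output(decoded)                          # per-step vocabulary logits

generator = AnswerGenerator()
logits = generator(torch.randn(1, 512), torch.tensor([[2, 45, 7]]))
print(logits.shape)  # torch.Size([1, 3, 10000])
```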
Further, in the technical scheme of the present disclosure, the man-machine interaction system based on natural language processing further includes a training module for training the two-way long-short term memory neural network model and the question-answering model based on the AIGC model. It should be understood that the training module plays a vital role in the man-machine interaction system based on natural language processing. Its main objective is to enable these models to learn, from data, the ability to understand semantics and answer questions by providing a large amount of annotated data and appropriate training algorithms: it feeds the training data to the models and optimizes them according to predefined objective functions. For the two-way long-short term memory neural network model, the training module provides training samples containing questions and answers; the model learns the context information in the samples to understand the questions and generate accurate answers, the training module computes a loss from the differences between the model output and the annotated answers, and the model parameters are updated by a back-propagation algorithm so that the model can predict answers better. For the question-answer model based on the AIGC model, the training module provides training samples containing questions, answers, and related knowledge graphs; the model learns the entity and relationship information in the knowledge graphs to understand the questions and generate answers related to them, and the training module likewise computes the differences between the model output and the annotated answers and updates the model parameters using back-propagation. Through this training process, the models gradually improve their semantic understanding and question-answering capabilities, thereby improving the performance and accuracy of the man-machine interaction system based on natural language processing.
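A minimal sketch of the supervised step described above follows: the model's answer-token logits are compared with the annotated (real) answer and the loss is back-propagated. The framework, shapes, and token values are assumptions for illustration.

```python
# Compare model output with the annotated answer and back-propagate (illustrative).
import torch
import torch.nn as nn

vocab_size = 10000
logits = torch.randn(1, 3, vocab_size, requires_grad=True)  # stand-in for model output
target = torch.tensor([[45, 7, 2]])                         # annotated answer tokens

criterion = nn.CrossEntropyLoss()
loss = criterion(logits.view(-1, vocab_size), target.view(-1))
loss.backward()  # gradients would then be used by an optimizer to update model parameters
print(float(loss))
```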
In one specific example, the training module 200 includes: a training data acquisition unit 201, configured to acquire training data, where the training data includes a training question description input by a user and the real text of the recommended answer text; a training word segmentation unit 202, configured to perform word segmentation processing on the training question description to obtain a sequence of training question description words; a training part-of-speech extraction unit 203, configured to extract part-of-speech information of each training question description word in the sequence of training question description words to obtain a sequence of training question description word part-of-speech information; a training word embedding unit 204, configured to convert, through a word embedding layer, the sequence of training question description words into a sequence of training question description word embedding vectors; a training part-of-speech vectorization unit 205, configured to convert, through a one-hot encoder, the sequence of training question description word part-of-speech information into a sequence of training question description word part-of-speech vectors; a training word-part-of-speech concatenation unit 206, configured to fuse the sequence of training question description word embedding vectors and the sequence of training question description word part-of-speech vectors to obtain a sequence of training question description word-part-of-speech splice vectors; a training question description semantic understanding unit 207, configured to pass the sequence of training question description word-part-of-speech splice vectors through the two-way long-short term memory neural network model to obtain a training question description semantic understanding feature vector; a question description semantic association unit 208, configured to calculate the vector multiplication of the training question description semantic understanding feature vector and its transpose vector to obtain an association feature matrix; a manifold convex decomposition consistency loss unit 209, configured to calculate a manifold convex decomposition consistency factor of the association feature matrix to obtain a manifold convex decomposition consistency loss function value; and a model training unit 210, configured to train the two-way long-short term memory neural network model and the question-answer model based on the AIGC model by propagating the manifold convex decomposition consistency loss function value in the direction of gradient descent.
In particular, in the technical solution of the present disclosure, when the sequence of training question description word-part-of-speech splice vectors is passed through the two-way long-short term memory neural network model to obtain the training question description semantic understanding feature vector, the short-range contextual association between each training question description word embedding vector and the corresponding training question description word part-of-speech vector can be extracted. However, considering the difference in feature coding representation between the embedded semantic representation of the training question description word itself and the one-hot coded representation of its part-of-speech information, it is still desirable to promote the representation consistency between each training question description word embedding vector and the corresponding training question description word part-of-speech vector within the sequence of training question description word-part-of-speech splice vectors.
Based on this, the applicant of the present disclosure first calculates the position-by-position association of the training question description semantic understanding feature vector with its own transpose vector to obtain an association feature matrix M. The association feature matrix M expresses, in the row direction, the overall association of each feature value of the training question description semantic understanding feature vector with the feature vector as a whole, and expresses, in the diagonal direction, the self-association of each feature value of the training question description semantic understanding feature vector. Therefore, if the manifold expression of the association feature matrix M in the high-dimensional feature space can be kept consistent between the vector-granularity overall-association dimension in the row direction and the feature-value-granularity self-association distribution dimension in the diagonal direction, the expression association among the feature values can be improved through the association effect at vector granularity, thereby improving the representation consistency between the training question description word embedding vectors and the training question description word part-of-speech vectors. Thus, a manifold convex decomposition consistency factor of the association feature matrix M is introduced as a loss function.
Accordingly, the manifold convex decomposition consistency loss unit 209 is configured to: calculate the manifold convex decomposition consistency factor of the association feature matrix according to a loss formula to obtain the manifold convex decomposition consistency loss function value. The loss formula is defined in terms of the following quantities: M(i,j), the feature value at the (i,j)-th position of the association feature matrix M; the mean vector of the row vectors of M and the diagonal vector of M; the one-norm of a vector; the Frobenius norm of a matrix; vector multiplication; L, the length of the feature vector; three weight hyperparameters; and the log function, the resulting value being the manifold convex decomposition consistency loss function value. The Frobenius norm of a matrix is a common matrix norm that measures the square root of the sum of the squares of all elements in the matrix.
That is, taking into account the above feature-association expression properties of the association feature matrix M in the row (or column) dimension and in the diagonal dimension, the manifold convex decomposition consistency factor acts on the distribution relevance of the association feature matrix M in the sub-dimensions represented by the row direction and the diagonal direction: the feature manifold represented by the association feature matrix M is geometrically convex-decomposed so as to flatten the manifold into a set of finite convex polytopes in different dimensions, and the geometric convex decomposition is constrained in the form of sub-dimension-associated shape weights, so as to promote the consistency of the convex geometric representation of the feature manifold in the resolvable dimensions represented by the rows and the diagonal. In this way, the manifold representation of the association feature matrix M within the high-dimensional feature space remains consistent across the spatially associated dimensions. Thus, when the gradient is propagated back through the association feature matrix M during model training, the self-association of the matrix promotes the feature-value-granularity self-association distribution of the training question description semantic understanding feature vector, thereby improving the representation consistency between the training question description word embedding vectors and the training question description word part-of-speech vectors. In this way, creative and interesting responses can be generated based on the question descriptions entered by the user, enhancing the interactive experience between the user and the system and helping establish an emotional connection with the user.
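The schematic sketch below builds the association feature matrix as the outer product of a training feature vector with itself and back-propagates a consistency-style penalty through it. The disclosure's manifold convex decomposition consistency factor is given by its own loss formula, which is not reproduced here; the simple row-mean-versus-diagonal penalty below is only an illustrative stand-in for a consistency term computed on that matrix.

```python
# Association matrix and a placeholder consistency-style penalty (illustrative stand-in).
import torch

def association_matrix(feature_vector):
    """M = v v^T for one training question description semantic understanding feature vector."""
    v = feature_vector.unsqueeze(-1)         # (dim, 1)
    return v @ v.transpose(0, 1)             # (dim, dim)

def placeholder_consistency_penalty(M):
    row_mean = M.mean(dim=1)                 # vector-granularity (row-direction) statistics
    diagonal = torch.diagonal(M)             # feature-value-granularity (diagonal) statistics
    return torch.norm(row_mean - diagonal, p=1) / M.shape[0]

v = torch.randn(512, requires_grad=True)
penalty = placeholder_consistency_penalty(association_matrix(v))
penalty.backward()                           # gradients flow back through M, as described above
print(float(penalty))
```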
In summary, the human-computer interaction system 100 based on natural language processing according to the embodiments of the present disclosure is illustrated, which can perform semantic understanding and answer text generation on a question input by a user based on a deep learning semantic understanding model and an AIGC model, respectively, so as to improve the intelligence and interestingness of human-computer interaction.
As described above, the natural language processing based man-machine interaction system 100 according to the embodiment of the present disclosure may be implemented in various terminal devices, for example, a server or the like having a natural language processing based man-machine interaction algorithm. In one example, the natural language processing based human-machine interaction system 100 may be integrated into the terminal device as a software module and/or hardware module. For example, the natural language processing based human-machine interaction system 100 may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the man-machine interaction system 100 based on natural language processing may also be one of a plurality of hardware modules of the terminal device.
Alternatively, in another example, the natural language processing based man-machine interaction system 100 and the terminal device may be separate devices, and the natural language processing based man-machine interaction system 100 may be connected to the terminal device through a wired and/or wireless network and transmit interaction information in an agreed data format.
Fig. 5 illustrates a flow chart of a natural language processing based human-computer interaction method according to an embodiment of the present disclosure. Fig. 6 shows a schematic diagram of a system architecture of a natural language processing based human-machine interaction method according to an embodiment of the present disclosure. As shown in fig. 5 and 6, a man-machine interaction method based on natural language processing according to an embodiment of the present disclosure includes: s110, acquiring a question description input by a user; s120, carrying out semantic understanding on the problem description to obtain a problem description semantic understanding feature vector; and S130, generating a recommended answer text based on the question description semantic understanding feature vector.
In one possible implementation, performing semantic understanding on the problem description to obtain a problem description semantic understanding feature vector includes: performing word segmentation processing on the problem description to obtain a sequence of problem description words; extracting part-of-speech information of each problem description word in the sequence of problem description words to obtain a sequence of part-of-speech information of the problem description words; and carrying out semantic association coding on the sequence of the problem description words and the sequence of the part-of-speech information of the problem description words to obtain the problem description semantic understanding feature vector.
In one possible implementation manner, performing semantic association coding on the sequence of the problem description words and the sequence of the part-of-speech information of the problem description words to obtain the problem description semantic understanding feature vector includes: converting the sequence of the problem description words into a sequence of problem description word embedding vectors through a word embedding layer; converting the sequence of the part-of-speech information of the problem description words into a sequence of part-of-speech vectors of the problem description words through a one-hot encoder; fusing the sequence of the problem description word embedding vectors and the sequence of the problem description word part-of-speech vectors to obtain a sequence of problem description word-part-of-speech splice vectors; and carrying out semantic understanding on the sequence of the problem description word-part-of-speech splice vectors through a semantic encoder based on a deep neural network model to obtain the problem description semantic understanding feature vector.
Here, it will be understood by those skilled in the art that the specific operations of the respective steps in the above-described natural language processing based man-machine interaction method have been described in detail in the above description of the natural language processing based man-machine interaction system with reference to fig. 1 to 4, and thus, repetitive descriptions thereof will be omitted.
Fig. 7 illustrates an application scenario diagram of a natural language processing based human-machine interaction system according to an embodiment of the present disclosure. As shown in fig. 7, in this application scenario, first, a question description input by a user (e.g., D illustrated in fig. 7) is acquired, and then, the question description is input to a server (e.g., S illustrated in fig. 7) in which a natural language processing-based man-machine interaction algorithm is deployed, wherein the server is capable of processing the question description using the natural language processing-based man-machine interaction algorithm to generate a recommended answer text.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A natural language processing-based man-machine interaction system, comprising:
a question description acquisition module for acquiring a question description input by a user;
the problem semantic understanding module is used for carrying out semantic understanding on the problem description to obtain a problem description semantic understanding feature vector; and
the answer text generation module is used for generating a recommended answer text based on the semantic understanding feature vector of the question description.
2. The natural language processing based man-machine interaction system of claim 1, wherein the problem semantic understanding module comprises:
the problem word segmentation unit is used for carrying out word segmentation processing on the problem description to obtain a sequence of problem description words;
the part-of-speech extracting unit is used for extracting part-of-speech information of each problem descriptor in the sequence of the problem descriptors to obtain a sequence of part-of-speech information of the problem descriptors; and
the question-part-of-speech semantic association coding unit is used for carrying out semantic association coding on the sequence of the question description words and the sequence of the part-of-speech information of the question description words so as to obtain the question description semantic understanding feature vector.
3. The natural language processing based man-machine interaction system of claim 2, wherein the question-part-of-speech semantic association coding unit comprises:
the word embedding subunit is used for converting the sequence of the problem description words into a sequence of problem description word embedding vectors through a word embedding layer;
the part-of-speech vectorization subunit is used for converting the sequence of part-of-speech information of the problem description word into a sequence of part-of-speech vectors of the problem description word through the one-hot encoder;
the word-part-of-speech splice subunit is used for fusing the sequence of the embedded vector of the problem description word and the sequence of the part-of-speech vector of the problem description word to obtain a sequence of the word-part-of-speech splice vector of the problem description word; and
the semantic coding subunit is used for carrying out semantic understanding on the sequence of the problem description word-part-of-speech splice vector through a semantic encoder based on a deep neural network model so as to obtain the problem description semantic understanding feature vector.
4. A natural language processing based man-machine interaction system according to claim 3, wherein the deep neural network model is a two-way long-short term memory neural network model.
5. The natural language processing based human-computer interaction system of claim 4, wherein the answer text generation module is further configured to input the question description semantic understanding feature vector into an AIGC model based question-answer model to obtain the recommended answer text.
6. The natural language processing based human-machine interaction system of claim 5, further comprising a training module for training the two-way long and short term memory neural network model and the AIGC model based question-answering model.
7. The natural language processing based man-machine interaction system of claim 6, wherein the training module comprises:
the training data acquisition unit is used for acquiring training data, wherein the training data comprises training question descriptions input by a user and real texts of the recommended answer texts;
the training word segmentation unit is used for carrying out word segmentation processing on the training problem description to obtain a training problem description word sequence;
the training part-of-speech extraction unit is used for extracting part-of-speech information of each training problem description word in the training problem description word sequence to obtain a training problem description word part-of-speech information sequence;
the training word embedding unit is used for converting the sequence of the training problem description words into a sequence of training problem description word embedding vectors through the word embedding layer;
the training part-of-speech vectorization unit is used for converting the sequence of the part-of-speech information of the training problem description word into a sequence of the part-of-speech vector of the training problem description word through the one-hot encoder;
the training word-part-of-speech splicing unit is used for fusing the sequence of the training problem description word embedded vector and the sequence of the training problem description word part-of-speech vector to obtain a sequence of the training problem description word-part-of-speech splicing vector;
the training problem description semantic understanding unit is used for enabling the sequence of the training problem description word-part-of-speech spliced vector to pass through the two-way long-short-term memory neural network model to obtain a training problem description semantic understanding feature vector;
the problem description semantic association unit is used for calculating vector multiplication of the training problem description semantic understanding feature vector and a transpose vector of the training problem description semantic understanding feature vector so as to obtain an association feature matrix;
the manifold convex decomposition consistency loss unit is used for calculating manifold convex decomposition consistency factors of the correlation feature matrix to obtain manifold convex decomposition consistency loss function values; and
the model training unit is used for training the two-way long-short-term memory neural network model and the question-answering model based on the AIGC model based on propagation of the manifold convex decomposition consistency loss function value in the gradient descending direction.
8. The natural language processing based man-machine interaction system of claim 7, wherein the manifold convex decomposition consistency loss unit is configured to:
calculating a manifold convex decomposition consistency factor of the correlation feature matrix according to the following loss formula to obtain a manifold convex decomposition consistency loss function value;
wherein the loss formula is defined in terms of the following quantities: M(i,j), the feature value at the (i,j)-th position of the correlation feature matrix M; the mean vector of the row vectors of M and the diagonal vector of M; the one-norm of a vector; the Frobenius norm of a matrix; vector multiplication; L, the length of the feature vector; three weight hyperparameters; and the log function, the resulting value being the manifold convex decomposition consistency loss function value.
9. The man-machine interaction method based on natural language processing is characterized by comprising the following steps of:
acquiring a question description input by a user;
carrying out semantic understanding on the problem description to obtain a problem description semantic understanding feature vector; and
generating a recommended answer text based on the semantic understanding feature vector of the question description.
10. The natural language processing based man-machine interaction method of claim 9, wherein performing semantic understanding on the problem description to obtain a problem description semantic understanding feature vector comprises:
performing word segmentation processing on the problem description to obtain a sequence of problem description words;
extracting part-of-speech information of each problem descriptor in the sequence of the problem descriptors to obtain a sequence of part-of-speech information of the problem descriptors; and
carrying out semantic association coding on the sequence of the problem description words and the sequence of the part-of-speech information of the problem description words to obtain the problem description semantic understanding feature vector.
CN202310939997.6A 2023-07-28 2023-07-28 Man-machine interaction system and method based on natural language processing Pending CN116932723A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310939997.6A CN116932723A (en) 2023-07-28 2023-07-28 Man-machine interaction system and method based on natural language processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310939997.6A CN116932723A (en) 2023-07-28 2023-07-28 Man-machine interaction system and method based on natural language processing

Publications (1)

Publication Number Publication Date
CN116932723A true CN116932723A (en) 2023-10-24

Family

ID=88382458

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310939997.6A Pending CN116932723A (en) 2023-07-28 2023-07-28 Man-machine interaction system and method based on natural language processing

Country Status (1)

Country Link
CN (1) CN116932723A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287294A (en) * 2018-12-27 2019-09-27 厦门智融合科技有限公司 Intellectual property concept answers method and system automatically
CN110083682A (en) * 2019-04-19 2019-08-02 西安交通大学 It is a kind of to understand answer acquisition methods based on the machine readings for taking turns attention mechanism more
US20210064821A1 (en) * 2019-08-27 2021-03-04 Ushur, Inc. System and method to extract customized information in natural language text

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
寿建琪 (Shou Jianqi): "Toward the 'Known Unknowns': GPT Large Language Models Help Realize Human-Centered Information Retrieval", Journal of Library and Information Science in Agriculture (农业图书情报学报), vol. 35, no. 5, pages 17-19 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117274450A (en) * 2023-11-21 2023-12-22 长春职业技术学院 Animation image generation system and method based on artificial intelligence
CN117274450B (en) * 2023-11-21 2024-01-26 长春职业技术学院 Animation image generation system and method based on artificial intelligence
CN117435505A (en) * 2023-12-04 2024-01-23 南京易迪森信息技术有限公司 Visual generation method of performance test script
CN117435505B (en) * 2023-12-04 2024-03-15 南京易迪森信息技术有限公司 Visual generation method of performance test script

Similar Documents

Publication Publication Date Title
CN111026861B (en) Text abstract generation method, training device, training equipment and medium
CN116932723A (en) Man-machine interaction system and method based on natural language processing
CN110609891A (en) Visual dialog generation method based on context awareness graph neural network
CN110705206A (en) Text information processing method and related device
CN112699686B (en) Semantic understanding method, device, equipment and medium based on task type dialogue system
CN112100332A (en) Word embedding expression learning method and device and text recall method and device
CN112084789A (en) Text processing method, device, equipment and storage medium
CN109766407A (en) Data processing method and system
Wu et al. Multimodal large language models: A survey
WO2023168601A1 (en) Method and apparatus for training natural language processing model, and storage medium
CN113158687B (en) Semantic disambiguation method and device, storage medium and electronic device
Neidle et al. New shared & interconnected asl resources: Signstream® 3 software; dai 2 for web access to linguistically annotated video corpora; and a sign bank
CN113901191A (en) Question-answer model training method and device
Xian et al. Self-guiding multimodal LSTM—when we do not have a perfect training dataset for image captioning
Dethlefs Domain transfer for deep natural language generation from abstract meaning representations
CN113392179A (en) Text labeling method and device, electronic equipment and storage medium
CN111444695B (en) Text generation method, device and equipment based on artificial intelligence and storage medium
CN113806487A (en) Semantic search method, device, equipment and storage medium based on neural network
CN110705273A (en) Information processing method and device based on neural network, medium and electronic equipment
Guo et al. Who is answering whom? Finding “Reply-To” relations in group chats with deep bidirectional LSTM networks
CN113836303A (en) Text type identification method and device, computer equipment and medium
CN116050352A (en) Text encoding method and device, computer equipment and storage medium
CN113326367B (en) Task type dialogue method and system based on end-to-end text generation
Yao Attention-based BiLSTM neural networks for sentiment classification of short texts
CN113918031A (en) System and method for Chinese punctuation recovery using sub-character information

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination