CN115599898A - Intelligent question and answer implementation method and device, electronic equipment and storage medium - Google Patents

Intelligent question and answer implementation method and device, electronic equipment and storage medium

Info

Publication number: CN115599898A
Application number: CN202211317603.5A
Authority: CN (China)
Prior art keywords: neural network, network model, question, graph neural, information
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 李旦
Current Assignee: Agricultural Bank of China
Original Assignee: Agricultural Bank of China
Application filed by: Agricultural Bank of China
Priority/filing date: 2022-10-26
Publication date: 2023-01-13 (publication of CN115599898A)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 - Querying
    • G06F 16/332 - Query formulation
    • G06F 16/3329 - Natural language query formulation or dialogue systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 - Querying
    • G06F 16/338 - Presentation of query results

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the invention discloses an intelligent question-answering implementation method and device, an electronic device, and a storage medium. The method comprises the following steps: acquiring question information input by a user and converting the question information into target information; inputting the target information into a pre-trained non-uniform graph neural network model to obtain output information of the non-uniform graph neural network model; and converting the output information into an answer corresponding to the question information and displaying the answer for the user. With the method provided by the embodiment of the invention, the question information of the user is analyzed by the pre-established non-uniform graph neural network model, and answer information corresponding to the question is provided to the user accurately and quickly. Questions are processed quickly, the output answers are highly accurate, and the user experience is further improved.

Description

Intelligent question and answer implementation method and device, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the field of intelligent question answering, in particular to an intelligent question answering implementation method, an intelligent question answering implementation device, electronic equipment and a storage medium.
Background
Intelligent question answering organizes accumulated, unordered corpus information in an orderly and scientific manner and builds knowledge-based classification models; these classification models can guide newly added consultation and service corpora, saving human resources, improving the automation of information processing, and reducing the operating cost of a website. With the development of science and technology, intelligent question answering is now widely applied in daily life.
Existing intelligent question-answering techniques include question answering based on template matching and question answering based on traditional deep learning. Template-matching-based question answering places high demands on the people who construct the data set: constructing a high-quality template requires deep expertise in a specialized field, the required template scale is large, the process is time-consuming and labor-intensive, and the resulting question-answering system performs poorly. Question answering based on traditional deep learning has no good way to handle unevenly distributed data, yet data in real scenarios are often unevenly distributed, so this approach performs poorly and the accuracy of the generated answers is low.
Disclosure of Invention
The invention provides an intelligent question-answering implementation method and apparatus, an electronic device, and a storage medium, in which answers to questions are obtained with a non-uniform graph neural network model, so that questions are processed quickly and with high accuracy.
In a first aspect, an embodiment of the present invention provides an intelligent question answering implementation method, where the method includes:
acquiring question information input by a user, and converting the question information into target information;
inputting the target information into a pre-trained non-uniform graph neural network model to obtain output information of the non-uniform graph neural network model;
and converting the output information into an answer corresponding to the question information, and displaying the answer for the user.
In a second aspect, an embodiment of the present invention further provides an intelligent question answering apparatus, where the apparatus includes:
the acquisition module is used for acquiring the question information input by a user and converting the question information into target information;
the input module is used for inputting the target information into a pre-trained non-uniform graph neural network model to obtain output information of the non-uniform graph neural network model;
and the output module is used for converting the output information into an answer corresponding to the question information and displaying the answer for the user.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the intelligent question answering implementation method provided by any embodiment of the invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the intelligent question answering implementation method provided in any embodiment of the present invention.
In the embodiment of the invention, question information input by a user is acquired and converted into target information; the target information is input into a pre-trained non-uniform graph neural network model to obtain output information of the non-uniform graph neural network model; and the output information is converted into an answer corresponding to the question information and displayed for the user. In the embodiment of the invention, the pre-established non-uniform graph neural network model is used to analyze the question information of the user and to output answer information corresponding to the question accurately and quickly; questions are processed quickly, the output answers are highly accurate, and the user experience is further improved.
Drawings
Fig. 1 is a flowchart of an intelligent question answering implementation method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of a method for training a non-uniform graph neural network model provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of a node-edge structure provided by an embodiment of the present invention;
Fig. 4a is a schematic diagram of a node-edge structure of a question-and-answer sample library according to an embodiment of the present invention;
Fig. 4b is a schematic diagram of the node-edge structure after the edges with an edge degree smaller than 3 are deleted according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an intelligent question answering implementation apparatus provided in an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Fig. 1 is a flowchart of the intelligent question answering implementation method provided in the embodiment of the present invention. The method of this embodiment obtains answers to questions by using a non-uniform graph neural network model and processes questions quickly and with high accuracy. The method may be executed by the intelligent question-answering implementation apparatus in the embodiment of the present invention, the apparatus may be integrated in an electronic device, the electronic device may be a server, and the method may be implemented in software and/or hardware. The intelligent question answering implementation method provided by this embodiment specifically comprises the following steps:
step 101, obtaining question information input by a user, and converting the question information into target information.
The question information is the question sent to the server by the user based on the user's own needs. The target information is information corresponding to the question information input by the user, such as a feature vector of the question information, that can be input into the pre-trained non-uniform graph neural network model.
In an alternative embodiment, the question information input by the user may be text information or voice information. When the question information is text information, the server may convert the text information into a feature vector (the target information) using a word embedding model after receiving the text information sent by the user. Commonly used word embedding models include vector space models, topic models, distributed models, deep pre-trained models, and task-specific text representation models. When the question information is voice information, the server may first convert the received voice information into text information and then process the text information further to obtain the target information.
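As an illustration only (the patent does not fix a particular embedding model), the following Python sketch turns a text question into target information by averaging pre-trained word vectors; the toy vocabulary, vector dimension, and function name are assumptions rather than part of the disclosure. In practice the vectors would come from a trained word embedding or deep pre-trained model rather than a hand-written dictionary.

```python
import numpy as np

# Toy vocabulary of "pre-trained" word vectors (illustrative values only).
EMBEDDING_DIM = 4
word_vectors = {
    "how": np.array([0.1, 0.3, -0.2, 0.5]),
    "reset": np.array([0.7, -0.1, 0.4, 0.0]),
    "password": np.array([0.2, 0.6, 0.1, -0.3]),
}

def question_to_target(question: str) -> np.ndarray:
    """Convert raw question text into a fixed-length feature vector
    by averaging the vectors of known words (unknown words are skipped)."""
    tokens = question.lower().split()
    vectors = [word_vectors[t] for t in tokens if t in word_vectors]
    if not vectors:                      # no known words: return a zero vector
        return np.zeros(EMBEDDING_DIM)
    return np.mean(vectors, axis=0)      # simple bag-of-words average

print(question_to_target("How reset password"))
```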
And 102, inputting the target information into a pre-trained non-uniform graph neural network model to obtain output information of the non-uniform graph neural network model.
The non-uniform graph neural network model is one kind of graph neural network model. A graph neural network model is a network model that uses a neural network to learn graph-structured data, extracts and explores features and patterns in the graph-structured data, and meets the requirements of graph learning tasks such as clustering, classification, prediction, segmentation, and generation. A graph neural network model can effectively extract key information by exploiting the interactions between data items, and can process unstructured data, such as text information, more efficiently than a conventional neural network model. The non-uniform graph neural network model is used for data whose distribution is uneven, and can further process such data by exploiting the key elements in the data.
In an optional implementation, after obtaining the target information, the server may input the target information into the pre-trained non-uniform graph neural network model, and the trained model processes the target information to obtain output information corresponding to it. Therefore, the initial graph neural network model needs to be trained before the target information is input into the non-uniform graph neural network model. In this embodiment, optionally, training the initial graph neural network model to obtain the non-uniform graph neural network model includes the following steps A1 to A2:
step A1: and if the non-uniform graph neural network model does not meet the preset convergence condition, extracting a sample from the question-answering sample library as the current sample.
The question-answer sample library comprises nodes, edges, and the position information of the nodes relative to a predetermined anchor structure. The convergence condition may be preset according to specific requirements and the actual training process. In this scheme, the convergence condition of the non-uniform graph neural network model is that the error between the model output for the sample data and the sample label is smaller than a preset error value. The question-answer sample library is built from the listening work orders and the listening knowledge base: the listening work orders contain the text of each question-answer exchange, and the listening knowledge base is an information base containing big data as well as various kinds of common knowledge. The nodes are the nouns appearing in the listening work orders and the listening knowledge base, and the edges are the relations between those nouns. The anchor structure is a structure determined in advance from the key edges and key points.
In an optional implementation, when the non-uniform graph neural network model does not meet a preset convergence condition, the question-answer sample library is obtained, and a sample is extracted from the question-answer sample library as a current sample.
Step A2: training the non-uniform graph neural network model using the current sample until the non-uniform graph neural network model meets the convergence condition.
Specifically, after the current sample is determined, the current sample is input into the non-uniform graph neural network, and output data corresponding to the current sample is obtained. Further, a loss function of the non-uniform graph neural network model is determined according to the output data and the sample label, and then network parameters of the non-uniform graph neural network model are adjusted according to a calculation result of the loss function until the non-uniform graph neural network model meets a convergence condition.
Through the above steps, the initial graph neural network model can be trained on a large number of existing question-answer information bases, yielding a trained non-uniform graph neural network model for processing question information. This improves the accuracy of the output answer information and further improves the user experience.
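A minimal sketch of steps A1 and A2, assuming the model, loss function, and parameter-update routine are supplied as callables; all names here are illustrative and not prescribed by the patent.

```python
import random

def train_until_converged(model, sample_library, loss_fn, update_fn,
                          max_error=1e-3, max_steps=10_000):
    """Step A1: while the preset convergence condition is not met, draw a
    sample from the question-answer sample library as the current sample.
    Step A2: train the model on that sample until convergence."""
    for _ in range(max_steps):
        sample, label = random.choice(sample_library)  # current sample and its label
        output = model(sample)                         # forward pass
        loss = loss_fn(output, label)                  # error against the sample label
        if loss < max_error:                           # preset convergence condition
            break
        update_fn(model, loss)                         # adjust network parameters
    return model
```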
And 103, converting the output information into an answer corresponding to the question information, and displaying the answer for the user.
The output information is what the trained non-uniform graph neural network model produces after the target information is input into it: the model computes information corresponding to the input. After the output information is obtained, it can be converted into answer information corresponding to the question information by using a natural language processing model.
For example, when the question information input by the user is text information, the output information of the non-uniform graph neural network model can be converted into an answer in text form and sent to the user. When the question information input by the user is voice information, the output information is first converted into a text answer; the text answer is then converted into a voice answer, and the voice answer is played for the user.
According to the scheme of the embodiment of the invention, the question information input by a user is obtained and converted into target information; the target information is input into a pre-trained non-uniform graph neural network model to obtain output information of the non-uniform graph neural network model; and the output information is converted into an answer corresponding to the question information and displayed for the user. With this technical scheme, the question information of the user is analyzed by the pre-established non-uniform graph neural network model, answer information corresponding to the question is output for the user accurately and quickly, questions are processed quickly, the output answers are highly accurate, and the user experience is improved.
Fig. 2 is a flowchart of a training method of a non-uniform graph neural network model according to an embodiment of the present invention, and as shown in Fig. 2, the method mainly includes the following steps:
step 201, determining a question-answer sample library of the initial graph neural network model and sample labels corresponding to all samples in the question-answer sample library.
The question-answer sample library comprises nodes, edges, and the position information of the nodes in the question-answer sample library; its information comes from the listening work orders and the listening knowledge base. The initial graph neural network model is a graph neural network model whose training has not yet begun. The sample label is the idealized output information corresponding to the sample information in the question-answer sample library. In practice, the data in the question-answer sample library are often unevenly distributed; to overcome this and improve the training effect of the non-uniform graph neural network model, the anchor structure in the question-answer sample library must be determined. In this embodiment, optionally, before the samples in the question-answer sample library are input into the initial graph neural network, determining the anchor structure mainly includes the following steps B1 to B5:
step B1: and acquiring node degrees of all nodes of the question and answer sample library.
Each noun in the question-answer sample library is a node, and an edge represents a relation between two nodes. The node degree of a node is the number of edges connecting it to other nodes; the more edges a node has to other nodes, the greater its node degree. Specifically, after the question-answer sample library is obtained, the node degrees of all nodes in the question-answer sample library are counted.
Step B2: adding the node degrees of the two nodes connected by each edge and subtracting 1 to obtain the edge degree of each edge.
The edge degree of each edge depends on the node degrees of the two nodes that the edge connects: the larger those node degrees are, the larger the edge degree of the edge.
Specifically, after the node degrees of all the nodes are obtained, the edge degree of each edge is calculated from the node degrees of its endpoints. As an example, Fig. 3 is a schematic diagram of a node-edge structure provided by an embodiment of the present invention. As shown in Fig. 3, if the node degree of node 1 is 2, the node degree of node 2 is 3, and the node degrees of nodes 3, 4, and 5 are each 1, then the edge degree of the edge between node 1 and node 2 is 2 + 3 - 1 = 4.
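A small Python sketch of steps B1 and B2, using an edge list that is consistent with the node degrees stated for Fig. 3 (the exact edges of the figure are an assumption):

```python
from collections import defaultdict

# Edges chosen to reproduce the node degrees stated for Fig. 3.
edges = [(1, 2), (1, 3), (2, 4), (2, 5)]

# Step B1: node degree = number of edges incident to each node.
node_degree = defaultdict(int)
for u, v in edges:
    node_degree[u] += 1
    node_degree[v] += 1

# Step B2: edge degree = sum of the two endpoint node degrees minus 1.
edge_degree = {(u, v): node_degree[u] + node_degree[v] - 1 for u, v in edges}

print(dict(node_degree))  # {1: 2, 2: 3, 3: 1, 4: 1, 5: 1}
print(edge_degree)        # edge (1, 2): 2 + 3 - 1 = 4
```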
Step B3: taking the initial edge parameter value as the current edge degree, and deleting the edges with edge degrees smaller than the current edge degree from the question-answer sample library to obtain an initial key edge set.
The initial key edge set is the set of edges remaining in the question-answer sample library, and the (current edge degree minus 1)-degree edge set is the set of edges deleted from the question-answer sample library. The edge parameter value is set in advance. For example, if the initial edge parameter value is 2, the current edge degree is 2. Assuming the question-answer sample library contains edges with edge degrees 1, 2, 3, 4, and 5, the edges with edge degree 1 are deleted, and the set of remaining edges is the initial key edge set.
For example, Fig. 4a is a schematic node-edge structure diagram of a question-answer sample library provided in an embodiment of the present invention, and Fig. 4b is a schematic diagram of the node-edge structure after the edges with an edge degree smaller than 3 are deleted. As shown in Fig. 4a, the edge degree between node 1 and node 2 is 4, the edge degree between node 2 and node 4 is 2, the edge degree between node 2 and node 5 is 2, the edge degree between node 1 and node 3 is 2, and the edge degree between node 3 and node 6 is 2. As shown in Fig. 4b, after the edges with an edge degree smaller than 3 are deleted from the question-answer sample library, only node 1, node 2, node 3, node 4, and node 5 remain; node 6 and the edge between node 6 and node 3 have been deleted.
Step B4: adding 1 to the current edge degree to obtain the new current edge degree, and repeating the above operations until no edge with an edge degree smaller than the current edge degree exists in the initial key edge set.
Specifically, after the edges with an edge degree smaller than the current edge degree are deleted from the question-answer sample library, 1 is added to the current edge degree to form the new current edge degree, and the edges with an edge degree smaller than this new current edge degree are deleted from the initial key edge set, yielding a new initial key edge set and a new (current edge degree minus 1)-degree edge set. These operations are repeated until no edge with an edge degree smaller than the current edge degree exists in the initial key edge set.
For example, if the initial edge parameter value is 2, the current edge degree is 2. Assuming the question-answer sample library contains edges with edge degrees 1, 2, 3, 4, and 5, the edges with edge degree 1 are deleted, leaving the set of edges with edge degrees 2, 3, 4, and 5 as the initial key edge set. Then 1 is added to the current edge degree, giving a current edge degree of 3; the edges with an edge degree smaller than 3 are deleted from the initial key edge set, and the new initial key edge set is the set of edges with edge degrees 3, 4, and 5. The operation is repeated until, when the current edge degree reaches 6, only the edges with an edge degree of 5 remain in the question-answer sample library; at this point updating stops, and the final key edge set is the set of edges with an edge degree of 5.
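A Python sketch of the iterative pruning in steps B3 and B4. The stopping rule used here (keep the last non-empty set once a further deletion would remove every remaining edge) is our reading of the worked example above, so treat it as an assumption.

```python
def key_edge_set(edge_degree, initial_parameter=2):
    """Steps B3-B4: repeatedly delete edges whose edge degree is below the
    current edge degree, raising the threshold by 1 each round.  Stop once a
    further deletion would leave no edges, keeping the last non-empty set."""
    current = initial_parameter
    edges = dict(edge_degree)                 # {edge: edge degree}
    while True:
        kept = {e: d for e, d in edges.items() if d >= current}
        if not kept:                          # next round would delete everything
            return edges                      # final key edge set
        edges = kept
        current += 1

# Edge degrees from the Fig. 3 sketch above:
print(key_edge_set({(1, 2): 4, (1, 3): 2, (2, 4): 3, (2, 5): 3}))
# -> {(1, 2): 4}: only the edge with the highest edge degree survives
```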
Step B5: determining an anchor structure based on the key edge set.
The anchor structure is the structure composed of the key points and the key edges, where the key points are the endpoints of the key edges. In other words, the structure formed by the key points and the key edges in the key edge set is the anchor structure. After the anchor structure is obtained, the position information of all nodes in the question-answer sample library relative to the anchor structure is further obtained.
In an alternative embodiment, the position information of each node may be determined based on its distance from the key points in the anchor structure. The smaller a node's degree, the greater its distance to the anchor structure and the more "out of the way" the node is in the question-answer sample library; the larger the node degree, the smaller the distance to the anchor structure and the more "central" the node is in the question-answer sample library.
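One plausible realisation of this position information (an assumption, since the patent gives no formula): the number of hops from each node to the nearest key point of the anchor structure, computed with a multi-source breadth-first search. The anchor nodes below are the endpoints of the key edge found in the previous sketch.

```python
from collections import deque

def positions_relative_to_anchor(adjacency, anchor_nodes):
    """Distance (in hops) from every node to the nearest key point of the
    anchor structure, via multi-source BFS; smaller distance = more central."""
    dist = {n: 0 for n in anchor_nodes}       # key points sit on the anchor itself
    queue = deque(anchor_nodes)
    while queue:
        node = queue.popleft()
        for neighbour in adjacency.get(node, []):
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return dist

adjacency = {1: [2, 3], 2: [1, 4, 5], 3: [1], 4: [2], 5: [2]}
print(positions_relative_to_anchor(adjacency, anchor_nodes=[1, 2]))
# -> {1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
```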
Through these steps, the key points and key edges in the sample data can be determined before the sample information is input into the initial graph neural network model, and the anchor structure and the position information of each node can be determined from the key edges. This alleviates the problem of uneven data distribution and improves the training effect of the non-uniform graph neural network model.
Step 202, extracting a sample from the question-answer sample library as the current sample, and constructing a corresponding graph data structure based on the current sample.
The graph data structure comprises a topological graph structure formed by the nodes and edges and the position information of each node in the topological graph structure. In an alternative embodiment, during training of the initial graph neural network, the encoder in the graph neural network model may encode the position information of the nodes in the topological graph into the node feature vectors and use those node feature vectors as the input of the next encoding layer. Specifically, after the position information of each node in the topological graph formed from the question-answer sample library is determined, the nodes, edges, and the position information of each node in the current sample are assembled into a graph data structure.
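A sketch of how the nodes, edges, and per-node position information might be bundled into one graph data structure before being fed to the model; appending the anchor distance as an extra feature dimension is only one simple way to encode the position, not the patent's prescribed encoder, and all names are illustrative.

```python
import numpy as np

def build_graph_data(node_features, edges, anchor_distance):
    """Bundle nodes, edges and per-node position information (distance to
    the anchor structure) into one graph data structure, appending the
    position as an extra feature dimension."""
    enriched = {
        node: np.concatenate([vec, [anchor_distance.get(node, 0)]])
        for node, vec in node_features.items()
    }
    return {"node_features": enriched, "edges": list(edges)}

node_features = {1: np.array([0.2, 0.5]), 2: np.array([0.1, -0.3])}
graph = build_graph_data(node_features, edges=[(1, 2)], anchor_distance={1: 0, 2: 0})
print(graph["node_features"][1])   # original features plus the position entry
```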
And step 203, inputting the graph data structure into the initial graph neural network model to obtain the output information of the initial graph neural network model.
The output information is the information corresponding to the input information that the initial graph neural network model produces through its computation. In an alternative embodiment, after the graph data structure constructed from the nodes, edges, and position information of each node in the current sample is input into the initial graph neural network model, the model computes and outputs information in a certain format.
And 204, determining a loss function of the initial graph neural network model based on the output information of the initial graph neural network model and the sample label, and adjusting network parameters in the initial graph neural network model based on the loss function.
The sample label is the output information in an ideal state corresponding to the input information. The loss function is a function that maps the value of a random event, or of its associated random variable, to a non-negative real number representing the "risk" or "loss" of that event. The loss function is typically associated with an optimization problem as the learning criterion, i.e. the model is solved and evaluated by minimizing the loss function; in statistics and machine learning it is used, for example, for parameter estimation of models.
In practical application, a certain "gap" exists between the output information the initial graph neural network model produces for the input information and the sample label, and this "gap" can be measured with the loss function. The network parameters of the initial graph neural network are then adjusted according to the value of the loss function until the initial graph neural network model meets the preset convergence condition, at which point the adjustment stops and the adjusted initial graph neural network model is taken as the trained non-uniform graph neural network model.
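A minimal PyTorch sketch of step 204, assuming a generic message-passing model as a stand-in for the initial graph neural network (the patent's actual non-uniform architecture is not specified here); it shows one loss computation against the sample label and one parameter update.

```python
import torch
import torch.nn as nn

class SimpleGraphModel(nn.Module):
    """Generic stand-in for the initial graph neural network model:
    one round of neighbourhood aggregation followed by a linear classifier."""
    def __init__(self, in_dim, hidden_dim, num_answers):
        super().__init__()
        self.encode = nn.Linear(in_dim, hidden_dim)
        self.classify = nn.Linear(hidden_dim, num_answers)

    def forward(self, x, adj):
        h = torch.relu(self.encode(adj @ x))   # aggregate neighbour features
        return self.classify(h.mean(dim=0))    # graph-level answer scores

model = SimpleGraphModel(in_dim=3, hidden_dim=8, num_answers=5)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(4, 3)                          # 4 nodes, 3 features each
adj = torch.eye(4)                             # placeholder adjacency (self-loops only)
label = torch.tensor(2)                        # index of the ideal answer

optimizer.zero_grad()
loss = loss_fn(model(x, adj).unsqueeze(0), label.unsqueeze(0))
loss.backward()                                # measure the "gap" and backpropagate
optimizer.step()                               # adjust network parameters
```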
With the training method of the non-uniform graph neural network model provided by the embodiment of the present invention, the question-answer sample library of the initial graph neural network and the corresponding sample labels are determined, a sample is extracted from the question-answer sample library as the current sample, a corresponding graph data structure is constructed based on the current sample and input into the initial graph neural network model to obtain its output information, the loss function of the initial graph neural network model is determined based on this output information and the sample label, and the network parameters of the initial graph neural network are adjusted based on the loss function. This technical scheme trains the initial graph neural network model simply and quickly, pays attention to the position information of each node during training, alleviates the problem of uneven data distribution, and improves the training effect of the non-uniform graph neural network model. The trained non-uniform graph neural network model can analyze the question information of the user and output answer information corresponding to the question accurately and quickly; questions are processed quickly, the output answers are highly accurate, and the user experience is further improved.
Fig. 5 is a schematic structural diagram of an intelligent question answering device according to an embodiment of the present invention. The embodiment of the invention provides an intelligent question answering device, which comprises:
an obtaining module 501, configured to obtain question information input by a user, and convert the question information into target information;
an input module 502, configured to input the target information into a pre-trained non-uniform graph neural network model, so as to obtain output information of the non-uniform graph neural network model;
an output module 503, configured to convert the output information into an answer corresponding to the question information, and display the answer for the user.
Optionally, before obtaining the question information input by the user, the input module 502 is specifically configured to: if the non-uniform graph neural network model does not meet the preset convergence condition, extracting a sample from a question-answer sample library as a current sample; the question and answer sample library comprises nodes, edges, a predetermined anchor structure and position information of the nodes in the question and answer sample library;
training the non-uniform graph neural network model using the current sample until the non-uniform graph neural network model satisfies the convergence condition.
Optionally, the input module 502 is further configured to: determining the current sample and a sample label corresponding to the current sample;
obtaining output information of the initial graph neural network model based on the current sample and the initial graph neural network model;
determining a loss function of the initial graph neural network model based on the output information of the initial graph neural network model and the sample label, and adjusting network parameters in the initial graph neural network model based on the loss function until the non-uniform graph neural network model meets the convergence condition.
Optionally, the input module 502 is further configured to: constructing a corresponding graph data structure based on the current sample;
and inputting the graph data structure into the initial graph neural network model to obtain the output information of the initial graph neural network model.
Optionally, the input module 502 is further configured to: acquiring node degrees of all nodes in the question and answer sample library;
determining the edge degrees of all edges in the question and answer sample library based on the node degrees of all nodes in the question and answer sample library;
and determining a key edge set based on the edge degrees of all edges in the question and answer sample library, and determining the anchor structure based on the key edge set.
Optionally, the input module 502 is further configured to: and adding the node degrees of the two nodes connecting each edge and subtracting 1 to obtain the edge degree of each edge.
Optionally, the input module 502 is further configured to: taking the initial edge parameter value as the current edge degree; deleting the edges with edge degrees smaller than the current edge degree in the question-answer sample library to obtain an initial key edge set and a (current edge degree minus 1)-degree edge set;
adding 1 to the current edge degree as the new current edge degree; and repeating the above operations until no edge with an edge degree smaller than the current edge degree exists in the initial key edge set.
The intelligent question-answering implementation apparatus provided by the embodiment of the present invention can execute the intelligent question-answering implementation method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
Fig. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention. Referring to Fig. 6, a schematic structural diagram of a computer system 12 suitable for implementing the electronic device of the embodiment of the present invention is shown. The electronic device shown in Fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention. The components of the electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 6, and commonly referred to as a "hard drive"). Although not shown in FIG. 6, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of embodiments of the invention as described.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. In the electronic device 12 of the present embodiment, the display 24 is not provided as a separate body but is embedded in the mirror surface, and when the display surface of the display 24 is not displayed, the display surface of the display 24 and the mirror surface are visually integrated. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 20. As shown, the network adapter 20 communicates with the other modules of the electronic device 12 over the bus 18. It should be understood that although not shown in FIG. 6, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, to name a few.
The processing unit 16 executes the programs stored in the system memory 28 to run various functional applications and to implement intelligent question answering, for example the intelligent question-answering implementation method provided by the embodiment of the present invention: acquiring question information input by a user, and converting the question information into target information; inputting the target information into a pre-trained non-uniform graph neural network model to obtain output information of the non-uniform graph neural network model; and converting the output information into an answer corresponding to the question information, and displaying the answer for the user.
The embodiment of the invention provides a computer-readable storage medium, on which a computer program is stored; when the program is executed by a processor, the intelligent question-answering implementation method provided by the embodiments of the invention is implemented: acquiring question information input by a user, and converting the question information into target information; inputting the target information into a pre-trained non-uniform graph neural network model to obtain output information of the non-uniform graph neural network model; and converting the output information into an answer corresponding to the question information, and displaying the answer for the user. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing description is only exemplary of the invention and that the principles of the technology may be employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments illustrated herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. An intelligent question-answering implementation method is characterized by comprising the following steps:
acquiring question information input by a user, and converting the question information into target information;
inputting the target information into a pre-trained non-uniform graph neural network model to obtain output information of the non-uniform graph neural network model;
and converting the output information into an answer corresponding to the question information, and displaying the answer for the user.
2. The method of claim 1, wherein prior to said obtaining user-entered question information, the method further comprises:
if the non-uniform graph neural network model does not meet the preset convergence condition, extracting a sample from a question-answer sample library as a current sample; the question and answer sample library comprises nodes, edges, a predetermined anchor structure and position information of the nodes in the question and answer sample library;
training the non-uniform graph neural network model using the current sample until the non-uniform graph neural network model satisfies the convergence condition.
3. The method of claim 2, wherein training the non-uniform graph neural network model using the current sample until the non-uniform graph neural network model satisfies the convergence condition comprises:
determining the current sample and a sample label corresponding to the current sample;
obtaining output information of the initial graph neural network model based on the current sample and the initial graph neural network model;
determining a loss function of the initial graph neural network model based on the output information of the initial graph neural network model and the sample label, and adjusting network parameters in the initial graph neural network model based on the loss function until the non-uniform graph neural network model meets the convergence condition.
4. The method of claim 3, wherein deriving the output information of the initial graph neural network model based on the current sample and the initial graph neural network model comprises:
constructing a corresponding graph data structure based on the current sample;
and inputting the graph data structure into the initial graph neural network model to obtain the output information of the initial graph neural network model.
5. The method of claim 3, further comprising, prior to said training the non-uniform graph neural network model using the current samples:
acquiring node degrees of all nodes in the question and answer sample library;
determining the edge degrees of all edges in the question and answer sample library based on the node degrees of all nodes in the question and answer sample library;
and determining a key edge set based on the edge degrees of all edges in the question and answer sample library, and determining the anchor structure based on the key edge set.
6. The method of claim 5, wherein determining the degrees of all edges in the question and answer sample library comprises:
and adding the node degrees of the two nodes connecting each edge and subtracting 1 to obtain the edge degree of each edge.
7. The method of claim 5, wherein determining the set of key edges based on the edge degrees of all the edges comprises:
taking the initial edge parameter value as the current edge degree; deleting the edges with edge degrees smaller than the current edge degree in the question-answer sample library to obtain an initial key edge set;
adding 1 to the current edge degree as the new current edge degree; and repeating the above operations until no edge with an edge degree smaller than the current edge degree exists in the initial key edge set.
8. An intelligent question-answering realizing device, which is characterized by comprising:
the acquisition module is used for acquiring the question information input by a user and converting the question information into target information;
the input module is used for inputting the target information into a pre-trained non-uniform graph neural network model to obtain output information of the non-uniform graph neural network model;
and the output module is used for converting the output information into an answer corresponding to the question information and displaying the answer for the user.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the intelligent question answering implementation method according to any one of claims 1 to 7 when executing the program.
10. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the intelligent question-answering implementing method according to any one of claims 1 to 7.
CN202211317603.5A (filed 2022-10-26; priority 2022-10-26) Intelligent question and answer implementation method and device, electronic equipment and storage medium. Status: Pending. Publication: CN115599898A (en).

Priority Applications (1)

Application Number: CN202211317603.5A | Priority Date: 2022-10-26 | Filing Date: 2022-10-26 | Title: Intelligent question and answer implementation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number: CN202211317603.5A | Priority Date: 2022-10-26 | Filing Date: 2022-10-26 | Title: Intelligent question and answer implementation method and device, electronic equipment and storage medium

Publications (1)

Publication Number: CN115599898A | Publication Date: 2023-01-13

Family ID: 84851445

Family Applications (1)

Application Number: CN202211317603.5A | Title: Intelligent question and answer implementation method and device, electronic equipment and storage medium | Priority Date: 2022-10-26 | Filing Date: 2022-10-26

Country Status (1)

Country: CN | Link: CN (1) CN115599898A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination