CN114238611A - Method, apparatus, device and storage medium for outputting information - Google Patents
Method, apparatus, device and storage medium for outputting information
- Publication number
- CN114238611A (application number CN202111587727.0A)
- Authority
- CN
- China
- Prior art keywords
- knowledge
- input
- determining
- initial vector
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3344—Query execution using natural language analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3346—Query execution using probabilistic model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/02—Knowledge representation; Symbolic representation
- G06N5/022—Knowledge engineering; Knowledge acquisition
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H10/00—ICT specially adapted for the handling or processing of patient-related medical or healthcare data
- G16H10/60—ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Databases & Information Systems (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Biomedical Technology (AREA)
- Human Computer Interaction (AREA)
- Epidemiology (AREA)
- Medical Informatics (AREA)
- Primary Health Care (AREA)
- Public Health (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present disclosure provides a method, an apparatus, a device, and a storage medium for outputting information, and relates to artificial intelligence technologies such as natural language processing, knowledge graphs, and deep learning. The specific implementation scheme is as follows: acquiring input of a user, wherein the input comprises a question and a corresponding candidate answer; determining an initial vector of the input; determining relevant knowledge of the input according to the initial vector and a pre-established knowledge base; determining a confidence that the candidate answer belongs to the correct answer of the question based on the relevant knowledge; and outputting information corresponding to the confidence. This implementation can acquire relevant knowledge from the knowledge base according to the user's input, improving the accuracy of judging candidate answers.
Description
Technical Field
The present disclosure relates to the field of computer technologies, in particular to artificial intelligence technologies such as natural language processing, knowledge graphs, and deep learning, and more particularly to a method, an apparatus, a device, and a storage medium for outputting information.
Background
Question-answering systems in the medical field depend heavily on knowledge and reasoning. Answering a question correctly requires comprehensively understanding and reasoning over the medical record content based on knowledge from sources such as knowledge graphs and textbooks. Although intelligent question-answering technology has developed greatly, it still struggles to meet the question-answering requirements of the medical field in terms of knowledge utilization and reasoning.
Disclosure of Invention
The present disclosure provides a method, apparatus, device, and storage medium for outputting information.
According to a first aspect, there is provided a method for outputting information, comprising: acquiring input of a user, wherein the input comprises a question and a corresponding candidate answer; determining an initial vector of the input; determining relevant knowledge of the input according to the initial vector and a pre-established knowledge base; determining a confidence that the candidate answer belongs to the correct answer of the question based on the relevant knowledge; and outputting information corresponding to the confidence.
According to a second aspect, there is provided an apparatus for outputting information, comprising: an input acquisition unit configured to acquire an input of a user, the input including a question and a corresponding candidate answer; a vector determination unit configured to determine an initial vector of the input; a knowledge acquisition unit configured to determine relevant knowledge of the input according to the initial vector and a pre-established knowledge base; an answer judging unit configured to determine a confidence that the candidate answer belongs to the correct answer of the question based on the relevant knowledge; and an information output unit configured to output information corresponding to the confidence.
According to a third aspect, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described in the first aspect.
According to a fourth aspect, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method as described in the first aspect.
According to a fifth aspect, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method as described in the first aspect.
According to the technology of the present disclosure, relevant knowledge can be acquired from the knowledge base according to the user's input, so that the accuracy of judging candidate answers can be improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow diagram for one embodiment of a method for outputting information, according to the present disclosure;
FIG. 3 is a schematic diagram of one application scenario of a method for outputting information according to the present disclosure;
FIG. 4 is a flow diagram of another embodiment of a method for outputting information according to the present disclosure;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for outputting information according to the present disclosure;
FIG. 6 is a block diagram of an electronic device for implementing a method for outputting information according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the disclosed method for outputting information or apparatus for outputting information may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as an intelligent question and answer application, may be installed on the terminal devices 101, 102, 103. The user may enter medical records, questions, and candidate answers via the above-described smart question-and-answer type application.
The terminal devices 101, 102, and 103 may be hardware or software. When they are hardware, they may be various electronic devices, including but not limited to smart phones, tablet computers, e-book readers, car computers, laptop computers, desktop computers, and the like. When they are software, they may be installed in the electronic devices listed above and may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server providing various services, such as a background server providing intelligent question answering on the terminal devices 101, 102, 103. The background server can use the knowledge in the pre-established knowledge base to judge the medical history, the questions and the candidate answers to obtain the judgment result, and feed back the judgment result to the terminal devices 101, 102 and 103.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or as a single server. When the server 105 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be noted that the method for outputting information provided by the embodiments of the present disclosure is generally performed by the server 105. Accordingly, a device for outputting information is generally provided in the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for outputting information in accordance with the present disclosure is shown. The method for outputting information of the embodiment comprises the following steps:
Step 201, acquiring the input of a user.

In this embodiment, the execution subject of the method for outputting information may acquire the input of the user in various ways. For example, a user may enter information through a smart question-and-answer application installed on a terminal. The input may include a question and candidate answers, and may also include a medical record. The question may be a question in the medical field or in another field. A candidate answer may be any answer offered for the question, and the execution subject needs to determine whether the candidate answer is the correct answer to the question. For example, if the question is "what drugs to take to reduce fever?", the candidate answer may be "ibuprofen".
Step 202, determining an initial vector of the input.

The execution subject may determine the initial vector of the input in a variety of ways. For example, it may determine the vector of the input using a pre-trained vector determination model and use that vector as the initial vector. The vector determination model may be a language model or any of various feature extraction algorithms. The initial vector may encode the semantic information of the input.
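As a minimal illustration of this step, any featurizer could produce the initial vector; the character-bucketing function below is a toy stand-in for the pre-trained vector determination model the disclosure refers to, not an implementation of it:

```python
import math

def initial_vector(text: str, dim: int = 8) -> list[float]:
    """Toy stand-in for the pre-trained vector determination model:
    bucket each character by its code point, then L2-normalize the counts."""
    vec = [0.0] * dim
    for ch in text:
        vec[ord(ch) % dim] += 1.0  # ord() keeps the sketch deterministic
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]
```

A real system would substitute a learned embedding here; the sketch only fixes the interface: text in, fixed-dimension unit vector out.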
Step 203, determining relevant knowledge of the input according to the initial vector and a pre-established knowledge base.

After determining the initial vector, the execution subject may determine the relevant knowledge of the input from a pre-established knowledge base according to the initial vector. The knowledge base may include various forms of knowledge, such as structured documents, key-value pairs, vector collections, and so forth. The execution subject may first vectorize each piece of knowledge in the knowledge base, then calculate the similarity between the initial vector and each resulting vector, and take the knowledge whose similarity is greater than a preset threshold as the relevant knowledge.
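The retrieval step just described can be sketched as follows. The disclosure does not fix a similarity measure or a threshold value, so the cosine similarity and the 0.5 cutoff below are illustrative assumptions:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def related_knowledge(initial_vec, knowledge_vecs, threshold=0.5):
    """Keep every knowledge item whose vector is similar enough to the input."""
    return [item for item, vec in knowledge_vecs.items()
            if cosine(initial_vec, vec) > threshold]
```

In practice the per-item vectors would come from the same embedding model as the initial vector, and large knowledge bases would use an approximate nearest-neighbor index rather than a linear scan.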
Step 204, determining a confidence that the candidate answer belongs to the correct answer of the question based on the relevant knowledge.

After determining the relevant knowledge, the execution subject may use it to determine whether the candidate answer belongs to the correct answer of the question. For example, the execution subject may input the relevant knowledge into a pre-trained model and make the determination according to the model's output. The output may be a confidence: if the confidence is greater than or equal to a preset threshold, the candidate answer is considered the correct answer; if the confidence is smaller than the preset threshold, it is not.
Step 205, outputting information corresponding to the confidence.

In this embodiment, different confidences may correspond to different information, and the execution subject may output the corresponding information for the user to view. For example, if the confidence is greater than or equal to the preset threshold, the corresponding information may be "this answer is a correct answer"; if the confidence is smaller than the preset threshold, it may be "this answer is a wrong answer". Optionally, the information may further include the confidence value.
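The thresholding and message selection described above can be sketched in a few lines; the 0.5 threshold and the exact message strings are examples from this section, not values fixed by the disclosure:

```python
def output_information(confidence: float, threshold: float = 0.5) -> str:
    """Map a confidence to the user-facing message described above,
    optionally carrying the confidence value along."""
    if confidence >= threshold:
        return f"this answer is a correct answer (confidence={confidence:.2f})"
    return f"this answer is a wrong answer (confidence={confidence:.2f})"
```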
With continued reference to fig. 3, a schematic diagram of one application scenario of a method for outputting information according to the present disclosure is shown. In the application scenario of fig. 3, a user inputs a question and a candidate answer through a medical record filling application installed in the terminal 301, and the medical record filling application may upload the question and the candidate answer to the server 302. Server 302 determines the relevant knowledge by accessing knowledge base 303 multiple times. Finally, the server 302 determines that the candidate answer is not the correct answer to the question according to the relevant knowledge, and feeds back the determination result to the terminal 301.
According to the method for outputting the information provided by the embodiment of the disclosure, the relevant knowledge can be acquired from the knowledge base according to the input of the user, so that the accuracy of judging the candidate answer can be improved.
With continued reference to fig. 4, a flow 400 of another embodiment of a method for outputting information in accordance with the present disclosure is shown. As shown in fig. 4, the method of the present embodiment may include the following steps:
Step 403, performing at least one knowledge acquisition operation on the knowledge base according to the initial vector to determine the relevant knowledge of the input.
In this embodiment, the executing entity may perform at least one knowledge obtaining operation from the knowledge base according to the initial vector. It will be appreciated that a portion of knowledge may be retrieved from the knowledge base each time a knowledge retrieval operation is performed. The execution subject may take knowledge obtained by each knowledge acquisition operation as the relevant knowledge.
In some optional implementations of this embodiment, the knowledge base may include a set of key-value pairs. The execution subject may determine a vector for each key in the set of key-value pairs. Then, related keys are determined based on the initial vector and the vectors of the keys. Specifically, the execution subject may calculate the similarity between the initial vector and the vector of each key, take the keys whose similarity is greater than a preset threshold as related keys, and take the values corresponding to those keys as the relevant knowledge.
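The key-value variant can be sketched like this; once again, cosine similarity and the 0.5 threshold are assumed stand-ins, since the disclosure leaves the similarity measure open:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def related_values(initial_vec, kv_store, key_vectors, threshold=0.5):
    """Select the values whose keys embed close enough to the input vector."""
    return [kv_store[key] for key in kv_store
            if cosine(initial_vec, key_vectors[key]) > threshold]
```

The split between `kv_store` (key to knowledge value) and `key_vectors` (key to embedding) mirrors the description: only keys are matched against the input; values are returned as knowledge.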
In some optional implementations of this embodiment, the execution subject may determine the relevant knowledge through the following steps:
Step 4031, combining the knowledge obtained by the first knowledge acquisition operation with the initial vector to obtain iterative knowledge.
Step 4032, combining the knowledge obtained by each subsequent knowledge acquisition operation with the iterative knowledge obtained the previous time to obtain updated iterative knowledge.
Step 4033, taking the iterative knowledge obtained each time as the relevant knowledge.
In this implementation, the execution subject may combine the knowledge obtained by the first knowledge acquisition operation with the initial vector, and use the combined result as the iterative knowledge. Here, the combination may include vectorizing the knowledge obtained by the knowledge acquisition operation, splicing the resulting vector with the initial vector, and then performing dimensionality reduction so that the dimensionality of the iterative knowledge is the same as that of the initial vector. Alternatively, the execution subject may vectorize the knowledge obtained by the knowledge acquisition operation and compute a weighted sum of the resulting vector and the initial vector to obtain the iterative knowledge.
Then, the execution subject may combine the knowledge obtained by each subsequent knowledge acquisition operation with the iterative knowledge obtained the previous time to obtain updated iterative knowledge. The combination here may be the same as or different from the first combination.
The execution subject may treat each piece of iterative knowledge as relevant knowledge.
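The iteration in steps 4031 to 4033 can be sketched with the weighted-sum variant of the combination (the splice-and-reduce variant would need a learned projection, which this sketch omits; the 0.5 mixing weight is an assumption):

```python
def combine(knowledge_vec, prev_vec, alpha=0.5):
    """Weighted-sum combination; assumes both vectors share a dimension."""
    return [alpha * k + (1 - alpha) * p for k, p in zip(knowledge_vec, prev_vec)]

def iterate_knowledge(initial_vec, retrieved_vecs):
    """Fold each knowledge-acquisition round into the running iterative
    knowledge; every intermediate result is kept as relevant knowledge."""
    relevant, current = [], initial_vec
    for vec in retrieved_vecs:
        current = combine(vec, current)  # round 1 combines with the initial vector
        relevant.append(current)
    return relevant
```

Each round conditions the next retrieval on what has already been gathered, which is what gives the flow its multi-step reasoning character.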
The execution subject may determine a probability value and a weight value for each piece of iterative knowledge obtained. Then, the value of each piece of iterative knowledge is determined according to its probability value and weight value. The values of all the iterative knowledge are added, and the resulting final value is taken as the confidence.
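The scoring just described reduces to a weighted sum over iterations. In the sketch below, the per-iteration probability and weight values are assumed inputs, standing in for the outputs of the classification model and weight model described in this section:

```python
def confidence_from_iterations(prob_values, weight_values):
    """Confidence = sum over iterations of (probability * weight)."""
    return sum(p * w for p, w in zip(prob_values, weight_values))
```

For example, two iterations with probabilities 0.9 and 0.5 and weights 0.6 and 0.4 give a confidence of 0.74, which would pass a 0.5 acceptance threshold.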
In some optional implementations of this embodiment, the execution subject may determine the probability value of the iterative knowledge obtained each time by using a pre-trained classification model, and determine the weight value of the iterative knowledge obtained each time by using a pre-trained weight model.
The classification model and the weight model may each be a neural network or another algorithm module.
According to the method for outputting information provided by this embodiment of the present disclosure, relevant knowledge can be acquired from the knowledge base multiple times through repeated knowledge acquisition operations, thereby realizing multi-step reasoning over the question and improving the accuracy of determining the candidate answer.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for outputting information, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the apparatus 500 for outputting information of the present embodiment includes: an input acquisition unit 501, a vector determination unit 502, a knowledge acquisition unit 503, an answer judgment unit 504, and an information output unit 505.
An input obtaining unit 501 configured to obtain an input of a user, the input including a question and a corresponding candidate answer.
A vector determination unit 502 configured to determine an initial vector of the input.
The knowledge acquisition unit 503 is configured to determine relevant knowledge of the input according to the initial vector and a pre-established knowledge base.
An answer judging unit 504 configured to determine a confidence that the candidate answer belongs to a correct answer to the question based on the relevant knowledge.
An information output unit 505 configured to output information corresponding to the above-described confidence.
In some optional implementations of this embodiment, the knowledge acquisition unit 503 may be further configured to: perform at least one knowledge acquisition operation on the knowledge base according to the initial vector to determine the relevant knowledge of the input.
In some optional implementations of this embodiment, the knowledge base includes a set of key-value pairs, and the knowledge acquisition unit 503 may be further configured to: determine related keys of the input according to the initial vector and the vectors of the keys in the key-value pair set in the knowledge base; and take the values corresponding to the related keys as the relevant knowledge.
In some optional implementations of this embodiment, the knowledge acquisition unit 503 may be further configured to: combine the knowledge obtained by the first knowledge acquisition operation with the initial vector to obtain iterative knowledge; combine the knowledge obtained by each subsequent knowledge acquisition operation with the iterative knowledge obtained the previous time to obtain updated iterative knowledge; and take the iterative knowledge obtained each time as the relevant knowledge.
In some optional implementations of this embodiment, the answer judging unit 504 may be further configured to: determine a probability value and a weight value for each piece of iterative knowledge obtained; and determine the confidence that the candidate answer belongs to the correct answer of the question according to the probability values, the weight values, and a corresponding threshold.
In some optional implementations of this embodiment, the answer judging unit 504 may be further configured to: determine the probability value of the iterative knowledge obtained each time by using a pre-trained classification model; and determine the weight value of the iterative knowledge obtained each time by using a pre-trained weight model.
It should be understood that units 501 to 505, which are described in the apparatus 500 for outputting information, correspond to the respective steps in the method described with reference to fig. 2, respectively. Thus, the operations and features described above for the method for outputting information are equally applicable to the apparatus 500 and the units included therein and will not be described again here.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the personal information of the users involved all comply with relevant laws and regulations and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
Fig. 6 shows a block diagram of an electronic device 600 that performs a method for outputting information according to an embodiment of the disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the electronic device 600 includes a processor 601 that may perform various suitable actions and processes according to a computer program stored in a read-only memory (ROM) 602 or loaded from a memory 608 into a random access memory (RAM) 603. The RAM 603 may also store various programs and data necessary for the operation of the electronic device 600. The processor 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Various components in the electronic device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a memory 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the electronic device 600 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code described above may be packaged as a computer program product. These program code or computer program products may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor 601, causes the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also known as a cloud computing server or cloud host, which is a host product in a cloud computing service system and remedies the drawbacks of traditional physical hosts and Virtual Private Server (VPS) services, namely high management difficulty and weak service scalability. The server may also be a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, which is not limited herein as long as the desired results of the technical solutions of the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations, and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent substitution, or improvement made within the spirit and principles of the present disclosure shall be included in the protection scope of the present disclosure.
Claims (15)
1. A method for outputting information, comprising:
acquiring an input of a user, wherein the input comprises a question and a corresponding candidate answer;
determining an initial vector of the input;
determining relevant knowledge of the input according to the initial vector and a pre-established knowledge base;
determining, based on the relevant knowledge, a confidence level that the candidate answer is a correct answer to the question;
and outputting information corresponding to the confidence level.
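As a non-limiting sketch, the method steps of claim 1 can be expressed as a small pipeline. The function names `embed`, `retrieve`, and `score` are hypothetical placeholders for the vectorization, knowledge-base lookup, and confidence models described in the claims; they are assumptions for illustration, not part of the claimed subject matter:

```python
def answer_confidence(question, candidate_answer, knowledge_base,
                      embed, retrieve, score):
    """Sketch of claim 1: acquire the input, determine its initial
    vector, retrieve relevant knowledge, and score the candidate."""
    user_input = question + " " + candidate_answer   # acquired user input
    initial_vector = embed(user_input)               # initial vector of the input
    relevant_knowledge = retrieve(initial_vector, knowledge_base)
    return score(relevant_knowledge)                 # confidence level
```

The information finally output would correspond to this confidence level, e.g. the candidate answer itself when the confidence is high.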
2. The method of claim 1, wherein the determining relevant knowledge of the input according to the initial vector and a pre-established knowledge base comprises:
performing, according to the initial vector, at least one knowledge acquisition operation from the knowledge base to determine the relevant knowledge of the input.
3. The method of claim 1 or 2, wherein the knowledge base comprises a set of key-value pairs; and
the determining relevant knowledge of the input according to the initial vector and a pre-established knowledge base comprises:
determining a key corresponding to the input according to the initial vector and vectors of keys in the set of key-value pairs in the knowledge base;
and taking the value corresponding to the determined key as the relevant knowledge.
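One plausible reading of the key-value retrieval in claims 3 and 9 is nearest-key lookup by vector similarity. The dot-product similarity below is an assumption for illustration only; the claims do not specify the similarity measure:

```python
def retrieve_value(initial_vector, key_value_pairs, key_vectors):
    """Return the value whose key's vector is most similar to the
    input's initial vector (dot product assumed as the similarity)."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    # Pick the key corresponding to the input, then take its value.
    best_key = max(key_vectors, key=lambda k: dot(initial_vector, key_vectors[k]))
    return key_value_pairs[best_key]
```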
4. The method of claim 2, wherein the performing, according to the initial vector, at least one knowledge acquisition operation from the knowledge base to determine the relevant knowledge of the input comprises:
combining knowledge obtained by the first knowledge acquisition operation with the initial vector to obtain iterative knowledge;
combining knowledge obtained by each knowledge acquisition operation after the first with the iterative knowledge obtained at the previous time to obtain updated iterative knowledge;
and taking the iterative knowledge obtained each time as the relevant knowledge.
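The iteration in claims 4 and 10 can be sketched as repeated acquire-and-combine hops. Element-wise addition is used below as the combining operation purely for illustration, and `acquire` is a hypothetical stand-in for one knowledge acquisition operation; the claims fix neither:

```python
def iterate_knowledge(initial_vector, acquire, num_hops):
    """Perform num_hops knowledge acquisition operations; each result is
    combined (here: element-wise sum, an assumption) with the previous
    iterative knowledge, and every intermediate result is retained."""
    combine = lambda u, v: [a + b for a, b in zip(u, v)]
    state = initial_vector
    relevant_knowledge = []
    for _ in range(num_hops):
        hop = acquire(state)              # one knowledge acquisition operation
        state = combine(hop, state)       # updated iterative knowledge
        relevant_knowledge.append(state)  # keep each iteration's knowledge
    return relevant_knowledge
```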
5. The method of claim 2, wherein the determining a confidence level that the candidate answer is a correct answer to the question based on the relevant knowledge comprises:
determining a probability value and a weight value of each piece of obtained iterative knowledge;
and determining, according to the probability values, the weight values, and a corresponding threshold, the confidence level that the candidate answer is a correct answer to the question.
6. The method of claim 5, wherein the determining a probability value and a weight value of each piece of obtained iterative knowledge comprises:
determining the probability value of the iterative knowledge obtained each time by using a pre-trained classification model;
and determining the weight value of the iterative knowledge obtained each time by using a pre-trained weight model.
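Claims 5 and 6 leave the aggregation of the per-iteration probability and weight values unspecified. One plausible sketch is a weighted average of the probabilities compared against the threshold; the weighted-average formula below is an assumption, not the claimed computation:

```python
def decide(probabilities, weights, threshold):
    """Aggregate per-iteration probability values with their weight values
    (weighted average, an assumed aggregation) and compare the resulting
    confidence level against the threshold."""
    total = sum(weights)
    confidence = sum(p * w for p, w in zip(probabilities, weights)) / total
    return confidence, confidence >= threshold
```

In this reading, the probabilities would come from the pre-trained classification model and the weights from the pre-trained weight model of claim 6.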
7. An apparatus for outputting information, comprising:
an input acquisition unit configured to acquire an input of a user, the input including a question and a corresponding candidate answer;
a vector determination unit configured to determine an initial vector of the input;
a knowledge acquisition unit configured to determine relevant knowledge of the input according to the initial vector and a pre-established knowledge base;
an answer determination unit configured to determine, based on the relevant knowledge, a confidence level that the candidate answer is a correct answer to the question;
an information output unit configured to output information corresponding to the confidence level.
8. The apparatus of claim 7, wherein the knowledge acquisition unit is further configured to:
perform, according to the initial vector, at least one knowledge acquisition operation from the knowledge base to determine the relevant knowledge of the input.
9. The apparatus of claim 7 or 8, wherein the knowledge base comprises a set of key-value pairs; and
the knowledge acquisition unit is further configured to:
determine a key corresponding to the input according to the initial vector and vectors of keys in the set of key-value pairs in the knowledge base;
and take the value corresponding to the determined key as the relevant knowledge.
10. The apparatus of claim 8, wherein the knowledge acquisition unit is further configured to:
combine knowledge obtained by the first knowledge acquisition operation with the initial vector to obtain iterative knowledge;
combine knowledge obtained by each knowledge acquisition operation after the first with the iterative knowledge obtained at the previous time to obtain updated iterative knowledge;
and take the iterative knowledge obtained each time as the relevant knowledge.
11. The apparatus of claim 8, wherein the answer determination unit is further configured to:
determine a probability value and a weight value of each piece of obtained iterative knowledge;
and determine, according to the probability values, the weight values, and a corresponding threshold, the confidence level that the candidate answer is a correct answer to the question.
12. The apparatus of claim 11, wherein the answer determination unit is further configured to:
determine the probability value of the iterative knowledge obtained each time by using a pre-trained classification model;
and determine the weight value of the iterative knowledge obtained each time by using a pre-trained weight model.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are for causing a computer to perform the method of any one of claims 1-6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111587727.0A CN114238611B (en) | 2021-12-23 | 2021-12-23 | Method, apparatus, device and storage medium for outputting information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114238611A true CN114238611A (en) | 2022-03-25 |
CN114238611B CN114238611B (en) | 2023-05-16 |
Family
ID=80761774
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111587727.0A Active CN114238611B (en) | 2021-12-23 | 2021-12-23 | Method, apparatus, device and storage medium for outputting information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114238611B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106874441A (en) * | 2017-02-07 | 2017-06-20 | 腾讯科技(上海)有限公司 | Intelligent answer method and apparatus |
CN107066556A (en) * | 2017-03-27 | 2017-08-18 | 竹间智能科技(上海)有限公司 | Alternative answer sort method and device for artificial intelligence conversational system |
CN107885842A (en) * | 2017-11-10 | 2018-04-06 | 上海智臻智能网络科技股份有限公司 | Method, apparatus, server and the storage medium of intelligent answer |
CN109783624A (en) * | 2018-12-27 | 2019-05-21 | 联想(北京)有限公司 | Answer generation method, device and the intelligent conversational system in knowledge based library |
CN110019838A (en) * | 2017-12-25 | 2019-07-16 | 上海智臻智能网络科技股份有限公司 | Intelligent Answer System and intelligent terminal |
US10599821B2 (en) * | 2017-12-08 | 2020-03-24 | International Business Machines Corporation | Collecting user feedback through logon questions |
CN111382255A (en) * | 2020-03-17 | 2020-07-07 | 北京百度网讯科技有限公司 | Method, apparatus, device and medium for question and answer processing |
CN113515932A (en) * | 2021-07-28 | 2021-10-19 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for processing question and answer information |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115169364A (en) * | 2022-06-17 | 2022-10-11 | 北京百度网讯科技有限公司 | Intelligent question answering method, device, equipment and storage medium |
CN115169364B (en) * | 2022-06-17 | 2024-03-08 | 北京百度网讯科技有限公司 | Intelligent question-answering method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN114238611B (en) | 2023-05-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112561079A (en) | Distributed model training apparatus, method and computer program product | |
US20200349226A1 (en) | Dictionary Expansion Using Neural Language Models | |
US20220398834A1 (en) | Method and apparatus for transfer learning | |
CN111368551A (en) | Method and device for determining event subject | |
CN114238611B (en) | Method, apparatus, device and storage medium for outputting information | |
CN114494747A (en) | Model training method, image processing method, device, electronic device and medium | |
CN112307738B (en) | Method and device for processing text | |
CN113806522A (en) | Abstract generation method, device, equipment and storage medium | |
CN113377924A (en) | Data processing method, device, equipment and storage medium | |
CN112948584A (en) | Short text classification method, device, equipment and storage medium | |
CN112906368A (en) | Industry text increment method, related device and computer program product | |
US20230070966A1 (en) | Method for processing question, electronic device and storage medium | |
CN113408304B (en) | Text translation method and device, electronic equipment and storage medium | |
CN113360672B (en) | Method, apparatus, device, medium and product for generating knowledge graph | |
CN112989797B (en) | Model training and text expansion methods, devices, equipment and storage medium | |
CN114841172A (en) | Knowledge distillation method, apparatus and program product for text matching double tower model | |
CN114385829A (en) | Knowledge graph creating method, device, equipment and storage medium | |
CN114943995A (en) | Training method of face recognition model, face recognition method and device | |
CN114662688A (en) | Model training method, data processing method, device, electronic device and medium | |
CN113886543A (en) | Method, apparatus, medium, and program product for generating an intent recognition model | |
CN113806541A (en) | Emotion classification method and emotion classification model training method and device | |
CN113313049A (en) | Method, device, equipment, storage medium and computer program product for determining hyper-parameters | |
CN113361621A (en) | Method and apparatus for training a model | |
CN115034198B (en) | Method for optimizing computation of embedded module in language model | |
US20220374603A1 (en) | Method of determining location information, electronic device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||