WO2020186458A1 - Question generation method and device, medical inquiry system, and computer-readable storage medium - Google Patents
Question generation method and device, medical inquiry system, and computer-readable storage medium
- Publication number
- WO2020186458A1 (PCT/CN2019/078734, CN2019078734W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- question
- candidate
- answer
- context
- candidate question
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
- G06F40/35—Discourse or dialogue representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/55—Rule-based translation
- G06F40/56—Natural language generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H10/00—ICT specially adapted for the handling or processing of patient-related medical or healthcare data
- G16H10/20—ICT specially adapted for the handling or processing of patient-related medical or healthcare data for electronic clinical trials or questionnaires
Definitions
- the present disclosure relates to the field of computer technology, and in particular to a question generation method and device, a consultation system, and a computer-readable storage medium.
- the intelligent consultation system collects patient's symptoms and physical signs through human-computer interaction with patients.
- a question generation device including at least one processor, the at least one processor being configured to: obtain a candidate question set Q; calculate the information value of each candidate question in the candidate question set Q; and generate at least one question according to the information value of each candidate question.
- generating at least one question includes: selecting a candidate question with the greatest information value from the candidate question set Q as the generated question.
- the at least one processor is configured to: obtain the candidate question set Q according to the dialogue context; the candidate question set Q includes a first selected number of candidate questions q i whose degree of relevance to the dialogue context meets a threshold, where i is a positive integer less than or equal to the first selected number; the generated at least one question includes the dialogue question to be asked to the dialogue target.
- the information value of each candidate question is positively correlated with the expected value of the amount of information brought by the candidate question.
- the at least one processor is configured to obtain the information value of each candidate question through Σ j P(a j | context, q i ) × f(context, q i , a j ), where: context represents the dialogue context; f(context, q i , a j ) represents the amount of information brought by each candidate question q i for the dialogue context; P(a j | context, q i ) represents the probability that the answer a j is generated for the dialogue context and the candidate question q i .
- the amount of information f(context, q i , a j ) brought by each candidate question q i is positively correlated with the degree of correlation between the candidate question q i and the corresponding dialogue context, and is positively correlated with the accuracy of the answer a j in the answer set A;
- the probability P(a j | context, q i ) is positively correlated with the similarity of the candidate question q i and each other candidate question, and is positively correlated with the similarity between the predicted answer of the candidate question q i and the answer a j in the answer set A.
- the probability P(a j | context, q i ) is expressed as P(a j | context, q i ) ∝ exp(cos(G(context, q i ), a j )) × cos(q i , q j ).
- a first recurrent neural network is used to obtain the predicted answer of the candidate question q i ; a second recurrent neural network is used to calculate the amount of information f(context, q i , a j ) brought by each candidate question q i .
- the first recurrent neural network is a gated recurrent unit network; the second recurrent neural network is a long short-term memory network.
- the at least one processor is further configured to use a stochastic gradient descent method based on a corpus and a loss function to train to form the first recurrent neural network and the second recurrent neural network, wherein,
- the corpus includes a training dialogue context Tcontext, a training candidate question set TQ, and a training answer set TA.
- the training candidate question set TQ includes a second selected number of training candidate questions q l , and the training answer set TA includes the answer a m corresponding to each candidate question in the training candidate question set TQ, where l is a positive integer less than or equal to the second selected number, and m is a positive integer less than or equal to the second selected number; the loss function is negatively correlated with the similarity between each candidate question q l and each other candidate question, negatively correlated with the similarity between the predicted answer of the candidate question q l and the answer a m in the training answer set TA, negatively correlated with the degree of correlation between the candidate question q l and the corresponding training dialogue context Tcontext, and negatively correlated with the accuracy of the answer a m .
- cos(q l , q n ) represents the similarity between candidate questions q l and q n
- G(Tcontext, q l ) represents the predicted answer of the candidate question q l
- cos(G(Tcontext, q l ), a m ) represents the similarity between the predicted answer of the candidate question q l and the answer a m
- cos(G(Tcontext, q l ), a n ) represents the similarity between the predicted answer of the candidate question q l and the answer a n in the training answer set
- σ is the sigmoid function
- LSTM(Tcontext, q l , a m ) represents the amount of information brought by the candidate question q l for the training dialogue context Tcontext.
- a medical inquiry system includes the question generating device according to any one of the preceding embodiments, where the dialogue is the dialogue interaction in a medical inquiry and the dialogue target is a patient;
- the question generating device is configured to generate a medical question to be asked to the patient according to the medical question context.
- the medical inquiry system further includes: an input device configured to obtain a medical inquiry context; an output device configured to output the medical inquiry question.
- the medical inquiry context includes a description of the patient.
- the medical inquiry context includes questions that have already been asked to the patient.
- a question generation method including: obtaining a candidate question set Q; calculating the information value of each candidate question in the candidate question set Q; and generating at least one question according to the information value of each candidate question.
- generating at least one question includes: selecting a candidate question with the greatest information value from the candidate question set Q as the generated question.
- the candidate question set Q is obtained according to the dialogue context; the candidate question set Q includes a first selected number of candidate questions q i whose relevance to the dialogue context meets a threshold, where i is a positive integer less than or equal to the first selected number; the generated at least one question includes a dialogue question to be asked to the dialogue target.
- the information value of each candidate question in the candidate question set Q is calculated according to the expected value of the amount of information brought by each candidate question.
- the information value of each candidate question is expressed as Σ j P(a j | context, q i ) × f(context, q i , a j ), where: f(context, q i , a j ) represents the amount of information brought by the candidate question q i for the dialogue context; P(a j | context, q i ) represents the probability that the answer a j is generated for the dialogue context and the candidate question q i .
- the amount of information f(context, q i , a j ) brought by each candidate question is positively correlated with the degree of correlation between the candidate question q i and the corresponding dialogue context, and is positively correlated with the accuracy of the answer a j in the answer set A;
- the probability P(a j | context, q i ) is positively correlated with the similarity of the candidate question q i and each other candidate question, and is positively correlated with the similarity between the predicted answer of the candidate question q i and the answer a j in the answer set A.
- a first recurrent neural network is used to obtain the predicted answer of the candidate question q i ; a second recurrent neural network is used to calculate the amount of information f(context, q i , a j ) brought by each candidate question q i .
- the question generation method further includes: training, based on a corpus and a loss function and using a stochastic gradient descent method, to form the first recurrent neural network and the second recurrent neural network, wherein the corpus includes a training dialogue context Tcontext, a training candidate question set TQ, and a training answer set TA.
- the training candidate question set TQ includes a second selected number of training candidate questions q l ; the training answer set TA includes the answer a m corresponding to each candidate question in the training candidate question set TQ, where l is a positive integer less than or equal to the second selected number, and m is a positive integer less than or equal to the second selected number;
- the loss function is negatively correlated with the similarity between each candidate question q l and each other candidate question, negatively correlated with the similarity between the predicted answer of the candidate question q l and the answer a m in the training answer set TA, negatively correlated with the degree of correlation between the candidate question q l and the corresponding training dialogue context Tcontext, and negatively correlated with the accuracy of the answer a m .
- a computer-readable storage medium having a computer program stored thereon, wherein when the program is executed by a processor, the question generation method described in any of the foregoing embodiments is implemented.
- FIG. 1 is a flowchart illustrating a method for generating a question according to some embodiments of the present disclosure
- FIG. 2 is a flowchart showing a method for generating a question according to other embodiments of the present disclosure
- FIG. 3 is a block diagram showing a question generation device according to some embodiments of the present disclosure.
- Figure 4 is a block diagram showing an electronic device according to some embodiments of the present disclosure.
- FIG. 5 is a block diagram showing a medical inquiry system according to some embodiments of the present disclosure
- Figure 6 is a block diagram illustrating a computer system for implementing some embodiments of the present disclosure.
- Question-answer dialogue is very common in various fields. For example, in industries such as hotel accommodation, guests may ask front-desk staff multiple questions or multiple rounds of questions; in TV quiz shows and poetry-chain competitions, hosts continuously ask contestants questions; and in industries such as telecommunications and cable television, service providers also offer telephone customer service to their subscribers.
- the present disclosure proposes a question generation method, which can obtain as much information as possible with as few questions as possible, so that an accurate diagnosis can be made in a short time.
- a consultation dialogue in the medical field is taken as an example to illustrate the implementation process of the question generation method of the present disclosure. Those skilled in the art can understand that it is also applicable in other question-answer dialogues.
- FIG. 1 is a flowchart illustrating a method for generating a question according to some embodiments of the present disclosure. As shown in Figure 1, the question generation method includes steps S2-S6.
- step S2 a set of candidate questions Q is obtained.
- the candidate question set Q is obtained according to the context of the consultation.
- The medical inquiry context can include a description of the patient.
- the candidate question set Q includes a first selected number of candidate questions q i whose relevance to the medical inquiry context meets a threshold, and i is a positive integer less than or equal to the first selected number.
- the medical inquiry context can be used as a query sentence, and text retrieval tools such as whoosh or lucene can be used to retrieve from the corpus the first selected number (for example, the top 10) of stored contexts that are closest to the medical inquiry context; the question q i corresponding to each retrieved medical inquiry context constitutes the candidate question set Q.
- step S2 the answer a j corresponding to each candidate question in the candidate question set Q is also obtained to obtain the answer set A, where j is a positive integer less than or equal to the first selected number.
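The retrieval in step S2 can be sketched as follows. The patent names tools such as whoosh or lucene; this illustrative stand-in uses a plain bag-of-words cosine similarity instead, and the corpus records, threshold value, and function names are all hypothetical:

```python
import math
from collections import Counter

def bow_cosine(text_a: str, text_b: str) -> float:
    """Cosine similarity of simple bag-of-words vectors."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_candidates(context: str, corpus: list[tuple[str, str, str]],
                        first_selected_number: int = 10, threshold: float = 0.1):
    """Return (Q, A): the candidate questions and their answers whose stored
    context is closest to the medical inquiry context."""
    scored = sorted(corpus, key=lambda rec: bow_cosine(context, rec[0]), reverse=True)
    top = [rec for rec in scored if bow_cosine(context, rec[0]) >= threshold]
    top = top[:first_selected_number]
    return [q for _, q, _ in top], [a for _, _, a in top]

# Hypothetical (C, q, a) corpus records:
corpus = [
    ("sore throat after a cold", "Do you have a fever?", "No fever."),
    ("sore throat and cough", "How long has the cough lasted?", "Three days."),
    ("knee pain when running", "Does the knee swell?", "A little."),
]
Q, A = retrieve_candidates("my throat is sore since my cold", corpus,
                           first_selected_number=2)
```

A real system would replace `bow_cosine` with an indexed retrieval engine, but the control flow (score, threshold, take the first selected number) is the same.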
- step S4 the information value of each candidate question in the candidate question set Q is calculated.
- the information value of each candidate question q i in the candidate question set Q is calculated according to the expected value of the amount of information brought by that candidate question.
- the information value of each candidate question q i can be expressed as Σ j P(a j | context, q i ) × f(context, q i , a j ).
- the amount of information brought by each candidate question is f(context, q i , a j ), which represents the amount of information brought by the candidate question q i for the context of medical inquiry;
- P(a j | context, q i ) represents the probability that the answer a j is generated for the medical inquiry context and the candidate question q i .
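As a minimal sketch, the expectation above is just the probability-weighted sum of information amounts over the possible answers; the numeric values below are toy stand-ins for the outputs of the trained networks described later, not anything specified by the patent:

```python
def information_value(p_answer: list[float], f_info: list[float]) -> float:
    """Expected information value of one candidate question q_i:
    sum over answers a_j of P(a_j | context, q_i) * f(context, q_i, a_j)."""
    assert len(p_answer) == len(f_info)
    return sum(p * f for p, f in zip(p_answer, f_info))

# Toy numbers standing in for P(a_j | context, q_i) and f(context, q_i, a_j):
v = information_value([0.7, 0.2, 0.1], [2.0, 1.0, 0.5])
```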
- the amount of information f (context, q i , a j ) brought by each candidate question q i is positively correlated with the degree of correlation between the candidate question q i and the corresponding medical inquiry context, and the set of answers The accuracy of the answer a j in A is positively correlated.
- f(context, q i , a j ) reflects the amount of information that comprehensively considers the candidate question q i and the answer a j .
- when the candidate question q i is highly correlated with the medical inquiry context and the answer a j accurately answers the candidate question q i , f(context, q i , a j ) is larger.
- the candidate questions "Does the lower limb spasm?" and "Does the calf cramp?" are both highly relevant to the medical inquiry context.
- because the candidate question "Does the lower limb spasm?" uses standard medical terminology, a patient who lacks medical knowledge may not understand it and may give an answer such as "What does spasm mean?"; that is, the accuracy of the answer may be lower.
- the candidate question "Does the calf cramp?" uses relatively easy-to-understand everyday expressions and can get an accurate answer such as "I had a cramp in the calf once"; that is, the accuracy of the answer is high.
- the probability P(a j | context, q i ) is positively correlated with the similarity between the candidate question q i and each other candidate question, and is positively correlated with the similarity between the predicted answer of the candidate question q i and the answer a j in the answer set A.
- P(a j | context, q i ) can be expressed as P(a j | context, q i ) ∝ exp(cos(G(context, q i ), a j )) × cos(q i , q j ), where:
- cos(q i , q j ) represents the similarity of candidate questions q i and q j
- G(context, q i ) represents the predicted answer of candidate questions q i
- cos(G(context, q i ), a j ) Represents the similarity between the predicted answer of the candidate question q i and the answer a j in the answer set A
- cos represents the cosine similarity
- exp represents the exponential function with the natural constant e as the base.
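A small sketch of computing these probabilities from toy word vectors. The patent only gives the proportionality; the 2-dimensional vectors and the final normalization over the answer set are illustrative assumptions:

```python
import math

def cos_sim(u, v):
    """Cosine similarity of two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def answer_probabilities(pred_answer_vec, answer_vecs, q_vec, other_q_vecs):
    """Scores exp(cos(G(context, q_i), a_j)) * cos(q_i, q_j), then
    normalized (an assumption) so the probabilities over A sum to 1."""
    scores = [
        math.exp(cos_sim(pred_answer_vec, a)) * cos_sim(q_vec, qj)
        for a, qj in zip(answer_vecs, other_q_vecs)
    ]
    total = sum(scores)
    return [s / total for s in scores] if total else scores

# Toy 2-d "word vectors" standing in for word2vector embeddings:
probs = answer_probabilities(
    pred_answer_vec=[1.0, 0.0],
    answer_vecs=[[1.0, 0.0], [0.0, 1.0]],
    q_vec=[1.0, 1.0],
    other_q_vecs=[[1.0, 0.0], [1.0, 2.0]],
)
```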
- candidate questions q i and answers a j can be represented by word vectors.
- the word vector representations of the medical inquiry context, the candidate question q i , and the answer a j can be implemented using technologies such as word2vector.
- the first recurrent neural network is used to obtain the predicted answer to the candidate question q i .
- the second recurrent neural network is used to calculate the amount of information f(context, q i , a j ) brought by each candidate question q i .
- Both the first recurrent neural network and the second recurrent neural network may be a gated recurrent unit (GRU) network or a long short-term memory (LSTM) network.
- the first recurrent neural network is a GRU network and the second recurrent neural network is an LSTM network.
- step S6 at least one question is generated based on the information value of each candidate question.
- the generated at least one question may include a medical question to be asked to the patient.
- the candidate question with the greatest information value is selected from the candidate question set Q as the generated question.
- for example, to generate a series of questions, the candidate question with the greatest current information value is selected each time; the selected question (that is, the consultation question that has already been asked to the patient) is then removed from the candidate question set Q, and the process of the aforementioned question generation method is repeated. In this way, a series of questions can be generated with less calculation.
- candidate questions can also be selected by comprehensively considering the sum of the information of multiple questions generated. For example, assuming that 3 medical questions are to be generated, the 3 medical questions with the largest sum of information can be selected as the generated questions according to the information value of each candidate question. In this way, the overall efficiency of information collection can be improved.
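When each candidate's information value is treated as independent, both strategies above reduce to ranking by information value; a minimal sketch (the question texts and values are made up for illustration):

```python
def select_questions(candidates: list[str], values: list[float], k: int = 1) -> list[str]:
    """Pick the k candidate questions with the largest information value
    (k=1 is the single greedy choice; k=3 maximizes the sum over 3 questions
    when the values are independent)."""
    ranked = sorted(zip(candidates, values), key=lambda cv: cv[1], reverse=True)
    return [c for c, _ in ranked[:k]]

qs = ["Does the calf cramp?", "Do you have a fever?", "Any chest pain?"]
vals = [1.65, 0.4, 0.9]

best = select_questions(qs, vals, k=1)
top3 = select_questions(qs, vals, k=3)
```

If the information brought by questions overlapped (non-independent values), maximizing the sum would require a joint search rather than this simple top-k ranking.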
- FIG. 2 is a flowchart showing a method for generating a question according to other embodiments of the present disclosure.
- the difference between FIG. 2 and FIG. 1 is that the question generation method in FIG. 2 further includes step S1. Only the differences between FIG. 2 and FIG. 1 will be described below, and the similarities will not be repeated.
- step S1 based on the corpus and loss function, a stochastic gradient descent method is used to train to form a first recurrent neural network and a second recurrent neural network.
- the training samples (diagnosis context Tcontext for training, candidate question set TQ for training, and answer set TA for training) can be obtained from the corpus in a manner similar to obtaining the candidate question set Q in step S2 to construct a training data set.
- the training candidate question set TQ includes a second selected number of training candidate questions q l , and l is a positive integer less than or equal to the second selected number.
- the training answer set TA includes answers a m corresponding to each candidate question in the training candidate question set TQ, where m is a positive integer less than or equal to the second selected number.
- the second selected number can be set according to training needs.
- a corpus can be constructed based on the relevant information crawled by the crawler from the Internet.
- the relevant information includes the initial description of the patient, the questions of the medical staff, and the answers of the patients to the questions.
- this information can be stored in the corpus as (C, q, a), where: the initial description of the patient corresponds to the medical inquiry context and is represented by C; the question asked by the medical staff for this context is represented by q; and the answer to question q is represented by a.
- for example, a patient on a medical website asks, "My throat is very sore. I thought it was caused by the cold I caught, but the cold is better and my throat still hurts," which can be represented by C; the doctor asks, "How old are you? Do you have any similar medical history? What medicine have you taken?", which can be represented by q 1 ; the patient replies, "I am 23 this year, I have no similar medical history, and I have only taken cold medicine," which can be represented by a 1 .
- the loss function is constructed to be negatively correlated with the similarity between each candidate question q l and each other candidate question, negatively correlated with the similarity between the predicted answer of the candidate question q l and the answer a m in the training answer set TA, negatively correlated with the degree of correlation between the candidate question q l and the corresponding training consultation context Tcontext, and negatively correlated with the accuracy of the answer a m .
- the loss function is expressed as Loss ∝ Σ l,m ( L(Tcontext, q l , a m ) + L(y l , Tcontext, q l , a m ) ), where:
- cos(q l , q n ) represents the similarity between candidate questions q l and q n
- G(Tcontext, q l ) represents the predicted answer of the candidate question q l
- cos(G(Tcontext, q l ), a m ) represents the similarity between the predicted answer of the candidate question q l and the answer a m
- cos(G(Tcontext, q l ), a n ) represents the similarity between the predicted answer of the candidate question q l and the answer a n in the training answer set
- σ is the sigmoid function
- LSTM(Tcontext, q l , a m ) represents the amount of information brought by the candidate question q l for the training consultation context Tcontext.
- the stochastic gradient descent method randomly selects one piece of training data at a time and updates parameters in the direction of steepest descent, instead of scanning the entire training data set in each iteration, which can speed up iteration.
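A toy illustration of single-sample stochastic gradient descent (fitting y = w·x with a squared loss; nothing here is specific to the patent's networks or loss function):

```python
import random

def sgd_step(params, grad_fn, sample, lr=0.05):
    """One SGD step: the gradient is computed on a single randomly drawn
    sample rather than on the whole training set."""
    grads = grad_fn(params, sample)
    return [p - lr * g for p, g in zip(params, grads)]

def grad_fn(params, sample):
    # Gradient of (w*x - y)^2 with respect to w.
    (w,), (x, y) = params, sample
    return [2 * (w * x - y) * x]

random.seed(0)
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x
w = [0.0]
for _ in range(200):
    w = sgd_step(w, grad_fn, random.choice(data))
```

After 200 single-sample updates, w converges to the true slope 2.0 without any pass ever touching the full data set at once.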
- for example, the GRU network can be used when calculating the probability P(a j | context, q i ).
- FIG. 3 is a block diagram showing a question generating apparatus according to some embodiments of the present disclosure.
- the question generating device 30 includes at least one processor 320.
- the processor 320 is configured to: obtain a candidate question set Q; calculate the information value of each candidate question in the candidate question set Q; and generate at least one question according to the information value of each candidate question.
- the processor 320 may execute the question generation method shown in FIG. 1 or FIG. 2, for example.
- the above-mentioned question generation device 30 may also include a memory to store at least a set of candidate questions.
- Figure 4 is a block diagram showing an electronic device according to some embodiments of the present disclosure.
- the electronic device 40 includes a memory 410 and a processor 420 coupled to the memory 410.
- the memory 410 is configured to store instructions for executing the corresponding embodiments of the question generation method.
- the processor 420 is configured to execute one or more steps in the question generation method in any of the embodiments of the present disclosure based on instructions stored in the memory 410.
- each step in the aforementioned question generation method can be implemented by a processor, and can be implemented in software, hardware, firmware, or any combination thereof.
- embodiments of the present disclosure may also take the form of a computer program product implemented on one or more non-volatile storage media containing computer program instructions. Accordingly, embodiments of the present disclosure also provide a computer-readable storage medium on which computer instructions are stored, and when the instructions are executed by a processor, the question generation method in any of the foregoing embodiments is implemented.
- the embodiments of the present disclosure also provide a medical inquiry system, including the question generating device or electronic device described in any of the foregoing embodiments.
- FIG. 5 is a block diagram showing a medical inquiry system according to some embodiments of the present disclosure.
- the consultation system 5 includes a question generating device 50.
- the question generating device 50 is configured to execute the question generating method described in any of the foregoing embodiments.
- the structure of the question generating device 50 may be similar to the aforementioned question generating device 30, and is used to generate the medical question to be asked to the patient according to the medical question context.
- the consultation system 5 further includes: an input device 51 and an output device 52.
- the input device 51 is used to obtain the context of the consultation.
- the context of consultation can be obtained based on multimedia data such as text, sound, or image.
- the output device 52 is used to output medical questions. For example, at least one of a display and a player can be used to output the medical question.
- the input device 51 and the output device 52 may be connected to the question generating apparatus 50 through a network, such as a wireless network, a wired network, and/or any combination of a wireless network and a wired network.
- the network may include a local area network, the Internet, a telecommunication network, the Internet of Things (Internet of Things) based on the Internet and/or a telecommunication network, and/or any combination of the above networks, etc.
- the wired network may, for example, use twisted pair, coaxial cable, or optical fiber transmission for communication, and the wireless network may use, for example, a 3G/4G/5G mobile communication network, Bluetooth, Zigbee, or Wi-Fi.
- the consultation system 5 can be implemented as a local service, that is, the input device, question generating device, and output device are all located on the user side; the consultation system 5 can also be implemented as a cloud service, that is, the input device and output device are located on the user side.
- the question generating device is located in the cloud, and the cloud question generating device can provide services to multiple users at the same time or time-sharing.
- Figure 6 is a block diagram illustrating a computer system for implementing some embodiments of the present disclosure.
- the computer system can be implemented in the form of a general-purpose computing device, and can be used to implement the question generating apparatus of the above-mentioned embodiments.
- the computer system includes a memory 610, a processor 620, and a bus 600 connecting different system components.
- the memory 610 may include, for example, a system memory, a non-volatile storage medium, and the like.
- the system memory for example, stores an operating system, an application program, a boot loader (Boot Loader), and other programs.
- the system memory may include volatile storage media, such as random access memory (RAM) and/or cache memory.
- the non-volatile storage medium stores, for example, instructions for executing corresponding embodiments of the question generation method.
- Non-volatile storage media include, but are not limited to, magnetic disk storage, optical storage, flash memory, etc.
- the processor 620 can be implemented by a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic devices, discrete gates or transistors and other discrete hardware components.
- each module such as the judgment module and the determination module can be implemented by a central processing unit (CPU) running instructions for executing corresponding steps in the memory, or can be implemented by a dedicated circuit that executes the corresponding steps.
- the bus 600 can use any bus structure among a variety of bus structures.
- the bus structure includes, but is not limited to, an industry standard architecture (ISA) bus, a microchannel architecture (MCA) bus, and a peripheral component interconnect (PCI) bus.
- the computer system may also include an input/output interface 630, a network interface 640, a storage interface 650, and so on. These interfaces 630, 640, 650, and the memory 610 and the processor 620 may be connected through a bus 600.
- the input and output interface 630 can provide a connection interface for input and output devices such as a display, a mouse, and a keyboard.
- the network interface 640 provides a connection interface for various networked devices.
- the storage interface 650 provides a connection interface for external storage devices such as floppy disks, USB flash drives, and SD cards.
Abstract
Description
Claims (26)
- A question generation apparatus, comprising at least one processor, wherein the at least one processor is configured to: acquire a candidate question set Q; compute an information value of each candidate question in the candidate question set Q; and generate at least one question according to the information value of each candidate question.
- The question generation apparatus according to claim 1, wherein generating at least one question comprises: selecting, from the candidate question set Q, the candidate question with the largest information value as the generated question.
- The question generation apparatus according to claim 1 or 2, wherein the at least one processor is configured to: acquire the candidate question set Q according to a dialogue context, the candidate question set Q comprising a first selected number of candidate questions q_i whose degree of relevance to the dialogue context satisfies a threshold, where i is a positive integer less than or equal to the first selected number; and the generated at least one question comprises a dialogue question to be posed to a dialogue target.
- The question generation apparatus according to any one of claims 1-3, wherein the information value of each candidate question is positively correlated with the expected value of the information gain brought by that candidate question.
- The question generation apparatus according to any one of claims 1-4, wherein the at least one processor is further configured to: acquire an answer a_j corresponding to each candidate question in the candidate question set Q to obtain an answer set A, where j is a positive integer less than or equal to the first selected number, and j = i indicates that answer a_j is the answer corresponding to candidate question q_i.
- The question generation apparatus according to any one of claims 1-6, wherein: the information gain f(context, q_i, a_j) brought by each candidate question q_i is positively correlated with the degree of relevance between candidate question q_i and the corresponding dialogue context, and positively correlated with the accuracy of answer a_j in the answer set A; and the probability P(a_j|context, q_i) is positively correlated with the similarity between candidate question q_i and each other candidate question, and positively correlated with the similarity between the predicted answer of candidate question q_i and answer a_j in the answer set A.
- The question generation apparatus according to any one of claims 1-7, wherein the probability P(a_j|context, q_i) is expressed as P(a_j|context, q_i) ∝ exp(cos(G(context, q_i), a_j)) × cos(q_i, q_j), where cos(q_i, q_j) denotes the similarity between candidate questions q_i and q_j, G(context, q_i) denotes the predicted answer of candidate question q_i, and cos(G(context, q_i), a_j) denotes the similarity between the predicted answer of candidate question q_i and answer a_j in the answer set A.
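The probability expression above can be sketched directly with cosine similarities over embedding vectors. The sketch below normalizes the scores over j to obtain a distribution; the normalization step is an assumption implied by the "∝" in the claim, and all argument names are illustrative:

```python
import math

def cos(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def answer_probabilities(pred_answer, q_i, questions, answers):
    """P(a_j | context, q_i) ∝ exp(cos(G(context, q_i), a_j)) * cos(q_i, q_j).

    pred_answer stands in for the embedding G(context, q_i); questions and
    answers hold the embeddings of candidate questions q_j and answers a_j.
    """
    scores = [math.exp(cos(pred_answer, a_j)) * cos(q_i, q_j)
              for q_j, a_j in zip(questions, answers)]
    total = sum(scores)
    return [s / total for s in scores]
```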
- The question generation apparatus according to any one of claims 1-8, wherein: a first recurrent neural network is used to obtain the predicted answer of candidate question q_i; and a second recurrent neural network is used to compute the information gain f(context, q_i, a_j) brought by each candidate question q_i.
- The question generation apparatus according to any one of claims 1-9, wherein: the first recurrent neural network is a gated recurrent unit (GRU) network; and the second recurrent neural network is a long short-term memory (LSTM) network.
- The question generation apparatus according to any one of claims 1-10, wherein the at least one processor is further configured to: train, based on a corpus and a loss function and using a stochastic gradient descent method, to form the first recurrent neural network and the second recurrent neural network, wherein the corpus comprises a training dialogue context Tcontext, a training candidate question set TQ, and a training answer set TA; the training candidate question set TQ comprises a second selected number of training candidate questions q_l, and the training answer set TA comprises an answer a_m corresponding to each candidate question in the training candidate question set TQ, where l is a positive integer less than or equal to the second selected number and m is a positive integer less than or equal to the second selected number; and the loss function is negatively correlated with the similarity between each candidate question q_l and each other candidate question, negatively correlated with the similarity between the predicted answer of that candidate question q_l and answer a_m in the training answer set TA, negatively correlated with the degree of relevance between that candidate question q_l and the corresponding training dialogue context Tcontext, and negatively correlated with the accuracy of answer a_m.
- The question generation apparatus according to any one of claims 1-11, wherein the loss function is expressed as Loss = ∑_{l,m} (L(Tcontext, q_l, a_m) + L(y_l, Tcontext, q_l, a_m)), where L(y_l, Tcontext, q_l, a_m) = -y_l log(σ(LSTM(Tcontext, q_l, a_m))), cos(q_l, q_n) denotes the similarity between candidate questions q_l and q_n, G(Tcontext, q_l) denotes the predicted answer of candidate question q_l, cos(G(Tcontext, q_l), a_m) denotes the similarity between the predicted answer of candidate question q_l and answer a_m, cos(G(Tcontext, q_l), a_n) denotes the similarity between the predicted answer of candidate question q_l and answer a_n in the training answer set, and n is a positive integer less than or equal to the second selected number; y_l = 1 when l equals m and y_l = 0 when l does not equal m; σ is the sigmoid function; and LSTM(Tcontext, q_l, a_m) denotes the information gain brought by candidate question q_l for the training dialogue context Tcontext.
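Of the two loss terms, only L(y_l, Tcontext, q_l, a_m) is fully specified in the claim as reproduced here, so only that supervision term is sketched below; `lstm_score` is an illustrative stand-in for the scalar output LSTM(Tcontext, q_l, a_m):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def supervision_term(y, lstm_score):
    """L(y_l, Tcontext, q_l, a_m) = -y_l * log(sigmoid(LSTM(Tcontext, q_l, a_m))).

    y is 1 when the answer is the one paired with the question (l == m) and 0
    otherwise, so only matched question-answer pairs contribute to this term.
    """
    return -y * math.log(sigmoid(lstm_score))
```

During training the patent sums this term (together with the first loss term) over all (l, m) pairs and minimizes with stochastic gradient descent.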
- An inquiring diagnosis system, comprising the question generation apparatus according to any one of claims 3-12, wherein the dialogue is a dialogue interaction in a medical inquiry, and the dialogue target is a patient; the question generation apparatus is configured to generate, according to the inquiry context, an inquiry question to be posed to the patient.
- The inquiring diagnosis system according to claim 13, further comprising: an input device configured to acquire the inquiry context; and an output device configured to output the inquiry question.
- The inquiring diagnosis system according to any one of claims 13-14, wherein the inquiry context comprises a description given by the patient.
- The inquiring diagnosis system according to any one of claims 13-15, wherein the inquiry context comprises an inquiry question already posed to the patient.
- A question generation method, comprising: acquiring a candidate question set Q; computing an information value of each candidate question in the candidate question set Q; and generating at least one question according to the information value of each candidate question.
- The question generation method according to claim 17, wherein generating at least one question comprises: selecting, from the candidate question set Q, the candidate question with the largest information value as the generated question.
- The question generation method according to claim 17 or 18, wherein: the candidate question set Q is acquired according to a dialogue context; the candidate question set Q comprises a first selected number of candidate questions q_i whose degree of relevance to the dialogue context satisfies a threshold, where i is a positive integer less than or equal to the first selected number; and the generated at least one question comprises a dialogue question to be posed to a dialogue target.
- The question generation method according to any one of claims 17-19, wherein the information value of each candidate question in the candidate question set Q is computed according to the expected value of the information gain brought by that candidate question.
- The question generation method according to any one of claims 17-20, further comprising: acquiring an answer a_j corresponding to each candidate question in the candidate question set Q to obtain an answer set A, where j is a positive integer less than or equal to the first selected number, and j = i indicates that answer a_j is the answer corresponding to candidate question q_i.
- The question generation method according to any one of claims 17-22, wherein: the information gain f(context, q_i, a_j) brought by each candidate question is positively correlated with the degree of relevance between candidate question q_i and the corresponding dialogue context, and positively correlated with the accuracy of answer a_j in the answer set A; and the probability P(a_j|context, q_i) is positively correlated with the similarity between candidate question q_i and each other candidate question, and positively correlated with the similarity between the predicted answer of candidate question q_i and answer a_j in the answer set A.
- The question generation method according to any one of claims 17-23, wherein: a first recurrent neural network is used to obtain the predicted answer of candidate question q_i; and a second recurrent neural network is used to compute the information gain f(context, q_i, a_j) brought by each candidate question q_i.
- The question generation method according to any one of claims 17-24, further comprising: training, based on a corpus and a loss function and using a stochastic gradient descent method, to form the first recurrent neural network and the second recurrent neural network, wherein the corpus comprises a training dialogue context Tcontext, a training candidate question set TQ, and a training answer set TA; the training candidate question set TQ comprises a second selected number of training candidate questions q_l, and the training answer set TA comprises an answer a_m corresponding to each candidate question in the training candidate question set TQ, where l is a positive integer less than or equal to the second selected number and m is a positive integer less than or equal to the second selected number; and the loss function is negatively correlated with the similarity between each candidate question q_l and each other candidate question, negatively correlated with the similarity between the predicted answer of that candidate question q_l and answer a_m in the training answer set TA, negatively correlated with the degree of relevance between that candidate question q_l and the corresponding training dialogue context Tcontext, and negatively correlated with the accuracy of answer a_m.
- A computer-readable storage medium storing a computer program which, when executed by a processor, implements the question generation method according to any one of claims 17 to 25.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/640,267 US11600389B2 (en) | 2019-03-19 | 2019-03-19 | Question generating method and apparatus, inquiring diagnosis system, and computer readable storage medium |
PCT/CN2019/078734 WO2020186458A1 (zh) | 2019-03-19 | 2019-03-19 | Question generating method and apparatus, inquiring diagnosis system, and computer readable storage medium |
CN201980000329.9A CN110088748B (zh) | 2019-03-19 | 2019-03-19 | Question generating method and apparatus, inquiring diagnosis system, and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/078734 WO2020186458A1 (zh) | 2019-03-19 | 2019-03-19 | Question generating method and apparatus, inquiring diagnosis system, and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020186458A1 (zh) | 2020-09-24 |
Family
ID=67424507
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/078734 WO2020186458A1 (zh) | 2019-03-19 | 2019-03-19 | 问题生成方法和装置、问诊系统、计算机可读存储介质 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11600389B2 (zh) |
CN (1) | CN110088748B (zh) |
WO (1) | WO2020186458A1 (zh) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210257096A1 (en) * | 2020-02-19 | 2021-08-19 | University Of Virginia Patent Foundation | Influencing end-stage renal disease outcomes through predicting physiological parameters and determining dosing recommendations |
CN114944227A (zh) * | 2022-06-17 | 2022-08-26 | Ping An Technology (Shenzhen) Co., Ltd. | Online skin disease inquiry method, apparatus, system, device, and medium |
KR102660276B1 (ko) * | 2023-10-11 | 2024-04-25 | Minds And Company Inc. | Chatbot service apparatus for performing tasks using a large language model, and driving method thereof |
CN117894473A (zh) * | 2024-01-16 | 2024-04-16 | Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine | Discharge readiness assessment method and system using an adaptive ensemble learning model |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150324492A1 (en) * | 2014-05-09 | 2015-11-12 | Autodesk, Inc | Techniques for using controlled natural language to capture design intent for computer-aided design |
CN105893523A (zh) * | 2016-03-31 | 2016-08-24 | 华东师范大学 | 利用答案相关性排序的评估度量来计算问题相似度的方法 |
CN108595619A (zh) * | 2018-04-23 | 2018-09-28 | 海信集团有限公司 | 一种问答方法及设备 |
CN109036588A (zh) * | 2018-09-10 | 2018-12-18 | 百度在线网络技术(北京)有限公司 | 线上问诊的方法、装置、设备及计算机可读介质 |
CN109271505A (zh) * | 2018-11-12 | 2019-01-25 | 深圳智能思创科技有限公司 | 一种基于问题答案对的问答系统实现方法 |
Family Cites Families (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2004104919A2 (de) * | 2003-05-22 | 2004-12-02 | Siemens Aktiengesellschaft | Expertensystem zur bewertung eines informationsgewinns über einen zustand eines systems |
US20050197988A1 (en) * | 2004-02-17 | 2005-09-08 | Bublitz Scott T. | Adaptive survey and assessment administration using Bayesian belief networks |
US20070207449A1 (en) * | 2005-05-19 | 2007-09-06 | Feierstein Roslyn E | Method of analyzing question responses to select among defined possibilities and means of accomplishing same |
CN101286161B (zh) * | 2008-05-28 | 2010-10-06 | 华中科技大学 | 一种基于概念的智能中文问答系统 |
US20140052722A1 (en) * | 2012-08-16 | 2014-02-20 | Dimitris J. Bertsimas | Optimization-based regimen method and system for personalized diabetes and diet management |
US9257052B2 (en) * | 2012-08-23 | 2016-02-09 | International Business Machines Corporation | Evaluating candidate answers to questions in a target knowledge domain |
CN104572734B (zh) * | 2013-10-23 | 2019-04-30 | 腾讯科技(深圳)有限公司 | 问题推荐方法、装置及系统 |
US20150332021A1 (en) * | 2014-05-15 | 2015-11-19 | ThoroughCare, Inc. | Guided Patient Interview and Health Management Systems |
US20160371663A1 (en) * | 2015-06-17 | 2016-12-22 | Bruce Knuteson | Information Exchange System and Methods |
US20170116870A1 (en) * | 2015-10-21 | 2017-04-27 | Duolingo, Inc. | Automatic test personalization |
US20170364804A1 (en) * | 2016-06-15 | 2017-12-21 | International Business Machines Corporation | Answer Scoring Based on a Combination of Specificity and Informativity Metrics |
EP3270331A1 (en) * | 2016-07-15 | 2018-01-17 | Little Brain NV | A medical consultation support tool |
US10540967B2 (en) * | 2016-11-14 | 2020-01-21 | Xerox Corporation | Machine reading method for dialog state tracking |
CN108228637B (zh) * | 2016-12-21 | 2020-09-04 | 中国电信股份有限公司 | 自然语言客户自动应答方法和系统 |
WO2019070763A1 (en) * | 2017-10-02 | 2019-04-11 | New Sun Technologies, Inc. | LEARNING TRAINING SYSTEM MEDIATION MACHINE BY A CAREGIVER |
CN108509463B (zh) | 2017-02-28 | 2022-03-29 | 华为技术有限公司 | 一种问题的应答方法及装置 |
US11729120B2 (en) | 2017-03-16 | 2023-08-15 | Microsoft Technology Licensing, Llc | Generating responses in automated chatting |
US10713289B1 (en) * | 2017-03-31 | 2020-07-14 | Amazon Technologies, Inc. | Question answering system |
US10854335B2 (en) * | 2017-06-16 | 2020-12-01 | Htc Corporation | Computer aided medical method and medical system for medical prediction |
CN107832047B (zh) * | 2017-11-27 | 2018-11-27 | 北京理工大学 | 一种基于lstm的非api函数实参推荐方法 |
US10635521B2 (en) * | 2017-12-15 | 2020-04-28 | International Business Machines Corporation | Conversational problem determination based on bipartite graph |
CN108242266A (zh) * | 2018-01-22 | 2018-07-03 | 北京颐圣智能科技有限公司 | 辅助诊断装置和方法 |
CN108491433B (zh) * | 2018-02-09 | 2022-05-03 | 平安科技(深圳)有限公司 | 聊天应答方法、电子装置及存储介质 |
JP7149560B2 (ja) * | 2018-04-13 | 2022-10-07 | 国立研究開発法人情報通信研究機構 | リクエスト言換システム、リクエスト言換モデル及びリクエスト判定モデルの訓練方法、及び対話システム |
CA3048436A1 (en) * | 2018-07-03 | 2020-01-03 | University Of Waterloo | System, method and computer program product for query clarification |
CN109002540B (zh) * | 2018-07-23 | 2021-03-16 | 电子科技大学 | 一种中文通告文档问题答案对自动生成方法 |
GB201815743D0 (en) * | 2018-09-27 | 2018-11-14 | Microsoft Technology Licensing Llc | Gathering data in a communication system |
US20200219617A1 (en) * | 2019-01-03 | 2020-07-09 | International Business Machines Corporation | Apparatus and method for initial information gathering from patients at the point of care |
US10387575B1 (en) * | 2019-01-30 | 2019-08-20 | Babylon Partners Limited | Semantic graph traversal for recognition of inferred clauses within natural language inputs |
US11145414B2 (en) * | 2019-02-28 | 2021-10-12 | Babylon Partners Limited | Dialogue flow using semantic simplexes |
2019
- 2019-03-19 CN CN201980000329.9A patent/CN110088748B/zh active Active
- 2019-03-19 US US16/640,267 patent/US11600389B2/en active Active
- 2019-03-19 WO PCT/CN2019/078734 patent/WO2020186458A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN110088748A (zh) | 2019-08-02 |
US20210034305A1 (en) | 2021-02-04 |
US11600389B2 (en) | 2023-03-07 |
CN110088748B (zh) | 2023-11-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Lu et al. | Machine learning for synthetic data generation: a review | |
WO2020186458A1 (zh) | Question generating method and apparatus, inquiring diagnosis system, and computer readable storage medium | |
Volkova et al. | Forecasting influenza-like illness dynamics for military populations using neural networks and social media | |
Wang et al. | Methods for correcting inference based on outcomes predicted by machine learning | |
Paul et al. | Discovering health topics in social media using topic models | |
- JP7116435B2 (ja) | Establishment of an entity model |
- WO2021159738A1 (zh) | Medical-field-based data recommendation method, device, server, and storage medium | |
US11276494B2 (en) | Predicting interactions between drugs and diseases | |
Li et al. | Extracting medical knowledge from crowdsourced question answering website | |
- WO2023178971A1 (zh) | Internet registration method, apparatus, device, and storage medium for seeking medical treatment | |
- CN108417272A (zh) | Similar case recommendation method and apparatus with temporal constraints | |
US10936962B1 (en) | Methods and systems for confirming an advisory interaction with an artificial intelligence platform | |
- WO2022160614A1 (zh) | Method and apparatus for constructing a medical entity relationship graph, method and apparatus for medical order quality control, device, and medium | |
Rader et al. | Bias-corrected estimates for logistic regression models for complex surveys with application to the United States’ Nationwide Inpatient Sample | |
- CN112183881A (zh) | Social-network-based public opinion event prediction method, device, and storage medium | |
Friedrich et al. | On the role of benchmarking data sets and simulations in method comparison studies | |
Keikha et al. | DeepLink: A novel link prediction framework based on deep learning | |
Hong et al. | Event2vec: Learning representations of events on temporal sequences | |
US20200143104A1 (en) | Methods for arbitrating online disputes and anticipating outcomes using machine intelligence | |
Nolan et al. | Developing syndromic surveillance to monitor and respond to adverse health events related to psychoactive substance use: methods and applications | |
Yörük et al. | Random sampling in corpus design: Cross-context generalizability in automated multicountry protest event collection | |
- CN114758742A (zh) | Medical record information processing method and apparatus, electronic device, and storage medium | |
- WO2021174923A1 (zh) | Concept word sequence generation method and apparatus, computer device, and storage medium | |
US12087442B2 (en) | Methods and systems for confirming an advisory interaction with an artificial intelligence platform | |
- CN114360732B (zh) | Medical data analysis method and apparatus, electronic device, and storage medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19919839 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19919839 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 04/02/2022) |
|