WO2020186458A1 - Question generation method and device, medical inquiry system, and computer-readable storage medium - Google Patents

Question generation method and device, medical inquiry system, and computer-readable storage medium Download PDF

Info

Publication number
WO2020186458A1
WO2020186458A1 (PCT/CN2019/078734)
Authority
WO
WIPO (PCT)
Prior art keywords
question
candidate
answer
context
candidate question
Prior art date
Application number
PCT/CN2019/078734
Other languages
English (en)
French (fr)
Inventor
张振中
陈雪
Original Assignee
京东方科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 filed Critical 京东方科技集团股份有限公司
Priority to US16/640,267 priority Critical patent/US11600389B2/en
Priority to PCT/CN2019/078734 priority patent/WO2020186458A1/zh
Priority to CN201980000329.9A priority patent/CN110088748B/zh
Publication of WO2020186458A1 publication Critical patent/WO2020186458A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/332 Query formulation
    • G06F16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G06F40/35 Discourse or dialogue representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/40 Processing or translation of natural language
    • G06F40/55 Rule-based translation
    • G06F40/56 Natural language generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00 Computing arrangements based on specific mathematical models
    • G06N7/01 Probabilistic graphical models, e.g. probabilistic networks
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00 ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/20 ICT specially adapted for the handling or processing of patient-related medical or healthcare data for electronic clinical trials or questionnaires

Definitions

  • the present disclosure relates to the field of computer technology, and in particular to a question generation method and device, a consultation system, and a computer-readable storage medium.
  • the intelligent consultation system collects patient's symptoms and physical signs through human-computer interaction with patients.
  • a question generation device is provided, including at least one processor, the at least one processor being configured to: obtain a candidate question set Q; calculate the information value of each candidate question in the candidate question set Q; and generate at least one question according to the information value of each candidate question.
  • generating at least one question includes: selecting a candidate question with the greatest information value from the candidate question set Q as the generated question.
  • the at least one processor is configured to: obtain the candidate question set Q according to the dialogue context; the candidate question set Q includes a first selected number of candidate questions q_i whose degree of relevance to the dialogue context meets a threshold, where i is a positive integer less than or equal to the first selected number; the generated at least one question includes the dialogue question to be asked to the dialogue target.
  • the information value of each candidate question is positively correlated with the expected value of the amount of information brought by the candidate question.
  • the at least one processor is configured to obtain the information value of each candidate question through Σ_j P(a_j|context, q_i) × f(context, q_i, a_j), where: context represents the dialogue context; f(context, q_i, a_j) represents the amount of information brought by each candidate question q_i for the dialogue context; P(a_j|context, q_i) represents the probability of producing the answer a_j given the dialogue context and the candidate question q_i.
  • the amount of information f(context, q_i, a_j) brought by each candidate question q_i is positively correlated with the degree of relevance between the candidate question q_i and the corresponding dialogue context, and positively correlated with the accuracy of the answer a_j in the answer set A.
  • the probability P(a_j|context, q_i) is positively correlated with the similarity between the candidate question q_i and each other candidate question, and positively correlated with the similarity between the predicted answer of the candidate question q_i and the answer a_j in the answer set A.
  • the probability P(a_j|context, q_i) is expressed as P(a_j|context, q_i) ∝ exp(cos(G(context, q_i), a_j)) × cos(q_i, q_j), where cos(q_i, q_j) represents the similarity between the candidate questions q_i and q_j, G(context, q_i) represents the predicted answer of the candidate question q_i, and cos(G(context, q_i), a_j) represents the similarity between the predicted answer and the answer a_j in the answer set A.
  • a first recurrent neural network is used to obtain the predicted answer of the candidate question q_i; a second recurrent neural network is used to calculate the amount of information f(context, q_i, a_j) brought by each candidate question q_i.
  • the first recurrent neural network is a gated recurrent unit network; the second recurrent neural network is a long short-term memory network.
  • the at least one processor is further configured to train, based on a corpus and a loss function and using a stochastic gradient descent method, to form the first recurrent neural network and the second recurrent neural network, wherein:
  • the corpus includes a training dialogue context Tcontext, a training candidate question set TQ, and a training answer set TA; the training candidate question set TQ includes a second selected number of training candidate questions q_l, and the training answer set TA includes the answer a_m corresponding to each candidate question in the training candidate question set TQ, where l is a positive integer less than or equal to the second selected number and m is a positive integer less than or equal to the second selected number;
  • the loss function is negatively correlated with the similarity between each candidate question q_l and each other candidate question, negatively correlated with the similarity between the predicted answer of the candidate question q_l and the answer a_m in the training answer set TA, negatively correlated with the degree of relevance between the candidate question q_l and the corresponding training dialogue context Tcontext, and negatively correlated with the accuracy of the answer a_m.
  • the loss function is expressed as Loss = Σ_{l,m} (L(Tcontext, q_l, a_m) + L(y_l, Tcontext, q_l, a_m)), where L(Tcontext, q_l, a_m) = -log(exp(cos(G(Tcontext, q_l), a_m)) × cos(q_l, q_n) / Σ_n exp(cos(G(Tcontext, q_l), a_n))) and L(y_l, Tcontext, q_l, a_m) = -y_l log(σ(LSTM(Tcontext, q_l, a_m))).
  • cos(q_l, q_n) represents the similarity between the candidate questions q_l and q_n;
  • G(Tcontext, q_l) represents the predicted answer of the candidate question q_l;
  • cos(G(Tcontext, q_l), a_m) represents the similarity between the predicted answer of the candidate question q_l and the answer a_m;
  • cos(G(Tcontext, q_l), a_n) represents the similarity between the predicted answer of the candidate question q_l and the answer a_n in the training answer set, n being a positive integer less than or equal to the second selected number; y_l = 1 when l equals m, and y_l = 0 otherwise;
  • σ is the sigmoid function;
  • LSTM(Tcontext, q_l, a_m) represents the amount of information brought by the candidate question q_l for the training dialogue context Tcontext.
  • a medical inquiry system is provided, including the question generating device according to any one of the preceding embodiments, wherein the dialogue is a dialogue interaction in a medical inquiry, and the dialogue target is a patient;
  • the question generating device is configured to generate, according to the medical inquiry context, a medical inquiry question to be asked to the patient.
  • the medical inquiry system further includes: an input device configured to obtain the medical inquiry context; an output device configured to output the medical inquiry question.
  • the inquiry context includes a description given by the patient.
  • the inquiry context includes inquiry questions that have already been asked to the patient.
  • a question generation method including: obtaining a candidate question set Q; calculating the information value of each candidate question in the candidate question set Q; generating according to the information value of each candidate question At least one question.
  • generating at least one question includes: selecting a candidate question with the greatest information value from the candidate question set Q as the generated question.
  • the candidate question set Q is obtained according to the dialogue context; the candidate question set Q includes a first selected number of candidate questions q_i whose relevance to the dialogue context meets a threshold, where i is a positive integer less than or equal to the first selected number; the generated at least one question includes a dialogue question to be asked to the dialogue target.
  • the information value of each candidate question in the candidate question set Q is calculated according to the expected value of the amount of information brought by each candidate question.
  • the information value of each candidate question is expressed as Σ_j P(a_j|context, q_i) × f(context, q_i, a_j), where: the amount of information brought by each candidate question is f(context, q_i, a_j), which represents the amount of information brought by the candidate question q_i for the dialogue context; P(a_j|context, q_i) represents the probability of producing the answer a_j given the dialogue context and the candidate question q_i.
  • the amount of information f(context, q_i, a_j) brought by each candidate question is positively correlated with the degree of relevance between the candidate question q_i and the corresponding dialogue context, and positively correlated with the accuracy of the answer a_j in the answer set A;
  • the probability P(a_j|context, q_i) is positively correlated with the similarity between the candidate question q_i and each other candidate question, and positively correlated with the similarity between the predicted answer of the candidate question q_i and the answer a_j in the answer set A.
  • a first recurrent neural network is used to obtain the predicted answer of the candidate question q_i; a second recurrent neural network is used to calculate the amount of information f(context, q_i, a_j) brought by each candidate question q_i.
  • the question generation method further includes: training, based on a corpus and a loss function and using a stochastic gradient descent method, to form the first recurrent neural network and the second recurrent neural network, wherein the corpus includes a training dialogue context Tcontext, a training candidate question set TQ, and a training answer set TA.
  • the training candidate question set TQ includes a second selected number of training candidate questions q_l;
  • the training answer set TA includes the answer a_m corresponding to each candidate question in the training candidate question set TQ, where l is a positive integer less than or equal to the second selected number and m is a positive integer less than or equal to the second selected number;
  • the loss function is negatively correlated with the similarity between each candidate question q_l and each other candidate question, negatively correlated with the similarity between the predicted answer of the candidate question q_l and the answer a_m in the training answer set TA, negatively correlated with the degree of relevance between the candidate question q_l and the corresponding training dialogue context Tcontext, and negatively correlated with the accuracy of the answer a_m.
  • a computer-readable storage medium is provided, having a computer program stored thereon; when the program is executed by a processor, the question generation method described in any of the foregoing embodiments is implemented.
  • FIG. 1 is a flowchart illustrating a method for generating a question according to some embodiments of the present disclosure
  • FIG. 2 is a flowchart showing a method for generating a question according to other embodiments of the present disclosure
  • FIG. 3 is a block diagram showing a question generation device according to some embodiments of the present disclosure.
  • Figure 4 is a block diagram showing an electronic device according to some embodiments of the present disclosure.
  • FIG. 5 is a block diagram showing a medical inquiry system according to some embodiments of the present disclosure;
  • Figure 6 is a block diagram illustrating a computer system for implementing some embodiments of the present disclosure.
  • Question-answer dialogue is very common in various fields. For example, in industries such as hotel accommodation, guests may ask front-desk staff multiple questions or multiple rounds of questions; in word-answering and poetry-chain competitions on TV variety shows, guests continually pose questions to contestants; and in industries such as telecommunications and cable television, service providers also provide services such as telephone customer service to subscribers.
  • the present disclosure proposes a question generation method, which can obtain as much information as possible with as few questions as possible, so that an accurate diagnosis can be made in a short time.
  • an inquiry dialogue in the medical field is taken as an example to illustrate the implementation of the question generation method of the present disclosure. Those skilled in the art can understand that it is likewise applicable to other question-answer dialogues.
  • FIG. 1 is a flowchart illustrating a method for generating a question according to some embodiments of the present disclosure. As shown in Figure 1, the question generation method includes steps S2-S6.
  • step S2 a set of candidate questions Q is obtained.
  • the candidate question set Q is obtained according to the context of the consultation.
  • The inquiry context can include the patient's description.
  • the candidate question set Q includes a first selected number of candidate questions q i whose relevance to the medical inquiry context meets a threshold, and i is a positive integer less than or equal to the first selected number.
  • the medical inquiry context can be used as a query sentence, and a text retrieval tool such as whoosh or lucene can be used to retrieve from the corpus the first selected number (for example, the top 10) of inquiry contexts closest to the medical inquiry context;
  • the question q_i corresponding to each retrieved medical inquiry context constitutes the candidate question set Q.
  • in step S2, the answer a_j corresponding to each candidate question in the candidate question set Q is also obtained to obtain the answer set A, where j is a positive integer less than or equal to the first selected number, and j = i indicates that the answer a_j corresponds to the candidate question q_i.
  • step S4 the information value of each candidate question in the candidate question set Q is calculated.
  • the information value of each candidate question q_i in the candidate question set Q is calculated according to the expected value of the amount of information brought by the candidate question q_i.
  • the information value of each candidate question q_i can be expressed as Σ_j P(a_j|context, q_i) × f(context, q_i, a_j).
  • the amount of information brought by each candidate question is f(context, q_i, a_j), which represents the amount of information brought by the candidate question q_i for the medical inquiry context;
  • P(a_j|context, q_i) represents the probability that the answer a_j is produced given the inquiry context and the candidate question q_i.
  • the amount of information f(context, q_i, a_j) brought by each candidate question q_i is positively correlated with the degree of relevance between the candidate question q_i and the corresponding medical inquiry context, and positively correlated with the accuracy of the answer a_j in the answer set A.
  • f(context, q_i, a_j) reflects the amount of information when the candidate question q_i and the answer a_j are considered together. When the candidate question q_i is highly relevant to the medical inquiry context and the answer a_j accurately answers the candidate question q_i, f(context, q_i, a_j) is larger.
  • for example, for the inquiry context "my calf hurts", the candidate questions "Do your lower limbs spasm?" and "Does your calf cramp?" are both highly relevant to the inquiry context.
  • however, since the candidate question "Do your lower limbs spasm?" uses standard medical terminology, a patient lacking medical knowledge may not understand the question and may give an answer such as "What does spasm mean?"; that is, the accuracy of the answer may be low.
  • in contrast, since the candidate question "Does your calf cramp?" uses relatively easy-to-understand everyday language, an accurate answer such as "My calf cramped once" can be obtained; that is, the accuracy of the answer is high. Therefore, for this inquiry context, "Does your calf cramp?" brings more information than "Do your lower limbs spasm?".
  • the probability P(a_j|context, q_i) is positively correlated with the similarity between the candidate question q_i and each other candidate question, and positively correlated with the similarity between the predicted answer of the candidate question q_i and the answer a_j in the answer set A.
  • the probability P(a_j|context, q_i) can be expressed as P(a_j|context, q_i) ∝ exp(cos(G(context, q_i), a_j)) × cos(q_i, q_j), where:
  • cos(q_i, q_j) represents the similarity between the candidate questions q_i and q_j;
  • G(context, q_i) represents the predicted answer of the candidate question q_i;
  • cos(G(context, q_i), a_j) represents the similarity between the predicted answer of the candidate question q_i and the answer a_j in the answer set A;
  • cos represents the cosine similarity;
  • exp represents the exponential function with the natural constant e as its base.
  • the inquiry context, the candidate questions q_i, and the answers a_j can be represented by word vectors.
  • the word-vector representations of the inquiry context, the candidate question q_i, and the answer a_j can be implemented using techniques such as word2vector.
  • the first recurrent neural network is used to obtain the predicted answer to the candidate question q i .
  • the second recurrent neural network is used to calculate the amount of information f(context, q i , a j ) brought by each candidate question q i .
  • Both the first recurrent neural network and the second recurrent neural network may be a gated recurrent unit (GRU) network or a long short-term memory (LSTM) network.
  • the first recurrent neural network is a GRU network and the second recurrent neural network is an LSTM network.
  • step S6 at least one question is generated based on the information value of each candidate question.
  • the generated at least one question may include a medical question to be asked to the patient.
  • the candidate question with the greatest information value is selected from the candidate question set Q as the generated question.
  • for example, each time an inquiry question is to be asked to the patient, the candidate question with the greatest information value is selected. For the next inquiry question, the candidate question with the currently greatest information value (that is, the inquiry question that has already been asked to the patient) can be added to the inquiry context, and the process of the aforementioned question generation method can be repeated. In this way, a series of questions can be generated with a small amount of computation.
  • candidate questions can also be selected by comprehensively considering the sum of the information of multiple questions generated. For example, assuming that 3 medical questions are to be generated, the 3 medical questions with the largest sum of information can be selected as the generated questions according to the information value of each candidate question. In this way, the overall efficiency of information collection can be improved.
  • FIG. 2 is a flowchart showing a method for generating a question according to other embodiments of the present disclosure.
  • the difference between FIG. 2 and FIG. 1 is that the question generation method in FIG. 2 further includes step S1. Only the differences between FIG. 2 and FIG. 1 will be described below, and the similarities will not be repeated.
  • step S1 based on the corpus and loss function, a stochastic gradient descent method is used to train to form a first recurrent neural network and a second recurrent neural network.
  • the training samples (the training inquiry context Tcontext, the training candidate question set TQ, and the training answer set TA) can be obtained from the corpus in a manner similar to obtaining the candidate question set Q in step S2, so as to construct a training data set.
  • the training candidate question set TQ includes a second selected number of training candidate questions q l , and l is a positive integer less than or equal to the second selected number.
  • the training answer set TA includes answers a m corresponding to each candidate question in the training candidate question set TQ, where m is a positive integer less than or equal to the second selected number.
  • the second selected number can be set according to training needs.
  • a corpus can be constructed based on the relevant information crawled by the crawler from the Internet.
  • the relevant information includes the initial description of the patient, the questions of the medical staff, and the answers of the patients to the questions.
  • this information can be stored in the corpus as (C, q, a), where: the patient's initial description corresponds to the inquiry context and is denoted by C; for that inquiry context, a question of the medical staff is denoted by q; and the patient's answer to the question q is denoted by a.
  • for example, on a medical website a patient asks, "My throat hurts a lot. When I had a cold I thought it was caused by the cold, but the cold is gone and it still hasn't improved", which can be denoted by C; the doctor asks, "How old are you? Have you had a similar medical history before? What medicine have you taken?", which can be denoted by q_1; the patient replies, "I'm 23 this year, no similar medical history; I just took cold medicine, I can't remember exactly which, maybe roxithromycin and a heat-clearing detoxification granule", which can be denoted by a_1; the doctor follows up, "How long has your throat been sore this time?", denoted by q_2; and the patient answers, "About eight or nine days", denoted by a_2. That is, the above information can be stored as (C, q_1, a_1, q_2, a_2).
  • the loss function is constructed to be negatively correlated with the similarity between each candidate question q_l and each other candidate question, negatively correlated with the similarity between the predicted answer of the candidate question q_l and the answer a_m in the training answer set TA, negatively correlated with the degree of relevance between the candidate question q_l and the corresponding training inquiry context Tcontext, and negatively correlated with the accuracy of the answer a_m.
  • the loss function is expressed as Loss = Σ_{l,m} (L(Tcontext, q_l, a_m) + L(y_l, Tcontext, q_l, a_m)), where L(Tcontext, q_l, a_m) = -log(exp(cos(G(Tcontext, q_l), a_m)) × cos(q_l, q_n) / Σ_n exp(cos(G(Tcontext, q_l), a_n))) and L(y_l, Tcontext, q_l, a_m) = -y_l log(σ(LSTM(Tcontext, q_l, a_m))).
  • cos(q_l, q_n) represents the similarity between the candidate questions q_l and q_n;
  • G(Tcontext, q_l) represents the predicted answer of the candidate question q_l;
  • cos(G(Tcontext, q_l), a_m) represents the similarity between the predicted answer of the candidate question q_l and the answer a_m;
  • cos(G(Tcontext, q_l), a_n) represents the similarity between the predicted answer of the candidate question q_l and the answer a_n in the training answer set, n being a positive integer less than or equal to the second selected number; y_l = 1 when l equals m, and y_l = 0 otherwise;
  • σ is the sigmoid function;
  • LSTM(Tcontext, q_l, a_m) represents the amount of information brought by the candidate question q_l for the training inquiry context Tcontext.
  • the stochastic gradient descent method randomly selects one piece of training data for each computation in the direction of fastest descent, instead of scanning the entire training data set, which can speed up iteration.
  • as mentioned above, using the GRU network when calculating the probability P(a_j|context, q_i) can improve training efficiency, while using the LSTM network when calculating the amount of information f(context, q_i, a_j) can achieve better expressive performance when the amount of data is large.
  • FIG. 3 is a block diagram showing a question generating apparatus according to some embodiments of the present disclosure.
  • the question generating device 30 includes at least one processor 320.
  • the processor 320 is configured to: obtain a candidate question set Q; calculate the information value of each candidate question in the candidate question set Q; and generate at least one question according to the information value of each candidate question.
  • the processor 320 may execute the question generation method shown in FIG. 1 or FIG. 2, for example.
  • the above-mentioned question generation device 30 may also include a memory to store at least a set of candidate questions.
  • Figure 4 is a block diagram showing an electronic device according to some embodiments of the present disclosure.
  • the electronic device 40 includes a memory 410 and a processor 420 coupled to the memory 410.
  • the memory 410 is configured to store instructions for executing the corresponding embodiments of the question generation method.
  • the processor 420 is configured to execute, based on the instructions stored in the memory 410, one or more steps of the question generation method in any of the embodiments of the present disclosure.
  • each step in the aforementioned question generation method can be implemented by a processor, and can be implemented in any manner of software, hardware, firmware, or a combination thereof.
  • embodiments of the present disclosure may also adopt the form of a computer program product implemented on one or more non-volatile storage media containing computer program instructions. Therefore, embodiments of the present disclosure also provide a computer-readable storage medium on which computer instructions are stored; when the instructions are executed by a processor, the question generation method in any of the foregoing embodiments is implemented.
  • the embodiments of the present disclosure also provide a medical inquiry system, including the question generating device or electronic device described in any of the foregoing embodiments.
  • FIG. 5 is a block diagram showing a medical inquiry system according to some embodiments of the present disclosure.
  • the consultation system 5 includes a question generating device 50.
  • the question generating device 50 is configured to execute the question generating method described in any of the foregoing embodiments.
  • the structure of the question generating device 50 may be similar to the aforementioned question generating device 30, and is used to generate the medical question to be asked to the patient according to the medical question context.
  • the consultation system 5 further includes: an input device 51 and an output device 52.
  • the input device 51 is used to obtain the context of the consultation.
  • the context of consultation can be obtained based on multimedia data such as text, sound, or image.
  • the output device 52 is used to output medical questions. For example, at least one of a display and a player can be used to output the medical question.
  • the input device 51 and the output device 52 may be connected to the question generating apparatus 50 through a network, such as a wireless network, a wired network, and/or any combination of a wireless network and a wired network.
  • the network may include a local area network, the Internet, a telecommunication network, the Internet of Things (Internet of Things) based on the Internet and/or a telecommunication network, and/or any combination of the above networks, etc.
  • the wired network may, for example, use twisted pair, coaxial cable, or optical fiber transmission for communication, and the wireless network may use, for example, a 3G/4G/5G mobile communication network, Bluetooth, Zigbee, or Wi-Fi.
  • the inquiry system 5 can be implemented as a local service, that is, the input device, question generating device, and output device are all located on the user side; the inquiry system 5 can also be implemented as a cloud service, that is, the input device and output device are located on the user side while the question generating device is located in the cloud.
  • the cloud question generating device can provide services to multiple users simultaneously or in a time-shared manner.
  • Figure 6 is a block diagram illustrating a computer system for implementing some embodiments of the present disclosure.
  • the computer system can be embodied in the form of a general-purpose computing device, and the computer system can be used to implement the question generating apparatus of the above-mentioned embodiments.
  • the computer system includes a memory 610, a processor 620, and a bus 600 connecting different system components.
  • the memory 610 may include, for example, a system memory, a non-volatile storage medium, and the like.
  • the system memory for example, stores an operating system, an application program, a boot loader (Boot Loader), and other programs.
  • the system memory may include volatile storage media, such as random access memory (RAM) and/or cache memory.
  • the non-volatile storage medium stores, for example, instructions for executing the corresponding embodiments of the question generation method.
  • Non-volatile storage media include, but are not limited to, magnetic disk storage, optical storage, flash memory, etc.
  • the processor 620 can be implemented by a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic devices, discrete gates or transistors and other discrete hardware components.
  • each module, such as the judgment module and the determination module, can be implemented by a central processing unit (CPU) running instructions in a memory that execute the corresponding steps, or by a dedicated circuit that executes the corresponding steps.
  • the bus 600 can use any bus structure among a variety of bus structures.
  • the bus structure includes, but is not limited to, an industry standard architecture (ISA) bus, a microchannel architecture (MCA) bus, and a peripheral component interconnect (PCI) bus.
  • the computer system may also include an input/output interface 630, a network interface 640, a storage interface 650, and so on. These interfaces 630, 640, 650, and the memory 610 and the processor 620 may be connected through a bus 600.
  • the input and output interface 630 can provide a connection interface for input and output devices such as a display, a mouse, and a keyboard.
  • the network interface 640 provides a connection interface for various networked devices.
  • the storage interface 650 provides a connection interface for external storage devices such as floppy disks, USB flash drives, and SD cards.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Probability & Statistics with Applications (AREA)
  • Algebra (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Microelectronics & Electronic Packaging (AREA)

Abstract

The present disclosure relates to a question generation method and device, a medical inquiry system, and a computer-readable storage medium. The question generation device includes at least one processor configured to: obtain a candidate question set Q; calculate the information value of each candidate question in the candidate question set Q; and generate at least one question according to the information value of each candidate question.

Description

Question generation method and device, medical inquiry system, computer-readable storage medium

Technical Field

The present disclosure relates to the field of computer technology, and in particular to a question generation method and device, a medical inquiry system, and a computer-readable storage medium.

Background

With the rapid development of human-computer interaction technology, intelligent medical inquiry has become one of the important research fields of artificial intelligence. An intelligent inquiry system collects information about a patient's symptoms and physical signs through human-computer interaction with the patient.

In the related art, many rounds of interaction with the patient are required to collect comprehensive symptom and sign information so that an accurate diagnosis can be made.
Summary

According to some embodiments of the present disclosure, a question generation device is provided, including at least one processor, the at least one processor being configured to: obtain a candidate question set Q; calculate the information value of each candidate question in the candidate question set Q; and generate at least one question according to the information value of each candidate question.

In some embodiments, generating at least one question includes: selecting the candidate question with the greatest information value from the candidate question set Q as the generated question.

In some embodiments, the at least one processor is configured to: obtain the candidate question set Q according to a dialogue context; the candidate question set Q includes a first selected number of candidate questions q_i whose degree of relevance to the dialogue context meets a threshold, i being a positive integer less than or equal to the first selected number; and the generated at least one question includes a dialogue question to be asked to a dialogue target.

In some embodiments, the information value of each candidate question is positively correlated with the expected value of the amount of information brought by the candidate question.

In some embodiments, the at least one processor is further configured to: obtain the answer a_j corresponding to each candidate question in the candidate question set Q to obtain an answer set A, where j is a positive integer less than or equal to the first selected number, and j = i indicates that the answer a_j is the answer corresponding to the candidate question q_i.

In some embodiments, the at least one processor is configured to obtain the information value of each candidate question through

Σ_j P(a_j|context, q_i) × f(context, q_i, a_j),

where: context represents the dialogue context; f(context, q_i, a_j) represents the amount of information brought by each candidate question q_i for the dialogue context context; and P(a_j|context, q_i) represents the probability of producing the answer a_j given the dialogue context context and the candidate question q_i.
In some embodiments, the amount of information f(context, q_i, a_j) brought by each candidate question q_i is positively correlated with the degree of relevance between the candidate question q_i and the corresponding dialogue context context, and positively correlated with the accuracy of the answer a_j in the answer set A; the probability P(a_j|context, q_i) is positively correlated with the similarity between the candidate question q_i and each other candidate question, and positively correlated with the similarity between the predicted answer of the candidate question q_i and the answer a_j in the answer set A.

In some embodiments, the probability P(a_j|context, q_i) is expressed as P(a_j|context, q_i) ∝ exp(cos(G(context, q_i), a_j)) × cos(q_i, q_j), where cos(q_i, q_j) represents the similarity between the candidate questions q_i and q_j, G(context, q_i) represents the predicted answer of the candidate question q_i, and cos(G(context, q_i), a_j) represents the similarity between the predicted answer of the candidate question q_i and the answer a_j in the answer set A.

In some embodiments, a first recurrent neural network is used to obtain the predicted answer of the candidate question q_i, and a second recurrent neural network is used to calculate the amount of information f(context, q_i, a_j) brought by each candidate question q_i.

In some embodiments, the first recurrent neural network is a gated recurrent unit network, and the second recurrent neural network is a long short-term memory network.

In some embodiments, the at least one processor is further configured to: train, based on a corpus and a loss function and using a stochastic gradient descent method, to form the first recurrent neural network and the second recurrent neural network, where the corpus includes a training dialogue context Tcontext, a training candidate question set TQ, and a training answer set TA; the training candidate question set TQ includes a second selected number of training candidate questions q_l; the training answer set TA includes the answer a_m corresponding to each candidate question in the training candidate question set TQ; l is a positive integer less than or equal to the second selected number, and m is a positive integer less than or equal to the second selected number; the loss function is negatively correlated with the similarity between each candidate question q_l and each other candidate question, negatively correlated with the similarity between the predicted answer of the candidate question q_l and the answer a_m in the training answer set TA, negatively correlated with the degree of relevance between the candidate question q_l and the corresponding training dialogue context Tcontext, and negatively correlated with the accuracy of the answer a_m.
In some embodiments, the loss function is expressed as Loss = Σ_{l,m} (L(Tcontext, q_l, a_m) + L(y_l, Tcontext, q_l, a_m)), where

L(Tcontext, q_l, a_m) = -log(exp(cos(G(Tcontext, q_l), a_m)) × cos(q_l, q_n) / Σ_n exp(cos(G(Tcontext, q_l), a_n))),

L(y_l, Tcontext, q_l, a_m) = -y_l log(σ(LSTM(Tcontext, q_l, a_m))),

cos(q_l, q_n) represents the similarity between the candidate questions q_l and q_n, G(Tcontext, q_l) represents the predicted answer of the candidate question q_l, cos(G(Tcontext, q_l), a_m) represents the similarity between the predicted answer of the candidate question q_l and the answer a_m, cos(G(Tcontext, q_l), a_n) represents the similarity between the predicted answer of the candidate question q_l and the answer a_n in the training answer set, and n is a positive integer less than or equal to the second selected number; y_l = 1 when l equals m, and y_l = 0 when l does not equal m; σ is the sigmoid function; and LSTM(Tcontext, q_l, a_m) represents the amount of information brought by the candidate question q_l for the training dialogue context Tcontext.
According to other embodiments of the present disclosure, a medical inquiry system is provided, including the question generation device according to any one of the foregoing embodiments, where the dialogue is a dialogue interaction in a medical inquiry, and the dialogue target is a patient; the question generation device is configured to generate, according to the inquiry context, an inquiry question to be asked to the patient.

In some embodiments, the inquiry system further includes: an input device configured to obtain the inquiry context; and an output device configured to output the inquiry question.

In some embodiments, the inquiry context includes a description given by the patient.

In some embodiments, the inquiry context includes inquiry questions that have already been asked to the patient.
According to still other embodiments of the present disclosure, a question generation method is provided, including: obtaining a candidate question set Q; calculating the information value of each candidate question in the candidate question set Q; and generating at least one question according to the information value of each candidate question.

In some embodiments, generating at least one question includes: selecting the candidate question with the greatest information value from the candidate question set Q as the generated question.

In some embodiments, the candidate question set Q is obtained according to a dialogue context; the candidate question set Q includes a first selected number of candidate questions q_i whose degree of relevance to the dialogue context meets a threshold, i being a positive integer less than or equal to the first selected number; and the generated at least one question includes a dialogue question to be asked to a dialogue target.

In some embodiments, the information value of each candidate question in the candidate question set Q is calculated according to the expected value of the amount of information brought by each candidate question.

In some embodiments, the question generation method further includes: obtaining the answer a_j corresponding to each candidate question in the candidate question set Q to obtain an answer set A, where j is a positive integer less than or equal to the first selected number, and j = i indicates that the answer a_j is the answer corresponding to the candidate question q_i.

In some embodiments, the information value of each candidate question is expressed as

Σ_j P(a_j|context, q_i) × f(context, q_i, a_j),

where: the amount of information brought by each candidate question is f(context, q_i, a_j), which represents the amount of information brought by the candidate question q_i for the dialogue context context; and P(a_j|context, q_i) represents the probability of producing the answer a_j given the dialogue context context and the candidate question q_i.

In some embodiments, the amount of information f(context, q_i, a_j) brought by each candidate question is positively correlated with the degree of relevance between the candidate question q_i and the corresponding dialogue context context, and positively correlated with the accuracy of the answer a_j in the answer set A; the probability P(a_j|context, q_i) is positively correlated with the similarity between the candidate question q_i and each other candidate question, and positively correlated with the similarity between the predicted answer of the candidate question q_i and the answer a_j in the answer set A.

In some embodiments, a first recurrent neural network is used to obtain the predicted answer of the candidate question q_i, and a second recurrent neural network is used to calculate the amount of information f(context, q_i, a_j) brought by each candidate question q_i.

In some embodiments, the question generation method further includes: training, based on a corpus and a loss function and using a stochastic gradient descent method, to form the first recurrent neural network and the second recurrent neural network, where the corpus includes a training dialogue context Tcontext, a training candidate question set TQ, and a training answer set TA; the training candidate question set TQ includes a second selected number of training candidate questions q_l; the training answer set TA includes the answer a_m corresponding to each candidate question in the training candidate question set TQ; l is a positive integer less than or equal to the second selected number, and m is a positive integer less than or equal to the second selected number; the loss function is negatively correlated with the similarity between each candidate question q_l and each other candidate question, negatively correlated with the similarity between the predicted answer of the candidate question q_l and the answer a_m in the training answer set TA, negatively correlated with the degree of relevance between the candidate question q_l and the corresponding training dialogue context Tcontext, and negatively correlated with the accuracy of the answer a_m.

According to yet other embodiments of the present disclosure, a computer-readable storage medium is provided, having a computer program stored thereon; when the program is executed by a processor, the question generation method according to any one of the foregoing embodiments is implemented.
Other features and advantages of the present disclosure will become clear from the following detailed description of exemplary embodiments of the present disclosure with reference to the accompanying drawings.

Brief Description of the Drawings

The accompanying drawings, which constitute a part of the specification, illustrate embodiments of the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.

The present disclosure can be understood more clearly from the following detailed description with reference to the accompanying drawings, in which:

FIG. 1 is a flowchart illustrating a question generation method according to some embodiments of the present disclosure;

FIG. 2 is a flowchart illustrating a question generation method according to other embodiments of the present disclosure;

FIG. 3 is a block diagram illustrating a question generation device according to some embodiments of the present disclosure;

FIG. 4 is a block diagram illustrating an electronic device according to some embodiments of the present disclosure;

FIG. 5 is a block diagram illustrating a medical inquiry system according to some embodiments of the present disclosure;

FIG. 6 is a block diagram illustrating a computer system for implementing some embodiments of the present disclosure.

It should be understood that the dimensions of the various parts shown in the drawings are not drawn to scale. In addition, the same or similar reference numerals denote the same or similar components.
Detailed Description

Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. The description of the exemplary embodiments is merely illustrative and is in no way intended to limit the present disclosure or its application or use. The present disclosure can be implemented in many different forms and is not limited to the embodiments described herein. These embodiments are provided so that the present disclosure will be thorough and complete, and will fully convey the scope of the present disclosure to those skilled in the art. It should be noted that, unless otherwise specified, the relative arrangement of the components and steps set forth in these embodiments should be construed as merely exemplary rather than limiting.

All terms (including technical and scientific terms) used in the present disclosure have the same meanings as understood by those of ordinary skill in the art to which the present disclosure belongs, unless otherwise specifically defined. It should also be understood that terms defined in, for example, general-purpose dictionaries should be construed as having meanings consistent with their meanings in the context of the related art, and should not be interpreted in an idealized or overly formalized sense unless expressly so defined herein.

Techniques, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be regarded as part of the specification.
Question-answer dialogues are very common in various fields. For example, in industries such as hotel accommodation, guests may ask front-desk staff multiple questions or multiple rounds of questions; in word-answering and poetry-chain competitions on TV variety shows, guests continually pose questions to contestants; and in industries such as telecommunications and cable television, service providers also provide services such as telephone customer service to subscribers.

A large number of dialogue rounds, consuming a long time, may be needed to locate the accurate answer to the question raised by a user, which incurs costs in labor, time, and so on.

For example, in the medical field the above problem is evident. Many rounds of interaction with a patient, taking a long time, are needed to make an accurate diagnosis of the disease, its condition, the medication, and so on. This may delay the patient's treatment and waste medical resources.

The present disclosure proposes a question generation method that can obtain as much information as possible with as few questions as possible, so that an accurate diagnosis can be made in a short time.

In the following embodiments, an inquiry dialogue in the medical field is taken as an example to illustrate the implementation of the question generation method of the present disclosure. Those skilled in the art can understand that the method is likewise applicable to other question-answer dialogues.
FIG. 1 is a flowchart illustrating a question generation method according to some embodiments of the present disclosure. As shown in FIG. 1, the question generation method includes steps S2-S6.

In step S2, a candidate question set Q is obtained.

In some embodiments, the candidate question set Q is obtained according to the inquiry context. The inquiry context may include the patient's description. The candidate question set Q includes a first selected number of candidate questions q_i whose degree of relevance to the inquiry context meets a threshold, i being a positive integer less than or equal to the first selected number.

For example, the inquiry context can be used as a query sentence, and a text retrieval tool such as whoosh or lucene can be used to retrieve from the corpus the first selected number (for example, the top 10) of inquiry contexts closest to the given inquiry context; the question q_i corresponding to each retrieved inquiry context constitutes the candidate question set Q.
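As an illustrative, non-authoritative sketch of this retrieval step, the whoosh library could be used roughly as follows; the schema, field names, and toy corpus records below are assumptions, not part of the disclosure:

```python
# Illustrative sketch only: index (context, question) records and retrieve
# the questions whose stored contexts best match a new inquiry context.
import os
from whoosh.fields import Schema, TEXT, ID
from whoosh.index import create_in
from whoosh.qparser import QueryParser, OrGroup

schema = Schema(doc_id=ID(stored=True),
                context=TEXT(stored=True),    # stored inquiry context C
                question=TEXT(stored=True))   # question q asked for that context

os.makedirs("corpus_index", exist_ok=True)
ix = create_in("corpus_index", schema)
writer = ix.writer()
writer.add_document(doc_id="1", context="my calf hurts at night",
                    question="Does your calf cramp?")
writer.add_document(doc_id="2", context="calf pain after long walks",
                    question="Do your lower limbs spasm?")
writer.commit()

def candidate_questions(inquiry_context, k=10):
    """Return the questions q_i for the top-k most similar stored contexts."""
    with ix.searcher() as searcher:
        query = QueryParser("context", ix.schema, group=OrGroup).parse(inquiry_context)
        return [hit["question"] for hit in searcher.search(query, limit=k)]

print(candidate_questions("my calf hurts", k=10))
```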
In some embodiments, in step S2, the answer a_j corresponding to each candidate question in the candidate question set Q is also obtained to obtain the answer set A, where j is a positive integer less than or equal to the first selected number, and j = i indicates that the answer a_j is the answer corresponding to the candidate question q_i.

In step S4, the information value of each candidate question in the candidate question set Q is calculated.

In some embodiments, the information value of each candidate question q_i in the candidate question set Q is calculated according to the expected value of the amount of information brought by the candidate question q_i.

For example, the information value of each candidate question q_i can be expressed as

Σ_j P(a_j|context, q_i) × f(context, q_i, a_j),

where the amount of information brought by each candidate question is f(context, q_i, a_j), which represents the amount of information brought by the candidate question q_i for the inquiry context context; and P(a_j|context, q_i) represents the probability of producing the answer a_j given the inquiry context context and the candidate question q_i.
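A minimal sketch of this expected-value computation, assuming P and f are available as callables (their neural-network implementations are sketched further below); the function name is illustrative:

```python
# Information value of q_i as the expectation over the answer set A:
#   sum_j P(a_j | context, q_i) * f(context, q_i, a_j)
def information_value(context, q_i, answer_set, P, f):
    return sum(P(a_j, context, q_i) * f(context, q_i, a_j) for a_j in answer_set)
```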
In some embodiments, the amount of information f(context, q_i, a_j) brought by each candidate question q_i is positively correlated with the degree of relevance between the candidate question q_i and the corresponding inquiry context context, and positively correlated with the accuracy of the answer a_j in the answer set A.

f(context, q_i, a_j) reflects the amount of information when the candidate question q_i and the answer a_j are considered together. When the candidate question q_i is highly relevant to the inquiry context context and the answer a_j accurately answers the candidate question q_i, f(context, q_i, a_j) is larger.

For example, for the inquiry context "my calf hurts", the candidate questions "Do your lower limbs spasm?" and "Does your calf cramp?" are both highly relevant to the inquiry context. However, since the candidate question "Do your lower limbs spasm?" uses standard medical terminology, a patient lacking medical knowledge may not understand the question and may give an answer such as "What does spasm mean?"; that is, the accuracy of the obtained answer may be low.

In contrast, since the candidate question "Does your calf cramp?" uses relatively easy-to-understand everyday language, an accurate answer such as "My calf cramped once" will be obtained; that is, the accuracy of the obtained answer is high.

Therefore, for the inquiry context "my calf hurts", the candidate question "Does your calf cramp?" brings more information than the candidate question "Do your lower limbs spasm?".
In other embodiments, the probability P(a_j|context, q_i) is positively correlated with the similarity between the candidate question q_i and each other candidate question, and positively correlated with the similarity between the predicted answer of the candidate question q_i and the answer a_j in the answer set A.

For example, the probability P(a_j|context, q_i) can be expressed as

P(a_j|context, q_i) ∝ exp(cos(G(context, q_i), a_j)) × cos(q_i, q_j),

where cos(q_i, q_j) represents the similarity between the candidate questions q_i and q_j, G(context, q_i) represents the predicted answer of the candidate question q_i, cos(G(context, q_i), a_j) represents the similarity between the predicted answer of the candidate question q_i and the answer a_j in the answer set A, cos denotes the cosine similarity, and exp denotes the exponential function with the natural constant e as its base.

The inquiry context context, the candidate question q_i, and the answer a_j can be represented by word vectors. The word-vector representations of the inquiry context context, the candidate question q_i, and the answer a_j can be implemented using techniques such as word2vector.
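A sketch of the unnormalized probability above with sentences embedded as averaged word2vec vectors; the gensim usage, toy corpus, and function names are assumptions:

```python
# Sketch of the unnormalized P(a_j | context, q_i)
#   ∝ exp(cos(G(context, q_i), a_j)) × cos(q_i, q_j)
import numpy as np
from gensim.models import Word2Vec

toy_corpus = [["calf", "hurts"], ["calf", "cramp"], ["lower", "limbs", "spasm"]]
w2v = Word2Vec(toy_corpus, vector_size=50, min_count=1)

def embed(tokens):
    # average word vectors; zero vector if no token is in the vocabulary
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

def cos(u, v):
    denom = (np.linalg.norm(u) * np.linalg.norm(v)) or 1.0
    return float(u @ v) / denom

def P_unnormalized(a_j, q_i, q_j, predicted_answer):
    # predicted_answer stands in for G(context, q_i); normalize the returned
    # values over all j to obtain a distribution over the answer set A
    return np.exp(cos(predicted_answer, embed(a_j))) * cos(embed(q_i), embed(q_j))
```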
In some embodiments, a first recurrent neural network is used to obtain the predicted answer of the candidate question q_i, and a second recurrent neural network is used to calculate the amount of information f(context, q_i, a_j) brought by each candidate question q_i.

Both the first recurrent neural network and the second recurrent neural network may be a gated recurrent unit (GRU) network or a long short-term memory (LSTM) network. In some embodiments, the first recurrent neural network is a GRU network, and the second recurrent neural network is an LSTM network.
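An illustrative PyTorch sketch of the two networks (a GRU producing the predicted-answer vector G(context, q_i) and an LSTM scoring f(context, q_i, a_j)); the dimensions, pooling, and input packing are assumptions:

```python
import torch
import torch.nn as nn

EMB = 50   # word-vector size
HID = 64   # hidden size

class AnswerPredictor(nn.Module):        # G(context, q_i)
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(EMB, HID, batch_first=True)
        self.proj = nn.Linear(HID, EMB)  # map back to word-vector space

    def forward(self, ctx_q_seq):        # (batch, seq_len, EMB)
        _, h = self.gru(ctx_q_seq)
        return self.proj(h[-1])          # (batch, EMB) predicted-answer vector

class InfoScorer(nn.Module):             # f(context, q_i, a_j)
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(EMB, HID, batch_first=True)
        self.score = nn.Linear(HID, 1)

    def forward(self, ctx_q_a_seq):      # (batch, seq_len, EMB)
        _, (h, _) = self.lstm(ctx_q_a_seq)
        return self.score(h[-1]).squeeze(-1)  # (batch,) information amount

G, f_net = AnswerPredictor(), InfoScorer()
print(G(torch.randn(2, 12, EMB)).shape, f_net(torch.randn(2, 18, EMB)).shape)
```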
In step S6, at least one question is generated according to the information value of each candidate question.

The generated at least one question may include an inquiry question to be asked to the patient.

In some embodiments, the candidate question with the greatest information value is selected from the candidate question set Q as the generated question.

For example, each time an inquiry question is to be asked to the patient, the candidate question with the greatest information value is selected. For the next inquiry question, the candidate question with the currently greatest information value (that is, the inquiry question that has already been asked to the patient) can be added to the inquiry context context, and then the flow of the aforementioned question generation method is repeated. In this way, a series of questions can be generated with a small amount of computation.

In other embodiments, candidate questions can also be selected by considering the sum of the amounts of information of the multiple generated questions as a whole. For example, assuming that 3 inquiry questions are to be generated, the 3 inquiry questions bringing the largest sum of information can be selected as the generated questions according to the information value of each candidate question. In this way, the overall efficiency of information collection can be improved.
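A sketch of both selection strategies of step S6, reusing the information_value helper sketched above; names are illustrative:

```python
def greedy_questions(context, candidates, answer_set, P, f, rounds=3):
    """Pick one max-value question per round; the asked question joins the context."""
    context, asked, pool = list(context), [], list(candidates)
    for _ in range(rounds):
        q = max(pool, key=lambda q: information_value(context, q, answer_set, P, f))
        asked.append(q)
        pool.remove(q)
        context = context + [q]          # asked question becomes part of the context
    return asked

def top3_questions(context, candidates, answer_set, P, f):
    """Pick the 3 questions whose summed information value is largest."""
    ranked = sorted(candidates,
                    key=lambda q: information_value(context, q, answer_set, P, f),
                    reverse=True)
    return ranked[:3]
```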
FIG. 2 is a flowchart illustrating a question generation method according to other embodiments of the present disclosure. FIG. 2 differs from FIG. 1 in that the question generation method in FIG. 2 further includes step S1. Only the differences between FIG. 2 and FIG. 1 are described below; the similarities are not repeated.

In step S1, based on a corpus and a loss function, a stochastic gradient descent method is used for training to form the first recurrent neural network and the second recurrent neural network.

Training samples (the training inquiry context Tcontext, the training candidate question set TQ, and the training answer set TA) can be obtained from the corpus in a manner similar to obtaining the candidate question set Q in step S2, so as to construct a training data set.

The training candidate question set TQ includes a second selected number of training candidate questions q_l, l being a positive integer less than or equal to the second selected number. The training answer set TA includes the answer a_m corresponding to each candidate question in the training candidate question set TQ, m being a positive integer less than or equal to the second selected number. The second selected number can be set according to training needs.

The corpus can be constructed based on relevant information crawled from the Internet by a crawler, the relevant information including the patient's initial description, the questions of the medical staff, and the patient's answers to the questions. This information can be stored in the corpus as (C, q, a), where: the patient's initial description corresponds to the inquiry context and is denoted by C; for that inquiry context, a question of the medical staff is denoted by q; and the patient's answer to the question q is denoted by a.

For example, on a medical website a patient asks, "My throat hurts a lot. When I had a cold I thought it was caused by the cold, but the cold is gone and it still hasn't improved...", which can be denoted by C; the doctor asks, "How old are you? Have you had a similar medical history before? What medicine have you taken...", which can be denoted by q_1; the patient answers, "I'm 23 this year, no similar medical history; I just took cold medicine, I can't remember exactly which, maybe roxithromycin and a heat-clearing detoxification granule", which can be denoted by a_1; the doctor then follows up, "How long has your throat been sore this time?", which can be denoted by q_2; and the patient answers the follow-up, "About eight or nine days", which can be denoted by a_2. That is, the above information can be stored as (C, q_1, a_1, q_2, a_2).
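A small illustrative example of such a stored record (the tuple layout mirrors the example above; nothing here is prescribed by the disclosure):

```python
# One crawled consultation stored as (C, q_1, a_1, q_2, a_2)
consultation = (
    "My throat hurts a lot; the cold is gone but it still hasn't improved",  # C
    "How old are you? Any similar medical history? What medicine have you taken?",  # q_1
    "I'm 23, no similar history; I just took cold medicine",                 # a_1
    "How long has your throat been sore this time?",                         # q_2
    "About eight or nine days",                                              # a_2
)

C = consultation[0]
qa_pairs = list(zip(consultation[1::2], consultation[2::2]))  # [(q_1, a_1), (q_2, a_2)]
for q, a in qa_pairs:
    print(f"Q: {q}\nA: {a}")
```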
Although the above illustrates a method of constructing the corpus using information from the Internet, other sources, such as medical records from hospitals, clinics, and other medical institutions, can also be used to construct the required corpus.

In some embodiments, the loss function is constructed to be: negatively correlated with the similarity between each candidate question q_l and each other candidate question; negatively correlated with the similarity between the predicted answer of the candidate question q_l and the answer a_m in the training answer set TA; negatively correlated with the degree of relevance between the candidate question q_l and the corresponding training inquiry context Tcontext; and negatively correlated with the accuracy of the answer a_m.

For example, the loss function is expressed as

Loss = Σ_{l,m} (L(Tcontext, q_l, a_m) + L(y_l, Tcontext, q_l, a_m)), where

L(Tcontext, q_l, a_m) = -log(exp(cos(G(Tcontext, q_l), a_m)) × cos(q_l, q_n) / Σ_n exp(cos(G(Tcontext, q_l), a_n))),

L(y_l, Tcontext, q_l, a_m) = -y_l log(σ(LSTM(Tcontext, q_l, a_m))).

cos(q_l, q_n) represents the similarity between the candidate questions q_l and q_n, G(Tcontext, q_l) represents the predicted answer of the candidate question q_l, cos(G(Tcontext, q_l), a_m) represents the similarity between the predicted answer of the candidate question q_l and the answer a_m, cos(G(Tcontext, q_l), a_n) represents the similarity between the predicted answer of the candidate question q_l and the answer a_n in the training answer set, and n is a positive integer less than or equal to the second selected number; y_l = 1 when l equals m, and y_l = 0 when l does not equal m; σ is the sigmoid function; and LSTM(Tcontext, q_l, a_m) represents the amount of information brought by the candidate question q_l for the training inquiry context Tcontext.
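An illustrative PyTorch sketch of the two loss terms. Note that the exact closed form of the first term is an assumption here (a softmax-style normalization consistent with the symbol definitions above), since the original publishes the formula only as an image:

```python
import torch
import torch.nn.functional as F

def loss_first_term(pred_answer, answer_set, m, q_sim):
    # -log( exp(cos(G, a_m)) * cos(q_l, q_n) / sum_n exp(cos(G, a_n)) );
    # q_sim = cos(q_l, q_n), assumed positive here so the log is defined.
    sims = torch.stack([F.cosine_similarity(pred_answer, a, dim=0)
                        for a in answer_set])        # cos(G, a_n) for every n
    return -(sims[m] + torch.log(q_sim) - torch.logsumexp(sims, dim=0))

def loss_second_term(info_score, y_l):
    # -y_l * log(sigmoid(LSTM(Tcontext, q_l, a_m)))
    return -y_l * torch.log(torch.sigmoid(info_score))
```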
The stochastic gradient descent method randomly selects one piece of training data for each computation in the direction of fastest descent, instead of scanning the entire training data set, which can speed up iteration.
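A self-contained toy sketch of this per-sample stochastic gradient descent loop; the model and data below are stand-ins for the networks and (Tcontext, q_l, a_m) tuples sketched earlier:

```python
import random
import torch
import torch.nn as nn

model = nn.LSTM(50, 64, batch_first=True)   # stands in for the scoring network
head = nn.Linear(64, 1)
params = list(model.parameters()) + list(head.parameters())
optimizer = torch.optim.SGD(params, lr=0.01)

# toy (sequence, y_l) pairs standing in for embedded (Tcontext, q_l, a_m) tuples
train_set = [(torch.randn(1, 18, 50), torch.tensor(float(i % 2))) for i in range(8)]

for step in range(100):
    seq, y_l = random.choice(train_set)     # sample ONE tuple, not the whole set
    optimizer.zero_grad()
    _, (h, _) = model(seq)
    info = head(h[-1]).squeeze()            # stands in for LSTM(Tcontext, q_l, a_m)
    loss = -y_l * torch.log(torch.sigmoid(info) + 1e-8)
    loss.backward()
    optimizer.step()
```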
As mentioned above, using the GRU network when calculating the probability P(a_j|context, q_i) can improve training efficiency, while using the LSTM network when calculating the amount of information f(context, q_i, a_j) can achieve better expressive performance when the amount of data is large.
FIG. 3 is a block diagram illustrating a question generation device according to some embodiments of the present disclosure. As shown in FIG. 3, the question generation device 30 includes at least one processor 320.

The processor 320 is configured to: obtain a candidate question set Q; calculate the information value of each candidate question in the candidate question set Q; and generate at least one question according to the information value of each candidate question. The processor 320 can, for example, execute the question generation method shown in FIG. 1 or FIG. 2.

Those skilled in the art will readily understand that the above question generation device 30 may further include a memory for storing at least the candidate question set.

FIG. 4 is a block diagram illustrating an electronic device according to some embodiments of the present disclosure.

As shown in FIG. 4, the electronic device 40 includes a memory 410 and a processor 420 coupled to the memory 410. The memory 410 is configured to store instructions for executing the corresponding embodiments of the question generation method. The processor 420 is configured to execute, based on the instructions stored in the memory 410, one or more steps of the question generation method in any of the embodiments of the present disclosure.

It should be understood that each step of the aforementioned question generation method can be implemented by a processor, and can be implemented in any manner of software, hardware, firmware, or a combination thereof.

In addition to the question generation method and device, embodiments of the present disclosure may also take the form of a computer program product implemented on one or more non-volatile storage media containing computer program instructions. Therefore, embodiments of the present disclosure also provide a computer-readable storage medium on which computer instructions are stored; when the instructions are executed by a processor, the question generation method in any of the foregoing embodiments is implemented.

Embodiments of the present disclosure also provide a medical inquiry system, including the question generation device or electronic device according to any of the foregoing embodiments.
FIG. 5 is a block diagram illustrating a medical inquiry system according to some embodiments of the present disclosure.

As shown in FIG. 5, the inquiry system 5 includes a question generation device 50. The question generation device 50 is configured to execute the question generation method according to any of the foregoing embodiments. The structure of the question generation device 50 may be similar to that of the aforementioned question generation device 30, and it is used to generate, according to the inquiry context, an inquiry question to be asked to the patient.

In some embodiments, the inquiry system 5 further includes an input device 51 and an output device 52. The input device 51 is used to obtain the inquiry context. For example, the inquiry context can be obtained from multimedia data such as text, sound, or images. The output device 52 is used to output the inquiry questions. For example, at least one of a display and a player can be used to output the inquiry questions.

The input device 51 and the output device 52 may be connected to the question generation device 50 through a network, such as a wireless network, a wired network, and/or any combination of the two. The network may include a local area network, the Internet, a telecommunication network, an Internet of Things based on the Internet and/or a telecommunication network, and/or any combination of the above networks. The wired network may, for example, communicate via twisted pair, coaxial cable, or optical fiber transmission; the wireless network may, for example, use a 3G/4G/5G mobile communication network, Bluetooth, Zigbee, or Wi-Fi.

The inquiry system 5 can be implemented as a local service, that is, the input device, the question generation device, and the output device are all located on the user side; the inquiry system 5 can also be implemented as a cloud service, that is, the input device and the output device are located on the user side while the question generation device is located in the cloud, and the cloud question generation device can provide services to multiple users simultaneously or in a time-shared manner.
FIG. 6 is a block diagram illustrating a computer system for implementing some embodiments of the present disclosure.

As shown in FIG. 6, the computer system can be embodied in the form of a general-purpose computing device, and the computer system can be used to implement the question generation device of the above embodiments. The computer system includes a memory 610, a processor 620, and a bus 600 connecting different system components.

The memory 610 may include, for example, a system memory, a non-volatile storage medium, and the like. The system memory stores, for example, an operating system, application programs, a boot loader (Boot Loader), and other programs. The system memory may include a volatile storage medium, for example a random access memory (RAM) and/or a cache memory. The non-volatile storage medium stores, for example, instructions for executing the corresponding embodiments of the question generation method. Non-volatile storage media include, but are not limited to, magnetic disk storage, optical storage, flash memory, and the like.

The processor 620 can be implemented as a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, or discrete hardware components such as discrete gates or transistors. Correspondingly, each module, such as a judgment module and a determination module, can be implemented by a central processing unit (CPU) running instructions in a memory that execute the corresponding steps, or by a dedicated circuit that executes the corresponding steps.

The bus 600 can use any bus structure among a variety of bus structures. For example, the bus structure includes, but is not limited to, an industry standard architecture (ISA) bus, a micro channel architecture (MCA) bus, and a peripheral component interconnect (PCI) bus.

The computer system may also include an input/output interface 630, a network interface 640, a storage interface 650, and so on. These interfaces 630, 640, and 650, as well as the memory 610 and the processor 620, may be connected through the bus 600. The input/output interface 630 can provide a connection interface for input/output devices such as a display, a mouse, and a keyboard. The network interface 640 provides a connection interface for various networked devices. The storage interface 650 provides a connection interface for external storage devices such as floppy disks, USB flash drives, and SD cards.
Various embodiments of the present disclosure have been described in detail above. In order to avoid obscuring the concept of the present disclosure, some details known in the art are not described. From the above description, those skilled in the art can fully understand how to implement the technical solutions disclosed herein.

Although some specific embodiments of the present disclosure have been described in detail by way of example, those skilled in the art should understand that the above examples are for illustration only and are not intended to limit the scope of the present disclosure. Those skilled in the art should understand that the above embodiments can be modified, or some technical features can be equivalently replaced, without departing from the scope and spirit of the present disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (26)

  1. A question generation device, comprising at least one processor, wherein the at least one processor is configured to:
    obtain a candidate question set Q;
    calculate the information value of each candidate question in the candidate question set Q; and
    generate at least one question according to the information value of each candidate question.
  2. The question generation device according to claim 1, wherein generating at least one question comprises:
    selecting the candidate question with the greatest information value from the candidate question set Q as the generated question.
  3. The question generation device according to claim 1 or 2, wherein the at least one processor is configured to:
    obtain the candidate question set Q according to a dialogue context;
    the candidate question set Q comprises a first selected number of candidate questions q_i whose degree of relevance to the dialogue context meets a threshold, i being a positive integer less than or equal to the first selected number; and
    the generated at least one question comprises a dialogue question to be asked to a dialogue target.
  4. The question generation device according to any one of claims 1-3, wherein the information value of each candidate question is positively correlated with the expected value of the amount of information brought by the candidate question.
  5. The question generation device according to any one of claims 1-4, wherein the at least one processor is further configured to:
    obtain the answer a_j corresponding to each candidate question in the candidate question set Q to obtain an answer set A, wherein j is a positive integer less than or equal to the first selected number, and j = i indicates that the answer a_j is the answer corresponding to the candidate question q_i.
  6. The question generation device according to any one of claims 1-5, wherein the at least one processor is configured to obtain the information value of each candidate question through
    Σ_j P(a_j|context, q_i) × f(context, q_i, a_j),
    wherein:
    context represents the dialogue context;
    f(context, q_i, a_j) represents the amount of information brought by each candidate question q_i for the dialogue context context; and
    P(a_j|context, q_i) represents the probability of producing the answer a_j given the dialogue context context and the candidate question q_i.
  7. The question generation device according to any one of claims 1-6, wherein:
    the amount of information f(context, q_i, a_j) brought by each candidate question q_i is positively correlated with the degree of relevance between the candidate question q_i and the corresponding dialogue context context, and is positively correlated with the accuracy of the answer a_j in the answer set A; and
    the probability P(a_j|context, q_i) is positively correlated with the similarity between the candidate question q_i and each other candidate question, and is positively correlated with the similarity between the predicted answer of the candidate question q_i and the answer a_j in the answer set A.
  8. The question generation device according to any one of claims 1-7, wherein the probability P(a_j|context, q_i) is expressed as P(a_j|context, q_i) ∝ exp(cos(G(context, q_i), a_j)) × cos(q_i, q_j), wherein
    cos(q_i, q_j) represents the similarity between the candidate questions q_i and q_j,
    G(context, q_i) represents the predicted answer of the candidate question q_i, and
    cos(G(context, q_i), a_j) represents the similarity between the predicted answer of the candidate question q_i and the answer a_j in the answer set A.
  9. The question generation device according to any one of claims 1-8, wherein:
    a first recurrent neural network is used to obtain the predicted answer of the candidate question q_i; and
    a second recurrent neural network is used to calculate the amount of information f(context, q_i, a_j) brought by each candidate question q_i.
  10. The question generation device according to any one of claims 1-9, wherein:
    the first recurrent neural network is a gated recurrent unit network; and
    the second recurrent neural network is a long short-term memory network.
  11. The question generation device according to any one of claims 1-10, wherein the at least one processor is further configured to: train, based on a corpus and a loss function and using a stochastic gradient descent method, to form the first recurrent neural network and the second recurrent neural network, wherein
    the corpus comprises a training dialogue context Tcontext, a training candidate question set TQ, and a training answer set TA; the training candidate question set TQ comprises a second selected number of training candidate questions q_l; the training answer set TA comprises the answer a_m corresponding to each candidate question in the training candidate question set TQ; l is a positive integer less than or equal to the second selected number, and m is a positive integer less than or equal to the second selected number; and
    the loss function is negatively correlated with the similarity between each candidate question q_l and each other candidate question, negatively correlated with the similarity between the predicted answer of the candidate question q_l and the answer a_m in the training answer set TA, negatively correlated with the degree of relevance between the candidate question q_l and the corresponding training dialogue context Tcontext, and negatively correlated with the accuracy of the answer a_m.
  12. The question generation device according to any one of claims 1-11, wherein the loss function is expressed as
    Loss = Σ_{l,m} (L(Tcontext, q_l, a_m) + L(y_l, Tcontext, q_l, a_m)), wherein
    L(Tcontext, q_l, a_m) = -log(exp(cos(G(Tcontext, q_l), a_m)) × cos(q_l, q_n) / Σ_n exp(cos(G(Tcontext, q_l), a_n))),
    L(y_l, Tcontext, q_l, a_m) = -y_l log(σ(LSTM(Tcontext, q_l, a_m))),
    cos(q_l, q_n) represents the similarity between the candidate questions q_l and q_n, G(Tcontext, q_l) represents the predicted answer of the candidate question q_l, cos(G(Tcontext, q_l), a_m) represents the similarity between the predicted answer of the candidate question q_l and the answer a_m, cos(G(Tcontext, q_l), a_n) represents the similarity between the predicted answer of the candidate question q_l and the answer a_n in the training answer set, and n is a positive integer less than or equal to the second selected number;
    y_l = 1 when l equals m, and y_l = 0 when l does not equal m;
    σ is the sigmoid function; and
    LSTM(Tcontext, q_l, a_m) represents the amount of information brought by the candidate question q_l for the training dialogue context Tcontext.
  13. A medical inquiry system, comprising the question generation device according to any one of claims 3-12, wherein the dialogue is a dialogue interaction in a medical inquiry, and the dialogue target is a patient; the question generation device is configured to generate, according to the inquiry context, an inquiry question to be asked to the patient.
  14. The medical inquiry system according to claim 13, further comprising:
    an input device configured to obtain the inquiry context; and
    an output device configured to output the inquiry question.
  15. The medical inquiry system according to any one of claims 13-14, wherein the inquiry context comprises a description given by the patient.
  16. The medical inquiry system according to any one of claims 13-15, wherein the inquiry context comprises inquiry questions that have already been asked to the patient.
  17. A question generation method, comprising:
    obtaining a candidate question set Q;
    calculating the information value of each candidate question in the candidate question set Q; and
    generating at least one question according to the information value of each candidate question.
  18. The question generation method according to claim 17, wherein generating at least one question comprises:
    selecting the candidate question with the greatest information value from the candidate question set Q as the generated question.
  19. The question generation method according to claim 17 or 18, wherein:
    the candidate question set Q is obtained according to a dialogue context;
    the candidate question set Q comprises a first selected number of candidate questions q_i whose degree of relevance to the dialogue context meets a threshold, i being a positive integer less than or equal to the first selected number; and
    the generated at least one question comprises a dialogue question to be asked to a dialogue target.
  20. The question generation method according to any one of claims 17-19, wherein the information value of each candidate question in the candidate question set Q is calculated according to the expected value of the amount of information brought by each candidate question.
  21. The question generation method according to any one of claims 17-20, further comprising:
    obtaining the answer a_j corresponding to each candidate question in the candidate question set Q to obtain an answer set A, wherein j is a positive integer less than or equal to the first selected number, and j = i indicates that the answer a_j is the answer corresponding to the candidate question q_i.
  22. The question generation method according to any one of claims 17-21, wherein the information value of each candidate question is expressed as
    Σ_j P(a_j|context, q_i) × f(context, q_i, a_j),
    wherein:
    the amount of information brought by each candidate question is f(context, q_i, a_j), which represents the amount of information brought by the candidate question q_i for the dialogue context context; and
    P(a_j|context, q_i) represents the probability of producing the answer a_j given the dialogue context context and the candidate question q_i.
  23. The question generation method according to any one of claims 17-22, wherein:
    the amount of information f(context, q_i, a_j) brought by each candidate question is positively correlated with the degree of relevance between the candidate question q_i and the corresponding dialogue context context, and is positively correlated with the accuracy of the answer a_j in the answer set A; and
    the probability P(a_j|context, q_i) is positively correlated with the similarity between the candidate question q_i and each other candidate question, and is positively correlated with the similarity between the predicted answer of the candidate question q_i and the answer a_j in the answer set A.
  24. The question generation method according to any one of claims 17-23, wherein:
    a first recurrent neural network is used to obtain the predicted answer of the candidate question q_i; and
    a second recurrent neural network is used to calculate the amount of information f(context, q_i, a_j) brought by each candidate question q_i.
  25. The question generation method according to any one of claims 17-24, further comprising:
    training, based on a corpus and a loss function and using a stochastic gradient descent method, to form the first recurrent neural network and the second recurrent neural network, wherein
    the corpus comprises a training dialogue context Tcontext, a training candidate question set TQ, and a training answer set TA; the training candidate question set TQ comprises a second selected number of training candidate questions q_l; the training answer set TA comprises the answer a_m corresponding to each candidate question in the training candidate question set TQ; l is a positive integer less than or equal to the second selected number, and m is a positive integer less than or equal to the second selected number; and
    the loss function is negatively correlated with the similarity between each candidate question q_l and each other candidate question, negatively correlated with the similarity between the predicted answer of the candidate question q_l and the answer a_m in the training answer set TA, negatively correlated with the degree of relevance between the candidate question q_l and the corresponding training dialogue context Tcontext, and negatively correlated with the accuracy of the answer a_m.
  26. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the question generation method according to any one of claims 17 to 25.
PCT/CN2019/078734 2019-03-19 2019-03-19 Question generation method and device, medical inquiry system, and computer-readable storage medium WO2020186458A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US16/640,267 US11600389B2 (en) 2019-03-19 2019-03-19 Question generating method and apparatus, inquiring diagnosis system, and computer readable storage medium
PCT/CN2019/078734 WO2020186458A1 (zh) 2019-03-19 2019-03-19 Question generation method and device, medical inquiry system, and computer-readable storage medium
CN201980000329.9A CN110088748B (zh) 2019-03-19 2019-03-19 Question generation method and device, medical inquiry system, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/078734 WO2020186458A1 (zh) 2019-03-19 2019-03-19 Question generation method and device, medical inquiry system, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2020186458A1 (zh) 2020-09-24

Family

ID=67424507

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/078734 WO2020186458A1 (zh) 2019-03-19 2019-03-19 问题生成方法和装置、问诊系统、计算机可读存储介质

Country Status (3)

Country Link
US (1) US11600389B2 (zh)
CN (1) CN110088748B (zh)
WO (1) WO2020186458A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210257096A1 (en) * 2020-02-19 2021-08-19 University Of Virginia Patent Foundation Influencing end-stage renal disease outcomes through predicting physiological parameters and determining dosing recommendations
CN114944227A * 2022-06-17 2022-08-26 平安科技(深圳)有限公司 Online skin disease inquiry method, apparatus, system, device and medium
KR102660276B1 * 2023-10-11 2024-04-25 주식회사 마인즈앤컴퍼니 Task-performing chatbot service apparatus using a large language model and method of operating the apparatus
CN117894473A * 2024-01-16 2024-04-16 上海交通大学医学院附属新华医院 Discharge readiness assessment method and system based on an adaptive ensemble learning model

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150324492A1 (en) * 2014-05-09 2015-11-12 Autodesk, Inc Techniques for using controlled natural language to capture design intent for computer-aided design
CN105893523A * 2016-03-31 2016-08-24 华东师范大学 Method for calculating question similarity using an evaluation metric of answer relevance ranking
CN108595619A * 2018-04-23 2018-09-28 海信集团有限公司 Question answering method and device
CN109036588A * 2018-09-10 2018-12-18 百度在线网络技术(北京)有限公司 Online medical inquiry method, apparatus, device and computer-readable medium
CN109271505A * 2018-11-12 2019-01-25 深圳智能思创科技有限公司 Method for implementing a question answering system based on question-answer pairs

Family Cites Families (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004104919A2 * 2003-05-22 2004-12-02 Siemens Aktiengesellschaft Expert system for evaluating an information gain about a state of a system
US20050197988A1 (en) * 2004-02-17 2005-09-08 Bublitz Scott T. Adaptive survey and assessment administration using Bayesian belief networks
US20070207449A1 (en) * 2005-05-19 2007-09-06 Feierstein Roslyn E Method of analyzing question responses to select among defined possibilities and means of accomplishing same
CN101286161B * 2008-05-28 2010-10-06 华中科技大学 Concept-based intelligent Chinese question answering system
US20140052722A1 (en) * 2012-08-16 2014-02-20 Dimitris J. Bertsimas Optimization-based regimen method and system for personalized diabetes and diet management
US9257052B2 (en) * 2012-08-23 2016-02-09 International Business Machines Corporation Evaluating candidate answers to questions in a target knowledge domain
CN104572734B * 2013-10-23 2019-04-30 腾讯科技(深圳)有限公司 Question recommendation method, apparatus and system
US20150332021A1 (en) * 2014-05-15 2015-11-19 ThoroughCare, Inc. Guided Patient Interview and Health Management Systems
US20160371663A1 (en) * 2015-06-17 2016-12-22 Bruce Knuteson Information Exchange System and Methods
US20170116870A1 (en) * 2015-10-21 2017-04-27 Duolingo, Inc. Automatic test personalization
US20170364804A1 (en) * 2016-06-15 2017-12-21 International Business Machines Corporation Answer Scoring Based on a Combination of Specificity and Informativity Metrics
EP3270331A1 (en) * 2016-07-15 2018-01-17 Little Brain NV A medical consultation support tool
US10540967B2 (en) * 2016-11-14 2020-01-21 Xerox Corporation Machine reading method for dialog state tracking
CN108228637B * 2016-12-21 2020-09-04 中国电信股份有限公司 Natural language automatic customer response method and system
WO2019070763A1 (en) * 2017-10-02 2019-04-11 New Sun Technologies, Inc. LEARNING TRAINING SYSTEM MEDIATION MACHINE BY A CAREGIVER
CN108509463B * 2017-02-28 2022-03-29 华为技术有限公司 Question answering method and apparatus
US11729120B2 (en) 2017-03-16 2023-08-15 Microsoft Technology Licensing, Llc Generating responses in automated chatting
US10713289B1 (en) * 2017-03-31 2020-07-14 Amazon Technologies, Inc. Question answering system
US10854335B2 (en) * 2017-06-16 2020-12-01 Htc Corporation Computer aided medical method and medical system for medical prediction
CN107832047B * 2017-11-27 2018-11-27 北京理工大学 LSTM-based recommendation method for actual arguments of non-API functions
US10635521B2 (en) * 2017-12-15 2020-04-28 International Business Machines Corporation Conversational problem determination based on bipartite graph
CN108242266A * 2018-01-22 2018-07-03 北京颐圣智能科技有限公司 Auxiliary diagnosis apparatus and method
CN108491433B * 2018-02-09 2022-05-03 平安科技(深圳)有限公司 Chat response method, electronic device and storage medium
JP7149560B2 * 2018-04-13 2022-10-07 国立研究開発法人情報通信研究機構 Request paraphrasing system, method of training a request paraphrasing model and a request determination model, and dialogue system
CA3048436A1 (en) * 2018-07-03 2020-01-03 University Of Waterloo System, method and computer program product for query clarification
CN109002540B * 2018-07-23 2021-03-16 电子科技大学 Method for automatically generating question-answer pairs for Chinese announcement documents
GB201815743D0 (en) * 2018-09-27 2018-11-14 Microsoft Technology Licensing Llc Gathering data in a communication system
US20200219617A1 (en) * 2019-01-03 2020-07-09 International Business Machines Corporation Apparatus and method for initial information gathering from patients at the point of care
US10387575B1 (en) * 2019-01-30 2019-08-20 Babylon Partners Limited Semantic graph traversal for recognition of inferred clauses within natural language inputs
US11145414B2 (en) * 2019-02-28 2021-10-12 Babylon Partners Limited Dialogue flow using semantic simplexes

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150324492A1 (en) * 2014-05-09 2015-11-12 Autodesk, Inc Techniques for using controlled natural language to capture design intent for computer-aided design
CN105893523A * 2016-03-31 2016-08-24 华东师范大学 Method for calculating question similarity using an evaluation metric of answer relevance ranking
CN108595619A * 2018-04-23 2018-09-28 海信集团有限公司 Question answering method and device
CN109036588A * 2018-09-10 2018-12-18 百度在线网络技术(北京)有限公司 Online medical inquiry method, apparatus, device and computer-readable medium
CN109271505A * 2018-11-12 2019-01-25 深圳智能思创科技有限公司 Method for implementing a question answering system based on question-answer pairs

Also Published As

Publication number Publication date
CN110088748A (zh) 2019-08-02
US20210034305A1 (en) 2021-02-04
US11600389B2 (en) 2023-03-07
CN110088748B (zh) 2023-11-14

Similar Documents

Publication Publication Date Title
Lu et al. Machine learning for synthetic data generation: a review
WO2020186458A1 (zh) Question generation method and device, medical inquiry system, and computer-readable storage medium
Volkova et al. Forecasting influenza-like illness dynamics for military populations using neural networks and social media
Wang et al. Methods for correcting inference based on outcomes predicted by machine learning
Paul et al. Discovering health topics in social media using topic models
JP7116435B2 (ja) エンティティ・モデルの確立
WO2021159738A1 (zh) Medical-field-based data recommendation method, device, server and storage medium
US11276494B2 (en) Predicting interactions between drugs and diseases
Li et al. Extracting medical knowledge from crowdsourced question answering website
WO2023178971A1 (zh) Internet registration method, apparatus, device and storage medium for seeking medical treatment
CN108417272A Similar case recommendation method and device with temporal constraints
US10936962B1 (en) Methods and systems for confirming an advisory interaction with an artificial intelligence platform
WO2022160614A1 (zh) Method and apparatus for constructing a medical entity relationship graph, method and apparatus for medical order quality control, device, and medium
Rader et al. Bias-corrected estimates for logistic regression models for complex surveys with application to the United States’ Nationwide Inpatient Sample
CN112183881A Social-network-based public opinion event prediction method, device and storage medium
Friedrich et al. On the role of benchmarking data sets and simulations in method comparison studies
Keikha et al. DeepLink: A novel link prediction framework based on deep learning
Hong et al. Event2vec: Learning representations of events on temporal sequences
US20200143104A1 (en) Methods for arbitrating online disputes and anticipating outcomes using machine intelligence
Nolan et al. Developing syndromic surveillance to monitor and respond to adverse health events related to psychoactive substance use: methods and applications
Yörük et al. Random sampling in corpus design: Cross-context generalizability in automated multicountry protest event collection
CN114758742A Medical record information processing method, apparatus, electronic device and storage medium
WO2021174923A1 (zh) Concept word sequence generation method, apparatus, computer device and storage medium
US12087442B2 (en) Methods and systems for confirming an advisory interaction with an artificial intelligence platform
CN114360732B Medical data analysis method, apparatus, electronic device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19919839

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19919839

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 04/02/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 19919839

Country of ref document: EP

Kind code of ref document: A1