CN114328864A - Ophthalmic question-answering system based on artificial intelligence and knowledge graph - Google Patents

Ophthalmic question-answering system based on artificial intelligence and knowledge graph

Info

Publication number
CN114328864A
Authority
CN
China
Prior art keywords
question
answer
text data
ophthalmic
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111524880.9A
Other languages
Chinese (zh)
Inventor
陆宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Orbis Technology Co ltd
Original Assignee
Hefei Orbis Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Orbis Technology Co ltd
Priority to CN202111524880.9A
Publication of CN114328864A
Legal status: Pending

Landscapes

  • Eye Examination Apparatus (AREA)

Abstract

The invention provides an ophthalmic question-answering system based on artificial intelligence and a knowledge graph, and relates to the technical field of medical question-answering systems. First, an ophthalmic medical entity recognition model and an ophthalmic medical relation extraction model are trained on labeled ophthalmic medical text data; a knowledge graph is then constructed from the unlabeled ophthalmic medical text data, and a first question-answer sub-model based on the knowledge graph is built on it. A second, image-based question-answer sub-model is then trained on an ophthalmic disease image data set, and the image-based second question-answer sub-model and the knowledge-graph-based first question-answer sub-model are combined to obtain the ophthalmic medical question-answering system based on artificial intelligence and the knowledge graph. The system can perform intelligent question answering and diagnosis from the questions or symptoms input by the user, and the user can input both symptoms and images to obtain a more accurate diagnosis of ophthalmic diseases.

Description

Ophthalmic question-answering system based on artificial intelligence and knowledge graph
Technical Field
The invention relates to the technical field of medical question-answering systems, in particular to an ophthalmic question-answering system based on artificial intelligence and a knowledge graph.
Background
The medical question-answering system can provide relatively reliable medical question-answering and diagnosis services, so that users can obtain answers and diagnoses for related diseases and symptoms through the system without leaving home.
Most existing medical question-answering systems are based on knowledge graphs. For example, given a patient's consultation text and a preset consultation question-answer model, semantic vector retrieval is performed over a preset medical knowledge graph to obtain recall results; a preset ranking model then selects the highest-scoring target recall result among all recall results; answers to the user's question are retrieved from the preset medical knowledge graph by graph retrieval, and the answer that best matches the user's question is determined from all retrieved answers.
Such question-answering and diagnosis methods often cannot diagnose a disease accurately from a single user description alone. Moreover, the symptoms or questions input by users are usually described in non-expert terms, which further reduces the accuracy of the diagnosis and question-answering system. In addition, existing disease diagnosis methods based on medical images lack the user's subjective symptom description, so they cannot diagnose diseases more precisely and cannot provide good follow-up suggestions for users.
Disclosure of Invention
Technical problem to be solved
Aiming at the defects of the prior art, the invention provides an ophthalmic question-answering system based on artificial intelligence and a knowledge graph, which addresses the inaccuracy of existing medical question-answering systems.
(II) technical scheme
In order to achieve the purpose, the invention is realized by the following technical scheme:
in a first aspect, an ophthalmic question-answering system based on artificial intelligence and a knowledge graph is provided, the system comprising:
the text data acquisition module is used for acquiring ophthalmic medical text data; the ophthalmic medical text data comprises labeled text data and unlabeled text data;
an image data acquisition module for acquiring ophthalmic medical image data; the ophthalmic medical image data comprises annotated image data and unlabeled image data;
the first question-answer sub-model building module is used for training with the labeled text data to obtain an entity extraction model and a relation extraction model; extracting structured data from the unlabeled text data by using the entity extraction model and the relation extraction model to construct an ophthalmic medical knowledge graph; and constructing a first question-answer sub-model based on the ophthalmic medical knowledge graph;
the second question-answer sub-model building module is used for training a deep learning network with the labeled image data to obtain a second question-answer sub-model;
the question-answering and diagnosis module is used for outputting a question-answer result through the first question-answer sub-model when only unlabeled text data input by the user are obtained, and, when unlabeled text data and unlabeled image data input by the user are acquired simultaneously, obtaining the output question-answer result by weighting the question-answer results of the first question-answer sub-model and the second question-answer sub-model.
Further, the entity extraction model and the relationship extraction model both use an open-source pre-trained BERT model.
Further, extracting structured data from the unlabeled text data by using the entity extraction model and the relation extraction model to construct the ophthalmic medical knowledge graph comprises:
performing entity extraction on the unlabeled text data by using the entity extraction model;
performing relation extraction on sentences containing a plurality of entities by using the relation extraction model to obtain structured data;
and constructing the ophthalmic medical knowledge graph based on the obtained structured data.
Furthermore, the IOBES method is adopted for entity labeling of the ophthalmic medical text data, and the relation labeling marks the corresponding entities and the types of relations between them, taking each sentence in the text as a labeling unit.
Further, the labeled image data is an ophthalmic medical image labeled with a disease type.
In a second aspect, there is provided a computer-readable storage medium storing a computer program for ophthalmic question answering, wherein the computer program causes a computer to execute a method comprising:
acquiring ophthalmic medical text data; the ophthalmic medical text data comprises labeled text data and unlabeled text data;
acquiring ophthalmic medical image data; the ophthalmic medical image data comprises annotated image data and unlabeled image data;
training with the labeled text data to obtain an entity extraction model and a relation extraction model; extracting structured data from the unlabeled text data by using the entity extraction model and the relation extraction model to construct an ophthalmic medical knowledge graph; and constructing a first question-answer sub-model based on the ophthalmic medical knowledge graph;
training a deep learning network with the labeled image data to obtain a second question-answer sub-model;
when only unlabeled text data input by a user are acquired, outputting a question-answer result through the first question-answer sub-model; when unlabeled text data and unlabeled image data input by the user are acquired simultaneously, obtaining the output question-answer result by weighting the question-answer results of the first question-answer sub-model and the second question-answer sub-model.
In a third aspect, an electronic device is provided, including:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs comprising instructions for performing the method of:
acquiring ophthalmic medical text data; the ophthalmic medical text data comprises labeled text data and unlabeled text data;
acquiring ophthalmic medical image data; the ophthalmic medical image data comprises annotated image data and unlabeled image data;
training with the labeled text data to obtain an entity extraction model and a relation extraction model; extracting structured data from the unlabeled text data by using the entity extraction model and the relation extraction model to construct an ophthalmic medical knowledge graph; and constructing a first question-answer sub-model based on the ophthalmic medical knowledge graph;
training a deep learning network with the labeled image data to obtain a second question-answer sub-model;
when only unlabeled text data input by a user are acquired, outputting a question-answer result through the first question-answer sub-model; when unlabeled text data and unlabeled image data input by the user are acquired simultaneously, obtaining the output question-answer result by weighting the question-answer results of the first question-answer sub-model and the second question-answer sub-model.
(III) advantageous effects
The invention provides an ophthalmic question-answering system based on artificial intelligence and a knowledge graph. Compared with the prior art, the invention has the following beneficial effects:
(1) the first, knowledge-graph-based question-answer sub-model can diagnose and answer questions about ophthalmic diseases from the symptoms input by the user.
(2) the second, image-based question-answer sub-model can accurately diagnose ophthalmic diseases from fundus medical images.
(3) combining the two sub-models provides a more accurate and reliable answer for an ophthalmic disease based on the medical image and the symptom description input by the user.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic diagram of an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below. It is obvious that the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The embodiments of the present application address the inaccuracy of existing medical question-answering systems by providing an ophthalmic question-answering system based on artificial intelligence and a knowledge graph.
In order to solve the technical problems, the general idea of the embodiment of the application is as follows:
firstly, an ophthalmic medical entity recognition model and an ophthalmic medical relation extraction model are trained on labeled ophthalmic medical text data; the trained models are then used to perform ophthalmic medical entity extraction and relation extraction on a large amount of unlabeled ophthalmic medical text data to obtain structured data. A knowledge graph is constructed from the obtained structured data, and a first question-answer sub-model based on the knowledge graph is built. A second, image-based question-answer sub-model is then trained on an ophthalmic disease image data set, and the image-based second question-answer sub-model and the knowledge-graph-based first question-answer sub-model are combined to obtain the ophthalmic medical question-answering system based on artificial intelligence and the knowledge graph.
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
Example 1:
the invention provides an ophthalmic question-answering system based on artificial intelligence and a knowledge graph, which comprises:
the text data acquisition module is used for acquiring ophthalmic medical text data; the ophthalmic medical text data comprises labeled text data and unlabeled text data;
an image data acquisition module for acquiring ophthalmic medical image data; the ophthalmic medical image data comprises annotated image data and unlabeled image data;
the first question-answer sub-model building module is used for training with the labeled text data to obtain an entity extraction model and a relation extraction model; extracting structured data from the unlabeled text data by using the entity extraction model and the relation extraction model to construct an ophthalmic medical knowledge graph; and constructing a first question-answer sub-model based on the ophthalmic medical knowledge graph;
the second question-answer sub-model building module is used for training a deep learning network with the labeled image data to obtain a second question-answer sub-model;
the question-answering and diagnosis module is used for outputting a question-answer result through the first question-answer sub-model when only unlabeled text data input by the user are obtained, and, when unlabeled text data and unlabeled image data input by the user are acquired simultaneously, obtaining the output question-answer result by weighting the question-answer results of the first question-answer sub-model and the second question-answer sub-model.
The beneficial effect of this embodiment does:
the invention can not only ask and answer according to the characters input by the user, but also diagnose the diseases of the user more accurately according to the symptoms input by the user and the ophthalmology medical images. Compared with the traditional method, the method not only uses the text information, but also uses the image information to realize more accurate and more spectrum-dependent ophthalmology disease diagnosis and question-answering.
Referring to fig. 1, the following describes the implementation of the embodiment of the present invention in detail:
and S1, acquiring the labeled text data through the text data acquisition module for the training of the subsequent entity extraction model and the relationship extraction model.
In a specific implementation, the unlabeled ophthalmic medical text data are first labeled. The entity labeling uses the IOBES scheme, which tags each character as the beginning, inside, or end of an ophthalmic medical entity, as a single-character entity, or as a character outside any entity; the labeled data serve as training data for the ophthalmic medical entity extraction model. For relation extraction labeling, each sentence in the text is taken as a labeling unit, and the corresponding entities and the types of relations between them are marked; the labeled ophthalmic medical text data (labeled text data) serve as training data for the ophthalmic medical relation extraction model.
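As a purely illustrative sketch (not part of the patent text), the IOBES tagging scheme described above can be applied at the character level as follows; the example sentence, the entity spans, and the label names are assumptions chosen for illustration.

```python
# Minimal sketch of character-level IOBES tagging (illustrative only).
# B = begin, I = inside, E = end, S = single-character entity, O = outside.

def iobes_tags(sentence, entities):
    """entities: list of (start, end, label) character spans, end exclusive."""
    tags = ["O"] * len(sentence)
    for start, end, label in entities:
        if end - start == 1:
            tags[start] = f"S-{label}"          # single-character entity
        else:
            tags[start] = f"B-{label}"          # entity start
            for i in range(start + 1, end - 1):
                tags[i] = f"I-{label}"          # entity interior
            tags[end - 1] = f"E-{label}"        # entity end
    return tags

# Hypothetical example: "白内障导致视力模糊" ("cataract causes blurred vision"),
# with a disease span and a symptom span.
sentence = "白内障导致视力模糊"
entities = [(0, 3, "DISEASE"), (5, 9, "SYMPTOM")]
print(list(zip(sentence, iobes_tags(sentence, entities))))
```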
S2, training an entity extraction model and a relation extraction model based on deep learning with the labeled text data through the first question-answer sub-model building module.
In a specific implementation, the entity extraction model uses an open-source pre-trained BERT model and converts the sequence labeling task of entity extraction into a machine reading comprehension task. A downstream task head is added on top of the pre-trained BERT model so that the model can extract entities. In the converted reading comprehension task, a question is generated for a given text, and the answer to the generated question is the ophthalmic medical entity appearing in the target sentence. In the training stage, the generated question and the sentence from the text are input into the entity extraction model together.
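The following is a minimal sketch of this reading-comprehension formulation using the open-source Hugging Face transformers library; the model checkpoint, the question template, the example sentence, and the greedy span decoding are assumptions for illustration, not the patent's implementation, and the question-answering head would still have to be fine-tuned on the labeled ophthalmic text data.

```python
# Sketch: entity extraction cast as machine reading comprehension (MRC).
# A type-specific question is paired with the target sentence, and a BERT
# question-answering head predicts the answer span, i.e. the entity mention.
import torch
from transformers import BertTokenizerFast, BertForQuestionAnswering

model_name = "bert-base-chinese"  # hypothetical choice of pre-trained BERT
tokenizer = BertTokenizerFast.from_pretrained(model_name)
model = BertForQuestionAnswering.from_pretrained(model_name)  # QA head still needs fine-tuning

question = "文中提到了哪些眼科疾病？"            # "Which ophthalmic diseases are mentioned?"
sentence = "患者主诉视力模糊，诊断为白内障。"     # hypothetical target sentence

inputs = tokenizer(question, sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Greedy span decoding: the tokens between the most probable start and end
# positions are taken as the extracted entity mention.
start = outputs.start_logits.argmax(dim=-1).item()
end = outputs.end_logits.argmax(dim=-1).item()
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1]))
```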
Similarly to entity extraction, the relation extraction model converts the classification task of relation extraction into a machine reading comprehension task, building on the extracted entities. A downstream task head is added on top of the pre-trained BERT model so that the model can extract the relation between known entities.
This way of implementing the two models converts the entity extraction and relation extraction tasks into reading comprehension tasks, in which questions are generated from the target sentences; the models therefore receive prior information with their input and achieve better results when trained on relatively small data sets.
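A corresponding sketch for relation extraction over a known entity pair is given below; the patent does not specify the downstream head, so the sequence-classification formulation, the relation label set, and the question template are assumptions for illustration.

```python
# Sketch: relation extraction framed as classifying a question generated
# from a known entity pair against the sentence that contains the pair.
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

relations = ["causes", "treats", "has_symptom", "no_relation"]  # assumed label set
model_name = "bert-base-chinese"  # hypothetical pre-trained BERT
tokenizer = BertTokenizerFast.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(
    model_name, num_labels=len(relations)
)  # the classification head would be fine-tuned on the labeled relation data

sentence = "白内障会导致视力模糊。"                 # hypothetical sentence with two entities
question = "白内障 和 视力模糊 之间是什么关系？"     # question generated from the entity pair

inputs = tokenizer(question, sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(relations[logits.argmax(dim=-1).item()])
```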
S3, extracting structured data from the unlabeled text data with the entity extraction model and the relation extraction model through the first question-answer sub-model building module to construct the ophthalmic medical knowledge graph, and constructing the first question-answer sub-model based on the ophthalmic medical knowledge graph.
In a specific implementation, entity extraction is first performed with the entity extraction model on the unlabeled ophthalmic medical text data (unlabeled text data) acquired by the text data acquisition module. After the entity extraction step is completed, relation extraction is performed with the relation extraction model on sentences containing two or more entities. After these steps, a large amount of unlabeled ophthalmic medical text data can be converted into triples of the target knowledge graph. The extracted structured data are then stored in a graph database to form the ophthalmic medical knowledge graph, and the knowledge-graph-based ophthalmic medical question-answer sub-model is built on it.
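The patent only states that the structured data are stored in a graph database; the sketch below assumes Neo4j and Cypher, and the node label, relationship types, and example triples are hypothetical. It also shows how the first question-answer sub-model can be reduced to a graph lookup over the stored triples.

```python
# Sketch: storing extracted (head, relation, tail) triples in a graph
# database and answering a symptom query over them (illustrative only).
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

triples = [
    ("白内障", "HAS_SYMPTOM", "视力模糊"),    # hypothetical extracted triples
    ("白内障", "TREATED_BY", "白内障手术"),
]

with driver.session() as session:
    for head, rel, tail in triples:
        session.run(
            "MERGE (h:Entity {name: $h}) "
            "MERGE (t:Entity {name: $t}) "
            f"MERGE (h)-[:{rel}]->(t)",
            h=head, t=tail,
        )
    # Knowledge-graph question answering reduced to a lookup:
    # which diseases have the symptom the user describes?
    result = session.run(
        "MATCH (d:Entity)-[:HAS_SYMPTOM]->(s:Entity {name: $symptom}) "
        "RETURN d.name AS disease",
        symptom="视力模糊",
    )
    print([record["disease"] for record in result])
driver.close()
```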
S4, acquiring the labeled image data through the image data acquisition module for the subsequent training of the second, image-based question-answer sub-model.
In a specific implementation, a professional ophthalmologist labels the unlabeled ophthalmic medical images with disease types.
S5, training a model with the labeled ophthalmic medical images (labeled image data) through the second question-answer sub-model building module; the trained model can diagnose diseases from the ophthalmic medical images input by the user.
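The patent describes this step only as training a deep learning network on the labeled image data; the sketch below assumes a ResNet-18 classifier fine-tuned with PyTorch and torchvision (version 0.13 or later for the weights argument), and the directory layout, batch size, and learning rate are illustrative assumptions.

```python
# Sketch: training an image-based diagnosis model on fundus images
# labeled with disease types (illustrative only).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Assumed layout: fundus_images/<disease_type>/<image>.jpg
train_set = datasets.ImageFolder("fundus_images", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")                      # ImageNet-pretrained backbone
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))    # one output per disease type

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:   # a single epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```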
S6, acquiring the data input by the user through the text data acquisition module and the image data acquisition module, and combining the first question-answer sub-model and the second question-answer sub-model in the question-answering and diagnosis module. When only unlabeled text data input by the user are acquired, the question-answer result is output through the first question-answer sub-model; when unlabeled text data and unlabeled image data input by the user are acquired simultaneously, the question-answer results of the first question-answer sub-model and the second question-answer sub-model are weighted to obtain the output result. Of course, a disease can also be diagnosed from image data alone. The system thus realizes intelligent question answering and diagnosis from the questions or symptoms input by the user, and the user can input both symptoms and images for a more accurate diagnosis of ophthalmic diseases.
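The patent states only that the two sub-model results are weighted; the fixed weights and the per-disease confidence format in the sketch below are assumptions for illustration.

```python
# Sketch: weighting the knowledge-graph and image sub-model outputs
# into a single question-answer result (illustrative only).
def fuse_answers(kg_scores, image_scores, w_kg=0.4, w_img=0.6):
    """Each argument maps a candidate disease to a confidence in [0, 1]."""
    diseases = set(kg_scores) | set(image_scores)
    fused = {
        d: w_kg * kg_scores.get(d, 0.0) + w_img * image_scores.get(d, 0.0)
        for d in diseases
    }
    best = max(fused, key=fused.get)
    return best, fused

best, scores = fuse_answers(
    {"cataract": 0.7, "glaucoma": 0.2},   # from the knowledge-graph sub-model
    {"cataract": 0.8, "dry eye": 0.1},    # from the image sub-model
)
print(best, scores)
```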
Example 2
In a second aspect, there is provided a computer-readable storage medium storing a computer program for ophthalmic question answering, wherein the computer program causes a computer to execute a method comprising:
acquiring ophthalmic medical text data; the ophthalmic medical text data comprises labeled text data and unlabeled text data;
acquiring ophthalmic medical image data; the ophthalmic medical image data comprises annotated image data and unlabeled image data;
training with the labeled text data to obtain an entity extraction model and a relation extraction model; extracting structured data from the unlabeled text data by using the entity extraction model and the relation extraction model to construct an ophthalmic medical knowledge graph; and constructing a first question-answer sub-model based on the ophthalmic medical knowledge graph;
training a deep learning network with the labeled image data to obtain a second question-answer sub-model;
when only unlabeled text data input by a user are acquired, outputting a question-answer result through the first question-answer sub-model; when unlabeled text data and unlabeled image data input by the user are acquired simultaneously, obtaining the output question-answer result by weighting the question-answer results of the first question-answer sub-model and the second question-answer sub-model.
Example 3
In a third aspect, an electronic device is provided, including:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs comprising instructions for performing the method of:
acquiring ophthalmic medical text data; the ophthalmic medical text data comprises labeled text data and unlabeled text data;
acquiring ophthalmic medical image data; the ophthalmic medical image data comprises annotated image data and unlabeled image data;
training with the labeled text data to obtain an entity extraction model and a relation extraction model; extracting structured data from the unlabeled text data by using the entity extraction model and the relation extraction model to construct an ophthalmic medical knowledge graph; and constructing a first question-answer sub-model based on the ophthalmic medical knowledge graph;
training a deep learning network with the labeled image data to obtain a second question-answer sub-model;
when only unlabeled text data input by a user are acquired, outputting a question-answer result through the first question-answer sub-model; when unlabeled text data and unlabeled image data input by the user are acquired simultaneously, obtaining the output question-answer result by weighting the question-answer results of the first question-answer sub-model and the second question-answer sub-model.
It is to be understood that the computer-readable storage medium and the electronic device provided in the embodiments of the present invention correspond to the above ophthalmic question-answering system based on artificial intelligence and knowledge graph, and the explanation, examples, and beneficial effects of the relevant contents may refer to the corresponding contents in the ophthalmic question-answering system based on artificial intelligence and knowledge graph, which are not described herein again.
In summary, compared with the prior art, the invention has the following beneficial effects:
the ophthalmic disease diagnosis model based on the knowledge graph can diagnose and ask for questions and answers to ophthalmic diseases through symptoms input by a user.
The ophthalmology disease diagnosis model based on the image can accurately diagnose the ophthalmology disease according to the eyeground medical image.
The ophthalmic disease diagnosis system based on the combination of the knowledge graph and the image can carry out more accurate diagnosis and question answering on ophthalmic diseases through two models.
It should be noted that, through the above description of the embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform. With this understanding, the above technical solutions may be embodied in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments. In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (7)

1. An ophthalmic question-answering system based on artificial intelligence and a knowledge graph, the system comprising:
the text data acquisition module is used for acquiring ophthalmic medical text data; the ophthalmic medical text data comprises labeled text data and unlabeled text data;
an image data acquisition module for acquiring ophthalmic medical image data; the ophthalmic medical image data comprises annotated image data and unlabeled image data;
the first question-answer sub-model building module is used for training with the labeled text data to obtain an entity extraction model and a relation extraction model; extracting structured data from the unlabeled text data by using the entity extraction model and the relation extraction model to construct an ophthalmic medical knowledge graph; and constructing a first question-answer sub-model based on the ophthalmic medical knowledge graph;
the second question-answer sub-model building module is used for training a deep learning network with the labeled image data to obtain a second question-answer sub-model;
the question-answering and diagnosis module is used for outputting a question-answer result through the first question-answer sub-model when only unlabeled text data input by the user are obtained, and, when unlabeled text data and unlabeled image data input by the user are acquired simultaneously, obtaining the output question-answer result by weighting the question-answer results of the first question-answer sub-model and the second question-answer sub-model.
2. The artificial intelligence and knowledge-graph based ophthalmic question-answering system according to claim 1, wherein the entity extraction model and the relationship extraction model both use an open-source pre-trained BERT model.
3. The artificial intelligence and knowledge-graph based ophthalmic question-answering system according to claim 1, wherein extracting structured data from the unlabeled text data using the entity extraction model and the relation extraction model to construct the ophthalmic medical knowledge graph comprises:
performing entity extraction on the unlabeled text data by using the entity extraction model;
performing relation extraction on sentences containing a plurality of entities by using the relation extraction model to obtain structured data;
and constructing the ophthalmic medical knowledge graph based on the obtained structured data.
4. The ophthalmic question-answering system based on artificial intelligence and knowledge-graph according to claim 1, wherein the entity labeling of the ophthalmic medical text data employs the IOBES method, and the relation labeling marks the corresponding entities and the types of relations between them, taking each sentence in the text as a labeling unit.
5. The artificial intelligence and knowledge-graph based ophthalmic question-answering system according to claim 1, wherein the labeled image data is an ophthalmic medical image labeled with a disease type.
6. A computer-readable storage medium storing a computer program for ophthalmic question answering, wherein the computer program causes a computer to execute a method comprising:
acquiring ophthalmic medical text data; the ophthalmic medical text data comprises labeled text data and unlabeled text data;
acquiring ophthalmic medical image data; the ophthalmic medical image data comprises annotated image data and unlabeled image data;
training with the labeled text data to obtain an entity extraction model and a relation extraction model; extracting structured data from the unlabeled text data by using the entity extraction model and the relation extraction model to construct an ophthalmic medical knowledge graph; and constructing a first question-answer sub-model based on the ophthalmic medical knowledge graph;
training a deep learning network with the labeled image data to obtain a second question-answer sub-model;
when only unlabeled text data input by a user are acquired, outputting a question-answer result through the first question-answer sub-model; when unlabeled text data and unlabeled image data input by the user are acquired simultaneously, obtaining the output question-answer result by weighting the question-answer results of the first question-answer sub-model and the second question-answer sub-model.
7. An electronic device, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs comprising instructions for performing the method of:
acquiring ophthalmic medical text data; the ophthalmic medical text data comprises labeled text data and unlabeled text data;
acquiring ophthalmic medical image data; the ophthalmic medical image data comprises annotated image data and unlabeled image data;
training with the labeled text data to obtain an entity extraction model and a relation extraction model; extracting structured data from the unlabeled text data by using the entity extraction model and the relation extraction model to construct an ophthalmic medical knowledge graph; and constructing a first question-answer sub-model based on the ophthalmic medical knowledge graph;
training a deep learning network with the labeled image data to obtain a second question-answer sub-model;
when only unlabeled text data input by a user are acquired, outputting a question-answer result through the first question-answer sub-model; when unlabeled text data and unlabeled image data input by the user are acquired simultaneously, obtaining the output question-answer result by weighting the question-answer results of the first question-answer sub-model and the second question-answer sub-model.
CN202111524880.9A 2021-12-14 2021-12-14 Ophthalmic question-answering system based on artificial intelligence and knowledge graph Pending CN114328864A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111524880.9A CN114328864A (en) 2021-12-14 2021-12-14 Ophthalmic question-answering system based on artificial intelligence and knowledge graph

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111524880.9A CN114328864A (en) 2021-12-14 2021-12-14 Ophthalmic question-answering system based on artificial intelligence and knowledge graph

Publications (1)

Publication Number Publication Date
CN114328864A true CN114328864A (en) 2022-04-12

Family

ID=81051033

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111524880.9A Pending CN114328864A (en) 2021-12-14 2021-12-14 Ophthalmic question-answering system based on artificial intelligence and knowledge graph

Country Status (1)

Country Link
CN (1) CN114328864A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115482926A (en) * 2022-09-20 2022-12-16 浙江大学 Knowledge-driven rare disease visual question-answer type auxiliary differential diagnosis system and method
CN115482926B (en) * 2022-09-20 2024-04-09 浙江大学 Knowledge-driven rare disease visual question-answer type auxiliary differential diagnosis system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination