CN117558439A - Question interaction method and device based on large language model and related equipment - Google Patents


Info

Publication number
CN117558439A
CN117558439A (application CN202311378700.XA)
Authority
CN
China
Prior art keywords
question
information
report
inspection
language model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311378700.XA
Other languages
Chinese (zh)
Inventor
贺志阳
程德美
沈静
何双池
杜倩云
李珊珊
胡加学
赵景鹤
鹿晓亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Iflytek Medical Technology Co ltd
Original Assignee
Iflytek Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Iflytek Medical Technology Co ltd filed Critical Iflytek Medical Technology Co ltd
Priority to CN202311378700.XA priority Critical patent/CN117558439A/en
Publication of CN117558439A publication Critical patent/CN117558439A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/205Parsing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models
    • G06N5/041Abduction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H15/00ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Theoretical Computer Science (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The application discloses an inquiry interaction method and device based on a large language model, and related equipment. The method comprises the following steps: acquiring inspection related information of an inspection report of a target object; acquiring, by using a large language model, an analysis report corresponding to the inspection related information; and in response to input first interactive question-answer information on the analysis report, determining, by using the large language model, second interactive question-answer information corresponding to the first interactive question-answer information, so as to conduct an interactive inquiry, and generating an inquiry report based on the interactive inquiry. Through this scheme, medical inquiry efficiency can be improved.

Description

Question interaction method and device based on large language model and related equipment
Technical Field
The application relates to the technical field of medical information processing, and in particular to an inquiry interaction method and device based on a large language model, and related equipment.
Background
With the development of society, people's living standards are continuously improving, more and more people pay attention to physical health, and health management demands are also continuously increasing.
An examination or test report is a reference basis for a doctor to diagnose and treat diseases, and is also a way for a patient to learn about the various examination items and his or her own health. Understanding the various indexes in an inspection report usually requires some medical expertise, so an ordinary patient cannot understand the indexes and their meanings, and it is difficult to obtain accurate and comprehensive interpretation of the inspection report from the Internet or from non-medical professionals. In addition, interpretation, analysis, and inquiry are performed mainly through manual work by medical staff, so a patient cannot obtain interpretation of and inquiry about a test report in time, and medical inquiry efficiency is low.
Disclosure of Invention
The technical problem mainly solved by the present application is to provide an inquiry interaction method and device based on a large language model, and related equipment, which can improve medical inquiry efficiency.
In order to solve the above problem, a first aspect of the present application provides an inquiry interaction method based on a large language model. The method comprises: acquiring inspection related information of an inspection report of a target object; acquiring, by using a large language model, an analysis report corresponding to the inspection related information; and in response to input first interactive question-answer information on the analysis report, determining, by using the large language model, second interactive question-answer information corresponding to the first interactive question-answer information, so as to conduct an interactive inquiry, and generating an inquiry report based on the interactive inquiry. Through this scheme, medical inquiry efficiency can be improved.
In order to solve the above problem, a second aspect of the present application provides an inquiry interaction device based on a large language model. The device includes an acquisition unit, an analysis unit, and an interaction unit. The acquisition unit is configured to acquire inspection related information of an inspection report of a target object; the analysis unit is configured to acquire, by using a large language model, an analysis report corresponding to the inspection related information; and the interaction unit is configured to, in response to input first interactive question-answer information on the analysis report, determine, by using the large language model, second interactive question-answer information corresponding to the first interactive question-answer information, so as to conduct an interactive inquiry, and generate an inquiry report based on the interactive inquiry.
In order to solve the above problem, a third aspect of the present application provides a computer device, which includes a memory and a processor coupled to each other. The memory stores program data, and the processor is configured to execute the program data to implement any step of the above inquiry interaction method based on a large language model.
In order to solve the above problem, a fourth aspect of the present application provides a computer-readable storage medium storing program data executable by a processor, the program data being used for implementing any step of the above inquiry interaction method based on a large language model.
According to the above scheme, the large language model is used to interpret and analyze the acquired inspection related information of the inspection report of the target object, so as to obtain a corresponding analysis report; in response to input first interactive question-answer information on the analysis report, the large language model is used to determine second interactive question-answer information corresponding to the first interactive question-answer information, so as to conduct an interactive inquiry, and an inquiry report is generated based on the interactive inquiry. In this way, interactive question answering can be carried out with a patient; that is, timely interpretation and analysis of the inspection report and interactive inquiry can be provided for the patient, so that medical inquiry efficiency can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
For a clearer description of the technical solutions in the present application, the drawings required in the description of the embodiments are briefly described below. It is obvious that the drawings described below show only some embodiments of the present application, and a person skilled in the art may obtain other drawings from these drawings without inventive effort. In the drawings:
FIG. 1 is a flow chart of a first embodiment of an inquiry interaction method based on a large language model of the present application;
FIG. 2 is a flow chart of an embodiment of step S11 of the present application;
FIG. 3 is an exemplary schematic diagram of an embodiment of an inspection report of the present application;
FIG. 4 is an exemplary schematic diagram of an embodiment of inspection related information of the present application;
FIG. 5 is a flow chart of an embodiment of step S12 of the present application;
FIG. 6 is an exemplary schematic diagram of an embodiment of an analysis report of the present application;
FIG. 7 is a flow chart of an embodiment of step S13 of the present application;
FIG. 8 is an exemplary schematic diagram of an embodiment of an inquiry interaction question of the present application;
FIG. 9 is a flow chart of a second embodiment of the inquiry interaction method based on a large language model of the present application;
FIG. 10 is another flow chart of the second embodiment of the inquiry interaction method based on a large language model of the present application;
FIG. 11 is a schematic structural diagram of an embodiment of an inquiry interaction device based on a large language model of the present application;
FIG. 12 is a schematic diagram of an embodiment of a computer device of the present application;
FIG. 13 is a schematic diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Apparently, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without inventive effort shall fall within the protection scope of the present application.
The terms "first," "second," and the like in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The term "and/or" herein merely describes an association relationship between associated objects, and indicates that three relationships may exist. For example, A and/or B may represent the following three cases: A exists alone, both A and B exist, and B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects. Further, "a plurality of" herein means two or more. In addition, the term "at least one" herein means any one of a plurality of items or any combination of at least two of a plurality of items; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the group consisting of A, B, and C.
The present application provides the following examples, each of which is specifically described below.
Referring to fig. 1, fig. 1 is a flow chart of a first embodiment of an inquiry interaction method based on a large language model according to the present application. The method may include the following steps:
S11: Inspection related information of an inspection report of a target object is acquired.
The inquiry interaction method based on the large language model of this embodiment can be applied to the field of medical inquiry, and relates to inquiry based on inspection reports. The method may be executed by a computer device, and the computer device may be any device having processing capabilities, such as a computer, a mobile phone, or a tablet computer, which is not limited in this application.
The target object may be a person, an animal, or the like. For example, in application scenarios such as hospitals and examination institutions, a person needs to be diagnosed, and the target object may refer to the person; in application scenarios such as pet hospitals and animal hospitals, an animal needs to be diagnosed, and the target object may refer to the animal. The following description takes the target object being a person as an example, which is not limited in this application.
Test reports are reference bases for doctors to diagnose and treat diseases, and are also ways for patients to learn about the various examination and test items and their own health. The test report may be an assay or test report of an examination performed during a medical consultation, a physical examination, or the like; for example, the test report includes, but is not limited to, a physical examination or review report of at least one of the following: blood routine, liver function, lung function, kidney function, gastroscopy, urine routine, electrolytes, pathology, electrocardiogram, ultrasound, nuclear magnetic resonance, CT (Computed Tomography), and X-ray examination.
Inspection related information of the inspection report of the target object may be acquired. For example, a test report generally includes various kinds of information such as index data and medical images, and the information such as the index data and the medical images can be obtained through information extraction.
In some embodiments, the inquiry interaction method of the present application may be implemented by using a large language model (LLM), which refers to a deep learning model trained with a large amount of text data and which can generate natural language text or understand the meaning of language text. A large language model can handle various natural language tasks, such as text classification, question answering, and dialogue, and is an important path toward artificial intelligence. The large language model of the present application includes, for example, an OPT, BLOOM, or LLaMA model, which is not limited in this application.
In some embodiments, the large language model of the present application may include at least an information extraction module, configured to obtain the inspection related information corresponding to the inspection report.
In some embodiments, referring to fig. 2, step S11 of the above embodiment may be further extended. In this embodiment, acquiring the inspection related information of the inspection report of the target object may include the following steps:
S111: An input inspection report of the target object is received.
Optionally, a paper inspection report may be placed at a set position, and an image of the paper inspection report is captured by an image capturing device to obtain the inspection report of the target object. In some application scenarios, the captured image of the inspection report may be snapshotted or cropped to obtain a clearer and more easily recognizable inspection report.
Optionally, the paper inspection report is input into a scanning device to scan the inspection report, so as to obtain the inspection report of the target object.
Optionally, an account login interface may be provided for logging in to an account to obtain a test report in the form of an electronic file. The login account may be an account provided for a patient by an inspection system or patient management system of a medical institution, through which data such as patient medical records and test reports can be viewed. When account information input by a user or patient and an instruction selecting an inspection report are received, the inspection report of the target object is obtained through reading.
It will be appreciated that at least one of the above manners of receiving the input inspection report of the target object may be selected according to the application scenario, which is not limited in this application.
S112: and acquiring the inspection related information of the inspection report of the target object by adopting an information extraction module.
The large language model includes at least an information extraction module, and the information extraction module has information extraction and text recognition capabilities. The information extraction module is adopted to acquire the inspection related information of the inspection report of the target object.
In some embodiments, referring to fig. 3, as an example, the inspection report includes a medical image and/or medical text data. The medical image is, for example, a CT image, an electrocardiogram, or an X-ray image, and the medical text data is, for example, blood routine, liver function, lung function, kidney function, or urine routine data. It will be appreciated that the inspection report of the present application is merely an example; in some application scenarios, the inspection report may also include indexes and data of other inspection items, which is not limited in this application.
In some embodiments, the format of the inspection report is an image format or an electronic file format. In the image format, the inspection report is, for example, a scanned picture, a screenshot, or a photograph. In the electronic file format, for example, the inspection report of the electronic file is read, and various index data can be directly acquired.
In some embodiments, when the test report includes medical text data and the format of the test report is an image format, an information extraction module (such as an optical character recognizer) is adopted to perform text recognition on the medical text data of the test report to obtain text recognition information, that is, index data such as test data and normal data ranges of each test item on the physical examination report. Feature coding is then performed on the text recognition information to obtain a text feature vector as the inspection related information corresponding to the medical text data.
Referring to fig. 4, as an example, the inspection related information may include information such as inspection items, various items of inspection data, and normal data ranges. For example, the inspection related information of an inspection report includes:
the test items are: nine items of hyperthyroidism.
Triiodothyronine: results: 1.40; normal range: 1.30-3.10.
Free triiodothyronine: results: 4.46; normal range: 3.10-6.80.
……
Anti-thyroglobulin antibody: results: 17.4; normal range: 0.0-115.0.
Calcitonin: results: 0.92; normal range: female|0.00-6.40.
It will be appreciated that this example is merely a partial example of the inspection related information of an inspection report. Specific inspection related information may be extracted from the graphic-text information, such as the inspection items, medical text data, and medical images, included in the inspection report, which is not limited in this application.
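Textual inspection related information of the kind shown above can be parsed into structured records before feature coding and comparison. The sketch below is illustrative only: the line format and field names are assumptions based on the example lines above, not an interface specified by the application.

```python
import re
from typing import Optional

def parse_item(line: str) -> Optional[dict]:
    """Parse one report line such as
    'Triiodothyronine: Results: 1.40; Normal range: 1.30-3.10.'
    into a structured record; returns None when the line does not match."""
    m = re.match(
        r"(?P<name>[^:]+):\s*Results?:\s*(?P<value>\d+(?:\.\d+)?);\s*"
        r"Normal range:\s*(?:\w+\|)?(?P<low>\d+(?:\.\d+)?)-(?P<high>\d+(?:\.\d+)?)",
        line,
    )
    if m is None:
        return None
    return {
        "name": m.group("name").strip(),
        "value": float(m.group("value")),
        "low": float(m.group("low")),    # lower bound of the normal range
        "high": float(m.group("high")),  # upper bound of the normal range
    }
```

A record parsed this way already carries the normal data range, so a later abnormality check reduces to a simple interval comparison.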
In some embodiments, in response to the format of the inspection report being an electronic file format, an information extraction module (e.g., a text encoder) may be employed to feature encode the medical text data of the inspection report to obtain text feature vectors as inspection related information corresponding to the medical text data.
In some embodiments, when the inspection report includes a medical image, the information extraction module is a visual encoder, and the visual encoder may be used to identify image information of the medical image in the inspection report, and perform feature encoding on the image information to obtain a visual feature vector as inspection related information corresponding to the medical image.
In some embodiments, the large language model of the present application may be a multi-modal large language model, and the information extraction module may include an optical character recognizer (or a text encoder) and a visual encoder, so as to obtain a corresponding visual feature vector and a text feature vector by extracting information of the medical image and the medical text data included in the inspection report, that is, obtain inspection related information corresponding to the inspection report.
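The multi-modal extraction flow described above can be sketched as a simple dispatcher. This is a non-authoritative illustration: the OCR engine, text encoder, and visual encoder are stand-in callables, and the report field names (`format`, `page_image`, `text`, `medical_image`) are invented for the example rather than taken from the application.

```python
from typing import Callable, Dict, List

def extract_inspection_info(
    report: Dict[str, object],
    ocr: Callable[[bytes], str],                   # optical character recognizer (assumed)
    encode_text: Callable[[str], List[float]],     # text encoder (assumed)
    encode_image: Callable[[bytes], List[float]],  # visual encoder (assumed)
) -> Dict[str, List[float]]:
    """Route each modality of the inspection report to its extractor."""
    info: Dict[str, List[float]] = {}
    if report.get("format") == "image" and report.get("page_image") is not None:
        # Image-format report: recognize the text first, then feature-code it.
        info["text_features"] = encode_text(ocr(report["page_image"]))
    elif report.get("text") is not None:
        # Electronic-file report: feature-code the medical text data directly.
        info["text_features"] = encode_text(report["text"])
    if report.get("medical_image") is not None:
        # Medical images (CT, X-ray, electrocardiogram, ...) use the visual encoder.
        info["image_features"] = encode_image(report["medical_image"])
    return info
```

The resulting text and visual feature vectors together form the inspection related information handed to the analysis step.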
S12: and acquiring an analysis report corresponding to the inspection related information by using the large language model.
The large language model can interpret and analyze the inspection related information in combination with medical field expertise. For example, the inspection report generally includes medical text data and/or medical images, and the analysis report can be automatically generated in a deep learning manner in combination with the inspection related information (graphic-text information) of the medical text data and/or the medical images.
When the test report contains medical text data, interpretation analysis can be performed on the text feature vector corresponding to the medical text data; for example, the test data are compared with the normal data ranges, and the report can be automatically generated from the text feature vector in a deep learning manner.
When the inspection report contains a medical image, usually an imaging doctor analyzes the normal or abnormal manifestations in the image by reading the medical image in combination with his or her own experience and knowledge, so as to form an imaging inspection report; the related text description of the medical image in that report is an important basis for clinical diagnosis. Because the language descriptions of normal manifestations in imaging reports are very similar, the report can be automatically generated from the image information of the medical image in a deep learning manner.
In some embodiments, referring to fig. 5, step S12 of the above embodiment may be further extended. Acquiring the analysis report corresponding to the inspection related information by using the large language model may include the following steps:
S121: The inspection analysis module is adopted to analyze the inspection related information according to the medical field expertise to obtain data analysis information.
The large language model includes at least an inspection analysis module, and the inspection analysis module has the capability of applying general expertise and medical expertise. The inspection analysis module can be adopted to analyze the inspection related information according to the general expertise and the medical field expertise to obtain the data analysis information.
In some implementations, the inspection related information includes at least one of a visual feature vector and a text feature vector. The data analysis information includes key information and/or abnormality information of at least one of the visual feature vector and the text feature vector. The text feature vector may be matched with a normal text feature vector to determine the data analysis information, such as the key information and/or the abnormality information; similarly, the visual feature vector may be matched with a normal visual feature vector to determine the data analysis information.
For example, the test data of the medical text data may be compared with the normal data range to determine whether the test data are abnormal; if the test data are not within the normal data range, the test data are abnormal, and the test item, test index, or the like corresponding to the abnormal data is used as the abnormality information. For another example, the image information of the medical image may be compared with a normal medical image to determine whether the medical image is abnormal, for example, by comparing the similarity of the visual feature vectors; if the similarity is smaller than a set similarity, the medical image is abnormal. In this way, the abnormal location, area, and the like can be determined as the abnormality information.
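The two comparisons just described, the interval check on test data and the similarity check on visual feature vectors, can be written down concretely. The sketch below assumes parsed test items with numeric normal ranges and plain list-of-float feature vectors; the similarity threshold is an arbitrary placeholder, not a value specified by the application.

```python
import math
from typing import Dict, List

def flag_abnormal(items: List[Dict]) -> List[str]:
    """Return the names of test items whose value lies outside the normal range."""
    return [it["name"] for it in items
            if not (it["low"] <= it["value"] <= it["high"])]

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def image_abnormal(features: List[float], normal: List[float],
                   threshold: float = 0.9) -> bool:
    """Treat a medical image as abnormal when the similarity between its visual
    feature vector and a normal reference falls below the set threshold."""
    return cosine_similarity(features, normal) < threshold
```

The flagged item names and abnormal image regions then serve as the abnormality information fed into the next step.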
S122: medical knowledge information corresponding to the data analysis information is determined to obtain an analysis report.
Knowledge information related to the matched data analysis information, such as knowledge information related to the key information and/or abnormality information extracted from at least one of the visual feature vector and the text feature vector, is determined according to the general expertise and the medical field expertise, so as to obtain the corresponding medical knowledge information. Then, all the information of the inspection report of the target object is integrated again to output systematic, professional interpretation and analysis content of the inspection report, so as to obtain the analysis report.
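One plausible way to realize this integration step is to assemble the data analysis information and the matched medical knowledge into a single instruction for the large language model to write the report from. The function and prompt wording below are invented for illustration and are not the application's actual interface.

```python
from typing import Dict, List

def build_analysis_prompt(abnormal: List[str], key_info: List[str],
                          knowledge: Dict[str, str]) -> str:
    """Integrate abnormality information, key information, and matched
    medical knowledge into one prompt for the report-writing model."""
    lines = ["Interpret the following inspection findings for the patient:"]
    for name in abnormal:
        # Pair each abnormal item with its matched medical knowledge, if any.
        lines.append(f"- Abnormal: {name}. Background: {knowledge.get(name, 'n/a')}")
    for fact in key_info:
        lines.append(f"- Key finding: {fact}")
    lines.append("Summarize the report and give consultation and medical suggestions.")
    return "\n".join(lines)
```

The model's response to such a prompt would constitute the interpretation and analysis content of the analysis report.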
Referring to fig. 6, as an example, interpretation analysis may be performed on the abnormality information, the key information, and the like in the analysis report, and consultation suggestions, medical suggestions, and the like for the abnormality information and/or the key information may be given. It is to be understood that this example is only a partial example of an analysis report obtained by analyzing and interpreting a test report. A specific analysis report may be obtained by interpreting and analyzing the inspection related information extracted from the graphic-text information, such as the test items, medical text data, and medical images, included in the test report, which is not limited in this application.
S13: and responding to the input first interactive question and answer information of the analysis report, determining second interactive question and answer information corresponding to the first interactive question and answer information by using a large language model so as to conduct question interactive question and answer, and generating a question report based on the question interactive question and answer.
After the analysis report is generated, an inquiry interaction may be performed with the patient, for example, asking whether the patient needs an inquiry or has questions to ask.
The patient can input the first interactive question-answer information on the analysis report, the test report, or the like through text input, voice input, or the like; if the input is a voice input, speech recognition can be performed to obtain the first interactive question-answer information. The second interactive question-answer information corresponding to the first interactive question-answer information is determined by using the large language model, where one of the first interactive question-answer information and the second interactive question-answer information is a question and the other is an answer. In a case where the interactive question-answer information of the answer or question is unclear, the inquiry or answering may be continued. After multiple rounds of the inquiry interaction, the inquiry report is generated based on the inquiry interaction.
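This multi-round flow can be sketched as a small driver loop. Here `ask_model` stands in for a call to the large language model and is an assumption, as is the plain-text report format; the point is only that each first (input) message yields a second (model) message, and the accumulated history becomes the inquiry report.

```python
from typing import Callable, List, Tuple

History = List[Tuple[str, str]]  # (first, second) question-answer pairs

def inquiry_session(ask_model: Callable[[str, History], str],
                    turns: List[str]) -> History:
    """Run a multi-round inquiry interaction: the model sees the prior rounds,
    so unclear questions or answers can be followed up in context."""
    history: History = []
    for first in turns:
        second = ask_model(first, history)  # model reply given the dialogue so far
        history.append((first, second))
    return history

def inquiry_report(history: History) -> str:
    """Generate a plain-text inquiry report from the interaction history."""
    return "\n".join(f"Q: {q}\nA: {a}" for q, a in history)
```

In practice the history would also seed report generation by the model itself; the plain-text join here is only the simplest stand-in.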
In some embodiments, the large language model may be utilized to provide a more comprehensive understanding of the inspection-related information and its analysis report, and to provide personalized active inquiry interaction for problems reflected in the analysis report (e.g., abnormality information) that are of concern to, or need to be noted by, the patient.
In some embodiments, based on the image-text recognition, knowledge application, language understanding and generation, and question-answer interaction capabilities of the large language model, the image-text information of the inspection report may be acquired, the inspection report may be interpreted and analyzed, and personalized inquiry interaction may be actively performed on the inspection report. For example, for a patient whose inspection indexes are normal, report interpretation and analysis are performed and appropriate active health-care inquiry interaction is provided; for a patient whose inspection indexes are abnormal, after report interpretation and analysis, emphasis is placed on analyzing the abnormal points and on active inquiry interaction about related problems and suggestions. In this way, the patient can be helped to understand the content and meaning of the inspection report more systematically and accurately and to seek consultation and treatment in time, so that the workload of doctors can be reduced, the time the patient spends seeking medical care can be shortened, and the efficiency of inspections such as physical examinations and reviews in a medical system, as well as the patient's level of healthy living, can be improved.
In addition, systematic professional interpretation analysis and personalized active inquiry are performed on the inspection report through the large language model, so that the inquiry efficiency and quality for inspection reports are improved and people's ever-growing health-management needs are met. Moreover, active inquiry on inspection reports based on the large language model can promote the construction of an automated and intelligent medical system and improve people's health.
In some embodiments, referring to fig. 7, step S13 of the above embodiment may be further refined. Determining, in response to the input first interactive question-and-answer information on the analysis report, the second interactive question-and-answer information corresponding to the first interactive question-and-answer information by using the large language model may include the following steps:
S131: in response to the input first interactive question-and-answer information on the analysis report, performing semantic understanding on the first interactive question-and-answer information by using the inquiry interaction module to obtain an understood interaction text.
The large language model includes at least an inquiry interaction module. The inquiry interaction module has language understanding and generation and question-answer interaction capabilities, and can perform personalized active inquiry interaction after the inspection-related information of the inspection report has been interpreted and analyzed.
In response to the input first interactive question-and-answer information on the analysis report, the inquiry interaction module applies its language understanding capability to perform semantic understanding on the first interactive question-and-answer information to obtain the understood interaction text.
S132: determining the second interactive question-and-answer information corresponding to the understood interaction text until the inquiry interaction is completed; one of the first interactive question-and-answer information and the second interactive question-and-answer information is a question, and the other is an answer.
The dialogue interaction capability of the inquiry interaction module is used to automatically ask the user questions or answer the user's questions; that is, the inquiry interaction module determines the second interactive question-and-answer information corresponding to the understood interaction text so as to conduct personalized active inquiry interaction on the inspection report until the inquiry interaction is completed. The patient may answer or further question the large language model. In some application scenarios, the large language model may integrate the analysis report of the inspection report with the historical dialogue information to analyze and understand the user's inquiry intent, so as to further personalize the questions or answers until the inquiry interaction terminates. If asked whether there are further questions to consult, the patient may reply that no further consultation is needed. The inquiry interaction may also terminate in response to a user-input instruction to stop (e.g., a button instruction, or an instruction input by text or voice). Alternatively, if the large language model determines that the whole interactive question-and-answer flow has been completed, for example that a diagnosis can be made, it may determine that the inquiry interaction is complete.
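As a minimal illustrative sketch of the inquiry interaction loop of step S13, the following code alternates first and second interactive question-and-answer information until a termination condition is met. The function `llm_reply` merely stands in for the inquiry interaction module of the large language model; its name, signature, and termination phrase are assumptions for illustration, not part of any real model API.

```python
def llm_reply(analysis_report, history, user_input):
    # Placeholder for the inquiry interaction module: a real system would
    # condition the large language model on the analysis report and the
    # dialogue history to infer the user's inquiry intent.
    if "no more questions" in user_input.lower():
        return None  # signals that the inquiry interaction is complete
    return f"Answer based on report '{analysis_report}' to: {user_input}"

def run_inquiry(analysis_report, user_turns):
    """Alternate first/second interactive Q&A information until termination."""
    history = []
    for first_info in user_turns:          # first interactive Q&A information
        second_info = llm_reply(analysis_report, history, first_info)
        if second_info is None:            # inquiry interaction completed
            break
        history.append((first_info, second_info))
    # The inquiry report is generated from the interaction history together
    # with the analysis report, as described above.
    return {"report": analysis_report, "dialogue": history}
```

In use, the loop runs until the patient indicates that no further consultation is needed, after which the accumulated dialogue is packaged into the inquiry report.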
In some implementations, the inquiry interaction process includes at least one of: answering question-and-answer information input by the user; asking and answering questions about the abnormal portion of the analysis report; determining consultation suggestions for the abnormal portion of the analysis report (e.g., consultation advice, treatment advice, improvement advice, matters needing attention, etc.); asking and answering questions based on the inquiry intent determined by integrating the analysis report with the historical interactive question-and-answer information, where the historical interactive question-and-answer information is the interactive question-and-answer information preceding the current exchange; and asking and answering questions about the key information or the normal portion of the analysis report so as to interpret and analyze the normal portion.
The method can acquire information from the images and text of the inspection report, perform systematic professional interpretation analysis based on the acquired inspection report information, and perform personalized active inquiry interaction based on the inspection report information and its interpretation analysis. For example, for a patient whose inspection indexes are normal, report interpretation and analysis are performed and appropriate active health-care inquiry interaction is provided; for a patient whose inspection indexes are abnormal, after report interpretation and analysis, emphasis is placed on analyzing the abnormal points and on active inquiry interaction about related problems and suggestions.
Referring to fig. 8, as an example, during the inquiry interaction, the large language model may automatically output a personalized question to start the inquiry interaction, and the patient may input a relevant answer or question so that the personalized interactive inquiry continues. The large language model integrates the analysis report with the inquiry intent determined from the historical interactive question-and-answer information, so as to answer the patient's questions or respond to the patient's further questions, and so on, until the inquiry interaction terminates.
It will be appreciated that this example is merely a partial example of the inquiry interaction process performed on an inspection report; the specific inquiry interaction may be performed according to the inspection items, the inspection-related information such as medical text data and medical images, the analysis report, and the like contained in the inspection report, which is not limited in this application.
After the inquiry interaction is completed, the inquiry report is generated based on at least one of the interactive question-and-answer information of the inquiry interaction, the analysis report, the inspection-related information of the inspection report, and the like.
In some embodiments, in the above steps S11 to S13, the corresponding function or module of the large language model may be activated by using a corresponding instruction.
Taking a multi-modal large language model as an example, the following description is given. In the first stage, after an image of a physical-examination or review inspection report is input, the image-text recognition capability of the information extraction module of the large language model is activated in response to instruction 1, and the inspection-related information is acquired from the image text of the inspection report. In the second stage, the knowledge application capability of the inspection analysis module of the large language model is activated in response to instruction 2, so that the general-domain knowledge and medical-domain expertise learned by the large language model are used to perform systematic comprehensive interpretation analysis on the inspection-related information acquired in the first stage, and an analysis report is obtained. In the third stage, the language understanding and generation and question-answer interaction capabilities of the inquiry interaction module of the large language model are activated in response to instruction 3, and personalized inquiry interaction is actively performed with the patient based on the inspection-related information of the inspection report, the analysis report, or the patient's needs.
Instruction 1 may indicate to the large language model: [ input image of the inspection report ]. This is now a medical image-text recognition scenario; please recognize the image-text information of the above inspection report image and output [ inspection-related information of the inspection report ]. The image of the inspection report may be, for example, a scanned picture, a screenshot, or a photo of an inspection report such as a blood routine, liver function, lung function, kidney function, gastroscope, urine routine, electrolyte, pathology, electrocardiogram, ultrasound, nuclear magnetic resonance, CT, or X-ray report.
Instruction 2 may indicate to the large language model: this is now a medical scenario; play a specialist doctor, interpret and analyze the following inspection-related information, and output [ analysis report ]. The following is the inspection-related information: [ inspection-related information of the inspection report ].
Instruction 3 may indicate to the large language model: this is now an inspection-report inquiry scenario; play a specialist doctor and perform personalized active inquiry interaction with the patient based on the following inspection-related information and the analysis report of its interpretation. Inspection-related information: [ inspection-related information of the inspection report ]; analysis report of the interpretation: [ analysis report ].
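The three-stage activation described above can be sketched as a simple prompt pipeline. The prompt wording below paraphrases instructions 1 to 3, and the `call_model` helper is a hypothetical placeholder for invoking the multi-modal large language model; neither is a real or prescribed API.

```python
def call_model(prompt):
    # Placeholder: a real system would send the prompt (and any attached
    # report image) to the multi-modal large language model here.
    return f"<output for: {prompt[:40]}>"

def three_stage_pipeline(report_image):
    # Stage 1 (instruction 1): image-text recognition of the inspection report.
    info = call_model(f"Medical image-text recognition scenario. Recognize: {report_image}")
    # Stage 2 (instruction 2): interpretation analysis of the extracted info.
    analysis = call_model(f"Play a specialist doctor. Interpret and analyze: {info}")
    # Stage 3 (instruction 3): personalized active inquiry interaction.
    opening = call_model(f"Inspection-report inquiry scenario. Info: {info}. Analysis: {analysis}")
    return info, analysis, opening
```

Each stage consumes the output of the previous one, mirroring how instruction 2 receives the inspection-related information acquired under instruction 1, and instruction 3 receives both that information and the analysis report.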
According to the above method, the large language model is used to acquire the inspection-related information of the inspection report of the target object and to interpret and analyze it to obtain a corresponding analysis report; the large language model is further used to determine the second interactive question-and-answer information corresponding to the first interactive question-and-answer information so as to conduct the inquiry interaction, and an inquiry report is generated based on the inquiry interaction. An interactive inquiry can thus be conducted with the patient; that is, timely interpretation analysis of the inspection report and interactive inquiry can be provided for the patient, so that the efficiency of medical inquiry can be improved. In addition, the inquiry interaction method does not depend on specific examination or inspection equipment; the acquisition of inspection report data is complete once an inspection report such as a physical-examination or review report has been generated, so the applicability of the interactive inquiry can be improved.
In some embodiments, before the foregoing embodiments are performed, such as step S11 or S12, the model parameters of the large language model may be pre-trained and adjusted, so that the inquiry interaction using the pre-trained, adjusted large language model is more accurate.
Referring to fig. 9, fig. 9 is a flowchart of a second embodiment of the inquiry interaction method based on a large language model according to the present application. The method may comprise the following steps:
S21: adjusting the model parameters of the large language model by using reference data to obtain an adjusted large language model. The large language model includes at least an inspection analysis module and an inquiry interaction module. The reference data are medical-field dialogue data, reference text feature vectors, and reference visual feature vectors; the medical-field dialogue data are used for a first adjustment of the model parameters of the large language model, and the reference text feature vectors and reference visual feature vectors are used for a second adjustment of the large language model after the first adjustment.
Referring to fig. 10, adjusting the model parameters of the large language model by using the reference data may include a first adjustment and a second adjustment. In this process, the adjustment may be fine-tuning, i.e., fine-tuning the model parameters of a large language model (such as OPT, BLOOM, or LLaMA) by, for example, full-parameter fine-tuning or parameter-efficient LoRA fine-tuning. The following description takes a multi-modal large language model as an example.
The large language model includes at least an information extraction module, an inspection analysis module, and an inquiry interaction module. In this embodiment, the information extraction module may include an optical character recognizer, a text encoder, and a visual encoder.
In the first adjustment process of the first stage, the reference data may include medical-field dialogue data, which may include data sets of inquiry questions and answers for interactive inquiry, such as question-and-answer data sets concerning inspection reports, medical text data, medical images, inspection-related information, analysis reports, and the like; a question-and-answer data set includes interactive inquiries for a plurality of inspection reports, for example a plurality of questions per inspection report together with the answer to each question. The text encoder encodes the medical-field dialogue data to obtain text encoding vectors, and the text encoding vectors are used to fine-tune the model parameters of a large language model pre-trained on the general domain (such as OPT, BLOOM, or LLaMA), so that the large language model acquires medical-domain expertise helpful for interpretation analysis of inspection reports and medical-domain dialogue-response capability helpful for personalized active inquiry interaction. The general-domain pre-trained large language model may be obtained by pre-training on a general-domain data set; the pre-training process is not limited in this application.
In the second adjustment process of the second stage, the reference data may include reference text feature vectors and reference visual feature vectors. A set of inspection reports may be obtained, where each inspection report may contain medical text data (a text part) and medical images (an image part). An optical character recognizer performs text recognition on the text part of the inspection report, and the recognized text is then encoded to obtain a reference text feature vector. A visual encoder encodes the image part of the inspection report to obtain a reference visual feature vector. The reference text feature vector and the reference visual feature vector are input to the large language model after the first adjustment and the second adjustment is performed, so that the adjusted multi-modal large language model acquires image-text recognition capability, interpretation analysis capability, and personalized active inquiry capability for inspection reports.
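The two-stage adjustment of step S21 can be sketched as follows. Every function here (`encode_text`, `encode_image`, `fine_tune`) is a toy stand-in: real systems would use a trained text encoder, a visual encoder, and gradient-based fine-tuning (full-parameter or LoRA) of a model such as OPT, BLOOM, or LLaMA, none of which is reproduced here.

```python
def encode_text(text):
    # Stand-in for the text encoder: a real encoder yields dense vectors.
    return [float(ord(c)) for c in text[:4]]

def encode_image(image):
    # Stand-in for the visual encoder applied to the image part of a report.
    return [0.0] * 4

def fine_tune(params, vectors, stage):
    # Stand-in for gradient-based fine-tuning; records how many training
    # items were consumed at each stage instead of updating real weights.
    return dict(params, **{f"stage{stage}_steps": len(vectors)})

def two_stage_adjust(params, dialogue_data, reports):
    # First adjustment: encode medical-field dialogue data and fine-tune.
    text_vecs = [encode_text(d) for d in dialogue_data]
    params = fine_tune(params, text_vecs, stage=1)
    # Second adjustment: reference text + visual feature vectors from the
    # text and image parts of each inspection report.
    ref_vecs = [(encode_text(r["text"]), encode_image(r["image"]))
                for r in reports]
    return fine_tune(params, ref_vecs, stage=2)
```

The ordering matters: the second adjustment operates on the model produced by the first, matching the description that the reference feature vectors are input to the large language model after the first adjustment.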
In some embodiments, the adjusted large language model is used to implement the inquiry interaction method based on a large language model of the first embodiment, for example, to obtain the analysis report corresponding to the inspection-related information by using the large language model as described above; and, in response to the input first interactive question-and-answer information on the analysis report, to determine the second interactive question-and-answer information corresponding to the first interactive question-and-answer information by using the large language model so as to conduct the inquiry interaction, and to generate the inquiry report based on the inquiry interaction.
In some embodiments, the reference text feature vector and the reference visual feature vector of this embodiment may be the image-text feature vectors obtained for the inspection report of the target object in step S11, that is, the text feature vector and the visual feature vector described above. In this embodiment, the optical character recognizer (or the text encoder) and the visual encoder of the large language model may be used to recognize the image-text information of the inspection report, i.e., to obtain the text feature vector and the visual feature vector. Then, the inspection report is interpreted and analyzed by using the general knowledge, the medical dialogue capability, and the professional knowledge learned by fine-tuning the general-domain pre-trained large model on inspection report data. Finally, the language understanding and generation capability and the question-answer interaction capability of the large language model are used to conduct personalized active inquiry interaction with the patient.
According to the above scheme, the capabilities and knowledge of the multi-modal large language model can be fully utilized to interpret and analyze the inspection report and to perform personalized active inquiry interaction on it. By adopting the personalized active inquiry interaction method based on a large language model, timely and systematic professional interpretation analysis of examination and inspection reports and personalized active inquiry interaction can be provided for patients. In addition, the inquiry interaction method can be applied to clinical practice and teaching of medical inquiry, as well as to the construction and popularization of intelligent medical systems, thereby improving the applicability of the inquiry.
In the inquiry interaction process based on the large language model in the above embodiments, if the technical scheme of this application involves personal information, a product applying the technical scheme explicitly announces the personal-information processing rules and obtains the individual's voluntary consent before processing the personal information. If the technical scheme involves sensitive personal information, a product applying the technical scheme obtains the individual's separate consent before processing the sensitive personal information, and at the same time meets the requirement of "explicit consent". For example, a clear and conspicuous sign is set at a personal-information collection device such as a camera to inform the individual that he or she is entering the collection range and that personal information will be collected; if the individual voluntarily enters the collection range, this is regarded as consent to the collection. Alternatively, on a device that processes personal information, given that conspicuous signs or notices are used to announce the personal-information processing rules, personal authorization is obtained through pop-up information or by asking the individual to upload his or her personal information. The personal-information processing rules may include information such as the personal-information processor, the purpose of processing, the processing method, and the types of personal information processed.
Referring to fig. 11, fig. 11 is a schematic structural diagram of an embodiment of the inquiry interaction device based on a large language model of the present application. The inquiry interaction device 30 based on a large language model may include: an acquisition unit 31, an analysis unit 32, and an interaction unit 33, which are connected to one another.
The acquisition unit 31 may be configured to acquire the inspection-related information of the inspection report of the target object.
The analysis unit 32 may be configured to obtain the analysis report corresponding to the inspection-related information by using the large language model.
The interaction unit 33 may be configured to determine, in response to the input first interactive question-and-answer information on the analysis report, the second interactive question-and-answer information corresponding to the first interactive question-and-answer information by using the large language model so as to conduct the inquiry interaction, and to generate the inquiry report based on the inquiry interaction.
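The cooperation of the three units of fig. 11 can be sketched as follows. The class and method names are illustrative assumptions chosen to mirror the acquisition unit 31, analysis unit 32, and interaction unit 33; the bodies are placeholders for the large-language-model calls described above.

```python
class InquiryInteractionDevice:
    """Toy sketch of device 30: three cooperating units in sequence."""

    def acquire(self, report):               # acquisition unit 31
        return f"info({report})"             # inspection-related information

    def analyze(self, info):                 # analysis unit 32
        return f"analysis({info})"           # analysis report via the LLM

    def interact(self, analysis, question):  # interaction unit 33
        return f"answer({analysis}, {question})"  # second interactive Q&A info

    def run(self, report, question):
        info = self.acquire(report)
        analysis = self.analyze(info)
        return self.interact(analysis, question)
```

The `run` method makes explicit the data flow the embodiment describes: inspection report to inspection-related information, to analysis report, to interactive question-and-answer.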
In some embodiments, when the large language model includes at least an information extraction module, the acquisition unit 31 is further configured to receive the input inspection report of the target object, and to acquire the inspection-related information of the inspection report of the target object by using the information extraction module.
The inspection report contains medical images, and the information extraction module is a visual encoder; the acquisition unit 31 is further configured to identify the image information of the medical images in the inspection report by using the visual encoder, and to perform feature encoding on the image information to obtain a visual feature vector as the inspection-related information corresponding to the medical images.
The format of the inspection report is an image format or an electronic file format, and the inspection report contains medical text data. The acquisition unit 31 is further configured to, in response to the format of the inspection report being an image format, perform text recognition on the medical text data of the inspection report by using the information extraction module to obtain text recognition information, and to perform feature encoding on the text recognition information to obtain a text feature vector as the inspection-related information corresponding to the medical text data; or, in response to the format of the inspection report being an electronic file format, to perform feature encoding on the medical text data of the inspection report directly by using the information extraction module to obtain the text feature vector as the inspection-related information corresponding to the medical text data.
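The format branch above can be sketched as a single dispatch: image-format reports pass through text recognition first, while electronic-file reports are feature-encoded directly. `recognize_text` and `encode_features` are hypothetical stand-ins for the optical character recognizer and text encoder of the information extraction module.

```python
def recognize_text(report):
    # Stand-in for OCR on an image-format report; a real recognizer would
    # read the pixels of the report image rather than a stored string.
    return report["content"]

def encode_features(text):
    # Stand-in for the text encoder; word lengths mimic a feature vector.
    return [len(word) for word in text.split()]

def extract_text_features(report):
    if report["format"] == "image":
        text = recognize_text(report)   # OCR path for image-format reports
    else:
        text = report["content"]        # electronic files are encoded directly
    return encode_features(text)        # text feature vector
```

Both branches converge on the same feature-encoding step, which is why the resulting text feature vector can serve as the inspection-related information regardless of the report's original format.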
In some embodiments, when the large language model contains at least an inspection analysis module, the analysis unit 32 is further configured to analyze the inspection-related information according to medical-domain expertise by using the inspection analysis module to obtain data analysis information, and to determine the medical knowledge information corresponding to the data analysis information to obtain the analysis report.
The inspection-related information includes at least one of a visual feature vector and a text feature vector; the data analysis information includes key information and/or abnormality information of at least one of the visual feature vector and the text feature vector.
In some implementations, when the large language model contains at least an inquiry interaction module, the interaction unit 33 is further configured to, in response to the input first interactive question-and-answer information on the analysis report, perform semantic understanding on the first interactive question-and-answer information by using the inquiry interaction module to obtain an understood interaction text, and to determine the second interactive question-and-answer information corresponding to the understood interaction text until the inquiry interaction is completed; one of the first interactive question-and-answer information and the second interactive question-and-answer information is a question, and the other is an answer.
The inquiry interaction process includes at least one of: answering question-and-answer information input by the user; asking and answering questions about the abnormal portion of the analysis report; determining consultation suggestions for the abnormal portion of the analysis report; and asking and answering questions based on the inquiry intent determined by integrating the analysis report with the historical interactive question-and-answer information, where the historical interactive question-and-answer information is the interactive question-and-answer information preceding the current exchange.
In some embodiments, the inquiry interaction device based on a large language model further includes an adjustment unit (not shown) configured to adjust the model parameters of the large language model by using reference data to obtain an adjusted large language model. The adjusted large language model is used to obtain the analysis report corresponding to the inspection-related information, and, in response to the input first interactive question-and-answer information on the analysis report, to determine the second interactive question-and-answer information corresponding to the first interactive question-and-answer information so as to conduct the inquiry interaction and to generate the inquiry report based on the inquiry interaction.
The large language model includes at least an inspection analysis module and an inquiry interaction module. The reference data are medical-field dialogue data, reference text feature vectors, and reference visual feature vectors; the medical-field dialogue data are used for a first adjustment of the model parameters of the large language model, and the reference text feature vectors and reference visual feature vectors are used for a second adjustment of the large language model after the first adjustment.
For the implementation of this embodiment, reference may be made to the implementation process of the foregoing embodiments, which is not repeated here.
For the foregoing embodiments, the present application provides a computer device, please refer to fig. 12, fig. 12 is a schematic structural diagram of an embodiment of the computer device of the present application. The computer device 40 comprises a memory 41 and a processor 42, wherein the memory 41 and the processor 42 are coupled to each other, and program data is stored in the memory 41, and the processor 42 is configured to execute the program data to implement the steps of any embodiment of the above-mentioned query interaction method based on a large language model.
In the present embodiment, the processor 42 may also be referred to as a CPU (Central Processing Unit). The processor 42 may be an integrated circuit chip having signal processing capabilities. The processor 42 may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. The general-purpose processor may be a microprocessor, or the processor 42 may be any conventional processor or the like.
For the method of the above embodiment, which may be implemented in the form of a computer program, the present application proposes a computer readable storage medium, please refer to fig. 13, fig. 13 is a schematic structural diagram of an embodiment of the computer readable storage medium of the present application. The computer readable storage medium 50 stores therein program data 51 capable of being executed by a processor, the program data 51 being executable by the processor to implement the steps of any of the embodiments of the method for interview interactions based on a large language model described above.
The computer readable storage medium 50 of the present embodiment may be a medium that can store the program data 51, such as a USB disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk; or it may be a server storing the program data 51, which may send the stored program data 51 to another device for execution or may run the stored program data 51 itself.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical, or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium, which is a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in part or all or part of the technical solution contributing to the prior art or in the form of a software product stored in a storage medium, including several instructions to cause an electronic device (which may be a personal computer, a server, or a network device, etc.) or a processor (processor) to perform all or part of the steps of the methods of the embodiments of the present application.
It will be apparent to those skilled in the art that the modules or steps of the application described above may be implemented in a general purpose computing device, they may be centralized on a single computing device, or distributed over a network of computing devices, or they may alternatively be implemented in program code executable by computing devices, such that they may be stored in a computer readable storage medium for execution by computing devices, or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps within them may be fabricated into a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
The foregoing description covers only exemplary embodiments of the present application and is not intended to limit the scope of the present application; all equivalent structures or equivalent processes made using the description and drawings of the present application, or their direct or indirect application in other related technical fields, are likewise included in the scope of protection of the present application.

Claims (12)

1. A question interaction method based on a large language model, comprising:
acquiring inspection-related information of an inspection report of a target object;
acquiring, by using a large language model, an analysis report corresponding to the inspection-related information; and
in response to input first interactive question-answer information on the analysis report, determining, by using the large language model, second interactive question-answer information corresponding to the first interactive question-answer information so as to conduct interactive question-answering, and generating a consultation report based on the interactive question-answering.
2. The method of claim 1, wherein the large language model comprises at least an inspection analysis module, and the acquiring, by using the large language model, the analysis report corresponding to the inspection-related information comprises:
analyzing, by the inspection analysis module, the inspection-related information according to medical-field expertise to obtain data analysis information; and
determining medical knowledge information corresponding to the data analysis information to obtain the analysis report.
3. The method of claim 2, wherein the inspection-related information comprises at least one of a visual feature vector and a text feature vector, and the data analysis information comprises key information and/or anomaly information of at least one of the visual feature vector and the text feature vector.
4. The method of claim 1, wherein the large language model comprises at least a consultation interaction module, and the determining, by using the large language model in response to the input first interactive question-answer information on the analysis report, the second interactive question-answer information corresponding to the first interactive question-answer information comprises:
in response to the input first interactive question-answer information on the analysis report, performing semantic understanding on the first interactive question-answer information by using the consultation interaction module to obtain an understood interaction text; and
determining second interactive question-answer information corresponding to the understood interaction text until the interactive question-answering is completed, wherein one of the first interactive question-answer information and the second interactive question-answer information is a question and the other is an answer.
5. The method of claim 4, wherein the interactive question-answering process comprises at least one of the following:
answering question information input by a user;
answering questions about an abnormal part of the analysis report;
determining a consultation suggestion for an abnormal part of the analysis report; and
answering questions based on a consultation intent determined by combining the analysis report with historical interactive question-answer information, wherein the historical interactive question-answer information is the interactive question-answer information preceding the current question-answering.
6. The method of claim 1, wherein the large language model comprises at least an information extraction module, and the acquiring the inspection-related information of the inspection report of the target object comprises:
receiving an input inspection report of the target object; and
acquiring, by the information extraction module, the inspection-related information of the inspection report of the target object.
7. The method of claim 6, wherein the inspection report comprises a medical image and the information extraction module is a visual encoder, and the acquiring, by the information extraction module, the inspection-related information of the inspection report of the target object comprises:
identifying, by the visual encoder, image information of the medical image in the inspection report; and
performing feature encoding on the image information to obtain a visual feature vector as the inspection-related information corresponding to the medical image.
8. The method of claim 6, wherein the inspection report is in an image format or an electronic file format and contains medical text data, and the acquiring, by the information extraction module, the inspection-related information of the inspection report of the target object comprises:
in response to the format of the inspection report being an image format, performing text recognition on the medical text data of the inspection report by using the information extraction module to obtain text recognition information, and performing feature encoding on the text recognition information to obtain a text feature vector as the inspection-related information corresponding to the medical text data; or
in response to the format of the inspection report being an electronic file format, performing feature encoding on the medical text data of the inspection report by using the information extraction module to obtain a text feature vector as the inspection-related information corresponding to the medical text data.
9. The method according to claim 1, further comprising:
adjusting model parameters of the large language model by using reference data to obtain an adjusted large language model, wherein the large language model comprises at least an inspection analysis module and a consultation interaction module; the reference data comprises medical-field dialogue data, reference text feature vectors, and reference visual feature vectors; the medical-field dialogue data is used for performing a first adjustment on the model parameters of the large language model, and the reference text feature vectors and the reference visual feature vectors are used for performing a second adjustment on the large language model after the first adjustment;
wherein the adjusted large language model is used for acquiring the analysis report corresponding to the inspection-related information, and for determining, in response to the input first interactive question-answer information on the analysis report, the second interactive question-answer information corresponding to the first interactive question-answer information so as to conduct interactive question-answering and generate a consultation report based on the interactive question-answering.
10. A question interaction device based on a large language model, comprising:
an acquisition unit, configured to acquire inspection-related information of an inspection report of a target object;
an analysis unit, configured to acquire, by using a large language model, an analysis report corresponding to the inspection-related information; and
an interaction unit, configured to determine, by using the large language model in response to input first interactive question-answer information on the analysis report, second interactive question-answer information corresponding to the first interactive question-answer information so as to conduct interactive question-answering, and to generate a consultation report based on the interactive question-answering.
11. A computer device, comprising a memory and a processor coupled to each other, wherein the memory stores program data, and the processor is configured to execute the program data to implement the steps of the method according to any one of claims 1 to 9.
12. A computer-readable storage medium, storing program data executable by a processor, wherein the program data is used to implement the steps of the method according to any one of claims 1 to 9.
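The pipeline in claims 1 and 6–8 (extract inspection-related information, generate an analysis report with a large language model, then answer follow-up questions and build a consultation report) can be outlined as a minimal sketch. All function and class names below are hypothetical illustrations, not names from the patent, and the two model steps are stubs rather than calls to an actual large language model:

```python
# Illustrative sketch of the claimed question-interaction pipeline.
# The names and string outputs are invented for illustration; a real
# system would back `analyze` and `respond` with a fine-tuned LLM.

from dataclasses import dataclass, field

@dataclass
class InquirySession:
    """Holds the analysis report and the running Q&A history (cf. claim 5)."""
    analysis_report: str
    history: list = field(default_factory=list)

def extract_inspection_info(report: dict) -> dict:
    """Information-extraction step (cf. claims 6-8): route image data to a
    visual encoder and text data to a text encoder (both stubbed here)."""
    info = {}
    if "image" in report:
        info["visual_feature_vector"] = f"vision({report['image']})"
    if "text" in report:
        info["text_feature_vector"] = f"text({report['text']})"
    return info

def analyze(info: dict) -> str:
    """Inspection-analysis step (cf. claim 2): stub for the module that
    maps extracted feature vectors to a readable analysis report."""
    return "Analysis of: " + ", ".join(sorted(info))

def respond(session: InquirySession, question: str) -> str:
    """Consultation-interaction step (cf. claims 1 and 4): stub for the
    module that answers a question in the context of the report."""
    session.history.append(question)
    return f"Answer to '{question}' given [{session.analysis_report}]"

# End-to-end flow of claim 1: report -> extracted info -> analysis -> Q&A.
raw_report = {"image": "chest_xray.png", "text": "WBC 12.3 x10^9/L"}
session = InquirySession(analyze(extract_inspection_info(raw_report)))
answer = respond(session, "Is the white cell count abnormal?")
```

Under these assumptions, the session object accumulates the question history, which is what claim 5's "historical interactive question-answer information" branch would consume when inferring a consultation intent.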
CN202311378700.XA 2023-10-23 2023-10-23 Question interaction method and device based on large language model and related equipment Pending CN117558439A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311378700.XA CN117558439A (en) 2023-10-23 2023-10-23 Question interaction method and device based on large language model and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311378700.XA CN117558439A (en) 2023-10-23 2023-10-23 Question interaction method and device based on large language model and related equipment

Publications (1)

Publication Number Publication Date
CN117558439A true CN117558439A (en) 2024-02-13

Family

ID=89815614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311378700.XA Pending CN117558439A (en) 2023-10-23 2023-10-23 Question interaction method and device based on large language model and related equipment

Country Status (1)

Country Link
CN (1) CN117558439A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117809798A (en) * 2024-03-01 2024-04-02 金堂县第一人民医院 Verification report interpretation method, system, equipment and medium based on large model
CN117809798B (en) * 2024-03-01 2024-04-26 金堂县第一人民医院 Verification report interpretation method, system, equipment and medium based on large model
CN117995334A (en) * 2024-04-07 2024-05-07 北京惠每云科技有限公司 Intelligent interpretation and treatment suggestion method and device based on inspection report
CN118194097A (en) * 2024-05-13 2024-06-14 福建依时利软件股份有限公司 Intelligent laboratory management method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN117558439A (en) Question interaction method and device based on large language model and related equipment
EP2601608B1 (en) Report authoring
CN110675951A (en) Intelligent disease diagnosis method and device, computer equipment and readable medium
US20110245632A1 (en) Medical Diagnosis Using Biometric Sensor Protocols Based on Medical Examination Attributes and Monitored Data
CN109785311B (en) Disease diagnosis device, electronic equipment and storage medium
CN112151155A (en) Ultrasonic image intelligent training method and system based on artificial intelligence and application system
CN115994902A (en) Medical image analysis method, electronic device and storage medium
WO2018211688A1 (en) Computer system, subject diagnosis method, and program
CN115862868A (en) Psychological assessment system, psychological assessment platform, electronic device and storage medium
CN109147927B (en) Man-machine interaction method, device, equipment and medium
CN117237351B (en) Ultrasonic image analysis method and related device
CN117809798A (en) Verification report interpretation method, system, equipment and medium based on large model
CN117524401A (en) Medical report generation method and related device
WO2018179219A1 (en) Computer system, method for diagnosing animal, and program
US20230238151A1 (en) Determining a medical professional having experience relevant to a medical procedure
CN114328864A (en) Ophthalmic question-answering system based on artificial intelligence and knowledge graph
CN115497621A (en) Old person cognitive status evaluation system
Duhaime et al. Nonexpert crowds outperform expert individuals in diagnostic accuracy on a skin lesion diagnosis task
KR20230094778A (en) Method and apparatus for providing patient imformation, computer-readable storage medium and computer program
Rogers Visual interaction: a link between perception and problem-solving
Chin et al. CWD2GAN: Generative adversarial network of chronic wound depth detection for predicting chronic wound depth
CN112801943A (en) Medical image processing method and device
CN117951190B (en) Human body index abnormal data processing method and system based on artificial intelligence
Ribeiro et al. Automated Image Quality and Protocol Adherence Assessment of Examinations in Teledermatology: First Results
US20240197287A1 (en) Artificial Intelligence System for Determining Drug Use through Medical Imaging

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination