CN116580801A - Ultrasonic inspection method based on large language model - Google Patents

Ultrasonic inspection method based on large language model

Info

Publication number
CN116580801A
CN116580801A (application CN202310553078.5A)
Authority
CN
China
Prior art keywords
information
scanning
image
language model
patient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310553078.5A
Other languages
Chinese (zh)
Inventor
黄孟钦
朱瑞星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Shenzhi Information Technology Co ltd
Original Assignee
Shanghai Shenzhi Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Shenzhi Information Technology Co ltd filed Critical Shanghai Shenzhi Information Technology Co ltd
Priority to CN202310553078.5A priority Critical patent/CN116580801A/en
Publication of CN116580801A publication Critical patent/CN116580801A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00 ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/60 ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/237 Lexical tools
    • G06F40/242 Dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/295 Named entity recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Computational Linguistics (AREA)
  • Physics & Mathematics (AREA)
  • Epidemiology (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Primary Health Care (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The invention relates to the technical field of ultrasound examination, and in particular to an ultrasound examination method based on a large language model, comprising the following steps: step S1: for a patient to be examined, extracting the patient's self-description information and historical visit information as pre-diagnosis information; step S2: processing the pre-diagnosis information to obtain input information; step S3: inputting the input information into an external large language model to obtain a scanning suggestion fed back by the model, the scanning suggestion being used to perform an ultrasound scan of the patient to be examined and obtain a scan image. Beneficial effects: corresponding scanning suggestions are generated by introducing an external large language model to guide the physician's scan. To adapt better to the large language model, the method further processes the patient's self-description information and historical visit information, converting them into input information from which the model can produce more accurate feedback, thereby achieving better quality control of the scanning process.

Description

Ultrasonic inspection method based on large language model
Technical Field
The invention relates to the technical field of ultrasonic inspection, in particular to an ultrasonic inspection method based on a large language model.
Background
Ultrasound image recognition is a technique applied in ultrasound medical examination. In traditional ultrasound examinations such as B-mode ultrasound, the resulting grayscale images are strongly affected by the operator's technique, the patient's physical characteristics, and other factors, so ultrasound image reading depends heavily on the physician's experience and imposes no small burden on physicians. Consequently, using artificial intelligence to identify and mark lesions and abnormal organ regions in ultrasound images, thereby improving physicians' reading efficiency, is becoming a mainstream technical direction. A large language model (Large Language Model, LLM) is a deep learning model trained on large amounts of text data; it can generate natural language text or understand the meaning of language text, and can handle multiple natural language tasks such as text classification, question answering, and dialogue. Compared with traditional artificial intelligence models for natural language processing (Natural Language Processing, NLP), large language models have more complex structures and far larger parameter counts, such as the GPT-4 model released by OpenAI, and can therefore be used to accomplish more complex language understanding and processing tasks.
In the prior art, there have been proposals to employ artificial intelligence techniques in the ultrasound examination process. Such schemes train a corresponding artificial intelligence model in advance for the part to be scanned, and use it to segment the image obtained by ultrasound scanning, identify lesions, and assist the physician in the diagnosis process.
However, in practice the inventors found that although the above scheme can help a less experienced physician quickly locate a suspicious lesion area in the image, it only recognizes the image result after scanning, and cannot effectively overcome defects introduced by the physician's scanning technique during the scanning process.
Disclosure of Invention
In order to solve the above problems in the prior art, an ultrasonic inspection method based on a large language model is provided.
The specific technical scheme is as follows:
an ultrasound inspection method based on a large language model, comprising:
step S1: for a patient to be examined, extracting the patient's self-description information and historical visit information as pre-diagnosis information;
step S2: processing the pre-diagnosis information to obtain input information;
step S3: inputting the input information into an external large language model to obtain a scanning suggestion fed back by the large language model, the scanning suggestion being used to perform an ultrasound scan of the patient to be examined and obtain a scan image.
On the other hand, after executing the step S3, the method further includes:
step S4: scanning the patient to be examined and generating feedback information;
step S5: adjusting the scanning method according to the feedback information, and returning to step S4 until the scanning is finished.
On the other hand, in step S1, the historical visit information includes the electronic medical record and test reports of the patient to be examined;
the step S2 includes:
step S21: preprocessing the historical visit information to obtain preprocessed information;
step S22: extracting keywords from the preprocessed information to obtain a plurality of target keywords describing the patient to be examined;
step S23: splicing the target keywords to obtain the input information corresponding to the large language model.
On the other hand, the step S21 includes:
step S211: performing dictionary matching on the historical visit information to correct misspelled content therein and obtain corrected information;
step S212: performing semantic recognition on the corrected information to remove repeated and erroneous content and obtain deduplicated information;
step S213: formatting the deduplicated information to obtain the preprocessed information.
On the other hand, the historical visit information further includes historical examination images; in this case an image-to-text conversion process is performed before step S21 is executed, where the process includes:
step A11: acquiring the historical examination image and judging its image type to determine the scanned part corresponding to the image;
step A12: invoking a corresponding morphological model according to the scanned part to perform image recognition on the historical examination image and obtain at least one key region in the historical image;
the key region includes a lesion region and a suspected-lesion region;
step A13: invoking a description template according to the key region and the scanned part, assembling it to obtain a description sentence corresponding to the historical examination image, replacing the historical examination image with the description sentence, and then executing step S21.
On the other hand, the step S22 includes:
step S221: performing word segmentation on the deduplicated information to obtain a plurality of word segmentation results;
step S222: performing named entity matching on the word segmentation results to obtain a plurality of named entities;
step S223: counting the named entities, determining keywords according to the statistics, and outputting them as the target keywords.
On the other hand, the step S4 includes:
step S41: acquiring the input information and the scanning suggestion, and comparing them with a pre-constructed knowledge graph to obtain an expected scanning result;
step S42: scanning the patient based on the scanning suggestion to obtain a scan image;
step S43: performing lesion recognition on the scan image, and comparing the recognition result with the expected scanning result to obtain the feedback information.
On the other hand, in the process of executing the step S4, the method further includes:
step B1: acquiring the scanning record produced while the patient to be examined is scanned;
step B2: inputting the scanning record and the scanning suggestion into the large language model to obtain a scanning quality control suggestion fed back by the large language model.
The technical scheme has the following advantages or beneficial effects:
aiming at the problem that the ultrasonic examination process in the prior art is easily influenced by a doctor scanning method, the scheme generates corresponding scanning advice by introducing an external large-scale language model so as to guide a doctor to scan. In order to realize better adaptation of the large-scale language model, the large-scale language model can accurately understand the actual condition of a patient, and the scheme also converts the self-description information and the historical treatment information of the patient into input information which can obtain more accurate feedback from the large-scale language model, so that better quality control of the scanning process is realized.
Drawings
Embodiments of the present invention will now be described more fully with reference to the accompanying drawings. The drawings, however, are for illustration and description only and are not intended as a definition of the limits of the invention.
FIG. 1 is an overall schematic of an embodiment of the present invention;
FIG. 2 is a schematic diagram of step S4 in an embodiment of the present invention;
FIG. 3 is a schematic diagram showing the sub-steps of step S2 in the embodiment of the invention;
FIG. 4 is a schematic diagram showing the substep of step S21 in the embodiment of the present invention;
FIG. 5 is a schematic diagram of an image text conversion process according to an embodiment of the present invention;
FIG. 6 is a schematic diagram showing the substep of step S22 in the embodiment of the invention;
FIG. 7 is a schematic diagram of the sub-step S4 in the embodiment of the invention;
fig. 8 is a schematic diagram of a quality control process according to an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other.
The invention is further described below with reference to the drawings and specific examples, which are not intended to be limiting.
The invention comprises the following steps:
an ultrasound inspection method based on a large language model, comprising:
step S1: for a patient to be examined, extracting the patient's self-description information and historical visit information as pre-diagnosis information;
step S2: processing the pre-diagnosis information to obtain input information;
step S3: inputting the input information into an external large language model to obtain a scanning suggestion fed back by the large language model, the scanning suggestion being used to perform an ultrasound scan of the patient to be examined and obtain a scan image.
Specifically, aiming at the problem that the ultrasound examination process in the prior art is easily affected by the physician's scanning technique, this scheme introduces an external large language model to generate corresponding scanning suggestions that guide the physician's scan. To adapt better to the large language model, so that it can accurately understand the patient's actual condition, the scheme also converts the patient's self-description information and historical visit information into input information from which the large language model can produce more accurate feedback, thereby achieving better quality control of the scanning process.
In the implementation process, the self-description information refers to information obtained by the physician through questioning the patient to be examined during the current examination; it is mainly a text paragraph produced by speech-to-text conversion (STT), and may also be notes made by the physician during the consultation. The historical visit information is related information previously recorded in the hospital information system, including the patient's electronic medical record, test reports, historical examination data, and the like, and may also include information such as workflow, image storage, video storage, measurements, and annotations. The input information is text obtained by further cleaning the pre-diagnosis information and compiling it according to specific rules; it is well adapted to the large language model, so that the model can accurately understand its meaning and produce a correct reply. The large language model mainly refers to an artificial intelligence model trained in advance on a corresponding training set, which can understand text in the medical field and generate feedback information such as scanning suggestions. The large language model can be an existing public cloud-based model, such as the GPT series provided by OpenAI, or a locally deployed large language model built with a specific medical dataset. The scanning suggestion refers to the text information output by the large language model.
In one embodiment, the scanning suggestion includes: scanning purpose: understanding the purpose of the patient's visit, for example whether a tumor, cyst, or the like is to be detected; scanning range: determining the range to be scanned, including the organs and parts to be examined; scanning method: the method to be adopted, such as conventional ultrasound or color Doppler ultrasound, determined according to the scanning purpose and the scanning range; scanning parameters: setting appropriate parameters such as frequency, gain, and depth according to the scanning method and the specific situation; scanning sequence: determining the order of scanning according to the scanning range and purpose, ensuring a comprehensive and systematic scan; scanning evaluation: evaluating, analyzing, and judging the scanning result, and giving corresponding diagnosis suggestions and treatment plans. Each of these items can be requested from the large language model by adding corresponding prompt words.
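As a minimal sketch of how such prompt words might be appended to the input information to request each field of the scanning suggestion (the field names and template wording below are illustrative assumptions; the patent does not fix an exact prompt format):

```python
# Hypothetical sketch: assemble a prompt asking an external LLM to return
# the six scan-suggestion fields listed above. The wording is illustrative.
SCAN_FIELDS = [
    "scanning purpose",
    "scanning range",
    "scanning method",
    "scanning parameters",
    "scanning sequence",
    "scanning evaluation",
]

def build_scan_prompt(input_information: str) -> str:
    """Append prompt words requesting each scan-suggestion field."""
    header = "Patient summary: " + input_information
    requests = "\n".join(
        f"- Provide the {field} for the ultrasound examination."
        for field in SCAN_FIELDS
    )
    return header + "\nPlease reply with:\n" + requests

prompt = build_scan_prompt("female, 45, palpable breast lump, no prior surgery")
```

The resulting string would then be sent to the external model; the reply is parsed back into the six fields.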
In one embodiment, as shown in fig. 2, after performing step S3, the method further includes:
step S4: scanning the patient to be examined and generating feedback information;
step S5: adjusting the scanning method according to the feedback information, and returning to step S4 until the scanning is finished.
Specifically, to achieve a better scanning effect, in this embodiment the patient is scanned according to the scanning suggestion after it is obtained; feedback information is generated in real time during the scanning process and returned to the large language model, which adjusts the scanning method until the scanning is finished, thereby achieving better control of the scanning process.
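The S4/S5 loop can be sketched as a simple control loop. The functions `scan_once` and `llm_adjust` below are hypothetical stand-ins for the real scanner interface and the LLM call, included only so the loop structure can be exercised:

```python
# Minimal sketch of the S4/S5 feedback loop with toy stand-in functions.
def run_scan_loop(method: str, max_rounds: int = 5) -> list[str]:
    history = []
    for _ in range(max_rounds):
        feedback = scan_once(method)           # step S4: scan, get feedback
        history.append(feedback)
        if feedback == "match":                # expected result reached
            break
        method = llm_adjust(method, feedback)  # step S5: adjust the method
    return history

# Toy stand-ins so the loop can run end to end:
def scan_once(method: str) -> str:
    return "match" if method == "color-doppler" else "mismatch"

def llm_adjust(method: str, feedback: str) -> str:
    return "color-doppler"

result = run_scan_loop("b-mode")
```

In a real system, `scan_once` would perform lesion recognition on the live image and `llm_adjust` would query the large language model with the feedback information.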
In one embodiment, in step S1 the historical visit information includes the electronic medical record and test reports of the patient to be examined;
as shown in fig. 3, step S2 includes:
step S21: preprocessing the pre-diagnosis information to obtain preprocessed information;
step S22: extracting keywords from the preprocessed information to obtain a plurality of target keywords describing the patient to be examined;
step S23: splicing the target keywords to obtain the input information corresponding to the large language model.
Specifically, considering that the sources of the pre-diagnosis information are complex and voluminous, including the patient's oral account, records transcribed by the physician, test reports, and the past medical history in the electronic medical record, a large language model that receives this material directly may easily misunderstand its semantics and fail to produce the scanning suggestion the user expects. To address this problem, in this embodiment the pre-diagnosis information is first preprocessed: erroneous and redundant parts are corrected or removed to obtain cleaned preprocessed information. Then, from this large volume of preprocessed information, keywords are extracted by statistical and recognition methods to obtain target keywords that can describe the patient to be examined. Finally, the target keywords are spliced according to specific word orders, rules, and templates, and modified accordingly to obtain the input information. This input information enables the large language model to better understand the patient's current condition and the part requiring ultrasound examination, and then give a corresponding scanning suggestion.
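A minimal sketch of the splicing in step S23, assuming a simple slot template (the slot names and template text are illustrative assumptions, not part of the patent):

```python
# Hypothetical splice of target keywords into input information (step S23).
# The keyword grouping and template wording are assumptions for illustration.
def splice_keywords(keywords: dict[str, list[str]]) -> str:
    parts = []
    if keywords.get("symptom"):
        parts.append("Symptoms: " + ", ".join(keywords["symptom"]))
    if keywords.get("history"):
        parts.append("History: " + ", ".join(keywords["history"]))
    if keywords.get("site"):
        parts.append("Examination site: " + ", ".join(keywords["site"]))
    return "; ".join(parts) + "."

info = splice_keywords({
    "symptom": ["palpable lump", "local tenderness"],
    "history": ["fibroadenoma 2019"],
    "site": ["left breast"],
})
```

Grouping keywords into labeled slots gives the spliced text a fixed word order, which is what lets the model map each fragment to the patient attribute it describes.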
In one embodiment, as shown in fig. 4, step S21 includes:
step S211: performing spelling correction on the pre-diagnosis information to correct misspelled content and obtain corrected information;
step S212: performing semantic recognition on the corrected information to remove repeated and erroneous content and obtain deduplicated information;
step S213: formatting the deduplicated information to obtain the preprocessed information.
Specifically, for pre-diagnosis information that includes the patient's self-description, in order to achieve a relatively accurate preprocessing effect, in this embodiment dictionary matching is first performed on the pre-diagnosis information to correct spelling errors that may arise during speech-to-text conversion, yielding the corrected information; this correction can be implemented with an edit-distance-based algorithm. Semantic recognition is then performed on the corrected information to judge whether useless, repeated, erroneous, or otherwise redundant content exists, so that the information is cleaned and repeated records do not affect the subsequent keyword extraction. Next, the deduplicated information is formatted: for example, dates are uniformly converted to the YYYY-MM-DD form; the patient's past medical history is reordered chronologically so that it exhibits context-related features in the text; different symptom descriptions and medical-history items are grouped into categories; and test reports are classified by examination item. In this way, the content of the finally assembled preprocessed information exhibits correlation in the time dimension through its ordering, and correlation between categories and contexts through the positions of different sentences in the text, so that the input information obtained by subsequent processing and assembly can be recognized by the large language model more accurately.
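The edit-distance-based dictionary matching of step S211 can be sketched with the standard Levenshtein algorithm. The three-entry dictionary below is a toy stand-in for a real medical term dictionary:

```python
# Sketch of edit-distance-based dictionary matching (step S211).
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance, one-row variant."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,         # deletion
                dp[j - 1] + 1,     # insertion
                prev + (ca != cb)  # substitution (free if chars match)
            )
    return dp[len(b)]

def correct(word: str, dictionary: list[str], max_dist: int = 2) -> str:
    """Replace a word with its nearest dictionary term, if close enough."""
    best = min(dictionary, key=lambda t: edit_distance(word, t))
    return best if edit_distance(word, best) <= max_dist else word

terms = ["mammary", "cyst", "doppler"]  # toy medical dictionary
fixed = correct("dopler", terms)
```

The `max_dist` threshold keeps genuinely out-of-vocabulary words unchanged instead of forcing a bad match.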
In one embodiment, the historical visit information further includes historical examination images; in this case an image-to-text conversion process is performed before step S21, as shown in fig. 5, where the process includes:
step A11: acquiring the historical examination image and judging its image type to determine the scanned part corresponding to the image;
step A12: invoking a corresponding image recognition model according to the scanned part to perform image recognition on the historical examination image and obtain at least one key region in the historical image;
the key region includes a lesion region and a suspected-lesion region;
step A13: invoking a description template according to the key region and the scanned part, assembling it to obtain a description sentence corresponding to the historical examination image, replacing the historical examination image with the description sentence, and then executing step S21.
Specifically, because the sources of the pre-diagnosis information are relatively complex, and so that the large language model can accurately establish the association between the input image and the other, textual information, in this embodiment the historical examination images are converted to text. In this process, image feature recognition and segmentation are first performed on the historical examination image, which is then matched against different types of scan images to judge the scanned part it corresponds to; for that scanned part, at least one key region in the historical examination image is determined by invoking the corresponding image recognition model. This process uses a deep learning network for medical image position recognition to identify the previously input historical examination image, removing redundant parts including frames, marks, and background, and retaining only the region of interest; a deep learning network for image lesion recognition and analysis then further identifies the region of interest and extracts the key region associated with the lesion. Based on the key region and the scanned part, a corresponding description template can then be invoked and assembled into a sentence such as "on date A, part K was scanned in mode L; lesions Y and Z are present in image group X, distributed at positions M and N", thereby realizing the textual description of the image.
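The template assembly of step A13 might look like the following sketch; the template sentence mirrors the example in the paragraph above, and the field names are assumptions:

```python
# Hypothetical assembly of a description sentence for a historical
# examination image (step A13). Fields mirror the example in the text.
TEMPLATE = ("On {date}, the {part} was scanned in {mode} mode; "
            "{lesions} present in image group {group}, "
            "distributed at {positions}.")

def describe_image(date, part, mode, group, lesions, positions):
    verb = " lesions are" if len(lesions) > 1 else " lesion is"
    return TEMPLATE.format(
        date=date, part=part, mode=mode, group=group,
        lesions=" and ".join(lesions) + verb,
        positions=" and ".join(positions),
    )

sentence = describe_image(
    "2023-04-01", "left breast", "B", "X",
    ["Y", "Z"], ["M", "N"],
)
```

The sentence then replaces the image in the pre-diagnosis information, so the downstream pipeline handles only text.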
In an implementation, the medical image position recognition deep learning network includes, arranged in sequence: a convolutional layer, for extracting features in the image, including edges, textures, and so on; a pooling layer, for reducing the image size, thereby reducing the number of network parameters and preventing overfitting; a batch normalization layer, for normalizing the input data and accelerating network training; and a fully connected layer, for mapping the extracted features to the position coordinates of the region of interest. Through these sequentially arranged layers, image features are extracted and recognized, and the position of the region of interest in the actual scan image is determined. The image lesion recognition and analysis deep learning network includes, in sequence: a convolutional layer, for extracting features in the image, including edges, textures, and so on; a pooling layer, for reducing the image size, thereby reducing the number of network parameters and preventing overfitting; a batch normalization layer, for normalizing the input data and accelerating network training; a residual block, for alleviating network degradation and accelerating training; a pyramid pooling layer, for extracting features at different scales and improving the robustness of the network; and a classifier, for mapping the extracted features to the type and position of the lesion. Through this structure, the lesion position within the region of interest is located and marked.
In one embodiment, as shown in fig. 6, step S22 includes:
step S221: performing word segmentation on the deduplicated information to obtain a plurality of word segmentation results;
step S222: performing named entity matching on the word segmentation results to obtain a plurality of named entities;
step S223: counting the named entities, determining keywords according to the statistics, and outputting them as the target keywords.
Specifically, to better generate the input information, in this embodiment word segmentation and named entity recognition are further performed on the deduplicated information to determine named entities that may describe the patient's condition, such as disease names, drug names, and symptom names. Then, over the segmented text with its named entities, keywords such as disease names, symptom descriptions, and treatment plans are further counted and determined, and output as the target keywords, achieving a better keyword extraction effect.
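Steps S221 to S223 can be sketched with a dictionary-based entity matcher and a frequency count. The entity dictionary and the `min_count` threshold are illustrative assumptions (a real system would use a trained NER model and a medical lexicon):

```python
# Sketch of steps S221-S223: word segmentation, dictionary-based named
# entity matching, and frequency statistics.
from collections import Counter

ENTITY_DICT = {  # toy stand-in for a medical entity dictionary
    "nodule": "symptom", "tenderness": "symptom",
    "mastitis": "disease", "tamoxifen": "drug",
}

def extract_target_keywords(text: str, min_count: int = 2) -> list[str]:
    tokens = text.lower().replace(",", " ").split()     # S221: segmentation
    entities = [t for t in tokens if t in ENTITY_DICT]  # S222: NER matching
    counts = Counter(entities)                          # S223: statistics
    return [w for w, c in counts.most_common() if c >= min_count]

text = ("Patient reports a nodule, nodule confirmed on palpation, "
        "mild tenderness, history of mastitis")
keywords = extract_target_keywords(text)
```

Counting keeps only entities mentioned repeatedly, which is one simple way to separate the salient complaints from incidental mentions.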
In one embodiment, as shown in fig. 7, step S4 includes:
step S41: acquiring the input information and the scanning advice, and comparing them with a pre-constructed knowledge graph to obtain an expected scanning result;
step S42: scanning the patient based on the scanning advice to obtain a scanned image;
step S43: performing lesion recognition on the scanned image, and comparing the recognition result with the expected scanning result to obtain feedback information.
Specifically, in order to achieve better control of the overall scanning process, in this embodiment the input information and the scanning advice are compared with a knowledge graph built in advance from an expert knowledge base, so as to predict the expected scanning result that a typical case would show during the scan, such as the lesion morphology, position range, and size range at a given location. Meanwhile, during the scan, the real-time scanned image is analyzed to judge whether the lesion obtained by the current scan is similar to the predicted result, thereby obtaining feedback information. The feedback information can then be used to further prompt the large language model for feedback processing, realizing full-process control of the scan.
For example, a doctor is performing a breast ultrasound examination on a patient whose case history indicates symptoms of a breast tumor. During the scan, the interaction module can collect the user's communication information together with the image information, analyze it through the LLM module, and propose related suggestions and protocols, such as using a specific probe, increasing the scanning time and depth, or adjusting the scanning direction, in order to detect the location and nature of the tumor more accurately. If the doctor indicates that an abnormal nodule is found during the scan, the interaction module can further query detailed information about the nodule, such as its size, shape, location, boundaries, and internal structure, to determine its nature more accurately, and propose relevant suggestions and protocols, such as advising a further axillary lymph node scan or other examinations such as a tissue biopsy.
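Steps S41 and S43 can be read as a lookup-then-compare loop. In this sketch the `KNOWLEDGE_GRAPH` dictionary and its attribute ranges are invented placeholders for the pre-constructed expert knowledge graph, not clinical reference values:

```python
# Hypothetical knowledge-graph fragment: expected findings per condition keyword.
KNOWLEDGE_GRAPH = {
    "breast tumor": {
        "region": "breast",
        "size_mm": (5.0, 50.0),        # illustrative expected lesion size range
        "shape": {"round", "irregular"},
    },
}

def expected_result(keywords):
    # step S41: look up the expected scanning result for the matched keyword
    for kw in keywords:
        if kw in KNOWLEDGE_GRAPH:
            return KNOWLEDGE_GRAPH[kw]
    return None

def compare_finding(finding, expected):
    # step S43: compare a recognized lesion against the expected scanning result
    feedback = []
    lo, hi = expected["size_mm"]
    if not lo <= finding["size_mm"] <= hi:
        feedback.append(f"size {finding['size_mm']} mm outside expected range {lo}-{hi} mm")
    if finding["shape"] not in expected["shape"]:
        feedback.append(f"unexpected shape '{finding['shape']}'")
    return feedback or ["finding consistent with expected scanning result"]

exp = expected_result(["breast tumor"])
print(compare_finding({"size_mm": 12.0, "shape": "round"}, exp))
print(compare_finding({"size_mm": 80.0, "shape": "spiculated"}, exp))
```

The feedback strings produced here would then be passed back into the large language model as described above.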
In one embodiment, as shown in fig. 8, in the process of executing step S4, the method further includes:
step B1: acquiring a scanning record when a patient to be inspected is scanned;
step B2: and inputting the scanning record and the scanning suggestion into a large language model to obtain the scanning quality control suggestion fed back by the large language model.
Specifically, in order to achieve better quality control of the scanning process, in this embodiment a real-time scanning record is collected while the patient to be inspected is scanned, including information such as scanning parameters, angles, motion trajectories, and real-time imaging results. This record, together with the scanning advice previously provided by the large language model, is fed back into the large language model, so that the model can judge the doctor's current scanning situation, remind the sonographer of missing or incorrect information, provide scanning quality control suggestions, and help the doctor adjust the scanning process.
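Step B2 reduces to assembling the scan record and the earlier scanning advice into one prompt for the model. The function name and record fields below are hypothetical, and the actual LLM call is omitted:

```python
def build_qc_prompt(scan_record, scan_advice):
    # Combine the real-time scanning record with the original scanning advice
    # so the model can flag missing views or incorrect parameters.
    lines = ["You are an ultrasound quality-control assistant.",
             "Original scanning advice:",
             f"  {scan_advice}",
             "Real-time scanning record:"]
    for key, value in scan_record.items():
        lines.append(f"  {key}: {value}")
    lines.append("List any missing views, parameter errors, or deviations "
                 "from the advice, and suggest corrections.")
    return "\n".join(lines)

record = {"probe": "linear 7.5 MHz", "depth_cm": 4.0,
          "angle_deg": 30, "views_completed": ["transverse"]}
prompt = build_qc_prompt(
    record, "Scan both transverse and longitudinal views of the breast.")
print(prompt)
```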
As an alternative implementation, after the scan is finished, the large language model automatically generates a preliminary diagnosis report from information such as the patient's ultrasound images, medical history, and examination results, saving the sonographer's time and improving working efficiency. Meanwhile, based on the patient's ultrasound images and medical history, it provides possible diagnostic suggestions to assist the sonographer in diagnosis.
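The report drafting described above can be sketched as template filling; in the actual method the draft text would come from the large language model rather than a fixed template, and all names here are illustrative:

```python
def preliminary_report(patient, findings, history):
    # Assemble a draft report from scan findings and medical history; in the
    # described flow this draft is generated by the large language model and
    # then reviewed by the sonographer before release.
    body = [f"Preliminary Ultrasound Report - {patient}",
            f"History: {history}",
            "Findings:"]
    body += [f"  - {f}" for f in findings] or ["  - none"]
    body.append("Impression: pending physician review.")
    return "\n".join(body)

report = preliminary_report(
    "Patient A",
    ["hypoechoic nodule, 12 mm, left breast"],
    "palpable breast lump")
print(report)
```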
The foregoing is merely illustrative of the preferred embodiments of the present invention and is not intended to limit the embodiments and scope of the present invention, and it should be appreciated by those skilled in the art that equivalent substitutions and obvious variations may be made using the description and illustrations of the present invention, and are intended to be included in the scope of the present invention.

Claims (8)

1. An ultrasound inspection method based on a large language model, comprising:
step S1: for a patient to be inspected, extracting self-description information and historical visit information of the patient to be inspected as pre-diagnosis information;
step S2: processing the pre-diagnosis information to obtain input information;
step S3: inputting the input information to an external large language model to obtain a scanning suggestion fed back by the large language model, wherein the scanning suggestion is used for carrying out ultrasonic scanning on the patient to be inspected to obtain a scanned image.
2. The ultrasonic inspection method according to claim 1, further comprising, after performing the step S3:
step S4: scanning the patient to be inspected and generating feedback information;
step S5: adjusting the scanning method according to the feedback information, and returning to step S4 until the scanning is finished.
3. The ultrasound examination method according to claim 1, wherein in the step S1, the historical visit information includes an electronic medical record and a test report of the patient to be inspected;
the step S2 includes:
step S21: preprocessing the pre-diagnosis information to obtain preprocessed information;
step S22: extracting keywords from the preprocessed information to obtain a plurality of target keywords for describing the patient to be inspected;
step S23: and splicing the target keywords to obtain the input information corresponding to the large language model.
4. The ultrasonic inspection method according to claim 2, wherein the step S21 includes:
step S211: performing spelling correction on the pre-diagnosis information to correct spelling error information in the pre-diagnosis information so as to obtain correction information;
step S212: performing semantic recognition on the correction information to remove repeated information and erroneous information therein, so as to obtain deduplicated information;
step S213: performing format processing on the deduplicated information to obtain the preprocessed information.
5. The ultrasonic inspection method according to claim 3, wherein the historical visit information further includes a historical examination image, and the method further includes an image-to-text conversion process before performing the step S21, the image-to-text conversion process including:
step A11: acquiring the historical examination image, and judging the image type of the historical examination image to determine a scanning part corresponding to the historical examination image;
step A12: invoking a corresponding image recognition model according to the scanning part to perform image recognition on the historical examination image, so as to obtain at least one key region in the historical examination image;
wherein the key region includes a lesion region and a suspected lesion region;
step A13: invoking a description template according to the key region and the scanning part for assembly, obtaining a description sentence corresponding to the historical examination image, adopting the description sentence as the pre-diagnosis information, and then executing the step S21.
6. The ultrasonic inspection method of claim 3, wherein said step S22 comprises:
step S221: performing word segmentation on the deduplicated information to obtain a plurality of word segmentation results;
step S222: performing named entity matching on the word segmentation results to obtain a plurality of named entities;
step S223: counting the named entities, determining keywords according to the counting result, and outputting them as the target keywords.
7. The ultrasonic inspection method according to claim 2, wherein the step S4 includes:
step S41: acquiring the input information and the scanning advice, and comparing the input information and the scanning advice with a pre-constructed knowledge graph to obtain an expected scanning result;
step S42: scanning the patient based on the scanning advice to obtain a scanning image;
step S43: performing lesion recognition on the scanned image, and comparing the recognition result with the expected scanning result to obtain the feedback information.
8. The ultrasonic inspection method according to claim 2, further comprising, in the process of executing the step S4:
step B1: acquiring a scanning record when the patient to be inspected is scanned;
step B2: and inputting the scanning record and the scanning suggestion into the large language model to obtain a scanning quality control suggestion fed back by the large language model.
CN202310553078.5A 2023-05-16 2023-05-16 Ultrasonic inspection method based on large language model Pending CN116580801A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310553078.5A CN116580801A (en) 2023-05-16 2023-05-16 Ultrasonic inspection method based on large language model

Publications (1)

Publication Number Publication Date
CN116580801A true CN116580801A (en) 2023-08-11

Family

ID=87540807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310553078.5A Pending CN116580801A (en) 2023-05-16 2023-05-16 Ultrasonic inspection method based on large language model

Country Status (1)

Country Link
CN (1) CN116580801A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116913450A (en) * 2023-09-07 2023-10-20 北京左医科技有限公司 Method and device for generating medical records in real time
CN116913450B (en) * 2023-09-07 2023-12-19 北京左医科技有限公司 Method and device for generating medical records in real time


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination