WO2019062846A1 - Medical image assisted diagnosis method and system combining image recognition and report editing - Google Patents

Medical image assisted diagnosis method and system combining image recognition and report editing

Info

Publication number
WO2019062846A1
WO2019062846A1 (application PCT/CN2018/108311, CN2018108311W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
lesion
region
interest
medical image
Prior art date
Application number
PCT/CN2018/108311
Other languages
English (en)
French (fr)
Inventor
陶鹏
Original Assignee
北京西格码列顿信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京西格码列顿信息技术有限公司 filed Critical 北京西格码列顿信息技术有限公司
Publication of WO2019062846A1
Priority to US16/833,512 (published as US11101033B2)

Classifications

    • G16H 15/00 ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06N 99/00 Subject matter not provided for in other groups of this subclass
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/0014 Biomedical image inspection using an image reference approach
    • G06T 7/11 Region-based segmentation
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 7/40 Analysis of texture
    • G06V 10/24 Aligning, centring, orientation detection or correction of the image
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Using context analysis; Selection of dictionaries
    • G06V 30/242 Division of the character sequences into groups prior to recognition; Selection of dictionaries
    • G16H 30/20 ICT specially adapted for the handling or processing of medical images, e.g. DICOM, HL7 or PACS
    • G16H 30/40 ICT specially adapted for processing medical images, e.g. editing
    • G16H 50/20 ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • G06T 2200/04 Indexing scheme involving 3D image data
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/10012 Stereo images
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G06T 2207/10132 Ultrasound image
    • G06T 2207/20036 Morphological image processing
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30061 Lung
    • G06T 2207/30064 Lung nodule
    • G06T 2207/30096 Tumor; Lesion
    • G06V 2201/03 Recognition of patterns in medical or anatomical images

Definitions

  • the invention relates to a medical image assisted diagnosis method, in particular to a medical image assisted diagnosis method combining image recognition and report editing, and relates to a corresponding medical image assisted diagnosis system, and belongs to the technical field of medical image assisted diagnosis.
  • the primary technical problem to be solved by the present invention is to provide a medical image-assisted diagnosis method combining image recognition and report editing;
  • Another technical problem to be solved by the present invention is to provide a medical image assisted diagnosis system combining image recognition and report editing.
  • a medical image assisted diagnosis method combining image recognition and report editing, comprising the following steps:
  • S2: acquiring a medical image of the patient, determining a region of interest on the two-dimensional image, and providing candidate lesion options for the patient according to the image semantic expression knowledge map and the region of interest;
  • in step S1, the method further includes the following steps:
  • in step S3, the method further includes the following steps:
  • S301 Perform positioning analysis on the region of interest based on the type of lesion to which the region of interest belongs, calculate a spatial location of the region of interest, and segment the lesion region.
  • in step S3, the method further includes the following steps:
  • S311: performing positioning analysis on the determined region of interest based on the type of lesion to which it belongs, determining the lesion type to which the region of interest belongs, extending the determined region of interest from the two-dimensional image to a three-dimensional image or a two-dimensional dynamic image, and segmenting the lesion region of the overall image.
  • in step S311, the method further includes the following steps:
  • Step 1: based on the type of lesion determined by the expert and the shape and texture features corresponding to that lesion type, the gray values of the two-dimensional image are used and the lesion region is segmented according to the connection relationships of the organs, obtaining, in the cross section of the two-dimensional image, the main closed area of the closed core lesion region corresponding to the lesion region;
  • Step 2: based on the main closed area, extending to the previous and next slices of the spatial sequence of the two-dimensional image, the lesion region is segmented according to the connection relationships of the organs based on the shape and texture features corresponding to the lesion type, obtaining closed areas that conform to the lesion type description;
  • Step 3: the operation of step 2 is continued and a mathematical morphology closing operation in three-dimensional space is performed, removing other areas connected to the closed core lesion region in three-dimensional space until the closed core lesion region no longer grows, and the edge of the closed core lesion region is outlined;
  • Step 4: the maximum and minimum values of the X, Y, and Z coordinates of the edge pixels of the closed core lesion region are calculated, thereby constructing a three-dimensional cube region (see the illustrative sketch below).
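A minimal sketch of step 4, assuming the closed core lesion region is already available as a boolean NumPy volume; the helper name `bounding_cube` and the toy data are illustrative, not taken from the patent.

```python
import numpy as np

def bounding_cube(lesion_mask: np.ndarray):
    """Compute the axis-aligned 3D cube enclosing a binary lesion mask.

    lesion_mask: boolean volume of shape (Z, Y, X) where True marks the
    closed core lesion region (step 4 above).
    Returns (zmin, zmax, ymin, ymax, xmin, xmax).
    """
    coords = np.argwhere(lesion_mask)          # voxel coordinates of the lesion
    if coords.size == 0:
        raise ValueError("empty lesion mask")
    zmin, ymin, xmin = coords.min(axis=0)
    zmax, ymax, xmax = coords.max(axis=0)
    return int(zmin), int(zmax), int(ymin), int(ymax), int(xmin), int(xmax)

# toy example: a small spherical "lesion" inside a 64^3 volume
zz, yy, xx = np.mgrid[:64, :64, :64]
mask = (zz - 30) ** 2 + (yy - 32) ** 2 + (xx - 34) ** 2 <= 5 ** 2
print(bounding_cube(mask))   # (25, 35, 27, 37, 29, 39)
```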
  • in step S311, the method further includes the following steps:
  • Step 1: preprocess each frame of the motion image to output the relatively fixed human organ regions;
  • Step 2: obtain the complete sequence of observation frames in which the probe position is relatively fixed in the motion image;
  • Step 3: based on the region of interest, the determined type of lesion, and the sequence of observation frames in which the region of interest was determined, obtain the complete sequence of observation frames corresponding to the region of interest.
  • obtaining in step 2 the complete sequence of observation frames in which the probe position is relatively fixed in the dynamic image includes the following steps:
  • if the probe is moving fast, the instrument is considered to be searching for the region of interest; otherwise, the probe is considered to be stationary and attention is being paid to how the image in a certain region changes over time;
  • the complete sequence of observation frames of the same scene is determined based on analysis of adjacent frames and similar scenes (an illustrative sketch follows below).
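A minimal sketch of the probe-motion test, assuming the clip is available as a list of grayscale NumPy frames; the mean absolute inter-frame difference and the threshold value are illustrative stand-ins for the adjacent-frame analysis described above.

```python
import numpy as np

def split_observation_sequences(frames, motion_thresh=8.0):
    """Partition a clip into sequences where the probe is roughly stationary,
    using mean absolute inter-frame difference as a crude motion measure
    (large difference -> probe is searching; small difference -> probe is
    held still and a region is being observed)."""
    sequences, start = [], None
    for i in range(1, len(frames)):
        diff = np.mean(np.abs(frames[i].astype(float) - frames[i - 1].astype(float)))
        stationary = diff < motion_thresh
        if stationary and start is None:
            start = i - 1                     # sequence begins at previous frame
        elif not stationary and start is not None:
            sequences.append((start, i - 1))  # sequence ended
            start = None
    if start is not None:
        sequences.append((start, len(frames) - 1))
    return sequences

# toy clip: 20 noisy frames of one scene, with the probe "moving" during frames 8-11
rng = np.random.default_rng(0)
base = rng.integers(0, 255, size=(64, 64)).astype(float)
clip = [base + rng.normal(0, 2, base.shape) for _ in range(20)]
for k in range(8, 12):
    clip[k] = rng.integers(0, 255, size=(64, 64)).astype(float)  # scene change
print(split_observation_sequences(clip))   # roughly [(0, 7), (12, 19)]
```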
  • the structured report includes a hyperlink to the image semantic content corresponding to the determined lesion region and a hyperlink associated with the lesion region; by clicking the hyperlink, the lesion region displayed in the image and the image semantic content corresponding to the lesion region can be viewed simultaneously.
  • the expression content of the image semantics corresponding to the region of interest is input and sent to other experts for verification, and after verification the lesion region and the corresponding image semantic expression content are added to the corresponding lesion image library.
  • a medical image assisted diagnosis system combining image recognition and report editing, including a knowledge map creation module, an information acquisition module, a region of interest determination module, a candidate lesion option generation module, a lesion region determination module, a report generation module, and a correction module;
  • the knowledge map establishing module is configured to establish an image semantic expression knowledge map according to a standardized dictionary database in the image field and a historical medical image report analysis;
  • the information acquisition module is configured to acquire a medical image of a patient
  • the region of interest determination module is configured to determine a region of interest of a patient medical image
  • the candidate lesion option generating module is configured to provide candidate lesion options for the patient according to the image semantic expression knowledge map and the region of interest;
  • the lesion region determining module is configured to determine a lesion type according to the region of interest and the candidate lesion option; and segment the lesion region according to the lesion type;
  • the report generation module is configured to generate a structured report associated with the region of interest of the patient medical image according to the segmented lesion region and the expression content of the corresponding image semantics;
  • the correction module is configured to add the lesion area and the corresponding image semantic expression content to the corresponding lesion image library.
  • the lesion region determining module includes a lesion type determining unit and a lesion region determining unit; wherein
  • the lesion type determining unit is configured to determine a lesion type among the candidate lesion options provided according to the region of interest;
  • the lesion region determining unit is configured to perform positioning analysis on the region of interest, segment the lesion region, and determine a lesion type corresponding to the lesion region according to the image semantic expression knowledge map;
  • the lesion region determining module is configured to perform positioning analysis on the region of interest, calculate a spatial location of the region of interest, and segment the lesion region.
  • the medical image assisted diagnosis method combining image recognition and report editing provided by the invention combines various kinds of machine learning with the image semantic expression knowledge map to perform medical image recognition, can systematically accumulate sample images in depth, and continuously improves the image semantic expression knowledge map, so that the annotated lesions of many images are continuously collected under the same sub-label.
  • as annotated lesions accumulate, machine learning combined with in-depth manual research can continuously refine lesion labeling, further enrich radiomics measurements, and enhance the auxiliary analysis capability of medical imaging.
  • FIG. 1 is a flowchart of a medical image assisted diagnosis method combining image recognition and report editing provided by the present invention
  • FIG. 2 is a schematic diagram of an image of a solid nodule in an embodiment provided by the present invention.
  • FIG. 3 is a schematic view showing an image of a full-type (pure) ground glass density shadow in an embodiment provided by the present invention.
  • FIG. 4 is a schematic view showing an image of a mixed-type ground glass density shadow according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of an image of a wall-less (worm-eaten) cavity in an embodiment provided by the present invention.
  • FIG. 6 is a schematic view showing an image of a thin-walled cavity in an embodiment provided by the present invention.
  • FIG. 7 is a schematic diagram of an image of a thick-walled cavity in an embodiment provided by the present invention.
  • Figure 8 is a diagram showing the distribution of organs in the upper torso portion of the human body in the embodiment provided by the present invention.
  • FIG. 9 is a schematic diagram showing a specific two-dimensional cross section of a human chest lung CT and a series of relatively easy-to-identify organs according to an embodiment of the present invention.
  • FIG. 10a is a schematic diagram showing a specific two-dimensional cross section of a human chest lung CT and the corresponding air region inside the lung in the embodiment provided by the present invention;
  • Fig. 10b is a schematic view of the structure shown in Fig. 10a after threshold processing and connectivity analysis, indicating the air portion in the lung.
  • the medical image assisted diagnosis method combining image recognition and report editing provided by the invention is based on a preliminary expression of the medical anatomical structure and a preliminary lesion recognition capability in medical images, and changes the current way in which doctors produce, edit, and review imaging reports.
  • when a doctor delineates or points to a particular image area corresponding to a physiological structure (organ, tissue) or lesion, whether that area belongs to a two-dimensional image, a two-dimensional slice of a three-dimensional image, or a particular frame of a motion image, the system automatically or semi-automatically generates the corresponding image report description item (i.e., the expression content of the image semantics), and the named entities in that item are linked to the specific area of the corresponding image.
  • the present invention provides a medical image reading machine learning method and system that simulates interactive training. It can greatly improve the efficiency of image reading and report generation, as well as the efficiency of report editing and review, and, by re-creating the imaging report workflow commonly used by radiologists, builds an artificial intelligence reading system that continuously communicates with senior doctors and continuously learns to improve its reading and reporting ability.
  • the medical image assisted diagnosis method combining image recognition and report editing mainly comprises the following steps. First, the image semantic expression knowledge map is established by analyzing a standardized dictionary database in the imaging field and the medical image reports historically accumulated in the lesion image library. Then, the patient's medical image is acquired, the region of interest on the two-dimensional image is determined, and candidate lesion options are provided for the patient according to the image semantic expression knowledge map and the region of interest.
  • the lesion type is determined according to the region of interest and the candidate lesion options; the lesion region is segmented according to the lesion type, a structured report associated with the region of interest of the patient's medical image is generated, the lesion region and the corresponding image semantic expression content are added to the corresponding lesion image library, and the structured report is distributed to the patient.
  • the image semantic expression knowledge map is established according to the standardized dictionary database in the image field and the historical medical image report analysis in the lesion image library; the image semantic expression knowledge map is a general term for the medical image report knowledge map and the image semantic system.
  • the image semantic expression knowledge map of various organs, various lesions and their lesion descriptions is established, which includes the following steps:
  • S11: based on a standardized dictionary library in the imaging field, a basic list of named entities is formed.
  • a basic list of named entities is formed; the named entities include various organs, lesions, and the like.
  • the medical imaging report contains a description of the status of various organs and some local lesion descriptions.
  • a current trend is to establish structured reports based on the RADS (Reporting and Data Systems) frameworks covering the RSNA (Radiological Society of North America) and the ACR (American College of Radiology), built on the standardized dictionary library RadLex in the imaging field.
  • This structured report will clearly describe the location, nature, and grade of the lesion in the image.
  • the spatial relationship between a particular type of lesion and the organ is relatively clear, and there is a relatively specific gray distribution (including the distribution of gray scale in spatial position) and texture structure, so there is a clear semantic expression on the image.
  • the characterization specification text of each named entity is converted into image semantic expression content, and the image semantic expression knowledge map of the medical images is jointly built from each named entity together with the images and image semantic expression content corresponding to that named entity.
  • the characterization specification text of a named entity is transformed into image semantic expression content including spatial attributes, gray distribution, texture structure description, etc., which together constitute the image semantic expression knowledge map over the medical ontology involved in the imaging report.
  • in addition to the structured description in text and data, the image semantic expression knowledge map also includes the labeled image samples (mostly partial images) corresponding to each named entity (including easily identifiable basic anatomical components and lesions); a sketch of one possible entry structure is given below.
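A minimal sketch of how one knowledge-map entry could be stored; the field names, class name, and the numeric values are illustrative assumptions, not the patent's data model.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class NamedEntityEntry:
    """One node of the image semantic expression knowledge map: a named
    entity (organ or lesion type) plus computable image-semantic attributes
    and references to its labeled sample images."""
    name: str                                                   # e.g. "solid pulmonary nodule"
    contained_in: List[str] = field(default_factory=list)       # spatial attribute
    adjacent_to: List[str] = field(default_factory=list)        # neighborhood attribute
    gray_distribution: Dict[str, float] = field(default_factory=dict)  # HU statistics
    texture: Dict[str, float] = field(default_factory=dict)     # texture descriptors
    sample_image_ids: List[str] = field(default_factory=list)   # labeled local images

solid_nodule = NamedEntityEntry(
    name="solid pulmonary nodule",
    contained_in=["left lung", "right lung"],
    adjacent_to=["air in lung", "lung wall", "blood vessel", "bronchus"],
    gray_distribution={"hu_mean": 40.0, "hu_std": 25.0},   # illustrative values
    texture={"entropy": 4.2, "sphericity": 0.85},          # illustrative values
)
knowledge_map = {solid_nodule.name: solid_nodule}
print(knowledge_map["solid pulmonary nodule"].adjacent_to)
```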
  • Location (contained in): 1) in the left and right lungs; 2) in the trachea and bronchus.
  • Location (adjacent to): 1) surrounded by air in the lungs, or 2) connected to the lung wall, or 3) connected to a blood vessel, or 4) connected to a bronchus.
  • Shape: nearly circular (three-dimensional: spherical) or oval (three-dimensional: pebble-like).
  • Micronodule: diameter < 5 mm;
  • Nodule: diameter 10 to 20 mm;
  • Mass: diameter > 20 mm (a small classification sketch follows the items below).
  • Boundary: clear (i.e., sharp grayscale change), with or without spiculation (burrs).
  • the malignancy probability of micronodules is < 1%, and follow-up review at an interval of 6 to 12 months is recommended.
  • Biopsy or surgery based on growth rate if a lung nodule grows faster during follow-up;
  • nodules and masses have a higher probability of malignancy, warranting direct biopsy or surgery.
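A small sketch mapping a measured diameter to the size categories listed above; the label for the 5-10 mm range is an assumption, since the text does not name it explicitly.

```python
def classify_by_diameter(diameter_mm: float) -> str:
    """Map a lesion's longest diameter (mm) to the size categories above
    (micronodule < 5 mm, nodule 10-20 mm, mass > 20 mm)."""
    if diameter_mm < 5:
        return "micronodule"
    if diameter_mm > 20:
        return "mass"
    if 10 <= diameter_mm <= 20:
        return "nodule"
    return "small nodule"   # assumed label for the 5-10 mm range

print(classify_by_diameter(3.2))   # micronodule
print(classify_by_diameter(14.0))  # nodule
print(classify_by_diameter(26.0))  # mass
```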
  • GGO (Ground Glass Opacity): also known as a ground glass nodule; the corresponding image semantics are expressed as:
  • Location (contained in): 1) in the left and right lungs; 2) in the trachea and bronchus.
  • Location (adjacent to): 1) surrounded by air in the lungs, 2) connected to the lung wall, 3) connected to blood vessels, 4) connected to bronchi.
  • Type: pure (PGGO) or mixed (MGGO), as shown in Figure 3 and Figure 4;
  • Density and spatial distribution: a slightly high-density shadow (ground-glass-like) on the low-density background of the lung field, or with partially high-density shadows (solid components whose density distribution is similar to a solid nodule).
  • the pure type has no solid component (no high-density shadow), while the mixed type contains a high-density (solid) portion.
  • Shape: block-like aggregation.
  • Boundary: clear or unclear, with many burrs (spiculation).
  • Location (contained in): 1) the left and right lungs.
  • Worm-eaten (wall-less) cavity: a low-density shadow surrounded by a slightly high-density shadow;
  • Thin-walled cavity: a low-density image portion surrounded by a thin wall (high-density shadow);
  • Thick-walled cavity: a low-density image portion surrounded by a thick wall (high-density shadow).
  • Worm-eaten cavity: lobar caseous pneumonia, as shown in Figure 5;
  • Thin-walled cavity: secondary pulmonary tuberculosis, etc., as shown in Figure 6;
  • Thick-walled cavity: tuberculosis, lung squamous cell carcinoma, etc., as shown in Figure 7.
  • after initialization, the medical image assisted diagnosis system initially constructs the image semantic expression knowledge map, which contains different attribute descriptions corresponding to different organs, lesions, and lesion findings in medical images acquired with different modalities, for different purposes, and of different body parts.
  • these attribute descriptions (qualitative or quantitative) are computable: once a specific object in the medical image has been recognized, the attribute description is calculated by corresponding feature extraction, for example the relative spatial position range of a specific lesion in the image, its density mean, standard deviation, entropy, roundness or sphericity, edge spiculation, edge sharpness, histogram, density distribution around the center, correlation matrix of the texture expression, and so on (a feature-extraction sketch follows below).
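A minimal sketch computing a few of the attributes listed above (HU mean, standard deviation, gray-level entropy, sphericity) from an HU volume and a binary lesion mask; the function name and toy data are illustrative, and only a subset of the listed attributes is shown.

```python
import numpy as np

def lesion_attributes(hu_volume: np.ndarray, mask: np.ndarray, voxel_mm=1.0):
    """Compute a few attribute descriptions for a segmented lesion:
    HU mean/std, gray-level entropy, and sphericity."""
    values = hu_volume[mask]
    hist, _ = np.histogram(values, bins=64)
    p = hist[hist > 0] / hist.sum()
    entropy = float(-np.sum(p * np.log2(p)))                 # gray-level entropy

    volume = mask.sum() * voxel_mm ** 3                      # lesion volume (mm^3)
    # surface voxels: lesion voxels with at least one non-lesion 6-neighbor
    padded = np.pad(mask, 1)
    neighbors = sum(np.roll(padded, shift, axis) for axis in (0, 1, 2) for shift in (1, -1))
    surface = np.logical_and(padded, neighbors < 6).sum() * voxel_mm ** 2
    # sphericity: surface of an equal-volume sphere divided by the lesion surface
    sphericity = (np.pi ** (1 / 3)) * ((6 * volume) ** (2 / 3)) / max(surface, 1e-6)

    return {"hu_mean": float(values.mean()), "hu_std": float(values.std()),
            "entropy": entropy, "sphericity": float(sphericity)}

# toy example: a sphere of soft-tissue density inside an air background
zz, yy, xx = np.mgrid[:48, :48, :48]
mask = (zz - 24) ** 2 + (yy - 24) ** 2 + (xx - 24) ** 2 <= 8 ** 2
hu = np.where(mask, 35.0, -900.0)
print(lesion_attributes(hu, mask))
```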
  • S2: obtaining the medical image of the patient; the system preprocesses the image to recognize and locate some basic, easily identifiable components of the patient's medical image, and combines this basic information with the region of interest drawn by the expert on the two-dimensional image;
  • according to the image semantic expression knowledge map and the region of interest, candidate lesion options are provided for the patient.
  • to generate a structured report, the patient's personal information and the patient's medical image must first be obtained.
  • technical means for obtaining medical images of a patient include, but are not limited to, chest lung CT, abdominal CT, cerebrovascular MRI, breast MRI, abdominal ultrasound, and the like.
  • after obtaining the patient's medical image, the system preprocesses it and identifies some basic, easily identifiable components, such as the air in the lungs, bones, and vertebrae, and then combines these basic components.
  • FIG. 8 is a diagram showing the distribution of organs in the upper torso of the human body. Taking lung CT (plain or enhanced) as an example, it is relatively easy to distinguish the vertebrae, trachea, bronchi, lymph nodes, air in the lungs, blood vessels and other parts, as well as some large abnormalities such as pleural effusion and large space-occupying lesions.
  • Figure 9 is a cross-sectional view of a three-dimensional CT chest lung image.
  • the high-brightness part is the bone
  • the bottom middle triangle (middle gray) part is the vertebra
  • the large black (low HU density) connected regions on the left and right inside are the air in the lungs
  • the black part of the central part is the cross section of the bronchi.
  • Blood vessels are those relatively bright segments or circular elliptical cross-sections surrounded by air in the lungs.
  • Figure 10b shows the original lung CT image (Figure 10a is its two-dimensional screenshot) after thresholding and connectivity analysis, with the air portion in the lungs labeled (in red); a minimal sketch of this step is given below.
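A minimal sketch of the thresholding-plus-connectivity step, assuming SciPy is available; the HU threshold, minimum component size, and border heuristic are illustrative choices, not values from the patent.

```python
import numpy as np
from scipy import ndimage

def lung_air_mask(hu_volume: np.ndarray, air_thresh=-400):
    """Rough extraction of the air inside the lungs: threshold the HU values,
    label 3-D connected components, and keep large low-density components
    that do not touch the volume border (border-touching air is the air
    around the body)."""
    low_density = hu_volume < air_thresh
    labels, n = ndimage.label(low_density)            # 3-D connected components
    border_labels = set(np.unique(labels[0])) | set(np.unique(labels[-1])) \
        | set(np.unique(labels[:, 0])) | set(np.unique(labels[:, -1])) \
        | set(np.unique(labels[:, :, 0])) | set(np.unique(labels[:, :, -1]))
    sizes = ndimage.sum(low_density, labels, range(1, n + 1))
    mask = np.zeros_like(low_density)
    for lab, size in zip(range(1, n + 1), sizes):
        if lab not in border_labels and size > 1000:  # keep large interior components
            mask |= labels == lab
    return mask

# toy volume: body tissue (HU 0) with two air-filled "lungs" (HU -800) inside
vol = np.zeros((40, 64, 64))
vol[5:35, 10:30, 8:28] = -800
vol[5:35, 10:30, 36:56] = -800
print(lung_air_mask(vol).sum())   # number of voxels classified as lung air
```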
  • the system analyzes and infers the possible lesion types, which provides a strong auxiliary constraint; that is, only the lesion types consistent with what surrounds or is adjacent to the region need further analysis (texture analysis, density-space analysis, convolutional neural network matching, etc.), from which high-probability lesion types are presented by the system for expert selection.
  • the expert determines the region of interest on the two-dimensional image based on the patient's medical image, i.e., the region where the expert believes a lesion may be present. For example, in a lung CT image the radiologist may draw a region of interest whose body lies in the lung, is connected to a blood vessel, and is surrounded by air in the lung (the identification of air in the lung is shown in Figure 10).
  • the medical image assisted diagnosis system can automatically analyze these position features; through a segmentation algorithm within the region of interest (ROI) together with HU density distribution and texture analysis, it can estimate, for example, that the nodule is connected to a vessel, that low solidity may indicate a ground glass nodule, or that a large textured area may indicate a lung infection.
  • based on these characteristics, the image semantic expression knowledge map, and the expert-outlined region of interest, the medical image assisted diagnosis system automatically pops up a list of options after preliminary calculation and ranks multiple description options by likelihood, namely the candidate lesion options; there may be one or several candidate lesion options.
  • according to the image semantic expression knowledge map, the image feature model corresponding to each named entity determines the types of named entities (specifically, certain types of lesions) that may be contained in the region of interest marked on the two-dimensional image section, and these are pushed to the expert for selection through an interactive interface (which may take the form of graphical interface options or voice question-and-answer); a ranking sketch is given below.
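A minimal sketch of ranking candidate lesion options against features measured in the ROI; the inverse-distance score and the dictionary layout are illustrative assumptions, not the patent's matching model.

```python
def rank_candidate_lesions(roi_features: dict, knowledge_map: dict, top_k=3):
    """Rank knowledge-map entries by a crude match score against ROI features,
    producing the candidate lesion options pushed to the expert."""
    scored = []
    for name, entry in knowledge_map.items():
        expected = {**entry.get("gray_distribution", {}), **entry.get("texture", {})}
        shared = [k for k in expected if k in roi_features]
        if not shared:
            continue
        # mean relative deviation over the attributes present in both
        dist = sum(abs(roi_features[k] - expected[k]) / (abs(expected[k]) + 1e-6)
                   for k in shared) / len(shared)
        scored.append((name, 1.0 / (1.0 + dist)))          # higher = better match
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_k]

knowledge_map = {
    "solid pulmonary nodule": {"gray_distribution": {"hu_mean": 40.0},
                               "texture": {"sphericity": 0.85}},
    "ground glass nodule":    {"gray_distribution": {"hu_mean": -550.0},
                               "texture": {"sphericity": 0.6}},
}
roi = {"hu_mean": -500.0, "sphericity": 0.65}
print(rank_candidate_lesions(roi, knowledge_map))   # the GGO option ranks first
```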
  • the determination of the region of interest here can be done by an expert manually operating the computer, or by an image recognition algorithm.
  • manual completion means that the doctor browses and observes through a medical image display system such as PACS, finds suspected lesion areas, and manually draws a closed curve on the two-dimensional image cross section, i.e., positions a region of interest on a two-dimensional section.
  • the image recognition algorithm approach is accomplished by a computer that has a certain image reading capability, using certain lesion recognition algorithms (e.g., traditional feature-based or rule-based image recognition algorithms, or deep learning algorithms such as CNN and RNN, assisted by transfer learning or reinforcement learning) to identify, locate, and automatically prompt.
  • S301 Perform positioning analysis on the region of interest based on the determined type of lesion to which the region of interest belongs, calculate a spatial location of the region of interest, and segment the lesion region.
  • a structured report associated with the region of interest is generated, and the lesion region and the corresponding image semantic expression content are added to the corresponding lesion image library.
  • S3011 Determine a type of lesion to which the region of interest belongs according to the region of interest and the candidate lesion option.
  • human organs generally have a relatively fixed position, and their position in the medical image and image display (gray value on the image, etc.) are generally obvious, and are easy to identify, locate, and deconstruct.
  • the targeted region of interest is analyzed, and the type of lesion to which the region of interest belongs is determined according to the outlined region of interest and the selected lesion option.
  • S3012 Calculate a spatial location of the region of interest based on the type of lesion determined by the region of interest, and segment the lesion region.
  • Each type of organ has its own unique spatial position, gray space distribution and texture properties.
  • isolated pulmonary nodules are surrounded in isolation by the air in the lungs; after three-dimensional thresholding and connected-component analysis, the surroundings belong to an air component (HU density below a certain threshold and inside the lung), while the HU density inside the nodule component matches the distribution around the component center and conforms to a certain distribution.
  • the small pulmonary nodules that connect the blood vessels are surrounded by the air in the lungs, but can be connected to the trachea/bronchus, pleura, and lobes through one or more blood vessels (high density).
  • the periphery is surrounded by air components (HU density lower than a certain threshold and inside the lung), but there is a high-density narrow blood vessel component (HU density higher than a certain threshold and connected within the lung).
  • the high-density narrow blood vessel components are filtered out by the morphological opening operator (with spherical structuring elements of different scales).
  • the internal HU density-space distribution is somewhat different from that of isolated nodules. Small pulmonary nodules attached to the lung wall are surrounded by air in the lungs, but one side is close to the lung wall.
  • the morphological opening operator (with spherical structuring elements of different scales) can be applied in the image to filter and segment the lesion, for nodules such as ground glass nodules; a sketch of the opening operation is given below.
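A minimal sketch of the opening operation with a spherical structuring element, assuming SciPy is available; the element radius and the toy geometry are illustrative.

```python
import numpy as np
from scipy import ndimage

def spherical_element(radius: int) -> np.ndarray:
    """Spherical structuring element of the given voxel radius."""
    grid = np.mgrid[-radius:radius + 1, -radius:radius + 1, -radius:radius + 1]
    return (grid ** 2).sum(axis=0) <= radius ** 2

def remove_thin_vessels(lesion_candidate: np.ndarray, radius=2) -> np.ndarray:
    """Morphological opening with a spherical element: thin, high-density
    vessel branches attached to a nodule are narrower than the sphere and
    are filtered out, while the roughly spherical nodule body survives."""
    return ndimage.binary_opening(lesion_candidate, structure=spherical_element(radius))

# toy example: an 8-voxel-radius "nodule" with a 1-voxel-thick "vessel" attached
zz, yy, xx = np.mgrid[:48, :48, :48]
nodule = (zz - 24) ** 2 + (yy - 24) ** 2 + (xx - 24) ** 2 <= 8 ** 2
vessel = (zz == 24) & (yy == 24) & (xx >= 30)
candidate = nodule | vessel
opened = remove_thin_vessels(candidate, radius=2)
print(candidate.sum(), opened.sum())   # the thin vessel voxels disappear
```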
  • the lesion region is segmented in the region of interest, and the grayscale value of the two-dimensional image can be used, and the region that does not meet the threshold requirement or the texture feature is segmented as the lesion region based on the threshold or texture of the lesion. That is, by calculating the degree of matching of the region of interest with the known type of lesion, a possible lesion or lesion region is obtained.
  • the segmented lesion area and the expression content of the image semantics corresponding to the lesion area are added to the report, and a structured report is generated along with the patient information.
  • the image semantic content corresponding to the lesion region is a structured lesion description item, which records the determined lesion options and their attributes (including the lesion's size, spiculation, clarity, mean density, HU histogram distribution, etc.) and is generated in association with the region of interest (or the determined lesion region) of the medical image.
  • the named entity portion of the lesion option determined in the structured report is a hyperlink associated with the region of interest.
  • the hyperlink includes a hyperlink to the image semantic content corresponding to the determined lesion region and a hyperlink associated with the lesion region; by clicking the hyperlink, the lesion region (the region of interest of the two-dimensional image, or the spatio-temporal segment segmented from the 3D image or 2D motion image) and the corresponding image semantic expression content can be viewed simultaneously. This eliminates the tedious need in existing reports to search for the corresponding image based on the image semantics of the lesion region, and improves the efficiency of report review; a small report-rendering sketch is given below.
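A minimal sketch of rendering one structured-report item whose named entity links back to the segmented lesion region; the `/viewer?lesion=...` URL scheme and the function name are purely illustrative.

```python
from html import escape

def lesion_report_item(lesion_id: str, entity_name: str, description: str,
                       viewer_url: str = "/viewer") -> str:
    """Render one structured-report item whose named entity is a hyperlink
    to the segmented lesion region in an image viewer (hypothetical URL)."""
    link = f'<a href="{viewer_url}?lesion={escape(lesion_id)}">{escape(entity_name)}</a>'
    return f"<li>{link}: {escape(description)}</li>"

print(lesion_report_item(
    "case123-roi1",
    "solid pulmonary nodule",
    "right upper lobe, 12 mm, clear boundary with spiculation",
))
```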
  • the lesion region and the corresponding image semantic expression content are added to the corresponding lesion image library and used as accumulated samples for subsequently updating and supplementing the image semantic expression knowledge map, so that samples are accumulated in the course of the expert's routine work, without additional human or financial resources for data mining, which improves the efficiency of using structured reports.
  • the image semantic expression knowledge map is combined with various kinds of machine learning, especially deep learning and reinforcement learning, for medical image recognition.
  • one advantage is that sample images can be accumulated in a planned manner, and the annotated lesions of many images can be continuously collected under the same sub-label.
  • the so-called evolution, on the one hand, is the accumulation of quantity.
  • the accumulation of sample images of lesions with the same label inevitably increases the number of samples available for deep learning. Therefore, regardless of whether algorithms such as CNN, RNN, DNN, or LSTM are used, the increase in samples generally leads to improved recognition ability and improved sensitivity and specificity.
  • the medical image assisted diagnosis system can readily use transfer learning and other means to quickly learn new lesion types or lesion types with few samples.
  • breast MRI masses and non-mass enhancing lesions, and pulmonary CT nodules and GGO lesions, have many spatial-density similarities but differ in specific parameters. These characteristics make it suitable, when there are not enough lesion samples of a given category or label, to apply cross-domain transfer learning (a parameter model obtained from other lesion samples with a certain image similarity is applied to the target lesion samples for parameter fine-tuning) or borrowed-strength parameter estimation; a fine-tuning sketch is given below.
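A hedged sketch of the transfer-learning idea: reuse a backbone trained elsewhere and fine-tune only the classification head on a small target lesion set. The ResNet-18 backbone, the two-class head, and the random mini-batch are illustrative stand-ins, not the patent's model; the exact pretrained-weights API varies with the torchvision version.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a backbone trained on another, sample-rich domain and adapt
# only the head to the small target lesion set (cross-domain fine-tuning).
backbone = models.resnet18(weights=None)               # load pretrained weights in practice
backbone.fc = nn.Linear(backbone.fc.in_features, 2)    # e.g. GGO vs. solid nodule

for name, param in backbone.named_parameters():
    param.requires_grad = name.startswith("fc")         # freeze everything but the head

optimizer = torch.optim.Adam(
    (p for p in backbone.parameters() if p.requires_grad), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# one illustrative training step on a random mini-batch
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 2, (4,))
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```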
  • a second embodiment of the present invention provides a medical image-assisted diagnosis method combining image recognition and report editing, which is different from the above-described first embodiment in that:
  • in step S3 of the first embodiment, after acquiring the medical image of the patient, the determined region of interest is extended from the two-dimensional image to a three-dimensional image or a two-dimensional motion image (a video image that changes with time); based on the type of lesion contained in the region of interest, positioning analysis is performed on the region of interest, its spatial location is calculated, and the lesion region of the overall image is segmented.
  • performing a positioning analysis on the region of interest to segment the lesion region of the overall image includes the following steps:
  • the determined region of interest is extended from the two-dimensional image to the three-dimensional image or the two-dimensional motion image, and the lesion region of the overall image is segmented.
  • Step 1: based on the type of lesion determined by the expert and the shape and texture features corresponding to that lesion type, the gray values of the two-dimensional image are used to segment based on the threshold or texture of the lesion type; in some cases, a two-dimensional mathematical morphology operator or other segmentation operator is further used to separate lesions that are connected to a certain part of an organ (for example, a solid nodule attached to the lung wall, or a mass connected to a gland, whose connected pixels have similar grayscale and texture features), so as to obtain, in this two-dimensional sub-image (frame) section, the main closed region of one or several closed core lesion regions corresponding to this lesion (i.e., the lesion region).
  • Step 2: based on the main closed region, extend to the previous and subsequent slices of the spatial sequence of the image and segment based on features such as the threshold or texture of the lesion type; in some cases, the two-dimensional mathematical morphology operator or other segmentation operator is further used to separate lesions connected to a certain part of an organ, obtaining one or several closed regions that conform to the description of the lesion type. Of these regions, only the closed regions that are connected in three dimensions (generally using 6-neighborhood connectivity) to the previously determined main closed region are merged into the closed core lesion region.
  • Step 3: continue the operation of step 2 above, and perform a mathematical morphology closing operation in three-dimensional space to filter out other areas connected to the closed core lesion region in three-dimensional space (for masses and nodules, typically ducts, blood vessels, and some organ glands), until the closed core lesion region no longer grows.
  • Step 4: in this way, the edge of the closed core lesion region is outlined and labeled at pixel level. At the same time, the maximum and minimum values of the X, Y, and Z coordinates of the edge pixels of the closed core lesion region are calculated to construct the enclosing cube in space, i.e., the three-dimensional cube region containing the lesion region; a growth-and-closing sketch is given below.
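A minimal sketch of steps 2 and 3: starting from an expert-seeded 2-D region on one slice, keep only threshold-segmented voxels that remain 6-connected to the growing core, then apply a 3-D morphological closing. The HU range, the SciPy-based implementation, and the toy volume are illustrative assumptions; the texture-based checks described above are omitted.

```python
import numpy as np
from scipy import ndimage

def grow_lesion_3d(hu_volume, seed_mask_2d, seed_z, hu_range=(-100, 200)):
    """Grow a seeded 2-D lesion region through the volume by 6-connected
    dilation restricted to a threshold mask, then close it in 3-D."""
    candidate = (hu_volume >= hu_range[0]) & (hu_volume <= hu_range[1])
    core = np.zeros_like(candidate)
    core[seed_z] = seed_mask_2d & candidate[seed_z]

    structure = ndimage.generate_binary_structure(3, 1)    # 6-neighborhood
    while True:
        grown = ndimage.binary_dilation(core, structure) & candidate
        if grown.sum() == core.sum():                      # no longer grows
            break
        core = grown
    return ndimage.binary_closing(core, structure)

# toy volume: a soft-tissue sphere in an air background, seeded on its middle slice
zz, yy, xx = np.mgrid[:40, :40, :40]
sphere = (zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2 <= 6 ** 2
hu = np.where(sphere, 50.0, -800.0)
seed = sphere[20]
print(grow_lesion_3d(hu, seed, 20).sum(), sphere.sum())
```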
  • in a dynamic image, the user's annotation of a region of interest or lesion is generally limited to a static image of a particular time segment (in ultrasound, often one or several frames of the scanned image sequence during a period when the probe is held at a certain point).
  • the complete mapping of the region of interest or lesion is based on computer algorithms (spatial adjacency, texture, grayscale, etc.) applied to the user's sketched region of interest and further extrapolated to the other adjacent two-dimensional image frames of the motion image.
  • the characteristic of B-mode ultrasound is that the doctor constantly moves the probe of the examination instrument, and the images of some of the parts monitored by the probe are constantly changing over time (such as the heart and blood flow).
  • doctors operate the probe in two states: moving the probe quickly to search for suspicious areas; or keeping it basically stationary (with slight slips), focusing on how the ultrasound image in a certain area changes over time (e.g., changes in blood flow).
  • the medical image assisted diagnosis system is used to draw the region of interest on the dynamic image presented by the host display.
  • the medical image assisted diagnosis system will determine the time series of the motion image corresponding to the region of interest by the following steps.
  • the specific instructions are as follows:
  • Step 1: preprocess each frame of the motion image and output the relatively fixed human organ regions, such as bones, muscles, the core heart region (the part common to systole and diastole), and the lung region (the part common to respiration), obtaining the motion image after real-time processing.
  • Step 2 Obtain a complete sequence of observation frames with relatively fixed probe positions in the motion image.
  • the specific implementation methods are as follows:
  • the processed dynamic image (the output of step 1) is analyzed in real time to determine whether the probe is moving fast, i.e., searching for a region of interest, or essentially stationary (with slight movement), i.e., focusing on how the image in a certain region changes over time (such as changes in blood flow); based on the analysis of adjacent frames and similar scenes (such algorithms are already mature in MPEG-4), the complete sequence of observation frames of the same scene is determined.
  • the MPEG-4 compression algorithm used in the second embodiment provides an algorithm module for detecting whether the scene has changed, including detecting scene scaling (i.e., detail enlargement or expansion of the same scene), scene shifting, and complete scene switching.
  • medical dynamic images mainly involve scene shifting; complete scene switching is rare and generally occurs only when the probe is placed on or removed from the body, so it is not described in detail here.
  • Step 3: based on the aforementioned region of interest, the type of lesion determined by the expert, and the sequence of observation frames in which the expert determined the region of interest, the complete sequence of observation frames it belongs to is obtained.
  • the sequence of frames in which the expert determines the region of interest refers to a specific one or several consecutive two-dimensional images (frames) when the expert determines the region of interest.
  • the system expands the two-dimensional image or part of the two-dimensional dynamic image series corresponding to the region of interest in time to the complete series of observation frames (the entire probe position is relatively fixed for a period of time).
  • the region of interest in each extended frame can be handled simply, i.e., still limited to the original two-dimensional region of interest; or it can be further processed based on the originally determined two-dimensional region of interest and the lesion type determined by the expert, performing image analysis and re-segmenting a more accurate lesion region in the extended frames; a propagation sketch is given below.
  • the lesion region and the corresponding image semantic expression content are added to the corresponding lesion image library.
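A minimal sketch of the simple option: carry the expert's 2-D region of interest (a rectangular box here) across all frames of the extended observation sequence and re-segment inside it with an intensity criterion; the box, threshold range, and toy data are illustrative.

```python
import numpy as np

def propagate_roi(frames, roi_slice, intensity_range=(80, 200)):
    """Carry a fixed 2-D ROI box across every frame of the extended
    observation sequence and re-segment inside it by intensity.
    Returns one boolean mask per frame."""
    masks = []
    for frame in frames:
        window = frame[roi_slice]
        local = (window >= intensity_range[0]) & (window <= intensity_range[1])
        mask = np.zeros(frame.shape, dtype=bool)
        mask[roi_slice] = local
        masks.append(mask)
    return masks

# toy sequence: a bright blob that stays inside the expert's ROI box
rng = np.random.default_rng(1)
frames = [rng.integers(0, 60, (64, 64)).astype(float) for _ in range(5)]
for f in frames:
    f[20:30, 24:34] = 150.0                    # the "lesion"
roi = (slice(16, 36), slice(20, 40))           # expert-drawn box on the seed frame
masks = propagate_roi(frames, roi)
print([int(m.sum()) for m in masks])           # ~100 lesion pixels per frame
```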
  • if the candidate lesion options are not consistent with the region of interest, the expert needs to manually input the image semantic expression content corresponding to the region of interest and send it to other experts for verification; after verification is passed, the lesion region and the corresponding image semantic expression content are added to the corresponding lesion image library.
  • inconsistency between the candidate lesion options and the region of interest includes the case where the candidate lesion options contain no lesion option corresponding to the region of interest, or the case where the lesion option describes the region of interest inaccurately.
  • the specific instructions are as follows:
  • the attributes of the lesion and the corresponding local lesion image are recorded and added to the lesion image database, and are submitted to other experts for cross-validation as a new finding inconsistent with the system's judgment. Once the new finding is manually confirmed, the corresponding knowledge (including the lesion region and the corresponding image semantic content) is added to the lesion image library and to the training set as a new training sample, and the new knowledge is added to the image semantic expression knowledge map when the system is updated regularly or irregularly. If it is falsified by other experts, this expert's manual entry is corrected and the system's identification result is adopted.
  • a new type of lesion, together with its attributes and the corresponding local lesion image, is recorded and added to a temporary lesion image library for new lesion types, and is submitted to other experts for cross-validation as a new finding. If the finding is confirmed manually, the corresponding knowledge is added to the image semantic expression knowledge map, the lesion image is added to the corresponding lesion image library, and it is added to the training set as a new training sample. If it is falsified by other experts, this expert's manual entry is corrected and the previous recognition result of the medical image assisted diagnosis system is adopted.
  • for such new samples, the medical image assisted diagnosis system can wait until they accumulate to a sufficient quantity for training. When such samples are found, the system can also, based on expert research and other knowledge of these new samples, combine generative adversarial networks (GANs) to generate more similar samples and thereby learn from a small number of samples.
  • Third embodiment
  • the third embodiment provided by the present invention provides a medical image assisted diagnosis system combining image recognition and report editing.
  • the system comprises a knowledge map creation module, an information acquisition module, a region of interest determination module, a candidate lesion option generation module, a lesion region determination module, a report generation module and a correction module.
  • the knowledge map building module is configured to establish an image semantic expression knowledge map according to a standardized dictionary database in the image field and a historical medical image report analysis.
  • the information acquisition module is used to acquire a patient medical image.
  • the region of interest determination module is used by the expert to determine the region of interest of the patient's medical image based on the patient medical image transmitted by the information acquisition module.
  • the candidate lesion option generating module is configured to provide a candidate lesion option of the patient according to the image semantic expression knowledge map transmitted by the knowledge map establishing module and the region of interest transmitted by the region of interest determining module.
  • the lesion region determining module is configured to determine a lesion type according to the region of interest transmitted by the region of interest determination module and the candidate lesion options generated by the candidate lesion option generating module, and to segment the lesion region according to the lesion type.
  • the report generation module is configured to generate a structured report associated with the region of interest of the patient medical image based on the segmented lesion region and the corresponding expression of the image semantics.
  • the correction module is used to add the expression of the lesion area and the corresponding image semantics to the corresponding lesion image library.
  • the lesion region determining module includes a lesion type determining unit and a lesion region determining unit.
  • the lesion type determining unit is configured to determine a lesion type among the candidate lesion options provided by the candidate lesion option generating module according to the region of interest transmitted by the region of interest determination module; the lesion region determining unit is configured to perform positioning analysis on the region of interest transmitted by the region of interest determination module, segment the lesion region, and determine the lesion option corresponding to the lesion region according to the image semantic expression knowledge map transmitted by the knowledge map building module, thereby determining the lesion type.
The medical image-assisted diagnosis method combining image recognition and report editing provided by the present invention performs medical image recognition by combining the image semantic expression knowledge graph with various kinds of machine learning. It can systematically accumulate sample images in depth and continuously refine the knowledge graph, so that the annotated lesions of many images are continuously gathered under the same sub-label.
As sample images of lesions carrying the same label accumulate, the number of samples available for deep learning keeps growing, and more samples generally lead to stronger recognition ability and higher recognition sensitivity and specificity.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Public Health (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • Computer Graphics (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Geometry (AREA)
  • Evolutionary Computation (AREA)
  • Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

A medical image-assisted diagnosis method and system combining image recognition and report editing. The medical image-assisted diagnosis method comprises the following steps: S1, building an image semantic expression knowledge graph for medical images; S2, acquiring a patient's medical image, determining a region of interest on the two-dimensional image, and providing candidate lesion options for the patient according to the image semantic expression knowledge graph and the region of interest; S3, determining the lesion type according to the region of interest and the candidate lesion options, segmenting the lesion region according to the lesion type, generating a structured report associated with the region of interest of the patient's medical image, and at the same time adding the lesion region and the corresponding image semantic expression content to the corresponding lesion image library. The method performs medical image recognition by combining the image semantic expression knowledge graph with various kinds of machine learning, can accumulate sample images in depth, continuously improves the image semantic expression knowledge graph, and enhances the auxiliary diagnosis capability for medical images.

Description

Medical Image-Assisted Diagnosis Method and System Combining Image Recognition and Report Editing
Technical Field
The present invention relates to a medical image-assisted diagnosis method, in particular to a medical image-assisted diagnosis method combining image recognition and report editing, and to a corresponding medical image-assisted diagnosis system, and belongs to the technical field of medical image-assisted diagnosis.
Background Art
As is well known, although ordinary radiologists receive extensive classroom training, they still need continuous guidance from senior experts during internship and subsequent residency practice. Such guidance invariably takes place in concrete diagnostic work: senior experts must revise and correct the imaging reports of ordinary physicians with reference to the specific medical images.
However, current diagnostic work on medical images suffers from the following problems:
1) The organs and lesions shown in a medical image are highly isolated from the corresponding descriptions in the imaging report, and there is no system-level support for linking the two. The correspondence between the two depends entirely on the eyes and expertise of the reader or reviewer. As a result, when a senior physician reviews a junior physician's report, or when earlier reports must be reviewed during consultation or referral, it is difficult to find the imaging findings corresponding to the lesion descriptions in the report, which is time-consuming and inefficient.
2) The structured reports promoted by radiological societies have the advantages of precise lesion description and unified description standards, but in practice they are cumbersome and inefficient, which makes them difficult to apply and popularize.
3) Existing deep learning methods applied to medical imaging require a large amount of lesion annotation work, the large numbers of annotated samples are not fully utilized, and the methods lack interpretability. In addition, annotation in traditional PACS (picture archiving and communication systems) and annotation of medical images require a great deal of extra manual labor and cannot be organically integrated into the daily work of radiologists.
To solve the above problems, many institutions have carried out extensive research and exploration. However, the existing technical solutions do not effectively address the problems that large numbers of annotated samples remain underused and that annotation is not organically combined with radiologists' daily work; moreover, the resulting systems are not very extensible and cannot promptly improve the accuracy of dynamic diagnostic reports. On the other hand, structured reporting is the inevitable direction of development in medical image management. Without professional, structured reports, the concept of big data is hollow: data mining and online decision support cannot be realized, diagnoses cannot be graded against a "diagnostic gold standard", and it is impossible to provide clinical departments with practical, treatment-oriented reports.
Summary of the Invention
In view of the deficiencies of the prior art, the primary technical problem to be solved by the present invention is to provide a medical image-assisted diagnosis method combining image recognition and report editing.
Another technical problem to be solved by the present invention is to provide a medical image-assisted diagnosis system combining image recognition and report editing.
To achieve the above objects, the present invention adopts the following technical solutions:
According to a first aspect of the embodiments of the present invention, a medical image-assisted diagnosis method combining image recognition and report editing is provided, comprising the following steps:
S1: building an image semantic expression knowledge graph for medical images;
S2: acquiring a patient's medical image, determining a region of interest on the two-dimensional image, and providing candidate lesion options for the patient according to the image semantic expression knowledge graph and the region of interest;
S3: determining the lesion type according to the region of interest and the candidate lesion options; segmenting the lesion region according to the lesion type, generating a structured report associated with the region of interest of the patient's medical image, and at the same time adding the lesion region and the corresponding image semantic expression content to the corresponding lesion image library.
Preferably, step S1 further comprises the following steps:
S11: forming a basic list of named entities based on a standardized dictionary of the medical imaging field;
S12: analyzing the historically accumulated medical image reports in the lesion image library to form textual specifications describing the characteristics of the named entities;
S13: based on expert knowledge and the local lesion images corresponding to specific lesion types, converting the obtained textual characteristic specifications of the named entities into image semantic expression content, and building the image semantic expression knowledge graph from each named entity together with its corresponding images and image semantic expression content.
Preferably, step S3 further comprises the following step:
S301: based on the lesion type to which the region of interest belongs, performing localization analysis on the region of interest, computing the spatial position of the region of interest, and segmenting the lesion region.
Preferably, step S3 further comprises the following step:
S311: based on the lesion type to which the region of interest belongs, performing localization analysis on the determined region of interest and determining the lesion type to which it belongs; extending the determined region of interest from the two-dimensional image to a three-dimensional image or a two-dimensional dynamic image, and segmenting the lesion region of the whole image.
Preferably, when the image type is a three-dimensional image, step S311 further comprises the following steps:
Step 1: based on the lesion type determined by the expert, combined with the shape and texture features corresponding to the lesion type, using the gray values of the two-dimensional image and segmenting the lesion region according to the connection relationships of the organs, to obtain, in the two-dimensional image slice, the main closed region of the closed core lesion region corresponding to the lesion region;
Step 2: starting from the main closed region, extending to the previous and next images in the spatial sequence of two-dimensional images, and segmenting the lesion region according to the shape and texture features corresponding to the lesion type and the connection relationships of the organs, to obtain closed regions that match the lesion type description;
Step 3: continuing the operation of Step 2 and performing a three-dimensional mathematical morphological closing operation to remove other regions connected to the closed core lesion region in three-dimensional space, until the closed core lesion region no longer grows; delineating the edge of the closed core lesion region;
Step 4: computing the maximum and minimum X, Y and Z coordinates of the edge pixels of the closed core lesion region, thereby forming a three-dimensional cuboid region.
Preferably, when the image type is a two-dimensional dynamic image, step S311 further comprises the following steps:
Step 1: preprocessing each frame of the dynamic image and outputting images of the relatively fixed human organ regions;
Step 2: obtaining the complete observation frame sequence of the dynamic image during which the probe position is relatively fixed;
Step 3: based on the region of interest, the determined lesion type, and the observation frame sequence in which the region of interest was determined, completely acquiring the full observation frame series corresponding to the region of interest.
Preferably, when the scanning probe has its own position/motion sensor, obtaining the complete observation frame sequence with a relatively fixed probe position in Step 2 comprises the following steps:
determining, based on the position/motion sensor, whether the probe is moving rapidly;
if the probe is moving rapidly, the examination device is considered to be searching for a region of interest; otherwise, the probe is considered essentially stationary and focusing on how the image within a certain region changes over time;
determining, based on the change of position over time, the complete observation frame sequence during which the probe position is relatively fixed.
Preferably, when the scanning probe does not have a position/motion sensor of its own, obtaining the complete observation frame sequence with a relatively fixed probe position in Step 2 comprises the following steps:
analyzing the dynamic image in real time to determine whether the probe is moving rapidly;
if the probe is moving rapidly, it is considered to be searching for a region of interest; otherwise, the probe is considered essentially stationary and focusing on how the image within a certain region changes over time;
determining the complete observation frame sequence of the same scene based on analysis of adjacent frames and similar scenes.
Preferably, the structured report contains a hyperlink that associates the image semantic expression content corresponding to the determined lesion region with that lesion region; by clicking the hyperlink, the lesion region shown in the image and the corresponding image semantic expression content can be viewed at the same time.
Preferably, when none of the candidate lesion options matches the region of interest, the image semantic expression content corresponding to the region of interest is entered and sent to other experts for verification; after the verification is passed, the lesion region and the corresponding image semantic expression content are added to the corresponding lesion image library.
According to a second aspect of the embodiments of the present invention, a medical image-assisted diagnosis system combining image recognition and report editing is provided, comprising a knowledge graph building module, an information acquisition module, a region-of-interest determination module, a candidate lesion option generation module, a lesion region determination module, a report generation module and a correction module;
wherein the knowledge graph building module is configured to build the image semantic expression knowledge graph from a standardized dictionary of the imaging field and analysis of historically accumulated medical image reports;
the information acquisition module is configured to acquire the patient's medical image;
the region-of-interest determination module is configured to determine the region of interest of the patient's medical image;
the candidate lesion option generation module is configured to provide candidate lesion options for the patient according to the image semantic expression knowledge graph and the region of interest;
the lesion region determination module is configured to determine the lesion type according to the region of interest and the candidate lesion options, and to segment the lesion region according to the lesion type;
the report generation module is configured to generate, from the segmented lesion region and the corresponding image semantic expression content, a structured report associated with the region of interest of the patient's medical image;
the correction module is configured to add the lesion region and the corresponding image semantic expression content to the corresponding lesion image library.
Preferably, the lesion region determination module comprises a lesion type determination unit and a lesion region determination unit; wherein
the lesion type determination unit is configured to determine the lesion type among the provided candidate lesion options according to the region of interest;
the lesion region determination unit is configured to perform localization analysis on the region of interest, segment the lesion region, and determine the lesion type corresponding to the lesion region according to the image semantic expression knowledge graph;
the lesion region determination module is configured to perform localization analysis on the region of interest, compute the spatial position of the region of interest, and segment the lesion region.
The medical image-assisted diagnosis method combining image recognition and report editing provided by the present invention performs medical image recognition by combining the image semantic expression knowledge graph with various kinds of machine learning. It can accumulate sample images in depth in a planned way and continuously improve the image semantic expression knowledge graph, so that the annotated lesions of many images are continuously gathered under the same sub-label. In addition, as more and more annotated lesions accumulate, machine learning combined with in-depth manual study can continuously refine the lesion labels and further enrich the radiomics measures, thereby enhancing the auxiliary analysis capability for medical images.
Brief Description of the Drawings
Fig. 1 is a flowchart of the medical image-assisted diagnosis method combining image recognition and report editing provided by the present invention;
Fig. 2 is a schematic image of a solid nodule in an embodiment provided by the present invention;
Fig. 3 is a schematic image of a pure ground-glass opacity in an embodiment provided by the present invention;
Fig. 4 is a schematic image of a mixed ground-glass opacity in an embodiment provided by the present invention;
Fig. 5 is a schematic image of a cavity without a wall in an embodiment provided by the present invention;
Fig. 6 is a schematic image of a thin-walled cavity in an embodiment provided by the present invention;
Fig. 7 is a schematic image of a thick-walled cavity in an embodiment provided by the present invention;
Fig. 8 is a diagram of the organ distribution in the upper torso of the human body in an embodiment provided by the present invention;
Fig. 9 is a specific two-dimensional section of a human chest CT and a schematic diagram of a series of corresponding, relatively easy-to-identify organ structures in an embodiment provided by the present invention;
Fig. 10a is a specific two-dimensional section of a human chest CT and a schematic diagram of the corresponding annotated intrapulmonary air portion in an embodiment provided by the present invention;
Fig. 10b is a schematic diagram in which the intrapulmonary air portion is annotated after the image shown in Fig. 10a has undergone thresholding and connectivity analysis.
Detailed Description of the Embodiments
The technical content of the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
The medical image-assisted diagnosis method combining image recognition and report editing provided by the present invention builds on a preliminary expression of medical anatomical structures and a preliminary lesion recognition capability in medical images, and changes the current workflow by which physicians produce, edit and review imaging reports. When a physician outlines or points at a physiological structure (an organ, a tissue or a lesion) belonging to a particular image region (whether that region belongs to a two-dimensional image, a two-dimensional slice of a three-dimensional image, or a particular part of a specific frame of a dynamic image), the corresponding imaging report description entry for that physiological structure (i.e., the image semantic expression content) is generated automatically or semi-automatically, and the named entities in the entry are linked to the corresponding specific regions of the image.
Compared with the prior art, the present invention is a machine learning method and system for reading medical images that simulates interactive training. By re-engineering the report-reading workflow commonly used by radiologists, it can greatly improve the efficiency of reading, report generation, report editing and report review, while building an artificial-intelligence reading system that can continuously communicate with senior physicians and keep learning to improve its reading and reporting capabilities.
As shown in Fig. 1, the medical image-assisted diagnosis method combining image recognition and report editing provided by the present invention mainly comprises the following steps. First, an image semantic expression knowledge graph is built from a standardized dictionary of the imaging field and analysis of the historically accumulated medical image reports in the lesion image library. Then, the patient's medical image is acquired, a region of interest on the two-dimensional image is determined, and candidate lesion options for the patient are provided according to the image semantic expression knowledge graph and the region of interest. Finally, the lesion type is determined according to the region of interest and the candidate lesion options; the lesion region is segmented according to the lesion type, a structured report associated with the region of interest of the patient's medical image is generated, the lesion region and the corresponding image semantic expression content are added to the corresponding lesion image library, and the structured report is issued to the patient. This process is described in detail below.
S1: Build an image semantic expression knowledge graph from a standardized dictionary of the imaging field and analysis of the historically accumulated medical image reports in the lesion image library. Here, the image semantic expression knowledge graph is a collective term for the medical imaging report knowledge graph and the image semantics system.
Building the image semantic expression knowledge graph of the various organs, lesions and their pathological descriptions from the standardized dictionary of the imaging field and analysis of historically accumulated medical image reports specifically comprises the following steps:
S11: Form a basic list of named entities based on the standardized dictionary of the imaging field.
Based on the standardized dictionary of the imaging field (in the embodiment provided by the present invention, the standardized lexicon RADELX), a basic list of named entities is formed; the named entities include the various organs, lesions, and so on.
S12: Analyze the historically accumulated medical image reports in the lesion image library to form textual specifications describing the characteristics of each named entity.
By analyzing a massive number of medical image reports for different types of medical images (including but not limited to chest CT, mammography, cerebrovascular MRI, cardiovascular ultrasound, etc.) together with expert knowledge, textual specifications are formed for the various characteristic descriptions of the various organ pathologies and lesions.
A medical image report contains descriptions of the state of various organs and of certain local lesions inside them. A current trend is to build, based on the standardized imaging lexicon RADELX, structured reports covering the RADS (reporting and data systems) of the RSNA (Radiological Society of North America) and the ACR (American College of Radiology). Such a structured report explicitly describes the location, nature and grade of the lesions in the image. In general, a specific type of lesion has a relatively clear spatial relationship with the organs, as well as a relatively specific gray-level distribution (including the distribution of gray levels over spatial position) and texture structure, and therefore has a relatively clear semantic expression in the image.
S13: Based on expert knowledge, convert the obtained textual characteristic specifications of each named entity into image semantic expressions, and create the image semantic expression knowledge graph of medical images from each named entity together with its corresponding images and image semantic expression content.
Based on expert knowledge, the obtained textual characteristic specifications of the named entities are converted into image semantic expression content, including spatial attributes, gray-level distribution, texture structure descriptions and so on, thereby constituting the image semantic expression knowledge graph within the medical ontology involved in imaging reports. Besides structured descriptions in text and data, the image semantic expression knowledge graph also contains the annotated image samples (mostly local images) corresponding to each type of named entity (including easily identifiable basic components and lesions).
Several specific examples below illustrate the structure and content of the image semantic expression knowledge graph (a machine-readable sketch of one such entry is given after the third example). For example:
1. Fig. 2 shows an image of a solid nodule; its corresponding image semantic expression is:
Position - contained in: 1) the left or right lung; 2) the trachea or bronchi.
Position - neighbors: 1) surrounded by intrapulmonary air, or 2) attached to the lung wall, or 3) connected to blood vessels, or 4) connected to a bronchus.
Shape: nearly circular (3D: spherical) or oval (3D: cobblestone-like).
Size classification:
Micro-nodule: diameter < 5 mm;
Small nodule: diameter 5-10 mm;
Nodule: diameter 10-20 mm;
Mass: diameter > 20 mm.
Density - spatial distribution:
Boundary: clear (i.e., sharp gray-level change), with or without spiculation.
Corresponding disease conditions:
Micro-nodule: probability of malignancy < 1%; confirm with follow-up at an interval of 6-12 months;
Small nodule: probability of malignancy 25%-30%; CT review at a follow-up interval of 3-6 months (LDCT recommended);
If a pulmonary nodule grows rapidly during follow-up, perform biopsy or surgery (based on the growth rate);
Nodules and masses have a high probability of malignancy; proceed directly to biopsy or surgery.
2. Ground-glass opacity (GGO), also known as a ground-glass nodule; its corresponding image semantic expression is:
Position - contained in: 1) the left or right lung; 2) the trachea or bronchi.
Position - neighbors: 1) surrounded by intrapulmonary air, 2) attached to the lung wall, 3) connected to blood vessels, 4) connected to a bronchus.
Type: pure (PGGO) and mixed (MGGO), as shown in Fig. 3 and Fig. 4, respectively;
Density - spatial distribution: slightly higher density (ground-glass-like) against the low-density background of the lung field, or partially high-density (a solid component whose density distribution resembles that of a solid nodule). The pure type has no solid component (no high-density opacity), while the mixed type contains a high-density portion.
Shape: block-like accumulation.
Boundary: clear or unclear, often spiculated.
3. Cavity; its corresponding image semantic expression is:
Position - contained in: 1) the left or right lung.
Density - spatial distribution:
Worm-eaten: a cavity without a wall, a low-density portion surrounded by slightly higher-density opacity;
Thin-walled: a low-density portion surrounded by a thin (high-density) wall;
Thick-walled: a low-density portion surrounded by a thick (high-density) wall.
Corresponding disease conditions:
Worm-eaten cavity: lobar caseous pneumonia, etc., as shown in Fig. 5;
Thin-walled cavity: secondary pulmonary tuberculosis, etc., as shown in Fig. 6;
Thick-walled cavity: tuberculoma, pulmonary squamous cell carcinoma, etc., as shown in Fig. 7.
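The three entries above pair each named entity with positional constraints, shape, size classes and boundary descriptions. A machine-readable rendering of the solid-nodule entry might look like the sketch below; the field names are hypothetical, and only the thresholds and constraints listed above come from the text.

```python
# Hypothetical encoding of one knowledge-graph entry (field names are illustrative).
SOLID_NODULE = {
    "entity": "solid nodule",
    "contained_in": ["left lung", "right lung", "trachea", "bronchus"],
    "neighbors": ["intrapulmonary air", "lung wall", "blood vessel", "bronchus"],
    "shape_2d": "near-circular or oval",
    "shape_3d": "spherical or cobblestone-like",
    "size_classes_mm": {          # diameter thresholds from the specification above
        "micro-nodule": (0, 5),
        "small nodule": (5, 10),
        "nodule": (10, 20),
        "mass": (20, None),
    },
    "boundary": "clear (sharp gray-level change), spiculation optional",
}

def size_class(diameter_mm: float) -> str:
    """Map a measured diameter to the size class defined in the entry."""
    for name, (lo, hi) in SOLID_NODULE["size_classes_mm"].items():
        if diameter_mm >= lo and (hi is None or diameter_mm < hi):
            return name
    return "unknown"
```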
After initialization, the medical image-assisted diagnosis system has preliminarily constructed the image semantic expression knowledge graph, in which different attribute descriptions are formed for the organs, pathologies and lesions appearing in medical images of different types, purposes and scanned body parts. These attribute descriptions (qualitative or quantitative) are all computable; that is, they can be computed during the feature extraction that follows recognition of a specific object in the medical image, for example the relative spatial position range of a specific lesion in the image, its mean density, standard deviation, entropy, roundness or sphericity, edge spiculation, edge clarity, histogram, the density distribution as a function of distance from the center, the correlation matrix of the texture expression, and so on.
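Several of the computable attributes just listed (mean density, standard deviation, entropy, sphericity) can be derived directly from a lesion mask and the underlying HU values. The NumPy sketch below is a simplified illustration of such feature extraction, not the measure set defined by the patent; the surface-area estimate in particular is a crude voxel-counting assumption, and the mask is assumed non-empty.

```python
# Simple, illustrative attribute computations for a segmented lesion.
import numpy as np

def lesion_attributes(hu_volume: np.ndarray, mask: np.ndarray, voxel_mm: float = 1.0) -> dict:
    """hu_volume: 3D array of HU values; mask: boolean array of the same shape (non-empty)."""
    values = hu_volume[mask]
    counts, _ = np.histogram(values, bins=64)
    p = counts / counts.sum()
    p = p[p > 0]
    entropy = float(-np.sum(p * np.log2(p)))

    volume = mask.sum() * voxel_mm ** 3
    # Surface voxels: masked voxels with at least one unmasked 6-neighbor.
    padded = np.pad(mask, 1)
    core = (padded[2:, 1:-1, 1:-1] & padded[:-2, 1:-1, 1:-1] &
            padded[1:-1, 2:, 1:-1] & padded[1:-1, :-2, 1:-1] &
            padded[1:-1, 1:-1, 2:] & padded[1:-1, 1:-1, :-2])
    surface_area = float((mask & ~core).sum()) * voxel_mm ** 2   # crude estimate
    sphericity = (np.pi ** (1 / 3)) * (6 * volume) ** (2 / 3) / max(surface_area, 1e-6)

    return {
        "mean_hu": float(values.mean()),
        "std_hu": float(values.std()),
        "entropy": entropy,
        "volume_mm3": float(volume),
        "sphericity": float(sphericity),
    }
```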
S2: Acquire the patient's medical image; the system preprocesses it to recognize and locate some basic, easily identifiable components of the image, and uses this basic information, combined with the region of interest drawn by the expert on the two-dimensional image, to provide candidate lesion options for the patient according to the image semantic expression knowledge graph and the region of interest.
To generate a structured report, the patient's personal information and medical image must first be acquired. In the present invention, the technical means for acquiring the patient's medical image include but are not limited to chest CT, abdominal CT, cerebrovascular MRI, breast MRI, abdominal ultrasound, etc. After the patient's medical image is acquired, the system preprocesses it to recognize and locate some basic, easily identifiable components; that is, basic components such as intrapulmonary air, bones and vertebrae are recognized first, and then, combining the recognition and localization of these basic components with the spatial relationship between the ROI (region of interest) drawn manually by the expert and these recognized components, a preliminary list of possible lesion options inside the ROI is determined and then further filtered. Human organs generally have relatively fixed positions, and their positions and appearance in medical images (gray values, etc.) are usually distinct, making them easy to recognize, locate and decompose. Fig. 8 shows the organ distribution of the upper torso of the human body. Taking chest CT (plain or contrast-enhanced) as an example, it is relatively easy to distinguish the vertebrae, trachea, bronchi, lymph nodes, intrapulmonary air, blood vessels and other parts, as well as some large abnormalities such as pleural effusion and large space-occupying lesions. In MRI images and dynamic ultrasound images, these organ parts are likewise easy to recognize and accurately locate, whether by the human eye or by a computer image diagnosis system. These organ parts are relatively easy to recognize and locate preliminarily with image recognition algorithms, through threshold analysis and edge detection combined with position information. Fig. 9 shows a cross-section of a three-dimensional chest CT image: the high-brightness parts are bones, the triangular (gray) part at the bottom center is a vertebra, the large connected black (low-HU) regions on the left and right inside are intrapulmonary air, the several black circular regions in the central part are cross-sections of bronchi, and the blood vessels are the relatively bright line segments or circular/elliptical cross-sections surrounded by intrapulmonary air. Fig. 10b shows the intrapulmonary air portion annotated (in red) after the original lung CT image (Fig. 10a shows its two-dimensional section) has undergone thresholding and connectivity analysis. This recognition and localization strongly constrains the system's reasoning about possible lesion types when the expert draws a region of interest within this area; that is, only lesion types surrounded by or adjacent to this area need further analysis (texture analysis, density-spatial analysis, convolutional neural network matching, etc.), and the most probable lesion types are presented by the system for the expert to choose from.
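The thresholding and connectivity analysis used above to label intrapulmonary air (Fig. 10b) can be sketched as follows. The HU threshold of -400, the use of SciPy, and the heuristic of keeping the two largest components that do not touch the volume border are assumptions made for illustration.

```python
# Label intrapulmonary air in a CT volume by thresholding + connected-component analysis.
import numpy as np
from scipy import ndimage

def intrapulmonary_air_mask(hu_volume: np.ndarray, air_threshold: float = -400.0) -> np.ndarray:
    """Return a boolean mask of the two largest low-HU components inside the body."""
    low_hu = hu_volume < air_threshold                 # air-like voxels
    labels, n = ndimage.label(low_hu)                  # 3D connected components

    # Drop components touching the volume border: that is the air around the patient.
    border_labels = np.unique(np.concatenate([
        labels[0].ravel(), labels[-1].ravel(),
        labels[:, 0].ravel(), labels[:, -1].ravel(),
        labels[:, :, 0].ravel(), labels[:, :, -1].ravel(),
    ]))
    sizes = ndimage.sum(np.ones_like(labels), labels, index=np.arange(1, n + 1))
    candidates = [(int(size), lab) for lab, size in zip(range(1, n + 1), sizes)
                  if lab not in border_labels]
    keep = {lab for _, lab in sorted(candidates, reverse=True)[:2]}  # left and right lung air
    return np.isin(labels, list(keep))
```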
After the patient's medical image is acquired, the expert determines the region of interest on the two-dimensional image based on the image, i.e., the region the expert considers possibly diseased. For example, if a radiologist outlines, in a chest CT image, a region of interest whose main body lies inside the lung, is connected to a blood vessel, and is surrounded by intrapulmonary air (the recognition and localization of intrapulmonary air is shown in Fig. 10), the medical image-assisted diagnosis system can automatically analyze these positional features and, through segmentation within the ROI together with HU density distribution and texture analysis, infer that this is most probably a nodule attached to a vessel; if its solidity is low it may be a ground-glass nodule, and if a large area with disordered texture appears it may be a pulmonary infection. Based on these characteristics, together with the image semantic expression knowledge graph and the region of interest outlined by the expert, the system can, after preliminary computation, automatically pop up an option list that ranks multiple description options by probability, i.e., the patient's candidate lesion options; there may be one or several such options.
Specifically, after the expert outlines the region of interest, a preliminary localization analysis is performed on it. Based on features such as the position, texture and gray-level distribution of the region of interest on the two-dimensional image slice, and according to the image feature models corresponding to the named entities (e.g., certain lesions) in the image semantic expression knowledge graph, the types of named entities (certain specific lesion classes) with similar features that the marked region of interest may contain are determined and pushed to the expert through an interactive interface (as options in a graphical interface or in the form of voice question and answer) for the expert to choose.
The region of interest here can be determined either manually by the expert at the computer or by an image recognition algorithm. Manual determination means that the physician browses and observes through a medical image display system such as PACS, finds a suspected lesion region, and manually outlines it on the two-dimensional image slice in the form of a closed curve, i.e., locates a region of interest on a two-dimensional section.
Determination by an image recognition algorithm means that a computer with a certain level of image-reading capability recognizes, locates and automatically prompts the region through certain lesion recognition algorithms (for example, traditional feature- or rule-based image recognition algorithms, or deep learning algorithms such as CNN and RNN, assisted by transfer learning or reinforcement learning). Alternatively, by comparing with a normal medical image of the same type, the region where the patient's medical image differs from the normal image is found and determined as the region of interest.
S3: Determine the lesion type according to the region of interest and the candidate lesion options; segment the lesion region according to the lesion type, generate a structured report associated with the region of interest of the patient's medical image, and at the same time add the lesion region and the corresponding image semantic expression content to the corresponding lesion image library.
S301: Based on the lesion type to which the determined region of interest belongs, perform localization analysis on the region of interest, compute its spatial position, and segment the lesion region.
The lesion option, and hence the lesion type, is determined from the region of interest determined by the expert and the provided candidate lesion options; localization analysis is performed on the region of interest, its spatial position is computed, the lesion region is segmented, a structured report associated with the region of interest of the patient's medical image is generated, and at the same time the lesion region and the corresponding image semantic expression content are added to the corresponding lesion image library. After the region of interest is determined, performing localization analysis based on the lesion type to which it belongs, computing its spatial position and segmenting the lesion region specifically comprises the following steps:
S3011: Determine the lesion type to which the region of interest belongs according to the region of interest and the candidate lesion options.
As mentioned above, human organs generally have relatively fixed positions, and their positions and appearance in medical images (gray values, etc.) are usually distinct, making them easy to recognize, locate and decompose. For the patient's medical image, localization analysis is performed on the outlined region of interest, and the lesion type to which the region of interest belongs is determined from the outlined region of interest and the selected lesion option.
S3012: Based on the lesion type determined for the region of interest, compute the spatial position of the region of interest and segment the lesion region.
Each class of organ has its own characteristic spatial position, gray-level spatial distribution and texture attributes. For example, an isolated small pulmonary nodule is surrounded in isolation by intrapulmonary air; in the three-dimensional image, after thresholding and connected-component analysis, its surroundings are entirely air components (HU density below a certain threshold and inside the lung), and the HU density values inside the nodule component, as a function of distance from the component center, follow a particular distribution. A small pulmonary nodule connected to vessels is largely surrounded by intrapulmonary air but connects through one or more (high-density) vessels to the trachea/bronchi, the pleura or a lung lobe; in the three-dimensional image, after thresholding and connected-component analysis, its surroundings are entirely air components (HU density below a certain threshold and inside the lung), but narrow high-density vessel branches (HU density above a certain threshold and inside the lung) connect outward. These narrow high-density vessel branches can be filtered out of the image by the morphological opening operator (with spherical structuring elements of different scales), and the internal HU density-spatial distribution differs somewhat from that of an isolated nodule. A small pulmonary nodule attached to the lung wall is partly surrounded by intrapulmonary air but has one side against the lung wall; in the three-dimensional image, after thresholding and connected-component analysis, the morphological opening operator (with spherical structuring elements of different scales) can filter the image and segment out the lesion. There are also other nodule types, such as ground-glass nodules.
Based on the lesion type determined for the patient's medical image, the spatial position of the region and its surroundings are computed, i.e., it is determined whether the region of interest lies within a certain body part or organ and is adjacent to certain organs, and segmentation is performed within the region of interest by various methods (e.g., threshold computation, edge extraction, texture analysis) to further delineate the possible lesion or diseased region within it. To segment the lesion region within the region of interest, the gray values of the two-dimensional image can be used: based on features such as the threshold or texture of that lesion type, the areas that do not meet the threshold or texture requirements are separated out as the lesion region. In other words, the possible lesion or diseased region is obtained by computing the degree of match between the region of interest and known lesion types.
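One concrete operation described above, filtering out narrow high-density vessel branches with a morphological opening using spherical structuring elements of different scales, is sketched below. The radius values and the use of SciPy/scikit-image are assumptions; the per-lesion-type thresholds and texture criteria of the patent are not reproduced here.

```python
# Remove narrow vessel-like branches attached to a candidate nodule mask
# via 3D morphological opening with spherical structuring elements.
import numpy as np
from scipy import ndimage
from skimage.morphology import ball, binary_opening

def strip_vessel_branches(candidate_mask: np.ndarray, radii=(1, 2, 3)) -> np.ndarray:
    """Open with increasing ball radii, then keep the largest surviving component."""
    opened = candidate_mask
    for r in radii:
        opened = binary_opening(opened, ball(r))       # thin branches vanish, the nodule body stays
    labels, n = ndimage.label(opened)
    if n == 0:
        return opened                                  # nothing survived the opening
    sizes = ndimage.sum(np.ones_like(labels), labels, index=np.arange(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)       # largest remaining component = nodule core
```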
The segmented lesion region and the image semantic expression content corresponding to it are added to the report, and a structured report is generated together with the patient information. The image semantic expression content corresponding to the lesion region is a structured lesion description entry, which associates the determined lesion option and its attributes (including the lesion's size, spiculation, clarity, mean density, HU histogram distribution, etc.) with the region of interest (or the region judged to be a lesion) of the medical image. In the embodiment provided by the present invention, the named-entity part of the determined lesion option in the structured report is a hyperlink associated with the region of interest. Specifically, the hyperlink associates the image semantic expression content corresponding to the determined lesion region with that lesion region; by clicking the hyperlink, one can simultaneously view the lesion region (the region of interest in the two-dimensional image, or the spatio-temporal segment segmented from the three-dimensional image or two-dimensional dynamic image) and the corresponding image semantic expression content. This effectively removes the tedium, in existing reports, of having to search the image library for the corresponding images based on the textual description of the lesion region, and improves the efficiency of report review.
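The hyperlink mechanism just described, in which the named entity of a report entry links back to the segmented region, can be illustrated with a small HTML-generating helper. The viewer URL pattern and the field names are hypothetical, not part of the disclosure.

```python
# Illustrative generation of a structured-report entry hyperlinked to its lesion region.
from html import escape

def report_entry_html(study_id: str, lesion_id: str, lesion_type: str,
                      attributes: dict) -> str:
    """Return an HTML fragment whose lesion name links to the segmented region in a viewer."""
    viewer_url = f"/viewer/{escape(study_id)}?lesion={escape(lesion_id)}"   # hypothetical route
    attr_text = "; ".join(f"{escape(str(k))}: {escape(str(v))}" for k, v in attributes.items())
    return f'<li><a href="{viewer_url}">{escape(lesion_type)}</a> ({attr_text})</li>'

# Example call:
# report_entry_html("CT20180928", "L1", "solid nodule",
#                   {"diameter_mm": 8.5, "mean_HU": -35, "spiculation": "absent"})
```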
After the structured report is generated, the lesion region and the corresponding image semantic expression content are added to the corresponding lesion image library as accumulated samples for later updating and supplementing the image semantic expression knowledge graph. In this way experts accumulate samples in the course of their work, with no extra manpower or money spent specifically on data mining, which improves the utility of the structured report.
In the present invention, performing medical image recognition with the image semantic expression knowledge graph combined with various kinds of machine learning, especially deep learning and reinforcement learning, has the advantage that sample images can be accumulated in depth in a planned way, and the annotated lesions of many images can be continuously gathered under the same sub-label. Evolution here is, on the one hand, a matter of quantitative accumulation: as more and more sample images of lesions with the same label accumulate, the samples available for deep learning inevitably increase, and whether CNN, RNN, DNN or LSTM algorithms are used, more samples generally lead to stronger recognition ability and higher recognition sensitivity and specificity. This much is obvious. On the other hand, with the accumulation of more and more annotated lesions, machine learning combined with in-depth manual study can continuously refine lesion labels and further enrich radiomics measures, thereby continuously refining the imaging presentation types of lesions. That is, the attribute descriptions of existing disease conditions will gain additional types, or additional quantified description points. For the former, for example, MGGO and nodules with different spatial-density distributions may be discovered, i.e., new sub-types are added; for the latter, a new measure may be added for the lesion edge, such as edge spiculation, and adding this parameter may improve the accuracy of predicting lesion malignancy from CT or MRI images.
Further, by combining the lesion type descriptions of the image semantic expression knowledge graph with lesion recognition models that already perform well, the medical image-assisted diagnosis system can easily use transfer learning and similar techniques to quickly learn newly emerging lesions or lesions with few samples. For example, masses and non-mass-like enhancing lesions in breast MRI share many similarities in spatial-density distribution with nodules and GGO lesions in chest CT, although the specific parameters differ. These characteristics are well suited, when there are not enough lesion samples of a certain class or label, to cross-domain transfer learning (applying a parameter model obtained from other lesion samples with some similarity in imaging presentation to the lesion samples in question and adjusting the parameters), or to Borrowed Strength parameter estimation.
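Cross-domain transfer learning of the kind described above is commonly implemented by reusing a backbone trained on the data-rich lesion class and retraining only a small task-specific head on the scarce class. The PyTorch sketch below illustrates that general technique under those assumptions; it is not the specific parameter-borrowing scheme of the patent, and the weight file named in the comment is hypothetical.

```python
# Fine-tune a backbone trained on a data-rich lesion class for a scarce class.
import torch
import torch.nn as nn
from torchvision import models

def build_transfer_model(num_target_classes: int) -> nn.Module:
    model = models.resnet18(weights=None)          # in practice: load source-domain weights here
    # model.load_state_dict(torch.load("chest_ct_nodule_backbone.pt"))   # hypothetical file
    for p in model.parameters():                   # freeze the shared feature extractor
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_target_classes)   # new task-specific head
    return model

model = build_transfer_model(num_target_classes=2)      # e.g. mass vs. non-mass-like enhancement
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One update of the new head on a small batch from the scarce lesion class."""
    logits = model(images)                          # images: (B, 3, 224, 224)
    loss = criterion(logits, labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return float(loss)
```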
Second Embodiment
The second embodiment of the present invention provides a medical image-assisted diagnosis method combining image recognition and report editing, which differs from the first embodiment described above as follows.
1. In step S3 of the first embodiment, after the patient's medical image is acquired, the determined region of interest is extended from the two-dimensional image to a three-dimensional image or a two-dimensional dynamic image (a video-like image varying with time); based on the lesion type contained in the region of interest, localization analysis is performed on the region of interest, its spatial position is computed, and the lesion region of the whole image is segmented.
Specifically, after the region of interest is determined, performing localization analysis on it and segmenting the lesion region of the whole image comprises the following steps:
performing localization analysis on the determined region of interest and determining the lesion type to which it belongs;
extending the determined region of interest from the two-dimensional image to a three-dimensional image or a two-dimensional dynamic image, and segmenting the lesion region of the whole image.
Generally, for convenience of operation, the user makes the preliminary outline on a two-dimensional image slice. For a patient's medical image in two-dimensional form, the outlined region of interest is further developed into a corresponding rectangular or cuboid region (for images obtained by CT, MRI, etc.) or into a dynamic image (obtained by ultrasound), and the possible lesion or diseased region is obtained by computing the degree of match between the region of interest and known lesion types.
Next, localization analysis is performed to determine the lesion type to which the region of interest belongs.
For the annotation, outlining and localization of a region of interest or lesion in a three-dimensional image obtained by CT or MRI, the outline is generally limited to the two-dimensional image of a particular spatial slice; the region of interest is outlined on that slice and, based on similar texture and gray-level distribution, is further extrapolated to the multiple adjacent two-dimensional frames before and after (above and below) it. The details are as follows (a simplified sketch of this slice-wise growth is given after Step 4):
Step 1: Based on the lesion type determined by the expert and the shape and texture features corresponding to that lesion type, use the gray values of the two-dimensional image and segment according to features such as the threshold or texture of that lesion type; in some cases, further use two-dimensional mathematical morphology operators or other segmentation operators to separate lesions connected to some part of an organ (for example, a solid nodule attached to the lung wall, or a mass attached to a gland, where the pixel gray levels and texture features of the two connected parts are very similar), so as to obtain, in this two-dimensional sub-image (frame) slice, one or several main closed regions of the closed core lesion region corresponding to this lesion (i.e., the lesion region).
The closed core lesion region must satisfy the following two conditions:
(1) the closed core lesion region is completely contained in the drawn region of interest (it does not connect to the outside);
(2) the proportion of pixels occupied by the closed core lesion region within the region of interest is not lower than a certain figure (e.g., 30%).
Step 2: Starting from this main closed region, extend to the previous and next images in the spatial sequence of the volume, and segment based on features such as the threshold or texture of the lesion type; in some cases, further use two-dimensional mathematical morphology operators or other segmentation operators to separate lesions connected to some part of an organ, so as to obtain one or several closed regions that match the lesion type description. Among these regions, only the closed regions that are connected in three dimensions (generally using 6-neighborhood connectivity) to the previously identified main closed region are merged into the closed core lesion region.
Step 3: Continue the operation of Step 2 and perform a three-dimensional mathematical morphological closing operation to filter out other regions connected to the closed core lesion region in three-dimensional space (for masses and nodules, these are ducts, vessels and some organ glands), until the closed core lesion region no longer grows.
Step 4: In this way, the edge of the closed core lesion region is delineated and annotated at pixel level. At the same time, the maximum and minimum X, Y and Z coordinates of the edge pixels of the closed core lesion region are computed, forming a cuboid in space, i.e., the three-dimensional cuboid region containing the lesion region.
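Steps 1 to 4 above grow a closed core lesion region slice by slice and then take its axis-aligned extents. The sketch below is a simplified version that substitutes plain HU-range thresholding for the lesion-type-specific shape and texture criteria and uses SciPy for connectivity and closing; it assumes the expert's ROI overlaps the lesion on the annotated slice.

```python
# Simplified slice-wise growth of a core lesion region and its 3D bounding cuboid.
import numpy as np
from scipy import ndimage

def grow_core_region(volume: np.ndarray, seed_mask_2d: np.ndarray, z0: int,
                     lo: float, hi: float):
    """volume: 3D HU array (z, y, x); seed_mask_2d: expert ROI on slice z0;
    lo/hi: HU range standing in for the lesion-type-specific criterion."""
    candidate = (volume >= lo) & (volume <= hi)          # voxels compatible with the lesion type
    core = np.zeros_like(candidate)

    # Step 1: main closed region(s) on the annotated slice, restricted to the ROI.
    labels2d, _ = ndimage.label(candidate[z0] & seed_mask_2d)
    core[z0] = labels2d > 0

    # Step 2: extend to neighboring slices, keeping only 2D components that touch the
    # region already accepted on the previous slice (a stand-in for 3D 6-connectivity).
    for direction in (+1, -1):
        z = z0
        while 0 <= z + direction < volume.shape[0]:
            prev, z = core[z], z + direction
            labels2d, _ = ndimage.label(candidate[z])
            touching = np.unique(labels2d[prev & (labels2d > 0)])
            if touching.size == 0:
                break                                    # the core region stopped growing
            core[z] = np.isin(labels2d, touching)

    # Step 3: 3D morphological closing of the grown region.
    core = ndimage.binary_closing(core, structure=np.ones((3, 3, 3)))

    # Step 4: axis-aligned bounding cuboid from the extreme voxel coordinates.
    zs, ys, xs = np.nonzero(core)
    bbox = dict(z=(int(zs.min()), int(zs.max())), y=(int(ys.min()), int(ys.max())),
                x=(int(xs.min()), int(xs.max())))
    return core, bbox
```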
For dynamic ultrasound (B-mode) images, the user's annotation, outlining and localization of a region of interest or lesion is generally limited to one static image of a particular time segment (in ultrasound, usually one or a few frames of the scan captured while dwelling at a certain point). The complete outlining of the region of interest or lesion is achieved by the computer extrapolating, through algorithms (spatial adjacency, texture, gray level, etc.), the user's outline of the region of interest on the slice to the other adjacent two-dimensional frames of the dynamic image.
A characteristic of B-mode ultrasound is that the physician keeps moving the probe of the examination device, and part of the image of the monitored body region also changes continuously over time (e.g., the heart, blood flow). In general, when operating the probe the physician is in one of two states: rapidly moving the probe to search for suspicious regions, or essentially stationary (possibly with slight sliding), focusing on how the ultrasound image within a certain region changes over time (e.g., changes in blood flow). It is usually in the latter state that the physician uses the medical image-assisted diagnosis system to draw a region of interest on the dynamic image presented on the host display. The system determines the time sequence of the dynamic image corresponding to the region of interest through the following steps (a simplified sketch of the probe-motion test is given after Step 3):
Step 1: Preprocess each frame of the dynamic image and output the relatively fixed human organ regions, such as bones, muscles, the core cardiac region (the part common to systole and diastole) and the lung region (the part common to respiration), obtaining the processed dynamic image in real time.
Step 2: Obtain the complete observation frame sequence of the dynamic image during which the probe position is relatively fixed. There are two specific implementations:
(1) Based on a position/motion sensor, determine whether the probe is moving rapidly (searching for a region of interest) or essentially stationary (possibly with slight movement, already focusing on how the image within a certain region changes over time, e.g., changes in blood flow), and directly determine, from the change of probe position over time, the complete observation frame sequence during which the probe position is relatively fixed.
(2) Based on the inter-frame predictive coding and inter-frame correlation algorithms of the MPEG4 compression standard, analyze the processed dynamic image (the output of Step 1) in real time to determine whether the probe is moving rapidly (searching for a region of interest) or essentially stationary (possibly with slight movement, already focusing on how the image within a certain region changes over time, e.g., changes in blood flow), and determine the complete observation frame sequence of the same scene based on analysis of adjacent frames and similar scenes (such algorithms have long been mature in MPEG4).
In this second embodiment, the MPEG4-based compression algorithm includes an algorithm module for detecting whether the scene has changed (including detection of scene scaling, i.e., zooming into details of the same scene or expanding the scene, scene translation, and complete scene switching). Medical dynamic images mainly involve scene translation; complete scene switches are rare and generally occur when the probe is placed on or removed from the body. This is not elaborated further here.
Step 3: Based on the aforementioned region of interest, the lesion type determined by the expert, and the observation frame sequence in which the expert determined the region of interest, completely acquire the full observation frame series in which it is located.
Here, the frame sequence in which the expert determined the region of interest refers to the specific one or several consecutive two-dimensional images (frames) displayed when the expert determined the region of interest. The system extends the two-dimensional image or partial dynamic image series corresponding to the region of interest forward and backward in time to the complete observation frame series (the whole time segment during which the probe position is relatively fixed). The region of interest in each extended frame can be handled simply, remaining confined to the vicinity of the original two-dimensional region of interest; or it can be processed further by image analysis, based on the originally determined two-dimensional region of interest and the lesion type finally determined by the expert, to re-segment a more precise lesion portion in the extended frames.
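The patent detects whether the probe is essentially stationary using the inter-frame prediction and correlation machinery of MPEG4. As a simplified stand-in, the sketch below flags stationary observation sequences with a mean absolute frame difference; the threshold and minimum segment length are assumptions, not values from the disclosure.

```python
# Split an ultrasound frame stream into segments where the probe is essentially stationary.
import numpy as np

def stationary_segments(frames: np.ndarray, diff_threshold: float = 6.0,
                        min_length: int = 10):
    """frames: (T, H, W) grayscale frames after the Step-1 preprocessing.
    Returns (start, end) index pairs of stationary observation sequences."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0)).mean(axis=(1, 2))
    still = diffs < diff_threshold                      # True where consecutive frames barely change

    segments, start = [], None
    for i, flag in enumerate(still):
        if flag and start is None:
            start = i                                   # probe stopped moving: a new segment begins
        elif not flag and start is not None:
            if i - start >= min_length:
                segments.append((start, i))             # a complete observation frame sequence
            start = None
    if start is not None and len(still) - start >= min_length:
        segments.append((start, len(still)))
    return segments
```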
2. Determine the lesion type according to the region of interest and the candidate lesion options; segment the lesion region according to the lesion type, generate a structured report associated with the region of interest of the patient's medical image, and at the same time add the lesion region and the corresponding image semantic expression content to the corresponding lesion image library. Here, when none of the candidate lesion options matches the region of interest, the expert needs to manually enter the image semantic expression content corresponding to the region of interest and send it to other experts for verification; after the verification is passed, the lesion region and the corresponding image semantic expression content are added to the corresponding lesion image library.
Cases in which none of the candidate lesion options matches the region of interest include the case where the candidate lesion options do not contain the lesion option corresponding to the region of interest and the case where the lesion option does not describe the region of interest accurately. The details are as follows:
One possibility in the generated report is that the pushed lesion types are incomplete, i.e., the expert considers that none of the candidate lesion options recorded in the report accurately describes the lesion region, and the corresponding (known) lesion name and some corresponding attributes must be entered manually and inserted into the report. This possibility exists especially when the system is not yet complete.
In this case, the attributes of the lesion and the corresponding local lesion image are recorded and added to this lesion image library, and submitted to other experts for cross-validation as a new finding inconsistent with the system's judgment. Once the new finding is confirmed by a human, the corresponding knowledge (including the lesion region and the corresponding image semantic expression content) is added to the lesion image library and joins the training set as a new training sample; when the system is updated, periodically or otherwise, the new knowledge is added to the image semantic expression knowledge graph. If it is falsified by other experts, the expert's manual entry is corrected and the system's recognition result is adopted.
Another possibility in the generated report is that the recorded candidate lesion options are incomplete and a corresponding (unknown) lesion name and some corresponding attributes must be entered and inserted into the report. This may happen when the medical image-assisted diagnosis system is not yet complete, or when a new type of lesion is discovered.
In this case, the new lesion type, together with its attributes and the corresponding local lesion image, is recorded and added to a temporary lesion image library for the newly added lesion type, and submitted to other experts for cross-validation as a newly discovered lesion. Once the new finding is confirmed by a human, the corresponding knowledge is added to the image semantic expression knowledge graph, and the lesion image is added to the corresponding lesion image library and joins the training set as a new training sample. If it is falsified by other experts, the expert's manual entry is corrected and the system's previous recognition result is adopted.
In general, the medical image-assisted diagnosis system can wait until such newly added samples have accumulated to a certain extent before training. When such samples are found, the system can also learn while samples are scarce, based on expert study of these new samples and other knowledge, combined with a GAN (generative adversarial network) that generates more similar samples, and so on.
Third Embodiment
The third embodiment of the present invention provides a medical image-assisted diagnosis system combining image recognition and report editing. The system comprises a knowledge graph building module, an information acquisition module, a region-of-interest determination module, a candidate lesion option generation module, a lesion region determination module, a report generation module and a correction module. The knowledge graph building module is configured to build the image semantic expression knowledge graph from a standardized dictionary of the imaging field and analysis of historically accumulated medical image reports. The information acquisition module is configured to acquire the patient's medical image. The region-of-interest determination module is used by the expert to determine the region of interest of the patient's medical image based on the image transmitted by the information acquisition module. The candidate lesion option generation module is configured to provide candidate lesion options for the patient according to the image semantic expression knowledge graph transmitted by the knowledge graph building module and the region of interest transmitted by the region-of-interest determination module. The lesion region determination module is configured to determine the lesion type according to the region of interest transmitted by the region-of-interest determination module and the candidate lesion options transmitted by the candidate lesion option generation module, and to segment the lesion region according to the lesion type. The report generation module is configured to generate, from the segmented lesion region and the corresponding image semantic expression content, a structured report associated with the region of interest of the patient's medical image. The correction module is configured to add the lesion region and the corresponding image semantic expression content to the corresponding lesion image library.
The lesion region determination module comprises a lesion type determination unit and a lesion region determination unit. The lesion type determination unit is configured to determine the lesion type, among the candidate lesion options provided by the candidate lesion option generation module, according to the region of interest transmitted by the region-of-interest determination module. The lesion region determination unit is configured to perform localization analysis on the region of interest transmitted by the region-of-interest determination module, segment the lesion region, and determine the lesion option corresponding to the lesion region according to the image semantic expression knowledge graph transmitted by the knowledge graph building module, thereby determining the lesion type.
In summary, the medical image-assisted diagnosis method combining image recognition and report editing provided by the present invention performs medical image recognition by combining the image semantic expression knowledge graph with various kinds of machine learning. It can accumulate sample images in depth in a planned way and continuously improve the knowledge graph, so that the annotated lesions of many images are continuously gathered under the same sub-label. Moreover, as sample images of lesions with the same label accumulate, the samples available for deep learning keep increasing, and more samples generally lead to stronger recognition ability and higher recognition sensitivity and specificity. Beyond that, with the accumulation of more and more annotated lesions, machine learning combined with in-depth manual study can continuously refine lesion labels, further enrich radiomics measures, and thereby continuously refine the imaging presentation types of lesions and enhance the auxiliary diagnosis capability for medical images.
The medical image-assisted diagnosis method and system combining image recognition and report editing provided by the present invention have been described in detail above. For those of ordinary skill in the art, any obvious modification made to the invention without departing from its essential spirit will constitute an infringement of the patent rights of the present invention and will incur the corresponding legal liability.

Claims (12)

  1. A medical image-assisted diagnosis method combining image recognition and report editing, characterized by comprising the following steps:
    S1: building an image semantic expression knowledge graph for medical images;
    S2: acquiring a patient's medical image, determining a region of interest on the two-dimensional image, and providing candidate lesion options for the patient according to the image semantic expression knowledge graph and the region of interest;
    S3: determining the lesion type according to the region of interest and the candidate lesion options; segmenting the lesion region according to the lesion type, generating a structured report associated with the region of interest of the patient's medical image, and at the same time adding the lesion region and the corresponding image semantic expression content to the corresponding lesion image library.
  2. The medical image-assisted diagnosis method according to claim 1, characterized in that in step S1, building the image semantic expression knowledge graph for medical images further comprises the following steps:
    S11: forming a basic list of named entities based on a standardized dictionary of the medical imaging field;
    S12: analyzing the historically accumulated medical image reports in the lesion image library to form textual specifications describing the characteristics of the named entities;
    S13: based on expert knowledge and the local lesion images corresponding to specific lesion types, converting the obtained textual characteristic specifications of the named entities into image semantic expression content, and building the image semantic expression knowledge graph from each named entity together with its corresponding images and image semantic expression content.
  3. The medical image-assisted diagnosis method according to claim 1, characterized in that step S3 further comprises the following step:
    S301: based on the lesion type to which the region of interest belongs, performing localization analysis on the region of interest, computing the spatial position of the region of interest, and segmenting the lesion region.
  4. The medical image-assisted diagnosis method according to claim 1, characterized in that step S3 further comprises the following step:
    S311: based on the lesion type to which the region of interest belongs, performing localization analysis on the determined region of interest and determining the lesion type to which it belongs; extending the determined region of interest from the two-dimensional image to a three-dimensional image or a two-dimensional dynamic image, and segmenting the lesion region of the whole image.
  5. The medical image-assisted diagnosis method according to claim 4, characterized in that when the image type is a three-dimensional image, step S311 further comprises the following steps:
    Step 1: based on the lesion type determined by the expert, combined with the shape and texture features corresponding to the lesion type, using the gray values of the two-dimensional image and segmenting the lesion region according to the connection relationships of the organs, to obtain, in the two-dimensional image slice, the main closed region of the closed core lesion region corresponding to the lesion region;
    Step 2: starting from the main closed region, extending to the previous and next images in the spatial sequence of two-dimensional images, and segmenting the lesion region according to the shape and texture features corresponding to the lesion type and the connection relationships of the organs, to obtain closed regions that match the lesion type description;
    Step 3: continuing the operation of Step 2 and performing a three-dimensional mathematical morphological closing operation to remove other regions connected to the closed core lesion region in three-dimensional space, until the closed core lesion region no longer grows; delineating the edge of the closed core lesion region;
    Step 4: computing the maximum and minimum X, Y and Z coordinates of the edge pixels of the closed core lesion region, thereby forming a three-dimensional cuboid region.
  6. The medical image-assisted diagnosis method according to claim 4, characterized in that when the image type is a two-dimensional dynamic image, step S311 further comprises the following steps:
    Step 1: preprocessing each frame of the dynamic image and outputting images of the relatively fixed human organ regions;
    Step 2: obtaining the complete observation frame sequence of the dynamic image during which the probe position is relatively fixed;
    Step 3: based on the region of interest, the determined lesion type, and the observation frame sequence in which the region of interest was determined, completely acquiring the full observation frame series corresponding to the region of interest.
  7. The medical image-assisted diagnosis method according to claim 6, characterized in that when the scanning probe has its own position/motion sensor, obtaining the complete observation frame sequence with a relatively fixed probe position in Step 2 comprises the following steps:
    determining, based on the position/motion sensor, whether the probe is moving rapidly;
    if the probe is moving rapidly, the examination device is considered to be searching for a region of interest; otherwise, the probe is considered essentially stationary and focusing on how the image within a certain region changes over time;
    determining, based on the change of position over time, the complete observation frame sequence during which the probe position is relatively fixed.
  8. The medical image-assisted diagnosis method according to claim 6, characterized in that when the scanning probe does not have a position/motion sensor of its own, obtaining the complete observation frame sequence with a relatively fixed probe position in Step 2 comprises the following steps:
    analyzing the dynamic image in real time to determine whether the probe is moving rapidly;
    if the probe is moving rapidly, it is considered to be searching for a region of interest; otherwise, the probe is considered essentially stationary and focusing on how the image within a certain region changes over time;
    determining the complete observation frame sequence of the same scene based on analysis of adjacent frames and similar scenes.
  9. The medical image-assisted diagnosis method according to claim 1, characterized in that:
    the structured report contains a hyperlink that associates the image semantic expression content corresponding to the determined lesion region with that lesion region; by clicking the hyperlink, the lesion region shown in the image and the corresponding image semantic expression content can be viewed at the same time.
  10. The medical image-assisted diagnosis method according to claim 1, characterized in that:
    when none of the candidate lesion options matches the region of interest, the image semantic expression content corresponding to the region of interest is entered and sent to other experts for verification; after the verification is passed, the lesion region and the corresponding image semantic expression content are added to the corresponding lesion image library.
  11. A medical image-assisted diagnosis system combining image recognition and report editing, characterized by comprising a knowledge graph building module, an information acquisition module, a region-of-interest determination module, a candidate lesion option generation module, a lesion region determination module, a report generation module and a correction module;
    wherein the knowledge graph building module is configured to build the image semantic expression knowledge graph from a standardized dictionary of the imaging field and analysis of historically accumulated medical image reports;
    the information acquisition module is configured to acquire the patient's medical image;
    the region-of-interest determination module is configured to determine the region of interest of the patient's medical image;
    the candidate lesion option generation module is configured to provide candidate lesion options for the patient according to the image semantic expression knowledge graph and the region of interest;
    the lesion region determination module is configured to determine the lesion type according to the region of interest and the candidate lesion options, and to segment the lesion region according to the lesion type;
    the report generation module is configured to generate, from the segmented lesion region and the corresponding image semantic expression content, a structured report associated with the region of interest of the patient's medical image;
    the correction module is configured to add the lesion region and the corresponding image semantic expression content to the corresponding lesion image library.
  12. The medical image-assisted diagnosis system according to claim 11, characterized in that the lesion region determination module comprises a lesion type determination unit and a lesion region determination unit; wherein
    the lesion type determination unit is configured to determine the lesion type among the provided candidate lesion options according to the region of interest;
    the lesion region determination unit is configured to perform localization analysis on the region of interest, segment the lesion region, and determine the lesion type corresponding to the lesion region according to the image semantic expression knowledge graph;
    the lesion region determination module is configured to perform localization analysis on the region of interest, compute the spatial position of the region of interest, and segment the lesion region.
PCT/CN2018/108311 2017-09-28 2018-09-28 结合影像识别与报告编辑的医学影像辅助诊断方法及系统 WO2019062846A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/833,512 US11101033B2 (en) 2017-09-28 2020-03-28 Medical image aided diagnosis method and system combining image recognition and report editing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710895420.4A CN109583440B (zh) 2017-09-28 2017-09-28 结合影像识别与报告编辑的医学影像辅助诊断方法及系统
CN201710895420.4 2017-09-28

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/833,512 Continuation US11101033B2 (en) 2017-09-28 2020-03-28 Medical image aided diagnosis method and system combining image recognition and report editing

Publications (1)

Publication Number Publication Date
WO2019062846A1 true WO2019062846A1 (zh) 2019-04-04

Family

ID=65900854

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/108311 WO2019062846A1 (zh) 2017-09-28 2018-09-28 结合影像识别与报告编辑的医学影像辅助诊断方法及系统

Country Status (3)

Country Link
US (1) US11101033B2 (zh)
CN (1) CN109583440B (zh)
WO (1) WO2019062846A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111028173A (zh) * 2019-12-10 2020-04-17 北京百度网讯科技有限公司 图像增强方法、装置、电子设备及可读存储介质
CN111275707A (zh) * 2020-03-13 2020-06-12 北京深睿博联科技有限责任公司 肺炎病灶分割方法和装置
CN111325767A (zh) * 2020-02-17 2020-06-23 杭州电子科技大学 基于真实场景的柑橘果树图像集合的合成方法
CN117476163A (zh) * 2023-12-27 2024-01-30 万里云医疗信息科技(北京)有限公司 用于确定疾病结论的方法、装置以及存储介质

Families Citing this family (75)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6984020B2 (ja) * 2018-07-31 2021-12-17 オリンパス株式会社 画像解析装置および画像解析方法
GB201817049D0 (en) * 2018-10-19 2018-12-05 Mirada Medical Ltd System and method for automatic delineation of medical images
CN110111864B (zh) * 2019-04-15 2023-05-26 中山大学 一种基于关系模型的医学报告生成系统及其生成方法
CN110162639A (zh) * 2019-04-16 2019-08-23 深圳壹账通智能科技有限公司 识图知意的方法、装置、设备及存储介质
CN110097969A (zh) * 2019-05-10 2019-08-06 安徽科大讯飞医疗信息技术有限公司 一种诊断报告的分析方法、装置及设备
CN111986137B (zh) * 2019-05-21 2024-06-28 梁红霞 生物器官病变检测方法、装置、设备及可读存储介质
CN110136809B (zh) * 2019-05-22 2022-12-27 腾讯科技(深圳)有限公司 一种医疗图像处理方法、装置、电子医疗设备和存储介质
CN110223761B (zh) * 2019-06-13 2023-08-22 上海联影医疗科技股份有限公司 一种勾勒数据导入方法、装置、电子设备及存储介质
CN110379492A (zh) * 2019-07-24 2019-10-25 复旦大学附属中山医院青浦分院 一种全新的ai+pacs系统及其检查报告构建方法
CN110600122B (zh) * 2019-08-23 2023-08-29 腾讯医疗健康(深圳)有限公司 一种消化道影像的处理方法、装置、以及医疗系统
CN110610181B (zh) * 2019-09-06 2024-08-06 腾讯科技(深圳)有限公司 医学影像识别方法及装置、电子设备及存储介质
CN110738655B (zh) * 2019-10-23 2024-04-26 腾讯科技(深圳)有限公司 影像报告生成方法、装置、终端及存储介质
CN110853743A (zh) * 2019-11-15 2020-02-28 杭州依图医疗技术有限公司 医学影像的显示方法、信息处理方法及存储介质
CN110946615B (zh) * 2019-11-19 2023-04-25 苏州佳世达电通有限公司 超声波诊断装置及使用其的操作方法
CN113261012B (zh) * 2019-11-28 2022-11-11 华为云计算技术有限公司 处理图像的方法、装置及系统
CN111179227B (zh) * 2019-12-16 2022-04-05 西北工业大学 基于辅助诊断和主观美学的乳腺超声图像质量评价方法
CN111048170B (zh) * 2019-12-23 2021-05-28 山东大学齐鲁医院 基于图像识别的消化内镜结构化诊断报告生成方法与系统
CN112365436B (zh) * 2020-01-09 2023-04-07 西安邮电大学 一种针对ct影像的肺结节恶性度分级系统
CN113254608A (zh) * 2020-02-07 2021-08-13 台达电子工业股份有限公司 通过问答生成训练数据的系统及其方法
CN111311705B (zh) * 2020-02-14 2021-06-04 广州柏视医疗科技有限公司 基于webgl的高适应性医学影像多平面重建方法及系统
CN113314202A (zh) * 2020-02-26 2021-08-27 张瑞明 基于大数据来处理医学影像的系统
CN111369532A (zh) * 2020-03-05 2020-07-03 北京深睿博联科技有限责任公司 乳腺x射线影像的处理方法和装置
CN111047609B (zh) * 2020-03-13 2020-07-24 北京深睿博联科技有限责任公司 肺炎病灶分割方法和装置
CN111339076A (zh) * 2020-03-16 2020-06-26 北京大学深圳医院 肾脏病理报告镜检数据处理方法、装置及相关设备
CN111563877B (zh) * 2020-03-24 2023-09-26 北京深睿博联科技有限责任公司 一种医学影像的生成方法及装置、显示方法及存储介质
CN111563876B (zh) * 2020-03-24 2023-08-25 北京深睿博联科技有限责任公司 一种医学影像的获取方法、显示方法
CN111430014B (zh) * 2020-03-31 2023-08-04 杭州依图医疗技术有限公司 腺体医学影像的显示方法、交互方法及存储介质
TWI783219B (zh) * 2020-04-01 2022-11-11 緯創資通股份有限公司 醫學影像辨識方法及醫學影像辨識裝置
CN111476775B (zh) * 2020-04-07 2021-11-16 广州柏视医疗科技有限公司 Dr征象识别装置和方法
CN111667897A (zh) * 2020-04-24 2020-09-15 杭州深睿博联科技有限公司 一种影像诊断结果的结构化报告系统
CN111554369B (zh) * 2020-04-29 2023-08-04 杭州依图医疗技术有限公司 医学数据的处理方法、交互方法及存储介质
CN111681737B (zh) * 2020-05-07 2023-12-19 陈�峰 用于建设肝癌影像数据库的结构化报告系统及方法
CN111528907A (zh) * 2020-05-07 2020-08-14 万东百胜(苏州)医疗科技有限公司 一种超声影像肺炎辅助诊断方法及系统
CN111507979A (zh) * 2020-05-08 2020-08-07 延安大学 一种医学影像计算机辅助分析方法
CN111507978A (zh) * 2020-05-08 2020-08-07 延安大学 一种泌尿外科用智能数字影像处理系统
CN111681730B (zh) * 2020-05-22 2023-10-27 上海联影智能医疗科技有限公司 医学影像报告的分析方法和计算机可读存储介质
CN111768844B (zh) * 2020-05-27 2022-05-13 中国科学院大学宁波华美医院 用于ai模型训练的肺部ct影像标注方法
CN111640503B (zh) * 2020-05-29 2023-09-26 上海市肺科医院 一种晚期肺癌患者的肿瘤突变负荷的预测系统及方法
CN111933251B (zh) * 2020-06-24 2021-04-13 安徽影联云享医疗科技有限公司 一种医学影像标注方法及系统
CN111951952A (zh) * 2020-07-17 2020-11-17 北京欧应信息技术有限公司 一种基于医疗影像信息自动诊断骨科疾病的装置
CN112530550A (zh) * 2020-12-10 2021-03-19 武汉联影医疗科技有限公司 影像报告生成方法、装置、计算机设备和存储介质
EP4170670A4 (en) * 2020-07-17 2023-12-27 Wuhan United Imaging Healthcare Co., Ltd. METHOD AND SYSTEM FOR PROCESSING MEDICAL DATA
US11883687B2 (en) 2020-09-08 2024-01-30 Shanghai United Imaging Healthcare Co., Ltd. X-ray imaging system for radiation therapy
EP3985679A1 (en) * 2020-10-19 2022-04-20 Deepc GmbH Technique for providing an interactive display of a medical image
CN112401915A (zh) * 2020-11-19 2021-02-26 华中科技大学同济医学院附属协和医院 一种新冠肺炎ct复查的图像融合比对方法
CN112420150B (zh) * 2020-12-02 2023-11-14 沈阳东软智能医疗科技研究院有限公司 医学影像报告的处理方法、装置、存储介质及电子设备
CN112419340B (zh) * 2020-12-09 2024-06-28 东软医疗系统股份有限公司 脑脊液分割模型的生成方法、应用方法及装置
CN112669925A (zh) * 2020-12-16 2021-04-16 华中科技大学同济医学院附属协和医院 一种新冠肺炎ct复查的报告模板及形成方法
CN112599216B (zh) * 2020-12-31 2021-08-31 四川大学华西医院 脑肿瘤mri多模态标准化报告输出系统及方法
CN112863649B (zh) * 2020-12-31 2022-07-19 四川大学华西医院 玻璃体内肿瘤影像结果输出系统及方法
US20220284542A1 (en) * 2021-03-08 2022-09-08 Embryonics LTD Semantically Altering Medical Images
CN113160166B (zh) * 2021-04-16 2022-02-15 宁波全网云医疗科技股份有限公司 通过卷积神经网络模型进行医学影像数据挖掘工作方法
WO2022252107A1 (zh) * 2021-06-01 2022-12-08 眼灵(上海)智能科技有限公司 一种基于眼部图像的疾病检测系统及方法
CN113658107A (zh) * 2021-07-21 2021-11-16 杭州深睿博联科技有限公司 一种基于ct图像的肝脏病灶诊断方法及装置
JP2023027663A (ja) * 2021-08-17 2023-03-02 富士フイルム株式会社 学習装置、方法およびプログラム、並びに情報処理装置、方法およびプログラム
CN113486195A (zh) * 2021-08-17 2021-10-08 深圳华声医疗技术股份有限公司 超声图像处理方法、装置、超声设备及存储介质
CN113592857A (zh) * 2021-08-25 2021-11-02 桓由之 医学影像中图形要素的识别、提取和标注的方法
CN113763345A (zh) * 2021-08-31 2021-12-07 苏州复颖医疗科技有限公司 医学影像病灶位置查看方法、系统、设备及存储介质
CN113838560A (zh) * 2021-09-09 2021-12-24 王其景 一种基于医学影像的远程诊断系统及方法
CN113889213A (zh) * 2021-12-06 2022-01-04 武汉大学 超声内镜报告的生成方法、装置、计算机设备及存储介质
US12100512B2 (en) * 2021-12-21 2024-09-24 National Cheng Kung University Medical image project management platform
CN114530224A (zh) * 2022-01-18 2022-05-24 深圳市智影医疗科技有限公司 基于医学影像的诊断报告辅助生成方法及系统
CN114463323B (zh) * 2022-02-22 2023-09-08 数坤(上海)医疗科技有限公司 一种病灶区域识别方法、装置、电子设备和存储介质
CN114565582B (zh) * 2022-03-01 2023-03-10 佛山读图科技有限公司 一种医学图像分类和病变区域定位方法、系统及存储介质
CN114972806A (zh) * 2022-05-12 2022-08-30 上海工程技术大学 一种基于计算机视觉的医学图像分析方法
CN114708952B (zh) * 2022-06-02 2022-10-04 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) 一种图像标注方法、装置、存储介质和电子设备
CN114724670A (zh) * 2022-06-02 2022-07-08 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) 一种医学报告生成方法、装置、存储介质和电子设备
CN115295125B (zh) * 2022-08-04 2023-11-17 天津市中西医结合医院(天津市南开医院) 一种基于人工智能的医学影像文件管理系统及方法
CN115063425B (zh) * 2022-08-18 2022-11-11 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) 基于读片知识图谱的结构化检查所见生成方法及系统
CN115062165B (zh) * 2022-08-18 2022-12-06 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) 基于读片知识图谱的医学影像诊断方法及装置
JP2024032079A (ja) * 2022-08-29 2024-03-12 富士通株式会社 病変検出方法および病変検出プログラム
CN116779093B (zh) * 2023-08-22 2023-11-28 青岛美迪康数字工程有限公司 一种医学影像结构化报告的生成方法、装置和计算机设备
CN116797889B (zh) * 2023-08-24 2023-12-08 青岛美迪康数字工程有限公司 医学影像识别模型的更新方法、装置和计算机设备
CN117457142A (zh) * 2023-11-17 2024-01-26 浙江飞图影像科技有限公司 用于报告生成的医学影像处理系统及方法
CN118334017B (zh) * 2024-06-12 2024-09-10 中国人民解放军总医院第八医学中心 一种面向呼吸道传染病的风险辅助评估方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102365641A (zh) * 2009-03-26 2012-02-29 皇家飞利浦电子股份有限公司 基于诊断信息自动检索报告模板的系统
CN103793611A (zh) * 2014-02-18 2014-05-14 中国科学院上海技术物理研究所 医学信息的可视化方法和装置
CN105184074A (zh) * 2015-09-01 2015-12-23 哈尔滨工程大学 一种基于多模态医学影像数据模型的医学数据提取和并行加载方法
CN106909778A (zh) * 2017-02-09 2017-06-30 北京市计算中心 一种基于深度学习的多模态医学影像识别方法及装置
CN107103187A (zh) * 2017-04-10 2017-08-29 四川省肿瘤医院 基于深度学习的肺结节检测分级与管理的方法及系统

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9390236B2 (en) * 2009-05-19 2016-07-12 Koninklijke Philips N.V. Retrieving and viewing medical images
US8799013B2 (en) * 2009-11-24 2014-08-05 Penrad Technologies, Inc. Mammography information system
US8311303B2 (en) * 2010-01-12 2012-11-13 Siemens Corporation Method and system for semantics driven image registration
US10282840B2 (en) * 2010-07-21 2019-05-07 Armin Moehrle Image reporting method
US9014485B2 (en) * 2010-07-21 2015-04-21 Armin E. Moehrle Image reporting method
CN102156715A (zh) * 2011-03-23 2011-08-17 中国科学院上海技术物理研究所 面向医学影像数据库的基于多病灶区域特征的检索系统
US9349186B2 (en) * 2013-02-11 2016-05-24 General Electric Company Systems and methods for image segmentation using target image intensity
US9721340B2 (en) * 2013-08-13 2017-08-01 H. Lee Moffitt Cancer Center And Research Institute, Inc. Systems, methods and devices for analyzing quantitative information obtained from radiological images
KR101576047B1 (ko) * 2014-01-17 2015-12-09 주식회사 인피니트헬스케어 의료 영상 판독 과정에서 구조화된 관심 영역 정보 생성 방법 및 그 장치
KR20150108701A (ko) * 2014-03-18 2015-09-30 삼성전자주식회사 의료 영상 내 해부학적 요소 시각화 시스템 및 방법
US20180092696A1 (en) * 2015-02-05 2018-04-05 Koninklijke Philips N.V. Contextual creation of report content for radiology reporting
CN106021281A (zh) * 2016-04-29 2016-10-12 京东方科技集团股份有限公司 医学知识图谱的构建方法、其装置及其查询方法
US9589374B1 (en) * 2016-08-01 2017-03-07 12 Sigma Technologies Computer-aided diagnosis system for medical images using deep convolutional neural networks
CN106295186B (zh) * 2016-08-11 2019-03-15 中国科学院计算技术研究所 一种基于智能推理的辅助疾病诊断的系统
CN106776711B (zh) * 2016-11-14 2020-04-07 浙江大学 一种基于深度学习的中文医学知识图谱构建方法
CN106933994B (zh) * 2017-02-27 2020-07-31 广东省中医院 一种基于中医药知识图谱的核心症证关系构建方法
US10169873B2 (en) * 2017-03-23 2019-01-01 International Business Machines Corporation Weakly supervised probabilistic atlas generation through multi-atlas label fusion
CN107145744B (zh) * 2017-05-08 2018-03-02 合肥工业大学 医学知识图谱的构建方法、装置及辅助诊断方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102365641A (zh) * 2009-03-26 2012-02-29 皇家飞利浦电子股份有限公司 基于诊断信息自动检索报告模板的系统
CN103793611A (zh) * 2014-02-18 2014-05-14 中国科学院上海技术物理研究所 医学信息的可视化方法和装置
CN105184074A (zh) * 2015-09-01 2015-12-23 哈尔滨工程大学 一种基于多模态医学影像数据模型的医学数据提取和并行加载方法
CN106909778A (zh) * 2017-02-09 2017-06-30 北京市计算中心 一种基于深度学习的多模态医学影像识别方法及装置
CN107103187A (zh) * 2017-04-10 2017-08-29 四川省肿瘤医院 基于深度学习的肺结节检测分级与管理的方法及系统

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111028173A (zh) * 2019-12-10 2020-04-17 北京百度网讯科技有限公司 图像增强方法、装置、电子设备及可读存储介质
CN111028173B (zh) * 2019-12-10 2023-11-17 北京百度网讯科技有限公司 图像增强方法、装置、电子设备及可读存储介质
CN111325767A (zh) * 2020-02-17 2020-06-23 杭州电子科技大学 基于真实场景的柑橘果树图像集合的合成方法
CN111325767B (zh) * 2020-02-17 2023-06-02 杭州电子科技大学 基于真实场景的柑橘果树图像集合的合成方法
CN111275707A (zh) * 2020-03-13 2020-06-12 北京深睿博联科技有限责任公司 肺炎病灶分割方法和装置
CN111275707B (zh) * 2020-03-13 2023-08-25 北京深睿博联科技有限责任公司 肺炎病灶分割方法和装置
CN117476163A (zh) * 2023-12-27 2024-01-30 万里云医疗信息科技(北京)有限公司 用于确定疾病结论的方法、装置以及存储介质
CN117476163B (zh) * 2023-12-27 2024-03-08 万里云医疗信息科技(北京)有限公司 用于确定疾病结论的方法、装置以及存储介质

Also Published As

Publication number Publication date
US20200303062A1 (en) 2020-09-24
US11101033B2 (en) 2021-08-24
CN109583440B (zh) 2021-12-17
CN109583440A (zh) 2019-04-05

Similar Documents

Publication Publication Date Title
US11101033B2 (en) Medical image aided diagnosis method and system combining image recognition and report editing
US11488306B2 (en) Immediate workup
CN112086197B (zh) 基于超声医学的乳腺结节检测方法及系统
JP5222082B2 (ja) 情報処理装置およびその制御方法、データ処理システム
CN111214255A (zh) 一种医学超声图像计算机辅助诊断方法
CN114782307A (zh) 基于深度学习的增强ct影像直肠癌分期辅助诊断系统
WO2020027228A1 (ja) 診断支援システム及び診断支援方法
Zhao et al. Automatic thyroid ultrasound image classification using feature fusion network
CN113855079A (zh) 基于乳腺超声影像的实时检测和乳腺疾病辅助分析方法
Yang et al. Assessing inter-annotator agreement for medical image segmentation
KR20220124665A (ko) 의료용 인공 신경망 기반 사용자 선호 스타일을 제공하는 의료 영상 판독 지원 장치 및 방법
Ma et al. A novel deep learning framework for automatic recognition of thyroid gland and tissues of neck in ultrasound image
US20150065868A1 (en) System, method, and computer accessible medium for volumetric texture analysis for computer aided detection and diagnosis of polyps
KR20210060923A (ko) 의료용 인공 신경망 기반 대표 영상을 제공하는 의료 영상 판독 지원 장치 및 방법
Peña-Solórzano et al. Findings from machine learning in clinical medical imaging applications–Lessons for translation to the forensic setting
Fan et al. Research on abnormal target detection method in chest radiograph based on YOLO v5 algorithm
CN117218419B (zh) 一种胰胆肿瘤分型分级分期的评估系统和评估方法
Li et al. A dual attention-guided 3D convolution network for automatic segmentation of prostate and tumor
WO2023198166A1 (zh) 图像检测方法、系统、装置及存储介质
CN115690556B (zh) 一种基于多模态影像学特征的图像识别方法及系统
CN114757894A (zh) 一种骨肿瘤病灶分析系统
Tan et al. A segmentation method of lung parenchyma from chest CT images based on dual U-Net
WO2022112731A1 (en) Decision for double reader
Wu et al. B-ultrasound guided venipuncture vascular recognition system based on deep learning
Begimov EXTRACTING TAGGING FROM EXOCARDIOGRAPHIC IMAGES VIA MACHINE LEARNING ALGORITHMICS

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18863637

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18863637

Country of ref document: EP

Kind code of ref document: A1