US11101033B2 - Medical image aided diagnosis method and system combining image recognition and report editing - Google Patents
- Publication number
- US11101033B2 (application No. US 16/833,512)
- Authority
- US
- United States
- Prior art keywords
- image
- roi
- focus
- region
- medical image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H15/00—ICT specially adapted for medical reports, e.g. generation or transmission thereof
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N99/00—Subject matter not provided for in other groups of this subclass
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three-dimensional [3D] modelling for computer graphics
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
- G06T7/0014—Biomedical image inspection using an image reference approach
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/24—Character recognition characterised by the processing or recognition method
- G06V30/242—Division of the character sequences into groups prior to recognition; Selection of dictionaries
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20036—Morphological image processing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
- G06T2207/30064—Lung nodule
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Definitions
- the invention relates to a medical image aided diagnosis method, more particularly, to a medical image aided diagnosis method combining image recognition and report editing, and also to a corresponding medical image aided diagnosis system, which belong to the technical field of medical image aided diagnosis.
- the primary technical problem to be solved by the invention is to provide a medical image aided diagnosis method combining image recognition and report editing.
- Another technical problem to be solved by the invention is to provide a medical image aided diagnosis system combining image recognition and report editing.
- a medical image aided diagnosis method combining image recognition and report editing.
- the method includes the following steps:
- step S1 further includes the following steps:
- step S3 further includes the following steps:
- step S311 further includes the following steps:
- step 1 calling a gray value of the two-dimensional image based on a focus type determined by an expert in combination with shape and texture features corresponding to the focus type, dividing the lesion region according to a connection relationship of organs, and obtaining a main closed region of a closed core focus region corresponding to the lesion region in a two-dimensional image section;
- step 2 extending to previous and next images in a spatial sequence of the two-dimensional image based on the main closed region, dividing the lesion region according to the connection relationship of organs based on shape and texture features corresponding to the focus type, and obtaining a closed region that matches the description of the focus type;
- step 3 continuing the operation in step 2, performing a mathematical morphological closed operation in a three-dimensional space, removing other regions connected to the closed core focus region in the three-dimensional space until the closed core focus region no longer grows, and delineating a closed core focus region edge;
- step 4 calculating maximum and minimum values of X, Y, and Z axes in edge pixel point coordinates of the closed core focus region, so as to form a three-dimensional cube region.
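Step 4 above amounts to taking per-axis minima and maxima over the edge pixel coordinates of the closed core focus region; a minimal sketch with made-up edge coordinates (the values are illustrative, not from the patent):

```python
import numpy as np

# Hypothetical edge voxels of a segmented core focus region, as (x, y, z) coordinates.
edge_voxels = np.array([
    [12, 40, 7], [15, 44, 9], [13, 42, 8], [11, 41, 7], [14, 43, 10],
])

# Step 4: per-axis minima and maxima form the enclosing three-dimensional cube.
mins = edge_voxels.min(axis=0)   # smallest X, Y, Z among edge points
maxs = edge_voxels.max(axis=0)   # largest X, Y, Z among edge points
bounding_box = list(zip(mins.tolist(), maxs.tolist()))
print(bounding_box)  # [(11, 15), (40, 44), (7, 10)]
```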
- step S311 further includes the following steps:
- step 1 pre-processing each frame in a dynamic image, and outputting a relatively fixed image of a human organ region
- step 2 obtaining a complete sequence of observation frames with relatively fixed probe positions in the dynamic image
- step 3 obtaining a complete sequence of observation frames corresponding to the ROI based on the ROI, the determined focus type, and the sequence of observation frames in which the ROI is determined.
- obtaining a complete sequence of observation frames with relatively fixed probe positions in the dynamic image in step 2 includes the following steps:
- if the probe is moving fast, it is considered that the detection instrument is looking for the ROI; otherwise, the probe is considered basically stationary and is focusing on the change of an image in a certain region over time;
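The probe-motion test in step 2 can be approximated by a simple inter-frame difference: a large mean change between consecutive frames suggests the probe is being moved to search for the ROI, a small one that it is held still. The threshold value below is an illustrative assumption:

```python
import numpy as np

def probe_is_moving(prev_frame, frame, threshold=10.0):
    """Heuristic from step 2: large inter-frame change suggests the probe is
    being moved to find the ROI; small change suggests it is held still.
    The threshold is an assumed tuning value, not specified by the patent."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    return float(diff.mean()) > threshold

rng = np.random.default_rng(0)
still = rng.integers(0, 255, (64, 64)).astype(np.uint8)
moving = rng.integers(0, 255, (64, 64)).astype(np.uint8)

print(probe_is_moving(still, still))    # False: identical frames, probe stationary
print(probe_is_moving(still, moving))   # True: unrelated frames, probe moving
```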
- the structured report contains a hyperlink to the image semantic representation corresponding to a determined lesion region, and the lesion region displayed in the image and its corresponding image semantic representation can be viewed simultaneously by clicking the hyperlink.
- the image semantic representation corresponding to the ROI is input and sent to other experts for verification, and after the verification is passed, the lesion region and the corresponding image semantic representation are added to the corresponding focus image library.
- a medical image aided diagnosis system combining image recognition and report editing.
- the system includes: a knowledge graph establishment module, an information acquisition module, an ROI determination module, a candidate focus option generation module, a lesion region determination module, a report generation module, and a correction module.
- the knowledge graph establishment module is configured to establish an image semantic representation knowledge graph according to a standardized dictionary library in the field of images and historically accumulated medical image report analysis.
- the information acquisition module is configured to acquire a medical image of a patient.
- the ROI determination module is configured to determine an ROI of the medical image of the patient.
- the candidate focus option generation module is configured to provide candidate focus options of the patient according to the image semantic representation knowledge graph and the ROI.
- the lesion region determination module is configured to determine a focus type according to the ROI and the candidate focus options, and perform division (extraction) to obtain a lesion region according to the focus type.
- the report generation module is configured to generate a structured report related to the ROI of the medical image of the patient according to the divided lesion region and corresponding image semantic representation.
- the correction module is configured to add the lesion region and the corresponding image semantic representation into a corresponding focus image library.
- the lesion region determination module includes a focus type determination unit and a lesion region determination unit.
- the focus type determination unit is configured to determine a focus type in the provided candidate focus options according to the ROI.
- the lesion region determination unit is configured to perform localization analysis on the ROI, perform division to obtain a lesion region, and determine a lesion type corresponding to the lesion region according to the image semantic representation knowledge graph.
- the lesion region determination module is configured to perform localization analysis on the ROI, calculate a spatial position of the ROI, and perform division to obtain a lesion region.
- an image semantic representation knowledge graph and a variety of machine learning are combined to perform medical image recognition, sample images can be systematically and deeply accumulated, and the image semantic representation knowledge graph can be continuously improved, so that labeled focuses of many images can be continuously collected under the same sub-label.
- labeling of the focuses can be continuously refined by means of machine learning in combination with manual in-depth research, thereby further enriching the measures of radiomics and enhancing aided analysis capabilities of medical images.
- FIG. 1 is a flowchart of a medical image aided diagnosis method combining image recognition and report editing according to the invention.
- FIG. 2 is a schematic image diagram of solid nodules in an embodiment provided by the invention.
- FIG. 3 is a schematic image diagram of pure ground glass opacity (PGGO) in an embodiment provided by the invention.
- FIG. 4 is a schematic image diagram of mixed ground glass opacity (MGGO) in an embodiment provided by the invention.
- FIG. 5 is a schematic image diagram of a wall-less cavity in an embodiment provided by the invention.
- FIG. 6 is a schematic image diagram of a thin-walled cavity in an embodiment provided by the invention.
- FIG. 7 is a schematic image diagram of a thick-walled cavity in an embodiment provided by the invention.
- FIG. 8 is a distribution diagram of organs in the upper torso of a human body in an embodiment provided by the invention.
- FIG. 9 is a schematic structure diagram of a specific two-dimensional section of human chest and lung CT and a corresponding series of easy-to-recognize organs in an embodiment provided by the invention.
- FIG. 10 a is a schematic structure diagram of a specific two-dimensional section view of human chest and lung CT and corresponding labeling of a pulmonary air part in an embodiment provided by the invention.
- FIG. 10 b is a schematic diagram of labeling a pulmonary air part after threshold processing and connectivity analysis of the schematic structure diagram shown in FIG. 10 a.
- in the medical image aided diagnosis method combining image recognition and report editing provided by the invention, based on preliminary medical anatomical structure expressions in medical images and preliminary focus (focus region) recognition capabilities, the current process of producing, editing and reviewing image reports by doctors is changed.
- for a certain physiological structure (organ, tissue or focus), an image report description content entry (i.e. a content of an image semantic representation) corresponding to the physiological structure can be generated automatically or semi-automatically, and named entities in the entry are linked to a specific region of a corresponding image.
- the invention is a medical image reading machine learning method and system simulating interactive training.
- by simulating a reading and report process commonly used by image department doctors, the efficiency of reading and report generation, as well as the efficiency of report editing and reviewing, can be greatly improved, and an artificial intelligence reading technical system capable of continuing to communicate with superior doctors and continuously learning to improve reading and reporting capabilities is constructed.
- the medical image aided diagnosis method combining image recognition and report editing mainly includes the following steps. Firstly, an image semantic representation knowledge graph is established according to a standardized dictionary library in the field of images and analysis of historically accumulated medical image reports in a focus image library. Subsequently, a medical image of a patient is acquired to determine an ROI on the two-dimensional image of the patient, and then candidate focus options of the patient are provided according to the image semantic representation knowledge graph and the ROI. Finally, a focus type is determined according to the ROI and the candidate focus options, a lesion region is obtained through division according to the focus type, and a structured report related to the ROI of the medical image of the patient is generated. Meanwhile, the lesion region and corresponding image semantic representation contents are added into the corresponding focus image library, and the structured report is delivered to the patient. The process will be described in detail below.
- an image semantic representation knowledge graph is established according to a standardized dictionary library in the field of images and accumulated medical image report analysis in a focus image library.
- the image semantic representation knowledge graph here is a general term of a medical image report knowledge graph and an image semantic system.
- the basic list of named entities is formed based on a standardized dictionary library in the field of images (in the embodiments provided by the invention, based on the standardized dictionary library RadLex).
- the named entities include various organs and focuses.
- a characteristic description text specification for each named entity is formed by analyzing the accumulated medical image reports in the focus image library.
- the medical image reports contain state descriptions of various organs and some local focus descriptions therein.
- a current trend is to establish Reporting and Data System (RADS) structured reports, promoted by the Radiological Society of North America (RSNA) and the American College of Radiology (ACR), based on the standardized dictionary library RadLex in the field of images.
- the structured report will clearly describe the position, nature and grade of a lesion in an image.
- a spatial position relationship between the focuses of specific types and the organs is relatively clear, and there are relatively specific gray distributions (including, the distribution of gray in spatial positions) and texture structures, so there is a clear semantic representation to the image.
- the characteristic description text specification obtained for each named entity is transformed into an image semantic representation based on expert knowledge; the image semantic representation knowledge graph of the medical image is then created from each named entity, together with the image and the image semantic representation corresponding to that entity.
- the characteristic description text specification for the named entities is transformed into the image semantics representation, which includes spatial attributes, gray distributions and texture structure descriptions, thereby forming the image semantic representation knowledge graph in a medical ontology involved in an image report.
- the image semantic representation knowledge graph in addition to structured descriptions of text and data, includes labeled samples of images (most of them being local images) corresponding to each named entity (including easy-to-recognize basic components and focuses) type.
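As a concrete illustration of such a graph entry, one named entity with its structured attribute fields and labeled-sample slot might be sketched as follows (all field names are assumptions for illustration, not defined by the patent):

```python
# Illustrative entry of the image semantic representation knowledge graph for
# one named entity; field names and structure are assumed for this sketch.
solid_nodule_entry = {
    "named_entity": "solid nodule",
    "spatial_attributes": {
        "neighbors": ["pulmonary air", "lung wall", "blood vessel", "bronchus"],
    },
    "gray_distribution": {"density": "high on low-density lung background"},
    "texture": {"boundary": "clear, with or without burrs"},
    "labeled_samples": [],  # local image patches accumulated in the focus image library
}
print(solid_nodule_entry["named_entity"])  # solid nodule
```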
- An image of a solid nodule is shown in FIG. 2 , and its corresponding image semantic representation is:
- Position-neighbor: 1) surrounded by pulmonary air, or 2) connected to lung walls, or 3) connected to blood vessels, or 4) connected to bronchi.
- Shape: nearly round (three-dimensional: spherical) or oval (three-dimensional: cobblestone shape).
- Micro nodules: diameter of less than 5 mm;
- Small nodules: diameter of 5-10 mm;
- Nodules: diameter of 10-20 mm;
- Mass: diameter of greater than 20 mm.
- Boundary: clear (sharply different gray distribution), with or without burrs.
- the probability of malignant micro nodules is less than 1%, and a follow-up interval of 6 to 12 months is indicated.
- the probability of malignant small nodules is 25% to 30%, and the follow-up interval is 3 to 6 months for CT review (LDCT is recommended).
- Nodules and masses are more likely to be malignant; biopsy or surgery is to be performed.
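The size grading above can be expressed as a small lookup. Note that the 5-10 mm small-nodule band is inferred from the follow-up guidance (the listed categories skip it), so the exact cut-offs here are an assumption:

```python
def classify_by_diameter(d_mm):
    """Size grading sketched from the semantic representation above; the
    5-10 mm small-nodule band is an inferred assumption, not stated in the list."""
    if d_mm < 5:
        return "micro nodule"
    if d_mm < 10:
        return "small nodule"
    if d_mm <= 20:
        return "nodule"
    return "mass"

print([classify_by_diameter(d) for d in (3, 7, 15, 30)])
# ['micro nodule', 'small nodule', 'nodule', 'mass']
```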
- a ground glass opacity is also called a frosted glass nodule, and the corresponding image semantic representation is:
- Position-neighbor: 1) surrounded by pulmonary air, 2) connected to lung walls, 3) connected to blood vessels, 4) connected to bronchi.
- Type: pure GGO (PGGO) and mixed GGO (MGGO), as shown in FIG. 3 and FIG. 4 respectively.
- Density-spatial distribution: slightly high-density opacity (ground glass) on a low-density background of a lung field, or partially high-density opacity (solid components with a density distribution similar to solid nodules).
- The PGGO has no solid components (no high-density opacity), and the MGGO contains a high-density opacity part.
- Shape: stacked in blocks.
- Boundary: clear or unclear, with many burrs.
- Cavity has a corresponding image semantic representation as follows:
- Thin-walled: low-density image part surrounded by a thin wall (high-density opacity);
- Thick-walled: low-density image part surrounded by a thick wall (high-density opacity).
- Wormy-appearance cavity: caseous lobar pneumonia and the like, as shown in FIG. 5 ;
- Thin-walled cavity: secondary pulmonary tuberculosis and the like, as shown in FIG. 6 ;
- Thick-walled cavity: tuberculoma, lung squamous cell carcinoma and the like, as shown in FIG. 7 .
- After initialization, the medical image aided diagnosis system has preliminarily constructed an image semantic representation knowledge graph, and forms different attribute descriptions for corresponding organs, lesions and focuses shown in medical images scanned in different types, for different purposes and of different parts.
- the attribute descriptions may be calculated. That is, through calculation from the corresponding feature extraction after the recognition of a specific object in the medical image, the attribute descriptions are obtained such as a relative spatial position range of a specific focus, an average density value, a standard deviation, entropy, roundness or sphericity, edge burr, edge sharpness, a histogram, a density distribution map of a distance around the center, and a correlation matrix of texture expression in the image.
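Several of the computable attribute descriptions named above (average density value, standard deviation, histogram entropy) can be sketched directly; roundness, edge sharpness and texture matrices would additionally require the region mask. A minimal illustration with made-up HU values:

```python
import numpy as np

def region_attributes(hu_values, bins=32):
    """Sketch of a few computable attribute descriptions for a focus region:
    average density, standard deviation, and histogram entropy. The bin count
    is an assumed parameter; shape and texture measures are omitted here."""
    hist, _ = np.histogram(hu_values, bins=bins)
    p = hist[hist > 0] / hist.sum()          # probability of each occupied bin
    return {
        "mean_density": float(np.mean(hu_values)),
        "std": float(np.std(hu_values)),
        "entropy": float(-(p * np.log2(p)).sum()),
    }

attrs = region_attributes(np.array([-600., -580., -590., -610., -595.]))
print(sorted(attrs))  # ['entropy', 'mean_density', 'std']
```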
- a medical image of a patient is acquired, a system pre-processes, recognizes and positions some basic and easy-to-recognize components of the medical image of the patient, and then candidate focus options of the patient are provided, based on the basic information and an ROI on a two-dimensional image drawn by an expert, according to the image semantic representation knowledge graph and the ROI.
- the technical means for obtaining the medical image of the patient include, but are not limited to, chest and lung CT, abdominal CT, cerebrovascular MRI, breast MRI, abdominal ultrasound, and the like.
- the system pre-processes, recognizes and positions some basic and easy-to-recognize components of the medical image of the patient. That is, it is necessary to first recognize some basic components such as pulmonary air, bones, and vertebrae.
- FIG. 8 is a distribution diagram of organs in the upper torso of a human body. Lung CT (plain scan or enhanced) is taken as an example.
- FIG. 9 is a cross-sectional view of a three-dimensional CT chest and lung image.
- FIG. 10 b shows a pulmonary air part labeled (red part) after threshold processing and connectivity analysis of an original lung CT image ( FIG. 10 a is a two-dimensional screenshot).
- the recognition and localization of this pulmonary air part aid and constrain the present system in analyzing possible focus types when the experts draw an ROI in the pulmonary air part: only the focus types for the region within or adjacent to the red part need to be further analyzed (texture analysis, density-spatial analysis, convolutional neural network matching, etc.). High-probability focus types are presented by the system for experts to choose.
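The threshold processing and connectivity analysis used to label the pulmonary air part (as in FIG. 10 b) can be sketched as a simple HU threshold followed by connected-component grouping. The threshold of -400 HU and the tiny slice below are illustrative assumptions:

```python
import numpy as np
from collections import deque

def label_air_regions(ct_slice, hu_threshold=-400):
    """Sketch of the threshold-then-connectivity step: mark pixels below an
    assumed HU threshold as candidate air, then group 4-connected components
    with a breadth-first flood fill."""
    air = ct_slice < hu_threshold
    labels = np.zeros(ct_slice.shape, dtype=int)
    current = 0
    for sy, sx in zip(*np.nonzero(air)):
        if labels[sy, sx]:
            continue                      # already assigned to a component
        current += 1
        queue = deque([(sy, sx)])
        labels[sy, sx] = current
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < air.shape[0] and 0 <= nx < air.shape[1]
                        and air[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

# Toy 3x4 slice in HU: two low-density (air-like) regions on soft-tissue background.
slice_hu = np.array([
    [-900, -900,   40,   40],
    [  40,   40,   40, -850],
    [  40, -880, -880, -850],
])
labels, n = label_air_regions(slice_hu)
print(n)  # 2 connected low-density regions
```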
- the expert determines an ROI on the two-dimensional image based on the medical image of the patient, that is, a region where a lesion may be considered to exist by the expert. For example, if an ROI body drawn by a radiologist in a lung CT examination image is in the lung and connected to blood vessels, and the region is surrounded by pulmonary air (the recognition and positioning of pulmonary air is shown in FIG. 10 ), the medical image aided diagnosis system may automatically analyze the position features.
- the medical image aided diagnosis system may automatically pop up a list of options after preliminary calculation.
- the list ranks a plurality of description options based on the possibility, that is, candidate focus options of the patient. There may be one or more candidate focus options.
- preliminary localization analysis is performed on the ROI.
- the type of the named entities (specific types of focuses), which may contain similar characteristics in the ROI labeled on the two-dimensional image cross section, is determined.
- the type of the named entities is provided to the expert through an interactive interface (which may be an option form of graphical interfaces or a voice response form) for the expert to choose.
- the determination of the ROI here may be completed by an expert manually clicking on a computer or through an image recognition algorithm.
- the expert manual completion is that a doctor browses and observes through a medical image display system such as PACS, and finds some suspected lesion regions.
- a closed curve is drawn around such a region, that is, an ROI is positioned on a two-dimensional cross section.
- completion through an image recognition algorithm means that a computer with a certain degree of reading (image reading) capability performs recognition, positioning and automatic prompting through certain focus recognition algorithms (for example, traditional image recognition algorithms based on features or rules; deep learning algorithms such as CNN or RNN; or assistance with transfer learning or reinforcement learning). Alternatively, by comparison with a normal medical image of the same type, a region where the medical image of the patient differs from the normal medical image is found and determined as the ROI.
- the focus type is determined according to the ROI and the candidate focus options.
- the lesion region is obtained through division according to the focus type.
- the structured report related to the ROI of the medical image of the patient is generated, and the lesion region and the corresponding image semantics representation are added into the corresponding focus image library.
- S301 localization is performed on the ROI based on the determined focus type to which the ROI belongs, to calculate the spatial position of the ROI and acquire the lesion region.
- the focus options are determined according to the ROI delineated by the expert and the candidate focus options to further determine the focus type.
- Localization analysis is performed on the ROI, a spatial position of the ROI is calculated, a lesion region is obtained through division, a structured report related to the ROI of the medical image of the patient is generated, and the lesion region and corresponding image semantic representation are added into a corresponding focus image library.
- localization analysis is performed on the ROI based on the determined focus type to which the ROI belongs, a spatial position of the ROI is calculated, and a lesion region is obtained through division.
- the operation specifically includes the following steps.
- the focus type to which the ROI belongs is determined according to the ROI and the candidate focus options.
- human organs generally have relatively fixed positions.
- the positions in medical images and image display are generally more obvious, and are easy to recognize, position, and deconstruct.
- localization analysis is performed on the delineated ROI, and a focus type to which the ROI belongs is determined according to the delineated ROI and the selected focus option.
- a spatial position of the ROI is calculated based on the determined focus type of the ROI, and a lesion region is obtained through division.
- Each type of organ has its unique spatial position, gray spatial distribution, and texture attributes.
- solitary small pulmonary nodules are surrounded by pulmonary air in isolation. After threshold processing and connected-branch analysis of the three-dimensional image, the surrounding branches are all air branches (HU density below a certain threshold and inside the lung), and the distribution of HU density values inside the nodule branch around the branch center conforms to a certain distribution.
- the small pulmonary nodules connected to the blood vessels are basically surrounded by pulmonary air, but may be connected to tracheae/bronchi, pleurae, and lung lobes through one or more blood vessels (high density).
- the surrounding branches are all air branches (HU density below a certain threshold and inside the lung), but high-density narrow-channel blood vessel branches (HU density above a certain threshold and inside the lung) are connected.
- the high-density narrow-channel blood vessel branches may be filtered out of the image through an opening operator of morphological analysis (spherical structuring elements at different scales).
- the internal HU density-spatial distribution is somewhat different from that of the solitary nodules.
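A minimal sketch of the threshold processing, connected-branch analysis, and morphological opening described above, using NumPy and SciPy. The HU thresholds and structuring-element radius are assumed values for illustration, not parameters from the patent:

```python
import numpy as np
from scipy import ndimage

TISSUE_HU_MIN = -400  # assumed soft-tissue threshold; air lies below it

def segment_nodule_candidates(ct_volume, opening_radius=2):
    """Threshold a CT volume (HU values), remove narrow vessel channels
    with a morphological opening (spherical structuring element), then
    run connected-branch analysis on what remains."""
    tissue = ct_volume >= TISSUE_HU_MIN
    # Build a spherical (ball) structuring element of the given radius.
    r = opening_radius
    zz, yy, xx = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
    ball = (zz**2 + yy**2 + xx**2) <= r**2
    # Opening erodes away thin, high-density vessel branches.
    opened = ndimage.binary_opening(tissue, structure=ball)
    # Connected-branch analysis: each remaining branch is a candidate.
    labels, n = ndimage.label(opened)
    return labels, n
```

On a synthetic volume containing a 5-voxel cube (nodule) and a 1-voxel-wide line (vessel), the opening removes the line and the labeling reports a single candidate branch.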
- a surrounding part of the small pulmonary nodules connected to a lung wall is surrounded by pulmonary air, but one of its sides is close to the lung wall.
- through threshold processing and connected-branch analysis of the three-dimensional image, the lung wall may be filtered out through an opening operator of morphological analysis (spherical structuring elements at different scales), and a focus may be obtained through division.
- the same applies to other nodules, such as ground-glass nodules.
- the spatial position and surrounding region of the ROI are calculated based on the focus type determined from the medical image of the patient; that is, it is determined whether the ROI lies in, or is adjacent to, certain organs of a certain part. Multiple methods (for example, threshold calculation, edge extraction, and texture analysis) may be used within the ROI to divide and obtain possible focus or lesion regions.
- the gray values of the two-dimensional image may be used: based on the threshold or texture features of the focus type, a region that does not meet the threshold or texture-feature requirements is divided out as the lesion region. That is, by calculating the matching degree between the ROI and the known focus type, a possible focus or lesion region is obtained.
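The threshold-based division inside a delineated ROI, with a simple matching degree, can be sketched as below. The density range stands in for a focus-type profile and is purely illustrative; a real system would also use texture features:

```python
import numpy as np

def divide_lesion_in_roi(image, roi_box, hu_min, hu_max):
    """Within a delineated ROI (x0, y0, x1, y1), mark as lesion the pixels
    whose gray value falls in the density range expected for the chosen
    focus type, and report the matching degree (lesion-pixel fraction)."""
    x0, y0, x1, y1 = roi_box
    patch = image[y0:y1, x0:x1]
    lesion_mask = (patch >= hu_min) & (patch <= hu_max)
    matching_degree = lesion_mask.mean() if lesion_mask.size else 0.0
    return lesion_mask, float(matching_degree)
```

The matching degree can then be compared across candidate focus types to pick the best-fitting option.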
- the structured report is generated from the lesion region obtained through division and the image semantic representation corresponding to the lesion region, together with patient information.
- the image semantic representation corresponding to the lesion region is a structured lesion description entry, which relates the determined focus options and their attributes (including the size, burr, clarity, average density value, and HU histogram distribution of the focus) to the ROI (or determined focus region) of the medical image.
- the named entity part of the focus option determined in the structured report is a hyperlink related to the ROI.
- the hyperlink relates the image semantic representation of the determined lesion region to the lesion region itself.
- the lesion region and the image semantic representation corresponding to the lesion region may be viewed simultaneously by clicking the hyperlink.
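One way to realize such a hyperlinked structured entry is sketched below. The field names and file path are hypothetical, not a schema defined by the patent:

```python
import json

def make_structured_entry(patient_id, focus_name, attributes, roi_image_path):
    """Build one structured-report entry whose named-entity part carries a
    hyperlink to the lesion-region image, so text and image can be viewed
    together from the report."""
    return {
        "patient_id": patient_id,
        "finding": {
            "text": focus_name,        # named entity shown in the report
            "href": roi_image_path,    # hyperlink to the lesion region
        },
        "attributes": attributes,      # e.g. size, burr, mean HU, histogram
    }

entry = make_structured_entry(
    "P0001",
    "solitary pulmonary nodule",
    {"size_mm": 7.5, "burr": False, "mean_hu": 32.0},
    "focus_library/P0001/roi_001.png",
)
report_json = json.dumps(entry, indent=2)
```

A report viewer can render `finding.text` as the clickable named entity and resolve `finding.href` to display the lesion region alongside its description.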
- this effectively eliminates the tedious task in existing reports of finding the corresponding image in an image library according to the image semantic representation of a lesion region, and improves the efficiency of viewing the report.
- the lesion region and the corresponding image semantic representation are added into the corresponding lesion image library, as a sample accumulation for subsequent update and supplement of the image semantic representation knowledge graph, so that experts can complete accumulation of samples in the working process without extra human and financial resources to specifically conduct data mining, thereby improving the use efficiency of the structured report.
- an image semantic representation knowledge graph and a variety of machine learning methods, especially deep learning and reinforcement learning, are combined to perform medical image recognition.
- One advantage is that sample images can be systematically and deeply accumulated, and labeled focuses of many images can be continuously collected under the same sub-label.
- the so-called evolution is, on the one hand, an accumulation of quantity. As more and more sample images of focuses with the same label are accumulated, the number of samples available for deep learning continues to increase. Therefore, regardless of whether algorithms such as CNN, RNN, DNN, or LSTM are used, the increase in samples generally enhances recognition capability and improves recognition sensitivity and specificity.
- labeling of the focuses can be continuously refined by means of machine learning in combination with manual in-depth research, thereby further enriching the measures of radiomics to continuously refine image performance types of focuses. That is to say, for the original disease attribute description, types will be further increased, or quantitative description points will be further increased. For the former, if it is possible to find MGGO and nodules with different spatial-density distributions, new subtypes are added. For the latter, such as an edge of a focus, new measures may be added, such as edge burr. The increase in parameters may enhance the accuracy of predicting the degree of malignancy of the focus based on CT or MRI images.
- the medical image aided diagnosis system can easily adopt transfer learning and other means to quickly learn focus types that are new or have few samples.
- breast MRI masses and non-tumor-like enhanced focuses have many similarities in spatial-density distribution with lung CT nodules and GGO focuses, but they differ in specific parameters.
- these characteristics are very suitable for cross-domain transfer learning (a parameter model obtained from other focus samples with a certain image-appearance similarity is applied to the new focus samples for parameter adjustment), or for borrowed-strength parameter estimation when there are not sufficient focus samples of a certain type or label.
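The borrowed-strength idea can be illustrated with a simple shrinkage estimator: with few target-domain samples, the estimate leans on a related focus type, and the borrowed weight decays as samples accumulate. The weighting rule is an assumption for illustration, not the patent's formula:

```python
import numpy as np

def borrowed_strength_estimate(target_samples, source_mean, shrinkage=None):
    """Estimate a focus-feature mean for a rare focus type by shrinking the
    target-sample mean toward a mean learned from a related focus type
    with similar image appearance."""
    target_samples = np.asarray(target_samples, dtype=float)
    n = len(target_samples)
    if n == 0:
        return float(source_mean)      # no target data: borrow entirely
    # Borrowed weight decays as target samples accumulate.
    w = shrinkage if shrinkage is not None else 1.0 / (1.0 + n)
    return float(w * source_mean + (1.0 - w) * target_samples.mean())
```

With one target sample the estimate sits halfway between the two domains; with many samples it converges to the target mean.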
- the second embodiment of the invention provides a medical image aided diagnosis method combining image recognition and report editing.
- the difference from the first embodiment is as follows:
- in step S3 of the first embodiment, after the medical image of the patient is acquired, the determined ROI is extended from a two-dimensional image to a three-dimensional stereoscopic image or a two-dimensional dynamic image (video-like images that change with time), localization analysis is performed on the ROI based on the focus types included in the ROI, a spatial position of the ROI is calculated, and the lesion region is obtained through division from the whole image.
- Localization analysis is performed on the determined ROI, and a focus type to which the ROI belongs is determined.
- the determined ROI is extended from a two-dimensional image to a three-dimensional stereoscopic image or a two-dimensional dynamic image, and a lesion region of a whole image is obtained through division.
- a corresponding rectangular region or cubic region obtained by CT, MRI, and the like
- a dynamic image obtained by ultrasound
- the labeling, delineation, and localization of a certain ROI or focus in the three-dimensional image obtained based on CT or MRI are generally limited to a two-dimensional image of a spatial cross section.
- the ROI of the cross section is delineated and further expanded to multiple neighboring two-dimensional frames adjacent in the front-back (up or down) direction based on similar texture and gray distribution.
- the steps are as follows:
- in step 1, the gray values of the two-dimensional image are used to delineate based on the threshold or texture features of the focus type.
- a mathematical morphology operator or other operators for the two-dimensional image are further used to divide certain focuses (for example, solid nodules connected to a lung wall, or masses connected to glands, each group being similar in pixel gray and texture features) from the organs to which the focuses connect, so as to obtain a main closed region of one or more closed core focus regions corresponding to the focus (that is, a lesion region) in a two-dimensional sub-image (frame) cross section.
- the closed core focus region needs to meet the following two points.
- the closed core focus region is completely contained in the delineated ROI (will not be connected to the outside).
- the proportion of pixels of the closed core focus region to the pixels of the ROI should not be lower than a certain number (such as 30%).
- in step 2, based on the main closed region of the image, the previous and next images in the spatial sequence are divided according to the threshold or texture features of the focus type.
- a mathematical morphology operator or other division operators of the two-dimensional image are further used to divide certain focuses connected to a certain part of an organ to obtain one or more closed regions that match the description of the focus type.
- only closed regions that are three-dimensionally connected to the previously identified main closed region (generally treated as 6-neighborhood connection) are merged into the closed core focus region.
- in step 3, the operation in step 2 is continued, and a mathematical morphological closing operation in three-dimensional space is performed to filter out other regions (for masses and nodules, these are catheters, blood vessels, and organ glands) connected to the closed core focus region in three-dimensional space, until the closed core focus region no longer expands.
- in step 4, the edge of the closed core focus region is thus delineated and labeled at the pixel level. Meanwhile, the maximum and minimum values of the X, Y, and Z coordinates of the edge pixels of the closed core focus region are calculated to form a space cube; that is, the three-dimensional cube region includes the lesion region.
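The four steps above can be sketched as one pipeline. This is an illustrative simplification using SciPy connected-component labeling in place of the full slice-by-slice procedure; the density range, the 30% coverage check, and the function name are assumptions:

```python
import numpy as np
from scipy import ndimage

def grow_focus_region_3d(volume, seed_slice, roi_mask_2d, hu_min, hu_max,
                         min_roi_fraction=0.3):
    """Find the main closed region inside the expert's 2-D ROI (step 1),
    keep only voxels 6-connected to it in 3-D (steps 2-3), and return the
    pixel-level mask plus its enclosing space cube (step 4)."""
    dense = (volume >= hu_min) & (volume <= hu_max)
    # Step 1: main closed region in the seed slice, restricted to the ROI.
    seed2d = dense[seed_slice] & roi_mask_2d
    if seed2d.sum() < min_roi_fraction * roi_mask_2d.sum():
        return None, None          # core region covers too little of the ROI
    # Steps 2-3: keep only the 3-D branch (6-neighborhood) touching the seed.
    structure = ndimage.generate_binary_structure(3, 1)  # 6-connectivity
    labels, _ = ndimage.label(dense, structure=structure)
    seed_labels = np.unique(labels[seed_slice][seed2d])
    seed_labels = seed_labels[seed_labels != 0]
    focus = np.isin(labels, seed_labels)
    # Step 4: bounding space cube (zmin, zmax, ymin, ymax, xmin, xmax).
    zz, yy, xx = np.nonzero(focus)
    cube = (zz.min(), zz.max(), yy.min(), yy.max(), xx.min(), xx.max())
    return focus, cube
```

A production implementation would add the iterative closing of step 3 to detach vessels and glands before forming the cube.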
- the user can label and localize the ROI or the focus, which is generally limited to a static image in a specific time segment (in ultrasound, it is often one or a few frames of the image frames scanned at a fixed point within a time segment).
- a complete delineation of the ROI or focus further extends the user's delineation of the ROI in a cross section to adjacent two-dimensional frames of the dynamic image through a computer algorithm (using spatial neighborhood, texture, gray value, and the like).
- the characteristic of B-mode ultrasound is that doctors constantly move the probes of the detection instruments, and parts of the images of the regions monitored by the probes constantly change with time (such as the heart and blood flow).
- doctors operate probes in two states: a state where the probes are quickly moved to find a suspicious region; and a basically stationary or slight sliding state, focusing on the change of ultrasound images (such as a blood flow change) in a certain region with time.
- doctors usually use the medical image aided diagnosis system to draw an ROI on a dynamic image presented by a host computer display.
- the medical image aided diagnosis system will determine a time sequence of dynamic images corresponding to the ROI through the following steps. The specific descriptions are as follows:
- each frame in the dynamic image is pre-processed to output an image showing relatively fixed human organ regions such as bones, muscles, core heart regions (common part of contraction/diastole), or lung regions (common part of respiration), so as to obtain processed dynamic images in real time.
- in step 2, a complete sequence of observation frames with a relatively fixed probe position in the dynamic image is obtained by the following two methods.
- the processed dynamic image (the output image in step 1) is analyzed in real time to determine whether the probe is moving fast and looking for an ROI, or is basically stationary (including slight moving) and has been focusing on the change (such as blood flow change) of an image in a certain region with time, and a complete sequence of observation frames of the same scene is determined based on the analysis of adjacent frames and similar scenes (such algorithms have long been mature in MPEG4).
- the MPEG-4 compression algorithm used in the second embodiment is provided with an algorithm module to detect whether the scene has changed (including the detection of telescopic transformation of a scene, that is, detail enlargement of the same scene or scene expansion; scene translation; and thorough scene switching).
- medical dynamic images mainly involve scene translation. Thorough scene switching is rarer and generally occurs when the probe is placed on or removed from the human body; detailed descriptions are omitted herein.
- in step 3, the complete sequence of observation frames where the ROI is located is acquired based on the foregoing ROI and the focus type determined by the expert, as well as the sequence of observation frames displayed when the expert determines the ROI.
- a frame sequence where the ROI is located when the expert determines the ROI refers to the specific one or more continuous two-dimensional images (frames) displayed at the moment the expert determines the ROI.
- the system expands the two-dimensional image or part of two-dimensional dynamic image sequences corresponding to the ROI forward and backward to a complete sequence of observation frames (an entire period of time at which the probe position is relatively fixed).
- the ROI of each extended frame may be simply processed and still limited to surround the two-dimensional ROI, and may also be further processed for image analysis based on the originally determined two-dimensional ROI and the focus type finally determined by the expert. A more accurate focus part is re-obtained in the extended frame through division.
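The probe-motion analysis in step 2 above can be sketched with simple frame differencing: transitions with large mean change indicate a moving probe, small change indicates a fixed observation position. The threshold is illustrative; production systems use full scene analysis as in MPEG-4:

```python
import numpy as np

def classify_probe_motion(frames, move_threshold=0.1):
    """Label each frame transition of a pre-processed ultrasound sequence
    as 'moving' (probe searching for a suspicious region) or 'stationary'
    (probe fixed, observing change over time), using the mean absolute
    difference between consecutive frames."""
    states = []
    for prev, cur in zip(frames, frames[1:]):
        diff = np.abs(cur.astype(float) - prev.astype(float)).mean()
        states.append("moving" if diff > move_threshold else "stationary")
    return states
```

A run of consecutive "stationary" transitions delimits one complete sequence of observation frames with a relatively fixed probe position.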
- a focus type is determined according to the ROI and the candidate focus options, a lesion region is obtained through division according to the focus type, a structured report related to the ROI of the medical image of the patient is generated, and the lesion region and corresponding image semantic representation are added into a corresponding focus image library.
- the candidate focus options do not match the ROI, the expert needs to manually input and send the image semantic representation corresponding to the ROI to other experts for verification, and after the verification is passed, the lesion region and the corresponding image semantic representation are added to the corresponding focus image library.
- the situation that the candidate focus options do not match the ROI includes that the candidate focus options do not contain focus options corresponding to the ROI or the description of the ROI in the focus options is not accurate.
- the specific descriptions are as follows:
- the attributes of the focus and the corresponding local image of the focus will be recorded and added into the focus image library, and submitted to other experts for cross-validation as a new discovery that is inconsistent with the system judgment.
- the corresponding knowledge (including the lesion region and the corresponding image semantic representation) will be added into the focus image library and added to a training set as a new training sample.
- the new knowledge is added to the image semantic representation knowledge graph. If it is falsified by another expert, the manual entry result of this expert is corrected, and the recognition result of the system is adopted.
- the new type of focus, along with its attributes and the corresponding local focus image, will be recorded and added into a temporary focus image library for the new type of focus, and submitted to other experts for cross-validation as a newly discovered focus.
- the corresponding knowledge will be added into the image semantic representation knowledge graph, and the focus image will be added to the corresponding focus image library and added to a training set as a new training sample. If it is falsified by another expert, the manual entry result of this expert is corrected, and the previous recognition result of the medical image aided diagnosis system is adopted.
- the medical image aided diagnosis system may wait for such new samples to accumulate to a certain extent before training. When such samples are found, the medical image aided diagnosis system may also generate more similar samples based on the research and other knowledge of such new samples by experts in combination with a generative adversarial network (GAN), and perform learning when there are few samples.
- the third embodiment of the invention provides a medical image aided diagnosis system combining image recognition and report editing.
- the system includes a knowledge graph establishment module, an information acquisition module, an ROI determination module, a candidate focus option generation module, a lesion region determination module, a report generation module, and a correction module.
- the knowledge graph establishment module is configured to establish an image semantic representation knowledge graph according to a standardized dictionary library in the field of images and historically accumulated medical image report analysis.
- the information acquisition module is configured to acquire a medical image of a patient.
- the ROI determination module is configured to determine an ROI of the medical image of the patient by an expert according to the medical image of the patient transmitted from the information acquisition module.
- the candidate focus option generation module is configured to provide candidate focus options of the patient according to the image semantic representation knowledge graph transmitted from the knowledge graph establishment module and the ROI transmitted from the ROI determination module.
- the lesion region determination module is configured to determine a focus type according to the ROI transmitted from the ROI determination module and the candidate focus options transmitted from the candidate focus option generation module, and perform division to obtain a lesion region according to the focus type.
- the report generation module is configured to generate a structured report related to the ROI of the medical image of the patient according to the divided lesion region and corresponding image semantic representation.
- the correction module is configured to add the lesion region and the corresponding image semantic representation into a corresponding focus image library.
- the lesion region determination module includes a focus type determination unit and a lesion region determination unit.
- the focus type determination unit is configured to determine a focus type in the candidate focus options provided by the candidate focus option generation module according to the ROI transmitted from the ROI determination module.
- the lesion region determination unit is configured to perform localization analysis on the ROI transmitted from the ROI determination module, perform division to obtain a lesion region, and determine focus options corresponding to the lesion region according to the image semantic representation knowledge graph transmitted from the knowledge graph establishment module, so as to determine a lesion type.
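The module wiring described above can be sketched as below. Class, field, and method names are illustrative stand-ins for the modules, not identifiers from the patent; the lesion-region determination is stubbed out:

```python
from dataclasses import dataclass, field

@dataclass
class FocusImageLibrary:
    """Correction module's store: accumulated lesion samples for training."""
    entries: list = field(default_factory=list)

    def add(self, lesion_region, semantic_representation):
        self.entries.append((lesion_region, semantic_representation))

@dataclass
class AidedDiagnosisSystem:
    knowledge_graph: dict          # image semantic representation knowledge graph
    library: FocusImageLibrary

    def run(self, medical_image, roi, chosen_focus_type):
        # Lesion region determination module (stub: the ROI itself).
        lesion_region = roi
        semantic = self.knowledge_graph.get(chosen_focus_type, "unknown focus")
        # Report generation module: structured report related to the ROI.
        report = {"roi": roi, "focus": chosen_focus_type, "description": semantic}
        # Correction module: accumulate the sample for later training.
        self.library.add(lesion_region, semantic)
        return report
```

Each `run` both produces the structured report and deposits the lesion sample into the focus image library, mirroring how the correction module accumulates training data as a side effect of routine reporting.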
- an image semantic representation knowledge graph and a variety of machine learning are combined to perform medical image recognition, sample images can be systematically and deeply accumulated, and the image semantic representation knowledge graph can be continuously improved, so that labeled focuses of many images can be continuously collected under the same sub-label.
- the number of samples available for deep learning continues to increase. The increase in samples generally leads to enhanced recognition capabilities and improved recognition sensitivity and specificity.
- labeling of the focuses can be continuously refined by means of machine learning in combination with manual in-depth research, thereby further enriching the measures of radiomics, continuously refining image performance types of focuses, and enhancing aided diagnosis capabilities of medical images.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Public Health (AREA)
- Primary Health Care (AREA)
- Epidemiology (AREA)
- Radiology & Medical Imaging (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Software Systems (AREA)
- Databases & Information Systems (AREA)
- Computing Systems (AREA)
- Quality & Reliability (AREA)
- Computer Graphics (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Evolutionary Computation (AREA)
- Geometry (AREA)
- Data Mining & Analysis (AREA)
- Pathology (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710895420.4A CN109583440B (zh) | 2017-09-28 | 2017-09-28 | 结合影像识别与报告编辑的医学影像辅助诊断方法及系统 |
| CN201710895420.4 | 2017-09-28 | ||
| PCT/CN2018/108311 WO2019062846A1 (fr) | 2017-09-28 | 2018-09-28 | Procédé et système de diagnostic assisté par image médicale combinant une reconnaissance d'image et une édition de rapport |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2018/108311 Continuation WO2019062846A1 (fr) | 2017-09-28 | 2018-09-28 | Procédé et système de diagnostic assisté par image médicale combinant une reconnaissance d'image et une édition de rapport |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20200303062A1 US20200303062A1 (en) | 2020-09-24 |
| US11101033B2 true US11101033B2 (en) | 2021-08-24 |
Family
ID=65900854
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/833,512 Expired - Fee Related US11101033B2 (en) | 2017-09-28 | 2020-03-28 | Medical image aided diagnosis method and system combining image recognition and report editing |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US11101033B2 (fr) |
| CN (1) | CN109583440B (fr) |
| WO (1) | WO2019062846A1 (fr) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20230054096A1 (en) * | 2021-08-17 | 2023-02-23 | Fujifilm Corporation | Learning device, learning method, learning program, information processing apparatus, information processing method, and information processing program |
| US20230068201A1 (en) * | 2021-08-30 | 2023-03-02 | Fujifilm Corporation | Learning device, learning method, learning program, information processing apparatus, information processing method, and information processing program |
Families Citing this family (93)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP6984020B2 (ja) * | 2018-07-31 | 2021-12-17 | オリンパス株式会社 | 画像解析装置および画像解析方法 |
| GB201817049D0 (en) * | 2018-10-19 | 2018-12-05 | Mirada Medical Ltd | System and method for automatic delineation of medical images |
| CN110111864B (zh) * | 2019-04-15 | 2023-05-26 | 中山大学 | 一种基于关系模型的医学报告生成系统及其生成方法 |
| CN110162639A (zh) * | 2019-04-16 | 2019-08-23 | 深圳壹账通智能科技有限公司 | 识图知意的方法、装置、设备及存储介质 |
| CN110097969A (zh) * | 2019-05-10 | 2019-08-06 | 安徽科大讯飞医疗信息技术有限公司 | 一种诊断报告的分析方法、装置及设备 |
| CN111986137B (zh) * | 2019-05-21 | 2024-06-28 | 梁红霞 | 生物器官病变检测方法、装置、设备及可读存储介质 |
| CN110491480B (zh) | 2019-05-22 | 2021-04-30 | 腾讯科技(深圳)有限公司 | 一种医疗图像处理方法、装置、电子医疗设备和存储介质 |
| CN110223761B (zh) * | 2019-06-13 | 2023-08-22 | 上海联影医疗科技股份有限公司 | 一种勾勒数据导入方法、装置、电子设备及存储介质 |
| CN110379492A (zh) * | 2019-07-24 | 2019-10-25 | 复旦大学附属中山医院青浦分院 | 一种全新的ai+pacs系统及其检查报告构建方法 |
| CN110600122B (zh) * | 2019-08-23 | 2023-08-29 | 腾讯医疗健康(深圳)有限公司 | 一种消化道影像的处理方法、装置、以及医疗系统 |
| CN110610181B (zh) * | 2019-09-06 | 2024-08-06 | 腾讯科技(深圳)有限公司 | 医学影像识别方法及装置、电子设备及存储介质 |
| CN110738655B (zh) * | 2019-10-23 | 2024-04-26 | 腾讯科技(深圳)有限公司 | 影像报告生成方法、装置、终端及存储介质 |
| CN110853743A (zh) * | 2019-11-15 | 2020-02-28 | 杭州依图医疗技术有限公司 | 医学影像的显示方法、信息处理方法及存储介质 |
| CN110946615B (zh) * | 2019-11-19 | 2023-04-25 | 苏州佳世达电通有限公司 | 超声波诊断装置及使用其的操作方法 |
| WO2021102844A1 (fr) * | 2019-11-28 | 2021-06-03 | 华为技术有限公司 | Procédé, dispositif et système de traitement d'image |
| CN111028173B (zh) * | 2019-12-10 | 2023-11-17 | 北京百度网讯科技有限公司 | 图像增强方法、装置、电子设备及可读存储介质 |
| CN111179227B (zh) * | 2019-12-16 | 2022-04-05 | 西北工业大学 | 基于辅助诊断和主观美学的乳腺超声图像质量评价方法 |
| CN111048170B (zh) * | 2019-12-23 | 2021-05-28 | 山东大学齐鲁医院 | 基于图像识别的消化内镜结构化诊断报告生成方法与系统 |
| CN112365436B (zh) * | 2020-01-09 | 2023-04-07 | 西安邮电大学 | 一种针对ct影像的肺结节恶性度分级系统 |
| CN113254608A (zh) * | 2020-02-07 | 2021-08-13 | 台达电子工业股份有限公司 | 通过问答生成训练数据的系统及其方法 |
| CN111311705B (zh) * | 2020-02-14 | 2021-06-04 | 广州柏视医疗科技有限公司 | 基于webgl的高适应性医学影像多平面重建方法及系统 |
| CN111325767B (zh) * | 2020-02-17 | 2023-06-02 | 杭州电子科技大学 | 基于真实场景的柑橘果树图像集合的合成方法 |
| CN113314202A (zh) * | 2020-02-26 | 2021-08-27 | 张瑞明 | 基于大数据来处理医学影像的系统 |
| CN111369532A (zh) * | 2020-03-05 | 2020-07-03 | 北京深睿博联科技有限责任公司 | 乳腺x射线影像的处理方法和装置 |
| CN111047609B (zh) * | 2020-03-13 | 2020-07-24 | 北京深睿博联科技有限责任公司 | 肺炎病灶分割方法和装置 |
| CN111275707B (zh) * | 2020-03-13 | 2023-08-25 | 北京深睿博联科技有限责任公司 | 肺炎病灶分割方法和装置 |
| CN111339076A (zh) * | 2020-03-16 | 2020-06-26 | 北京大学深圳医院 | 肾脏病理报告镜检数据处理方法、装置及相关设备 |
| CN111563877B (zh) * | 2020-03-24 | 2023-09-26 | 北京深睿博联科技有限责任公司 | 一种医学影像的生成方法及装置、显示方法及存储介质 |
| CN111563876B (zh) * | 2020-03-24 | 2023-08-25 | 北京深睿博联科技有限责任公司 | 一种医学影像的获取方法、显示方法 |
| CN111430014B (zh) * | 2020-03-31 | 2023-08-04 | 杭州依图医疗技术有限公司 | 腺体医学影像的显示方法、交互方法及存储介质 |
| TWI783219B (zh) * | 2020-04-01 | 2022-11-11 | 緯創資通股份有限公司 | 醫學影像辨識方法及醫學影像辨識裝置 |
| CN111476775B (zh) * | 2020-04-07 | 2021-11-16 | 广州柏视医疗科技有限公司 | Dr征象识别装置和方法 |
| CN111667897A (zh) * | 2020-04-24 | 2020-09-15 | 杭州深睿博联科技有限公司 | 一种影像诊断结果的结构化报告系统 |
| CN111554369B (zh) * | 2020-04-29 | 2023-08-04 | 杭州依图医疗技术有限公司 | 医学数据的处理方法、交互方法及存储介质 |
| CN111681737B (zh) * | 2020-05-07 | 2023-12-19 | 陈�峰 | 用于建设肝癌影像数据库的结构化报告系统及方法 |
| CN111528907A (zh) * | 2020-05-07 | 2020-08-14 | 万东百胜(苏州)医疗科技有限公司 | 一种超声影像肺炎辅助诊断方法及系统 |
| CN111507979A (zh) * | 2020-05-08 | 2020-08-07 | 延安大学 | 一种医学影像计算机辅助分析方法 |
| CN111507978A (zh) * | 2020-05-08 | 2020-08-07 | 延安大学 | 一种泌尿外科用智能数字影像处理系统 |
| CN111681730B (zh) * | 2020-05-22 | 2023-10-27 | 上海联影智能医疗科技有限公司 | 医学影像报告的分析方法和计算机可读存储介质 |
| CN111768844B (zh) * | 2020-05-27 | 2022-05-13 | 中国科学院大学宁波华美医院 | 用于ai模型训练的肺部ct影像标注方法 |
| CN111640503B (zh) * | 2020-05-29 | 2023-09-26 | 上海市肺科医院 | 一种晚期肺癌患者的肿瘤突变负荷的预测系统及方法 |
| CN111933251B (zh) * | 2020-06-24 | 2021-04-13 | 安徽影联云享医疗科技有限公司 | 一种医学影像标注方法及系统 |
| EP4170670A4 (fr) | 2020-07-17 | 2023-12-27 | Wuhan United Imaging Healthcare Co., Ltd. | Procédé et système de traitement de données médicales |
| CN111951952A (zh) * | 2020-07-17 | 2020-11-17 | 北京欧应信息技术有限公司 | 一种基于医疗影像信息自动诊断骨科疾病的装置 |
| CN112530550A (zh) * | 2020-12-10 | 2021-03-19 | 武汉联影医疗科技有限公司 | 影像报告生成方法、装置、计算机设备和存储介质 |
| US11883687B2 (en) | 2020-09-08 | 2024-01-30 | Shanghai United Imaging Healthcare Co., Ltd. | X-ray imaging system for radiation therapy |
| EP3985679A1 (fr) * | 2020-10-19 | 2022-04-20 | Deepc GmbH | Technique for providing an interactive display of a medical image |
| CN112401915A (zh) * | 2020-11-19 | 2021-02-26 | 华中科技大学同济医学院附属协和医院 | Image fusion comparison method for COVID-19 CT review |
| CN112420150B (zh) * | 2020-12-02 | 2023-11-14 | 沈阳东软智能医疗科技研究院有限公司 | Method, apparatus, storage medium, and electronic device for processing medical image reports |
| CN112419340B (zh) * | 2020-12-09 | 2024-06-28 | 东软医疗系统股份有限公司 | Method for generating a cerebrospinal fluid segmentation model, application method, and apparatus |
| CN112669925A (zh) * | 2020-12-16 | 2021-04-16 | 华中科技大学同济医学院附属协和医院 | Report template for COVID-19 CT review and method of forming it |
| CN112599216B (zh) * | 2020-12-31 | 2021-08-31 | 四川大学华西医院 | System and method for outputting standardized multimodal MRI reports for brain tumors |
| CN112863649B (zh) * | 2020-12-31 | 2022-07-19 | 四川大学华西医院 | System and method for outputting intravitreal tumor imaging results |
| US20220284542A1 (en) * | 2021-03-08 | 2022-09-08 | Embryonics LTD | Semantically Altering Medical Images |
| CN113160166B (zh) * | 2021-04-16 | 2022-02-15 | 宁波全网云医疗科技股份有限公司 | Method for medical image data mining using a convolutional neural network model |
| WO2022252107A1 (fr) * | 2021-06-01 | 2022-12-08 | 眼灵(上海)智能科技有限公司 | Medical examination system and method based on eye images |
| CN115496700A (zh) * | 2021-06-01 | 2022-12-20 | 眼灵(上海)智能科技有限公司 | Disease detection system and method based on eye images |
| CN113658107B (zh) * | 2021-07-21 | 2025-03-07 | 杭州深睿博联科技有限公司 | Method and apparatus for diagnosing liver lesions based on CT images |
| CN113486195A (zh) * | 2021-08-17 | 2021-10-08 | 深圳华声医疗技术股份有限公司 | Ultrasound image processing method and apparatus, ultrasound device, and storage medium |
| CN113592857B (zh) * | 2021-08-25 | 2024-11-15 | 桓由之 | Method for recognizing, extracting, and annotating graphical elements in medical images |
| CN113763345A (zh) * | 2021-08-31 | 2021-12-07 | 苏州复颖医疗科技有限公司 | Method, system, device, and storage medium for viewing lesion locations in medical images |
| CN113838560A (zh) * | 2021-09-09 | 2021-12-24 | 王其景 | Remote diagnosis system and method based on medical images |
| CN115965562A (zh) * | 2021-10-08 | 2023-04-14 | 佳能医疗系统株式会社 | Medical image processing apparatus and medical image processing method |
| CN113963770A (zh) * | 2021-10-11 | 2022-01-21 | 深圳市人民医院 | Method and apparatus for generating report files, computer device, and storage medium |
| CN113889213A (zh) * | 2021-12-06 | 2022-01-04 | 武汉大学 | Method, apparatus, computer device, and storage medium for generating endoscopic ultrasound reports |
| US12100512B2 (en) * | 2021-12-21 | 2024-09-24 | National Cheng Kung University | Medical image project management platform |
| CN114530224A (zh) * | 2022-01-18 | 2022-05-24 | 深圳市智影医疗科技有限公司 | Method and system for assisted generation of diagnostic reports based on medical images |
| CN114463323B (zh) * | 2022-02-22 | 2023-09-08 | 数坤(上海)医疗科技有限公司 | Lesion region recognition method and apparatus, electronic device, and storage medium |
| CN114565582B (zh) * | 2022-03-01 | 2023-03-10 | 佛山读图科技有限公司 | Method, system, and storage medium for medical image classification and lesion region localization |
| CN116934657A (zh) * | 2022-03-31 | 2023-10-24 | 佳能医疗系统株式会社 | Medical image processing apparatus and medical image processing method |
| CN114708981A (zh) * | 2022-05-05 | 2022-07-05 | 上海辉明软件有限公司 | Method and system for constructing and interpreting a disease atlas based on medical image reports |
| CN114864035A (zh) * | 2022-05-07 | 2022-08-05 | 有方(合肥)医疗科技有限公司 | Image report generation method, apparatus, system, device, and storage medium |
| CN114972806A (zh) * | 2022-05-12 | 2022-08-30 | 上海工程技术大学 | Medical image analysis method based on computer vision |
| CN114724670A (zh) * | 2022-06-02 | 2022-07-08 | 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) | Medical report generation method, apparatus, storage medium, and electronic device |
| CN114708952B (zh) * | 2022-06-02 | 2022-10-04 | 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) | Image annotation method, apparatus, storage medium, and electronic device |
| CN115295125B (zh) * | 2022-08-04 | 2023-11-17 | 天津市中西医结合医院(天津市南开医院) | Artificial-intelligence-based medical image file management system and method |
| CN115062165B (zh) * | 2022-08-18 | 2022-12-06 | 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) | Medical image diagnosis method and apparatus based on an image-reading knowledge graph |
| CN115063425B (zh) * | 2022-08-18 | 2022-11-11 | 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) | Method and system for generating structured examination findings based on an image-reading knowledge graph |
| JP2024032079A (ja) * | 2022-08-29 | 2024-03-12 | 富士通株式会社 | Lesion detection method and lesion detection program |
| CN115620896A (zh) * | 2022-10-20 | 2023-01-17 | 太保科技有限公司 | Method and apparatus for extracting abnormal events from medical image reports |
| CN116186637B (zh) * | 2022-11-30 | 2025-08-12 | 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) | Medical image feature mining method for central nervous system tumors and related apparatus |
| CN116779093B (zh) * | 2023-08-22 | 2023-11-28 | 青岛美迪康数字工程有限公司 | Method, apparatus, and computer device for generating structured medical image reports |
| CN116797889B (zh) * | 2023-08-24 | 2023-12-08 | 青岛美迪康数字工程有限公司 | Method, apparatus, and computer device for updating medical image recognition models |
| CN117457142B (zh) * | 2023-11-17 | 2025-02-25 | 浙江飞图影像科技有限公司 | Medical image processing system and method for report generation |
| CN117476163B (zh) * | 2023-12-27 | 2024-03-08 | 万里云医疗信息科技(北京)有限公司 | Method, apparatus, and storage medium for determining disease conclusions |
| CN118587182B (zh) * | 2024-06-04 | 2024-12-17 | 山西省肿瘤医院 | Ultrasound analysis and diagnosis assistance system for kidney lesions based on renal ultrasound images |
| CN118334017B (zh) * | 2024-06-12 | 2024-09-10 | 中国人民解放军总医院第八医学中心 | Assisted risk assessment method for respiratory infectious diseases |
| CN118844936B (zh) * | 2024-06-28 | 2026-01-23 | 中国人民解放军总医院第一医学中心 | Remote intelligent risk determination method and system for ankylosing spondylitis |
| CN118941872A (zh) * | 2024-08-08 | 2024-11-12 | 中国医科大学 | Method for detecting and classifying region-of-interest features in medical videos based on large language models |
| CN119295654A (zh) * | 2024-09-23 | 2025-01-10 | 广州医科大学附属妇女儿童医疗中心 | Medical image processing method and apparatus based on volume reconstruction |
| CN119170260B (zh) * | 2024-11-21 | 2025-07-18 | 重庆医科大学绍兴柯桥医学检验技术研究中心 | Digital assisted analysis method and system for medical images |
| CN119918630A (zh) * | 2024-12-10 | 2025-05-02 | 宜昌市疾病预防控制中心 | Method, apparatus, and device for generating auxiliary diagnosis information |
| CN121304677A (zh) * | 2025-12-12 | 2026-01-09 | 广州中医药大学(广州中医药研究院) | Method and apparatus for reconstructing orthopedic case images |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20110137132A1 (en) * | 2009-11-24 | 2011-06-09 | Gustafson Gregory A | Mammography Information System |
| US20120014559A1 (en) * | 2010-01-12 | 2012-01-19 | Siemens Aktiengesellschaft | Method and System for Semantics Driven Image Registration |
| US20140219500A1 (en) * | 2010-07-21 | 2014-08-07 | Armin Moehrle | Image reporting method |
| US20150265251A1 (en) * | 2014-03-18 | 2015-09-24 | Samsung Electronics Co., Ltd. | Apparatus and method for visualizing anatomical elements in a medical image |
| US20180276813A1 (en) * | 2017-03-23 | 2018-09-27 | International Business Machines Corporation | Weakly supervised probabilistic atlas generation through multi-atlas label fusion |
Family Cites Families (18)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2010109351A1 (fr) * | 2009-03-26 | 2010-09-30 | Koninklijke Philips Electronics N.V. | System for automatic extraction of report templates based on diagnostic information |
| WO2010134016A1 (fr) * | 2009-05-19 | 2010-11-25 | Koninklijke Philips Electronics N.V. | Retrieval and visualization of medical images |
| CN107403058B (zh) * | 2010-07-21 | 2021-04-16 | Armin E. Moehrle | Image reporting method |
| CN102156715A (zh) * | 2011-03-23 | 2011-08-17 | 中国科学院上海技术物理研究所 | Retrieval system for medical image databases based on multi-lesion-region features |
| US9349186B2 (en) * | 2013-02-11 | 2016-05-24 | General Electric Company | Systems and methods for image segmentation using target image intensity |
| US9721340B2 (en) * | 2013-08-13 | 2017-08-01 | H. Lee Moffitt Cancer Center And Research Institute, Inc. | Systems, methods and devices for analyzing quantitative information obtained from radiological images |
| KR101576047B1 (ko) * | 2014-01-17 | 2015-12-09 | 주식회사 인피니트헬스케어 | Method and apparatus for generating structured region-of-interest information during medical image reading |
| CN103793611B (zh) * | 2014-02-18 | 2017-01-18 | 中国科学院上海技术物理研究所 | Method and apparatus for visualizing medical information |
| CN107209809A (zh) * | 2015-02-05 | 2017-09-26 | 皇家飞利浦有限公司 | Contextual creation of report content for radiology reporting |
| CN105184074B (zh) * | 2015-09-01 | 2018-10-26 | 哈尔滨工程大学 | Medical data extraction and parallel loading method based on a multimodal medical image data model |
| CN106021281A (zh) * | 2016-04-29 | 2016-10-12 | 京东方科技集团股份有限公司 | Method and apparatus for constructing a medical knowledge graph, and query method |
| US9589374B1 (en) * | 2016-08-01 | 2017-03-07 | 12 Sigma Technologies | Computer-aided diagnosis system for medical images using deep convolutional neural networks |
| CN106295186B (zh) * | 2016-08-11 | 2019-03-15 | 中国科学院计算技术研究所 | System for assisted disease diagnosis based on intelligent reasoning |
| CN106776711B (zh) * | 2016-11-14 | 2020-04-07 | 浙江大学 | Deep-learning-based method for constructing a Chinese medical knowledge graph |
| CN106909778B (zh) * | 2017-02-09 | 2019-08-27 | 北京市计算中心 | Deep-learning-based multimodal medical image recognition method and apparatus |
| CN106933994B (zh) * | 2017-02-27 | 2020-07-31 | 广东省中医院 | Method for constructing core symptom-syndrome relationships based on a traditional Chinese medicine knowledge graph |
| CN107103187B (zh) * | 2017-04-10 | 2020-12-29 | 四川省肿瘤医院 | Method and system for deep-learning-based pulmonary nodule detection, grading, and management |
| CN107145744B (zh) * | 2017-05-08 | 2018-03-02 | 合肥工业大学 | Method and apparatus for constructing a medical knowledge graph, and auxiliary diagnosis method |
- 2017-09-28: CN application CN201710895420.4A, patent CN109583440B (zh), status: Active
- 2018-09-28: WO application PCT/CN2018/108311, publication WO2019062846A1 (fr), status: Ceased
- 2020-03-28: US application US16/833,512, patent US11101033B2 (en), status: Expired - Fee Related
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20230054096A1 (en) * | 2021-08-17 | 2023-02-23 | Fujifilm Corporation | Learning device, learning method, learning program, information processing apparatus, information processing method, and information processing program |
| US12183450B2 (en) * | 2021-08-17 | 2024-12-31 | Fujifilm Corporation | Constructing trained models to associate object in image with description in sentence where feature amount for sentence is derived from structured information |
| US20230068201A1 (en) * | 2021-08-30 | 2023-03-02 | Fujifilm Corporation | Learning device, learning method, learning program, information processing apparatus, information processing method, and information processing program |
| US12431236B2 (en) * | 2021-08-30 | 2025-09-30 | Fujifilm Corporation | Learning device, learning method, learning program, information processing apparatus, information processing method, and information processing program |
Also Published As
| Publication number | Publication date |
|---|---|
| CN109583440B (zh) | 2021-12-17 |
| WO2019062846A1 (fr) | 2019-04-04 |
| US20200303062A1 (en) | 2020-09-24 |
| CN109583440A (zh) | 2019-04-05 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11101033B2 (en) | Medical image aided diagnosis method and system combining image recognition and report editing | |
| CN114782307B (zh) | Deep-learning-based assisted diagnosis system for rectal cancer staging from contrast-enhanced CT images | |
| CN111127466B (zh) | Medical image detection method, apparatus, device, and storage medium | |
| CN112086197B (zh) | Breast nodule detection method and system based on medical ultrasound | |
| Li et al. | Dilated-inception net: multi-scale feature aggregation for cardiac right ventricle segmentation | |
| CN113781440B (zh) | Ultrasound video lesion detection method and apparatus | |
| CN111429474B (zh) | Hybrid-convolution-based model construction and segmentation method for lesions in breast DCE-MRI images | |
| CN111214255A (zh) | Computer-aided diagnosis method for medical ultrasound images | |
| CN111227864A (zh) | Method and apparatus for lesion detection in ultrasound images using computer vision | |
| CN113855079A (zh) | Real-time detection and assisted analysis method for breast disease based on breast ultrasound images | |
| Ma et al. | A novel deep learning framework for automatic recognition of thyroid gland and tissues of neck in ultrasound image | |
| CN105447872A (zh) | Method for automatically identifying liver tumor types in ultrasound images | |
| Vukicevic et al. | Deep learning segmentation of primary Sjögren's syndrome affected salivary glands from ultrasonography images | |
| CN116630680B (zh) | Dual-modality image classification method and system combining X-ray radiography and ultrasound | |
| CN105913432A (zh) | Aorta extraction method and apparatus based on CT image sequences | |
| Zhuang et al. | Tumor classification in automated breast ultrasound (ABUS) based on a modified extracting feature network | |
| CN110728239B (zh) | Automatic recognition system for gastric cancer in contrast-enhanced CT images using deep learning | |
| CN110363772B (zh) | Cardiac MRI segmentation method and system based on adversarial networks | |
| Wang et al. | A method of ultrasonic image recognition for thyroid papillary carcinoma based on deep convolution neural network. | |
| Li et al. | A dual attention-guided 3D convolution network for automatic segmentation of prostate and tumor | |
| Wang et al. | Automated pericardium segmentation and epicardial adipose tissue quantification from computed tomography images | |
| CN110648333B (zh) | Real-time segmentation system for breast ultrasound video images based on neutrosophic theory | |
| CN120107603B (zh) | Ovarian mass segmentation apparatus based on grouping of multi-morphology mass image features | |
| CN114757894A (zh) | Bone tumor lesion analysis system | |
| CN115375632A (zh) | Intelligent pulmonary nodule detection system and method based on the CenterNet model | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| | FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
| | STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| | FEPP | Fee payment procedure | Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| | LAPS | Lapse for failure to pay maintenance fees | Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| | STCH | Information on status: patent discontinuation | Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
| 2025-08-24 | FP | Lapsed due to failure to pay maintenance fee | Effective date: 20250824 |