WO2022247007A1 - Method and apparatus for grading medical images, electronic device, and computer-readable storage medium

Method and apparatus for grading medical images, electronic device, and computer-readable storage medium

Info

Publication number
WO2022247007A1
WO2022247007A1 (PCT/CN2021/109482)
Authority
WO
WIPO (PCT)
Prior art keywords
medical image
result
lesion
classification
feature
Prior art date
Application number
PCT/CN2021/109482
Other languages
English (en)
Chinese (zh)
Inventor
郭振
柳杨
李君
吕彬
高艳
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2022247007A1

Classifications

    • G06T 7/11: Image analysis; Segmentation; Edge detection; Region-based segmentation
    • G06F 18/214: Pattern recognition; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/241: Pattern recognition; Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/045: Neural networks; Architecture; Combinations of networks
    • G06T 7/0012: Image analysis; Inspection of images; Biomedical image inspection
    • G06T 7/62: Image analysis; Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/30041: Biomedical image processing; Eye; Retina; Ophthalmic

Definitions

  • The present application relates to the fields of artificial intelligence and digital medicine, and in particular to a medical image grading method and apparatus, an electronic device, and a readable storage medium.
  • Image recognition, as an important branch of artificial intelligence, has been applied in many fields. In the medical field, for example, it is used to analyze medical images and judge the severity of a disease, such as grading fundus color ultrasound images to determine the degree of diabetic retinopathy.
  • The inventor realized that current image grading methods rely on a single image recognition model to grade medical images; the feature dimensions involved are limited, resulting in poor grading accuracy.
  • A medical image grading method provided by this application includes: acquiring a medical image to be graded, and performing feature extraction on it using the feature extraction network in a pre-built lesion detection model to obtain a feature map; performing classification recognition and result statistics on the feature map to obtain a classification result; performing region segmentation and area calculation on the feature map using the lesion segmentation network in the lesion detection model to obtain a segmentation result; performing feature matching on the classification result and the segmentation result to obtain feature information; grading the medical image to be graded using a pre-built first grading model to obtain a first grading result; and performing grading correction on the feature information and the first grading result using a pre-built second grading model to obtain a target grading result.
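  • To make the data flow concrete, the following is a minimal sketch of the six steps above; the function and attribute names are illustrative assumptions, not the patent's code.

```python
def grade_medical_image(image, detector, matcher, first_grader, second_grader):
    """Illustrative six-step grading pipeline (S1-S6)."""
    feature_map = detector.extract_features(image)     # S1: feature map
    cls_result = detector.classify(feature_map)        # S2: classification result
    seg_result = detector.segment(feature_map)         # S3: segmentation result
    feature_info = matcher(cls_result, seg_result)     # S4: feature information
    first_grade = first_grader(image)                  # S5: first grading result
    return second_grader(feature_info, first_grade)    # S6: target grading result
```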
  • The present application also provides a medical image grading apparatus, the apparatus comprising:
  • a feature matching module, used to acquire a medical image to be graded, and to perform feature extraction on the medical image using the feature extraction network in a pre-built lesion detection model to obtain a feature map; to perform classification recognition and result statistics on the feature map to obtain a classification result; to perform region segmentation and area calculation on the feature map using the lesion segmentation network in the lesion detection model to obtain a segmentation result; and to perform feature matching on the classification result and the segmentation result to obtain feature information;
  • an image grading module, configured to grade the medical image to be graded using a pre-built first grading model to obtain a first grading result; and
  • a grading correction module, configured to perform grading correction on the feature information and the first grading result using a pre-built second grading model to obtain a target grading result.
  • The present application also provides an electronic device, which includes:
  • a memory storing at least one computer program; and
  • a processor executing the computer program stored in the memory to implement the steps of the medical image grading method described above, in which the feature information and the first grading result are grading-corrected using the pre-built second grading model to obtain a target grading result.
  • The present application also provides a computer-readable storage medium in which at least one computer program is stored, the at least one computer program being executed by a processor in an electronic device to implement the same steps, in which the feature information and the first grading result are grading-corrected using the pre-built second grading model to obtain a target grading result.
  • FIG. 1 is a schematic flowchart of a medical image grading method provided by an embodiment of the present application;
  • FIG. 2 is a schematic module diagram of a medical image grading apparatus provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of the internal structure of an electronic device implementing a medical image grading method provided by an embodiment of the present application.
  • Artificial intelligence (AI) is a theory, method, technology, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
  • Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics.
  • Artificial intelligence software technology mainly includes computer vision, robotics, biometrics, speech processing, natural language processing, and machine learning/deep learning.
  • In the embodiments of the present application, the sample image is a medical image, and the type of object contained in the sample image is a lesion, that is, a part of the body where pathological change occurs.
  • Medical images are images of internal tissues obtained in a non-invasive way for medical treatment or medical research, such as images of the stomach, abdomen, heart, knee, or brain, including CT (Computed Tomography), MRI (Magnetic Resonance Imaging), US (Ultrasound), X-ray images, EEG, and images generated by medical instruments with optical photography.
  • An embodiment of the present application provides a medical image grading method.
  • The executor of the medical image grading method includes, but is not limited to, at least one of electronic devices such as a server or a terminal that can be configured to execute the method provided by the embodiments of the present application.
  • In other words, the medical image grading method can be executed by software or hardware installed on a terminal device or a server device, and the software can be a blockchain platform.
  • The server includes, but is not limited to: a single server, a server cluster, a cloud server, a cloud server cluster, and the like.
  • The server can be an independent server, or it can be a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDN), big data, and artificial intelligence platforms.
  • The medical image grading method includes the following steps:
  • In the embodiment of the present application, the medical image to be graded is a fundus color ultrasound image.
  • The lesion detection model includes: a feature extraction network, a lesion classification network, and a lesion segmentation network.
  • The feature extraction network is used for feature extraction.
  • The lesion classification network is used for lesion classification.
  • The lesion segmentation network is used for lesion region segmentation.
  • The initial feature extraction network in the feature extraction network is used to perform a convolution pooling operation on the medical image to be graded to obtain an initial feature map; the region extraction network in the feature extraction network is used to mark the regions of interest in the initial feature map to obtain the feature map.
  • The initial feature extraction network is a convolutional neural network, and the region extraction network is an RPN (Region Proposal Network).
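  • The structure described here, a convolutional backbone with an RPN plus a box classification branch and a mask segmentation branch, matches the standard Mask R-CNN layout. As a minimal illustrative sketch (not the patent's actual implementation), torchvision's built-in Mask R-CNN can be configured with one class per lesion category:

```python
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Assumption for illustration: the 9 lesion categories of the embodiment
# plus a background class.
NUM_CLASSES = 1 + 9

# The CNN backbone + RPN play the role of the "feature extraction network";
# the box head acts as the "lesion classification network" and the mask
# head as the "lesion segmentation network".
model = maskrcnn_resnet50_fpn(weights=None, num_classes=NUM_CLASSES)
```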
  • Before the feature extraction network in the lesion detection model is used to perform feature extraction on the medical image to be graded, the method also includes: acquiring a historical medical image set, and performing preset labeling on the historical medical image set to obtain a first training image set; and iteratively training a pre-built first deep learning network model with the first training image set to obtain the lesion detection model.
  • The historical medical image set includes a plurality of historical medical images, which are medical images of the same type as the image to be graded but with different contents.
  • Because the lesion detection model is obtained by training the first deep learning network model, the first deep learning network model has the same network structure as the lesion detection model; that is, it also includes a feature extraction network, a lesion classification network, and a lesion segmentation network.
  • The first training image set is used to iteratively train the pre-built first deep learning network model to obtain the lesion detection model, wherein the first deep learning network model is a Mask R-CNN model. The training includes:
  • Step A: use the feature extraction network in the first deep learning network model to perform convolution pooling on each image in the first training image set, and mark the regions of interest on the convolution-pooled images to obtain historical feature maps;
  • The feature extraction network in the embodiment of the present application includes an initial feature extraction network and a region extraction network, wherein the initial feature extraction network is a convolutional neural network and the region extraction network is an RPN (Region Proposal Network).
  • The initial feature extraction network is used to perform convolution pooling.
  • The region extraction network is used to mark the regions of interest.
  • Step B: use the lesion classification network in the first deep learning network model to perform bounding box prediction and classification prediction on the regions of interest in the historical feature map, obtaining predicted bounding box coordinates and predicted classification values;
  • Step C: obtain the true bounding box coordinates from the lesion areas marked in the historical medical image corresponding to the historical feature map, and obtain the true classification values from the marked lesion categories; for example, for a region of interest labeled as a laser spot lesion, the classification true value of the corresponding laser spot lesion category is 1;
  • Step D: calculate a first loss value from the predicted and true classification values using a preset first loss function, and calculate a second loss value from the true and predicted bounding box coordinates using a preset second loss function.
  • The first loss function and the second loss function in this embodiment of the present application may be cross-entropy loss functions.
  • The lesion classification network described in the embodiment of the present application includes a fully connected layer and a softmax network.
  • Step E: use the lesion segmentation network in the first deep learning network model to perform region segmentation prediction on the historical feature map, obtaining, for each region, the predicted total number of pixels and the predicted number of pixels at the region's edge;
  • The lesion segmentation network described in the embodiment of the present application is a fully convolutional network.
  • Step F: obtain, for each region, the true total number of pixels and the true number of pixels at the region's edge from the lesion areas marked in the historical medical image corresponding to the historical feature map;
  • Step G: calculate a third loss value from the predicted and true total pixel counts and edge pixel counts of each region using a preset third loss function, and sum the first loss value, the second loss value, and the third loss value to obtain a target loss value;
  • The third loss function is a cross-entropy loss function.
  • Step H: when the target loss value is greater than or equal to a preset loss threshold, update the parameters of the first deep learning network model and return to Step A; when the target loss value is less than the preset loss threshold, stop training to obtain the lesion detection model.
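  • Steps A through H amount to a standard detection training loop in which the classification, bounding box, and segmentation losses are summed into one target loss. Below is a hedged sketch built on the torchvision Mask R-CNN from the earlier snippet; the loss threshold and learning rate are illustrative assumptions, not values from the patent.

```python
import torch

def train_lesion_detector(model, loader, loss_threshold=0.05, lr=1e-4):
    """Steps A-H: train until the summed target loss drops below the threshold."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    while True:
        for images, targets in loader:
            # In training mode the model returns the per-branch losses
            # (classification, box regression, mask segmentation, RPN losses).
            loss_dict = model(images, targets)
            target_loss = sum(loss_dict.values())  # first + second + third loss
            if target_loss.item() < loss_threshold:
                return model  # Step H: stop once the target loss is small enough
            optimizer.zero_grad()
            target_loss.backward()
            optimizer.step()
```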
  • In addition, the high-throughput nature of the blockchain can be exploited by storing the medical images to be graded in blockchain nodes, improving data access efficiency.
  • The lesion classification network in the lesion detection model is used to mark and classify the bounding boxes in the feature map, and the numbers of bounding boxes of the same category are tallied to obtain the classification result. For example: the feature map contains four bounding boxes A, B, C, and D; bounding box A is classified as a hemorrhage lesion, bounding box B as a laser spot lesion, bounding box C as a preretinal hemorrhage lesion, and bounding box D as a hemorrhage lesion. Tallying the bounding boxes of the same category then gives the classification result: two hemorrhage lesion bounding boxes (A and D), one laser spot lesion bounding box (B), and one preretinal hemorrhage lesion bounding box (C).
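  • This tallying step is a simple category count. A minimal sketch, with the label strings assumed purely for illustration:

```python
from collections import Counter

# Predicted category of each detected bounding box (values taken from the
# example above).
box_labels = ["hemorrhage", "laser spot", "preretinal hemorrhage", "hemorrhage"]

classification_result = Counter(box_labels)
print(classification_result)
# Counter({'hemorrhage': 2, 'laser spot': 1, 'preretinal hemorrhage': 1})
```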
  • The lesion segmentation network in the lesion detection model is used to segment the feature map into multiple segmented regions; the lesion segmentation network in the embodiment of the present application is a fully convolutional network. Further, because the sizes of segmented regions differ greatly across medical images of different sizes, a unified basis of comparison is needed: the area ratio of each segmented region to the medical image to be graded is calculated to obtain the corresponding relative area, which is unaffected by changes in the image's absolute size. All segmented regions and the relative area of each are then collected to obtain the segmentation result.
  • For example: the feature map contains four segmented regions a, b, c, and d; if segmented region a consists of 10 pixels and the medical image to be graded consists of 100 pixels, the corresponding relative area is 10%.
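  • The relative area is simply the region's pixel count divided by the image's pixel count. A sketch under the assumption that each segmented region is available as a boolean mask:

```python
import numpy as np

def relative_areas(region_masks, image_shape):
    """Map each region id to its area as a fraction of the whole image.

    region_masks: dict of region id -> boolean mask of shape (H, W)
    image_shape:  (H, W) of the medical image to be graded
    """
    total_pixels = image_shape[0] * image_shape[1]
    return {rid: float(mask.sum()) / total_pixels
            for rid, mask in region_masks.items()}

# Example from the text: region "a" covers 10 of 100 pixels -> 10%.
mask_a = np.zeros((10, 10), dtype=bool)
mask_a[0, :] = True  # 10 pixels
print(relative_areas({"a": mask_a}, (10, 10)))  # {'a': 0.1}
```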
  • The embodiment of the present application matches and associates the classification result with the segmentation result to obtain the lesion category corresponding to each relative area in the segmentation result.
  • Because the classification result and the segmentation result in the embodiment of the present application are obtained from different branches of the same model, each bounding box and its segmented region occupy the same position. For example, if the classification result contains bounding box A for a hemorrhage lesion and segmented region a corresponds to bounding box A, matching yields hemorrhage lesion as the lesion category of segmented region a.
  • All relative areas corresponding to the same lesion category in the segmentation result are summed to obtain the total area of that lesion category; each total area is combined with its corresponding lesion category to obtain the feature information.
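  • The feature matching step therefore reduces to grouping relative areas by the lesion category of the matched bounding box. A sketch, with the data layout assumed for illustration:

```python
from collections import defaultdict

def build_feature_info(box_categories, region_to_box, region_areas):
    """Sum relative areas per lesion category.

    box_categories: box id -> lesion category (from the classification result)
    region_to_box:  region id -> box id at the same position
    region_areas:   region id -> relative area (from the segmentation result)
    """
    totals = defaultdict(float)
    for region, area in region_areas.items():
        category = box_categories[region_to_box[region]]
        totals[category] += area
    return dict(totals)

# Example: region "a" (10% of the image) matches box "A", a hemorrhage lesion.
print(build_feature_info({"A": "hemorrhage"}, {"a": "A"}, {"a": 0.10}))
# {'hemorrhage': 0.1}
```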
  • Before the first grading model is used to grade the medical image to be graded to obtain the first grading result, the method also includes: performing preset grading labeling on the historical medical image set to obtain a second training image set; and iteratively training a pre-built second deep learning network model with the second training image set to obtain the first grading model.
  • The grading labels include: mild non-proliferative retinopathy, moderate non-proliferative retinopathy, severe non-proliferative retinopathy, proliferative retinopathy, and normal fundus.
  • The second deep learning network model in the embodiment of the present application is a convolutional neural network model including a dense attention mechanism.
  • The second grading model is a random forest model.
  • The embodiment of the present application uses the second grading model to grade the feature information, obtaining the target grading result.
  • Before the second grading model is used to grade the feature information in the embodiment of the present application, the method also includes: constructing a random forest model with the preset lesion category labels as root nodes and with the pre-built relative area classification intervals and preset grading labels as classification conditions, to obtain the second grading model. The grading labels comprise five kinds: mild non-proliferative retinopathy, moderate non-proliferative retinopathy, severe non-proliferative retinopathy, proliferative retinopathy, and normal fundus. The lesion area classification intervals can be set according to actual diagnostic experience, for example [0, 20%), [20%, 40%), [40%, 60%), [60%, 80%), [80%, 100%].
  • The feature information and the first grading result are input into the second grading model to obtain the target grading result. For example: if the first grading result is moderate non-proliferative retinopathy and the feature information is [preretinal hemorrhage lesion, 10%], then inputting the first grading result and the feature information into the second grading model yields a target grading result of mild non-proliferative retinopathy.
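  • As an illustrative sketch of this correction stage (the feature encoding, interval bins, and training data below are assumptions, not the patent's specification), a random forest can take the first grading result plus the binned lesion areas as input and output the corrected grade:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

GRADES = ["normal fundus", "mild NPDR", "moderate NPDR", "severe NPDR", "PDR"]
BINS = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]   # the [0, 20%), ..., [80%, 100%] intervals
CATEGORIES = ["hemorrhage", "laser spot", "preretinal hemorrhage"]

def encode(first_grade, lesion_areas):
    """Feature vector: first grading result + binned relative area per category."""
    binned = [int(np.digitize(lesion_areas.get(c, 0.0), BINS)) for c in CATEGORIES]
    return [GRADES.index(first_grade)] + binned

# Tiny made-up training set purely for illustration.
X = [encode("moderate NPDR", {"preretinal hemorrhage": 0.10}),
     encode("mild NPDR", {})]
y = ["mild NPDR", "normal fundus"]

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(forest.predict([encode("moderate NPDR", {"preretinal hemorrhage": 0.10})]))
# ['mild NPDR']
```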
  • Referring to FIG. 2, it is a functional block diagram of the medical image grading apparatus of the present application.
  • The medical image grading apparatus 100 described in this application can be installed in an electronic device.
  • The medical image grading apparatus may include a feature matching module 101, an image grading module 102, and a grading correction module 103.
  • The modules described in the present application may also be referred to as units, where a unit refers to a series of computer program segments that can be executed by the processor of an electronic device, can perform a fixed function, and are stored in the memory of the electronic device.
  • In this embodiment, the functions of each module/unit are as follows:
  • The feature matching module 101 is used to acquire the medical image to be graded, and to perform feature extraction on it using the feature extraction network in the pre-built lesion detection model to obtain a feature map; to perform classification recognition and result statistics on the feature map to obtain a classification result; to perform region segmentation and area calculation on the feature map using the lesion segmentation network in the lesion detection model to obtain a segmentation result; and to perform feature matching on the classification result and the segmentation result to obtain feature information.
  • In the embodiment of the present application, the medical image to be graded is a fundus color ultrasound image.
  • The lesion detection model includes: a feature extraction network, a lesion classification network, and a lesion segmentation network.
  • The feature extraction network is used for feature extraction.
  • The lesion classification network is used for lesion classification.
  • The lesion segmentation network is used for lesion region segmentation.
  • The feature matching module 101 in the embodiment of the present application uses the initial feature extraction network in the feature extraction network to perform a convolution pooling operation on the medical image to be graded to obtain an initial feature map, and uses the region extraction network to mark the regions of interest in the initial feature map to obtain the feature map.
  • The initial feature extraction network is a convolutional neural network, and the region extraction network is an RPN (Region Proposal Network).
  • Before the feature matching module 101 in the embodiment of the present application uses the feature extraction network in the lesion detection model to perform feature extraction on the medical image to be graded, it also: acquires a historical medical image set, performs preset labeling on the historical medical image set to obtain a first training image set, and iteratively trains the pre-built first deep learning network model with the first training image set to obtain the lesion detection model.
  • The historical medical image set includes a plurality of historical medical images, which are medical images of the same type as the image to be graded but with different contents.
  • The feature matching module 101 in the embodiment of the present application performs preset labeling on the historical medical image set to obtain the first training image set by: marking the lesion areas in each historical medical image to obtain target areas, and marking the lesion category of each target area in each historical medical image to obtain the first training image set. Optionally, the preset lesion areas include: microaneurysm areas, hemorrhage areas, sclerotic areas, cotton wool spot areas, laser spot areas, neovascularization areas, vitreous hemorrhage areas, preretinal hemorrhage areas, and fibrous membrane areas. The preset lesion categories correspond one-to-one with the preset lesion areas, and include: microaneurysm lesions, hemorrhage lesions, sclerotic lesions, cotton wool spot lesions, laser spot lesions, neovascularization lesions, vitreous hemorrhage lesions, preretinal hemorrhage lesions, and fibrous membrane lesions.
  • Because the lesion detection model is obtained by training the first deep learning network model, the first deep learning network model has the same network structure as the lesion detection model; that is, it also includes a feature extraction network, a lesion classification network, and a lesion segmentation network.
  • The feature matching module 101 uses the first training image set to iteratively train the pre-built first deep learning network model to obtain the lesion detection model, wherein the first deep learning network model is a Mask R-CNN model. The training includes:
  • Step A: use the feature extraction network in the first deep learning network model to perform convolution pooling on each image in the first training image set, and mark the regions of interest on the convolution-pooled images to obtain historical feature maps;
  • The feature extraction network in the embodiment of the present application includes an initial feature extraction network and a region extraction network, wherein the initial feature extraction network is a convolutional neural network and the region extraction network is an RPN (Region Proposal Network).
  • The initial feature extraction network is used to perform convolution pooling.
  • The region extraction network is used to mark the regions of interest.
  • Step B: use the lesion classification network in the first deep learning network model to perform bounding box prediction and classification prediction on the regions of interest in the historical feature map, obtaining predicted bounding box coordinates and predicted classification values;
  • Step C: obtain the true bounding box coordinates from the lesion areas marked in the historical medical image corresponding to the historical feature map, and obtain the true classification values from the marked lesion categories; for example, for a region of interest labeled as a laser spot lesion, the classification true value of the corresponding laser spot lesion category is 1;
  • Step D: calculate a first loss value from the predicted and true classification values using a preset first loss function, and calculate a second loss value from the true and predicted bounding box coordinates using a preset second loss function.
  • The first loss function and the second loss function in this embodiment of the present application may be cross-entropy loss functions.
  • The lesion classification network described in the embodiment of the present application includes a fully connected layer and a softmax network.
  • Step E: use the lesion segmentation network in the first deep learning network model to perform region segmentation prediction on the historical feature map, obtaining, for each region, the predicted total number of pixels and the predicted number of pixels at the region's edge;
  • The lesion segmentation network described in the embodiment of the present application is a fully convolutional network.
  • Step F: obtain, for each region, the true total number of pixels and the true number of pixels at the region's edge from the lesion areas marked in the historical medical image corresponding to the historical feature map;
  • Step G: calculate a third loss value from the predicted and true total pixel counts and edge pixel counts of each region using a preset third loss function, and sum the first loss value, the second loss value, and the third loss value to obtain a target loss value;
  • The third loss function is a cross-entropy loss function.
  • Step H: when the target loss value is greater than or equal to a preset loss threshold, update the parameters of the first deep learning network model and return to Step A; when the target loss value is less than the preset loss threshold, stop training to obtain the lesion detection model.
  • In addition, the high-throughput nature of the blockchain can be exploited by storing the medical images to be graded in blockchain nodes, improving data access efficiency.
  • The feature matching module 101 in the embodiment of the present application uses the lesion classification network in the lesion detection model to mark and classify the bounding boxes in the feature map, and tallies the numbers of bounding boxes of the same category to obtain the classification result. For example: the feature map contains four bounding boxes A, B, C, and D; bounding box A is classified as a hemorrhage lesion, bounding box B as a laser spot lesion, bounding box C as a preretinal hemorrhage lesion, and bounding box D as a hemorrhage lesion. Tallying the bounding boxes of the same category then gives the classification result: two hemorrhage lesion bounding boxes (A and D), one laser spot lesion bounding box (B), and one preretinal hemorrhage lesion bounding box (C).
  • The feature matching module 101 in the embodiment of the present application uses the lesion segmentation network in the lesion detection model to perform region segmentation on the feature map to obtain multiple segmented regions; the lesion segmentation network described above is a fully convolutional network.
  • Because the sizes of segmented regions differ greatly across medical images of different sizes, a unified basis of comparison is needed: the area ratio of each segmented region to the medical image to be graded is calculated to obtain the corresponding relative area, which is unaffected by changes in the image's absolute size; all segmented regions and the relative area of each are then collected to obtain the segmentation result.
  • For example: the feature map contains four segmented regions a, b, c, and d; if segmented region a consists of 10 pixels and the medical image to be graded consists of 100 pixels, the corresponding relative area is 10%.
  • The feature matching module 101 in the embodiment of the present application matches and associates the classification result with the segmentation result to obtain the lesion category corresponding to each relative area in the segmentation result.
  • Because the classification result and the segmentation result in the embodiment of the present application are obtained from different branches of the same model, each bounding box and its segmented region occupy the same position. For example, if the classification result contains bounding box A for a hemorrhage lesion and segmented region a corresponds to bounding box A, matching yields hemorrhage lesion as the lesion category of segmented region a.
  • The image grading module 102 is configured to grade the medical image to be graded using the pre-built first grading model to obtain a first grading result.
  • Before the image grading module 102 uses the first grading model to grade the medical image to be graded to obtain the first grading result, it also: performs preset grading labeling on the historical medical image set to obtain a second training image set, and iteratively trains the pre-built second deep learning network model with the second training image set to obtain the first grading model.
  • the grading labels include: mild non-proliferative retinopathy, moderate non-proliferative retinopathy, severe non-proliferative retinopathy, proliferative retinopathy and normal fundus.
  • the second deep learning network model in the embodiment of the present application is a convolutional neural network model including a dense attention mechanism.
  • The grading correction module 103 is configured to use the pre-built second grading model to perform grading correction on the feature information and the first grading result to obtain a target grading result.
  • The second grading model is a random forest model.
  • The grading correction module 103 in the embodiment of the present application uses the second grading model to grade the feature information, obtaining the target grading result.
  • Before the grading correction module 103 in the embodiment of the present application uses the second grading model to grade the feature information, it also: constructs a random forest model with the preset lesion category labels as root nodes and with the pre-built relative area classification intervals and preset grading labels as classification conditions, obtaining the second grading model. The grading labels comprise five kinds: mild non-proliferative retinopathy, moderate non-proliferative retinopathy, severe non-proliferative retinopathy, proliferative retinopathy, and normal fundus.
  • The lesion area classification intervals can be set according to actual diagnostic experience, for example [0, 20%), [20%, 40%), [40%, 60%), [60%, 80%), [80%, 100%].
  • The grading correction module 103 inputs the feature information and the first grading result into the second grading model to obtain the target grading result. For example: if the first grading result is moderate non-proliferative retinopathy and the feature information is [preretinal hemorrhage lesion, 10%], then the first grading result and the feature information are input into the second grading model, and the target grading result is mild non-proliferative retinopathy.
  • Referring to FIG. 3, it is a schematic structural diagram of an electronic device implementing the medical image grading method of the present application.
  • The electronic device may include a processor 10, a memory 11, a communication bus 12, and a communication interface 13, and may also include a computer program stored in the memory 11 and operable on the processor 10, such as a medical image grading program.
  • The memory 11 includes at least one type of readable storage medium, including flash memory, removable hard disks, multimedia cards, card-type memory (for example, SD or DX memory), magnetic memory, magnetic disks, optical discs, and the like.
  • In some embodiments, the memory 11 may be an internal storage unit of the electronic device, such as a removable hard disk of the electronic device.
  • In other embodiments, the memory 11 may be an external storage device of the electronic device, such as a plug-in removable hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card.
  • The memory 11 may also include both an internal storage unit of the electronic device and an external storage device.
  • The memory 11 can be used not only to store application software installed in the electronic device and various data, such as the code of the medical image grading program, but also to temporarily store data that has been output or is to be output.
  • In some embodiments, the processor 10 may be composed of integrated circuits, for example a single packaged integrated circuit, or multiple integrated circuits packaged with the same or different functions, including one or more central processing units (CPU), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like.
  • The processor 10 is the control core (Control Unit) of the electronic device; it uses various interfaces and lines to connect the components of the entire electronic device, and executes various functions of the electronic device and processes data by running or executing the programs or modules stored in the memory 11 (such as the medical image grading program) and calling the data stored in the memory 11.
  • The communication bus 12 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • The bus can be divided into an address bus, a data bus, a control bus, and so on.
  • The communication bus 12 is configured to realize connection and communication between the memory 11, the at least one processor 10, and the like. For ease of representation, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
  • FIG. 3 shows only an electronic device with certain components; those skilled in the art will understand that the structure shown in FIG. 3 does not constitute a limitation on the electronic device, which may include fewer or more components than shown, a combination of certain components, or a different arrangement of components.
  • The electronic device may also include a power supply (such as a battery) for supplying power to the various components.
  • The power supply may be logically connected to the at least one processor 10 through a power management device, thereby implementing functions such as charge management, discharge management, and power consumption management.
  • The power supply may also include one or more DC or AC power sources, recharging devices, power failure detection circuits, power converters or inverters, power status indicators, and any other components.
  • The electronic device may also include various sensors, a Bluetooth module, a Wi-Fi module, and so on, which will not be repeated here.
  • Optionally, the communication interface 13 may include a wired interface and/or a wireless interface (such as a Wi-Fi interface or a Bluetooth interface), which is generally used to establish a communication connection between the electronic device and other electronic devices.
  • Optionally, the communication interface 13 may also include a user interface.
  • The user interface may be a display (Display) or an input unit (such as a keyboard); optionally, the user interface may also be a standard wired interface or a wireless interface.
  • The display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like.
  • The display may also be appropriately referred to as a display screen or display unit, and is used for displaying the information processed in the electronic device and for displaying a visualized user interface.
  • The medical image grading program stored in the memory 11 of the electronic device is a combination of multiple computer programs; when run by the processor 10, it implements the steps of the medical image grading method described above, in which the feature information and the first grading result are grading-corrected using the pre-built second grading model to obtain a target grading result.
  • If the integrated modules/units of the electronic device are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium.
  • The computer-readable medium may be non-volatile or volatile.
  • The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, or a read-only memory (ROM).
  • An embodiment of the present application also provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor of an electronic device, it implements the steps of the medical image grading method described above, in which the feature information and the first grading result are grading-corrected using the pre-built second grading model to obtain a target grading result.
  • The computer-usable storage medium may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created during use of the program, and the like.
  • Modules described as separate components may or may not be physically separated, and components shown as modules may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • In addition, each functional module in each embodiment of the present application may be integrated into one processing unit, or each unit may exist separately physically, or two or more units may be integrated into one unit.
  • The above integrated units can be implemented in the form of hardware, or in the form of hardware plus software functional modules.
  • Blockchain, essentially a decentralized database, is a series of data blocks linked to one another using cryptographic methods; each data block contains a batch of network transaction information, which is used to verify the validity of the information (anti-counterfeiting) and to generate the next block.
  • The blockchain may include an underlying blockchain platform, a platform product service layer, and an application service layer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed are a medical image grading method and apparatus, an electronic device, and a storage medium, relating to the technical fields of artificial intelligence and digital medicine. The method comprises: acquiring a medical image to be graded, and performing feature extraction on the medical image using the feature extraction network in a pre-built lesion detection model to obtain a feature map (S1), wherein the medical image can be stored in a blockchain node; performing classification recognition and result statistics on the feature map to obtain a classification result (S2); performing region segmentation and area calculation on the feature map to obtain a segmentation result (S3); performing feature matching on the classification result and the segmentation result to obtain feature information (S4); grading the medical image using a pre-built first grading model to obtain a first grading result (S5); and performing grading correction on the feature information and the first grading result using a pre-built second grading model to obtain a target grading result (S6). The method improves the accuracy of medical image grading.
PCT/CN2021/109482 2021-05-25 2021-07-30 Method and apparatus for grading medical images, electronic device and computer-readable storage medium WO2022247007A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110570809.8 2021-05-25
CN202110570809.8A CN113487621A (zh) 2021-05-25 Medical image grading method, apparatus, electronic device and readable storage medium

Publications (1)

Publication Number Publication Date
WO2022247007A1 (fr)

Family

ID=77933476

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/109482 WO2022247007A1 (fr) 2021-05-25 2021-07-30 Method and apparatus for grading medical images, electronic device and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN113487621A (fr)
WO (1) WO2022247007A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116458945A (zh) * 2023-04-25 2023-07-21 杭州整形医院有限公司 Intelligent guidance system and method for cosmetic suture routes on children's faces

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114529760B (zh) * 2022-01-25 2022-09-02 北京医准智能科技有限公司 Adaptive classification method and apparatus for thyroid nodules

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107563123A (zh) * 2017-09-27 2018-01-09 百度在线网络技术(北京)有限公司 Method and apparatus for labeling medical images
CN109785300A (zh) * 2018-12-27 2019-05-21 华南理工大学 Cancer medical image data processing method, system, apparatus and storage medium
WO2020077962A1 (fr) * 2018-10-16 2020-04-23 杭州依图医疗技术有限公司 Breast image recognition method and device
CN111161279A (zh) * 2019-12-12 2020-05-15 中国科学院深圳先进技术研究院 Medical image segmentation method, apparatus and server

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108615051B (zh) * 2018-04-13 2020-09-15 博众精工科技股份有限公司 Deep learning-based diabetic retina image classification method and system
US20200250398A1 (en) * 2019-02-01 2020-08-06 Owkin Inc. Systems and methods for image classification
US10430946B1 (en) * 2019-03-14 2019-10-01 Inception Institute of Artificial Intelligence, Ltd. Medical image segmentation and severity grading using neural network architectures with semi-supervised learning techniques
CN111028206A (zh) * 2019-11-21 2020-04-17 万达信息股份有限公司 Deep learning-based automatic prostate cancer detection and classification system
CN111986211A (zh) * 2020-08-14 2020-11-24 武汉大学 Deep learning-based automatic ophthalmic ultrasound screening method and system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107563123A (zh) * 2017-09-27 2018-01-09 百度在线网络技术(北京)有限公司 Method and apparatus for labeling medical images
WO2020077962A1 (fr) * 2018-10-16 2020-04-23 杭州依图医疗技术有限公司 Breast image recognition method and device
CN109785300A (zh) * 2018-12-27 2019-05-21 华南理工大学 Cancer medical image data processing method, system, apparatus and storage medium
CN111161279A (zh) * 2019-12-12 2020-05-15 中国科学院深圳先进技术研究院 Medical image segmentation method, apparatus and server

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116458945A (zh) * 2023-04-25 2023-07-21 杭州整形医院有限公司 Intelligent guidance system and method for cosmetic suture routes on children's faces
CN116458945B (zh) * 2023-04-25 2024-01-16 杭州整形医院有限公司 Intelligent guidance system and method for cosmetic suture routes on children's faces

Also Published As

Publication number Publication date
CN113487621A (zh) 2021-10-08

Similar Documents

Publication Publication Date Title
US10004463B2 (en) Systems, methods, and computer readable media for using descriptors to identify when a subject is likely to have a dysmorphic feature
WO2021189909A1 (fr) Procédé et appareil de détection et d'analyse de lésion, dispositif électronique et support de stockage informatique
CN111932547B (zh) 图像中目标物的分割方法、装置、电子设备及存储介质
CN111932534B (zh) 医学影像图片分析方法、装置、电子设备及可读存储介质
WO2022247007A1 (fr) Procédé et appareil de classement d'images médicales, dispositif électronique et support de stockage lisible par ordinateur
WO2021151291A1 (fr) Procédé d'analyse de risque de maladie, appareil, dispositif électronique et support de stockage informatique
WO2021189914A1 (fr) Dispositif électronique, procédé et appareil de génération d'index d'image médicale, et support de stockage
CN111967539B (zh) 基于cbct数据库的颌面部骨折的识别方法、装置及终端设备
CN111933274A (zh) 疾病分类诊断方法、装置、电子设备及存储介质
CN115294426B (zh) 介入医学设备的跟踪方法、装置、设备及存储介质
CN116719891A (zh) 中医信息分组聚类方法、装置、设备及计算机存储介质
CN116150690A (zh) DRGs决策树构建方法及装置、电子设备、存储介质
CN114511569B (zh) 基于肿瘤标志物的医学图像识别方法、装置、设备及介质
WO2023029348A1 (fr) Procédé d'étiquetage d'instance d'image basé sur l'intelligence artificielle, et dispositif associé
CN111967540B (zh) 基于ct数据库的颌面部骨折的识别方法、装置及终端设备
CN115760656A (zh) 一种医学影像处理方法及系统
CN113688319B (zh) 医疗产品推荐方法及相关设备
CN116386831B (zh) 一种基于智慧医院管理平台的数据可视化展示方法及系统
CN114627050A (zh) 基于肝部病理全切片的病例分析方法及系统
CN116741395A (zh) 知识图谱嵌入方法、装置、设备及存储介质
CN113674840A (zh) 医学影像共享方法、装置、电子设备及存储介质
CN116680380A (zh) 视觉智能问答方法、装置、电子设备及存储介质
CN115983234A (zh) 健康问卷分析方法、装置、设备及存储介质
CN116860944A (zh) 对话生成方法、装置、电子设备及介质
CN115330733A (zh) 基于细粒度领域知识的疾病智能识别方法及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21942576

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE