WO2022247007A1 - Medical image grading method and apparatus, electronic device, and readable storage medium - Google Patents

Medical image grading method and apparatus, electronic device, and readable storage medium

Info

Publication number
WO2022247007A1
Authority
WO
WIPO (PCT)
Prior art keywords
medical image
result
lesion
classification
feature
Prior art date
Application number
PCT/CN2021/109482
Other languages
French (fr)
Chinese (zh)
Inventor
郭振
柳杨
李君�
吕彬
高艳
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2022247007A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic

Definitions

  • The present application relates to the fields of artificial intelligence and digital medical technology, and in particular to a medical image grading method and apparatus, an electronic device, and a readable storage medium.
  • Image recognition, as an important part of artificial intelligence, has been applied in various fields. In the medical field, for example, image recognition is used to identify medical images and judge the severity of diseases, such as grading color fundus images to determine the degree of diabetic retinopathy.
  • The inventor realized that current image grading methods rely on only a single image recognition model to grade medical images; the feature dimensions are few, resulting in poor image grading accuracy.
  • a medical image grading method provided by this application includes:
  • the feature information and the first grading result are graded and corrected by using the pre-built second grading model to obtain a target grading result.
  • the present application also provides a medical image grading device, the device comprising:
  • The feature matching module is used to obtain the medical image to be classified, and to use the feature extraction network in the pre-built lesion detection model to perform feature extraction on the medical image to be classified to obtain a feature map; perform classification recognition and result statistics on the feature map to obtain a classification result; use the lesion segmentation network in the lesion detection model to perform region segmentation and area calculation on the feature map to obtain a segmentation result; and perform feature matching on the classification result and the segmentation result to obtain feature information;
  • An image grading module configured to use a pre-built first grading model to grade the medical image to be classified to obtain a first grading result;
  • the grading correction module is configured to use the pre-built second grading model to perform grading correction on the feature information and the first grading result to obtain a target grading result.
  • the application also provides an electronic device, which includes:
  • a memory storing at least one computer program
  • a processor executing the computer program stored in the memory to implement the following steps:
  • the feature information and the first grading result are graded and corrected by using the pre-built second grading model to obtain a target grading result.
  • the present application also provides a computer-readable storage medium, at least one computer program is stored in the computer-readable storage medium, and the at least one computer program is executed by a processor in an electronic device to implement the following steps:
  • the feature information and the first grading result are graded and corrected by using the pre-built second grading model to obtain a target grading result.
  • FIG. 1 is a schematic flowchart of a medical image grading method provided by an embodiment of the present application;
  • FIG. 2 is a schematic module diagram of a medical image grading device provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of the internal structure of an electronic device implementing a method for grading medical images provided by an embodiment of the present application;
  • Artificial intelligence (AI) uses digital computers or machines controlled by digital computers to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
  • Artificial intelligence basic technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technology, operation/interaction systems, and mechatronics.
  • Artificial intelligence software technology mainly includes computer vision technology, robotics technology, biometrics technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
  • the sample image is a medical image
  • the type of the object contained in the sample image is a lesion, that is, a part of the body where a lesion occurs.
  • Medical imaging refers to images of internal tissues obtained in a non-invasive manner for medical treatment or medical research, such as images of the stomach, abdomen, heart, knees, and brain; examples include CT (computed tomography), MRI (magnetic resonance imaging), US (ultrasound), X-ray, and EEG images, as well as images generated by medical instruments with optical photographic lighting.
  • An embodiment of the present application provides a medical image classification method.
  • the executor of the medical image grading method includes but is not limited to at least one of electronic devices such as a server and a terminal that can be configured to execute the method provided by the embodiment of the present application.
  • the medical image grading method can be executed by software or hardware installed on a terminal device or a server device, and the software can be a blockchain platform.
  • the server includes, but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like.
  • The server can be an independent server, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDN), big data, and artificial intelligence platforms.
  • the medical image classification method includes:
  • the medical image to be classified in the embodiment of the present application is a color fundus image
  • the lesion detection model includes: a feature extraction network, a lesion classification network, and a lesion segmentation network.
  • the feature extraction network is used for feature extraction
  • the lesion classification network is used for lesion classification
  • the lesion segmentation network is used for lesion region segmentation.
  • The initial feature extraction network in the feature extraction network is used to perform a convolution and pooling operation on the medical image to be classified to obtain an initial feature map; the region extraction network in the feature extraction network is used to mark the region of interest in the initial feature map to obtain the feature map.
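As an illustrative sketch only (not the patent's implementation), the convolution-pooling step that produces the initial feature map can be outlined as follows; the image size, kernel values, and pooling size are placeholder assumptions:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as used in CNN layers)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling, halving spatial resolution for size=2."""
    h, w = fmap.shape
    h, w = h - h % size, w - w % size
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

image = np.random.rand(64, 64)   # stand-in for one channel of a fundus image
kernel = np.random.rand(3, 3)    # stand-in for a learned convolution filter
feature_map = max_pool(conv2d(image, kernel))
print(feature_map.shape)         # (31, 31)
```

In the actual model, the region extraction network (RPN) would then propose regions of interest on such feature maps.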
  • the initial feature extraction network is a convolutional neural network
  • the region extraction network is an RPN (Region Proposal Network, region proposal network).
  • Before using the feature extraction network in the lesion detection model to perform feature extraction on the medical image to be classified, the method also includes: acquiring a historical medical image set and performing preset labeling on it to obtain a first training image set; and using the first training image set to iteratively train a pre-built first deep learning network model to obtain the lesion detection model.
  • the historical medical image set includes a plurality of historical medical images, and the historical medical images are medical images of the same type as the image to be classified but with different contents.
  • The lesion detection model is obtained by training the first deep learning network model; therefore, the first deep learning network model has the same network structure as the lesion detection model and likewise includes a feature extraction network, a lesion classification network, and a lesion segmentation network.
  • The first training image set is used to iteratively train the pre-built first deep learning network model to obtain the lesion detection model, wherein the first deep learning network model is the Mask R-CNN model, including:
  • Step A: Use the feature extraction network in the first deep learning network model to perform convolution and pooling on each image in the first training image set, and mark the region of interest on the convolved and pooled images to obtain the historical feature maps;
  • The feature extraction network in the embodiment of the present application includes an initial feature extraction network and a region extraction network; the initial feature extraction network is a convolutional neural network, and the region extraction network is an RPN (Region Proposal Network).
  • the initial feature extraction network is used to perform convolution pooling
  • the region extraction network is used to mark the region of interest.
  • Step B: Use the lesion classification network in the first deep learning network model to perform bounding-box label prediction and classification prediction on the region of interest in the historical feature map, obtaining bounding-box predicted coordinates and classification predicted values;
  • Step C: Obtain the bounding-box real coordinates according to the lesion area marked in the historical image corresponding to the historical feature map; obtain the classification true value according to the lesion category marked in that image;
  • For example, if a region is marked as a laser spot lesion, the classification true value of the laser spot lesion category is 1.
  • Step D: Calculate a first loss value from the classification predicted value and the classification true value using a preset first loss function; calculate a second loss value from the bounding-box real coordinates and the bounding-box predicted coordinates using a preset second loss function.
  • the first loss function or the second loss function in this embodiment of the present application may be a cross-entropy loss function.
  • the lesion segmentation network described in the embodiment of the present application includes a fully connected layer and a softmax network.
  • Step E: Use the lesion segmentation network in the first deep learning network model to perform region segmentation prediction on the historical feature map, obtaining the predicted total number of pixels in each region and the predicted number of pixels at each region's edge;
  • the lesion segmentation network described in the embodiment of the present application is a fully convolutional network.
  • Step F: Obtain the real total number of pixels in each region and the real number of pixels at each region's edge according to the lesion area marked in the historical image corresponding to the historical feature map;
  • Step G: Calculate a third loss value from the predicted and real values of the total number of pixels in each region and of the number of pixels at the region edge, using a preset third loss function; sum the first loss value, the second loss value, and the third loss value to obtain the target loss value;
  • the third loss function is a cross-entropy loss function.
  • Step H: When the target loss value is greater than or equal to the preset loss threshold, update the parameters of the first deep learning network model and return to Step A; when the target loss value is less than the preset loss threshold, stop training to obtain the lesion detection model.
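Steps D through H combine three per-branch losses into one target loss and stop training once it falls below a threshold. The following sketch illustrates that bookkeeping on stand-in numbers; the patent only says "preset" loss functions, so the smooth-L1 box loss and the threshold value here are assumptions:

```python
import numpy as np

def cross_entropy(pred_probs, true_onehot, eps=1e-12):
    """Cross-entropy between predicted probabilities and a one-hot label."""
    return -np.sum(true_onehot * np.log(pred_probs + eps))

def smooth_l1(pred_box, true_box):
    """Smooth-L1 box-regression loss, a common choice for bounding boxes
    (an assumption; the patent does not name the second loss function)."""
    d = np.abs(pred_box - true_box)
    return np.sum(np.where(d < 1, 0.5 * d ** 2, d - 0.5))

# Stand-in predictions and labels for one training image
cls_pred = np.array([0.7, 0.2, 0.1]); cls_true = np.array([1.0, 0.0, 0.0])
box_pred = np.array([10.0, 12.0, 50.0, 60.0]); box_true = np.array([11.0, 12.0, 49.0, 60.0])
seg_pred = np.array([0.8, 0.2]); seg_true = np.array([1.0, 0.0])

loss1 = cross_entropy(cls_pred, cls_true)     # classification loss (Step D)
loss2 = smooth_l1(box_pred, box_true)         # bounding-box loss (Step D)
loss3 = cross_entropy(seg_pred, seg_true)     # segmentation loss (Step G)
target_loss = loss1 + loss2 + loss3           # Step G: sum of the three losses

LOSS_THRESHOLD = 1.5                          # hypothetical preset threshold
keep_training = target_loss >= LOSS_THRESHOLD # Step H: stop once below threshold
```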
  • the high-throughput feature of the blockchain is used to store the medical images to be graded in the blockchain nodes to improve data access efficiency.
  • The lesion classification network in the lesion detection model is used to mark and classify the bounding boxes of the feature map, and the numbers of bounding boxes of the same category are summed to obtain the classification result. For example, there are four bounding boxes A, B, C, and D in the feature map: box A is classified as a hemorrhage lesion, box B as a laser spot lesion, box C as a preretinal hemorrhage lesion, and box D as a hemorrhage lesion. Counting the bounding boxes of each category then gives the classification result: two hemorrhage lesion boxes (A and D), one laser spot lesion box (B), and one preretinal hemorrhage lesion box (C).
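The per-category tallying in this example can be sketched with a simple counter (the box labels and categories are taken from the example above):

```python
from collections import Counter

# Bounding boxes A-D from the example, each tagged with its predicted lesion category
detections = {
    "A": "hemorrhage lesion",
    "B": "laser spot lesion",
    "C": "preretinal hemorrhage lesion",
    "D": "hemorrhage lesion",
}

# The classification result: number of bounding boxes per lesion category
classification_result = Counter(detections.values())
print(classification_result)
# Counter({'hemorrhage lesion': 2, 'laser spot lesion': 1, 'preretinal hemorrhage lesion': 1})
```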
  • the lesion segmentation network in the lesion detection model is used to segment the feature map to obtain multiple segmentation regions.
  • The lesion segmentation network in the embodiment of the present application is a fully convolutional network. Further, since the sizes of the segmented regions differ considerably across images of different sizes, a unified comparison standard is needed for comparability: the area ratio of each segmented region to the medical image to be classified is calculated to obtain the corresponding relative area, which is unaffected by changes in the area of the medical image to be classified; all the segmented regions and the relative area corresponding to each segmented region are then collected to obtain the segmentation result.
  • For example, if there are four segmented regions A, B, C, and D in the feature map, region A consists of 10 pixels, and the medical image to be classified consists of 100 pixels, then the corresponding relative area of region A is 10%.
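The relative-area calculation above is simply a pixel-count ratio, reproduced here as a sketch matching the 10-pixel / 100-pixel example:

```python
def relative_area(region_pixels: int, image_pixels: int) -> float:
    """Area of a segmented region as a fraction of the whole image,
    making regions comparable across images of different sizes."""
    return region_pixels / image_pixels

# Example from the text: region A has 10 pixels, the image has 100 pixels
assert relative_area(10, 100) == 0.10   # i.e. 10%
```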
  • the embodiment of the present application matches and associates the classification result with the segmentation result to obtain the lesion category corresponding to each relative area in the segmentation result.
  • The classification result and the segmentation result in the embodiment of the present application are obtained from different branches of the same large model, so the positions of the bounding boxes and of the segmented regions coincide. For example, if the classification result contains a bounding box A for a hemorrhage lesion and segmented region a corresponds to bounding box A, then the lesion category of segmented region a obtained by matching is the hemorrhage lesion.
  • All the relative areas corresponding to the same lesion category in the segmentation result are summed to obtain the total area of that lesion category; the total area is then combined with the corresponding lesion category to obtain the feature information.
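The matching-and-summing step can be sketched as follows; the category/area pairs are hypothetical stand-ins for regions already matched to their bounding boxes:

```python
from collections import defaultdict

# Hypothetical matched pairs: (lesion category from the classification branch,
# relative area of the corresponding segmented region)
matched = [
    ("hemorrhage lesion", 0.04),            # region a <-> box A
    ("laser spot lesion", 0.02),            # region b <-> box B
    ("preretinal hemorrhage lesion", 0.10), # region c <-> box C
    ("hemorrhage lesion", 0.03),            # region d <-> box D
]

# Sum the relative areas of all regions sharing a lesion category
totals = defaultdict(float)
for category, area in matched:
    totals[category] += area

# Feature information: each lesion category paired with its total relative area
feature_information = [[cat, round(area, 4)] for cat, area in totals.items()]
print(feature_information)
```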
  • Before the first grading model is used to grade the medical image to be classified to obtain the first grading result, the method also includes: performing preset grading labeling on the historical medical image set to obtain a second training image set; and using the second training image set to iteratively train a pre-built second deep learning network model to obtain the first grading model.
  • the grading labels include: mild non-proliferative retinopathy, moderate non-proliferative retinopathy, severe non-proliferative retinopathy, proliferative retinopathy and normal fundus.
  • the second deep learning network model in the embodiment of the present application is a convolutional neural network model including a dense attention mechanism.
  • the second hierarchical model is a random forest network model.
  • The embodiment of the present application uses the target grading network model to classify the feature information to obtain the target grading result.
  • Before the embodiment of the present application uses the target grading network to classify the feature information, the method also includes: constructing a random forest model with the preset lesion category labels as root nodes and with the pre-built relative-area classification intervals and preset grading labels as classification conditions, to obtain the second grading model. The grading labels comprise five kinds: mild non-proliferative retinopathy, moderate non-proliferative retinopathy, severe non-proliferative retinopathy, proliferative retinopathy, and normal fundus. The lesion-area classification intervals can be set according to actual diagnostic experience, for example [0, 20%), [20%, 40%), [40%, 60%), [60%, 80%), [80%, 100%].
  • The feature information and the first grading result are input into the second grading model to obtain the target grading result. For example, if the first grading result is moderate non-proliferative retinopathy and the feature information of the lesion is [preretinal hemorrhage lesion, 10%], then after the first grading result and the feature information are input into the second grading model, the target grading result is mild non-proliferative retinopathy.
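The patent's second grading model is a trained random forest; as a toy stand-in that reproduces only this one worked example, the correction can be sketched as a single hand-written rule. The rule and its 20% threshold are illustrative assumptions, not the patent's learned model:

```python
GRADES = ["normal fundus",
          "mild non-proliferative retinopathy",
          "moderate non-proliferative retinopathy",
          "severe non-proliferative retinopathy",
          "proliferative retinopathy"]

def grading_correction(first_grade: str, feature_info: list) -> str:
    """Toy stand-in for the trained random forest: downgrade by one level when
    the finding is a small (<20% relative area) preretinal hemorrhage."""
    category, area = feature_info
    idx = GRADES.index(first_grade)
    if category == "preretinal hemorrhage lesion" and area < 0.20 and idx > 0:
        idx -= 1  # correct the first grading result downward by one grade
    return GRADES[idx]

# Example from the text
result = grading_correction("moderate non-proliferative retinopathy",
                            ["preretinal hemorrhage lesion", 0.10])
print(result)  # mild non-proliferative retinopathy
```

In practice the random forest would take the lesion category, the area interval, and the first grading result as input features rather than a fixed rule.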
  • Referring to FIG. 2, which is a functional block diagram of the medical image grading device of the present application.
  • the medical image grading apparatus 100 described in this application can be installed in an electronic device.
  • the medical image grading device may include a feature matching module 101, an image grading module 102, and a grading correction module 103.
  • The modules described in the present application may also be referred to as units, which refer to a series of computer program segments that can be executed by the processor of an electronic device, can perform a fixed function, and are stored in the memory of the electronic device.
  • each module/unit is as follows:
  • The feature matching module 101 is used to obtain the medical image to be classified, and to use the feature extraction network in the pre-built lesion detection model to perform feature extraction on the medical image to be classified to obtain a feature map; perform classification recognition and result statistics on the feature map to obtain a classification result; use the lesion segmentation network in the lesion detection model to perform region segmentation and area calculation on the feature map to obtain a segmentation result; and perform feature matching on the classification result and the segmentation result to obtain feature information;
  • the medical image to be classified in the embodiment of the present application is a color fundus image
  • the lesion detection model includes: a feature extraction network, a lesion classification network, and a lesion segmentation network.
  • the feature extraction network is used for feature extraction
  • the lesion classification network is used for lesion classification
  • the lesion segmentation network is used for lesion region segmentation.
  • the feature matching module 101 in the embodiment of the present application uses the initial feature extraction network in the feature extraction network to perform a convolution pooling operation on the medical image to be classified to obtain an initial feature map;
  • the region extraction network marks the region of interest in the initial feature map to obtain a feature map.
  • the initial feature extraction network is a convolutional neural network
  • the region extraction network is an RPN (Region Proposal Network, region proposal network).
  • Before the feature matching module 101 in the embodiment of the present application utilizes the feature extraction network in the lesion detection model to perform feature extraction on the medical image to be classified, it also: acquires a historical medical image set and performs preset labeling on it to obtain a first training image set; and iteratively trains the pre-built first deep learning network model using the first training image set to obtain the lesion detection model.
  • the historical medical image set includes a plurality of historical medical images, and the historical medical images are medical images of the same type as the image to be classified but with different contents.
  • The feature matching module 101 in the embodiment of the present application performs preset labeling on the historical medical image set to obtain the first training image set, including: marking the lesion area of each lesion in each historical medical image to obtain target areas, and marking the lesion category of each target area in each historical medical image to obtain the first training image set. Optionally, the preset lesion areas include a microaneurysm area, hemorrhage area, sclerotic area, cotton wool spot area, laser spot area, neovascular area, vitreous hemorrhage area, preretinal hemorrhage area, and fibrous membrane area; the preset lesion categories correspond one-to-one with the preset lesion areas and include microaneurysm lesions, hemorrhage lesions, sclerotic lesions, cotton wool spot lesions, laser spot lesions, neovascular lesions, vitreous hemorrhage lesions, preretinal hemorrhage lesions, and fibrous membrane lesions.
  • The lesion detection model is obtained by training the first deep learning network model; therefore, the first deep learning network model has the same network structure as the lesion detection model and likewise includes a feature extraction network, a lesion classification network, and a lesion segmentation network.
  • The feature matching module 101 uses the first training image set to iteratively train the pre-built first deep learning network model to obtain the lesion detection model, wherein the first deep learning network model is the Mask R-CNN model, including:
  • Step A: Use the feature extraction network in the first deep learning network model to perform convolution and pooling on each image in the first training image set, and mark the region of interest on the convolved and pooled images to obtain the historical feature maps;
  • The feature extraction network in the embodiment of the present application includes an initial feature extraction network and a region extraction network; the initial feature extraction network is a convolutional neural network, and the region extraction network is an RPN (Region Proposal Network).
  • the initial feature extraction network is used to perform convolution pooling
  • the region extraction network is used to mark the region of interest.
  • Step B: Use the lesion classification network in the first deep learning network model to perform bounding-box label prediction and classification prediction on the region of interest in the historical feature map, obtaining bounding-box predicted coordinates and classification predicted values;
  • Step C: Obtain the bounding-box real coordinates according to the lesion area marked in the historical image corresponding to the historical feature map; obtain the classification true value according to the lesion category marked in that image;
  • For example, if a region is marked as a laser spot lesion, the classification true value of the laser spot lesion category is 1.
  • Step D: Calculate a first loss value from the classification predicted value and the classification true value using a preset first loss function; calculate a second loss value from the bounding-box real coordinates and the bounding-box predicted coordinates using a preset second loss function.
  • the first loss function or the second loss function in this embodiment of the present application may be a cross-entropy loss function.
  • the lesion segmentation network described in the embodiment of the present application includes a fully connected layer and a softmax network.
  • Step E: Use the lesion segmentation network in the first deep learning network model to perform region segmentation prediction on the historical feature map, obtaining the predicted total number of pixels in each region and the predicted number of pixels at each region's edge;
  • the lesion segmentation network described in the embodiment of the present application is a fully convolutional network.
  • Step F: Obtain the real total number of pixels in each region and the real number of pixels at each region's edge according to the lesion area marked in the historical image corresponding to the historical feature map;
  • Step G: Calculate a third loss value from the predicted and real values of the total number of pixels in each region and of the number of pixels at the region edge, using a preset third loss function; sum the first loss value, the second loss value, and the third loss value to obtain the target loss value;
  • the third loss function is a cross-entropy loss function.
  • Step H: When the target loss value is greater than or equal to the preset loss threshold, update the parameters of the first deep learning network model and return to Step A; when the target loss value is less than the preset loss threshold, stop training to obtain the lesion detection model.
  • the high-throughput feature of the blockchain is used to store the medical images to be graded in the blockchain nodes to improve data access efficiency.
  • The feature matching module 101 in the embodiment of the present application uses the lesion classification network in the lesion detection model to mark and classify the bounding boxes of the feature map, and sums the numbers of bounding boxes of the same category to obtain the classification result. For example, there are four bounding boxes A, B, C, and D in the feature map: box A is classified as a hemorrhage lesion, box B as a laser spot lesion, box C as a preretinal hemorrhage lesion, and box D as a hemorrhage lesion. Counting the bounding boxes of each category then gives the classification result: two hemorrhage lesion boxes (A and D), one laser spot lesion box (B), and one preretinal hemorrhage lesion box (C).
  • the feature matching module 101 in the embodiment of the present application uses the lesion segmentation network in the lesion detection model to perform region segmentation on the feature map to obtain multiple segmented regions.
  • the The lesion segmentation network described above is a fully convolutional network.
  • a unified comparison standard is required to calculate the difference between each segmented region and the medical image to be classified. The corresponding relative area is obtained, and the relative area is not affected by the area change of the medical image to be classified; all the segmented regions and the relative area corresponding to each segmented region are summarized to obtain the segmented result.
  • the result of the segmentation is that there are 4 segmentation regions A, B, C, and D in the feature map, the segmentation region A consists of 10 pixels, and the medical image to be classified consists of 100 pixels, then the corresponding relative The area is 10%.
  • the feature matching module 101 in the embodiment of the present application matches and associates the classification result with the segmentation result to obtain the lesion category corresponding to each relative area in the segmentation result.
  • The classification result and the segmentation result in the embodiment of the present application are obtained from different branches of the same large model, so the positions of the bounding boxes and of the segmented regions coincide. For example, if the classification result contains a bounding box A for a hemorrhage lesion and segmented region a corresponds to bounding box A, then the lesion category of segmented region a obtained by matching is the hemorrhage lesion.
  • the image classification module 102 is configured to use a pre-built first classification model to classify the medical image to be classified to obtain a first classification result;
  • before the image classification module 102 uses the first classification model to classify the medical image to be classified and obtains the first classification result, the method further includes: marking a historical medical image set with preset grading labels to obtain a second training image set; and using the second training image set to iteratively train the pre-built second deep learning network model to obtain the first grading model.
  • the grading labels include: mild non-proliferative retinopathy, moderate non-proliferative retinopathy, severe non-proliferative retinopathy, proliferative retinopathy and normal fundus.
  • the second deep learning network model in the embodiment of the present application is a convolutional neural network model including a dense attention mechanism.
  • the grading correction module 103 is configured to use the pre-built second grading model to perform grading correction on the feature information and the first grading result to obtain a target grading result.
  • the second grading model is a random forest network model.
  • the grading correction module 103 in the embodiment of the present application uses the target grading network model to grade the feature information and obtain the target grading result.
  • before the grading correction module 103 uses the target grading network to grade the feature information, the method further includes: constructing a random forest model with the preset lesion category labels as root nodes and with the pre-built relative-area classification intervals and preset grading labels as classification conditions, to obtain the second grading model, wherein there are five grading labels: mild non-proliferative retinopathy, moderate non-proliferative retinopathy, severe non-proliferative retinopathy, proliferative retinopathy, and normal fundus.
  • the lesion-area classification intervals can be set according to actual diagnostic experience, for example [0, 20%), [20%, 40%), [40%, 60%), [60%, 80%), [80%, 100%].
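Mapping a relative area onto these intervals is a one-dimensional binning problem; a minimal sketch using the interval bounds quoted above:

```python
import bisect

# Upper bounds of [0,20%), [20%,40%), [40%,60%), [60%,80%), [80%,100%].
BOUNDS = [0.2, 0.4, 0.6, 0.8]
LABELS = ["[0%,20%)", "[20%,40%)", "[40%,60%)", "[60%,80%)", "[80%,100%]"]

def area_interval(relative_area):
    """Return the classification interval containing the relative area."""
    return LABELS[bisect.bisect_right(BOUNDS, relative_area)]

print(area_interval(0.10))  # [0%,20%)
print(area_interval(0.20))  # [20%,40%)  (lower bounds are inclusive)
```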
  • the grading correction module 103 inputs the feature information and the first grading result into the second grading model to obtain the target grading result. For example: if the first grading result is moderate non-proliferative retinopathy and the feature information is [preretinal hemorrhage lesion, 10%], then inputting the first grading result and the feature information into the second grading model yields a target grading result of mild non-proliferative retinopathy.
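The second grading model itself is a trained random forest; purely to illustrate the correction behaviour of the worked example above, it can be caricatured as a rule lookup (the rule table is hypothetical apart from the single entry taken from the text):

```python
# (lesion category, relative area, first grading result) -> corrected grade.
# Only this entry is grounded in the text's example; a real second grading
# model would be a random forest learned from labeled data.
CORRECTION_RULES = {
    ("preretinal hemorrhage lesion", 0.10,
     "moderate non-proliferative retinopathy"): "mild non-proliferative retinopathy",
}

def correct_grading(lesion_category, relative_area, first_result):
    """Return the corrected grade if a rule applies, else keep the first result."""
    key = (lesion_category, relative_area, first_result)
    return CORRECTION_RULES.get(key, first_result)

target = correct_grading("preretinal hemorrhage lesion", 0.10,
                         "moderate non-proliferative retinopathy")
print(target)  # mild non-proliferative retinopathy
```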
  • FIG. 3 is a schematic structural diagram of an electronic device implementing the medical image grading method of the present application.
  • the electronic device may include a processor 10, a memory 11, a communication bus 12 and a communication interface 13, and may also include a computer program stored in the memory 11 and executable on the processor 10, such as a medical image grading program.
  • the memory 11 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, mobile hard disk, multimedia card, card-type memory (for example, SD or DX memory), magnetic memory, magnetic disk, optical disc, etc.
  • the memory 11 may, in some embodiments, be an internal storage unit of the electronic device, such as a mobile hard disk of the electronic device.
  • the memory 11 can also be an external storage device of the electronic device, such as a plug-in mobile hard disk, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash memory card (Flash Card), etc.
  • the memory 11 may also include both an internal storage unit of the electronic device and an external storage device.
  • the memory 11 can not only be used to store application software and various data installed in the electronic device, such as codes of medical image grading programs, but also can be used to temporarily store outputted or to-be-outputted data.
  • the processor 10 may be composed of integrated circuits, for example, a single packaged integrated circuit, or multiple integrated circuits with the same or different functions, including combinations of one or more central processing units (Central Processing Unit, CPU), microprocessors, digital processing chips, graphics processors, various control chips, etc.
  • the processor 10 is the control core (Control Unit) of the electronic device; it uses various interfaces and lines to connect the components of the entire electronic device, runs or executes programs or modules stored in the memory 11 (such as the medical image grading program), and calls data stored in the memory 11 to execute the various functions of the electronic device and process data.
  • the communication bus 12 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, etc.
  • the bus can be divided into address bus, data bus, control bus and so on.
  • the communication bus 12 is configured to realize connection and communication between the memory 11, the at least one processor 10, and the like. For ease of representation, only one thick line is used in the figure, but this does not mean that there is only one bus or one type of bus.
  • Figure 3 shows only an electronic device with certain components; those skilled in the art can understand that the structure shown in Figure 3 does not constitute a limitation on the electronic device, which may include fewer or more components than shown in the figure, a combination of certain components, or a different arrangement of components.
  • the electronic device may also include a power supply (such as a battery) for supplying power to various components.
  • the power supply may be logically connected to the at least one processor 10 through a power management device, thereby realizing functions such as charge management, discharge management, and power consumption management.
  • the power supply may also include one or more DC or AC power supplies, recharging devices, power failure detection circuits, power converters or inverters, power status indicators, and any other components.
  • the electronic device may also include various sensors, a Bluetooth module, a Wi-Fi module, etc., which will not be repeated here.
  • the communication interface 13 may include a wired interface and/or a wireless interface (such as a WI-FI interface, a Bluetooth interface, etc.), which are generally used to establish a communication connection between the electronic device and other electronic devices.
  • the communication interface 13 may also include a user interface.
  • the user interface may be a display (Display) or an input unit (such as a keyboard (Keyboard)); optionally, the user interface may also be a standard wired interface or a wireless interface.
  • the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode, organic light-emitting diode) touch device, and the like.
  • the display may also be properly referred to as a display screen or a display unit, and is used for displaying information processed in the electronic device and for displaying a visualized user interface.
  • the medical image grading program stored in the memory 11 of the electronic device is a combination of multiple computer programs which, when run by the processor 10, can realize:
  • the feature information and the first grading result are graded and corrected by using the pre-built second grading model to obtain a target grading result.
  • if the integrated modules/units of the electronic device are realized in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium.
  • the computer readable medium may be non-volatile or volatile.
  • the computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), etc.
  • the embodiment of the present application can also provide a computer-readable storage medium, the readable storage medium stores a computer program, and when the computer program is executed by a processor of an electronic device, it can realize:
  • the feature information and the first grading result are graded and corrected by using the pre-built second grading model to obtain a target grading result.
  • the computer-usable storage medium may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function, etc., and the data storage area may store data created according to use, etc.
  • modules described as separate components may or may not be physically separated, and components shown as modules may or may not be physical units, that is, they may be located in one place or may be distributed over multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional module in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware, or in the form of hardware plus software function modules.
  • Blockchain, essentially a decentralized database, is a chain of data blocks associated with each other using cryptographic methods; each data block contains a batch of network transaction information, which is used to verify the validity of its information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.

Abstract

A medical image grading method and apparatus, an electronic device, and a storage medium, relating to the technical field of artificial intelligence and digital medicine. The method comprises: acquiring a medical image to be graded, and performing feature extraction on said medical image using a feature extraction network in a pre-built lesion detection model to obtain a feature map (S1), wherein said medical image can be stored in a blockchain node; performing classification, recognition, and result statistics on the feature map to obtain a classification result (S2); performing region segmentation and area calculation on the feature map to obtain a segmentation result (S3); performing feature matching on the classification result and the segmentation result to obtain feature information (S4); grading said medical image using a pre-built first grading model to obtain a first grading result (S5); and performing grading correction on the feature information and the first grading result using a pre-built second grading model to obtain a target grading result (S6). The method can improve the accuracy of medical image grading.

Description

Medical image grading method, apparatus, electronic device and readable storage medium
This application claims priority to the Chinese patent application filed with the China Patent Office on May 25, 2021, with application number CN202110570809.8 and entitled "Medical Image Grading Method, Apparatus, Electronic Device and Readable Storage Medium", the entire contents of which are incorporated by reference in this application.
Technical Field
The present application relates to the fields of artificial intelligence and digital medical technology, and in particular to a medical image grading method and apparatus, an electronic device, and a readable storage medium.
Background
With the development of artificial intelligence, image recognition, as an important component of artificial intelligence, has been applied in various fields. For example, image recognition is applied in the medical field to recognize medical images and thereby judge the severity of diseases, such as grading fundus color ultrasound images to determine the degree of diabetic retinopathy.
Technical Problem
However, the inventor realized that current image grading methods can only rely on a single image recognition model to grade medical images, and the feature dimensions are few, resulting in poor image grading accuracy.
Technical Solution
A medical image grading method provided by this application includes:
acquiring a medical image to be graded, and performing feature extraction on the medical image to be graded using a feature extraction network in a pre-built lesion detection model to obtain a feature map;
performing classification recognition and result statistics on the feature map to obtain a classification result;
performing region segmentation and area calculation on the feature map using a lesion segmentation network in the lesion detection model to obtain a segmentation result;
performing feature matching on the classification result and the segmentation result to obtain feature information;
grading the medical image to be graded using a pre-built first grading model to obtain a first grading result;
performing grading correction on the feature information and the first grading result using a pre-built second grading model to obtain a target grading result.
The present application also provides a medical image grading apparatus, the apparatus comprising:
a feature matching module, configured to acquire the medical image to be graded, perform feature extraction on the medical image to be graded using the feature extraction network in the pre-built lesion detection model to obtain a feature map, perform classification recognition and result statistics on the feature map to obtain a classification result, perform region segmentation and area calculation on the feature map using the lesion segmentation network in the lesion detection model to obtain a segmentation result, and perform feature matching on the classification result and the segmentation result to obtain feature information;
an image grading module, configured to grade the medical image to be graded using a pre-built first grading model to obtain a first grading result;
a grading correction module, configured to perform grading correction on the feature information and the first grading result using a pre-built second grading model to obtain a target grading result.
The application also provides an electronic device, the electronic device comprising:
a memory storing at least one computer program; and
a processor executing the computer program stored in the memory to implement the following steps:
acquiring a medical image to be graded, and performing feature extraction on the medical image to be graded using a feature extraction network in a pre-built lesion detection model to obtain a feature map;
performing classification recognition and result statistics on the feature map to obtain a classification result;
performing region segmentation and area calculation on the feature map using a lesion segmentation network in the lesion detection model to obtain a segmentation result;
performing feature matching on the classification result and the segmentation result to obtain feature information;
grading the medical image to be graded using a pre-built first grading model to obtain a first grading result;
performing grading correction on the feature information and the first grading result using a pre-built second grading model to obtain a target grading result.
The present application also provides a computer-readable storage medium storing at least one computer program, the at least one computer program being executed by a processor in an electronic device to implement the following steps:
acquiring a medical image to be graded, and performing feature extraction on the medical image to be graded using a feature extraction network in a pre-built lesion detection model to obtain a feature map;
performing classification recognition and result statistics on the feature map to obtain a classification result;
performing region segmentation and area calculation on the feature map using a lesion segmentation network in the lesion detection model to obtain a segmentation result;
performing feature matching on the classification result and the segmentation result to obtain feature information;
grading the medical image to be graded using a pre-built first grading model to obtain a first grading result;
performing grading correction on the feature information and the first grading result using a pre-built second grading model to obtain a target grading result.
Description of Drawings
FIG. 1 is a schematic flowchart of a medical image grading method provided by an embodiment of the present application;
FIG. 2 is a schematic module diagram of a medical image grading apparatus provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of the internal structure of an electronic device implementing a medical image grading method provided by an embodiment of the present application;
The realization of the purposes, functional features, and advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Embodiments of the Present Invention
It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit the present application.
The embodiments of the application can acquire and process relevant data based on artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technology, and application system that uses digital computers or machines controlled by digital computers to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technology, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly includes several major directions: computer vision technology, robotics, biometrics, speech processing technology, natural language processing technology, and machine learning/deep learning.
In medical application scenarios, the sample image is a medical image, and the type of object contained in the sample image is a lesion, i.e., a part of the body where a pathological change occurs. Medical imaging refers to images of internal tissues, for example the stomach, abdomen, heart, knees, or brain, obtained in a non-invasive way for medical treatment or medical research, such as CT (Computed Tomography), MRI (Magnetic Resonance Imaging), US (ultrasound), X-ray images, electroencephalograms, and images generated by medical instruments such as optical photography.
An embodiment of the present application provides a medical image grading method. The executor of the medical image grading method includes, but is not limited to, at least one of electronic devices such as a server and a terminal that can be configured to execute the method provided by the embodiment of the present application. In other words, the medical image grading method can be executed by software or hardware installed on a terminal device or a server device, and the software can be a blockchain platform. The server includes, but is not limited to, a single server, a server cluster, a cloud server, a cloud server cluster, and the like. The server may be an independent server, or may be a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDN), and big data and artificial intelligence platforms.
Referring to the schematic flowchart of the medical image grading method provided by an embodiment of the present application shown in FIG. 1, in the embodiment of the present application, the medical image grading method includes:
S1. Acquiring the medical image to be graded, and performing feature extraction on the medical image to be graded using the feature extraction network in the pre-built lesion detection model to obtain a feature map;
Optionally, the medical image to be graded in the embodiment of the present application is a fundus color ultrasound image, and the lesion detection model includes a feature extraction network, a lesion classification network, and a lesion segmentation network, where the feature extraction network is used for feature extraction, the lesion classification network is used for lesion classification, and the lesion segmentation network is used for lesion region segmentation.
In detail, in the embodiment of the present application, the initial feature extraction network in the feature extraction network is used to perform a convolution-pooling operation on the medical image to be graded to obtain an initial feature map, and the region extraction network in the feature extraction network is used to mark the regions of interest in the initial feature map to obtain the feature map.
Optionally, in the embodiment of the present application, the initial feature extraction network is a convolutional neural network, and the region extraction network is an RPN (Region Proposal Network).
Further, in the embodiment of the present application, before the feature extraction network in the lesion detection model is used to perform feature extraction on the medical image to be graded, the method further includes: acquiring a historical medical image set, and marking the historical medical image set with preset labels to obtain a first training image set; and using the first training image set to iteratively train a pre-built first deep learning network model to obtain the lesion detection model, where the historical medical image set contains multiple historical medical images, and the historical medical images are medical images of the same type as the image to be graded but with different contents.
In detail, in the embodiment of the present application, marking the historical medical image set with preset labels to obtain the first training image set includes: marking the lesions in each historical medical image of the historical medical image set with lesion regions to obtain target regions, and marking each target region in each historical medical image with a lesion category to obtain the first training image set. Optionally, the preset lesion regions include a microaneurysm region, a hemorrhage region, a hard exudate region, a cotton wool spot region, a laser spot region, a neovascularization region, a vitreous hemorrhage region, a preretinal hemorrhage region, and a fibrous membrane region; the preset lesion categories correspond one-to-one with the preset lesion regions and include microaneurysm lesions, hemorrhage lesions, hard exudate lesions, cotton wool spot lesions, laser spot lesions, neovascular lesions, vitreous hemorrhage lesions, preretinal hemorrhage lesions, and fibrous membrane lesions; for example, if the target region is a laser spot region, that region is marked as a laser spot lesion.
Further, the lesion detection model is obtained by training the first deep learning model; therefore, the first deep learning model has the same network structure as the lesion detection model and likewise includes a feature extraction network, a lesion classification network, and a lesion segmentation network.
In detail, in the embodiment of the present application, the first training image set is used to iteratively train the pre-built first deep learning network model to obtain the lesion detection model, where the first deep learning network model is a Mask R-CNN model, including:
Step A: using the feature extraction network in the first deep learning network model to perform convolution pooling on each image in the first training image set, and marking regions of interest on the convolution-pooled images to obtain historical feature maps;
Optionally, the feature extraction network in the embodiment of the present application includes an initial feature extraction network and a region extraction network, where the initial feature extraction network is a convolutional neural network and the region extraction network is an RPN (Region Proposal Network).
In detail, in the embodiment of the present application, the initial feature extraction network is used for convolution pooling, and the region extraction network is used to mark the regions of interest.
Step B: using the lesion classification network in the first deep learning network model to perform bounding box prediction and classification prediction on the regions of interest in the historical feature map, to obtain predicted bounding box coordinates and predicted classification values;
Step C: obtaining the true bounding box coordinates from the lesion regions marked on the historical image corresponding to the historical feature map, and obtaining the true classification values from the lesion categories marked on the historical image corresponding to the historical feature map;
For example: if the marked lesion category is a laser spot lesion, then the true classification value corresponding to the laser spot lesion is 1.
步骤D:根据所述分类预测值与所述分类真实值,利用预设的第一损失函数进行计算,得到第一损失值;根据所述边界框真实坐标与所述边界框预测坐标,利用预设的第二损失函数进行计算,得到第二损失函数。Step D: Calculate according to the classification prediction value and the classification real value, using a preset first loss function to obtain a first loss value; according to the real coordinates of the bounding box and the predicted coordinates of the bounding box, use the preset Calculate the second loss function set to obtain the second loss function.
可选地,本申请实施例中所述第一损失函数或所述第二损失函数可以为交叉熵损失函数。Optionally, the first loss function or the second loss function in this embodiment of the present application may be a cross-entropy loss function.
可选地,本申请实施例中所述病灶分割网络包含全连接层及softmax网络。Optionally, the lesion segmentation network described in the embodiment of the present application includes a fully connected layer and a softmax network.
Step E: use the lesion segmentation network in the first deep learning network model to perform region segmentation prediction on the historical feature map, obtaining, for each region, a predicted total pixel count and a predicted edge pixel count;
Optionally, in an embodiment of the present application, the lesion segmentation network is a fully convolutional network.
Step F: obtain the true total pixel count and the true edge pixel count of each region from the lesion regions marked in the historical medical image corresponding to the historical feature map;
Step G: compute a third loss value from each region's predicted total pixel count and predicted edge pixel count and the corresponding region's true values using a preset third loss function, and sum the first loss value, the second loss value and the third loss value to obtain a target loss value;
Optionally, in an embodiment of the present application, the third loss function is a cross-entropy loss function.
Step H: when the target loss value is greater than or equal to a preset loss threshold, update the parameters of the first deep learning network model and return to Step A above; when the target loss value falls below the preset loss threshold, stop training and take the resulting model as the lesion detection model.
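Steps D through H combine three branch losses into one target loss and compare it against a threshold. The sketch below uses a per-sample cross-entropy for the classification and mask branches and a mean-squared error as a stand-in for the bounding-box loss; all numeric values are hypothetical, and the actual Mask R-CNN losses are more involved than this:

```python
import math

def cross_entropy(probs, true_idx):
    """Cross-entropy for one sample: -log of the probability of the true class."""
    return -math.log(probs[true_idx])

# first loss: classification branch (true class index 1, e.g. laser-spot lesion)
loss1 = cross_entropy([0.1, 0.8, 0.1], 1)

# second loss: bounding-box branch, sketched here as MSE over the 4 coordinates
pred_box, true_box = [10.0, 12.0, 50.0, 52.0], [11.0, 11.0, 50.0, 50.0]
loss2 = sum((p - t) ** 2 for p, t in zip(pred_box, true_box)) / 4

# third loss: segmentation branch, per-pixel binary cross-entropy on 2 pixels
mask_pred, mask_true = [0.9, 0.2], [1, 0]
loss3 = sum(cross_entropy([1 - p, p], t) for p, t in zip(mask_pred, mask_true)) / 2

target_loss = loss1 + loss2 + loss3   # Step G: sum of the three loss values
keep_training = target_loss >= 5.0    # Step H: compare with a preset threshold
```

When `keep_training` is true, the model parameters would be updated and Step A repeated; otherwise training stops.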
In another embodiment of the present application, the high-throughput property of a blockchain is exploited by storing the medical image to be graded in a blockchain node, improving data access efficiency.
S2: perform classification recognition and result statistics on the feature map to obtain a classification result;
In detail, in an embodiment of the present application, the lesion classification network in the lesion detection model is used to mark bounding boxes on the feature map and classify them, and the numbers of bounding boxes of the same category are tallied to obtain the classification result. For example, if the feature map contains four bounding boxes A, B, C and D, where A is classified as a hemorrhage lesion, B as a laser-spot lesion, C as a preretinal hemorrhage lesion and D as a hemorrhage lesion, then tallying the bounding boxes of each category gives the classification result: two hemorrhage lesions (bounding boxes A and D), one laser-spot lesion (bounding box B) and one preretinal hemorrhage lesion (bounding box C).
S3: perform region segmentation and area calculation on the feature map to obtain a segmentation result;
In detail, in an embodiment of the present application, the lesion segmentation network in the lesion detection model is used to segment the feature map into multiple segmented regions; optionally, the lesion segmentation network is a fully convolutional network. Further, because the segmented regions of images of different sizes differ greatly in size, a unified standard is needed for comparison: the ratio of each segmented region's area to that of the medical image to be graded is calculated, giving a relative area that is unaffected by changes in the image's size. All segmented regions and their corresponding relative areas are then aggregated to obtain the segmentation result. For example, if the feature map contains four segmented regions A, B, C and D, region A consists of 10 pixels and the medical image to be graded consists of 100 pixels, then the relative area of region A is 10%.
S5: perform feature matching on the classification result and the segmentation result to obtain feature information;
In detail, in an embodiment of the present application, the classification result is matched and associated with the segmentation result to obtain the lesion category corresponding to each relative area in the segmentation result.
Specifically, the classification result and the segmentation result are produced by different branches of the same model, so each bounding box in the classification result shares its position with a segmented region. For example, if the classification result contains one hemorrhage lesion, bounding box A, and bounding box A corresponds to segmented region a, then matching yields hemorrhage lesion as the lesion category of segmented region a.
Further, in an embodiment of the present application, all relative areas corresponding to the same lesion category in the segmentation result are summed to obtain the total area of the segmented regions of that category, and this total area is combined with the corresponding lesion category to obtain a matching array. For example, if the segmented regions corresponding to the preretinal hemorrhage category are A and B, with relative areas of 10% and 20% respectively, then the total segmented area for the preretinal hemorrhage category is 10% + 20% = 30%, and the corresponding matching array is [preretinal hemorrhage lesion, 30%]. Further, in an embodiment of the present application, all matching arrays are randomly combined to obtain the feature information.
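The per-category summation that produces the matching arrays can be sketched as follows; the region-to-category mapping and the areas follow the example in the text, and the helper name is our own:

```python
def build_matching_arrays(region_category, region_area):
    """Sum relative areas per lesion category and pair each total with its label."""
    totals = {}
    for region, area in region_area.items():
        category = region_category[region]  # region shares its position with a box
        totals[category] = totals.get(category, 0.0) + area
    return [[category, total] for category, total in totals.items()]

feature_info = build_matching_arrays(
    {"A": "preretinal hemorrhage", "B": "preretinal hemorrhage"},
    {"A": 0.10, "B": 0.20},
)   # one matching array per category, e.g. ["preretinal hemorrhage", ~0.30]
```

Each inner list is one matching array in the sense of the text; the full list is the feature information passed on to step S6.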
S4: grade the medical image to be graded using a pre-built first grading model to obtain a first grading result;
In detail, in an embodiment of the present application, before the first grading model is used to grade the medical image to be graded, the method further includes: marking the historical medical image set with preset grading labels to obtain a second training image set, and iteratively training a pre-built second deep learning network model with the second training image set to obtain the first grading model. Optionally, the grading labels include mild non-proliferative retinopathy, moderate non-proliferative retinopathy, severe non-proliferative retinopathy, proliferative retinopathy and normal fundus.
Optionally, in an embodiment of the present application, the second deep learning network model is a convolutional neural network model containing a dense attention mechanism.
S6: grade the feature information using a second grading model to obtain a target grading result.
Optionally, in an embodiment of the present application, the second grading model is a random forest network model.
Further, to make the grading result more accurate, the first grading result needs to be corrected; therefore, in an embodiment of the present application, the second grading model is used to grade the feature information to obtain the target grading result.
In detail, before the feature information is graded by the second grading model, the method further includes: building a random forest model with the preset lesion category labels as root nodes and the pre-built relative-area classification intervals and preset grading labels as classification conditions, thereby obtaining the second grading model. The grading labels comprise five classes: mild non-proliferative retinopathy, moderate non-proliferative retinopathy, severe non-proliferative retinopathy, proliferative retinopathy and normal fundus. The lesion-area classification intervals may be set according to practical diagnostic experience, for example [0, 20%), [20%, 40%), [40%, 60%), [60%, 80%), [80%, 100%].
Further, in an embodiment of the present application, the feature information and the first grading result are input into the second grading model to obtain the target grading result. For example, if the first grading result is moderate non-proliferative retinopathy and the feature information is [preretinal hemorrhage lesion, 10%], then inputting the first grading result and the feature information into the second grading model yields a target grading result of mild non-proliferative retinopathy.
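One way to picture the second grading model's inputs is to bin each lesion's relative area into the intervals listed above and look up (category, area bin, first grade). The rule table below is a deliberately tiny, hypothetical stand-in for the trained random forest, encoding only the example from the text:

```python
AREA_BINS = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]

def area_bin(relative_area):
    """Index of the interval [0,20%), [20%,40%), ... containing the area."""
    for i in range(len(AREA_BINS) - 1):
        if AREA_BINS[i] <= relative_area < AREA_BINS[i + 1]:
            return i
    return len(AREA_BINS) - 2  # relative_area == 1.0 falls in the last interval

# hypothetical correction rules: (category, area bin, first grade) -> target grade
RULES = {
    ("preretinal hemorrhage", 0, "moderate NPDR"): "mild NPDR",
}

def correct_grade(category, relative_area, first_grade):
    key = (category, area_bin(relative_area), first_grade)
    return RULES.get(key, first_grade)  # fall back to the first grading result

target = correct_grade("preretinal hemorrhage", 0.10, "moderate NPDR")
```

A real second grading model would learn these decisions from labeled data rather than use a hand-written table; the sketch only illustrates the shape of the correction step.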
As shown in FIG. 2, FIG. 2 is a functional block diagram of the medical image grading apparatus of the present application.
The medical image grading apparatus 100 of the present application may be installed in an electronic device. Depending on the functions implemented, the medical image grading apparatus may include a feature matching module 101, an image grading module 102 and a grading correction module 103. The modules of the present invention, which may also be called units, are series of computer program segments that can be executed by the processor of the electronic device, can perform fixed functions, and are stored in the memory of the electronic device.
In this embodiment, the functions of the modules/units are as follows:
The feature matching module 101 is configured to: acquire a medical image to be graded; perform feature extraction on the medical image to be graded using the feature extraction network in a pre-built lesion detection model to obtain a feature map; perform classification recognition and result statistics on the feature map to obtain a classification result; perform region segmentation and area calculation on the feature map using the lesion segmentation network in the lesion detection model to obtain a segmentation result; and perform feature matching on the classification result and the segmentation result to obtain feature information.
Optionally, in an embodiment of the present application, the medical image to be graded is a color fundus image, and the lesion detection model includes a feature extraction network, a lesion classification network and a lesion segmentation network, used for feature extraction, lesion classification and lesion region segmentation respectively.
In detail, in an embodiment of the present application, the feature matching module 101 performs a convolution-and-pooling operation on the medical image to be graded using the initial feature extraction network within the feature extraction network to obtain an initial feature map, and marks regions of interest in the initial feature map using the region extraction network within the feature extraction network to obtain the feature map.
Optionally, in an embodiment of the present application, the initial feature extraction network is a convolutional neural network, and the region extraction network is an RPN (Region Proposal Network).
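The convolution-and-pooling operation performed by the initial feature extraction network can be illustrated on a toy 4x4 "image"; a real convolutional neural network stacks many such filters and learns the kernel values, whereas the kernel here is fixed for illustration:

```python
def conv2d_valid(img, kernel):
    """'Valid' 2-D convolution (cross-correlation, as used in CNN layers)."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(img[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

def max_pool(fmap, size=2):
    """Non-overlapping max pooling over size x size windows."""
    return [[max(fmap[i + di][j + dj] for di in range(size) for dj in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

image = [[1, 0, 2, 1],
         [0, 1, 3, 0],
         [2, 1, 0, 1],
         [1, 0, 1, 2]]
feature_map = max_pool(conv2d_valid(image, [[1, 0], [0, 1]]))
```

The 4x4 input shrinks to a 3x3 response under the 2x2 kernel and then to a single pooled value, which is the "initial feature map" role in miniature.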
Further, before the feature matching module 101 performs feature extraction on the medical image to be graded using the feature extraction network in the lesion detection model, the following is also performed: acquiring a historical medical image set and marking it with preset labels to obtain a first training image set; and iteratively training a pre-built first deep learning network model with the first training image set to obtain the lesion detection model. The historical medical image set contains multiple historical medical images, which are medical images of the same type as, but with different content from, the image to be graded.
In detail, the feature matching module 101 marks the historical medical image set with preset labels to obtain the first training image set as follows: the lesions in each historical medical image are marked with lesion regions to obtain target regions, and each target region in each historical medical image is marked with a lesion category to obtain the first training image set. Optionally, the preset lesion regions include microaneurysm regions, hemorrhage regions, hard exudate regions, cotton-wool spot regions, laser-spot regions, neovascularization regions, vitreous hemorrhage regions, preretinal hemorrhage regions and fibrous membrane regions; the preset lesion categories correspond one-to-one with the preset lesion regions and include microaneurysm lesions, hemorrhage lesions, hard exudate lesions, cotton-wool spot lesions, laser-spot lesions, neovascularization lesions, vitreous hemorrhage lesions, preretinal hemorrhage lesions and fibrous membrane lesions; for example, if a target region is a laser-spot region, that region is marked as a laser-spot lesion.
Further, since the lesion detection model is obtained by training the first deep learning network model, the first deep learning network model has the same network structure as the lesion detection model and thus also includes a feature extraction network, a lesion classification network and a lesion segmentation network.
In detail, in an embodiment of the present application, the feature matching module 101 iteratively trains the pre-built first deep learning network model, which is a Mask R-CNN model, with the first training image set to obtain the lesion detection model as follows:
Step A: use the feature extraction network in the first deep learning network model to perform convolution and pooling on each image in the first training image set, and mark regions of interest on the convolved and pooled images to obtain historical feature maps;
Optionally, in an embodiment of the present application, the feature extraction network includes an initial feature extraction network and a region extraction network, where the initial feature extraction network is a convolutional neural network and the region extraction network is an RPN (Region Proposal Network).
In detail, the initial feature extraction network performs the convolution and pooling, and the region extraction network performs the region-of-interest marking.
Step B: use the lesion classification network in the first deep learning network model to perform bounding-box prediction and classification prediction on the regions of interest in the historical feature map, obtaining predicted bounding-box coordinates and predicted classification values;
Step C: obtain the true bounding-box coordinates from the lesion regions marked in the historical medical image corresponding to the historical feature map, and obtain the true classification values from the lesion categories marked in that image;
For example, if the marked lesion category is a laser-spot lesion, the true classification value of the corresponding laser-spot lesion is 1.
Step D: compute a first loss value from the predicted classification values and the true classification values using a preset first loss function, and compute a second loss value from the true bounding-box coordinates and the predicted bounding-box coordinates using a preset second loss function.
Optionally, in an embodiment of the present application, the first loss function or the second loss function may be a cross-entropy loss function.
Optionally, in an embodiment of the present application, the lesion classification network includes a fully connected layer and a softmax layer.
Step E: use the lesion segmentation network in the first deep learning network model to perform region segmentation prediction on the historical feature map, obtaining, for each region, a predicted total pixel count and a predicted edge pixel count;
Optionally, in an embodiment of the present application, the lesion segmentation network is a fully convolutional network.
Step F: obtain the true total pixel count and the true edge pixel count of each region from the lesion regions marked in the historical medical image corresponding to the historical feature map;
Step G: compute a third loss value from each region's predicted total pixel count and predicted edge pixel count and the corresponding region's true values using a preset third loss function, and sum the first loss value, the second loss value and the third loss value to obtain a target loss value;
Optionally, in an embodiment of the present application, the third loss function is a cross-entropy loss function.
Step H: when the target loss value is greater than or equal to a preset loss threshold, update the parameters of the first deep learning network model and return to Step A above; when the target loss value falls below the preset loss threshold, stop training and take the resulting model as the lesion detection model.
In another embodiment of the present application, the high-throughput property of a blockchain is exploited by storing the medical image to be graded in a blockchain node, improving data access efficiency.
In detail, in an embodiment of the present application, the feature matching module 101 uses the lesion classification network in the lesion detection model to mark bounding boxes on the feature map and classify them, and tallies the numbers of bounding boxes of the same category to obtain the classification result. For example, if the feature map contains four bounding boxes A, B, C and D, where A is classified as a hemorrhage lesion, B as a laser-spot lesion, C as a preretinal hemorrhage lesion and D as a hemorrhage lesion, then tallying the bounding boxes of each category gives the classification result: two hemorrhage lesions (bounding boxes A and D), one laser-spot lesion (bounding box B) and one preretinal hemorrhage lesion (bounding box C).
In detail, in an embodiment of the present application, the feature matching module 101 uses the lesion segmentation network in the lesion detection model to segment the feature map into multiple segmented regions; optionally, the lesion segmentation network is a fully convolutional network. Further, because the segmented regions of images of different sizes differ greatly in size, a unified standard is needed for comparison: the ratio of each segmented region's area to that of the medical image to be graded is calculated, giving a relative area that is unaffected by changes in the image's size. All segmented regions and their corresponding relative areas are then aggregated to obtain the segmentation result. For example, if the feature map contains four segmented regions A, B, C and D, region A consists of 10 pixels and the medical image to be graded consists of 100 pixels, then the relative area of region A is 10%.
In detail, the feature matching module 101 matches and associates the classification result with the segmentation result to obtain the lesion category corresponding to each relative area in the segmentation result.
Specifically, the classification result and the segmentation result are produced by different branches of the same model, so each bounding box in the classification result shares its position with a segmented region. For example, if the classification result contains one hemorrhage lesion, bounding box A, and bounding box A corresponds to segmented region a, then matching yields hemorrhage lesion as the lesion category of segmented region a.
Further, the feature matching module 101 sums all relative areas corresponding to the same lesion category in the segmentation result to obtain the total area of the segmented regions of that category, and combines this total area with the corresponding lesion category to obtain a matching array. For example, if the segmented regions corresponding to the preretinal hemorrhage category are A and B, with relative areas of 10% and 20% respectively, then the total segmented area for the preretinal hemorrhage category is 10% + 20% = 30%, and the corresponding matching array is [preretinal hemorrhage lesion, 30%]. Further, all matching arrays are randomly combined to obtain the feature information.
The image grading module 102 is configured to grade the medical image to be graded using a pre-built first grading model to obtain a first grading result.
In detail, before the image grading module 102 uses the first grading model to grade the medical image to be graded, the following is also performed: marking the historical medical image set with preset grading labels to obtain a second training image set, and iteratively training a pre-built second deep learning network model with the second training image set to obtain the first grading model. Optionally, the grading labels include mild non-proliferative retinopathy, moderate non-proliferative retinopathy, severe non-proliferative retinopathy, proliferative retinopathy and normal fundus.
Optionally, in an embodiment of the present application, the second deep learning network model is a convolutional neural network model containing a dense attention mechanism.
The grading correction module 103 is configured to perform grading correction on the feature information and the first grading result using a pre-built second grading model to obtain a target grading result.
Optionally, in an embodiment of the present application, the second grading model is a random forest network model.
Further, to make the grading result more accurate, the first grading result needs to be corrected; therefore, the grading correction module 103 uses the second grading model to grade the feature information to obtain the target grading result.
In detail, before the grading correction module 103 grades the feature information with the second grading model, the following is also performed: building a random forest model with the preset lesion category labels as root nodes and the pre-built relative-area classification intervals and preset grading labels as classification conditions, thereby obtaining the second grading model. The grading labels comprise five classes: mild non-proliferative retinopathy, moderate non-proliferative retinopathy, severe non-proliferative retinopathy, proliferative retinopathy and normal fundus. The lesion-area classification intervals may be set according to practical diagnostic experience, for example [0, 20%), [20%, 40%), [40%, 60%), [60%, 80%), [80%, 100%].
Further, in an embodiment of the present application, the grading correction module 103 inputs the feature information and the first grading result into the second grading model to obtain the target grading result. For example, if the first grading result is moderate non-proliferative retinopathy and the feature information is [preretinal hemorrhage lesion, 10%], then inputting the first grading result and the feature information into the second grading model yields a target grading result of mild non-proliferative retinopathy.
As shown in FIG. 3, FIG. 3 is a schematic structural diagram of an electronic device implementing the medical image grading method of the present application.
The electronic device may include a processor 10, a memory 11, a communication bus 12 and a communication interface 13, and may further include a computer program, such as a medical image grading program, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, including flash memory, removable hard disks, multimedia cards, card-type memories (e.g. SD or DX memories), magnetic memories, magnetic disks, optical discs, etc. In some embodiments the memory 11 may be an internal storage unit of the electronic device, such as a removable hard disk of the electronic device. In other embodiments the memory 11 may be an external storage device of the electronic device, such as a plug-in removable hard disk, Smart Media Card (SMC), Secure Digital (SD) card or flash card equipped on the electronic device. Further, the memory 11 may include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only to store application software installed on the electronic device and various kinds of data, such as the code of the medical image grading program, but also to temporarily store data that has been or will be output.
所述处理器10在一些实施例中可以由集成电路组成,例如可以由单个封装的集成电路所组成,也可以是由多个相同功能或不同功能封装的集成电路所组成,包括一个或者多个中央处理器(Central Processing unit,CPU)、微处理器、数字处理芯片、图形处理器及各种控制芯片的组合等。所述处理器10是所述电子设备的控制核心(Control Unit),利用各种接口和线路连接整个电子设备的各个部件,通过运行或执行存储在所述存储器11内的程序或者模块(例如医学图像分级程序等),以及调用存储在所述存储器11内的数据,以执行电子设备的各种功能和处理数据。In some embodiments, the processor 10 may be composed of integrated circuits, for example, may be composed of a single packaged integrated circuit, or may be composed of multiple integrated circuits with the same function or different functions, including one or more Combination of central processing unit (Central Processing unit, CPU), microprocessor, digital processing chip, graphics processor and various control chips, etc. The processor 10 is the control core (Control Unit) of the electronic device, and uses various interfaces and lines to connect various components of the entire electronic device, and runs or executes programs or modules stored in the memory 11 (such as medical image grading program, etc.), and call the data stored in the memory 11 to execute various functions of the electronic device and process data.
所述通信总线12可以是外设部件互连标准(perIPheral component interconnect,简称PCI)总线或扩展工业标准结构(extended industry standard architecture,简称EISA)总线等。该总线可以分为地址总线、数据总线、控制总线等。所述通信总线12总线被设置为实现所述存储器11以及至少一个处理器10等之间的连接通信。为便于表示,图中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。The communication bus 12 can be a peripheral component interconnection standard (perIPheral component interconnect (PCI for short) bus or extended industry standard structure (extended industry standard architecture, referred to as EISA) bus and so on. The bus can be divided into address bus, data bus, control bus and so on. The communication bus 12 is configured to realize connection and communication between the memory 11 and at least one processor 10 and the like. For ease of representation, only one thick line is used in the figure, but it does not mean that there is only one bus or one type of bus.
图3仅示出了具有部件的电子设备,本领域技术人员可以理解的是,图3示出的结构并不构成对所述电子设备的限定,可以包括比图示更少或者更多的部件,或者组合某些部件,或者不同的部件布置。Figure 3 only shows an electronic device with components, and those skilled in the art can understand that the structure shown in Figure 3 does not constitute a limitation to the electronic device, and may include fewer or more components than shown in the figure , or combinations of certain components, or different arrangements of components.
例如,尽管未示出,所述电子设备还可以包括给各个部件供电的电源(比如电池),优选地,电源可以通过电源管理装置与所述至少一个处理器10逻辑相连,从而通过电源管理装置实现充电管理、放电管理、以及功耗管理等功能。电源还可以包括一个或一个以上的直流或交流电源、再充电装置、电源故障检测电路、电源转换器或者逆变器、电源状态指示器等任意组件。所述电子设备还可以包括多种传感器、蓝牙模块、Wi-Fi模块等,在此不再赘述。For example, although not shown, the electronic device may also include a power supply (such as a battery) for supplying power to various components. Preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that Realize functions such as charge management, discharge management, and power consumption management. The power supply may also include one or more DC or AC power supplies, recharging devices, power failure detection circuits, power converters or inverters, power status indicators and other arbitrary components. The electronic device may also include various sensors, a Bluetooth module, a Wi-Fi module, etc., which will not be repeated here.
可选地,所述通信接口13可以包括有线接口和/或无线接口(如WI-FI接口、蓝牙接口等),通常用于在该电子设备与其他电子设备之间建立通信连接。Optionally, the communication interface 13 may include a wired interface and/or a wireless interface (such as a WI-FI interface, a Bluetooth interface, etc.), which are generally used to establish a communication connection between the electronic device and other electronic devices.
可选地,所述通信接口13还可以包括用户接口,用户接口可以是显示器(Display)、输入单元(比如键盘(Keyboard)),可选地,用户接口还可以是标准的有线接口、无线接口。可选地,在一些实施例中,显示器可以是LED显示器、液晶显示器、触控式液晶显示器以及OLED(Organic Light-Emitting Diode,有机发光二极管)触摸器等。其中,显示器也可以适当的称为显示屏或显示单元,用于显示在电子设备中处理的信息以及用于显示可视化的用户界面。Optionally, the communication interface 13 may also include a user interface, the user interface may be a display (Display), an input unit (such as a keyboard (Keyboard)), optionally, the user interface may also be a standard wired interface, a wireless interface . Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode, organic light-emitting diode) touch device, and the like. Wherein, the display may also be properly referred to as a display screen or a display unit, and is used for displaying information processed in the electronic device and for displaying a visualized user interface.
应该了解,所述实施例仅为说明之用,在专利申请范围上并不受此结构的限制。It should be understood that the embodiments are only for illustration, and are not limited by the structure in terms of the scope of the patent application.
The medical image grading program stored in the memory 11 of the electronic device is a combination of multiple computer programs; when run on the processor 10, it can implement the following:
acquiring a medical image to be graded, and performing feature extraction on the medical image to be graded by using a feature extraction network in a pre-built lesion detection model to obtain a feature map;
performing classification recognition and result statistics on the feature map to obtain a classification result;
performing region segmentation and area calculation on the feature map by using a lesion segmentation network in the lesion detection model to obtain a segmentation result;
performing feature matching on the classification result and the segmentation result to obtain feature information;
grading the medical image to be graded by using a pre-built first grading model to obtain a first grading result;
performing grading correction on the feature information and the first grading result by using a pre-built second grading model to obtain a target grading result.
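Taken together, the six steps above form a single pipeline. The sketch below illustrates only the data flow between the steps; every model is passed in as a stub callable (an assumption for illustration), since the text does not fix concrete network implementations.

```python
# Illustrative data flow for the six grading steps. All six callables are
# hypothetical stand-ins for the pre-built networks described in the text.
from typing import Any, Callable

def grade_medical_image(
    image: Any,
    extract_features: Callable,  # feature extraction network -> feature map
    classify: Callable,          # feature map -> classification result
    segment: Callable,           # lesion segmentation network -> segmentation result
    match: Callable,             # (classification, segmentation) -> feature information
    first_grader: Callable,      # first grading model: image -> first grading result
    second_grader: Callable,     # second grading model: (feature info, first result) -> target
) -> Any:
    feature_map = extract_features(image)               # step 1
    classification = classify(feature_map)              # step 2
    segmentation = segment(feature_map)                 # step 3
    feature_info = match(classification, segmentation)  # step 4
    first_result = first_grader(image)                  # step 5
    return second_grader(feature_info, first_result)    # step 6
```

Note that both the classification and segmentation branches consume the same feature map, while the first grading model works directly on the input image; only the final correction step combines the two paths.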
Specifically, for the specific implementation of the above computer program by the processor 10, reference may be made to the description of the relevant steps in the embodiment corresponding to FIG. 1, which is not repeated here.
Further, if the integrated modules/units of the electronic device are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. The computer-readable medium may be non-volatile or volatile, and may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, or a read-only memory (ROM).
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor of an electronic device, can implement the following:
acquiring a medical image to be graded, and performing feature extraction on the medical image to be graded by using a feature extraction network in a pre-built lesion detection model to obtain a feature map;
performing classification recognition and result statistics on the feature map to obtain a classification result;
performing region segmentation and area calculation on the feature map by using a lesion segmentation network in the lesion detection model to obtain a segmentation result;
performing feature matching on the classification result and the segmentation result to obtain feature information;
grading the medical image to be graded by using a pre-built first grading model to obtain a first grading result;
performing grading correction on the feature information and the first grading result by using a pre-built second grading model to obtain a target grading result.
Further, the computer-usable storage medium may mainly include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function, and the like; the data storage area may store data created according to the use of blockchain nodes, and the like.
In the several embodiments provided in this application, it should be understood that the disclosed devices, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the modules is only a logical functional division, and there may be other division methods in actual implementation.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit. The above integrated units may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
It will be apparent to those skilled in the art that the present application is not limited to the details of the exemplary embodiments described above, and that the present application can be implemented in other specific forms without departing from the spirit or essential characteristics of the present application.
Therefore, the embodiments should be regarded as exemplary and non-restrictive in every respect, and the scope of the present application is defined by the appended claims rather than by the foregoing description; all changes that fall within the meaning and range of equivalents of the claims are therefore intended to be embraced in the present application. Any reference sign in a claim should not be construed as limiting the claim concerned.
The blockchain referred to in this application is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database, a chain of data blocks generated in association with one another using cryptographic methods; each data block contains a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block. A blockchain may include an underlying blockchain platform, a platform product service layer, an application service layer, and so on.
In addition, it is clear that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or devices stated in the system claims may also be implemented by one unit or device through software or hardware. Terms such as "second" are used to denote names and do not indicate any particular order.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present application. Although the present application has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that the technical solutions of the present application can be modified or equivalently replaced without departing from the spirit and scope of the technical solutions of the present application.

Claims (20)

  1. A medical image grading method, wherein the method comprises:
    acquiring a medical image to be graded, and performing feature extraction on the medical image to be graded by using a feature extraction network in a pre-built lesion detection model to obtain a feature map;
    performing classification recognition and result statistics on the feature map to obtain a classification result;
    performing region segmentation and area calculation on the feature map by using a lesion segmentation network in the lesion detection model to obtain a segmentation result;
    performing feature matching on the classification result and the segmentation result to obtain feature information;
    grading the medical image to be graded by using a pre-built first grading model to obtain a first grading result;
    performing grading correction on the feature information and the first grading result by using a pre-built second grading model to obtain a target grading result.
  2. The medical image grading method according to claim 1, wherein the performing feature extraction on the medical image to be graded by using a feature extraction network in a pre-built lesion detection model to obtain a feature map comprises:
    performing a convolution-pooling operation on the medical image to be graded to obtain an initial feature map;
    marking a region of interest in the initial feature map to obtain the feature map.
  3. The medical image grading method according to claim 1, wherein before the performing feature extraction on the medical image to be graded by using the feature extraction network in the pre-built lesion detection model, the method further comprises:
    acquiring a historical medical image set, and labeling the historical medical image set to obtain a first training image set;
    iteratively training a pre-built first deep learning network model by using the first training image set to obtain the lesion detection model.
  4. The medical image grading method according to claim 3, wherein the labeling the historical medical image set comprises:
    dividing the lesions in each historical medical image in the historical medical image set into lesion regions to obtain target regions;
    performing lesion category labeling on each of the target regions in each of the historical medical images by using the preset lesion category labels.
  5. The medical image grading method according to claim 3 or 4, wherein before the grading the medical image to be graded by using the pre-built first grading model to obtain the first grading result, the method further comprises:
    labeling the historical medical image set with preset grading labels to obtain a second training image set;
    iteratively training a pre-built second deep learning network model by using the second training image set to obtain the first grading model.
  6. The medical image grading method according to claim 1, wherein the performing region segmentation and area calculation on the feature map by using the lesion segmentation network in the lesion detection model to obtain the segmentation result comprises:
    performing region segmentation on the feature map to obtain multiple segmented regions;
    calculating the area ratio of each segmented region to the medical image to be graded to obtain the relative area corresponding to the segmented region;
    summarizing all the segmented regions and the relative area corresponding to each segmented region to obtain the segmentation result.
  7. The medical image grading method according to claim 6, wherein the performing feature matching on the classification result and the segmentation result to obtain the feature information comprises:
    matching and associating the classification result with the segmentation result to obtain the lesion category corresponding to each relative area in the segmentation result;
    summing all the relative areas corresponding to the same lesion category in the segmentation result to obtain a corresponding total segmented-region area;
    combining the total segmented-region area with the corresponding lesion category to obtain a matching array;
    randomly combining all the matching arrays to obtain the feature information.
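The area computation and feature matching recited in claims 6 and 7 can be illustrated with a short sketch. Only the operations named in the claims come from the text (the per-region area ratio, the per-category summation, and the pairing of each lesion category with its total relative area); the data structures, parameter names, and example values are assumptions.

```python
# Illustrative sketch of claims 6-7: relative-area computation and feature
# matching. Region areas are hypothetical pixel counts.

def build_feature_info(region_areas, image_area, categories):
    """region_areas: pixel area of each segmented region, aligned with
    `categories`, the lesion class associated with each region."""
    totals = {}
    for area, cat in zip(region_areas, categories):
        relative = area / image_area  # area ratio vs. the whole image (claim 6)
        totals[cat] = totals.get(cat, 0.0) + relative  # sum per lesion class (claim 7)
    # one "matching array" [lesion category, total relative area] per class
    return [[cat, total] for cat, total in totals.items()]

print(build_feature_info([50, 30, 20], 1000, ["hemorrhage", "exudate", "hemorrhage"]))
```

With the example values, the two hemorrhage regions (5% and 2% of the image) are merged into a single [hemorrhage, 7%] matching array alongside [exudate, 3%], which matches the [lesion category, relative area] form of the feature information used in the description's grading-correction example.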
  8. A medical image grading apparatus, comprising:
    a feature matching module, configured to acquire a medical image to be graded; perform feature extraction on the medical image to be graded by using a feature extraction network in a pre-built lesion detection model to obtain a feature map; perform classification recognition and result statistics on the feature map to obtain a classification result; perform region segmentation and area calculation on the feature map by using a lesion segmentation network in the lesion detection model to obtain a segmentation result; and perform feature matching on the classification result and the segmentation result to obtain feature information;
    an image grading module, configured to grade the medical image to be graded by using a pre-built first grading model to obtain a first grading result;
    a grading correction module, configured to perform grading correction on the feature information and the first grading result by using a pre-built second grading model to obtain a target grading result.
  9. An electronic device, wherein the electronic device comprises:
    at least one processor; and
    a memory communicatively connected to the at least one processor, wherein
    the memory stores a computer program executable by the at least one processor, and the computer program is executed by the at least one processor to enable the at least one processor to perform the following steps:
    acquiring a medical image to be graded, and performing feature extraction on the medical image to be graded by using a feature extraction network in a pre-built lesion detection model to obtain a feature map;
    performing classification recognition and result statistics on the feature map to obtain a classification result;
    performing region segmentation and area calculation on the feature map by using a lesion segmentation network in the lesion detection model to obtain a segmentation result;
    performing feature matching on the classification result and the segmentation result to obtain feature information;
    grading the medical image to be graded by using a pre-built first grading model to obtain a first grading result;
    performing grading correction on the feature information and the first grading result by using a pre-built second grading model to obtain a target grading result.
  10. The electronic device according to claim 9, wherein the performing feature extraction on the medical image to be graded by using a feature extraction network in a pre-built lesion detection model to obtain a feature map comprises:
    performing a convolution-pooling operation on the medical image to be graded to obtain an initial feature map;
    marking a region of interest in the initial feature map to obtain the feature map.
  11. The electronic device according to claim 10, wherein before the performing feature extraction on the medical image to be graded by using the feature extraction network in the pre-built lesion detection model, the method further comprises:
    acquiring a historical medical image set, and labeling the historical medical image set to obtain a first training image set;
    iteratively training a pre-built first deep learning network model by using the first training image set to obtain the lesion detection model.
  12. The electronic device according to claim 11, wherein the labeling the historical medical image set comprises:
    dividing the lesions in each historical medical image in the historical medical image set into lesion regions to obtain target regions;
    performing lesion category labeling on each of the target regions in each of the historical medical images by using the preset lesion category labels.
  13. The electronic device according to claim 11 or 12, wherein before the grading the medical image to be graded by using the pre-built first grading model to obtain the first grading result, the method further comprises:
    labeling the historical medical image set with preset grading labels to obtain a second training image set;
    iteratively training a pre-built second deep learning network model by using the second training image set to obtain the first grading model.
  14. The electronic device according to claim 9, wherein the performing region segmentation and area calculation on the feature map by using the lesion segmentation network in the lesion detection model to obtain the segmentation result comprises:
    performing region segmentation on the feature map to obtain multiple segmented regions;
    calculating the area ratio of each segmented region to the medical image to be graded to obtain the relative area corresponding to the segmented region;
    summarizing all the segmented regions and the relative area corresponding to each segmented region to obtain the segmentation result.
  15. The electronic device according to claim 14, wherein the performing feature matching on the classification result and the segmentation result to obtain the feature information comprises:
    matching and associating the classification result with the segmentation result to obtain the lesion category corresponding to each relative area in the segmentation result;
    summing all the relative areas corresponding to the same lesion category in the segmentation result to obtain a corresponding total segmented-region area;
    combining the total segmented-region area with the corresponding lesion category to obtain a matching array;
    randomly combining all the matching arrays to obtain the feature information.
  16. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the following steps:
    acquiring a medical image to be graded, and performing feature extraction on the medical image to be graded by using a feature extraction network in a pre-built lesion detection model to obtain a feature map;
    performing classification recognition and result statistics on the feature map to obtain a classification result;
    performing region segmentation and area calculation on the feature map by using a lesion segmentation network in the lesion detection model to obtain a segmentation result;
    performing feature matching on the classification result and the segmentation result to obtain feature information;
    grading the medical image to be graded by using a pre-built first grading model to obtain a first grading result;
    performing grading correction on the feature information and the first grading result by using a pre-built second grading model to obtain a target grading result.
  17. The computer-readable storage medium according to claim 16, wherein the performing feature extraction on the medical image to be graded by using a feature extraction network in a pre-built lesion detection model to obtain a feature map comprises:
    performing a convolution-pooling operation on the medical image to be graded to obtain an initial feature map;
    marking a region of interest in the initial feature map to obtain the feature map.
  18. The computer-readable storage medium according to claim 16, wherein before the performing feature extraction on the medical image to be graded by using the feature extraction network in the pre-built lesion detection model, the method further comprises:
    acquiring a historical medical image set, and labeling the historical medical image set to obtain a first training image set;
    iteratively training a pre-built first deep learning network model by using the first training image set to obtain the lesion detection model.
  19. The computer-readable storage medium according to claim 18, wherein the labeling the historical medical image set comprises:
    dividing the lesions in each historical medical image in the historical medical image set into lesion regions to obtain target regions;
    performing lesion category labeling on each of the target regions in each of the historical medical images by using the preset lesion category labels.
  20. The computer-readable storage medium according to claim 18 or 19, wherein before the grading the medical image to be graded by using the pre-built first grading model to obtain the first grading result, the method further comprises:
    labeling the historical medical image set with preset grading labels to obtain a second training image set;
    iteratively training a pre-built second deep learning network model by using the second training image set to obtain the first grading model.
PCT/CN2021/109482 2021-05-25 2021-07-30 Medical image grading method and apparatus, electronic device, and readable storage medium WO2022247007A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110570809.8A CN113487621A (en) 2021-05-25 2021-05-25 Medical image grading method and device, electronic equipment and readable storage medium
CN202110570809.8 2021-05-25

Publications (1)

Publication Number Publication Date
WO2022247007A1 true WO2022247007A1 (en) 2022-12-01

Family

ID=77933476

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/109482 WO2022247007A1 (en) 2021-05-25 2021-07-30 Medical image grading method and apparatus, electronic device, and readable storage medium

Country Status (2)

Country Link
CN (1) CN113487621A (en)
WO (1) WO2022247007A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116458945A (en) * 2023-04-25 2023-07-21 杭州整形医院有限公司 Intelligent guiding system and method for children facial beauty suture route

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN114529760B (en) * 2022-01-25 2022-09-02 北京医准智能科技有限公司 Self-adaptive classification method and device for thyroid nodules

Citations (4)

Publication number Priority date Publication date Assignee Title
CN107563123A (en) * 2017-09-27 2018-01-09 百度在线网络技术(北京)有限公司 Method and apparatus for marking medical image
CN109785300A (en) * 2018-12-27 2019-05-21 华南理工大学 A kind of cancer medical image processing method, system, device and storage medium
WO2020077962A1 (en) * 2018-10-16 2020-04-23 杭州依图医疗技术有限公司 Method and device for breast image recognition
CN111161279A (en) * 2019-12-12 2020-05-15 中国科学院深圳先进技术研究院 Medical image segmentation method and device and server

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108615051B (en) * 2018-04-13 2020-09-15 Bozhon Precision Industry Technology Co., Ltd. Diabetic retina image classification method and system based on deep learning
US20200250398A1 (en) * 2019-02-01 2020-08-06 Owkin Inc. Systems and methods for image classification
US10430946B1 (en) * 2019-03-14 2019-10-01 Inception Institute of Artificial Intelligence, Ltd. Medical image segmentation and severity grading using neural network architectures with semi-supervised learning techniques
CN111028206A (en) * 2019-11-21 2020-04-17 Wonders Information Co., Ltd. Prostate cancer automatic detection and classification system based on deep learning
CN111986211A (en) * 2020-08-14 2020-11-24 Wuhan University Deep learning-based ophthalmic ultrasonic automatic screening method and system


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116458945A (en) * 2023-04-25 2023-07-21 Hangzhou Plastic Surgery Hospital Co., Ltd. Intelligent guiding system and method for children facial beauty suture route
CN116458945B (en) * 2023-04-25 2024-01-16 Hangzhou Plastic Surgery Hospital Co., Ltd. Intelligent guiding system and method for children facial beauty suture route

Also Published As

Publication number Publication date
CN113487621A (en) 2021-10-08

Similar Documents

Publication Publication Date Title
US10004463B2 (en) Systems, methods, and computer readable media for using descriptors to identify when a subject is likely to have a dysmorphic feature
WO2021189909A1 (en) Lesion detection and analysis method and apparatus, and electronic device and computer storage medium
CN111932547B (en) Method and device for segmenting target object in image, electronic device and storage medium
WO2022247007A1 (en) Medical image grading method and apparatus, electronic device, and readable storage medium
CN111932534B (en) Medical image picture analysis method and device, electronic equipment and readable storage medium
WO2021151291A1 (en) Disease risk analysis method, apparatus, electronic device, and computer storage medium
WO2021189914A1 (en) Electronic device, medical image index generation method and apparatus, and storage medium
CN111933274A (en) Disease classification diagnosis method and device, electronic equipment and storage medium
CN111967539A (en) Recognition method and device for maxillofacial fracture based on CBCT database and terminal equipment
CN115294426B (en) Method, device and equipment for tracking interventional medical equipment and storage medium
CN116150690A (en) DRGs decision tree construction method and device, electronic equipment and storage medium
CN116578704A (en) Text emotion classification method, device, equipment and computer readable medium
CN114511569B (en) Tumor marker-based medical image identification method, device, equipment and medium
WO2023029348A1 (en) Image instance labeling method based on artificial intelligence, and related device
CN111967540B (en) Maxillofacial fracture identification method and device based on CT database and terminal equipment
CN115760656A (en) Medical image processing method and system
CN115100103A (en) Tumor prediction method and device based on bacterial data
WO2022247006A1 (en) Target object cut-out method and apparatus based on multiple features, and device and storage medium
CN114627050A (en) Case analysis method and system based on liver pathology full-section
CN116741395A (en) Knowledge graph embedding method, device, equipment and storage medium
CN116719891A (en) Clustering method, device, equipment and computer storage medium for traditional Chinese medicine information packet
CN113674840A (en) Medical image sharing method and device, electronic equipment and storage medium
CN116680380A (en) Visual intelligent question-answering method and device, electronic equipment and storage medium
CN115983234A (en) Health questionnaire analysis method, device, equipment and storage medium
CN115330733A (en) Disease intelligent identification method and system based on fine-grained domain knowledge

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 21942576

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE