CN110598782A - Method and device for training classification network for medical image - Google Patents

Method and device for training classification network for medical image

Info

Publication number
CN110598782A
CN110598782A (application CN201910843634.6A)
Authority
CN
China
Prior art keywords
image
chest
disease
classification network
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910843634.6A
Other languages
Chinese (zh)
Other versions
CN110598782B (en)
Inventor
叶德贤
房劬
刘维平
Current Assignee
Shanghai Xingmai Information Technology Co Ltd
Original Assignee
Shanghai Xingmai Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Xingmai Information Technology Co Ltd filed Critical Shanghai Xingmai Information Technology Co Ltd
Priority to CN201910843634.6A
Publication of CN110598782A
Application granted
Publication of CN110598782B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10116X-ray image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30061Lung

Abstract

An object of the present invention is to provide online training of a classification network for medical images. For a target disease species, a chest orthostatic image and its corresponding diagnosis report are acquired; the cardiopulmonary region is located from the chest orthostatic image; and the image of the cardiopulmonary region, together with the disease species information in the corresponding diagnosis report, is input into a classification network as sample data so as to train the network online. Starting from a medical image classification network trained offline, new medical images and diagnosis report data are further obtained from a hospital's medical image database and used for training by online learning in the actual scenario where the network is deployed. The classification network model thus continues to evolve and improve, becoming better suited to the actual application scenario and population; introducing disease position information improves training precision and significantly improves both the performance of the classification network and the effectiveness of online learning.

Description

Method and device for training classification network for medical image
Technical Field
The invention relates to the technical field of medical image processing, in particular to a technology for training a classification network for medical images.
Background
Chest X-ray radiography is the most common clinical method for diagnosing cardiothoracic diseases. In the prior art, diagnosis by X-ray radiography still depends on manual radiograph reading, which places high demands on a doctor's personal experience and ability; it is also costly, time-consuming, and easily affected by human factors such as the doctor's condition.
With the rapid development of artificial intelligence, especially deep learning, many researchers have tried to apply such techniques to the diagnosis of medical images. However, after a neural network for medical diagnosis obtained by offline training is deployed in different hospitals, it often suffers from poor generalization and performance degradation in practice, because different hospitals and different populations present different real-world conditions. This makes wide application in real clinical environments difficult.
A common solution is to capture diagnosis reports in real time on site by means of online learning and to evolve the model based on the captured reports. In practice, however, such schemes usually obtain the disease category of the corresponding image only through simple keyword retrieval; the key information of the disease position is not fully utilized, so the online learning performs poorly.
Disclosure of Invention
To solve the above problems, it is an object of the present invention to provide a method, an apparatus, a computing device, a computer-readable storage medium, and a computer program product for online training of a classification network for medical images.
According to an aspect of the present invention, there is provided a method for online training of a classification network for medical images, wherein the method comprises the steps of:
aiming at a target disease species, acquiring a chest orthostatic image and a corresponding diagnosis report thereof;
positioning the heart and lung region according to the chest orthostatic image;
inputting the image of the heart and lung region and the disease species information in the corresponding diagnosis report as sample data into a classification network so as to train the network;
wherein the training objective function of the classification network comprises:
-the classification error of the negative/positive binary classification of the disease species information;
-if a description of a disease location is included in the diagnostic report, calculating a mean square error between a mask of a granularity region corresponding to the disease location and a smoothed mask of a gradient class activation map generated by the classification network for the image.
According to an aspect of the present invention, there is also provided an apparatus for online training of a classification network for medical images, wherein the apparatus comprises:
the acquisition device is used for acquiring the chest orthostatic image and a corresponding diagnosis report aiming at a target disease species;
the positioning device is used for positioning the heart and lung area according to the chest orthostatic image;
the learning device is used for inputting the image of the heart and lung region and the disease species information in the corresponding diagnosis report as sample data into a classification network so as to train the network;
wherein the training objective function of the classification network comprises:
-the classification error of the negative/positive binary classification of the disease species information;
-if a description of a disease location is included in the diagnostic report, calculating a mean square error between a mask of a granularity region corresponding to the disease location and a smoothed mask of a gradient class activation map generated by the classification network for the image.
According to an aspect of the present invention, there is also provided a computing device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the computer program when executed by the processor implements a method of training a classification network for medical images according to an aspect of the present invention.
According to an aspect of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements a method of online training of a classification network for medical images according to an aspect of the present invention.
According to an aspect of the present invention, there is also provided a computer program product which, when executed by a computing device, implements a method of online training of a classification network for medical images according to an aspect of the present invention.
Compared with the prior art, the present invention starts from a medical image classification network trained offline and, in the actual scenario where that network is deployed, further obtains new medical images and diagnosis report data from a hospital's medical image database for training in an online learning mode. The classification network model thereby continues to evolve and improve, becoming better suited to the actual application scenario and population; introducing disease position information improves training precision and significantly improves both the performance of the classification network and the effectiveness of online learning.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments made with reference to the following drawings:
FIG. 1 illustrates a flow diagram of a method of training a classification network for medical images, according to one embodiment of the invention;
FIG. 2 illustrates a schematic diagram of an apparatus for training a classification network for medical images according to another embodiment of the present invention.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments of the present invention are described as an apparatus represented by a block diagram and a process or method represented by a flow diagram. Although a flowchart depicts a sequence of process steps in the present invention, many of the operations can be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations may be re-arranged. The process of the present invention may be terminated when its operations are performed, but may include additional steps not shown in the flowchart. The processes of the present invention may correspond to methods, functions, procedures, subroutines, and the like.
The methods illustrated by the flow diagrams and apparatus illustrated by the block diagrams discussed below may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine or computer readable medium such as storage medium. The processor(s) may perform the necessary tasks.
Similarly, it will be further appreciated that any flow charts, flow diagrams, state transition diagrams, and the like represent various processes which may be substantially described as program code stored in computer readable media and so executed by a computing device or processor, whether or not such computing device or processor is explicitly shown.
As used herein, the term "storage medium" may refer to one or more devices for storing data, including Read Only Memory (ROM), Random Access Memory (RAM), magnetic RAM, kernel memory, magnetic disk storage media, optical storage media, flash memory devices, and/or other machine-readable media for storing information. The term "computer-readable medium" can include, but is not limited to portable or fixed storage devices, optical storage devices, and various other mediums capable of storing and/or containing instructions and/or data.
A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program descriptions. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, information passing, token passing, network transmission, etc.
The term "computing device" in this context refers to an electronic device that can perform predetermined processes such as numerical calculations and/or logical calculations by executing predetermined programs or instructions, and may include at least a processor and a memory, wherein the predetermined processes are performed by the processor executing program instructions prestored in the memory, or by hardware such as ASIC, FPGA, DSP, or by a combination of the above two.
The "computing device" described above is typically embodied in the form of a general purpose computing device, whose components may include, but are not limited to: one or more processors or processing units, system memory. The system memory may include computer readable media in the form of volatile memory, such as Random Access Memory (RAM) and/or cache memory. "computing device" may further include other removable/non-removable, volatile/nonvolatile computer-readable storage media. The memory may include at least one computer program product having a set (e.g., at least one) of program modules that are configured to perform the functions and/or methods of embodiments of the present invention. The processor executes various functional applications and data processing by executing programs stored in the memory.
For example, the memory stores a computer program for executing the functions and processes of the present invention, and the processor executes the computer program, so that the present invention trains the classification network for medical images.
Typically, the computing devices include user devices and network devices. User equipment includes, but is not limited to, a Personal Computer (PC), a notebook computer, and a mobile terminal, the mobile terminal including, but not limited to, a smart phone or a tablet computer. A network device includes, but is not limited to, a single network server, a server group consisting of a plurality of network servers, or a cloud based on Cloud Computing and consisting of a large number of computers or network servers, cloud computing being a form of distributed computing: a virtual supercomputer composed of a collection of loosely coupled computers. The computing device is capable of operating alone to implement the invention, or of accessing a network and implementing the invention by interoperating with other computing devices in the network. The network in which the computing device is located includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and the like.
It should be noted that the user devices, network devices, networks, etc. are merely examples, and other existing or future computing devices or networks may be suitable for the present invention, and are included in the scope of the present invention and are incorporated by reference herein.
Specific structural and functional details disclosed herein are merely representative and are provided for purposes of describing example embodiments of the present invention. The present invention may, however, be embodied in many alternate forms and should not be construed as limited to only the embodiments set forth herein.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element may be termed a second element, and, similarly, a second element may be termed a first element, without departing from the scope of example embodiments. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed substantially concurrently, or the figures may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
The present invention is described in further detail below with reference to the attached drawing figures.
Fig. 1 shows a flow diagram of a method according to an embodiment of the invention, which particularly shows a process of online training of a classification network for medical images.
Typically, the invention is implemented by a computing device. When a general-purpose computing device is configured with program modules implementing the present invention, it will become a specialized computing device to train a classification network for medical images, rather than any general-purpose computer or processor. However, those skilled in the art will appreciate that the foregoing description is only intended to illustrate that the present invention may be applied to any general purpose computing device, which becomes a specialized training device for online training of classification networks for medical images that implements the present invention when applied to a general purpose computing device.
Herein, "online training" is to be understood broadly and includes, but is not limited to, any training based on data acquired at the site of medical diagnosis, for example training on diagnostic data obtained from a medical facility's medical image repository.
As shown in fig. 1, in step S1, the training apparatus acquires chest orthostatic images and their corresponding diagnosis reports for a target disease species; in step S2, the training device locates the cardiopulmonary region in each chest orthostatic image; in step S3, the training device inputs the image of the cardiopulmonary region and the disease species information in the corresponding diagnosis report as sample data into a classification network for training. The training objective function of the classification network comprises at least one of: 1) the classification error of the negative/positive binary classification of the disease species information; 2) if the diagnosis report includes a description of the disease position, the mean square error between the mask of the granularity region corresponding to the disease position and a smoothed mask of the Gradient-weighted Class Activation Map (Grad-CAM) generated by the classification network for the image.
Specifically, in step S1, the training device acquires chest orthostatic images and their corresponding diagnosis reports for a target disease species.
Here, the training device can, for example, acquire a plurality of chest orthostatic images and the corresponding diagnosis report for each from a medical image database. Alternatively, for a target disease species, the doctor may submit the chest orthostatic image and the diagnosis report to the training device after completing the report.
For example, for "pneumonia", the training device obtains a plurality of chest orthostatic images from a medical image database, together with the "pneumonia"-related diagnosis report for each image.
The medical image database stores a plurality of chest orthostatic images and the diagnosis report corresponding to each.
Alternatively, the medical image database stores a plurality of chest image sequences and the diagnosis report corresponding to each sequence. In that case, after the training device obtains a chest image sequence and its corresponding diagnosis report from the medical image database, it must also identify the chest orthostatic image within the sequence: a chest image sequence contains at least one chest orthostatic image and may also contain several chest lateral images.
The chest orthostatic image can be identified by a classifier.
For example, the classifier may be obtained by training on sample images that include chest orthostatic images and chest lateral images. Sample images labeled as chest orthostatic or chest lateral are input into the classifier to train it; the trained classifier can then distinguish chest orthostatic images from chest lateral images. Such classifiers, e.g. Inception or ResNet (Residual Neural Network), are based on deep learning classification networks.
In step S2, the training device locates the cardiopulmonary region in the chest orthostatic image.
For the plurality of chest orthostatic images obtained in step S1, the training apparatus may perform target region identification thereon respectively in step S2 to obtain cardiopulmonary regions therein.
1) The target region is located by a conventional image segmentation model.
Traditional image segmentation models are mainly based on various image segmentation algorithms, including threshold-based, region-based, and edge-based segmentation methods, as well as methods based on specific theories. Image segmentation models usable for medical images include active contour models, GrabCut, region growing models, and threshold segmentation models.
Here, the cardiopulmonary region can be extracted from the input chest orthostatic image by any of these image segmentation algorithms. The present invention is not limited in this respect; any existing or future image segmentation algorithm of the above kinds that is applicable to the present invention falls within its scope.
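As an illustration of the threshold-based method mentioned above (the threshold value and toy image below are hypothetical examples, not part of the patent), a minimal sketch might look like this:

```python
# Illustrative sketch of threshold-based segmentation: the image is a
# 2-D list of gray values, and pixels brighter than the threshold
# form the binary region mask.

def threshold_segment(image, threshold):
    """Return a binary mask: 1 where the gray value exceeds threshold."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

toy_image = [
    [10, 200, 210],
    [12, 190, 15],
    [11, 13, 14],
]
mask = threshold_segment(toy_image, 100)
```

Real medical-image segmentation would of course operate on full-resolution radiographs and typically combine several of the listed methods.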
2) The target region is located by an image segmentation model based on deep learning.
Deep learning is a series of algorithms in the field of machine learning, which attempt to perform multi-layer abstraction on data by using multiple nonlinear transformations, and not only learns the nonlinear mapping between input and output, but also learns the hidden structure of the input data vector, so as to perform intelligent identification or prediction on new samples.
Here, image segmentation models based on deep neural networks that can be used in the present invention include the FCN (Fully Convolutional Network) model and U-Net.
By training on sample images in which the target region has been labeled in advance, a deep-learning-based image segmentation algorithm can learn to identify that target region.
Specifically, in this embodiment, when the U-Net model is trained with sample images in which the cardiopulmonary region is labeled, the trained model can identify the cardiopulmonary region in a newly input chest orthostatic image.
For example, in a chest orthostatic image used as a sample image, the cardiopulmonary region is labeled "1" and all other regions are labeled "0". These labeled sample images are input into a U-Net model for training. When a chest orthostatic image to be identified is input into the model trained in this way, a chest orthostatic image with the cardiopulmonary region identified can be output, in which the cardiopulmonary region is marked "1" and the other regions "0". The output of the U-Net model may also be a mask of the cardiopulmonary region: although still an image in nature, it can be characterized as a two-dimensional matrix in which the cardiopulmonary region is denoted by "1" and the other regions by "0".
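Such a 0/1 mask can be handled as a plain two-dimensional matrix. As an illustrative sketch (the helper below is an assumption for exposition, not part of the patent), the bounding box of the cardiopulmonary region can be read directly off the mask:

```python
def mask_bounding_box(mask):
    """Return (row_min, row_max, col_min, col_max) of the '1' region
    in a binary mask given as a list of lists of 0/1 values."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    cols = [j for j in range(len(mask[0]))
            if any(row[j] for row in mask)]
    return rows[0], rows[-1], cols[0], cols[-1]

# Toy mask: the '1' block stands in for the cardiopulmonary region.
toy_mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
box = mask_bounding_box(toy_mask)
```

Downstream steps (e.g. cropping) can then work with either the mask itself or this box.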
3) The heart-lung region is located by a detector.
The detector may be pre-configured with a scanning window for the cardiopulmonary region, so that the window can be moved directly over the chest orthostatic image to rapidly locate the cardiopulmonary region. The detector is, for example, an SSD (Single Shot MultiBox Detector).
After the cardiopulmonary region is located, the detector or training device may crop the chest orthostatic image to obtain an image that contains only the cardiopulmonary region. "Cropping" is to be understood broadly here and includes any processing that makes the cardiopulmonary region the only region of interest: it may, for example, remove the other areas of the chest orthostatic image, leaving only the cardiopulmonary region, or it may leave the gray values of the cardiopulmonary region unchanged while setting the gray values of all other regions to "0".
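The second variant of "cropping" (zeroing out everything outside the region) can be sketched directly from the mask representation; the function below is an illustrative assumption, not the patent's implementation:

```python
def crop_to_region(image, mask):
    """Keep gray values where the binary mask is 1; set all other
    pixels to 0, leaving the region's gray values unchanged."""
    return [[px if m else 0 for px, m in zip(irow, mrow)]
            for irow, mrow in zip(image, mask)]

toy_image = [[5, 6], [7, 8]]
toy_mask = [[1, 0], [0, 1]]
cropped = crop_to_region(toy_image, toy_mask)  # gray values 5 and 8 survive
```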
In step S3, the training device inputs the images of the cardiopulmonary area and the information of the disease type in the corresponding diagnosis report as sample data into the classification network for training.
In the invention, the training device can extract the disease species information and the disease position from the diagnosis report by means of a natural language processing model.
Doctors describe disease information in diagnosis reports in many different ways. Taking "pneumonia" as an example, a doctor may write "pneumonia should be considered", "pneumonia is suspected", "pneumonia is possible", "pneumonia is not excluded", or "pneumonia is excluded". Accordingly, the natural language processing model may obtain the disease information and its corresponding disease position by extracting keywords, such as the disease name and the disease position, from the diagnosis report and then combining them with semantic analysis of the context. Any existing or future natural language processing model applicable to the present invention falls within its scope.
Descriptions such as "pneumonia should be considered", "pneumonia is suspected", "pneumonia is possible", or "pneumonia is not excluded" may cause the corresponding sample image to be labeled "pneumonia positive"; descriptions such as "pneumonia is excluded" may cause it to be labeled "pneumonia negative".
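A minimal rule-based sketch of this labeling step follows; the phrase handling below is an illustrative assumption standing in for the context-aware NLP model the text describes:

```python
def label_report(report, disease="pneumonia"):
    """Map a free-text report conclusion to a positive/negative label,
    following the examples above: 'not excluded' and plain mentions
    are positive, 'excluded' is negative. Returns None when the
    disease is not mentioned at all."""
    text = report.lower()
    if disease not in text:
        return None
    # Check the double-negative phrasing before the plain negative one.
    if "not excluded" in text or "cannot be excluded" in text:
        return "positive"
    if "excluded" in text:
        return "negative"
    return "positive"
```

A production system would additionally extract the disease position ("left upper lung field", etc.) in the same pass.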
The training objective function of the classification network comprises:
1) the classification error of the negative/positive binary classification of the disease species information;
If the diagnosis report contains no description of the disease position, the classification network classifies the input image of the cardiopulmonary region as positive or negative for the disease species, and the classification error of this binary classification is computed against the positive/negative label derived from the diagnosis report.
Typically, the classification error Loss₁ is expressed using cross entropy, as in the following equation (1):

Loss₁ = −(1/n) · Σᵢ₌₁ⁿ [ yᵢ · log(yᵢ′) + (1 − yᵢ) · log(1 − yᵢ′) ]    (1)

where yᵢ is the positive/negative label of the i-th input sample image, yᵢ′ is the positive/negative result output by the classification network, and n is the number of sample images.
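Equation (1) can be sketched directly in code (a plain stdlib illustration, assuming network outputs strictly inside (0, 1)):

```python
import math

def binary_cross_entropy(labels, preds):
    """Loss1: mean binary cross entropy between labels y_i (0 or 1)
    and network outputs y_i' in the open interval (0, 1)."""
    n = len(labels)
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(labels, preds)) / n
```

For a confident, correct pair of predictions, e.g. labels `[1, 0]` and outputs `[0.9, 0.1]`, the loss is small (about 0.105), and it grows as predictions move toward the wrong class.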
2) If the diagnosis report includes a description of the disease position, the mean square error is calculated between the mask of the granularity region corresponding to the disease position and the smoothed mask of the gradient class activation map generated by the classification network for the input image. The larger this mean square error, the larger the difference between the region the classification network actually attends to and the real disease position, i.e., the further the network deviates from a valid reading criterion and the worse its performance.
Here, the mean square error Loss₂ can be calculated by the following formula (2):

Loss₂ = (1/(w·h)) · Σᵢ₌₁ʰ Σⱼ₌₁ʷ ( p_mask(i, j) − p_grad-cam(i, j) )²    (2)

where w is the number of columns of pixels in the mask, h is the number of rows of pixels in the mask, p_mask(i, j) is the pixel value at point (i, j) in the mask of the granularity region, and p_grad-cam(i, j) is the pixel value at the same point (i, j) in the gradient class activation map.
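Formula (2) reduces to a per-pixel mean squared error; a minimal sketch (taking both masks as equally sized 2-D lists, an assumption for illustration):

```python
def mask_mse(mask, cam):
    """Loss2 per equation (2): mean squared error between the
    granularity-region mask and the smoothed Grad-CAM mask."""
    h, w = len(mask), len(mask[0])
    return sum((mask[i][j] - cam[i][j]) ** 2
               for i in range(h) for j in range(w)) / (w * h)

# Toy example: the network's attention (cam) only half-covers the
# labeled disease region (mask), producing a nonzero penalty.
toy_mask = [[1, 0], [0, 0]]
toy_cam = [[0.5, 0], [0, 0]]
penalty = mask_mse(toy_mask, toy_cam)
```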
When the diagnosis report does include a description of the disease position, that position is generally described at the granularity of 1/6 lung fields: the two lung fields are divided into six sub-regions by upper/middle/lower and left/right, e.g. the left upper lung field, the right middle lung field, and so on.
Accordingly, an image segmentation algorithm based on deep learning similar to the above can be trained to identify the granularity region corresponding to the disease position, such as the left upper lung field region, the right middle lung field region, and the like.
For example, the sample image is again a chest orthostatic image, in which the left upper lung field region is labeled "1", the left middle lung field region "2", the left lower lung field region "3", the right upper lung field region "4", the right middle lung field region "5", the right lower lung field region "6", and all other regions "0". These labeled sample images are input into a U-net algorithm model for training. When a chest orthostatic image to be identified is input into the trained U-net model, masks of the left upper, left middle, left lower, right upper, right middle and right lower lung fields can be output. The granularity of these masks is consistent with the granularity at which the disease location is described. Alternatively, a separate image segmentation model may be trained for each 1/6 lung field, and the chest orthostatic image to be identified segmented by the 6 models respectively, yielding the masks of the six lung fields.
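With the 0 to 6 labeling convention above, the six 1/6 lung-field masks can be read straight out of a single multi-class segmentation output. The helper below is a hypothetical sketch of that step, not code from the patent:

```python
import numpy as np

# Label values assigned to the 1/6 lung fields when annotating sample images.
LUNG_FIELD_LABELS = {
    1: "left upper", 2: "left middle", 3: "left lower",
    4: "right upper", 5: "right middle", 6: "right lower",
}

def lung_field_masks(label_map):
    """Split a segmentation label map (pixel values 0..6, 0 = background)
    into six binary masks, one per 1/6 lung field."""
    return {name: (label_map == value).astype(np.uint8)
            for value, name in LUNG_FIELD_LABELS.items()}
```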
Further, the objective of the classification network may be, for example, that the sum of the classification error obtained in 1) above and the mean square error obtained in 2) above is as small as possible. The smaller the sum of the two is, the closer the classification result of the classification network is to the real result.
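The combined objective can be sketched as the straightforward sum of the two errors. Everything here (the function name, the unweighted sum) is an illustrative assumption rather than the patent's exact implementation:

```python
import numpy as np

def training_objective(y_true, y_pred, region_mask=None, grad_cam=None, eps=1e-7):
    """Sum of the classification error 1) and, when the diagnosis report
    gives a disease occurrence position, the mask mean square error 2).
    Smaller is better: both the prediction and the network's attention
    region move closer to the ground truth."""
    p = np.clip(y_pred, eps, 1.0 - eps)
    loss = float(-np.mean(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p)))
    if region_mask is not None and grad_cam is not None:
        h, w = region_mask.shape
        loss += float(np.sum((region_mask - grad_cam) ** 2) / (w * h))
    return loss
```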
When the training device newly acquires a chest orthostatic image and its corresponding diagnosis report, a new round of training of the classification network is triggered: steps S2 and S3 are repeated to introduce the new sample, so that the classification network is continuously trained and evolves. The newly acquired chest orthostatic image and its corresponding diagnosis report may come from a medical image database; when the database is updated, the training device may acquire the updated data through a data crawling tool such as a crawler. Alternatively, the training device may request the updated chest orthostatic image and its corresponding diagnosis report from the medical image database through a preset interface; or, when a new chest orthostatic image and its corresponding diagnosis report are added to the medical image database, the database actively pushes the updated data to the training device. In this way, the classification network can learn new sample data online as soon as the medical image database is updated.
Fig. 2 shows a schematic view of an apparatus according to an embodiment of the invention.
Typically, the apparatus of the present invention can be implemented as a functional module in any general-purpose computing device. When a general-purpose computing device is configured with the apparatus of the present invention, it becomes a specific device for online training of classification networks for medical images, hereinafter referred to as the "training device", rather than any general-purpose computer or processor; the apparatus of the present invention may accordingly also be referred to as the "training apparatus". The "training apparatus" may be implemented as a computer program, as hardware, or as a combination of the two.
Herein, "online training" is to be understood broadly and includes, but is not limited to, any training based on data acquired at a medical diagnosis site, for example training on diagnostic data obtained from a medical image repository of a medical facility.
As shown in FIG. 2, the training apparatus 20 is incorporated into a computing device 200. The training apparatus 20 comprises acquisition means 21, positioning means 22 and learning means 23.
For a target disease species, the acquisition means 21 acquires the chest orthostatic image and its corresponding diagnosis report; the positioning means 22 locates the cardiopulmonary region in the chest orthostatic image; and the learning means 23 inputs the image of the cardiopulmonary region and the disease species information in the corresponding diagnosis report as sample data into the classification network so as to train the classification network. The training objective function of the classification network comprises at least one of: 1) the classification error of the negative/positive binary classification of the disease species information; 2) if the diagnosis report includes a description of the disease occurrence position, the mean square error between the mask of the granularity region corresponding to that position and the smoothed mask of the gradient class activation map generated by the classification network for the image.
Specifically, the acquiring device 21 acquires the chest orthophoto image and the corresponding diagnosis report for a target disease type.
In this case, for a target disease species, the acquisition means 21 may, for example, acquire from a medical image database a plurality of chest orthostatic images and the diagnosis report corresponding to each of them. Alternatively, after completing a diagnosis report for a target disease, the doctor submits the chest orthostatic image and the diagnosis report to the acquisition means 21.
For example, for "pneumonia", the acquisition means 21 acquires from a medical image database a plurality of chest orthostatic images and the "pneumonia"-related diagnosis report corresponding to each chest orthostatic image.
The medical image database stores a plurality of chest orthostatic images and the diagnosis reports corresponding to them.
Alternatively, the medical image database stores a plurality of chest image sequences and the diagnosis report corresponding to each sequence. In that case, after the acquisition means 21 acquires a chest image sequence and its corresponding diagnosis report from the medical image database, the chest orthostatic image is identified from the sequence. A chest image sequence comprises at least one chest orthostatic image and may also comprise a plurality of chest lateral images.
The chest orthostatic image can be identified by a classifier.
For example, the classifier may be obtained by training on sample images, which may include chest orthostatic images and chest lateral images. Sample images labeled as chest orthostatic or chest lateral are input into the classifier for training; the trained classifier can then distinguish chest orthostatic images from chest lateral images. Here, the classifier is a deep-learning-based classification network such as Inception or ResNet (Residual Neural Network). When the acquisition means 21 acquires a chest image sequence from the medical image database, the classifier can be called to obtain the orthostatic chest images therein.
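Once such a view classifier is trained, picking the orthostatic (frontal) images out of a mixed chest image sequence is a simple filter. In this sketch, `classify_view` stands in for the trained Inception/ResNet classifier and its "frontal"/"lateral" return values are assumed for illustration:

```python
def select_frontal_images(image_sequence, classify_view):
    """Return only the chest orthostatic (frontal) images of a sequence.

    classify_view: a trained view classifier returning "frontal" or
    "lateral" for a single image (assumed interface).
    """
    return [image for image in image_sequence
            if classify_view(image) == "frontal"]
```

The acquisition step can then associate each retained frontal image with the diagnosis report of its sequence.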
Subsequently, the positioning device 22 positions the cardiopulmonary region based on the chest orthophoto image.
For the plurality of chest orthostatic images obtained by the obtaining device 21, the positioning device 22 may respectively perform target area identification thereon to obtain the cardiopulmonary area therein.
1) The target region is located by a conventional image segmentation model.
The traditional image segmentation model is mainly based on various image segmentation algorithms, including threshold-based, region-based, edge-based, and theory-specific segmentation methods. Image segmentation models applicable to medical images include active contour models, GrabCut, region growing models, threshold segmentation models, and the like.
Here, the positioning means 22 can extract the cardiopulmonary region from the input chest orthostatic image by means of the various image segmentation algorithms above. The present invention is not limited in this respect; any existing or future image segmentation algorithm using the above segmentation methods, if applicable to the present invention, is intended to be included in the scope of the present invention.
2) The target region is located by an image segmentation model based on deep learning.
Deep learning is a family of algorithms in the field of machine learning that attempt multi-layer abstraction of data through multiple nonlinear transformations; it learns not only the nonlinear mapping between input and output but also the hidden structure of the input data vector, so as to intelligently identify or predict new samples.
Here, deep-learning-based image segmentation models that can be used in the present invention include the FCN (Fully Convolutional Network) model and U-net.
By inputting the sample image labeled with the target region in advance, the image segmentation algorithm based on the deep learning can be trained to identify the specific target region.
Specifically, in this embodiment, when the U-net algorithm model is trained by using the sample image labeled with the cardiopulmonary region, the trained U-net algorithm model can identify the cardiopulmonary region in the newly input chest ortho-image.
For example, in a chest orthostatic image serving as a sample image, the cardiopulmonary region is labeled "1" and the other regions are labeled "0". These labeled sample images are input into a U-net algorithm model for training. When a chest orthostatic image to be identified is input into the U-net model so trained, a chest orthostatic image with the cardiopulmonary region identified can be output, wherein the cardiopulmonary region is marked "1" and the other regions "0". The output of the U-net model may also be a mask of the cardiopulmonary region, which, although still an image in nature, can be characterized as a two-dimensional matrix in which the cardiopulmonary region is denoted by "1" and the other regions by "0". The positioning device 22 may invoke or directly integrate the trained cardiopulmonary segmentation model to identify the cardiopulmonary region in a chest orthostatic image.
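In practice a U-net head emits a per-pixel probability rather than a hard 1/0 label, so a thresholding step produces the binary cardiopulmonary mask described above. The function and the 0.5 threshold below are illustrative assumptions:

```python
import numpy as np

def to_binary_mask(probability_map, threshold=0.5):
    """Turn a U-net per-pixel probability map into the 1/0 mask described
    above: 1 marks the cardiopulmonary region, 0 everything else."""
    return (probability_map >= threshold).astype(np.uint8)
```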
3) The heart-lung region is located by a detector.
The detector may be pre-configured with a scanning window for the cardiopulmonary region, so that the scanning window can be moved directly over the chest orthostatic image to rapidly locate the cardiopulmonary region. Here, the detector is, for example, an SSD (Single Shot MultiBox Detector). The positioning device 22 may invoke or directly integrate the detector to detect the cardiopulmonary region in the chest orthostatic image.
After the cardiopulmonary region is located, the detector or the positioning device 22 may crop the chest orthostatic image to obtain an image that includes only the cardiopulmonary region. "Cropping" is to be understood broadly here and includes any processing that makes the cardiopulmonary region the only region of interest. "Cropping" may be, for example, removing the other regions of the chest orthostatic image and keeping only the cardiopulmonary region. "Cropping" may also be performed without changing the gray values of the cardiopulmonary region, for example by setting the gray values of all other regions of the chest orthostatic image to "0".
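The gray-value-preserving variant of "cropping" described above amounts to zeroing every pixel outside the mask. A minimal sketch, with the function name assumed for illustration:

```python
import numpy as np

def crop_to_cardiopulmonary(image, mask):
    """'Crop' in the broad sense: keep the gray values of the cardiopulmonary
    region (mask == 1) unchanged and set every other pixel to 0."""
    return np.where(mask == 1, image, 0)
```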
Next, the learning device 23 inputs the images of the cardiopulmonary region and the information of the disease type in the corresponding diagnosis report as sample data into the classification network, and trains the classification network.
The classification network can also be integrated with the learning device 23.
In the present invention, the learning device 23 or other devices in the training facility (not shown in fig. 2) can extract the disease type information and the disease location from the diagnosis report through the natural language processing model.
Doctors describe disease species information in diagnosis reports in many different ways. Taking "pneumonia" as an example, the doctor may write that pneumonia should be considered, pneumonia is suspected, pneumonia may be present, pneumonia is not excluded, or pneumonia is excluded. Accordingly, the natural language processing model may obtain the disease information and its corresponding disease location by extracting keywords, such as the disease name and the disease location, from the diagnosis report and combining them with semantic analysis of the context. Any existing or future natural language processing model, if applicable to the present invention, is intended to be encompassed by the present invention.
Wherein, for doctor descriptions such as "pneumonia should be", "pneumonia is suspected", "pneumonia may be", "pneumonia is not excluded", the corresponding sample image may be labeled "pneumonia positive"; for doctor descriptions such as "excluding pneumonia", the corresponding sample image may be labeled as "pneumonia negative".
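The labeling convention above can be illustrated with a toy keyword rule. A real system would rely on the full natural language processing model with contextual semantic analysis, so the phrase lists and the function below are assumptions for illustration only:

```python
# Phrasings that, per the convention above, map to a positive label.
POSITIVE_PHRASES = ("should be considered", "is suspected", "may be", "not excluded")
# Phrasings that map to a negative label.
NEGATIVE_PHRASES = ("excluding", "excluded")

def label_from_report(report_text, disease="pneumonia"):
    """Map a doctor's free-text conclusion to "<disease> positive/negative"."""
    text = report_text.lower()
    if disease not in text:
        return None
    for phrase in POSITIVE_PHRASES:   # positive phrasings take priority, so that
        if phrase in text:            # "not excluded" is not misread as negative
            return f"{disease} positive"
    for phrase in NEGATIVE_PHRASES:
        if phrase in text:
            return f"{disease} negative"
    return None
```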
The training objective function of the classification network comprises:
1) classification errors of negative and positive secondary classifications of the disease species information;
If the diagnosis report contains no description of the disease position, the classification network classifies the disease species for the input cardiopulmonary-region image, i.e. the corresponding disease species is positive or negative, and the classification error of this negative/positive binary classification is determined against the positive or negative conclusion given in the diagnosis report.
Typically, the classification error Loss1 is expressed using cross entropy and calculated using equation (1) above.
2) If the diagnosis report includes a description of the disease occurrence position, the mean square error between the mask of the granularity region corresponding to that position and the smoothed mask of the gradient class activation map generated by the classification network for the input image is calculated. The larger the mean square error, the larger the difference between the region the classification network effectively attends to and the actual disease occurrence position, and the further the classification network deviates from valid imaging criteria, that is, the worse its performance.
Here, the mean square error Loss2 can be calculated by formula (2) above.
Where the diagnosis report includes a description of the disease onset position, that position is generally described at the granularity of 1/6 lung fields: the two lung fields are divided into 6 sub-regions according to upper/middle/lower and left/right, such as the left upper lung field, the right middle lung field, etc.
Accordingly, an image segmentation algorithm based on deep learning similar to the above can be trained to identify the granularity region corresponding to the disease position, such as the left upper lung field region, the right middle lung field region, and the like. The image segmentation algorithm may be invoked or integrated by the positioning device 22 or other devices in the training apparatus (not shown in fig. 2) to identify specific lung field regions.
For example, the sample image is again a chest orthostatic image, in which the left upper lung field region is labeled "1", the left middle lung field region "2", the left lower lung field region "3", the right upper lung field region "4", the right middle lung field region "5", the right lower lung field region "6", and all other regions "0". These labeled sample images are input into a U-net algorithm model for training. When a chest orthostatic image to be identified is input into the trained U-net model, masks of the left upper, left middle, left lower, right upper, right middle and right lower lung fields can be output. The granularity of these masks is consistent with the granularity at which the disease location is described. Alternatively, a separate image segmentation model may be trained for each 1/6 lung field, and the chest orthostatic image to be identified segmented by the 6 models respectively, yielding the masks of the six lung fields.
Further, the objective of the classification network may be, for example, that the sum of the classification error obtained in 1) above and the mean square error obtained in 2) above is as small as possible. The smaller the sum of the two is, the closer the classification result of the classification network is to the real result.
When the acquisition means 21 newly acquires a chest orthostatic image and its corresponding diagnosis report, a new round of training of the classification network is triggered: the positioning means 22 and the learning means 23 perform their respective operations again to introduce the new sample, so that the classification network is continuously trained and evolves. The newly acquired chest orthostatic image and its corresponding diagnosis report may come from a medical image database; when the database is updated, the acquisition means 21 may acquire the updated data through a data crawling tool such as a crawler. Alternatively, the acquisition means 21 may request the updated chest orthostatic image and its corresponding diagnosis report from the medical image database through a preset interface; or, when a new chest orthostatic image and its corresponding diagnosis report are added to the medical image database, the database actively pushes the updated data to the acquisition means 21. In this way, the classification network can learn new sample data online as soon as the medical image database is updated.
According to the various embodiments described above, the following clauses are proposed:
clause 1. a method of online training a classification network for medical images, wherein the method comprises the steps of:
aiming at a target disease species, acquiring a chest orthostatic image and a corresponding diagnosis report thereof;
positioning the heart and lung region according to the chest orthostatic image;
inputting the image of the heart and lung region and the disease species information in the corresponding diagnosis report as sample data into a classification network so as to train the classification network;
wherein the training objective function of the classification network comprises:
-classification errors of negative and positive dichotomy of the disease species information;
-if a description of a disease location is included in the diagnostic report, calculating a mean square error between a mask of a granularity region corresponding to the disease location and a smoothed mask of a gradient class activation map generated by the classification network for the image.
Clause 2. the method of clause 1, wherein the disease location is described in terms of a granularity of 1/6 lung fields, the mask of the granularity region having a granularity that is consistent with the granularity of the disease location.
Clause 3. the method of clause 1 or 2, wherein the locating step and the training step are performed again when a new chest orthotopic image and its corresponding diagnostic report are acquired.
Clause 4. the method of any of clauses 1-3, wherein the chest orthostatic image and its corresponding diagnostic report are obtained from a medical image database.
Clause 5. the method of clause 4, wherein the method further comprises:
acquiring a chest image sequence and a corresponding diagnosis report from the medical image database;
the chest orthostatic image is identified from the sequence of chest images to correlate the chest orthostatic image with its corresponding diagnostic report.
Clause 6. the method of any of clauses 1-5, wherein the method further comprises:
after the cardiopulmonary region is located, the chest orthophotos is cropped to obtain an image including only the cardiopulmonary region.
Clause 7. the method of any of clauses 1-6, wherein the disease category information and the disease location are extracted from the diagnostic report by a natural language processing model.
Clause 8. an apparatus for online training of a classification network for medical images, wherein the apparatus comprises:
the acquisition device is used for acquiring the chest orthostatic image and a corresponding diagnosis report aiming at a target disease species;
the positioning device is used for positioning the heart and lung area according to the chest orthostatic image;
the learning device is used for inputting the image of the heart and lung area and the disease species information in the corresponding diagnosis report as sample data into a classification network so as to train the classification network;
wherein the training objective function of the classification network comprises:
-classification errors of negative and positive dichotomy of the disease species information;
-if a description of a disease location is included in the diagnostic report, calculating a mean square error between a mask of a granularity region corresponding to the disease location and a smoothed mask of a gradient class activation map generated by the classification network for the image.
Clause 9. the apparatus of clause 8, wherein the disease location is described in terms of a granularity of 1/6 lung fields, the granularity of the mask of the granularity region being consistent with the granularity of the disease location.
Clause 10. the device of clause 8 or 9, wherein when a new chest orthotopic image and its corresponding diagnostic report are acquired by the acquisition device, the positioning device and the learning device are again triggered to perform their respective operations.
Clause 11. the apparatus of any one of clauses 8-10, wherein the chest orthostatic image and its corresponding diagnostic report are obtained from a medical image database.
Clause 12. the apparatus of clause 11, wherein the obtaining means is further for:
acquiring a chest image sequence and a corresponding diagnosis report from the medical image database;
the chest orthostatic image is identified from the sequence of chest images to correlate the chest orthostatic image with its corresponding diagnostic report.
Clause 13. the apparatus of any one of clauses 8-12, wherein the positioning device is further configured to:
after the cardiopulmonary region is located, the chest orthophotos is cropped to obtain an image including only the cardiopulmonary region.
Clause 14. the apparatus of any one of clauses 8 to 13, wherein the disease category information and the disease location are extracted from the diagnostic report by a natural language processing model.
Clause 15. a computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of clauses 1 to 7 when executing the computer program.
Clause 16. a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method of any of clauses 1 to 7.
Clause 17. a computer program product which, when executed by a computer device, implements the method of any one of clauses 1 to 7.
It should be noted that the present invention may be implemented in software and/or in a combination of software and hardware, for example, as an Application Specific Integrated Circuit (ASIC), a general purpose computer or any other similar hardware device. In one embodiment, the software program of the present invention may be executed by a processor to implement the steps or functions described above. Also, the software programs (including associated data structures) of the present invention can be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Further, some of the steps or functions of the present invention may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, at least a portion of the present invention may be implemented as a computer program product, such as computer program instructions, which, when executed by a computing device, may invoke or provide methods and/or aspects in accordance with the present invention through operation of the computing device. Program instructions which invoke/provide the methods of the present invention may be stored on fixed or removable recording media and/or transmitted via a data stream over a broadcast or other signal-bearing medium, and/or stored in a working memory of a computing device operating in accordance with the program instructions.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (10)

1. A method of on-line training a classification network for medical images, wherein the method comprises the steps of:
aiming at a target disease species, acquiring a chest orthostatic image and a corresponding diagnosis report thereof;
positioning the heart and lung region according to the chest orthostatic image;
inputting the image of the heart and lung region and the disease species information in the corresponding diagnosis report as sample data into a classification network so as to train the classification network;
wherein the training objective function of the classification network comprises:
-classification errors of negative and positive dichotomy of the disease species information;
-if a description of a disease location is included in the diagnostic report, calculating a mean square error between a mask of a granularity region corresponding to the disease location and a smoothed mask of a gradient class activation map generated by the classification network for the image.
2. The method of claim 1, wherein the disease locations are described in terms of a granularity of 1/6 lung fields, the mask of the granularity region having a granularity that is consistent with the granularity of the disease locations.
3. The method of claim 1 or 2, wherein the positioning step and the training step are performed again when a new chest orthostatic image and its corresponding diagnostic report are acquired.
4. The method of claim 1 or 3, wherein the chest orthostatic image and its corresponding diagnostic report are obtained from a medical image database.
5. The method of claim 4, wherein the method further comprises:
acquiring a chest image sequence and a corresponding diagnosis report from the medical image database;
the chest orthostatic image is identified from the sequence of chest images to correlate the chest orthostatic image with its corresponding diagnostic report.
6. An apparatus for online training of classification networks for medical images, wherein the apparatus comprises:
the acquisition device is used for acquiring the chest orthostatic image and a corresponding diagnosis report aiming at a target disease species;
the positioning device is used for positioning the heart and lung area according to the chest orthostatic image;
the learning device is used for inputting the image of the heart and lung area and the disease species information in the corresponding diagnosis report as sample data into a classification network so as to train the classification network;
wherein the training objective function of the classification network comprises:
-classification errors of negative and positive dichotomy of the disease species information;
-if a description of a disease location is included in the diagnostic report, calculating a mean square error between a mask of a granularity region corresponding to the disease location and a smoothed mask of a gradient class activation map generated by the classification network for the image.
7. The apparatus of claim 6, wherein the disease locations are described in terms of a granularity of 1/6 lung fields, the mask of the granularity region having a granularity that is consistent with the granularity of the disease locations.
8. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 5 when executing the computer program.
9. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of any of claims 1 to 5.
10. A computer program product which, when executed by a computer device, implements the method of any one of claims 1 to 5.
CN201910843634.6A 2019-09-06 2019-09-06 Method and device for training classification network for medical image Active CN110598782B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910843634.6A CN110598782B (en) 2019-09-06 2019-09-06 Method and device for training classification network for medical image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910843634.6A CN110598782B (en) 2019-09-06 2019-09-06 Method and device for training classification network for medical image

Publications (2)

Publication Number Publication Date
CN110598782A true CN110598782A (en) 2019-12-20
CN110598782B CN110598782B (en) 2020-07-07

Family

ID=68858228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910843634.6A Active CN110598782B (en) 2019-09-06 2019-09-06 Method and device for training classification network for medical image

Country Status (1)

Country Link
CN (1) CN110598782B (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107424152A (en) * 2017-08-11 2017-12-01 联想(北京)有限公司 The detection method and electronic equipment of organ lesion and the method and electronic equipment for training neuroid
CN107644419A (en) * 2017-09-30 2018-01-30 百度在线网络技术(北京)有限公司 Method and apparatus for analyzing medical image

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111462865A (en) * 2020-02-28 2020-07-28 平安国际智慧城市科技股份有限公司 Medical image recognition model generation method and device, computer equipment and medium
CN111583226B (en) * 2020-05-08 2023-06-30 上海杏脉信息科技有限公司 Cell pathological infection evaluation method, electronic device and storage medium
CN111583226A (en) * 2020-05-08 2020-08-25 上海杏脉信息科技有限公司 Cytopathological infection evaluation method, electronic device, and storage medium
CN111667469A (en) * 2020-06-03 2020-09-15 北京小白世纪网络科技有限公司 Lung disease classification method, device and equipment
CN111667469B (en) * 2020-06-03 2023-10-31 北京小白世纪网络科技有限公司 Lung disease classification method, device and equipment
CN111767946A (en) * 2020-06-19 2020-10-13 北京百度网讯科技有限公司 Medical image hierarchical model training and prediction method, device, equipment and medium
CN111767946B (en) * 2020-06-19 2024-03-22 北京康夫子健康技术有限公司 Medical image hierarchical model training and predicting method, device, equipment and medium
CN112001536A (en) * 2020-08-12 2020-11-27 武汉青忆辰科技有限公司 High-precision finding method for minimal sample of mathematical capability point defect of primary and secondary schools based on machine learning
CN112001536B (en) * 2020-08-12 2023-08-11 武汉青忆辰科技有限公司 High-precision discovery method for point defect minimum sample of mathematical ability of middle and primary schools based on machine learning
CN112686833A (en) * 2020-08-22 2021-04-20 安徽大学 Industrial product surface defect detecting and classifying device based on convolutional neural network
CN112686833B (en) * 2020-08-22 2023-06-06 安徽大学 Industrial product surface defect detection and classification device based on convolutional neural network
CN111739023A (en) * 2020-08-25 2020-10-02 湖南数定智能科技有限公司 Funnel chest Haller index measuring method, electronic equipment and storage medium
CN112349392A (en) * 2020-11-25 2021-02-09 北京大学第三医院(北京大学第三临床医学院) Human cervical vertebra medical image processing system
CN112561894A (en) * 2020-12-22 2021-03-26 中国科学院苏州生物医学工程技术研究所 Intelligent electronic medical record generation method and system for CT image
CN112561894B (en) * 2020-12-22 2023-11-28 中国科学院苏州生物医学工程技术研究所 Intelligent electronic medical record generation method and system for CT image
JP2022107558A (en) * 2021-01-09 2022-07-22 国立大学法人岩手大学 Method for detecting stomatognathic disease and detection system therefor
JP7390666B2 (en) 2021-01-09 2023-12-04 国立大学法人岩手大学 Image processing method and system for detecting stomatognathic disease sites
CN113378984A (en) * 2021-07-05 2021-09-10 国药(武汉)医学实验室有限公司 Medical image classification method, system, terminal and storage medium

Also Published As

Publication number Publication date
CN110598782B (en) 2020-07-07

Similar Documents

Publication Publication Date Title
CN110598782B (en) Method and device for training classification network for medical image
US10636147B2 (en) Method for characterizing images acquired through a video medical device
Jiang et al. Medical image semantic segmentation based on deep learning
Ghesu et al. Contrastive self-supervised learning from 100 million medical images with optional supervision
Kaur et al. A survey on deep learning approaches to medical images and a systematic look up into real-time object detection
Chudzik et al. Exudate segmentation using fully convolutional neural networks and inception modules
US11449717B2 (en) System and method for identification and localization of images using triplet loss and predicted regions
Huang et al. Lesion-based contrastive learning for diabetic retinopathy grading from fundus images
Kotia et al. Few shot learning for medical imaging
Yang et al. Detecting helicobacter pylori in whole slide images via weakly supervised multi-task learning
Raut et al. Gastrointestinal tract disease segmentation and classification in wireless capsule endoscopy using intelligent deep learning model
Shamrat et al. Analysing most efficient deep learning model to detect COVID-19 from computer tomography images
Ahmad et al. Optimized lung nodule prediction model for lung cancer using contour features extraction
Chowdhury et al. Classification of diseases from CT images using LSTM-based CNN
Nie et al. Recent advances in diagnosis of skin lesions using dermoscopic images based on deep learning
Chatterjee et al. A survey on techniques used in medical imaging processing
Tang et al. M-SEAM-NAM: multi-instance self-supervised equivalent attention mechanism with neighborhood affinity module for double weakly supervised segmentation of COVID-19
Huang et al. Recent advances in medical image processing
Ovi et al. Infection segmentation from covid-19 chest ct scans with dilated cbam u-net
Prasad et al. Lung cancer detection and classification using deep neural network based on hybrid metaheuristic algorithm
Moghaddam et al. Towards smart diagnostic methods for COVID-19: Review of deep learning for medical imaging
Baskaran et al. MSRFNet for skin lesion segmentation and deep learning with hybrid optimization for skin cancer detection
Patil et al. Auto segmentation of lung in non-small cell lung cancer using deep convolution neural network
Girma Identify animal lumpy skin disease using image processing and machine learning
Tang et al. Automatic CT lesion detection based on feature pyramid inference with multi-scale response

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant