CN110610488A - Classification training and detecting method and device - Google Patents

Classification training and detecting method and device

Info

Publication number
CN110610488A
CN110610488A
Authority
CN
China
Prior art keywords
disease
positive
negative
classification
species
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910810538.1A
Other languages
Chinese (zh)
Inventor
刘维平
房劬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Xingmai Information Technology Co Ltd
Original Assignee
Shanghai Xingmai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Xingmai Information Technology Co Ltd filed Critical Shanghai Xingmai Information Technology Co Ltd
Priority to CN201910810538.1A priority Critical patent/CN110610488A/en
Publication of CN110610488A publication Critical patent/CN110610488A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10116X-ray image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

For a disease species with a specific onset location, a first human medical image is input into the corresponding image segmentation model to obtain a mask of the target region to which the onset location belongs; the mask of the target region is used to block out the irrelevant regions of the first human medical image; the masked image is then input into a classification network for the disease species to obtain a classification result indicating whether the disease species is positive or negative. Compared with the prior art, the method and device use the mask of the target region to block out irrelevant regions of a human chest medical image and then classify the masked image with a disease-specific classification network. By excluding irrelevant, interfering information, classification performance for disease species that have a specific onset location is effectively improved.

Description

Classification training and detecting method and device
Technical Field
The invention relates to the technical field of medical image processing, and in particular to techniques for detecting disease species that have a specific onset location.
Background
In the prior art, the diagnosis of many diseases from X-ray radiographs still depends on manual film reading. This places high demands on a physician's personal experience and ability; manual reading is also costly and time-consuming, and is easily affected by human factors such as the physician's condition.
With the rapid development of artificial intelligence, and of deep learning in particular, many researchers have tried to apply these techniques to the diagnosis of medical images. For disease-species detection, however, feeding a chest X-ray image directly into a conventional multi-class network (such as Inception or ResNet) does not yield satisfactory results. The reason is that such methods do not account for the image-based diagnostic criteria of different disease species, and therefore erroneously introduce a large amount of irrelevant interference.
Disclosure of Invention
The invention aims to provide a method, an apparatus, a computing device, a computer-readable storage medium and a computer program product for classification training and detection of disease species having a specific onset location.
According to an aspect of the present invention, there is provided a model training method, wherein the method comprises:
for a disease species with a specific onset location, inputting positive and negative sample images (indicating that the disease species is positive and negative, respectively) into a classification network to train the classification network;
wherein, in both the positive and negative sample images, the regions irrelevant to the target region to which the onset location belongs have been masked out;
and obtaining the trained classification network, whose classification result indicates whether the disease species is positive or negative.
According to another aspect of the present invention, there is also provided a method for detecting a disease species having a specific disease location, wherein the method further comprises:
for a disease species with a specific onset location, inputting a first human medical image into the corresponding image segmentation model to obtain a mask of the target region to which the onset location belongs;
using the mask of the target region to block out the irrelevant regions of the first human medical image;
inputting the masked first human medical image into a classification network for the disease species to obtain a classification result indicating whether the disease species is positive or negative;
wherein the classification network is trained on positive and negative sample images whose irrelevant regions have been masked out using the mask of the target region; a positive sample image indicates that the disease species is positive, and a negative sample image indicates that it is negative.
According to an aspect of the present invention, there is also provided a model training apparatus, wherein the apparatus comprises:
a training device for, given a disease species with a specific onset location, inputting positive and negative sample images (indicating that the disease species is positive and negative, respectively) into a classification network to train it, thereby obtaining a trained classification network whose classification result indicates whether the disease species is positive or negative;
wherein, in both the positive and negative sample images, the regions irrelevant to the target region to which the onset location belongs have been masked out.
According to another aspect of the present invention, there is also provided an apparatus for detecting a disease species having a specific disease onset position, wherein the apparatus comprises:
a segmentation device for, given a disease species with a specific onset location, inputting a first human medical image into the corresponding image segmentation model to obtain a mask of the target region to which the onset location belongs;
a masking device for using the mask of the target region to block out the irrelevant regions of the first human medical image;
a classification device for inputting the masked first human medical image into a classification network for the disease species to obtain a classification result indicating whether the disease species is positive or negative;
wherein the classification network is trained on positive and negative sample images whose irrelevant regions have been masked out using the mask of the target region; a positive sample image indicates that the disease species is positive, and a negative sample image indicates that it is negative.
According to an aspect of the present invention, there is also provided a computing device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements a model training method according to an aspect of the present invention when executing the computer program.
According to an aspect of the present invention, there is also provided a computing device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the computer program when executed by the processor implements a method of detecting a disease type having a specific disease location according to another aspect of the present invention.
According to an aspect of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements a model training method according to an aspect of the present invention.
According to an aspect of the invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements a method of detecting a disease species having a particular disease location according to another aspect of the invention.
According to an aspect of the invention, there is also provided a computer program product which, when executed by a computing device, implements a model training method according to an aspect of the invention.
According to an aspect of the invention, there is also provided a computer program product which, when executed by a computing device, implements a method of detecting a disease species having a particular disease location according to another aspect of the invention.
Compared with the prior art, the method and device use the mask of the target region to block out irrelevant regions of a human chest medical image and then classify the masked image with a disease-specific classification network. By excluding irrelevant, interfering information, classification performance for disease species that have a specific onset location is effectively improved.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments made with reference to the following drawings:
FIG. 1 illustrates a flow diagram of a method of training a classification network for detecting disease species having a particular disease location, according to one embodiment of the invention;
FIG. 2 illustrates a flow diagram of a method for detecting a disease species having a particular disease location, in accordance with one embodiment of the present invention;
FIG. 3 shows a schematic diagram of an apparatus for training a classification network for detecting disease species having a particular disease location, according to one embodiment of the invention;
fig. 4 shows a schematic diagram of an apparatus for detecting a disease species having a specific disease location according to an embodiment of the present invention.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments of the present invention are described as an apparatus represented by a block diagram and a process or method represented by a flow diagram. Although a flowchart depicts a sequence of process steps in the present invention, many of the operations can be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations may be re-arranged. The process of the present invention may be terminated when its operations are performed, but may include additional steps not shown in the flowchart. The processes of the present invention may correspond to methods, functions, procedures, subroutines, and the like.
The methods illustrated by the flow diagrams and apparatus illustrated by the block diagrams discussed below may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine or computer readable medium such as storage medium. The processor(s) may perform the necessary tasks.
Similarly, it will be further appreciated that any flow charts, flow diagrams, state transition diagrams, and the like represent various processes which may be substantially described as program code stored in computer readable media and so executed by a computing device or processor, whether or not such computing device or processor is explicitly shown.
As used herein, the term "storage medium" may refer to one or more devices for storing data, including Read Only Memory (ROM), Random Access Memory (RAM), magnetic RAM, kernel memory, magnetic disk storage media, optical storage media, flash memory devices, and/or other machine-readable media for storing information. The term "computer-readable medium" can include, but is not limited to portable or fixed storage devices, optical storage devices, and various other mediums capable of storing and/or containing instructions and/or data.
A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program descriptions. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, information passing, token passing, network transmission, etc.
The term "computing device" in this context refers to an electronic device that can perform predetermined processes such as numerical calculations and/or logical calculations by executing predetermined programs or instructions, and may include at least a processor and a memory, wherein the predetermined processes are performed by the processor executing program instructions prestored in the memory, or by hardware such as ASIC, FPGA, DSP, or by a combination of the above two.
The "computing device" described above is typically embodied in the form of a general purpose computing device, whose components may include, but are not limited to: one or more processors or processing units, system memory. The system memory may include computer readable media in the form of volatile memory, such as Random Access Memory (RAM) and/or cache memory. "computing device" may further include other removable/non-removable, volatile/nonvolatile computer-readable storage media. The memory may include at least one computer program product having a set (e.g., at least one) of program modules that are configured to perform the functions and/or methods of embodiments of the present invention. The processor executes various functional applications and data processing by executing programs stored in the memory.
For example, a computer program for executing the functions and processes of the present invention is stored in the memory, and when the processor executes the computer program, the detection of a disease type having a specific disease location is realized in the present invention.
Typically, the computing devices include, for example, user devices and network devices. Wherein the user equipment includes but is not limited to a Personal Computer (PC), a notebook computer, a mobile terminal, etc., and the mobile terminal includes but is not limited to a smart phone, a tablet computer, etc.; the network device includes, but is not limited to, a single network server, a server group consisting of a plurality of network servers, or a Cloud Computing (Cloud Computing) based Cloud consisting of a large number of computers or network servers, wherein Cloud Computing is one of distributed Computing, a super virtual computer consisting of a collection of loosely coupled computers. Wherein the computing device is capable of operating alone to implement the invention, or of accessing a network and performing the invention by interoperating with other computing devices in the network. The network in which the computing device is located includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and the like.
It should be noted that the user devices, network devices, networks, etc. are merely examples, and other existing or future computing devices or networks may be suitable for the present invention, and are included in the scope of the present invention and are incorporated by reference herein.
Specific structural and functional details disclosed herein are merely representative and are provided for purposes of describing example embodiments of the present invention. The present invention may, however, be embodied in many alternate forms and should not be construed as limited to only the embodiments set forth herein.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element may be termed a second element, and, similarly, a second element may be termed a first element, without departing from the scope of example embodiments. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed substantially concurrently, or the figures may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
The present invention is described in further detail below with reference to the attached drawing figures.
FIG. 1 shows a flow diagram of a process for training a classification network for detecting disease species with a specific onset location, according to one embodiment of the present invention.
Typically, the invention is implemented by a computing device. When a general-purpose computing device is configured with the program modules embodying the invention, it becomes a special-purpose device for training a classification network to detect disease species with a specific onset location, rather than any general-purpose computer or processor. Such a device is hereinafter referred to as the "training device".
As shown in fig. 1, in step S102, for a disease species with a specific onset location, the training device inputs positive and negative sample images (indicating that the disease species is positive and negative, respectively) into the classification network to train it; in step S104, the training device obtains the trained classification network, whose classification result indicates whether the disease species is positive or negative.
Specifically, in step S102, for a disease species with a specific onset location, the training device inputs positive and negative sample images (indicating that the disease species is positive and negative, respectively) into the classification network to train it.
In both the positive and negative sample images, the regions unrelated to the target region to which the onset location belongs have been masked out.
Here, for a disease species with a specific onset location, the target region to which the onset location belongs can be identified in the positive and negative sample images by an image segmentation model.
For disease species with specific onset locations, a dedicated image segmentation model can be trained for the target region to which each onset location belongs. For example, pleural effusion and pneumothorax both arise in the pleural cavity, while aortic calcification arises in the aorta; when a human medical image is input into the corresponding trained segmentation model, a mask of the matching target region is obtained, for example a pleural cavity mask or an aorta mask. Depending on the onset location, the input may be a medical image of any part of the human body, such as a brain or chest medical image. The invention is described below using the detection of human chest medical images as an example; this is intended only to illustrate the invention and should not be construed as limiting it in any way.
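As a small illustrative sketch (the mapping values come from the examples in the preceding paragraph; the dictionary and function names are hypothetical, not part of the invention), the correspondence between a disease species and the target region whose segmentation model should be applied could be expressed as:

```python
# Hypothetical lookup table: each disease species with a specific onset
# location maps to the anatomical target region whose dedicated
# segmentation model should be applied (examples from the text above).
TARGET_REGION = {
    "pleural effusion": "pleural cavity",
    "pneumothorax": "pleural cavity",
    "aortic calcification": "aorta",
}

def segmentation_target(disease_species):
    """Return the target region for a known disease species, else None."""
    return TARGET_REGION.get(disease_species)

print(segmentation_target("pneumothorax"))  # pleural cavity
```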
Here, image segmentation models based on deep neural networks that can be used in the present invention include, for example, the FCN (Fully Convolutional Network) and U-Net models. Deep learning is a family of machine learning algorithms that use multiple nonlinear transformations to build multi-layer abstractions of data; such models learn not only the nonlinear mapping between input and output but also the hidden structure of the input data, so that new samples can be recognized or predicted intelligently.
By inputting the sample image labeled with the target region in advance, the image segmentation model based on the deep learning can be trained to identify the specific target region.
For example, a pleural cavity image segmentation model can be obtained by training a U-Net model on sample images in which the pleural cavity region is labeled "1" and all other regions are labeled "0".
After the pleural cavity segmentation model is trained, the training device can input a human chest medical image (e.g., the positive and negative sample images) into it to obtain a pleural cavity mask. The mask is itself still an image, a human chest medical image in which the pleural region is labeled "1" and all other regions "0", but it can equally be represented as a two-dimensional matrix.
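For illustration, such a mask can be held as a 0/1 two-dimensional matrix of the same size as the image. A toy 3x3 sketch with made-up values, not real segmentation output:

```python
import numpy as np

# Toy 3x3 pleural-cavity mask: "1" marks pixels inside the pleural
# cavity, "0" marks all other regions, as described in the text.
pleural_mask = np.array([
    [0, 1, 0],
    [1, 1, 1],
    [0, 1, 0],
], dtype=np.uint8)

# The mask can be treated either as an image or as a plain 2-D matrix.
print(pleural_mask.shape)        # (3, 3)
print(int(pleural_mask.sum()))   # 5 pixels belong to the pleural cavity
```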
For the same disease species, after the irrelevant regions of the positive and negative sample images have been blocked out using the mask of that disease species' target region, the masked sample images are input into the classification network to train it; a positive sample image indicates that the disease species is positive, and a negative sample image indicates that it is negative.
Subsequently, in step S104, the training device obtains the trained classification network, whose classification result indicates whether the disease species is positive or negative.
Here, deep-learning-based classification networks that can be used in the present invention include, for example, Inception, Xception, ResNet (Residual Network) and DenseNet (Densely Connected Convolutional Network).
The classification network is trained with the positive and negative sample images. A positive sample image indicates that the disease species is positive, and a negative sample image indicates that it is negative. Each sample image is a human chest medical image whose irrelevant regions have first been masked out: the gray values of the target region are preserved, while all other pixels are set to 0. The target region corresponds to the disease species indicated by the sample image.
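The masking step itself is a simple element-wise product: multiplying the image by the 0/1 mask preserves the gray values of the target region and zeroes everything else. A minimal sketch with made-up pixel values:

```python
import numpy as np

# Toy grayscale "chest image" and binary target-region mask
# (hypothetical values, for illustration only).
image = np.array([[12, 34],
                  [56, 78]], dtype=np.uint8)
mask = np.array([[0, 1],
                 [1, 0]], dtype=np.uint8)

# Element-wise multiplication: target-region gray values are kept,
# irrelevant pixels become 0.
masked = image * mask
print(masked.tolist())  # [[0, 34], [56, 0]]
```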
Take pleural effusion as an example again. In the masked chest X-ray samples, only the pleural region retains its gray values; positive samples indicate positive pleural effusion and negative samples indicate negative pleural effusion. Once the classification network has been trained on such samples, it can be used to classify pleural effusion, i.e., to decide whether pleural effusion is positive or negative.
During training, for each sample image, the classification network computes the probability that pleural effusion is positive and the probability that it is negative. Each classification result has a corresponding probability threshold, chosen according to the needs of the specific application. For example, if misreporting pleural effusion as negative is the greater concern, a larger threshold, such as 0.7, can be set for the negative class: the result is reported as negative only when the computed probability of a negative exceeds 0.7. The threshold can be chosen based on the recall (sensitivity) and specificity of the classification network for pleural effusion, so that both reach clinically acceptable levels at that threshold.
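The thresholded decision described above can be sketched as follows. The 0.7 threshold comes from the example in the text; the function name and interface are illustrative assumptions, not the patent's actual implementation:

```python
def classify_pleural_effusion(p_negative, negative_threshold=0.7):
    """Report "negative" only when the network's probability for the
    negative class exceeds the stricter threshold; otherwise report
    "positive". This biases the system against missed findings.
    """
    return "negative" if p_negative > negative_threshold else "positive"

print(classify_pleural_effusion(0.75))  # negative
print(classify_pleural_effusion(0.55))  # positive: 0.55 does not exceed 0.7
```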
FIG. 2 shows a flow diagram of a process for detecting a disease species with a specific onset location, according to another embodiment of the present invention.
Typically, the invention is implemented by a computing device. When a general-purpose computing device is configured with the program modules embodying the invention, it becomes a special-purpose device for detecting disease species with a specific onset location, rather than any general-purpose computer or processor. Such a device is hereinafter referred to as the "detection device".
As shown in fig. 2, in step S202, for a disease species with a specific onset location, the detection device inputs a first human medical image into the corresponding image segmentation model to obtain a mask of the target region to which the onset location belongs; in step S204, the detection device uses the mask of the target region to block out the irrelevant regions of the first human medical image; in step S206, the detection device inputs the masked first human medical image into a classification network for the disease species to obtain a classification result indicating whether the disease species is positive or negative. The classification network is trained on positive and negative sample images whose irrelevant regions have been masked out using the mask of the target region; a positive sample image indicates that the disease species is positive, and a negative sample image indicates that it is negative.
Specifically, in step S202, for a disease species with a specific onset location, the detection device inputs the first human medical image into the corresponding image segmentation model to obtain a mask of the target region to which the onset location belongs.
For example, for pleural effusion, the detection device enters a chest X-ray image into the trained pleural cavity image segmentation model to obtain a pleural cavity mask.
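Putting steps S202 to S206 together, the detection pipeline can be sketched as below. The `segment` and `classify` callables stand in for the trained segmentation model and classification network; the threshold-based stand-ins are purely illustrative assumptions, not real networks:

```python
import numpy as np

def detect(image, segment, classify):
    """Sketch of the detection flow: S202 segment the target region,
    S204 mask irrelevant pixels, S206 classify the masked image."""
    mask = segment(image)    # S202: 0/1 mask of the target region
    masked = image * mask    # S204: zero out irrelevant pixels
    return classify(masked)  # S206: "positive" or "negative"

# Illustrative stand-ins for the trained models (hypothetical logic):
segment = lambda img: (img > 50).astype(img.dtype)
classify = lambda img: "positive" if int(img.sum()) > 200 else "negative"

img = np.array([[10, 60],
                [70, 80]], dtype=np.uint8)
print(detect(img, segment, classify))  # positive
```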
Here, for disease species with specific disease onset positions, a dedicated image segmentation model can be trained for the target region to which each disease onset position belongs. For example, the disease onset positions of pleural effusion and pneumothorax are both in the pleural cavity, while the disease onset position of aortic calcification is in the aorta; accordingly, when a human medical image is input into the trained image segmentation model, a mask corresponding to the target region can be obtained, such as a mask of the pleural cavity or a mask of the aorta. Depending on the disease onset position, the input human medical image can be a medical image of any part of the human body, such as a brain medical image or a chest medical image. The present invention will be described below by taking the detection of human chest medical images as an example, but this is only for the purpose of illustration and should not be construed as limiting the present invention in any way.
Here, the deep-learning neural-network image segmentation models that can be used in the present invention include, for example, the FCN (Fully Convolutional Network) algorithm model and the U-net algorithm model. Deep learning is a family of algorithms in the field of machine learning that attempt to abstract data at multiple levels through multiple nonlinear transformations; such models learn not only the nonlinear mapping between input and output but also the hidden structure of the input data, so as to perform intelligent identification or prediction on new samples.
By inputting the sample image labeled with the target region in advance, the image segmentation model based on the deep learning can be trained to identify the specific target region.
For example, the pleural cavity image segmentation model can be obtained by training a U-net algorithm model through sample images. The pleural cavity region is labeled as "1" in the sample image, and the other regions are labeled as "0".
After the trained pleural cavity image segmentation model is obtained, the detection device inputs a human chest medical image into the segmentation model, thereby obtaining a mask of the pleural cavity. The pleural cavity mask is in nature still an image, namely a human chest medical image in which the pleural cavity region is labeled "1" and the other regions are labeled "0"; it may equally be represented as a two-dimensional matrix array.
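As noted, the mask is an image in nature but can equally be handled as a two-dimensional matrix. A minimal sketch of this representation (the 4x4 array and its values are purely illustrative, not from the patent):

```python
import numpy as np

# A hypothetical 4x4 mask of the target region: the pleural cavity
# pixels are labeled 1, all other regions 0, as described above.
mask = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
])

# The mask is an image in nature, but is equally a two-dimensional
# matrix array: shape and pixel counts fall out directly.
print(mask.shape)       # → (4, 4)
print(int(mask.sum()))  # → 4  (pixels inside the target region)
```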
In step S204, the detection device masks the extraneous region in the first human medical image by using the mask of the target region.
Here, the mask of the target region is also an image in nature, except that the target region is labeled as "1" and the other regions are labeled as "0".
The detection device multiplies the mask of the target region by the first human medical image, so that the gray values of the target region in the first human medical image remain unchanged while the gray values of the other regions become "0"; irrelevant regions outside the target region are thereby masked.
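The masking step described here amounts to an element-wise multiplication of the image with the binary mask. A minimal NumPy sketch, with hypothetical pixel values chosen only for illustration:

```python
import numpy as np

# Hypothetical grayscale image and target-region mask (values are
# illustrative only, not from the patent).
image = np.array([
    [10, 20, 30, 40],
    [50, 60, 70, 80],
    [15, 25, 35, 45],
    [55, 65, 75, 85],
])
mask = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
])

# Element-wise multiplication: gray values inside the target region
# are unchanged, every irrelevant pixel becomes 0.
masked = image * mask
print(int(masked.sum()))  # → 190 (only 60 + 70 + 25 + 35 survive)
```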
In step S206, the detection device inputs the first human medical image after the irrelevant area is masked into the classification network of the corresponding disease category to obtain a classification detection result of the disease category, wherein the classification detection result indicates that the disease category is positive or negative.
For example, the detection device may input a chest X-ray image in which only the pleural cavity region is retained (with irrelevant regions masked) into the classification network for pleural effusion and obtain the corresponding classification detection result, such as an indication that pleural effusion is positive. The classification detection result can provide the doctor with auxiliary diagnosis and analysis of the disease species.
Here, the deep-learning-based classification networks that can be used in the present invention include, for example, Inception, Xception, ResNet (Residual Network), DenseNet (Densely Connected Convolutional Network), and the like.
The classification network is trained using the positive and negative sample images. A positive sample image indicates that the disease species is positive, and a negative sample image indicates that it is negative. Each sample image is a human chest medical image whose irrelevant areas have first been masked: the gray values of the target region are retained while the gray values of all other, irrelevant regions are set to 0. The target region corresponds to the disease species indicated by the sample image.
Take pleural effusion again as an example. Only the pleural cavity region in the masked chest X-ray image sample has gray values; a positive sample indicates that pleural effusion is positive, and a negative sample indicates that it is negative. After the classification network is trained on these sample images, it can be used for classification detection of pleural effusion, i.e., determining that pleural effusion is positive or negative.
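The positive/negative training described here can be sketched with a tiny stand-in classifier; this is logistic regression on synthetic flattened "images", not the patent's deep classification network, and the data, learning rate, and iteration count are all assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-in for masked sample images: 100 flattened "images"
# of 16 pixels each; label 1 = disease species positive, 0 = negative.
X = rng.normal(size=(100, 16))
w_true = rng.normal(size=16)
y = (X @ w_true > 0).astype(float)

# Train a logistic-regression classifier by full-batch gradient descent.
w = np.zeros(16)
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w)                # predicted positive probability
    w -= lr * X.T @ (p - y) / len(y)  # gradient of the log loss

accuracy = float(((sigmoid(X @ w) > 0.5) == y).mean())
print(round(accuracy, 2))
```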
During training, for each sample image the classification network calculates the probability that pleural effusion is positive and the probability that it is negative. Each classification result has a corresponding probability threshold, chosen according to the needs of the specific application. For example, if erroneous determinations of negative pleural effusion are of greater concern, a larger probability threshold, such as 0.7, may be set for the negative-pleural-effusion probability; that is, only when the probability of negative pleural effusion calculated from the input sample image exceeds 0.7 is the classification result determined to be negative. Here, the probability threshold may be determined according to the recall (sensitivity) and the specificity of the classification network for pleural effusion, so that both reach clinical expectations at that threshold.
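The thresholded decision rule above might look as follows; the function name, the 0.7 default, and the strict "exceeds" comparison are illustrative assumptions, not the patent's exact rule:

```python
def classify_pleural_effusion(p_negative, threshold=0.7):
    """Report 'negative' only when the network's negative-class
    probability exceeds the threshold; otherwise err on the side
    of reporting 'positive'."""
    return "negative" if p_negative > threshold else "positive"

print(classify_pleural_effusion(0.85))  # → negative
print(classify_pleural_effusion(0.55))  # → positive
```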
FIG. 3 shows a schematic diagram of an apparatus embodying the present invention, which shows an apparatus for training a classification network for detecting disease species having a particular disease location.
Typically, the apparatus of the present invention can be implemented as a functional module in any general-purpose computing device. When a general purpose computing device is configured with the apparatus of the present invention, it will become a dedicated device for training a classification network for detecting disease species having a particular disease location, rather than any general purpose computer or processor. However, it will be appreciated by those skilled in the art that the foregoing description is intended merely to illustrate that the apparatus of the present invention can be applied to any general purpose computing device, and when the apparatus of the present invention is applied to a general purpose computing device, the general purpose computing device becomes a specific device for implementing the present invention for training a classification network for detecting disease species having a specific disease location, hereinafter referred to as "training device", and the apparatus of the present invention may also be referred to as "model training apparatus". Also, the "model training means" may be implemented in a computer program, hardware, or a combination thereof.
As shown in FIG. 3, the model training device 320 is incorporated into the training apparatus 300. The model training device 320 further comprises a training device 321.
For a disease species with a specific disease position, the training device 321 inputs positive and negative sample images respectively indicating that the disease species is positive and negative into a classification network to train the classification network, so as to obtain the trained classification network, and the classification detection result indicates that the disease species is positive or negative.
Both the positive and negative sample images are masked to regions unrelated to the target region to which the onset position belongs.
Here, the training device 321 may invoke the classification network through an interface, or may directly integrate the classification network.
In this case, for a disease type having a specific disease position, the target region to which the disease position belongs in the positive and negative sample images can be identified by the image segmentation model.
For disease species with specific disease onset positions, a dedicated image segmentation model can be trained for the target region to which each disease onset position belongs. For example, the disease onset positions of pleural effusion and pneumothorax are both in the pleural cavity, while the disease onset position of aortic calcification is in the aorta; accordingly, when a human chest medical image is input into the trained image segmentation model, a mask corresponding to the target region can be obtained, such as a mask of the pleural cavity or a mask of the aorta. Depending on the disease onset position, the input human medical image can be a medical image of any part of the human body, such as a brain medical image or a chest medical image.
Here, the deep-learning neural-network image segmentation models that can be used in the present invention include, for example, the FCN (Fully Convolutional Network) algorithm model and the U-net algorithm model. Deep learning is a family of algorithms in the field of machine learning that attempt to abstract data at multiple levels through multiple nonlinear transformations; such models learn not only the nonlinear mapping between input and output but also the hidden structure of the input data, so as to perform intelligent identification or prediction on new samples.
By inputting the sample image labeled with the target region in advance, the image segmentation model based on the deep learning can be trained to identify the specific target region.
For example, the pleural cavity image segmentation model can be obtained by training a U-net algorithm model through sample images. The pleural cavity region is labeled as "1" in the sample image, and the other regions are labeled as "0".
After the trained pleural cavity image segmentation model is thus obtained, a human chest medical image (e.g., a positive or negative sample image) is input into the segmentation model, thereby obtaining a mask of the pleural cavity. The pleural cavity mask is in nature still an image, namely a human chest medical image in which the pleural cavity region is labeled "1" and the other regions are labeled "0"; it may equally be represented as a two-dimensional matrix array.
Still for the same disease species used in image segmentation, after the irrelevant areas in the positive and negative sample images are masked using the mask of that disease species' target region, the training device 321 inputs the masked positive and negative sample images into the classification network to train it, wherein a positive sample image indicates that the disease species is positive and a negative sample image indicates that it is negative. The trained classification network is thus obtained, whose classification detection result indicates that the disease species is positive or negative.
Here, the deep-learning-based classification networks that can be used in the present invention include, for example, Inception, Xception, ResNet (Residual Network), DenseNet (Densely Connected Convolutional Network), and the like.
The classification network is trained using the positive and negative sample images. A positive sample image indicates that the disease species is positive, and a negative sample image indicates that it is negative. Each sample image is a human chest medical image whose irrelevant areas have first been masked: the gray values of the target region are retained while the gray values of all other, irrelevant regions are set to 0. The target region corresponds to the disease species indicated by the sample image.
Take pleural effusion again as an example. Only the pleural cavity region in the masked chest X-ray image sample has gray values; a positive sample indicates that pleural effusion is positive, and a negative sample indicates that it is negative. After the classification network is trained on these sample images, it can be used for classification detection of pleural effusion, i.e., determining that pleural effusion is positive or negative.
During training, for each sample image the classification network calculates the probability that pleural effusion is positive and the probability that it is negative. Each classification result has a corresponding probability threshold, chosen according to the needs of the specific application. For example, if erroneous determinations of negative pleural effusion are of greater concern, a larger probability threshold, such as 0.7, may be set for the negative-pleural-effusion probability; that is, only when the probability of negative pleural effusion calculated from the input sample image exceeds 0.7 is the classification result determined to be negative. Here, the probability threshold may be determined according to the recall (sensitivity) and the specificity of the classification network for pleural effusion, so that both reach clinical expectations at that threshold.
Fig. 4 shows a schematic view of an apparatus according to another embodiment of the invention, which particularly shows an apparatus for detecting a disease species having a specific disease location.
Typically, the apparatus of the present invention can be implemented as a functional module in any general-purpose computing device. When a general purpose computing device is configured with the apparatus of the present invention, it will become a specific purpose computing device for detecting a disease type having a specific disease location, rather than any general purpose computer or processor. However, it will be appreciated by those skilled in the art that the foregoing description is intended merely to illustrate that the apparatus of the present invention can be applied to any general purpose computing device, and when the apparatus of the present invention is applied to a general purpose computing device, the general purpose computing device becomes a specific device for detecting a disease type having a specific disease location, hereinafter referred to as "detecting device", and the apparatus of the present invention can also be referred to as "detecting apparatus" accordingly. Also, the "detecting means" may be implemented in a computer program, hardware, or a combination thereof.
As shown in fig. 4, the detecting device 420 is disposed in the computing apparatus 400. The detecting means 420 further comprises segmenting means 421, shielding means 422 and sorting means 423.
Here, the segmentation device 421 may invoke the image segmentation model through an interface, or may directly integrate the image segmentation model. Similarly, the classification device 423 may invoke the classification network through an interface, or may directly integrate the classification network.
For a disease type with a specific disease position, the segmentation device 421 inputs the first human medical image into the corresponding image segmentation model to obtain a mask of the target region to which the disease position belongs.
For example, for pleural effusion, the segmentation apparatus 421 inputs a chest X-ray image into the trained pleural cavity image segmentation model to obtain the pleural cavity mask.
Here, for disease species with specific disease onset positions, a dedicated image segmentation model can be trained for the target region to which each disease onset position belongs. For example, the disease onset positions of pleural effusion and pneumothorax are both in the pleural cavity, while the disease onset position of aortic calcification is in the aorta; accordingly, when a human medical image is input into the trained image segmentation model, a mask corresponding to the target region can be obtained, such as a mask of the pleural cavity or a mask of the aorta. Depending on the disease onset position, the input human medical image can be a medical image of any part of the human body, such as a brain medical image or a chest medical image. The present invention will be described below by taking the detection of human chest medical images as an example, but this is only for the purpose of illustration and should not be construed as limiting the present invention in any way.
Here, the deep-learning neural-network image segmentation models that can be used in the present invention include, for example, the FCN (Fully Convolutional Network) algorithm model and the U-net algorithm model. Deep learning is a family of algorithms in the field of machine learning that attempt to abstract data at multiple levels through multiple nonlinear transformations; such models learn not only the nonlinear mapping between input and output but also the hidden structure of the input data, so as to perform intelligent identification or prediction on new samples.
By inputting the sample image labeled with the target region in advance, the image segmentation model based on the deep learning can be trained to identify the specific target region.
For example, the pleural cavity image segmentation model can be obtained by training a U-net algorithm model through sample images. The pleural cavity region is labeled as "1" in the sample image, and the other regions are labeled as "0".
After the trained pleural cavity image segmentation model is obtained, the segmentation device 421 inputs a human chest medical image into the segmentation model, thereby obtaining a mask of the pleural cavity. The pleural cavity mask is in nature still an image, namely a human chest medical image in which the pleural cavity region is labeled "1" and the other regions are labeled "0"; it may equally be represented as a two-dimensional matrix array.
Subsequently, the shielding device 422 masks the irrelevant region in the first human medical image by using the mask of the target region.
Here, the mask of the target region is also an image in nature, except that the target region is labeled as "1" and the other regions are labeled as "0".
The shielding device 422 multiplies the mask of the target region by the first human medical image, so that the gray values of the target region in the first human medical image remain unchanged while the gray values of the other regions become "0"; irrelevant regions outside the target region are thereby masked.
Next, the classification device 423 inputs the first human medical image after the irrelevant area is shielded into the classification network of the corresponding disease category to obtain a classification detection result of the disease category, wherein the classification detection result indicates that the disease category is positive or negative.
For example, the classification device 423 inputs a chest X-ray image in which only the pleural cavity region is retained (with irrelevant regions masked) into the classification network for pleural effusion and obtains the corresponding classification detection result, such as an indication that pleural effusion is positive. The classification detection result can provide the doctor with auxiliary diagnosis and analysis of the disease species.
Here, the deep-learning-based classification networks that can be used in the present invention include, for example, Inception, Xception, ResNet (Residual Network), DenseNet (Densely Connected Convolutional Network), and the like.
The classification network is trained using the positive and negative sample images. A positive sample image indicates that the disease species is positive, and a negative sample image indicates that it is negative. Each sample image is a human chest medical image whose irrelevant areas have first been masked: the gray values of the target region are retained while the gray values of all other, irrelevant regions are set to 0. The target region corresponds to the disease species indicated by the sample image.
Take pleural effusion again as an example. Only the pleural cavity region in the masked chest X-ray image sample has gray values; a positive sample indicates that pleural effusion is positive, and a negative sample indicates that it is negative. After the classification network is trained on these sample images, it can be used for classification detection of pleural effusion, i.e., determining that pleural effusion is positive or negative.
During training, for each sample image the classification network calculates the probability that pleural effusion is positive and the probability that it is negative. Each classification result has a corresponding probability threshold, chosen according to the needs of the specific application. For example, if erroneous determinations of negative pleural effusion are of greater concern, a larger probability threshold, such as 0.7, may be set for the negative-pleural-effusion probability; that is, only when the probability of negative pleural effusion calculated from the input sample image exceeds 0.7 is the classification result determined to be negative. Here, the probability threshold may be determined according to the recall (sensitivity) and the specificity of the classification network for pleural effusion, so that both reach clinical expectations at that threshold.
According to the various embodiments described above, the following clauses are proposed:
clause 1. a model training method, wherein the method comprises:
inputting positive and negative sample images respectively indicating that a disease species is positive and negative to a classification network aiming at the disease species with a specific disease position so as to train the classification network;
wherein, the positive and negative sample images are shielded in areas which are irrelevant to the target area to which the disease incidence position belongs;
and obtaining the trained classification network, wherein the classification detection result indicates that the disease species is positive or negative.
Clause 2. the method of clause 1, wherein the masking operation specifically comprises:
and reserving the gray value of the target area, and setting the gray values of other areas in the positive and negative sample images as '0'.
Clause 3. the method of clause 1 or 2, wherein the target region in the positive and negative sample images is identified by an image segmentation model.
Clause 4. the method of clause 3, wherein the image segmentation model is obtained by training:
inputting the human medical image sample pre-labeled with the target area into the image segmentation model to obtain the trained image segmentation model, wherein the target area is labeled as '1', and other areas are labeled as '0'.
Clause 5. the method of any of clauses 1-4, wherein the classification network is a deep learning based classification network.
Clause 6. a method for detecting, by a computing device, a disease species having a particular disease location, wherein the method further comprises:
aiming at a disease species with a specific disease position, inputting a first human medical image into a corresponding image segmentation model to obtain a mask of a target region to which the disease position belongs;
shielding an irrelevant area in the first human body medical image by using the mask of the target area;
inputting the first human medical image after being shielded from the irrelevant area into a classification network of the disease species to obtain a classification detection result of the disease species, wherein the classification detection result indicates that the disease species is positive or negative;
the classification network is obtained by inputting positive and negative sample images after the mask of the target area shields an irrelevant area to train, wherein the positive sample image indicates that the disease species is positive, and the negative sample image indicates that the disease species is negative.
Clause 7. the method of clause 6, wherein the image segmentation model is trained and obtained by:
inputting the human medical image sample pre-labeled with the target area into the image segmentation model to obtain the trained image segmentation model, wherein the target area is labeled as '1', and other areas are labeled as '0'.
Clause 8. the method of clause 6 or 7, wherein the masking operation specifically comprises:
and reserving the gray value of the target area, and setting the gray values of other areas in the first human body medical image as 0.
Clause 9. the method of any of clauses 6-8, wherein the classification network is a deep learning based classification network.
Clause 10. a model training apparatus, wherein the apparatus comprises:
the training device is used for inputting positive and negative sample images respectively indicating that the disease species are positive and negative into a classification network to train the classification network aiming at the disease species with a specific disease position, so as to obtain the trained classification network, and the classification detection result indicates that the disease species are positive or negative;
wherein both the positive and negative sample images are masked to regions unrelated to the target region to which the onset position belongs.
Clause 11. the apparatus of clause 10, wherein the masking operation specifically comprises:
and reserving the gray value of the target area, and setting the gray values of other areas in the positive and negative sample images as '0'.
Clause 12. the apparatus of clause 10 or 11, wherein the target region in the positive and negative sample images is identified by an image segmentation model.
Clause 13. the apparatus of clause 12, wherein the image segmentation model is obtained by training:
inputting the human chest medical image sample labeled with the target region in advance into the image segmentation model to obtain the trained image segmentation model, wherein the target region is labeled as '1', and other regions are labeled as '0'.
Clause 14. the apparatus of any one of clauses 10-13, wherein the classification network is a deep learning based classification network.
Clause 15. an apparatus for detecting, by a computing device, a disease species having a particular disease location, wherein the apparatus comprises:
the segmentation device is used for inputting the first human medical image into a corresponding image segmentation model aiming at a disease species with a specific disease occurrence position so as to obtain a mask of a target region to which the disease occurrence position belongs;
a shielding device for shielding an irrelevant area in the first human body medical image by using the mask of the target area;
a classification device, configured to input the first human medical image after the irrelevant area is masked into a classification network of the disease category to obtain a classification detection result of the disease category, where the classification detection result indicates that the disease category is positive or negative;
the classification network is obtained by inputting positive and negative sample images after the mask of the target area shields an irrelevant area to train, wherein the positive sample image indicates that the disease species is positive, and the negative sample image indicates that the disease species is negative.
Clause 16. the apparatus of clause 15, wherein the image segmentation model is trained and obtained by:
inputting the human chest medical image sample labeled with the target region in advance into the image segmentation model to obtain the trained image segmentation model, wherein the target region is labeled as '1', and other regions are labeled as '0'.
Clause 17. the apparatus of clause 15 or 16, wherein the masking operation specifically comprises:
and reserving the gray value of the target area, and setting the gray values of other areas in the first human body chest medical image as 0.
Clause 18. the apparatus of any one of clauses 15-17, wherein the classification network is a deep learning based classification network.
Clause 19. a computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method of any of clauses 1 to 5.
Clause 20. a computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of clauses 6-9 when executing the computer program.
Clause 21. a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method of any of clauses 1 to 5.
Clause 22. a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method of any of clauses 6 to 9.
Clause 23. a computer program product implementing the method of any one of clauses 1 to 5 when executed by a computer device.
Clause 24. a computer program product implementing the method of any one of clauses 6 to 9 when executed by a computer device.
It should be noted that the present invention may be implemented in software and/or in a combination of software and hardware, for example, as an Application Specific Integrated Circuit (ASIC), a general-purpose computer, or any other similar hardware device. In one embodiment, the software program of the present invention may be executed by a processor to implement the steps or functions described above. As such, the software program of the present invention (including associated data structures) may be stored in a computer-readable recording medium, such as RAM memory, a magnetic or optical drive, or a diskette. Additionally, some of the steps or functions of the present invention may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, at least a portion of the present invention may be implemented as a computer program product, such as computer program instructions, which, when executed by a computing device, may invoke or provide methods and/or aspects in accordance with the present invention through operation of the computing device. Program instructions which invoke/provide the methods of the present invention may be stored on fixed or removable recording media and/or transmitted via a data stream over a broadcast or other signal-bearing medium, and/or stored in a working memory of a computing device operating in accordance with the program instructions.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (10)

1. A method of model training, wherein the method comprises:
for a disease species having a specific onset position, inputting positive and negative sample images, which respectively indicate that the disease species is positive and negative, into a classification network so as to train the classification network;
wherein regions of the positive and negative sample images that are irrelevant to the target region to which the onset position belongs are masked;
and obtaining the trained classification network, whose classification detection result indicates that the disease species is positive or negative.
2. The method according to claim 1, wherein the masking operation specifically comprises:
retaining the gray values of the target region, and setting the gray values of the other regions in the positive and negative sample images to 0.
3. The method according to claim 1 or 2, wherein the target region in the positive and negative sample images is identified by an image segmentation model.
4. The method of any of claims 1-3, wherein the classification network is a deep learning based classification network.
5. A method of detecting a disease species having a specific onset position, wherein the method comprises:
for a disease species having a specific onset position, inputting a first human medical image into a corresponding image segmentation model to obtain a mask of the target region to which the onset position belongs;
masking irrelevant regions in the first human medical image with the mask of the target region;
inputting the first human medical image, with the irrelevant regions masked, into a classification network for the disease species to obtain a classification detection result of the disease species, wherein the classification detection result indicates that the disease species is positive or negative;
wherein the classification network is trained on positive and negative sample images whose irrelevant regions have been masked by the mask of the target region, a positive sample image indicating that the disease species is positive and a negative sample image indicating that the disease species is negative.
6. A model training apparatus, wherein the apparatus comprises:
a training device, configured to, for a disease species having a specific onset position, input positive and negative sample images, which respectively indicate that the disease species is positive and negative, into a classification network so as to train the classification network and obtain the trained classification network, whose classification detection result indicates that the disease species is positive or negative;
wherein regions of the positive and negative sample images that are irrelevant to the target region to which the onset position belongs are masked.
7. An apparatus for detecting a disease species having a specific onset position, wherein the apparatus comprises:
a segmentation device, configured to, for a disease species having a specific onset position, input a first human medical image into a corresponding image segmentation model to obtain a mask of the target region to which the onset position belongs;
a masking device, configured to mask irrelevant regions in the first human medical image with the mask of the target region;
a classification device, configured to input the first human medical image, with the irrelevant regions masked, into a classification network for the disease species to obtain a classification detection result of the disease species, wherein the classification detection result indicates that the disease species is positive or negative;
wherein the classification network is trained on positive and negative sample images whose irrelevant regions have been masked by the mask of the target region, a positive sample image indicating that the disease species is positive and a negative sample image indicating that the disease species is negative.
8. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 4 or the method of claim 5 when executing the computer program.
9. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 4 or the method of claim 5.
10. A computer program product which, when executed by a computer device, implements the method of any one of claims 1 to 4 or the method of claim 5.
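The masking operation recited in claim 2 — retain the gray values of the target region and zero out everything else — can be sketched as follows. This is an illustrative, non-authoritative example; the function name, array shapes, and toy values are assumptions, not taken from the patent:

```python
import numpy as np

def mask_irrelevant_regions(image: np.ndarray, target_mask: np.ndarray) -> np.ndarray:
    """Keep the gray values inside the target region and set the gray
    values of all other regions to 0 (the claim-2-style masking).

    image:       2-D gray-scale array (a sample or medical image)
    target_mask: binary array of the same shape; 1 marks the target
                 region to which the onset position belongs
    """
    return np.where(target_mask.astype(bool), image, 0)

# Toy 2x2 example: only the pixels where the mask is 1 survive.
image = np.array([[10, 20],
                  [30, 40]], dtype=np.uint8)
mask = np.array([[1, 0],
                 [0, 1]], dtype=np.uint8)
masked = mask_irrelevant_regions(image, mask)
print(masked.tolist())  # [[10, 0], [0, 40]]
```

In the pipeline of claims 3 and 5, `target_mask` would come from an image segmentation model, and the masked image would then be fed to the classification network in place of the original sample, so the network only sees the target region.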
CN201910810538.1A 2019-08-29 2019-08-29 Classification training and detecting method and device Pending CN110610488A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910810538.1A CN110610488A (en) 2019-08-29 2019-08-29 Classification training and detecting method and device

Publications (1)

Publication Number Publication Date
CN110610488A true CN110610488A (en) 2019-12-24

Family

ID=68890683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910810538.1A Pending CN110610488A (en) 2019-08-29 2019-08-29 Classification training and detecting method and device

Country Status (1)

Country Link
CN (1) CN110610488A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020585A (en) * 2012-11-06 2013-04-03 华南师范大学 Method for identifying positive cells and negative cells of immunologic tissue
CN103778444A (en) * 2014-01-07 2014-05-07 沈阳航空航天大学 Pulmonary nodule benign and malignant identification method based on support vector machine sample reduction
CN108629764A (en) * 2018-04-17 2018-10-09 杭州依图医疗技术有限公司 A kind of good pernicious method and device of determining Lung neoplasm

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU FENG et al.: "Nodule Segmentation Method Based on U-Net", Software Guide (《软件导刊》) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111383217A (en) * 2020-03-11 2020-07-07 深圳先进技术研究院 Visualization method, device and medium for evaluation of brain addiction traits
WO2021179189A1 (en) * 2020-03-11 2021-09-16 深圳先进技术研究院 Visualization method and device for evaluating brain addiction traits, and medium
CN111383217B (en) * 2020-03-11 2023-08-29 深圳先进技术研究院 Visual method, device and medium for brain addiction character evaluation
CN111476773A (en) * 2020-04-07 2020-07-31 重庆医科大学附属儿童医院 Auricle malformation analysis and identification method, system, medium and electronic terminal
CN113269782A (en) * 2021-04-21 2021-08-17 青岛小鸟看看科技有限公司 Data generation method and device and electronic equipment
US11995741B2 (en) 2021-04-21 2024-05-28 Qingdao Pico Technology Co., Ltd. Data generation method and apparatus, and electronic device

Similar Documents

Publication Publication Date Title
Fuhrman et al. A review of explainable and interpretable AI with applications in COVID‐19 imaging
CN110598782B (en) Method and device for training classification network for medical image
CN110610488A (en) Classification training and detecting method and device
Roth et al. A new 2.5 D representation for lymph node detection using random sets of deep convolutional neural network observations
Choi et al. Convolutional neural network technology in endoscopic imaging: artificial intelligence for endoscopy
Li et al. Dual-consistency semi-supervised learning with uncertainty quantification for COVID-19 lesion segmentation from CT images
Liu et al. 3DFPN-HS²: 3D Feature Pyramid Network Based High Sensitivity and Specificity Pulmonary Nodule Detection
Skouta et al. Automated binary classification of diabetic retinopathy by convolutional neural networks
Mokter et al. Classification of ulcerative colitis severity in colonoscopy videos using vascular pattern detection
Swapnarekha et al. Competitive deep learning methods for COVID-19 detection using X-ray images
Kumar et al. LiteCovidNet: A lightweight deep neural network model for detection of COVID‐19 using X‐ray images
Mustaqim et al. Deep learning for the detection of acute lymphoblastic leukemia subtypes on microscopic images: A systematic literature review
Kumar et al. Automated white corpuscles nucleus segmentation using deep neural network from microscopic blood smear
CN113177554B (en) Thyroid nodule identification and segmentation method, system, storage medium and equipment
CN114359671A (en) Multi-target learning-based ultrasonic image thyroid nodule classification method and system
Devi et al. Segmentation and classification of white blood cancer cells from bone marrow microscopic images using duplet-convolutional neural network design
Ummah et al. Effect of image pre-processing method on convolutional neural network classification of COVID-19 CT scan images
Moghaddam et al. Towards smart diagnostic methods for COVID-19: Review of deep learning for medical imaging
Priyanka Pramila et al. Automated skin lesion detection and classification using fused deep convolutional neural network on dermoscopic images
Ovi et al. Infection segmentation from covid-19 chest ct scans with dilated cbam u-net
Scherzinger et al. CNN-based background subtraction for long-term in-vial FIM imaging
Oyelade et al. Deep Learning Model for Improving the Characterization of Coronavirus on Chest X-ray Images Using CNN
Xu et al. A Tuberculosis Detection Method Using Attention and Sparse R-CNN.
Moosavi et al. Segmentation and classification of lungs CT-scan for detecting COVID-19 abnormalities by deep learning technique: U-Net model
Princy Magdaline et al. Detection of lung cancer using novel attention gate residual U-Net model and KNN classifier from computer tomography images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191224