CN113379687A - Network training method, image detection method, and medium - Google Patents

Network training method, image detection method, and medium

Info

Publication number
CN113379687A
CN113379687A
Authority
CN
China
Prior art keywords
loss, network, medical image, category, training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110589927.3A
Other languages
Chinese (zh)
Inventor
殷敬敬 (Yin Jingjing)
郑介志 (Zheng Jiezhi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd filed Critical Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN202110589927.3A
Publication of CN113379687A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Radiology & Medical Imaging (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Epidemiology (AREA)
  • Biomedical Technology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The application relates to a network training method, an image detection method, an apparatus, a computer device, and a storage medium. The method comprises the following steps: acquiring a training medical image set; inputting each training medical image into an initial network, determining the predicted position of the region of interest in each training medical image through a backbone network and a first network in the initial network, and determining the prediction category corresponding to each training medical image through the backbone network and a second network in the initial network; calculating the category loss between the prediction category corresponding to each training medical image and the category label corresponding to each training medical image, and calculating the position loss between the predicted positions of the regions of interest and the position labels carried by part of the training medical images; and training the initial network according to the category loss and the position loss to obtain the trained network. With this method, the cost of manual labeling can be reduced.

Description

Network training method, image detection method, and medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a network training method, an image detection method, an apparatus, a computer device, and a storage medium.
Background
With the continuous development of medical imaging technology, more and more patients who visit a hospital for examination are asked by their doctors to undergo medical imaging, and the patient's image analysis result is obtained by analyzing the captured medical image.
In the related art, a doctor usually analyzes medical images with the aid of deep learning: a number of sample medical images are manually annotated in advance, with the positions and categories of all lesions labeled on each sample medical image, and a deep learning network is trained on the annotated sample images to obtain a trained network. The doctor can then analyze a patient's medical image with the trained network to obtain the patient's medical image analysis result.
However, when a deep learning network is trained in this way, annotating the sample medical images consumes a great deal of a doctor's time and effort, which results in a high manual labeling cost.
Disclosure of Invention
In view of the above, it is necessary to provide a network training method, an image detection method, an apparatus, a computer device and a storage medium capable of reducing the cost of manual labeling.
A method of network training, the network comprising a backbone network, a first network and a second network, the method comprising:
acquiring a training medical image set; the training medical image set comprises a plurality of training medical images, a category label corresponding to each training medical image and a position label corresponding to a part of the training medical images;
inputting each training medical image into an initial network, determining the predicted position of the region of interest in each training medical image through a backbone network and a first network in the initial network, and determining the prediction category corresponding to each training medical image through the backbone network and a second network in the initial network;
calculating the category loss between the prediction category corresponding to each training medical image and the category label corresponding to each training medical image, and calculating the position loss between the predicted positions of the regions of interest in the training medical images and the position labels corresponding to the part of the training medical images that carry them;
and training the initial network according to the category loss and the position loss, and determining the trained network.
In one embodiment, the calculating the class loss between the prediction class corresponding to each training medical image and the class label corresponding to each training medical image includes:
calculating a first class loss between a prediction sample class corresponding to each training medical image and a corresponding positive and negative sample label;
calculating a second category loss between the image category label of each training medical image and the corresponding predicted image category;
and determining the category loss according to the first category loss and the second category loss.
In one embodiment, the determining the class loss according to the first class loss and the second class loss includes:
acquiring a positive sample heat map corresponding to each training medical image output by the second network and a category heat map corresponding to each training medical image output by the first network; the positive sample heat map comprises the prediction sample category corresponding to each positive-sample training medical image, and the category heat map comprises the prediction category and the predicted position of the region of interest in the training medical image;
calculating a heat map loss between the positive sample heat map and the category heat map;
the category losses are determined from the first category losses, the second category losses, and the heat map losses.
In one embodiment, the determining the class loss according to the first class loss, the second class loss and the heat map loss includes:
and performing summation operation on the first category loss, the second category loss and the heat map loss to determine the category loss.
In one embodiment, the calculating of the position loss between the predicted position of the region of interest in each training medical image and the position labels corresponding to the part of the training medical images includes:
calculating the loss between the predicted position of the region of interest in each training medical image and the position label of the region of interest in the corresponding training medical image to obtain the position loss.
In one embodiment, the training the initial network according to the category loss and the location loss to determine a trained network includes:
performing summation operation on the category loss and the position loss, and taking the sum value as the value of a loss function;
and training the initial network by using the value of the loss function to determine the trained network.
In one embodiment, the first network includes a first convolution module and a first output channel for outputting the category heatmap.
In one embodiment, the second network includes a second convolution module and a second output channel, a first fully-connected layer and a third output channel, a second fully-connected layer and a fourth output channel;
the second output channel is used for outputting the positive sample heat map, the third output channel is used for outputting the prediction sample category of the training medical image, and the fourth output channel is used for outputting the prediction image category of the training medical image.
An image detection method, the method comprising:
acquiring a medical image to be detected; the medical image to be detected comprises at least one region of interest;
inputting the medical image to be detected into a trained network for detection processing, and determining the detection result corresponding to the medical image to be detected; the network is a network model trained according to the above network training method, and the detection result comprises the categories and positions of all regions of interest on the medical image to be detected.
A network training apparatus, the network comprising a backbone network, a first network and a second network, the apparatus comprising:
the sample image acquisition module is used for acquiring a training medical image set; the training medical image set comprises a plurality of training medical images, a category label corresponding to each training medical image and a position label corresponding to a part of the training medical images;
the prediction module is used for inputting each training medical image into the initial network, determining the predicted position of the region of interest in each training medical image through the backbone network and the first network in the initial network, and determining the prediction category corresponding to each training medical image through the backbone network and the second network in the initial network;
the loss calculation module is used for calculating the category loss between the prediction category corresponding to each training medical image and the category label corresponding to each training medical image, and calculating the position loss between the predicted positions of the regions of interest in the training medical images and the position labels corresponding to the part of the training medical images that carry them;
and the training module is used for training the initial network according to the category loss and the position loss and determining the trained network.
An image detection apparatus, the apparatus comprising:
the test image acquisition module is used for acquiring a medical image to be detected; the medical image to be detected comprises at least one region of interest;
the detection module is used for inputting the medical image to be detected into the trained network for detection processing, and determining the detection result corresponding to the medical image to be detected; the network is a network model trained according to the above network training method, and the detection result comprises the categories and positions of all regions of interest on the medical image to be detected.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring a training medical image set; the training medical image set comprises a plurality of training medical images, a category label corresponding to each training medical image and a position label corresponding to a part of the training medical images;
inputting each training medical image into an initial network, determining the predicted position of the region of interest in each training medical image through a backbone network and a first network in the initial network, and determining the prediction category corresponding to each training medical image through the backbone network and a second network in the initial network;
calculating the category loss between the prediction category corresponding to each training medical image and the category label corresponding to each training medical image, and calculating the position loss between the predicted positions of the regions of interest in the training medical images and the position labels corresponding to the part of the training medical images that carry them;
and training the initial network according to the category loss and the position loss, and determining the trained network.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring a medical image to be detected; the medical image to be detected comprises at least one region of interest;
inputting the medical image to be detected into a trained network for detection processing, and determining the detection result corresponding to the medical image to be detected; the network is a network model trained according to the above network training method, and the detection result comprises the categories and positions of all regions of interest on the medical image to be detected.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a training medical image set; the training medical image set comprises a plurality of training medical images, a category label corresponding to each training medical image and a position label corresponding to a part of the training medical images;
inputting each training medical image into an initial network, determining the predicted position of the region of interest in each training medical image through a backbone network and a first network in the initial network, and determining the prediction category corresponding to each training medical image through the backbone network and a second network in the initial network;
calculating the category loss between the prediction category corresponding to each training medical image and the category label corresponding to each training medical image, and calculating the position loss between the predicted positions of the regions of interest in the training medical images and the position labels corresponding to the part of the training medical images that carry them;
and training the initial network according to the category loss and the position loss, and determining the trained network.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a medical image to be detected; the medical image to be detected comprises at least one region of interest;
inputting the medical image to be detected into a trained network for detection processing, and determining the detection result corresponding to the medical image to be detected; the network is a network model trained according to the above network training method, and the detection result comprises the categories and positions of all regions of interest on the medical image to be detected.
In the network training method, the image detection method, the apparatus, the computer device, and the storage medium, the network comprises a backbone network, a first network, and a second network. A training medical image set is acquired, and each training medical image in the set is input into an initial network; the predicted position of the region of interest in each training medical image is obtained through the backbone network and the first network, and the prediction category corresponding to each training medical image is obtained through the backbone network and the second network; the category loss between the prediction categories and the category labels of the training medical images is calculated, along with the position loss between the predicted positions of the regions of interest and the corresponding position labels; and the initial network is trained according to the category loss and the position loss to obtain the trained network. The training medical image set comprises a plurality of training medical images, a category label corresponding to each training medical image, and position labels corresponding to part of the training medical images. With this method, the training samples carry category labels for all training medical images but position labels for only part of them; that is, the network can be trained without position labels on every training medical image, so doctors need not spend a large amount of time and energy annotating samples, and the manual labeling cost is reduced. In addition, because the network comprises the backbone network, the first network, and the second network, and can be trained through the prediction outputs of these three networks, a trained network can be obtained from samples with only a small number of position labels.
Drawings
FIG. 1 is a diagram illustrating an internal structure of a computer device according to an embodiment;
FIG. 2 is a flow diagram illustrating a network training method according to one embodiment;
FIG. 2a is a diagram illustrating an example of a network in one embodiment;
FIG. 3 is a schematic flow chart of the network training step in another embodiment;
FIG. 4 is a flow chart illustrating a network training method according to another embodiment;
FIG. 4a is a diagram showing a specific structure example of a first network and a second network in another embodiment;
FIG. 5 is a flow diagram illustrating an exemplary image detection method;
FIG. 6 is a block diagram of a network training apparatus according to an embodiment;
FIG. 7 is a block diagram showing the structure of an image detection apparatus according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The network training method provided by the embodiments of the application can be applied to a medical device. The medical device comprises a scanning apparatus and a computer device connected with each other; the scanning apparatus can scan a detection object and send the obtained scan data to the computer device, so that the computer device can perform data processing, image reconstruction, and similar processes on the scan data.
The computer device may be a terminal device or a server, and taking the computer device as a terminal device as an example, an internal structure diagram of the computer device may be as shown in fig. 1. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a network training method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 1 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
It should be noted that the execution subject in the embodiment of the present application may be a medical device, or may be a computer device in a medical device, or may also be a network training apparatus, or may also be an image detection apparatus, and the network training apparatus and the image detection apparatus may be implemented as part or all of a computer device by software, hardware, or a combination of software and hardware. The following method embodiments are described by taking the execution subject as the computer device as an example.
The following embodiments first describe the network training process.
In one embodiment, a network training method is provided, the network comprising a backbone network, a first network and a second network, as shown in fig. 2, the method may comprise the following steps:
s202, acquiring a training medical image set; the training medical image set comprises a plurality of training medical images, a category label corresponding to each training medical image and a position label corresponding to a part of the training medical images.
Each training medical image in the training medical image set may be a medical image of a different subject. Every training medical image is labeled with a category label, while only part of the training medical images carry position labels, which reduces the large amount of time and energy doctors would otherwise spend annotating position labels on all the training medical images.
The category label may be a positive/negative sample label indicating whether a region of interest exists in the training medical image, an image category label indicating which kinds of regions of interest the training medical image contains, or both. The position label may be the position of the region of interest in the training medical image.
It should be noted that, if the training medical image includes the region of interest, the included region of interest may be the same type of region of interest, or may be multiple types of regions of interest.
Specifically, before training the network, the regions of interest of different objects may be scanned in advance to obtain medical images of the objects, and the medical images of the objects are subjected to category and/or position labeling to obtain a training medical image set.
S204, inputting each training medical image into the initial network, determining the predicted position of the region of interest in each training medical image through the backbone network and the first network in the initial network, and determining the prediction category corresponding to each training medical image through the backbone network and the second network in the initial network.
The initial network refers to the network to be trained in its initial, untrained state. The network comprises a backbone network, a first network, and a second network, and may also be called a weakly supervised learning network; the backbone network is connected to the first network and to the second network. The backbone network may be a Backbone CNN, and the first network and the second network may be convolutional neural networks (e.g., composed of CNN convolution modules and fully connected layers). The specific structure of the network can be seen in fig. 2a, where the Backbone CNN is the backbone network, the CNN convolution module and the fully connected layer form the second network, and the first network (not shown in the figure), which precedes the lesion position information, is likewise formed by a CNN convolution module and/or a fully connected layer. In the figure, bbox label is a position label, and class label is a category label.
It should be noted that fig. 2a is only an example, and does not affect the essence of the embodiments of the present application.
The network structure formed by the backbone network and the first network is mainly used for identifying the positions of regions of interest in the training medical images, and the network structure formed by the backbone network and the second network is mainly used for identifying the categories of the training medical images. After the training medical images are obtained, they may each be input into the backbone network of the initial network, where convolution operations and the like are performed; the result output by the backbone network is then fed into the first network and the second network, which respectively apply convolution, pooling, activation, full connection, and so on. Finally, the first network outputs the predicted position of the region of interest in each training medical image (i.e., it predicts where each region of interest lies), and the second network outputs the prediction category corresponding to each training medical image (i.e., it predicts the category of each training medical image).
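For illustration only, the forward pass described above can be sketched as follows in PyTorch; the ResNet-18 backbone and all module names are assumptions made for the sketch, not the patent's concrete architecture:

    # A minimal sketch, assuming a ResNet-18 backbone; the patent does
    # not specify a concrete backbone architecture.
    import torch.nn as nn
    import torchvision

    class WeaklySupervisedNetwork(nn.Module):
        def __init__(self, first_network, second_network):
            super().__init__()
            resnet = torchvision.models.resnet18(weights=None)
            # backbone network: shared convolutional feature extractor
            self.backbone = nn.Sequential(*list(resnet.children())[:-2])
            self.first_network = first_network    # predicts region-of-interest positions
            self.second_network = second_network  # predicts image categories

        def forward(self, x):
            features = self.backbone(x)           # shared features for both heads
            predicted_positions = self.first_network(features)
            predicted_categories = self.second_network(features)
            return predicted_positions, predicted_categories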
S206, calculating the category loss between the prediction category corresponding to each training medical image and the category label corresponding to each training medical image, and calculating the position loss between the predicted positions of the regions of interest in the training medical images and the position labels corresponding to the part of the training medical images that carry them.
In this step, after obtaining the prediction category corresponding to each training medical image and the predicted position of the region of interest in each training medical image, the loss calculation with the corresponding label, that is, the category loss calculation and the position loss calculation, may be performed.
Specifically, the category loss between the prediction category of each training medical image and the corresponding category label may be calculated, and the location loss between the location label of the training medical image labeled with the location label and the predicted location of the region of interest in the corresponding training medical image may be calculated to obtain the category loss and the location loss.
And S208, training the initial network according to the category loss and the position loss, and determining the trained network.
In this step, after the category loss and the position loss are obtained, the parameters of the initial network may be adjusted using the category loss and the position loss separately, or using the sum of the category loss and the position loss, to obtain the trained network. Alternatively, the sum of the category losses of all training medical images and the sum of the position losses of all training medical images may each be used to adjust the parameters of the initial network. Other training approaches may also be used, and no limitation is imposed here.
In the network training method, the network comprises a backbone network, a first network, and a second network. A training medical image set is acquired, and each training medical image in the set is input into an initial network; the predicted position of the region of interest in each training medical image is obtained through the backbone network and the first network, and the prediction category corresponding to each training medical image is obtained through the backbone network and the second network; the category loss between the prediction categories and the category labels is calculated, along with the position loss between the predicted positions of the regions of interest and the corresponding position labels; and the initial network is trained according to the category loss and the position loss to obtain the trained network. The training medical image set comprises a plurality of training medical images, a category label corresponding to each training medical image, and position labels corresponding to part of the training medical images. When the network is trained with this method, the training samples carry category labels for all training medical images but position labels for only part of them; that is, the network can be trained without position labels on every training medical image, so doctors need not spend a large amount of time and energy annotating samples, and the manual labeling cost is saved. In addition, because the network comprises the backbone network, the first network, and the second network, and can be trained through the prediction outputs of these three networks, a trained network can be obtained from samples with only a small number of position labels.
In another embodiment, on the basis of the above embodiment, the class label includes an image class label of the training medical image and/or a positive/negative sample label of the training medical image, and the prediction class corresponding to the training medical image includes a prediction sample class of the training medical image and a prediction image class of the training medical image, as shown in fig. 3, the calculating process of the class loss in S206 may include the following steps:
s302, calculating a first class loss between the prediction sample class corresponding to each training medical image and the corresponding positive and negative sample labels.
The positive and negative sample labels of the embodiment refer to whether the training medical image is a positive sample or a negative sample, and represent whether the training medical image includes the region of interest. For example, if the region of interest is included in the training medical image, the label of the training medical image is a positive sample label, otherwise, the label is a negative sample label.
The prediction sample category, that is, the network's prediction of whether a training medical image is a positive or negative sample, may be the result of a binary classification.
Specifically, after the prediction sample class of each training medical image is obtained through the second network of the initial network, the loss between the prediction sample class of each training medical image and the corresponding positive and negative sample labels thereof can be calculated and recorded as the first class loss. The first class loss here may be a binary cross-entropy loss or the like.
And S304, calculating a second class loss between the image class label of each training medical image and the corresponding prediction image class.
Here, the image class label refers to the categories of the regions of interest contained in the training medical image. For example, if there are three kinds of regions of interest, labeled 1, 2, and 3, and a training medical image contains regions of interest, then its image category label may be 12, 23, 13, or 123; that is, a training medical image may contain at least one kind of region of interest.
The prediction image category refers to a category of a region of interest included in the training medical image predicted by the initial network.
Specifically, after the predicted image class of each training medical image is obtained through the second network of the initial network, the loss between the predicted image class of each training medical image and the corresponding image class label can be calculated and recorded as the second class loss. The second category loss here may be a multi-class cross entropy loss or the like.
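As a hedged illustration, the first and second category losses might be computed as below. The exact loss formulas are not given in the patent; treating the image category as multi-label (binary cross entropy per category rather than the multi-class cross entropy named above) is an assumption based on example labels such as 12 and 123, which indicate several categories per image:

    import torch.nn.functional as F

    def category_losses(sample_logit, sample_label, image_logits, image_label):
        # first category loss: prediction sample category vs. the
        # positive/negative sample label (binary cross entropy)
        first_loss = F.binary_cross_entropy_with_logits(sample_logit, sample_label)
        # second category loss: prediction image category vs. the image
        # category label; multi-label form assumed, since one image may
        # contain several kinds of regions of interest (e.g., label 12 or 123)
        second_loss = F.binary_cross_entropy_with_logits(image_logits, image_label)
        return first_loss, second_loss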
S306, determining the category loss according to the first category loss and the second category loss.
In this step, after the first category loss and the second category loss corresponding to each training medical image are obtained, the total category loss can be calculated. Optionally, this step may comprise the following steps A1 to A3:
step A1, acquiring the positive sample heat map corresponding to each training medical image output by the second network and the category heat map corresponding to each training medical image output by the first network; the positive sample heat map comprises the prediction sample category corresponding to each positive-sample training medical image, and the category heat map comprises the prediction category and the predicted position of the region of interest in the training medical image.
The positive sample heat map may also be referred to as a positive sample feature map; it carries the category information that the training medical image is a positive sample, i.e., the prediction sample category of the training medical image. The category heat map may also be referred to as a category feature map; it carries the prediction categories and predicted positions of the regions of interest in the training medical image, where the prediction category of a region of interest refers to the category of that individual region, not the prediction image category of the whole training medical image. For example, a training medical image containing two kinds of regions of interest may have an image category of 12, 23, or 13, while the individual regions of interest have their own categories, such as 1 for the first kind, 2 for the second kind, 3 for the third kind, and so on.
Step A2, calculating the heat map loss between the positive sample heat map and the category heat map.
Specifically, the heat map loss here can be calculated using equation (1):
[Equation (1) is rendered as an image in the original publication (Figure BDA0003088967630000111) and its exact form is not reproduced in the text.]
where N is the number of regions of interest in the training medical image; M_p is the positive sample heat map; M_a^k is the category heat map of the k-th region of interest; i and j are the horizontal and vertical pixel positions on the heat map; k indexes the k-th region of interest in the training medical image; Y_k is the loss computed for the k-th region of interest; and T is an activation function.
Here M_u is the heat map of the largest region of interest in the training medical image, computed as shown in equation (2):
[Equation (2) is likewise rendered as an image in the original publication (Figure BDA0003088967630000121).]
the heat map loss between the positive sample heat map and the category heat map can be calculated by equations (1) and (2) above.
Step A3, determining the category loss based on the first category loss, the second category loss, and the heat map loss.
After the heat map loss is calculated, step A3 may optionally include: performing a summation operation on the first category loss, the second category loss, and the heat map loss to determine the category loss.
The summation operation here may be weighted summation or direct summation, or may be other summation manners, and in short, the sum of the first class loss, the second class loss, and the heat map loss of each training medical image may be obtained, and the sum is used as the total class loss of the training medical image.
In this embodiment, the first category loss between the prediction sample category and the positive/negative sample label of each training medical image is calculated, the second category loss between the image category label and the prediction image category is calculated, and the total category loss is determined from these two losses; this refines the category loss and makes the calculated category loss more accurate. In addition, the heat map loss between the positive sample heat map and the category heat map is calculated, and the total category loss is determined from the heat map loss together with the first two category losses, so that both image-level loss and classification-information loss are incorporated; because the finally determined category loss integrates several kinds of losses, it is finer and more accurate. Further, the total category loss is obtained by summing the heat map loss and the first two category losses, so it can be computed quickly, which improves the efficiency of obtaining the total category loss and thus of training the network.
In another embodiment, on the basis of the above embodiments, the position labels include position labels for part or all of the regions of interest in a training medical image, and the calculation of the position loss in S206 may include the following step B:
and B, calculating the loss between the predicted position of the region of interest in each training medical image and the position label of the region of interest in the corresponding training medical image to obtain the position loss.
In this step, when the training medical images are annotated before the network is trained, some training medical images are labeled with position labels and some are not. Moreover, among the training medical images that do carry position labels, some are labeled with position labels for all of the regions of interest they contain, while others are labeled for only part of them; thus position labels are not annotated on all training medical images, which reduces the labeling cost. For example, if a training medical image contains three kinds of regions of interest, labeled 1, 2, and 3, all three kinds may carry position labels, or only two or one of them may.
After the position labels are obtained, the network can be trained. During training, the predicted positions of the regions of interest in each training medical image can be obtained; because some training medical images carry no position labels, or carry incomplete ones, the position loss is calculated only for regions of interest that have both a predicted position and a position label.
For example, the training medical image 1 is not labeled with position labels, the training medical image 1 outputs the predicted position with the region of interest 1 through the network, the training medical image 2 is labeled with position labels of the regions of interest 2 and 3, the training medical image 2 outputs the predicted position with the regions of interest 2 and 3 through the network, and when the position loss is calculated here, the loss between the predicted position of the region of interest 2 and the corresponding position label and the loss between the predicted position of the region of interest 3 and the corresponding position label are calculated.
In addition, the predicted position can be obtained from the category heat map described above. Because the category heat map is generally smaller than the original training medical image, when calculating the position loss, the category heat map can first be interpolated to the size of the original training medical image by bilinear interpolation, and the loss between the predicted position and the position label is then calculated. The position loss may be an MSE (mean squared error) loss or another type of loss; taking MSE loss as an example, the position loss can be calculated as shown in equation (3):
[Equation (3) is rendered as an image in the original publication (Figure BDA0003088967630000131).]
Here, parameters shared with equations (1) and (2) have the same meanings and are not repeated; G_k is a binary mask image of the same size as the original training medical image.
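A minimal sketch of this step, assuming (since equation (3) is only an image) that the MSE is taken between each bilinearly upsampled category heat map and the binary mask G_k built from that region's position label:

    import torch.nn.functional as F

    def position_loss(cls_heatmaps, masks):
        """cls_heatmaps: (N, h, w) category heat maps for the N regions of
        interest that carry position labels; masks: (N, H, W) binary G_k."""
        # bilinear interpolation up to the original training image size
        upsampled = F.interpolate(cls_heatmaps.unsqueeze(0),
                                  size=masks.shape[-2:],
                                  mode="bilinear",
                                  align_corners=False).squeeze(0)
        return F.mse_loss(upsampled, masks.float())   # assumed form of eq. (3)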
In this embodiment, the position loss is obtained by calculating the loss between the predicted position of each region of interest and the position label of the region of interest in the corresponding training medical image, so the position loss is computed only for regions of interest that carry position labels. This makes the calculated position loss more accurate and ensures that the network can be trained even when only a small number of position labels are available.
In another embodiment, another network training method is provided, and on the basis of the above embodiment, as shown in fig. 4, the above S208 may include the following steps:
and S402, performing summation operation on the category loss and the position loss, and taking the summation value as the value of the loss function.
In this step, the category loss is the total category loss described above, and the summation operation may be a weighted sum, a direct sum, or another summation method; in this way, the sum of the total category loss and the position loss of each training medical image can be obtained and used as the value of the loss function. The loss function here may be set according to the actual situation and may be, for example, an MSE (mean squared error) loss.
S404, training the initial network by using the value of the loss function, and determining the trained network.
In this step, after the value of the loss function is obtained, the parameters of the initial network may be adjusted using it, and the process is executed in a loop. When the value of the loss function falls below a preset threshold, or no longer changes, the initial network can be considered trained; at this point its parameters can be fixed to obtain the trained network.
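For illustration, the loop described above might look like the following sketch; compute_losses is a hypothetical helper, and the stopping tolerances are assumptions rather than values from the patent:

    def train(network, loader, optimizer, threshold=1e-4, max_epochs=100):
        previous = float("inf")
        for epoch in range(max_epochs):
            total = 0.0
            for images, labels in loader:
                # hypothetical helper returning the total category loss and
                # the position loss for this batch
                category_loss, position_loss = compute_losses(network, images, labels)
                loss = category_loss + position_loss   # value of the loss function
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
                total += loss.item()
            # stop when the loss is below the threshold or no longer changes
            if total < threshold or abs(previous - total) < 1e-8:
                break
            previous = total
        return network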
In the embodiment, the category loss and the position loss of the training medical image are summed, and the calculated sum value is used as the value of the loss function to train the initial network and obtain the trained network, so that the value of the loss function can be rapidly obtained, the speed of adjusting the network parameters can be accelerated, and the efficiency of network training can be improved.
In another embodiment, on the basis of the above embodiments, a specific structure of the first network and the second network is described in the following embodiments.
The first network includes a first convolution module and a first output channel for outputting a category heatmap. The second network comprises a second convolution module, a second output channel, a first full-connection layer, a third output channel, a second full-connection layer and a fourth output channel; the second output channel is used for outputting the positive sample heat map, the third output channel is used for outputting the prediction sample category of the training medical image, and the fourth output channel is used for outputting the prediction image category of the training medical image.
Here, referring to fig. 4a, the first network is the network that outputs the category heat map; it includes a first convolution module (the convolution connected to the category heat map in the figure) followed by an activation layer and the first output channel, which outputs the category heat map. The category heat map contains the predicted position of the region of interest in the training medical image, from which the position loss against the position label of the region of interest, Loss2 in the figure, can be calculated.
The second network is the network that outputs the positive sample heat map, the prediction sample category, and the prediction image category; it includes a second convolution module (the convolution connected to the positive sample heat map in the figure), an activation layer, two fully connected layers, and three output channels. The second convolution module is followed by the activation layer and the second output channel, which outputs the positive sample heat map; from it the heat map loss against the category heat map, Loss1, can be calculated. The third output channel, which follows the pooling and the first fully connected layer in the figure, outputs the prediction sample category (i.e., the positive/negative sample information in the figure); from it the first category loss against the positive/negative sample labels, Loss3, can be calculated. The pooling is also followed by the second fully connected layer (the lowest fully connected layer in the figure) and the fourth output channel, which outputs the prediction image category (i.e., the category classification information in the figure); from it the second category loss against the image category labels, Loss4, can be calculated.
It should be noted that fig. 4a is only an example, and does not affect the essence of the embodiments of the present application.
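The two heads might be sketched as follows; channel counts, kernel sizes, and the pooling choice are assumptions made for illustration, since the patent does not fix them:

    import torch.nn as nn

    class FirstNetwork(nn.Module):
        """Outputs the category heat map through the first output channel."""
        def __init__(self, in_channels, num_classes):
            super().__init__()
            self.conv = nn.Conv2d(in_channels, num_classes, kernel_size=1)  # first convolution module
            self.act = nn.Sigmoid()                                         # activation layer

        def forward(self, features):
            return self.act(self.conv(features))  # category heat map, one channel per category

    class SecondNetwork(nn.Module):
        """Outputs the positive sample heat map, the prediction sample
        category, and the prediction image category."""
        def __init__(self, in_channels, num_classes):
            super().__init__()
            self.conv = nn.Conv2d(in_channels, 1, kernel_size=1)  # second convolution module
            self.act = nn.Sigmoid()
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.fc_sample = nn.Linear(in_channels, 1)            # first fully connected layer
            self.fc_image = nn.Linear(in_channels, num_classes)   # second fully connected layer

        def forward(self, features):
            pos_heatmap = self.act(self.conv(features))       # second output channel
            pooled = self.pool(features).flatten(1)
            sample_category = self.fc_sample(pooled)          # third output channel
            image_category = self.fc_image(pooled)            # fourth output channel
            return pos_heatmap, sample_category, image_category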
In this embodiment, the information output by each output channel is determined by refining the structures of the first network and the second network, so that the loss corresponding to each output channel can be quickly calculated in the network training process, and the network training efficiency is improved.
After the network is trained, the trained network may be used for image detection, and the following embodiment describes a test process after the network is trained.
In one embodiment, an image detection method is provided, on the basis of the above embodiment, as shown in fig. 5, the method may include the following steps:
s502, acquiring a medical image to be detected; the medical image to be tested includes at least one region of interest.
In this step, the medical image to be detected may be a medical image of some body part of the subject, for example an abdominal medical image, and may be a two-dimensional image, a three-dimensional image, or the like. The medical image to be detected may include one or more regions of interest; an abdominal image, for instance, may include regions of interest such as ileus, free gas, and calculi, and there may be one or more of each kind. The medical image to be detected and the images in the training medical image set may be images of the same body part.
S504, inputting the medical image to be detected into a trained network for detection processing, and determining the detection result corresponding to the medical image to be detected; the network is a network model trained according to the above network training method, and the detection result comprises the categories and positions of all regions of interest on the medical image to be detected.
After the network has been trained and the medical image to be detected is input into the trained network, the positions and categories of all regions of interest on the medical image can be output by the network, which makes it convenient for a doctor to further analyze and process the image according to them.
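A minimal inference sketch follows; the thresholding post-processing is an assumption, since the patent only states that the network outputs categories and positions:

    import torch

    @torch.no_grad()
    def detect(network, image, threshold=0.5):
        network.eval()
        heatmaps, categories = network(image.unsqueeze(0))  # trained network
        # each category heat map localizes one kind of region of interest;
        # thresholding gives a rough position mask (assumed post-processing)
        positions = heatmaps.squeeze(0) > threshold
        return positions, categories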
In this embodiment, the position and category of each region of interest are obtained by acquiring the medical image to be detected and inputting it into the trained network. Because the medical image is detected by a trained network, the detection result is accordingly accurate; that is, this method improves the accuracy of the detection result for the medical image to be detected.
It should be understood that although the steps in the flowcharts of figs. 2, 3, 4, and 5 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly ordered and may be performed in other orders. Moreover, at least some of the steps in figs. 2, 3, 4, and 5 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, there is provided a network training apparatus, the network including a backbone network, a first network and a second network, the network training apparatus may include: sample image acquisition module 10, prediction module 11, loss calculation module 12 and training module 13, wherein:
a sample image acquisition module 10, configured to acquire a training medical image set; the training medical image set comprises a plurality of training medical images, a category label corresponding to each training medical image and a position label corresponding to a part of the training medical images;
the prediction module 11 is configured to input each training medical image into an initial network, determine the predicted position of the region of interest in each training medical image through a backbone network and a first network in the initial network, and determine the prediction category corresponding to each training medical image through the backbone network and a second network in the initial network;
a loss calculation module 12, configured to calculate the category loss between the prediction category corresponding to each training medical image and the category label corresponding to each training medical image, and calculate the position loss between the predicted positions of the regions of interest in the training medical images and the position labels corresponding to the part of the training medical images that carry them;
and the training module 13 is configured to train the initial network according to the category loss and the location loss, and determine a trained network.
For specific limitations of the network training apparatus, reference may be made to the above limitations of the network training method, which is not described herein again.
In another embodiment, another network training apparatus is provided, based on the above embodiment, where the class label includes an image class label of a training medical image and/or a positive/negative sample label of the training medical image, and the prediction class corresponding to the training medical image includes a prediction sample class of the training medical image and a prediction image class of the training medical image, and the loss calculating module 12 may include:
the first class loss calculating unit is used for calculating the first class loss between the prediction sample class corresponding to each training medical image and the corresponding positive and negative sample labels;
the second category loss calculation unit is used for calculating second category losses between the image category labels of the training medical images and the corresponding predicted image categories;
and the category loss calculating unit is used for determining the category loss according to the first category loss and the second category loss.
Optionally, the category loss calculating unit may include:
the heat map acquisition subunit is used for acquiring the positive sample heat map corresponding to each training medical image output by the second network and the category heat map corresponding to each training medical image output by the first network; the positive sample heat map comprises the prediction sample category corresponding to each positive-sample training medical image, and the category heat map comprises the prediction category and the predicted position of the region of interest in the training medical image;
a heat map loss calculation subunit to calculate heat map losses between the positive sample heat map and the category heat map;
and the category loss calculation subunit is used for determining the category loss according to the first category loss, the second category loss and the heat map loss.
Optionally, the category loss calculating subunit is specifically configured to perform summation operation on the first category loss, the second category loss, and the heatmap loss, and determine the category loss.
In another embodiment, another network training apparatus is provided, and on the basis of the above embodiment, the position labels include position labels for part or all of the regions of interest in the training medical images, and the loss calculation module 12 may include:
a position loss calculation unit, configured to calculate the loss between the predicted position of the region of interest in each training medical image and the position label of the region of interest in the corresponding training medical image to obtain the position loss.
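Because only a subset of the training medical images carries position labels, the position loss is naturally restricted to that subset. The following sketch assumes box-style position labels and a smooth-L1 regression loss; both choices are assumptions, not specified by the embodiment.

```python
import torch
import torch.nn.functional as F

def position_loss(pred_boxes, gt_boxes, has_label):
    """pred_boxes: (N, 4) predicted region-of-interest positions (boxes)
    gt_boxes:     (N, 4) position labels, valid only where has_label is True
    has_label:    (N,)   True for the subset of images carrying position labels
    """
    if not has_label.any():  # batch without any position-labeled images
        return torch.zeros((), device=pred_boxes.device)
    # supervise only the labeled subset; unlabeled images contribute nothing
    return F.smooth_l1_loss(pred_boxes[has_label], gt_boxes[has_label])
```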
In another embodiment, another network training apparatus is provided, and on the basis of the above embodiment, the training module 13 may include:
an operation unit, configured to sum the category loss and the position loss and take the sum as the value of the loss function;
and a training unit, configured to train the initial network by using the value of the loss function and determine the trained network.
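Combining the pieces, one training iteration might look like the sketch below; `category_loss_fn` and `position_loss_fn` are hypothetical callables standing in for the loss computations sketched earlier, and the model and optimizer are supplied by the caller.

```python
import torch

def train_step(model, optimizer, images, labels, category_loss_fn, position_loss_fn):
    """One illustrative iteration: loss function value = category loss + position loss."""
    outputs = model(images)
    cat_loss = category_loss_fn(outputs, labels)  # as sketched above
    pos_loss = position_loss_fn(outputs, labels)
    loss = cat_loss + pos_loss  # the sum is taken as the loss function value
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```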
In another embodiment, on the basis of the above embodiment, the first network includes a first convolution module and a first output channel, where the first output channel is used for outputting the category heat map. The second network includes a second convolution module, a second output channel, a first fully connected layer, a third output channel, a second fully connected layer, and a fourth output channel; the second output channel is used for outputting the positive sample heat map, the third output channel is used for outputting the prediction sample category of the training medical image, and the fourth output channel is used for outputting the prediction image category of the training medical image.
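The described branch structure could be realized as in the following hypothetical PyTorch module; all layer widths, kernel sizes, and the backbone itself are illustrative assumptions, and only the channel/head layout mirrors the embodiment.

```python
import torch
import torch.nn as nn

class DualBranchNet(nn.Module):
    """Hypothetical sketch: shared backbone, a first network (convolution
    module + category heat map channel), and a second network (convolution
    module + positive sample heat map channel + two fully connected heads)."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in for any CNN backbone
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # first network: first convolution module + first output channel
        self.first_conv = nn.Conv2d(64, 64, 3, padding=1)
        self.category_head = nn.Conv2d(64, num_classes, 1)  # category heat map
        # second network: second convolution module + three output heads
        self.second_conv = nn.Conv2d(64, 64, 3, padding=1)
        self.pos_head = nn.Conv2d(64, 1, 1)        # positive sample heat map
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc_sample = nn.Linear(64, 2)          # prediction sample category
        self.fc_image = nn.Linear(64, num_classes) # prediction image category

    def forward(self, x):
        feat = self.backbone(x)
        category_heatmap = self.category_head(torch.relu(self.first_conv(feat)))
        feat2 = torch.relu(self.second_conv(feat))
        pos_heatmap = self.pos_head(feat2)
        pooled = self.pool(feat2).flatten(1)
        return (category_heatmap, pos_heatmap,
                self.fc_sample(pooled), self.fc_image(pooled))
```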
For specific limitations of the network training apparatus, reference may be made to the above limitations of the network training method, which are not repeated here.
In one embodiment, as shown in FIG. 7, an image detection apparatus is provided, which may include a test image acquisition module 20 and a detection module 21, wherein:
the test image acquisition module 20 is configured to acquire a medical image to be detected, where the medical image to be detected includes at least one region of interest;
and the detection module 21 is configured to input the medical image to be detected into the trained network for detection processing and determine the detection result corresponding to the medical image to be detected, where the network is a network model trained according to the above network training method, and the detection result includes the category and position of each region of interest in the medical image to be detected.
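For illustration, detection with the trained network might proceed by thresholding the category heat map, as in the sketch below; the sigmoid activation, the threshold value, and returning raw pixel positions (rather than, say, fitted boxes) are all assumptions. It presumes a model with the output layout of the hypothetical DualBranchNet sketch above.

```python
import torch

@torch.no_grad()
def detect(model, image, threshold=0.5):
    """Illustrative detection: threshold the category heat map to recover
    the category and pixel positions of each region of interest."""
    model.eval()
    category_heatmap, _, _, _ = model(image.unsqueeze(0))  # add batch dimension
    probs = torch.sigmoid(category_heatmap)[0]             # (C, H, W)
    results = []
    for cls in range(probs.shape[0]):
        mask = probs[cls] > threshold  # pixels predicted as this category
        if mask.any():
            results.append((cls, mask.nonzero()))  # category, positions
    return results
```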
For specific limitations of the image detection apparatus, reference may be made to the above limitations of the image detection method, which are not described herein again.
All or part of the modules in the above network training apparatus and image detection apparatus may be implemented by software, hardware, or a combination of the two. Each module may be embedded in, or independent of, a processor in a computer device in hardware form, or stored in a memory in the computer device in software form, so that the processor can invoke it and execute the operations corresponding to the module.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring a training medical image set, wherein the training medical image set comprises a plurality of training medical images, a category label corresponding to each training medical image, and position labels corresponding to a part of the training medical images; inputting each training medical image into an initial network, determining the predicted position of the region of interest in each training medical image through a backbone network and a first network in the initial network, and determining the prediction category corresponding to each training medical image through the backbone network and a second network in the initial network; calculating the category loss between the prediction category corresponding to each training medical image and the category label corresponding to that image, and calculating the position loss between the predicted position of the region of interest in each training medical image and the position labels of the labeled part of the training medical images; and training the initial network according to the category loss and the position loss, and determining the trained network.
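As a hypothetical illustration of such a partially annotated set, each training record might be represented as follows; the field names and the box format of the position label are assumptions introduced here, not part of the embodiment.

```python
from dataclasses import dataclass
from typing import Optional
import torch

@dataclass
class TrainingSample:
    """Hypothetical record for one training medical image: every image has a
    category label, but only a subset carries a position label."""
    image: torch.Tensor                 # (1, H, W) medical image
    category_label: int                 # image category label
    is_positive: bool                   # positive/negative sample label
    position_label: Optional[torch.Tensor] = None  # (K, 4) boxes, None if unlabeled
```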
In one embodiment, the processor, when executing the computer program, further performs the steps of:
calculating a first category loss between the prediction sample category corresponding to each training medical image and the corresponding positive/negative sample label; calculating a second category loss between the image category label of each training medical image and the corresponding prediction image category; and determining the category loss according to the first category loss and the second category loss.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring the positive sample heat map corresponding to each training medical image output by the second network and the category heat map corresponding to each training medical image output by the first network, wherein the positive sample heat map contains the prediction sample categories corresponding to the positive-sample training medical images and the category heat map contains the prediction category and the predicted position of the region of interest in each training medical image; calculating the heat map loss between the positive sample heat map and the category heat map; and determining the category loss according to the first category loss, the second category loss, and the heat map loss.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
summing the first category loss, the second category loss, and the heat map loss to determine the category loss.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
calculating the loss between the predicted position of the region of interest in each training medical image and the position label of the region of interest in the corresponding training medical image to obtain the position loss.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
summing the category loss and the position loss, and taking the sum as the value of a loss function; and training the initial network by using the value of the loss function to determine the trained network.
In one embodiment, the first network includes a first convolution module and a first output channel, and the first output channel is used for outputting the category heat map.
In one embodiment, the second network comprises a second convolution module and a second output channel, a first fully connected layer and a third output channel, and a second fully connected layer and a fourth output channel; the second output channel is used for outputting the positive sample heat map, the third output channel is used for outputting the prediction sample category of the training medical image, and the fourth output channel is used for outputting the prediction image category of the training medical image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring a medical image to be detected, wherein the medical image to be detected comprises at least one region of interest; inputting the medical image to be detected into the trained network for detection processing, and determining the detection result corresponding to the medical image to be detected, wherein the network is a network model trained according to the above network training method, and the detection result comprises the categories and positions of all regions of interest in the medical image to be detected.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a training medical image set, wherein the training medical image set comprises a plurality of training medical images, a category label corresponding to each training medical image, and position labels corresponding to a part of the training medical images; inputting each training medical image into an initial network, determining the predicted position of the region of interest in each training medical image through a backbone network and a first network in the initial network, and determining the prediction category corresponding to each training medical image through the backbone network and a second network in the initial network; calculating the category loss between the prediction category corresponding to each training medical image and the category label corresponding to that image, and calculating the position loss between the predicted position of the region of interest in each training medical image and the position labels of the labeled part of the training medical images; and training the initial network according to the category loss and the position loss, and determining the trained network.
In one embodiment, the computer program when executed by the processor further performs the steps of:
calculating a first category loss between the prediction sample category corresponding to each training medical image and the corresponding positive/negative sample label; calculating a second category loss between the image category label of each training medical image and the corresponding prediction image category; and determining the category loss according to the first category loss and the second category loss.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring the positive sample heat map corresponding to each training medical image output by the second network and the category heat map corresponding to each training medical image output by the first network, wherein the positive sample heat map contains the prediction sample categories corresponding to the positive-sample training medical images and the category heat map contains the prediction category and the predicted position of the region of interest in each training medical image; calculating the heat map loss between the positive sample heat map and the category heat map; and determining the category loss according to the first category loss, the second category loss, and the heat map loss.
In one embodiment, the computer program when executed by the processor further performs the steps of:
summing the first category loss, the second category loss, and the heat map loss to determine the category loss.
In one embodiment, the computer program when executed by the processor further performs the steps of:
calculating the loss between the predicted position of the region of interest in each training medical image and the position label of the region of interest in the corresponding training medical image to obtain the position loss.
In one embodiment, the computer program when executed by the processor further performs the steps of:
summing the category loss and the position loss, and taking the sum as the value of a loss function; and training the initial network by using the value of the loss function to determine the trained network.
In one embodiment, the first network includes a first convolution module and a first output channel, and the first output channel is used for outputting the category heat map.
In one embodiment, the second network comprises a second convolution module and a second output channel, a first fully connected layer and a third output channel, and a second fully connected layer and a fourth output channel; the second output channel is used for outputting the positive sample heat map, the third output channel is used for outputting the prediction sample category of the training medical image, and the fourth output channel is used for outputting the prediction image category of the training medical image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a medical image to be detected, wherein the medical image to be detected comprises at least one region of interest; inputting the medical image to be detected into the trained network for detection processing, and determining the detection result corresponding to the medical image to be detected, wherein the network is a network model trained according to the above network training method, and the detection result comprises the categories and positions of all regions of interest in the medical image to be detected.
Those skilled in the art will understand that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, and the like. Volatile memory may include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM may take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; nevertheless, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art may make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for network training, wherein the network comprises a backbone network, a first network, and a second network, the method comprising:
acquiring a training medical image set, wherein the training medical image set comprises a plurality of training medical images, a category label corresponding to each training medical image, and position labels corresponding to a part of the training medical images;
inputting each training medical image into an initial network, determining a predicted position of a region of interest in each training medical image through a backbone network and a first network in the initial network, and determining a prediction category corresponding to each training medical image through the backbone network and a second network in the initial network;
calculating a category loss between the prediction category corresponding to each training medical image and the category label corresponding to each training medical image, and calculating a position loss between the predicted position of the region of interest in each training medical image and the position labels corresponding to the part of the training medical images;
and training the initial network according to the category loss and the position loss, and determining the trained network.
2. The method according to claim 1, wherein the category labels comprise image category labels of the training medical images and/or positive/negative sample labels of the training medical images, the prediction categories corresponding to the training medical images comprise prediction sample categories of the training medical images and prediction image categories of the training medical images, and the calculating of the category loss between the prediction category corresponding to each training medical image and the category label corresponding to each training medical image comprises:
calculating a first category loss between the prediction sample category corresponding to each training medical image and the corresponding positive/negative sample label;
calculating a second category loss between the image category label of each training medical image and the corresponding prediction image category;
and determining the category loss according to the first category loss and the second category loss.
3. The method according to claim 2, wherein the determining the category loss according to the first category loss and the second category loss comprises:
acquiring a positive sample heat map corresponding to each training medical image output by the second network and a category heat map corresponding to each training medical image output by the first network, wherein the positive sample heat map comprises the prediction sample category corresponding to each positive-sample training medical image among the training medical images, and the category heat map comprises the prediction category and the predicted position of the region of interest in the training medical image;
calculating a heat map loss between the positive sample heat map and the category heat map;
and determining the category loss according to the first category loss, the second category loss, and the heat map loss.
4. The method according to claim 3, wherein the determining the category loss according to the first category loss, the second category loss, and the heat map loss comprises:
summing the first category loss, the second category loss, and the heat map loss to determine the category loss.
5. The method according to claim 1, wherein the position labels comprise position labels of part or all of the regions of interest in the training medical images, and the calculating of the position loss between the predicted position of the region of interest in each training medical image and the position labels corresponding to the part of the training medical images comprises:
calculating the loss between the predicted position of the region of interest in each training medical image and the position label of the region of interest in the corresponding training medical image to obtain the position loss.
6. The method according to any one of claims 1-5, wherein the training of the initial network according to the category loss and the position loss and the determining of the trained network comprise:
summing the category loss and the position loss, and taking the sum as the value of a loss function;
and training the initial network by using the value of the loss function to determine the trained network.
7. The method of claim 3, wherein the first network comprises a first convolution module and a first output channel, and wherein the first output channel is configured to output the category heat map.
8. The method of claim 3, wherein the second network comprises a second convolution module and a second output channel, a first fully-connected layer and a third output channel, a second fully-connected layer and a fourth output channel;
the second output channel is configured to output the positive sample heatmap, the third output channel is configured to output a predicted sample category of the training medical image, and the fourth output channel is configured to output a predicted image category of the training medical image.
9. An image detection method, characterized in that the method comprises:
acquiring a medical image to be detected; the medical image to be detected comprises at least one region of interest;
inputting the medical image to be detected into a trained network for detection processing, and determining a detection result corresponding to the medical image to be detected, wherein the network is a network model trained according to the method of any one of claims 1-8, and the detection result comprises the categories and positions of all regions of interest in the medical image to be detected.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1-9.
CN202110589927.3A 2021-05-28 2021-05-28 Network training method, image detection method, and medium Pending CN113379687A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110589927.3A CN113379687A (en) 2021-05-28 2021-05-28 Network training method, image detection method, and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110589927.3A CN113379687A (en) 2021-05-28 2021-05-28 Network training method, image detection method, and medium

Publications (1)

Publication Number Publication Date
CN113379687A true CN113379687A (en) 2021-09-10

Family

ID=77574673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110589927.3A Pending CN113379687A (en) 2021-05-28 2021-05-28 Network training method, image detection method, and medium

Country Status (1)

Country Link
CN (1) CN113379687A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110335250A (en) * 2019-05-31 2019-10-15 上海联影智能医疗科技有限公司 Network training method, device, detection method, computer equipment and storage medium
CN111160367A (en) * 2019-12-23 2020-05-15 上海联影智能医疗科技有限公司 Image classification method and device, computer equipment and readable storage medium
CN111476290A (en) * 2020-04-03 2020-07-31 北京推想科技有限公司 Detection model training method, lymph node detection method, apparatus, device and medium
CN111709485A (en) * 2020-06-19 2020-09-25 腾讯科技(深圳)有限公司 Medical image processing method and device and computer equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023061195A1 (en) * 2021-10-15 2023-04-20 腾讯科技(深圳)有限公司 Image acquisition model training method and apparatus, image detection method and apparatus, and device
CN115082748A (en) * 2022-08-23 2022-09-20 浙江大华技术股份有限公司 Classification network training and target re-identification method, device, terminal and storage medium
CN115082748B (en) * 2022-08-23 2022-11-22 浙江大华技术股份有限公司 Classification network training and target re-identification method, device, terminal and storage medium

Similar Documents

Publication Publication Date Title
US10810735B2 (en) Method and apparatus for analyzing medical image
US11049240B2 (en) Method and system for assessing bone age using deep neural network
CN110136103B (en) Medical image interpretation method, device, computer equipment and storage medium
CN111325714B (en) Method for processing region of interest, computer device and readable storage medium
CN113379687A (en) Network training method, image detection method, and medium
CN111488872B (en) Image detection method, image detection device, computer equipment and storage medium
CN111583199B (en) Sample image labeling method, device, computer equipment and storage medium
CN111429482A (en) Target tracking method and device, computer equipment and storage medium
CN112151179B (en) Image data evaluation method, device, equipment and storage medium
CN110335250A (en) Network training method, device, detection method, computer equipment and storage medium
CN110599421A (en) Model training method, video fuzzy frame conversion method, device and storage medium
CN110728673A (en) Target part analysis method and device, computer equipment and storage medium
CN111583184A (en) Image analysis method, network, computer device, and storage medium
CN113160199B (en) Image recognition method and device, computer equipment and storage medium
CN112102235B (en) Human body part recognition method, computer device, and storage medium
CN111145152B (en) Image detection method, computer device, and storage medium
CN111951316A (en) Image quantization method and storage medium
CN116977895A (en) Stain detection method and device for universal camera lens and computer equipment
CN111583264A (en) Training method for image segmentation network, image segmentation method, and storage medium
CN116091522A (en) Medical image segmentation method, device, equipment and readable storage medium
CN113902670B (en) Ultrasonic video segmentation method and device based on weak supervised learning
CN111784637B (en) Prognostic characteristic visualization method, system, equipment and storage medium
CN112862002A (en) Training method of multi-scale target detection model, target detection method and device
CN113393498A (en) Image registration method and device, computer equipment and storage medium
CN112967235A (en) Image detection method, image detection device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination