CN112802012A - Pathological image detection method, pathological image detection device, computer equipment and storage medium

Pathological image detection method, pathological image detection device, computer equipment and storage medium

Info

Publication number
CN112802012A
Authority
CN
China
Prior art keywords
neural network
image detection
additional input
network model
pathological
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110254516.9A
Other languages
Chinese (zh)
Inventor
陈翔
李芳芳
张宇
谢佩珍
赵爽
陈明亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiangya Hospital of Central South University
Original Assignee
Xiangya Hospital of Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiangya Hospital of Central South University filed Critical Xiangya Hospital of Central South University
Priority to CN202110254516.9A
Publication of CN112802012A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30088 Skin; Dermal

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Medical Informatics (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a pathological image detection method, a pathological image detection device, computer equipment and a storage medium. The method comprises the following steps: acquiring a pathological image to be detected; and performing pathological detection on the pathological image to be detected with a pre-trained pathological image detection model to obtain a pathological image detection result. The training process of the pathological image detection model comprises the following steps: acquiring a sample set, wherein the sample set comprises pathological images obtained by processing the original pathological images; training each neural network model on the sample set to obtain a corresponding image detection model, wherein each neural network model is constructed by adding a feature extraction channel to an original neural network model; performing an image detection test with each image detection model to obtain corresponding image detection test results; and comparing the image detection test results, and determining the pathological image detection model from among the image detection models according to the comparison result. The method improves the detection accuracy and efficiency of pathological image detection.

Description

Pathological image detection method, pathological image detection device, computer equipment and storage medium
Technical Field
The present application relates to the field of image detection technologies, and in particular, to a pathological image detection method, apparatus, computer device, and storage medium.
Background
Histopathology is the gold standard for diagnosing a locally suspicious hyperplasia as a benign or malignant disease and its subtypes, and a disease diagnosis can be obtained by performing pathological detection on pathological images. With the development of convolutional neural networks in histopathological image detection, high detection accuracy has been achieved in medical image detection tasks such as disease classification, lesion segmentation, cell detection, cell nucleus detection and mitosis detection.
In skin pathology image detection, convolutional neural networks can classify malignant melanoma and benign nevi at the pathological level. However, existing convolutional neural networks lack the ability to fuse features at different scales; when the differences in cell morphology in a pathological image are small or cells are densely packed, their resolving power is insufficient, which reduces the accuracy and efficiency of pathological image detection.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide a pathological image detection method, apparatus, computer device and storage medium capable of improving the accuracy and efficiency of pathological image detection.
A pathological image detection method, the method comprising:
acquiring a pathological image to be detected;
carrying out pathological detection on the pathological image to be detected by adopting a pre-trained pathological image detection model to obtain a pathological image detection result;
the training process of the pathological image detection model comprises the following steps:
acquiring a sample set, wherein the sample set comprises pathological images obtained after processing all original pathological images;
training each neural network model according to the sample set to obtain each corresponding image detection model, wherein each neural network model is constructed by adding a feature extraction channel to an original neural network model;
carrying out image detection test by adopting each image detection model to obtain corresponding image detection test results;
and comparing the image detection test results, and determining the pathological image detection model from the image detection models according to the comparison result.
In one embodiment, a construction method of each neural network model includes:
inserting additional input modules at respective model positions of the original neural network model to obtain the constructed neural network models, wherein the additional input modules are used to add feature extraction channels for the sample set during model training, and the number of additional input modules is at least one.
In one embodiment, the inserting additional input modules at model positions of the original neural network model respectively comprises:
inserting each of the additional input modules at the end of a corresponding down-sampling layer or run of consecutive down-sampling layers of the original neural network, the output width of each additional input module being the same as the output width of that down-sampling layer or run of consecutive down-sampling layers.
In one embodiment, when the neural network models are trained according to the sample set to obtain the corresponding image detection models, the images input to the additional input modules are scaled versions of the pathological images in the sample set.
In one embodiment, the number of insertable positions for additional input modules in the original neural network model is the same as the number of non-consecutive down-sampling layers of the original neural network model, and the number of insertable positions is at least one.
In one embodiment, the determining manner of the number of the additional input modules and the corresponding insertion positions inserted into the original neural network model includes:
determining a search space comprising insertable locations of the original neural network model;
acquiring a current additional input module, and respectively inserting the current additional input module into each insertable position of the original neural network model to acquire each initial neural network model; carrying out image detection test on each initial neural network model to obtain corresponding initial detection results;
determining the insertion position of the current additional input module according to the initial detection results, and removing from the search space the determined insertion position and any insertable position whose initial detection result has an accuracy less than a preset accuracy;
acquiring a next additional input module as a current additional input module, and returning to the step of respectively inserting the current additional input module into each insertable position of the original neural network model until the search space is empty;
and among the initial detection results corresponding to the determined insertion positions, determining the additional input modules and corresponding insertion positions associated with the initial detection result of highest detection accuracy as the additional input modules and corresponding insertion positions to be inserted into the original neural network model.
In one embodiment, the determining manner of the number of the additional input modules and the corresponding insertion positions inserted into the original neural network model includes:
determining a search space comprising insertable locations of the original neural network model;
carrying out image detection test on the original neural network model to obtain an original detection result;
acquiring a current additional input module and a current pluggable position of the search space;
inserting the current additional input module to obtain a current initial neural network model, wherein the current additional input module is inserted into the current insertable position of the original neural network model; performing image detection test on the current initial neural network model to obtain a first initial detection result;
when the first initial detection result is larger than the original detection result, determining an insertion position corresponding to the first initial detection result as an insertion position of a current additional input module, and taking the current initial network model corresponding to the first initial detection result as the original neural network model;
removing the current pluggable position from the search space, using a next additional input module as a current additional input module, acquiring the next pluggable position of the search space as a current pluggable position, and returning to the step of acquiring the current additional input module and the current pluggable position of the search space until the search space is empty;
and among the initial detection results corresponding to the determined insertion positions, determining the additional input modules and corresponding insertion positions associated with the initial detection result of highest detection accuracy as the additional input modules and corresponding insertion positions to be inserted into the original neural network model.
A pathological image detection device, the device comprising:
the pathological image acquisition module is used for acquiring a pathological image to be detected;
the pathological image detection module is used for carrying out pathological detection on the pathological image to be detected by adopting a pre-trained pathological image detection model to obtain a pathological image detection result;
the pathological image detection model acquisition module is used for acquiring the pathological image detection model; wherein the training process of the pathological image detection model comprises the following steps: acquiring a sample set, wherein the sample set comprises pathological images obtained by processing the original pathological images; training each neural network model according to the sample set to obtain each corresponding image detection model, wherein each neural network model is constructed by adding a feature extraction channel to an original neural network model; performing an image detection test with each image detection model to obtain corresponding image detection test results; and comparing the image detection test results, and determining the pathological image detection model from the image detection models according to the comparison result.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the method described above when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method.
According to the pathological image detection method, the pathological image detection device, the computer equipment and the storage medium, a pathological image to be detected is acquired, and pathological detection is performed on the pathological image to be detected with a pre-trained pathological image detection model to obtain a pathological image detection result. During training of the pathological image detection model, each neural network model is trained on a sample set to obtain a corresponding image detection model, each neural network model being constructed by adding a feature extraction channel to an original neural network model, and the pathological image detection model is then determined from among the image detection models. Because the neural network models are constructed by adding feature extraction channels, the trained pathological image detection model retains more image features, which effectively improves the accuracy and efficiency of pathological image detection.
Drawings
FIG. 1 is a diagram of an exemplary embodiment of a pathological image detection method;
FIG. 2 is a flowchart illustrating a method for detecting a pathological image according to an embodiment;
FIG. 3 is a flowchart illustrating a training process of a pathology image detection model in one embodiment;
FIG. 4 is a schematic diagram of a neural network model constructed in one embodiment;
FIG. 5 is a schematic diagram of an additional input module in one embodiment;
FIG. 6 is a schematic illustration of a pathology image in a sample set in one embodiment;
FIG. 7 is a block diagram showing the structure of a pathological image detection apparatus according to an embodiment;
FIG. 8 is a block diagram of the pathological image detection model acquisition module in one embodiment;
FIG. 9 is a diagram showing an internal structure of a computer device in one embodiment;
fig. 10 is an internal structural view of a computer device in another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, the pathological image detection method provided by the present application may be applied to the application environment shown in fig. 1, which involves a terminal 102 and a server 104, the terminal 102 communicating with the server 104 through a network. Specifically, the server 104 acquires a pathological image to be detected through the terminal 102 and performs pathological detection on the pathological image to be detected with a pre-trained pathological image detection model to obtain a pathological image detection result. The training process of the pathological image detection model comprises the following steps: acquiring a sample set, wherein the sample set comprises pathological images obtained by processing the original pathological images; training each neural network model according to the sample set to obtain each corresponding image detection model, wherein each neural network model is constructed by adding a feature extraction channel to the original neural network model; performing an image detection test with each image detection model to obtain corresponding image detection test results; and comparing the image detection test results, and determining the pathological image detection model from the image detection models according to the comparison result.
In one embodiment, the server 104 may also train to obtain a pathological image detection model, and the terminal 102 acquires the pathological image detection model from the server 104. The terminal 102 acquires a pathological image to be detected; and carrying out pathological detection on the pathological image to be detected by adopting the acquired pathological image detection model to obtain a pathological image detection result.
In one embodiment, the pathological image detection model may also be obtained by training the terminal 102. The terminal 102 acquires a pathological image to be detected; and carrying out pathological detection on the pathological image to be detected by adopting a pre-trained pathological image detection model to obtain a pathological image detection result.
The terminal 102 may be, but is not limited to, a personal computer, a notebook computer, a smart phone, a tablet computer, or a portable wearable device, and the server 104 may be implemented as an independent server or as a server cluster formed by a plurality of servers.
In one embodiment, as shown in fig. 2, a pathological image detection method is provided. The method is described here, by way of example, as being applied to the terminal 102 or the server 104 for pathological image detection, and includes the following steps:
step S202, acquiring a pathological image to be detected.
The pathological image to be detected is a histopathological image of a patient that requires pathological detection; such histopathological images are referred to herein as pathological images. Histopathology is the gold standard for diagnosing a locally suspicious hyperplasia as a benign or malignant disease and its subtypes, and a disease diagnosis can be obtained by performing pathological detection on pathological images. Specifically, a pathological image to be detected is acquired.
Step S204, performing pathological detection on the pathological image to be detected with a pre-trained pathological image detection model to obtain a pathological image detection result.
The pathological image detection model is a pre-trained model that can be run directly when used; pathological detection is performed by importing the pathological image to be detected, yielding the disease type corresponding to the pathological image to be detected. Specifically, the pathological image to be detected is imported into the pre-trained pathological image detection model for pathological detection, and a pathological image detection result is obtained.
In one embodiment, as shown in fig. 3, a training process of the pathological image detection model is provided. The training process is described here, by way of example, as being applied to the terminal 102 or the server 104 for training the pathological image detection model, and includes the following steps:
step S302, a sample set is obtained, wherein the sample set comprises pathological images obtained after processing of the original pathological images.
The original pathological images are pathological images of different disease types that are derived from patients and have not been preprocessed. Specifically, when the pathological image detection is applied to skin pathology, the original pathological images mainly include whole-slide images (full-field digital pathological section images, WSI) from a plurality of hospitals and from The Cancer Genome Atlas (TCGA) database, covering melanoma, intradermal nevus, junction nevus, compound nevus and other diseases.
In one embodiment, each original pathological image is preprocessed, mainly in terms of image color and size. Specifically, the Macenko method may be used to perform color normalization on the original WSI pathological images, and each WSI image is cut, at a preset microscope magnification, into patch images of a preset size, called sub-images. The preset magnification may be set to 40x and the preset size to 512 x 512 pixels.
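The patent does not give an implementation of this preprocessing step; the following is a minimal sketch, assuming the OpenSlide library is available for reading WSI files and using a placeholder macenko_normalize function for the Macenko stain normalization (not shown).

```python
# A minimal patch-extraction sketch. The use of OpenSlide and the
# macenko_normalize placeholder are assumptions for illustration; the patent
# does not specify an implementation.
import numpy as np
import openslide

PATCH_SIZE = 512   # preset patch size: 512 x 512 pixels
LEVEL = 0          # pyramid level corresponding to the 40x scan in this sketch


def macenko_normalize(patch: np.ndarray) -> np.ndarray:
    """Placeholder for Macenko stain normalization (implementation not shown)."""
    return patch


def extract_patches(wsi_path: str):
    """Tile one WSI into non-overlapping 512 x 512 RGB patches."""
    slide = openslide.OpenSlide(wsi_path)
    width, height = slide.level_dimensions[LEVEL]
    for y in range(0, height - PATCH_SIZE + 1, PATCH_SIZE):
        for x in range(0, width - PATCH_SIZE + 1, PATCH_SIZE):
            region = slide.read_region((x, y), LEVEL, (PATCH_SIZE, PATCH_SIZE))
            yield macenko_normalize(np.asarray(region.convert("RGB")))
```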
In one embodiment, after the original pathological images have been processed, sub-images meeting a preset condition are selected from each WSI image as the pathological images of the sample set, and the sample set is used for training the pathological image detection model. The preset condition may require that the lesion tissue area in the sub-image accounts for 50% or more of the total area of the sub-image, so as to improve the detection accuracy of the trained pathological image detection model.
In one embodiment, after the sample set is obtained, it may be expanded by horizontal rotation, flipping and cutting, and the pathological images in the sample set may be divided into a training set, a verification set and a test set. The verification set is a subdivision of the training set and is used for tuning the parameters of the model. When the sample set is divided, the pathological images in the three sets do not intersect; that is, pathological images belonging to the same patient exist in only one of the training set, the verification set and the test set. Specifically, the patient from which a pathological image is derived can be identified by the image name, attributes and the like. A sketch of such a patient-level split is given below.
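The following sketch illustrates a patient-level split, assuming each patch file name encodes the patient it came from (e.g. "patientID_sliceID_x_y.png"); the naming scheme and the split ratios are assumptions for illustration.

```python
# Split patches so that every patient appears in exactly one of the three sets.
import random
from collections import defaultdict


def split_by_patient(patch_names, ratios=(0.7, 0.1, 0.2), seed=0):
    by_patient = defaultdict(list)
    for name in patch_names:
        patient_id = name.split("_")[0]          # assumed naming convention
        by_patient[patient_id].append(name)

    patients = sorted(by_patient)
    random.Random(seed).shuffle(patients)
    n_train = int(ratios[0] * len(patients))
    n_val = int(ratios[1] * len(patients))

    # All patches of one patient end up in exactly one of the three sets.
    train = [p for pid in patients[:n_train] for p in by_patient[pid]]
    val = [p for pid in patients[n_train:n_train + n_val] for p in by_patient[pid]]
    test = [p for pid in patients[n_train + n_val:] for p in by_patient[pid]]
    return train, val, test
```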
Step S304, training each neural network model according to the sample set to obtain each corresponding image detection model; each neural network model is constructed by adding a feature extraction channel to the original neural network model.
In one embodiment, in order to improve the detection accuracy of the pathological image detection model, a feature extraction channel is added to the original neural network model so that more image features can be retained. Specifically, an additional input module is inserted into the original neural network model to construct a multi-scale neural network model; the layer at which a module is inserted is called the insertion layer, and the additional image feature information supplied at the insertion layer improves the feature extraction capability of the model.
Specifically, additional input modules are inserted at the model positions of the original neural network model to obtain the constructed neural network models, the additional input modules being used to add feature extraction channels for the sample set during model training. The number of additional input modules is at least one; however, if inserting additional input modules at all model positions of the original neural network model cannot improve its detection accuracy, the number of additional input modules may be 0.
In one embodiment, the positions in the original neural network model at which additional input modules can be inserted are called insertable positions. Specifically, the number of insertable positions in the original neural network model is the same as the number of non-consecutive down-sampling layers of the original neural network model, and the number of insertable positions is at least one. For example, the VGG neural network model has 5 non-consecutive down-sampling layers, so the model has 5 insertable positions. The ResNet50 neural network model has 5 down-sampling layers, 2 of which are consecutive, so it has 4 non-consecutive down-sampling layers and correspondingly 4 insertable positions.
In one embodiment, the images input to the additional input modules are scaled pathological images from the sample set, so that more global image features are preserved. Specifically, as shown in fig. 4, a schematic diagram of a constructed neural network model is provided. The original neural network model consists of 5 Block modules and a classifier layer and has 3 insertable positions. After 3 Input Block additional input modules are inserted into the original neural network model, the constructed neural network model is obtained. In the constructed neural network model, the pathological image I0 in the sample set is input to Block module 1, and the images I1, I2 and I3 input to the Input Block additional input modules are obtained by down-sampling the pathological image I0, the sizes of I1, I2 and I3 being equal to the image sizes of the respective insertion layers.
In one embodiment, all of the additional input modules have the same topology. As shown in fig. 5, a schematic diagram of an additional input module is provided. The additional input module may be a three-layer stack of 1 x 1, 3 x 3 and 1 x 1 convolutional layers that gradually expands the feature extraction channels of the image from 3 to N, where N is matched to the corresponding down-sampling layer. The output of the additional input module and the output of the down-sampling layer are aggregated by concatenation, and the concatenated result is used as the input of the insertion layer.
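As an illustration of this structure, the following PyTorch sketch builds a three-layer additional input module and fuses its output with a down-sampling layer output by concatenation. The intermediate channel width, the BatchNorm/ReLU layers and the bilinear down-scaling of the input image are assumptions not specified in the patent.

```python
# Sketch of an additional input module (1x1, 3x3, 1x1 convolutions expanding
# a scaled 3-channel image to N channels) and its fusion with a down-sampling
# layer output by channel-wise concatenation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdditionalInputModule(nn.Module):
    def __init__(self, out_channels: int):
        super().__init__()
        mid = max(out_channels // 2, 3)
        self.layers = nn.Sequential(
            nn.Conv2d(3, mid, kernel_size=1),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, padding=1),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, out_channels, kernel_size=1),
            nn.BatchNorm2d(out_channels), nn.ReLU(inplace=True),
        )

    def forward(self, scaled_image: torch.Tensor) -> torch.Tensor:
        return self.layers(scaled_image)


def fuse(downsample_out: torch.Tensor, full_image: torch.Tensor,
         module: AdditionalInputModule) -> torch.Tensor:
    """Scale the original pathology image I0 to the spatial size of the
    insertion layer and concatenate the module output with the down-sampling
    layer output along the channel dimension."""
    scaled = F.interpolate(full_image, size=downsample_out.shape[-2:],
                           mode="bilinear", align_corners=False)
    return torch.cat([downsample_out, module(scaled)], dim=1)
```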
In one embodiment, the additional input modules are inserted into the original neural network model according to a preset insertion rule. The preset insertion rule is that each additional input module can only be inserted at the end of a down-sampling layer or of a run of consecutive down-sampling layers of the original neural network model, and that the output width of the additional input module is the same as that of the down-sampling layer or run of consecutive down-sampling layers. Specifically, each additional input module is inserted at the end of the corresponding down-sampling layer or run of consecutive down-sampling layers of the original neural network, and the output width of each additional input module is the same as the output width of that down-sampling layer or run of consecutive down-sampling layers.
Specifically, each neural network model is trained according to the training set and the verification set to obtain the corresponding image detection model; each neural network model is constructed by adding a feature extraction channel to the original neural network model.
Step S306, performing an image detection test with each image detection model to obtain corresponding image detection test results.
Specifically, image detection tests are performed on the image detection models according to the test set, and image detection test results corresponding to the image detection models are obtained. Wherein the image detection test result may include detection accuracy.
Step S308, comparing the image detection test results, and determining the pathological image detection model from the image detection models according to the comparison result.
Specifically, the image detection test results, i.e., the detection accuracies, are compared, and the image detection model with the highest detection accuracy is determined as the pathological image detection model used for pathological detection.
In the pathological image detection method, a pathological image to be detected is acquired, and pathological detection is performed on the pathological image to be detected with a pre-trained pathological image detection model to obtain a pathological image detection result. During training of the pathological image detection model, each neural network model is trained on the sample set to obtain a corresponding image detection model, each neural network model being constructed by adding a feature extraction channel to an original neural network model, and the pathological image detection model is then determined from among the image detection models. With this method, additional input modules are inserted into the original neural network model to add feature extraction channels, so that the trained pathological image detection model retains more image features, which effectively improves the accuracy and efficiency of pathological image detection.
Because the number of additional input modules inserted into the original neural network model and their insertion positions may differ, the model obtained by subsequent training is affected differently, and in some cases inserting an additional input module may even degrade the detection accuracy of the model. Therefore, in order to improve the detection accuracy and efficiency of the model, two methods are proposed for determining the number of additional input modules and the corresponding insertion positions in the original neural network model.
In one embodiment, a method for determining the number of additional input modules inserted into an original neural network model and corresponding insertion positions is provided, and the method prioritizes the detection precision of the model, and is called a precision-first multi-scale network search algorithm, and the method comprises the following steps:
step S402, determining a search space, wherein the search space comprises all insertable positions of the original neural network model.
Each insertable position of the original neural network model is determined according to the number of non-consecutive down-sampling layers of the original neural network model, and the set of all insertable positions is taken as the search space. Specifically, if the m insertable positions are denoted q1, q2, ..., qm, then the search space is Q = {q1, q2, ..., qm}.
Step S404, acquiring a current additional input module, and respectively inserting the current additional input module into each insertable position of the original neural network model to acquire each initial neural network model; and carrying out image detection test on each initial neural network model to obtain corresponding initial detection results.
When additional input modules are inserted into the original neural network model, only one additional input module and its insertion position are determined at a time, and the total number of additional input modules and their insertion positions are determined step by step. When the number of additional input modules is one, only its insertion position needs to be determined; when the number is two, the insertion position of the first additional input module is determined through the above steps and only the insertion position of the second additional input module remains to be determined, and so on.
Specifically, the current additional input module is acquired and inserted, in turn, at each insertable position q1, q2, ..., qm of the original neural network model to obtain the initial neural network models; an image detection test is performed on each initial neural network model to obtain the corresponding initial detection results, i.e., detection accuracies.
Step S406, determining the insertion position of the current additional input module according to each initial detection result, and removing the determined insertion position and the insertable position corresponding to the initial detection result with the accuracy less than the preset accuracy from the search space.
The preset accuracy is set to the image detection test result obtained by subjecting the original neural network model to the image detection test. Specifically, after the initial detection results corresponding to the initial network models are obtained, the initial detection result with the highest detection accuracy is selected, and the insertable position corresponding to that result is determined as the insertion position of the current additional input module. That insertion position is then removed from the search space, together with any insertable position whose initial detection result is below the preset accuracy, so that the search space becomes smaller and smaller.
Step S408, acquiring the next additional input module as the current additional input module, and returning to the step of inserting the current additional input module at each insertable position of the original neural network model, until the search space is empty.
After the insertion position corresponding to the first additional input module is determined, when the number of the additional input modules is two, only the insertion position corresponding to the second additional input module needs to be determined. The determination may be made by returning and repeating the above steps S404-S406 until no insertable locations exist in the search space, i.e., the search space is empty.
Specifically, the next additional input module is obtained as the current additional input module, and the step of respectively inserting the current additional input module into each insertable position of the original neural network model is returned until the search space is empty.
Step S410, among the initial detection results corresponding to the determined insertion positions, determining the additional input modules and corresponding insertion positions associated with the initial detection result of highest detection accuracy as the additional input modules and corresponding insertion positions to be inserted into the original neural network model.
Specifically, steps S404-S408 yield initial detection results for a number of additional input modules and corresponding insertion positions; among these, the additional input modules and corresponding insertion positions associated with the initial detection result of highest detection accuracy are selected and determined as the additional input modules and corresponding insertion positions to be inserted into the original neural network model.
In one embodiment, the computer code of the precision-first multi-scale network search algorithm is as follows. Q denotes the search space, m the number of insertable positions, q1, q2, ..., qm the 1st, 2nd, ..., m-th insertable positions, model an initial network model, r the detection accuracy of the model, W the WSI pathological image sample set, and σ a model accuracy threshold used to improve the efficiency of the network search, which may be set to 0.0005. The time complexity of the precision-first multi-scale network search algorithm is O(m²·T), where T is the time needed to train and test a model on the sample set.
[The pseudocode of the precision-first multi-scale network search algorithm appears as an image in the original publication and is not reproduced here.]
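Since the original pseudocode image is unavailable, the following Python sketch reconstructs the algorithm from steps S402-S410 and the worked example in Table 2. The helper callables train_and_test and insert_module, and the use of σ as a minimum required improvement, are assumptions for illustration rather than the patent's own code.

```python
# Precision-first multi-scale network search (reconstruction).
# train_and_test(model) -> accuracy; insert_module(model, position) -> model.
def precision_first_search(base_model, positions, insert_module, train_and_test,
                           sigma=0.0005):
    baseline_acc = train_and_test(base_model)      # preset accuracy
    search_space = list(positions)                 # Q = {q1, ..., qm}
    current_model, current_acc = base_model, baseline_acc
    chosen_positions = []

    while search_space:
        # Try the current additional input module at every remaining position.
        results = {q: train_and_test(insert_module(current_model, q))
                   for q in search_space}
        q_star = max(results, key=results.get)
        if results[q_star] <= current_acc + sigma:
            break                                  # no position improves accuracy
        current_model = insert_module(current_model, q_star)
        current_acc = results[q_star]
        chosen_positions.append(q_star)
        # Drop the chosen position and any position whose accuracy fell below
        # the preset accuracy.
        search_space = [q for q in search_space
                        if q != q_star and results[q] >= baseline_acc]

    return chosen_positions, current_acc
```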
In one embodiment, another method for determining the number of additional input modules inserted into the original neural network model and the corresponding insertion positions is provided, and the method prioritizes the training speed of the model, and is called a speed-first multi-scale network search algorithm, and comprises the following steps:
step S502, determining a search space, wherein the search space comprises all insertable positions of the original neural network model.
Each insertable position of the original neural network model is determined according to the number of non-consecutive down-sampling layers of the original neural network model, and the set of all insertable positions is taken as the search space. In the search space, the insertable positions are ordered from shallow to deep according to their depth in the original neural network model. Specifically, if the m insertable positions are denoted q1, q2, ..., qm, then the search space is Q = {q1, q2, ..., qm}.
Step S504, image detection test is carried out on the original neural network model to obtain an original detection result.
Specifically, an image detection test is performed on the original neural network model before the additional input module is inserted, so that an original detection result is obtained.
Step S506, the current additional input module and the current pluggable position of the search space are obtained.
When additional input modules are inserted into the original neural network model, only one additional input module and its insertion position are determined at a time, and the total number of additional input modules and their insertion positions are determined step by step. When the number of additional input modules is one, only its insertion position needs to be determined; when the number is two, the insertion position of the first additional input module is determined through the above steps and only the insertion position of the second additional input module remains to be determined, and so on.
Specifically, the current additional input module is acquired, together with the current insertable position of the search space Q. The current insertable positions are taken from the search space in order, i.e. the first current insertable position is q1.
Step S508, inserting the current additional input module to obtain the current initial neural network model, wherein the current additional input module is inserted into the current insertable position of the original neural network model; and carrying out image detection test on the current initial neural network model to obtain a first initial detection result.
Specifically, the current additional input module is inserted into the current insertable position q1 to obtain a current initial neural network model, and an image detection test is performed on the current initial neural network model to obtain a first initial detection result.
Step S510, when the first initial detection result is greater than the original detection result, determining an insertion position corresponding to the first initial detection result as an insertion position of the current additional input module, and using the current initial network model corresponding to the first initial detection result as the original neural network model.
Specifically, when the first initial detection result is larger than the original detection result, the insertion position q1 corresponding to the first initial detection result is directly determined as the insertion position of the current additional input module, and the second initial detection result corresponding to the next insertable position q2 is not computed, so as to increase the training speed of the model. The current initial network model corresponding to the first initial detection result is then used as the new original neural network model when the second additional input module is inserted; that is, after the second additional input module is inserted, its initial detection result is compared with the original detection result of the new original neural network model, i.e. with the first initial detection result.
In one embodiment, when the first initial detection result is less than or equal to the original detection result, the step of using the current initial network model corresponding to the first initial detection result as the original neural network model is not performed, that is, the structure of the original neural network model is not changed.
Specifically, when the first initial detection result is greater than the original detection result, the insertion position corresponding to the first initial detection result is determined as the insertion position of the current additional input module, and the current initial network model corresponding to the first initial detection result is used as the original neural network model.
Step S512, removing the current pluggable position from the search space, using the next additional input module as the current additional input module, obtaining the next pluggable position of the search space as the current pluggable position, and returning to the step of obtaining the current additional input module and the current pluggable position of the search space until the search space is empty.
Whether the first initial detection result is greater than or less than the original detection result, the insertable position q1 used in the computation is removed from the search space after the comparison, so that the search space becomes smaller and smaller. When the number of additional input modules is two, only the insertion position of the second additional input module remains to be determined. The next insertable position q2 of the search space is taken as the current insertable position, the current additional input module is inserted at q2, and steps S504-S512 are repeated until no insertable position remains in the search space, i.e. the search space is empty.
Specifically, the current insertable position is removed from the search space, the next additional input module is taken as the current additional input module, the next insertable position of the search space is obtained as the current insertable position, and the steps of obtaining the current additional input module and the current insertable position of the search space are returned until the search space is empty.
Step S514, among the initial detection results corresponding to the determined insertion positions, determining the additional input modules and corresponding insertion positions associated with the initial detection result of highest detection accuracy as the additional input modules and corresponding insertion positions to be inserted into the original neural network model.
Specifically, steps S504-S512 yield initial detection results for a number of additional input modules and corresponding insertion positions; among these, the additional input modules and corresponding insertion positions associated with the initial detection result of highest detection accuracy are selected and determined as the additional input modules and corresponding insertion positions to be inserted into the original neural network model.
In one embodiment, the computer code of the speed-first multi-scale network search algorithm is as follows. Q denotes the search space, m the number of insertable positions, q1, q2, ..., qm the 1st, 2nd, ..., m-th insertable positions, model an initial network model, r the detection accuracy of the model, W the WSI pathological image sample set, and σ a model accuracy threshold used to improve the efficiency of the network search, which may be set to 0.0005. The time complexity of the speed-first multi-scale network search algorithm is O(m·T), where T is the time needed to train and test a model on the sample set. Compared with the precision-first multi-scale network search algorithm, the time complexity of the speed-first multi-scale network search algorithm is greatly reduced.
[The pseudocode of the speed-first multi-scale network search algorithm appears as an image in the original publication and is not reproduced here.]
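As above, the following Python sketch reconstructs the speed-first search from steps S502-S514 and Table 3; train_and_test and insert_module are assumed helpers and the use of σ is an interpretation. Each position is visited once, from shallow to deep, so only about m models are trained.

```python
# Speed-first multi-scale network search (reconstruction), time O(m * T).
def speed_first_search(base_model, positions, insert_module, train_and_test,
                       sigma=0.0005):
    current_model = base_model
    current_acc = train_and_test(base_model)       # original detection result
    chosen_positions = []

    for q in positions:                             # ordered shallow -> deep
        candidate = insert_module(current_model, q)
        acc = train_and_test(candidate)
        if acc > current_acc + sigma:
            # Greedy acceptance: fix this position and use the new model as the
            # reference when testing the next position.
            current_model, current_acc = candidate, acc
            chosen_positions.append(q)
        # Whether accepted or not, q is effectively removed from the search
        # space by moving on to the next position.

    return chosen_positions, current_acc
```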
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is further described in detail below with reference to the accompanying drawings and one embodiment thereof. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, when the pathological image detection method is applied to skin pathological detection, the method comprises the following steps:
First, acquiring a sample set
2241 original WSI images were obtained from Xiangya Hospital, the Yale School of Medicine and TCGA, covering four diseases: melanoma, intradermal nevus, junction nevus and compound nevus. The Macenko method was used to perform color normalization on the original WSI pathological images, and each WSI image was cut into patch images of 512 x 512 pixels.
Patch images in which the lesion tissue area accounts for 50% or more of the total patch area were selected as the pathological images of the WSI sample set, giving 200,000 patch images from the 2241 original WSI images. The pathological images in the WSI sample set were divided into a training set, a validation set and a test set, as shown in Table 1. Taking the entry 42,002/760 for melanoma in the training set of Table 1 as an example, 42,002 denotes 42,002 patch images and 760 denotes 760 original WSI images, and so on for the other entries. Example pathological images of each disease type in the WSI sample set are shown in fig. 6.
Table 1. WSI sample set
[Table 1 is provided as an image in the original publication and is not reproduced here.]
Second, determining a pathological image detection model
The ResNet50 neural network model is selected as the original neural network model. Additional input modules are inserted at model positions of the original neural network model to construct neural network models, the detection accuracies of the corresponding image detection models are compared, the effectiveness of the precision-first and speed-first multi-scale network search algorithms is evaluated, the number of additional input modules and the corresponding insertion positions are determined, and the pathological image detection model is finally determined.
The WSI sample set is expanded by horizontal rotation, flipping and cutting, and the models are trained with stochastic gradient descent (SGD) with momentum 0.9, weight decay 0.0001 and an initial learning rate of 0.01. The ResNet50 neural network model has 4 insertable positions. The calculation process and results of the precision-first multi-scale network search algorithm are shown in Table 2; 8 models are trained in total. In Table 2, None in the Inserted position and Possible position columns indicates the original neural network model, Inserted position indicates the positions at which modules have already been inserted, and Possible position indicates the position currently being inserted and tested.
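Before turning to the search results in Table 2, the training configuration stated above can be sketched as follows; the exact augmentation parameters, batch size and learning-rate schedule are not specified in the patent and are assumptions here.

```python
# Sketch of the stated training setup: SGD with momentum 0.9, weight decay
# 0.0001, initial learning rate 0.01, plus simple data expansion.
import torch
import torchvision.transforms as T

augment = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomRotation(degrees=90),   # augmentation parameters are assumptions
    T.ToTensor(),
])


def make_optimizer(model: torch.nn.Module) -> torch.optim.SGD:
    return torch.optim.SGD(model.parameters(), lr=0.01,
                           momentum=0.9, weight_decay=0.0001)
```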
Specifically, in step 1 the selected insertion position is determined to be q1 and the insertable position q4 is excluded. In step 2 the selected insertion position is determined to be q3. In step 3, none of the remaining insertable positions achieves higher detection accuracy, so the calculation stops. As can be seen from Table 2, when the number of additional input modules is two and the insertion positions are q1 and q3, the detection accuracy of the pathological image detection model trained from the constructed neural network model is the best, reaching 97.3%.
Table 2. Calculation process and results of the precision-first multi-scale network search algorithm
[Table 2 is provided as an image in the original publication and is not reproduced here.]
The calculation process and results of the speed-first multi-scale network search algorithm are shown in Table 3; 5 models are trained in total. In Table 3, None in the Inserted position and Possible position columns indicates the original neural network model, Inserted position indicates the positions at which modules have already been inserted, and Possible position indicates the position currently being inserted and tested.
Specifically, in step 1 the selected insertion position is determined to be q1. In step 2, the model with the insertion position q2 cannot achieve higher detection accuracy, so the insertable position q2 is excluded and the calculation continues. In step 3 the selected insertion position is determined to be q3. The insertable position q4 is finally excluded in step 4. As can be seen from Table 3, when the number of additional input modules is two and the insertion positions are q1 and q3, the detection accuracy of the pathological image detection model trained from the constructed neural network model is the best, reaching 97.3%.
Table 3. Calculation process and results of the speed-first multi-scale network search algorithm
[Table 3 is provided as an image in the original publication and is not reproduced here.]
Similarly, the InceptionV4 and EfficientNetB0 neural network models are selected as original neural network models, and the optimal number of additional input modules and the corresponding insertion positions are determined with the two algorithms, as shown in Table 4.
In Table 4, None in the Inserted position and Possible position columns indicates the original neural network model, and Inserted position indicates the inserted positions for which the pathological image detection model trained from the constructed neural network model achieves the best detection accuracy.
Table 4. Corresponding calculation results for other original neural network models
[Table 4 is provided as an image in the original publication and is not reproduced here.]
With the two algorithms, several neural network models with high detection accuracy can be constructed, improving the detection accuracy of the original neural network models by 1.1-2.7%. In practical applications, either algorithm may be chosen for determining the number of additional input modules and the corresponding insertion positions in the original neural network model. The model with the highest detection accuracy is determined as the pathological image detection model.
Third, pathological image detection
A pathological image to be detected is acquired, and pathological detection is performed on it with the pathological image detection model to obtain the disease type corresponding to the pathological image to be detected.
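A minimal sketch of this detection step is shown below; the class list order and the preprocessing of the patch are assumptions for illustration.

```python
# Apply the trained pathological image detection model to one preprocessed patch.
import torch

CLASSES = ["melanoma", "intradermal nevus", "junction nevus", "compound nevus"]


@torch.no_grad()
def detect(model: torch.nn.Module, patch: torch.Tensor) -> str:
    """patch: a 3 x 512 x 512 tensor prepared as in training."""
    model.eval()
    logits = model(patch.unsqueeze(0))
    return CLASSES[logits.argmax(dim=1).item()]
```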
In one embodiment, the performance of the neural network model constructed after inserting the additional input modules was also analyzed through ablation experiments. In an ablation experiment, a condition or parameter is controlled, and the magnitude of its influence on the result is determined from the ablation experiment results.
Specifically, 3 network models were designed for the ablation experiments: one in which the concatenation (series connection) is replaced by element-wise summation, one in which the concatenation is replaced by a 1 x 1 convolutional layer, and one in which the additional input module is replaced by a wider-output-channel module that only widens the output channels at the insertion position.
The ResNet50 neural network model is selected as the original neural network model, and a neural network model with 3 additional input modules at insertion positions q1, q2 and q3 is constructed, together with the network models of the 3 ablation experiments. As shown in Table 5, comparison of the ablation experiment results shows that inserting additional input modules into the original neural network model is the main reason for the effective improvement in the detection accuracy and efficiency of pathological image detection.
Table 5. Comparison of ablation experiment results
[Table 5 is provided as an image in the original publication and is not reproduced here.]
It should be understood that, although the steps in the flowchart of fig. 2 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in fig. 2 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is provided a pathological image detection apparatus including: a pathology image acquisition module 610, a pathology image detection module 620, and a pathology image detection model acquisition module 630, wherein:
a pathological image obtaining module 610, configured to obtain a pathological image to be detected.
And the pathological image detection module 620 is configured to perform pathological detection on the pathological image to be detected by using a pre-trained pathological image detection model to obtain a pathological image detection result.
A pathological image detection model obtaining module 630, configured to obtain the pathological image detection model. The training process of the pathological image detection model comprises the following steps: acquiring a sample set, wherein the sample set comprises pathological images obtained by processing the original pathological images; training each neural network model according to the sample set to obtain each corresponding image detection model, wherein each neural network model is constructed by adding a feature extraction channel to an original neural network model; performing an image detection test with each image detection model to obtain corresponding image detection test results; and comparing the image detection test results, and determining the pathological image detection model from the image detection models according to the comparison result.
In one embodiment, the pathological image detection model obtaining module 630 is configured to obtain the pathological image detection model from a model training device. When the pathological image detection device is a terminal, the model training device may be a server or other terminal device for training to obtain the pathological image detection model.
In one embodiment, the pathological image detection model obtaining module 630 may train itself to obtain the pathological image detection model. At this time, as shown in fig. 8, the pathological image detection model obtaining module 630 specifically includes: a sample set acquisition module 710, a training module 720, a testing module 730, and a model determination module 740, wherein:
a sample set obtaining module 710, configured to obtain a sample set, where the sample set includes pathological images obtained by processing each original pathological image.
A training module 720, configured to train each neural network model according to the sample set, to obtain each corresponding image detection model; the neural network model is constructed by adding a feature extraction channel to the original neural network model.
The testing module 730 is configured to perform an image detection test using each image detection model to obtain each corresponding image detection test result.
The model determining module 740 is configured to compare the image detection test results and determine the pathological image detection model from the image detection models according to the comparison result.
In one embodiment, the training module 720 includes the following elements:
a model construction unit, configured to insert each additional input module into a corresponding model position of the original neural network model to obtain each constructed neural network model, where the additional input modules are used to add feature extraction channels for the sample set during model training, and the number of additional input modules is at least one.
an image input unit, configured to train each neural network model according to the sample set to obtain each corresponding image detection model, where the image input into each additional input module is a scaled pathological image from the sample set, as illustrated in the sketch below.
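A minimal training-step sketch, assuming a PyTorch model that accepts the extra scaled inputs alongside the full-resolution image, could look as follows; the scale factors, the number of extra inputs and the loss function are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def training_step(model, images, labels, optimizer, scales=(0.5, 0.25)):
    """One optimisation step; each additional input module receives a scaled copy of the batch."""
    # the scale factors below are illustrative; in practice they match the insertion positions
    extra_inputs = [F.interpolate(images, scale_factor=s, mode="bilinear", align_corners=False)
                    for s in scales]
    optimizer.zero_grad()
    logits = model(images, extra_inputs)     # the model is assumed to accept the extra inputs
    loss = F.cross_entropy(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```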
In one embodiment, the model building unit comprises the following units:
an additional input module insertion unit, configured to insert each additional input module into an end of a corresponding down-sampling layer or a successive down-sampling layer of the original neural network, respectively, wherein an output width of each additional input module is the same as an output width of the down-sampling layer or the successive down-sampling layer of the original neural network.
an insertable position defining unit, configured to define the number of insertable positions of the additional input modules in the original neural network model as equal to the number of non-consecutive down-sampling layers of the original neural network model, where the number of insertable positions is at least one.
an additional input module number and insertion position determining unit, configured to determine the number of additional input modules to be inserted into the original neural network model and the corresponding insertion positions.
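The following sketch shows one way the insertion described above could look on torchvision's ResNet-50; the internals of the additional input module, the choice of insertion position (after layer2, roughly position q2), and the 1/8 scaling of the extra input are assumptions made for illustration, not the patented design.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class AdditionalInputModule(nn.Module):
    """Extracts features from a scaled copy of the input image at a chosen output width."""
    def __init__(self, out_channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, out_channels, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, scaled_image):
        return self.body(scaled_image)

class ResNet50WithAdditionalInput(nn.Module):
    """ResNet-50 with one additional input module attached after layer2 (assumed position)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.backbone = resnet50(num_classes=num_classes)
        # layer2 of ResNet-50 outputs 512 channels, so the module's output width matches it
        self.extra = AdditionalInputModule(out_channels=512)
        self.fuse = nn.Conv2d(1024, 512, kernel_size=1)   # restore the width after concatenation

    def forward(self, x, scaled_x):
        # scaled_x is assumed to be the input image resized to 1/8 of its side length,
        # e.g. 28 x 28 for a 224 x 224 input, so the spatial sizes match at the insertion point
        b = self.backbone
        x = b.maxpool(b.relu(b.bn1(b.conv1(x))))
        x = b.layer2(b.layer1(x))                         # end of the chosen down-sampling stage
        x = self.fuse(torch.cat([x, self.extra(scaled_x)], dim=1))
        x = b.layer4(b.layer3(x))
        return b.fc(torch.flatten(b.avgpool(x), 1))
```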
In one embodiment, the additional input module number and insertion position determining unit includes the following units:
a first search space determination unit for determining a search space comprising respective insertable positions of the original neural network model.
an initial detection result obtaining unit, configured to obtain a current additional input module, insert the current additional input module into each insertable position of the original neural network model, respectively, to obtain each initial neural network model, and perform an image detection test on each initial neural network model to obtain each corresponding initial detection result.
a first insertion position determining unit, configured to determine the insertion position of the current additional input module according to the initial detection results, and to remove from the search space the determined insertion position and any insertable position whose corresponding initial detection result has an accuracy lower than a preset accuracy.
a second insertion position determining unit, configured to take a next additional input module as the current additional input module and return to the step of respectively inserting the current additional input module into each insertable position of the original neural network model, until the search space is empty.
a first result determining unit, configured to determine, among the initial detection results corresponding to the determined insertion positions, the additional input modules and corresponding insertion positions whose initial detection results have high detection accuracy as the additional input modules and corresponding insertion positions to be inserted into the original neural network model.
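Under the assumption that build_model constructs a network containing the insertions chosen so far plus one candidate insertion, and that evaluate returns a test accuracy, the search procedure described by the units above might be sketched as follows; the helper names and the final accuracy threshold are assumptions made for illustration.

```python
def greedy_module_search(build_model, evaluate, insertable_positions, modules, preset_accuracy):
    """Sketch of the first search strategy (helper names and threshold are assumptions)."""
    search_space = set(insertable_positions)
    chosen, chosen_acc = [], {}
    for module in modules:
        if not search_space:
            break
        # try the current additional input module at every remaining insertable position
        results = {pos: evaluate(build_model(chosen + [(module, pos)])) for pos in search_space}
        best_pos = max(results, key=results.get)
        chosen.append((module, best_pos))
        chosen_acc[(module, best_pos)] = results[best_pos]
        # remove the chosen position and every position whose accuracy is below the preset value
        search_space -= {pos for pos, acc in results.items()
                         if pos == best_pos or acc < preset_accuracy}
    # keep only the insertions whose detection accuracy is high (threshold is an interpretation)
    return [mp for mp in chosen if chosen_acc[mp] >= preset_accuracy]
```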
In one embodiment, the additional input module number and insertion position determining unit includes the following units:
a second search space determination unit for determining a search space comprising respective insertable positions of the original neural network model.
an original detection result obtaining unit, configured to perform an image detection test on the original neural network model to obtain an original detection result.
A current insertable position obtaining unit for obtaining a current additional input module and a current insertable position of the search space.
a first initial detection result obtaining unit, configured to insert the current additional input module into the current insertable position of the original neural network model to obtain a current initial neural network model, and to perform an image detection test on the current initial neural network model to obtain a first initial detection result.
a third insertion position determining unit, configured to, when the first initial detection result is greater than the original detection result, determine the insertion position corresponding to the first initial detection result as the insertion position of the current additional input module, and take the current initial neural network model corresponding to the first initial detection result as the original neural network model.
a fourth insertion position determining unit, configured to remove the current insertable position from the search space, take a next additional input module as the current additional input module, obtain the next insertable position of the search space as the current insertable position, and return to the step of obtaining the current additional input module and the current insertable position of the search space, until the search space is empty.
a second result determining unit, configured to determine, among the initial detection results corresponding to the determined insertion positions, the additional input modules and corresponding insertion positions whose initial detection results have high detection accuracy as the additional input modules and corresponding insertion positions to be inserted into the original neural network model.
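A sketch of this second, sequential strategy is given below, assuming insert returns a copy of the network with the module added at the given position and evaluate returns a test accuracy; whether the comparison baseline is updated after each accepted insertion is an interpretation made for this example.

```python
def sequential_module_search(insert, evaluate, base_model, insertable_positions, modules):
    """Sketch of the second search strategy (helper names are assumptions)."""
    baseline_model = base_model
    baseline_score = evaluate(baseline_model)          # original detection result
    kept = []
    for module, pos in zip(modules, insertable_positions):
        candidate = insert(baseline_model, module, pos)
        score = evaluate(candidate)                    # first initial detection result
        if score > baseline_score:
            # keep the insertion and use the augmented network as the new original model
            kept.append((module, pos, score))
            baseline_model, baseline_score = candidate, score
    return baseline_model, kept
```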
For specific limitations of the pathological image detection apparatus, reference may be made to the above limitations of the pathological image detection method, which are not repeated here. Each module in the pathological image detection apparatus may be implemented wholly or partially by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing pathological image detection data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a pathological image detection method.
In one embodiment, a computer device is provided, and the computer device may be a terminal, and the internal structure diagram thereof may be as shown in fig. 10. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a pathological image detection method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structures shown in figs. 9 and 10 are only block diagrams of part of the structure relevant to the present application and do not constitute a limitation on the computer devices to which the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, which includes a memory in which a computer program is stored and a processor, which when executing the computer program implements the steps of the pathology image detection method described above.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the above-mentioned pathology image detection method.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A pathological image detection method, the method comprising:
acquiring a pathological image to be detected;
carrying out pathological detection on the pathological image to be detected by adopting a pre-trained pathological image detection model to obtain a pathological image detection result;
the training process of the pathological image detection model comprises the following steps:
acquiring a sample set, wherein the sample set comprises pathological images obtained after processing all original pathological images;
training each neural network model according to the sample set to obtain each corresponding image detection model; the neural network model is constructed by adding a feature extraction channel to an original neural network model;
carrying out image detection test by adopting each image detection model to obtain corresponding image detection test results;
and comparing the image detection test results, and determining the pathological image detection model from the image detection models according to the comparison result.
2. The method of claim 1, wherein each of the neural network models is constructed in a manner comprising:
respectively inserting additional input modules into model positions of the original neural network model to obtain the constructed neural network models, wherein the additional input modules are used for increasing the feature extraction channels of the sample set during model training, and the number of the additional input modules is at least one.
3. The method of claim 2, wherein said inserting additional input modules at model locations of said original neural network model, respectively, comprises:
inserting each of the additional input modules into an end of a corresponding down-sampling layer or a successive down-sampling layer of the original neural network, respectively, an output width of each of the additional input modules being the same as an output width of the down-sampling layer or the successive down-sampling layer of the original neural network.
4. The method according to claim 1, wherein when training each neural network model according to the sample set to obtain each corresponding image detection model, the image inputted in each additional input module is a scaled pathological image in the sample set.
5. The method of claim 2, wherein the number of insertable positions of each additional input module in the original neural network model is the same as the number of layers of the non-consecutive downsampling layers of the original neural network model, and the number of insertable positions is at least one.
6. The method of claim 5, wherein the step of inserting the number of additional input modules and the corresponding insertion positions into the original neural network model comprises:
determining a search space comprising insertable locations of the original neural network model;
acquiring a current additional input module, and respectively inserting the current additional input module into each insertable position of the original neural network model to acquire each initial neural network model; carrying out image detection test on each initial neural network model to obtain corresponding initial detection results;
determining the insertion position of the current additional input module according to each initial detection result, and removing the determined insertion position and the insertable position corresponding to the initial detection result with the accuracy less than the preset accuracy from the search space;
acquiring a next additional input module as a current additional input module, and returning to the step of respectively inserting the current additional input module into each insertable position of the original neural network model until the search space is empty;
and determining, among the initial detection results corresponding to the determined insertion positions, each additional input module and corresponding insertion position corresponding to an initial detection result with high detection accuracy as each additional input module and corresponding insertion position that can be inserted into the original neural network model.
7. The method of claim 5, wherein the step of inserting the number of additional input modules and the corresponding insertion positions into the original neural network model comprises:
determining a search space comprising insertable locations of the original neural network model;
carrying out image detection test on the original neural network model to obtain an original detection result;
acquiring a current additional input module and a current insertable position of the search space;
inserting the current additional input module to obtain a current initial neural network model, wherein the current additional input module is inserted into the current insertable position of the original neural network model; performing image detection test on the current initial neural network model to obtain a first initial detection result;
when the first initial detection result is greater than the original detection result, determining an insertion position corresponding to the first initial detection result as an insertion position of the current additional input module, and taking the current initial neural network model corresponding to the first initial detection result as the original neural network model;
removing the current insertable position from the search space, taking a next additional input module as the current additional input module, acquiring the next insertable position of the search space as the current insertable position, and returning to the step of acquiring the current additional input module and the current insertable position of the search space until the search space is empty;
and determining, among the initial detection results corresponding to the determined insertion positions, each additional input module and corresponding insertion position corresponding to an initial detection result with high detection accuracy as each additional input module and corresponding insertion position that can be inserted into the original neural network model.
8. A pathological image detection apparatus, characterized in that the apparatus comprises:
the pathological image acquisition module is used for acquiring a pathological image to be detected;
the pathological image detection module is used for carrying out pathological detection on the pathological image to be detected by adopting a pre-trained pathological image detection model to obtain a pathological image detection result;
the pathological image detection model acquisition module is used for acquiring the pathological image detection model; wherein the training process of the pathological image detection model comprises the following steps: acquiring a sample set, wherein the sample set comprises pathological images obtained after processing each original pathological image; training each neural network model according to the sample set to obtain each corresponding image detection model, the neural network model being constructed by adding a feature extraction channel to an original neural network model; carrying out an image detection test by adopting each image detection model to obtain each corresponding image detection test result; and comparing the image detection test results, and determining the pathological image detection model from the image detection models according to the comparison result.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202110254516.9A 2021-03-09 2021-03-09 Pathological image detection method, pathological image detection device, computer equipment and storage medium Pending CN112802012A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110254516.9A CN112802012A (en) 2021-03-09 2021-03-09 Pathological image detection method, pathological image detection device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110254516.9A CN112802012A (en) 2021-03-09 2021-03-09 Pathological image detection method, pathological image detection device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112802012A true CN112802012A (en) 2021-05-14

Family

ID=75816729

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110254516.9A Pending CN112802012A (en) 2021-03-09 2021-03-09 Pathological image detection method, pathological image detection device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112802012A (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170116497A1 (en) * 2015-09-16 2017-04-27 Siemens Healthcare Gmbh Intelligent Multi-scale Medical Image Landmark Detection
CN108717554A (en) * 2018-05-22 2018-10-30 复旦大学附属肿瘤医院 A kind of thyroid tumors histopathologic slide image classification method and its device
CN110084237A (en) * 2019-05-09 2019-08-02 北京化工大学 Detection model construction method, detection method and the device of Lung neoplasm
US20200402204A1 (en) * 2019-06-19 2020-12-24 Neusoft Medical Systems Co., Ltd. Medical imaging using neural networks
CN110543900A (en) * 2019-08-21 2019-12-06 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN111401247A (en) * 2020-03-17 2020-07-10 杭州趣维科技有限公司 Portrait segmentation method based on cascade convolution neural network
CN112215117A (en) * 2020-09-30 2021-01-12 北京博雅智康科技有限公司 Abnormal cell identification method and system based on cervical cytology image
CN112381733A (en) * 2020-11-13 2021-02-19 四川大学 Image recovery-oriented multi-scale neural network structure searching method and network application

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIE BIN et al.: "Differentiation of clinical images of basal cell carcinoma and pigmented nevus based on convolutional neural networks", Journal of Central South University (Medical Science), vol. 44, no. 9, pages 1063 - 1070 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114565628A (en) * 2022-03-23 2022-05-31 中南大学 Image segmentation method and system based on boundary perception attention
CN114565628B (en) * 2022-03-23 2022-09-13 中南大学 Image segmentation method and system based on boundary perception attention

Similar Documents

Publication Publication Date Title
CN109754389B (en) Image processing method, device and equipment
CN110427970A (en) Image classification method, device, computer equipment and storage medium
CN112580668B (en) Background fraud detection method and device and electronic equipment
CN112132265B (en) Model training method, cup-disk ratio determining method, device, equipment and storage medium
CN110245714B (en) Image recognition method and device and electronic equipment
CN111274999B (en) Data processing method, image processing device and electronic equipment
CN109840524A (en) Kind identification method, device, equipment and the storage medium of text
CN111951276A (en) Image segmentation method and device, computer equipment and storage medium
KR102330263B1 (en) Method and apparatus for detecting nuclear region using artificial neural network
CN112581477A (en) Image processing method, image matching method, device and storage medium
CN114120030A (en) Medical image processing method based on attention mechanism and related equipment
CN113065593A (en) Model training method and device, computer equipment and storage medium
CN115438804A (en) Prediction model training method, device and equipment and image prediction method
CN112419342A (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN115690672A (en) Abnormal image recognition method and device, computer equipment and storage medium
CN112802012A (en) Pathological image detection method, pathological image detection device, computer equipment and storage medium
CN112102235B (en) Human body part recognition method, computer device, and storage medium
CN113160199A (en) Image recognition method and device, computer equipment and storage medium
CN115827877B (en) Proposal-assisted case merging method, device, computer equipment and storage medium
CN109871814B (en) Age estimation method and device, electronic equipment and computer storage medium
CN114693642B (en) Nodule matching method and device, electronic equipment and storage medium
CN112509052B (en) Method, device, computer equipment and storage medium for detecting macula fovea
CN115082999A (en) Group photo image person analysis method and device, computer equipment and storage medium
CN115713769A (en) Training method and device of text detection model, computer equipment and storage medium
CN115758271A (en) Data processing method, data processing device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination