CN115690486A - Method, device and equipment for identifying focus in image and storage medium - Google Patents


Info

Publication number
CN115690486A
CN115690486A (application number CN202211214860.6A)
Authority
CN
China
Prior art keywords
image
target
capsule endoscope
block
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211214860.6A
Other languages
Chinese (zh)
Inventor
潘宁
冯媛
胡怀飞
刘海华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South Central Minzu University
Original Assignee
South Central University for Nationalities
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South Central University for Nationalities filed Critical South Central University for Nationalities
Priority to CN202211214860.6A priority Critical patent/CN115690486A/en
Publication of CN115690486A publication Critical patent/CN115690486A/en
Pending legal-status Critical Current

Landscapes

  • Endoscopes (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of computer analysis of medical images, and in particular to a method, a device, equipment and a storage medium for identifying a lesion in an image. The method comprises the following steps: performing feature extraction on preprocessed training capsule endoscope image blocks through a convolutional neural network to obtain training capsule endoscope image block feature vectors, and updating the network parameters to obtain a target convolutional neural network; classifying the feature vectors of the training capsule endoscope image blocks through a classifier, and updating the classifier parameters to obtain a target classifier; performing clustering iteration on the feature vectors of the training capsule endoscope image blocks to obtain target clustering centers; and calculating the distance between each feature block and each target clustering center to obtain the block clusters containing lesion-region features, and classifying the block clusters containing lesion-region features through the target classifier to obtain a classification result, so that lesions in the image can be identified rapidly and intelligently.

Description

Method, device, equipment and storage medium for identifying a lesion in an image
Technical Field
The invention relates to the field of computer analysis of medical images, and in particular to a method, a device, equipment and a storage medium for identifying a lesion in an image.
Background
The wireless capsule endoscope, a swallowable non-invasive detection device developed at the beginning of the 21st century, relieves patient discomfort during examination and overcomes the limitations of the traditional endoscope in examining the small intestine. Compared with traditional endoscopy, a wireless capsule endoscopy examination is painless, non-invasive, convenient and safe, and is therefore widely applied in the clinical detection of digestive tract diseases. A capsule endoscope works for about 8-13 hours after entering the human body and produces 50,000-80,000 color images to be diagnosed. In the conventional workflow, clinicians screen this huge volume of capsule endoscopy images for possibly diseased images, which wastes medical resources; moreover, because lesion images are few and reading takes many hours, lapses in the clinician's attention increase the probability of false detections and missed detections. How to perform rapid, intelligent computer-aided diagnosis has therefore become a technical problem urgently requiring a solution.
The above is only for the purpose of assisting understanding of the technical solution of the present invention, and does not represent an admission that the above is the prior art.
Disclosure of Invention
The invention mainly aims to provide a method, a device, equipment and a storage medium for identifying a lesion in an image, so as to solve the technical problems of low detection efficiency, false detection and missed detection of lesions in images in the prior art.
In order to achieve the above object, the present invention provides a method for identifying a lesion in an image, the method comprising the steps of:
acquiring a preprocessed training capsule endoscope image block;
performing feature extraction on the preprocessed training capsule endoscope image blocks through a convolutional neural network to obtain feature vectors of the training capsule endoscope image blocks, and updating parameters of the convolutional neural network to obtain a target convolutional neural network;
classifying the feature vectors of the training capsule endoscope image block through a classifier, and updating the parameters of the classifier to obtain a target classifier;
performing clustering iteration on the feature vectors of the training capsule endoscope image blocks to obtain a target clustering center;
acquiring the target capsule endoscopy image feature map extracted by the target convolutional neural network and partitioning the feature map to obtain feature blocks;
and calculating the distance between the feature blocks and each target clustering center to obtain the block clusters containing lesion-region features, and classifying the block clusters containing lesion-region features through the target classifier to obtain a classification result.
Optionally, the acquiring the preprocessed image block of the training capsule endoscope includes:
acquiring training capsule endoscopy images, and cropping the target region on each corresponding capsule endoscopy image;
storing the target regions of the capsule endoscope images into labeled folders to obtain a capsule endoscope image block training sample library;
and sequentially extracting the target regions of the capsule endoscope images in the training sample library, and performing the same cropping, rotating and flipping operations to obtain target-region capsule endoscope image blocks of the same size.
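As an illustrative sketch only (not the patent's code), the cropping, rotating and flipping step above can be expressed in NumPy; the patch size of 128 and the 90-degree rotation steps are assumptions made for the example:

```python
import numpy as np

def preprocess_patch(patch, size=128, rot90_steps=0, flip=False):
    """Centre-crop a patch to size x size, then optionally rotate in
    90-degree steps and flip horizontally, so all training patches share
    the same size. The size 128 is illustrative, not from the patent."""
    h, w = patch.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    out = patch[top:top + size, left:left + size]
    if rot90_steps:
        out = np.rot90(out, k=rot90_steps)  # rotation augmentation
    if flip:
        out = out[:, ::-1]                  # horizontal flip augmentation
    return out
```

Applying the same operations to every patch in the sample library yields target-region blocks of identical size, as the step above requires.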
Optionally, the classifying the feature vector of the training capsule endoscope image block through a classifier, and updating the classifier parameters to obtain a target classifier, including:
performing Softmax classification on the feature vectors of the training capsule endoscope image blocks to obtain classification results, wherein the loss function adopted with Softmax is the cross-entropy loss function;
computing the classification loss from the cross-entropy loss function, the classification labels and the Softmax classification results;
back-propagating the gradient of the classification loss and updating the classifier parameters;
and returning to the step of performing Softmax classification on the feature vectors of the training capsule endoscope image blocks until the feature vectors of all training capsule endoscope image blocks have been classified, to obtain the target classifier.
Optionally, the performing clustering iteration on feature vectors of image blocks of the trained capsule endoscope to obtain a target clustering center includes:
randomly extracting K image blocks in the capsule endoscope image block training sample library as initial clustering centers, wherein K is an integer greater than 1;
calculating the distance between the feature vector of each remaining image block in the capsule endoscope image block training sample library and each initial clustering center;
dividing the image block feature vectors into K clusters according to the distance between each image block feature vector and the initial clustering center;
calculating the mean value of all the feature vectors of the K clusters, and taking the mean value as a new clustering center;
and returning to the step of calculating the distance between each image block feature vector and the clustering centers until the positions of the new clustering centers no longer change, at which point iteration stops and the target clustering centers are obtained.
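The clustering iteration described above is standard K-means; a minimal NumPy sketch (generic K-means, not the patent's exact implementation) looks like this:

```python
import numpy as np

def kmeans(features, k, n_iter=100, seed=0):
    """Minimal K-means over patch feature vectors (one row per patch).
    Iterates assignment and centre update until the centres stop moving,
    mirroring the steps described above."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    labels = np.zeros(len(features), dtype=int)
    for _ in range(n_iter):
        # assign every feature vector to its nearest centre (Euclidean)
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute each centre as the mean of its assigned vectors
        new_centers = np.array([
            features[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):  # centres unchanged: converged
            break
        centers = new_centers
    return centers, labels
```

The returned centres play the role of the target clustering centers used in the testing stage.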
Optionally, the obtaining of the target capsule endoscopy image feature vector extracted by the target convolutional neural network and the blocking of the image feature map to obtain a feature block includes:
performing feature extraction on the target capsule endoscope image through the target convolutional neural network to obtain a target capsule endoscope image feature map;
and dividing the target capsule endoscope image feature map into feature blocks of the same size as the training capsule endoscope image block feature maps.
Optionally, the calculating the distance between the feature blocks and each target clustering center to obtain the block clusters containing lesion-region features, and classifying the block clusters containing lesion-region features through the target classifier to obtain a classification result, includes:
calculating the distance between each feature image block and each clustering center using the target clustering centers;
determining the cluster to which each feature image block belongs according to the distance between the block and each clustering center, and screening out the clusters of feature image blocks containing lesion regions;
and classifying the feature block clusters determined to contain lesion regions through the target classifier to obtain a classification result.
Optionally, after the target classifier classifies the block clusters containing lesion-region features to obtain a classification result, the method further includes:
mapping the classification result back to the original image according to a preset mapping relation;
and repeating the steps of calculating the distance between the feature blocks and each target clustering center to obtain the feature block clusters containing lesion regions, classifying them with the target classifier to obtain a final classification result, and mapping the final classification result back to the original image according to the preset mapping relation, until all capsule endoscope images have been detected.
In addition, to achieve the above object, the present invention further provides an apparatus for identifying a lesion in an image, including:
the acquisition module is used for acquiring the preprocessed image blocks of the training capsule endoscope;
the feature extraction module is used for performing feature extraction on the preprocessed training capsule endoscope image block through a convolutional neural network to obtain a training capsule endoscope image block feature vector, and updating the convolutional neural network parameters to obtain a target convolutional neural network;
the classification module is used for classifying the feature vectors of the training capsule endoscope image block through a classifier and updating the parameters of the classifier to obtain a target classifier;
the clustering module is used for performing clustering iteration on the feature vectors of the image blocks of the training capsule endoscope to obtain a target clustering center;
the acquisition module is also used for acquiring the target capsule endoscopy image feature map extracted by the target convolutional neural network and partitioning it to obtain feature blocks;
the clustering module is further used for calculating the distance between the feature blocks and each target clustering center to obtain the block clusters containing lesion-region features, and classifying the block clusters containing lesion-region features through the target classifier to obtain a classification result.
In addition, to achieve the above object, the present invention also provides an apparatus for identifying a lesion in an image, the apparatus comprising: a memory, a processor, and a program for identifying a lesion in an image stored on the memory and executable on the processor, the program being configured to implement the method for identifying a lesion in an image as described above.
In addition, to achieve the above object, the present invention further provides a storage medium storing a program for identifying a lesion in an image, wherein the program for identifying a lesion in an image is executed by a processor to implement the method for identifying a lesion in an image as described above.
The invention discloses a method, a device, equipment and a storage medium for identifying a lesion in an image, wherein the method comprises the following steps: acquiring preprocessed training capsule endoscope image blocks; performing feature extraction on the preprocessed training capsule endoscope image blocks through a convolutional neural network to obtain training capsule endoscope image block feature vectors, and updating the convolutional neural network parameters to obtain a target convolutional neural network; classifying the feature vectors of the training capsule endoscope image blocks through a classifier, and updating the classifier parameters to obtain a target classifier; performing clustering iteration on the feature vectors of the training capsule endoscope image blocks to obtain target clustering centers; acquiring the target capsule endoscopy image feature map extracted by the target convolutional neural network and partitioning it to obtain feature blocks; and calculating the distance between the feature blocks and each target clustering center to obtain the block clusters containing lesion-region features, and classifying them through the target classifier to obtain a classification result, so that lesions in the image are identified through the trained neural network, classifier and clustering centers, realizing rapid intelligent computer-aided diagnosis.
Drawings
Fig. 1 is a schematic structural diagram of a device for identifying a lesion in an image in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of a method for identifying a lesion in an image according to the present invention;
FIG. 3 is a schematic flow chart illustrating a training phase of a method for identifying a lesion in an image according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a testing phase of a method for identifying a lesion in an image according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a second embodiment of a method for identifying lesions in an image according to the present invention;
FIG. 6 is a flowchart illustrating a third embodiment of a method for identifying lesions in an image according to the present invention;
fig. 7 is a functional block diagram of a first embodiment of an apparatus for identifying a lesion in an image according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a device for identifying a lesion in an image according to a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the apparatus for identifying a lesion in an image may include: a processor 1001, such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a Display screen (Display), and the optional user interface 1003 may further include a standard wired interface and a wireless interface, and the wired interface for the user interface 1003 may be a USB interface in the present invention. The network interface 1004 may optionally include a standard wired interface, a Wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The Memory 1005 may be a Random Access Memory (RAM) Memory, or a Non-volatile Memory (NVM), such as a disk Memory. The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation of the apparatus for identifying lesions in an image, and may include more or fewer components than those shown, or some components in combination, or a different arrangement of components.
As shown in fig. 1, the memory 1005, which is a computer storage medium, may include an operating system, a network communication module, a user interface module, and a program for identifying a lesion in an image.
In the device for identifying a lesion in an image shown in fig. 1, the network interface 1004 is mainly used for connecting to a background server and performing data communication with it; the user interface 1003 is mainly used for connecting user equipment; the device calls the program for identifying a lesion in an image stored in the memory 1005 through the processor 1001 and performs the method for identifying a lesion in an image according to the embodiment of the present invention.
Based on the hardware structure, the embodiment of the method for identifying the focus in the image is provided.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for identifying a lesion in an image according to a first embodiment of the present invention.
In a first embodiment, the method for identifying a lesion in an image comprises the steps of:
step S10: and acquiring the preprocessed image block of the training capsule endoscope.
It should be understood that the execution body of this embodiment is the device for identifying a lesion in an image, which has data processing, data communication and program execution functions.
In specific implementation, the preprocessed training capsule endoscope image blocks are obtained by preprocessing the training capsule endoscopy images. The specific steps are as follows: acquiring training capsule endoscopy images, and cropping the target region on each corresponding capsule endoscopy image; storing the target regions of the capsule endoscope images into labeled folders to obtain a capsule endoscope image block training sample library; and sequentially extracting the target regions of the capsule endoscope images in the training sample library, and performing the same cropping, rotating and flipping operations to obtain target-region capsule endoscope image blocks of the same size.
It should be noted that this embodiment is divided into a training stage and a testing stage. The training flowchart is shown in fig. 3: after preprocessing the capsule endoscopy images into image blocks, feature extraction is performed on the image blocks to obtain feature vectors, which are then classified and clustered. The testing flowchart is shown in fig. 4: features are first extracted from the capsule endoscopy images, the extracted feature maps are partitioned into blocks, the cluster to which each feature block belongs is determined, the final classification is carried out, and the final classification result is mapped back to the original image.
Step S20: and performing feature extraction on the preprocessed training capsule endoscope image block through a convolutional neural network to obtain a training capsule endoscope image block feature vector, and updating the convolutional neural network parameters to obtain a target convolutional neural network.
It should be understood that feature extraction is performed on the preprocessed capsule endoscope image blocks to obtain the capsule endoscope image block feature vectors. Before feature extraction, the capsule endoscope image block training sample library needs to be randomly shuffled. The method uses a convolutional neural network to extract features from the capsule endoscope image blocks; the network layer parameters of the feature extraction module are set as shown in table 1. In the experiment, feature vectors are obtained by passing each image block through ten convolutional layers, and the resulting feature map has a size of 5 × 5 × 512.
[Table 1: feature extraction module network layer parameters; the table content is rendered only as images in the source and is not recoverable here.]
Step S30: and classifying the feature vectors of the training capsule endoscope image block through a classifier, and updating the parameters of the classifier to obtain a target classifier.
In specific implementation, Softmax classification is performed on the feature vectors of the training capsule endoscope image blocks to obtain classification results, with the cross-entropy loss as the network loss function; the classification loss is computed from the cross-entropy loss, the classification labels and the Softmax classification results; the gradient of the classification loss is back-propagated and the classifier parameters are updated; and the Softmax classification step is repeated until the feature vectors of all training capsule endoscope image blocks have been classified, obtaining the target classifier.
Step S40: and performing clustering iteration on the feature vectors of the image blocks of the training capsule endoscope to obtain a target clustering center.
In specific implementation, K image blocks are randomly extracted from the capsule endoscope image block training sample library as the initial clustering centers, where K is an integer greater than 1; the distance between the feature vector of each remaining image block in the training sample library and each initial clustering center is calculated; the image block feature vectors are divided into K clusters according to these distances; the mean of all feature vectors in each of the K clusters is calculated and taken as the new clustering center; and the process returns to the distance-calculation step until the positions of the clustering centers no longer change, at which point iteration stops, the target clustering centers are obtained, and the trained model is ready for the testing stage.
Step S50: and acquiring a target capsule endoscopy image feature vector extracted by a target convolutional neural network and blocking the image feature map to obtain a feature block.
In specific implementation, feature extraction is performed on the target capsule endoscopy image through the target convolutional neural network to obtain a target capsule endoscopy image feature map; the feature map is then divided into feature blocks of the same size as the training capsule endoscope image block feature maps, thereby obtaining the target feature blocks.
It should be understood that the feature extraction is to extract features of the images in the capsule endoscopy image test sample library by using the trained network parameters in the training phase to obtain feature vectors. For example: after the feature vectors are obtained, before the next step is performed, the feature maps need to be divided into the same size as the feature maps in the training stage, taking a 240 × 240 capsule endoscope image as an example, the size of the feature maps obtained after feature extraction is 30 × 30, and the feature maps with the size of 30 × 30 are divided into 6 × 6 non-overlapping regions, so as to obtain 36 feature blocks, wherein the size of each block is 5 × 5.
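The partitioning described above (a 30 × 30 feature map cut into 36 non-overlapping 5 × 5 blocks) can be sketched with a NumPy reshape; the channel count is arbitrary here:

```python
import numpy as np

def split_feature_map(fmap, block=5):
    """Cut an H x W x C feature map into non-overlapping block x block tiles;
    a 30 x 30 map yields a 6 x 6 grid, i.e. 36 tiles of 5 x 5 x C."""
    h, w, c = fmap.shape
    gh, gw = h // block, w // block
    return (fmap[:gh * block, :gw * block]
            .reshape(gh, block, gw, block, c)
            .swapaxes(1, 2)                 # group tiles by grid position
            .reshape(gh * gw, block, block, c))
```

For a 30 × 30 × 512 map this returns an array of shape (36, 5, 5, 512), matching the block size used in the training stage.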
Step S60: and calculating the distance between the feature block and each target cluster center to obtain a block class cluster containing the focus region features, and classifying the block class cluster containing the focus region features through a target classifier to obtain a classification result.
In specific implementation, the block clusters containing lesion-region features are classified through the target classifier; after the classification result is obtained, it is mapped back to the original image according to a preset mapping relation. The steps of calculating the distance between the feature blocks and each target clustering center to obtain the feature block clusters containing lesion regions, classifying them with the target classifier to obtain a final classification result, and mapping the final classification result back to the original image according to the preset mapping relation are repeated until all capsule endoscope images have been detected.
It should be noted that the iterated target clustering centers are used to make an auxiliary classification judgment on the feature blocks, yielding the cluster to which each feature block belongs. The main purpose of this step is to reduce the number of feature image blocks sent to the classifier: the classifier consists of fully connected layers, which have a large number of parameters and a heavy computational load, whereas K-means clustering is fast and simple to execute. The distance between each feature image block and each clustering center is calculated using the iterated clustering centers, the cluster to which each block belongs is determined, and only the feature block clusters judged to contain lesion regions proceed to the next classification step. This greatly reduces the number of feature blocks fed into the classifier, thereby achieving rapid detection without reducing accuracy.
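A minimal sketch of this filtering step, assuming the indices of the lesion-containing clusters (`lesion_cluster_ids`, a name introduced here for illustration) are known from the training stage:

```python
import numpy as np

def select_lesion_blocks(block_feats, centers, lesion_cluster_ids):
    """Assign each feature block (one row) to its nearest clustering centre
    and keep only the blocks whose cluster is known to contain lesion
    features, so far fewer blocks reach the fully connected classifier.
    lesion_cluster_ids is assumed known from the training stage."""
    dists = np.linalg.norm(block_feats[:, None, :] - centers[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)                 # cluster of each block
    keep = np.isin(nearest, list(lesion_cluster_ids))
    return block_feats[keep], np.flatnonzero(keep)
```

Only the returned blocks are passed on to the fully connected classifier, which is the source of the speed-up claimed above.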
It should be understood that the classifier network parameters trained in the training stage are used to perform the final classification on the feature block clusters containing lesion regions. According to the classification result, detection is completed by mapping back to the original image through a preset mapping relation. The specific steps of the preset mapping relation are: obtain the width w, height h and center point coordinates (x, y) of the detection frame from the upper-left corner (x0, y0) and lower-right corner (x1, y1) of the feature image block, and map the feature block frame back to the original image size according to the network down-sampling exponent M, obtaining the upper-left corner (x0', y0') and lower-right corner (x1', y1') of the final detection frame, with the formulas:
w = x1 - x0
h = y1 - y0
x = x0 + w/2
y = y0 + h/2
x0' = 2^M · x - 2^M · w/2
y0' = 2^M · y - 2^M · h/2
x1' = 2^M · x + 2^M · w/2
y1' = 2^M · y + 2^M · h/2
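The mapping formulas above translate directly into code; M is the network down-sampling exponent (for the 240 × 240 image reduced to a 30 × 30 feature map, the factor is 8, i.e. M = 3):

```python
def map_block_to_image(x0, y0, x1, y1, M):
    """Map a feature-map detection box back to original-image coordinates,
    following the w/h/centre formulas above with scale factor 2**M."""
    w, h = x1 - x0, y1 - y0
    x, y = x0 + w / 2, y0 + h / 2   # centre of the box on the feature map
    s = 2 ** M                      # down-sampling factor
    return (s * x - s * w / 2, s * y - s * h / 2,
            s * x + s * w / 2, s * y + s * h / 2)
```

For example, with M = 3 the 5 × 5 block at the feature-map origin maps to the 40 × 40 region at the image origin.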
The mapping from the feature map back to the original image is thus completed and the detection finished; these steps are repeated until all capsule endoscope images have been detected.
In this embodiment, preprocessed training capsule endoscope image blocks are acquired; feature extraction is performed on them through a convolutional neural network to obtain training capsule endoscope image block feature vectors, and the network parameters are updated to obtain a target convolutional neural network; the feature vectors are classified through a classifier, and the classifier parameters are updated to obtain a target classifier; clustering iteration is performed on the feature vectors to obtain target clustering centers; the target capsule endoscopy image feature map extracted by the target convolutional neural network is acquired and partitioned to obtain feature blocks; and the distance between the feature blocks and each target clustering center is calculated to obtain the block clusters containing lesion-region features, which are classified through the target classifier to obtain a classification result, so that lesions in the image are identified through the trained neural network, classifier and clustering centers, realizing rapid intelligent computer-aided diagnosis.
Referring to fig. 5, fig. 5 is a flowchart illustrating a second embodiment of a method for identifying a lesion in an image according to the present invention, based on the first embodiment shown in fig. 2.
In the second embodiment, the step S30 includes:
step S301: and performing Softmax classification on the feature vectors of the training capsule endoscope image blocks to obtain classification results, wherein a loss function adopted by Softmax is a cross entropy loss function.
It should be noted that the classifier used in the experiment in this embodiment consists of three fully connected layers (FC); the network layer parameter settings of the classification module are shown in table 2.
Type    Input size    Output size
Input   -             12800
FC1     12800         4096
FC2     4096          1000
FC3     1000          6
Table 2: classification module network layer parameters
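Assuming ReLU activations between the layers and no bias terms (the patent lists only the sizes in Table 2: 12800 → 4096 → 1000 → 6, where 12800 = 5 × 5 × 512), the classifier forward pass can be sketched as follows; the weight matrices are placeholders:

```python
import numpy as np

def classifier_forward(feat, W1, W2, W3):
    """Sketch of a three-layer fully connected classifier ending in a
    Softmax probability vector. ReLU activations and the absence of bias
    terms are assumptions; shapes are taken from the weight matrices, so
    small placeholder weights work for illustration."""
    def relu(v):
        return np.maximum(v, 0.0)
    z = relu(feat @ W1)   # FC1 (12800 -> 4096 in Table 2)
    z = relu(z @ W2)      # FC2 (4096 -> 1000)
    logits = z @ W3       # FC3 (1000 -> 6)
    e = np.exp(logits - logits.max())  # numerically stable Softmax
    return e / e.sum()
```

The output is a probability vector over the six classes; the class with the highest probability is taken as the prediction.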
Step S302: computing the classification loss from the cross-entropy loss function, the classification labels and the Softmax classification results.
In specific implementation, the classifier feeds the feature vector obtained by feature extraction into the fully connected layers and obtains a probability vector through Softmax; the class with the highest probability value is the predicted class.
Step S303: back-propagating the gradient of the classification loss and updating the classifier parameters.
It should be noted that the back propagation algorithm is currently the most common and effective algorithm for training an Artificial Neural Network (ANN). Its main idea is as follows: the training set data is fed into the input layer of the ANN, passes through the hidden layers and finally reaches the output layer, which outputs a result; this is the forward propagation process of the ANN. Because the output of the ANN differs from the actual result, the error between the estimated value and the actual value is calculated first, and this error is propagated backwards from the output layer through the hidden layers until it reaches the input layer. During back propagation, the values of the various parameters are adjusted according to the error, and this process is iterated until convergence.
Step S304: and returning to the step of performing Softmax classification on the characteristic vectors of the training capsule endoscope image blocks to obtain classification results until the characteristic vectors of all the training capsule endoscope image blocks are classified, so as to obtain the target classifier.
In a specific implementation, the loss function of the network is the Cross Entropy Loss: the classification loss is computed from the classification labels and the Softmax classification results, the gradient is back-propagated using the back propagation algorithm, and the parameters of the network are updated until all capsule endoscope image blocks are classified.
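The training step described above (Softmax classification, cross-entropy classification loss, gradient back-propagation, parameter update) can be sketched for a single linear layer as follows; the batch size, feature size and learning rate are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable Softmax over the class axis.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n, d, c = 8, 16, 6                    # batch, feature size, class count (stand-ins)
x = rng.standard_normal((n, d))       # image-block feature vectors
y = rng.integers(0, c, size=n)        # classification labels
w = rng.standard_normal((d, c)) * 0.01
b = np.zeros(c)

for _ in range(200):
    p = softmax(x @ w + b)                       # forward: Softmax classification
    loss = -np.log(p[np.arange(n), y]).mean()    # cross-entropy classification loss
    g = (p - np.eye(c)[y]) / n                   # dLoss/dlogits = softmax - one-hot
    w -= 0.5 * (x.T @ g)                         # back-propagate and update the
    b -= 0.5 * g.sum(axis=0)                     # parameters down the gradient

print(round(float(loss), 3))  # the classification loss shrinks as training proceeds
```

The well-known `softmax − one-hot` form of the logits gradient is what makes the cross-entropy loss convenient to back-propagate.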
In this embodiment, Softmax classification is performed on the feature vectors of the training capsule endoscope image blocks to obtain classification results; the classification loss is computed from the cross-entropy loss function, the classification labels and the Softmax classification results; the gradient of the classification loss is back-propagated through the back propagation algorithm, and the classifier parameters are updated; the process returns to the step of performing Softmax classification on the feature vectors of the training capsule endoscope image blocks until the feature vectors of all training capsule endoscope image blocks are classified, and the target classifier is obtained. In this way, a classifier can be trained on the feature vectors of the capsule endoscope image blocks, improving the efficiency of identifying lesions in the image.
Referring to fig. 6, fig. 6 is a flowchart illustrating a third embodiment of the method for identifying a lesion in an image according to the present invention, and the third embodiment of the method for identifying a lesion in an image according to the present invention is proposed based on the first embodiment shown in fig. 2.
In a third embodiment, the step S20 includes:
step S401: and randomly extracting K image blocks in the capsule endoscope image block training sample library as initial clustering centers, wherein K is an integer greater than 1.
It should be noted that the image block feature vectors obtained by the feature extraction module are iterated through a preset clustering algorithm to obtain clustering centers, and K image blocks in the capsule endoscope image block training sample library are randomly extracted as initial clustering centers.
Step S402: calculating the distance between the feature vector of each remaining image block in the capsule endoscope image block training sample library and the initial clustering centers.
It should be noted that the distance between the feature vector of each remaining image block in the capsule endoscope image block training sample library and the initial clustering centers is calculated, for example the Mahalanobis distance or the Euclidean distance; the method is not limited thereto.
Step S403: and dividing the image block feature vectors into K clusters according to the distance between each image block feature vector and the initial clustering center.
It should be noted that each image block feature vector is assigned to the class of the clustering center closest to it, and K clusters are formed after all image block feature vectors of the capsule endoscope image block training sample library have been assigned.
Step S404: and calculating the mean value of all the feature vectors of the K clusters, and taking the mean value as a new cluster center.
It should be noted that the preset clustering algorithm here is the K-means clustering algorithm, where K is the number of clusters; its advantages are fast operation and a simple execution process.
Step S405: and returning to the step of randomly extracting K image blocks in the capsule endoscope image block training sample library as initial clustering centers until the position of the new clustering center is not changed any more, and stopping iteration to obtain a target clustering center.
In a specific implementation, the mean value of all feature vectors of each cluster is recalculated and taken as the new clustering center; the same process is then repeated on the resulting clusters and clustering centers until the positions of the clustering centers no longer change, at which point the iteration stops, the clustering is finished, the target clustering centers are obtained and the training phase ends.
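Steps S401 to S405 amount to the standard K-means iteration. A minimal NumPy sketch follows, using the Euclidean distance and synthetic two-blob features as stand-ins for real image-block feature vectors.

```python
import numpy as np

def kmeans(feats, k, seed=0, max_iter=100):
    """K-means over image-block feature vectors (steps S401-S405).

    Returns the K target cluster centers and each block's cluster index.
    """
    rng = np.random.default_rng(seed)
    # S401: randomly pick K feature vectors as the initial cluster centers.
    centers = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(max_iter):
        # S402/S403: Euclidean distance to every center; assign to the nearest.
        d = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # S404: recompute each center as the mean of its cluster.
        new = np.array([feats[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        # S405: stop when the centers no longer move.
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

# Two well-separated blobs stand in for image-block features.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 0.1, (20, 5)), rng.normal(5, 0.1, (20, 5))])
centers, labels = kmeans(feats, k=2)
print(len(set(labels[:20].tolist())), len(set(labels[20:].tolist())))
```

With well-separated blobs, each blob ends up in a single cluster and the two recovered centers sit near the blob means.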
In this embodiment, K image blocks in the capsule endoscope image block training sample library are randomly extracted as the initial clustering centers, where K is an integer greater than 1; the distance between the feature vector of each remaining image block in the capsule endoscope image block training sample library and the initial clustering centers is calculated; the image block feature vectors are divided into K clusters according to their distances to the initial clustering centers; the mean value of all feature vectors of each of the K clusters is calculated and taken as the new clustering center; and the process returns to the step of randomly extracting K image blocks in the capsule endoscope image block training sample library as initial clustering centers until the positions of the new clustering centers no longer change, at which point the iteration stops and the target clustering centers are obtained. In this way, fewer feature image blocks are sent to the classifier in the testing stage, achieving rapid detection.
In addition, an embodiment of the present invention further provides a storage medium, where the storage medium stores a program for identifying a lesion in an image, and the program for identifying a lesion in an image when executed by a processor implements the steps of the method for identifying a lesion in an image as described above.
Since the storage medium may adopt the technical solutions of all the embodiments, beneficial effects brought by the technical solutions of the embodiments are at least achieved, and are not described in detail herein.
Referring to fig. 7, fig. 7 is a functional block diagram of a first embodiment of an apparatus for identifying a lesion in an image according to the present invention.
In a first embodiment of the apparatus for identifying a lesion in an image according to the present invention, the apparatus for identifying a lesion in an image comprises:
the acquisition module 10 is used for acquiring the preprocessed image blocks of the training capsule endoscope;
the feature extraction module 20 is configured to perform feature extraction on the preprocessed training capsule endoscope image blocks through a convolutional neural network to obtain feature vectors of the training capsule endoscope image blocks, and update parameters of the convolutional neural network to obtain a target convolutional neural network;
the classification module 30 is used for classifying the feature vectors of the training capsule endoscopy image blocks through a classifier, and updating parameters of the classifier to obtain a target classifier;
the clustering module 40 is used for performing clustering iteration on the feature vectors of the image blocks of the training capsule endoscope to obtain a target clustering center;
the obtaining module 10 is further configured to acquire the target capsule endoscope image feature vector extracted through the target convolutional neural network and block the image feature map to obtain feature blocks;
the clustering module 40 is further configured to calculate distances between the feature blocks and the respective target clustering centers to obtain block class clusters containing the lesion area features, and classify the block class clusters containing the lesion area features by using a target classifier to obtain classification results.
In this embodiment, a preprocessed training capsule endoscope image block is obtained; feature extraction is performed on the preprocessed training capsule endoscope image block through a convolutional neural network to obtain training capsule endoscope image block feature vectors, and the convolutional neural network parameters are updated to obtain a target convolutional neural network; the training capsule endoscope image block feature vectors are classified through a classifier, and the classifier parameters are updated to obtain a target classifier; clustering iteration is performed on the training capsule endoscope image block feature vectors to obtain target clustering centers; the target capsule endoscope image feature map extracted through the target convolutional neural network is acquired and divided into blocks to obtain feature blocks; and the distance between each feature block and each target clustering center is calculated to obtain the block class clusters containing lesion region features, which are classified through the target classifier to obtain the classification result. In this way, lesions in the image are identified through the trained neural network, classifier and clustering centers, and rapid, intelligent computer-aided diagnosis is realized.
In an embodiment, the acquiring module 10 is further configured to acquire a preprocessed training capsule endoscope image block, and includes:
acquiring a training capsule endoscopy image, and intercepting a target area on the corresponding capsule endoscopy image;
storing the target area on the capsule endoscope image into a folder with a label to obtain a capsule endoscope image block training sample library;
and sequentially extracting the target areas on the capsule endoscope images in the capsule endoscope image block training sample library, and performing the same cutting, rotating and overturning operations to obtain the capsule endoscope image blocks of the target areas with the same size.
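The cutting, rotating and flipping operations described above can be sketched as follows; the 128-pixel block size, the random-crop policy and the particular set of rotations and flips are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def preprocess_block(region, size=128, seed=0):
    """Cut, rotate and flip one intercepted target region into same-size blocks.

    `size` (the block side length) and the augmentation set below are
    assumptions for illustration.
    """
    rng = np.random.default_rng(seed)
    h, w = region.shape[:2]
    # Same-size cut: a random (size x size) crop from the target region.
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    crop = region[top:top + size, left:left + size]
    # Rotations in 90-degree steps plus a horizontal flip give extra blocks.
    return [np.rot90(crop, k) for k in range(4)] + [np.fliplr(crop)]

region = np.random.default_rng(1).random((160, 200, 3))  # one target region
blocks = preprocess_block(region)
print(len(blocks), blocks[0].shape)  # 5 (128, 128, 3)
```

All resulting blocks share the same shape, which is what lets them be fed to the feature-extraction network in a single batch.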
In an embodiment, the classification module 30 is further configured to classify the feature vectors of the training capsule endoscope image blocks through a classifier and update the classifier parameters to obtain a target classifier, and includes:
performing Softmax classification on the characteristic vectors of the training capsule endoscope image blocks to obtain classification results, wherein a loss function adopted by Softmax is a cross entropy loss function;
processing the loss function through cross entropy loss, classification labels and softmax classification results to obtain classification loss;
back-propagating the gradient of the classification loss through the back propagation algorithm, and updating the classifier parameters;
and returning to the step of performing Softmax classification on the feature vectors of the image blocks of the training capsule endoscope to obtain classification results until the feature vectors of all the image blocks of the training capsule endoscope are classified, and obtaining a target classifier.
In an embodiment, the clustering module 40 is further configured to perform clustering iteration on the feature vectors of the training capsule endoscope image blocks, and includes:
randomly extracting K image blocks in the capsule endoscope image block training sample library as initial clustering centers, wherein K is an integer greater than 1;
calculating the distance between the feature vector of each remaining image block in the capsule endoscope image block training sample library and the initial clustering centers;
dividing the image block feature vectors into K clusters according to the distance between each image block feature vector and the initial clustering center;
calculating the mean value of all the feature vectors of the K clusters, and taking the mean value as a new clustering center;
and returning to the step of randomly extracting K image blocks in the capsule endoscope image block training sample library as initial clustering centers until the position of the new clustering center is not changed any more, and stopping iteration to obtain a target clustering center.
In an embodiment, the obtaining module 10 is further configured to acquire the target capsule endoscope image feature vector extracted by the target convolutional neural network and block the image feature map to obtain feature blocks, and includes:
performing feature extraction on the target capsule endoscope image through the target convolutional neural network to obtain a target capsule endoscope image feature map;
and dividing the target capsule endoscope image feature map into feature blocks with the same size as the training capsule endoscope image block feature map.
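Dividing a feature map into equally sized feature blocks can be sketched as a reshape, assuming the feature map dimensions are multiples of the block size; the sizes below are stand-ins.

```python
import numpy as np

def block_feature_map(fmap, bh, bw):
    """Divide a target-image feature map (H, W, C) into non-overlapping
    (bh, bw, C) blocks matching the training-block feature size.

    Assumes H and W are multiples of bh and bw respectively.
    """
    H, W, C = fmap.shape
    return (fmap.reshape(H // bh, bh, W // bw, bw, C)
                .transpose(0, 2, 1, 3, 4)   # gather block rows and columns
                .reshape(-1, bh, bw, C))    # one block per leading index

fmap = np.arange(8 * 8 * 2, dtype=float).reshape(8, 8, 2)  # stand-in feature map
blocks = block_feature_map(fmap, 4, 4)
print(blocks.shape)  # (4, 4, 4, 2): four 4x4 blocks with 2 channels
```

Block 0 is the top-left quarter of the map, block 1 the top-right, and so on in row-major order, which fixes the mapping used later to return results to the original image.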
In an embodiment, the clustering module 40 is further configured to calculate distances between the feature blocks and the respective target clustering centers to obtain block class clusters containing lesion area features, and classify the block class clusters containing lesion area features by using a target classifier to obtain a classification result, where the classification result includes:
calculating the distance between each characteristic image block and each clustering center through the target clustering center;
judging the class cluster to which the characteristic image block belongs according to the distance between each characteristic image block and each cluster center, and screening out the class cluster of the characteristic image block containing the focus area;
and classifying the feature map block cluster which is judged to contain the focus area through a target classifier to obtain a classification result.
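The screening-then-classifying procedure above can be sketched as follows; the two-dimensional features, the choice of which cluster indices count as lesion clusters, and the toy stand-in classifier are all illustrative assumptions.

```python
import numpy as np

def screen_and_classify(feature_blocks, centers, lesion_clusters, classify):
    """Keep only blocks whose nearest target cluster center belongs to a
    lesion-region cluster, then classify just those blocks.

    Which cluster indices are lesion clusters is assumed known from the
    labels seen during training; `classify` stands in for the target
    classifier.
    """
    flat = feature_blocks.reshape(len(feature_blocks), -1)
    # Distance from every feature block to every target cluster center.
    d = np.linalg.norm(flat[:, None, :] - centers[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    keep = np.isin(nearest, list(lesion_clusters))
    # Only the screened blocks reach the target classifier.
    return keep, [classify(b) for b in flat[keep]]

centers = np.array([[0.0, 0.0], [5.0, 5.0]])   # normal vs lesion cluster (toy)
blocks = np.array([[0.1, 0.2], [4.9, 5.1], [0.3, 0.1], [5.2, 4.8]])
keep, preds = screen_and_classify(
    blocks, centers, lesion_clusters={1},
    classify=lambda b: "bleeding" if b[0] > 5 else "ulcer")  # toy classifier
print(keep.tolist())  # [False, True, False, True]
```

Because only blocks in lesion clusters are classified, the classifier handles far fewer blocks at test time, which is the source of the speed-up the embodiment claims.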
In an embodiment, the clustering module 40 is further configured to classify the cluster of the segment classes containing the lesion region features by a target classifier, and after obtaining a classification result, further includes:
mapping the classification result to the original image according to a preset mapping relation;
and the steps of calculating the distance between the feature blocks and each target clustering center to obtain the feature block clusters containing lesion areas, classifying them with the target classifier to obtain the final classification result, and mapping the final classification result back to the original image according to the preset mapping relation are repeated until all capsule endoscope images have been detected.
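Mapping a block-level result back to the original image can be sketched as follows, assuming the preset mapping relation is a regular non-overlapping grid; the grid and block sizes are illustrative.

```python
def block_to_image_coords(block_index, blocks_per_row, block_size):
    """Map a feature-block index back to a pixel rectangle in the original
    image, assuming blocks were taken row-major on a regular grid with
    stride equal to the block size (the preset mapping relation).
    """
    row, col = divmod(block_index, blocks_per_row)
    top, left = row * block_size, col * block_size
    return top, left, top + block_size, left + block_size

# A lesion block found at index 5 on a 4-blocks-per-row grid of 64-px blocks:
print(block_to_image_coords(5, blocks_per_row=4, block_size=64))
```

The returned rectangle (top, left, bottom, right) is where the block's classification result would be drawn on the original capsule endoscope image.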
Other embodiments or specific implementation manners of the apparatus for identifying a lesion in an image according to the present invention may refer to the above method embodiments, and therefore, at least all the beneficial effects brought by the technical solutions of the above embodiments are provided, and no further description is provided herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention or portions thereof that contribute to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (e.g., a Read Only Memory (ROM)/Random Access Memory (RAM), a magnetic disk, or an optical disk), and includes several instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A method of identifying a lesion in an image, the method comprising the steps of:
acquiring a preprocessed training capsule endoscope image block;
performing feature extraction on the preprocessed training capsule endoscope image block through a convolutional neural network to obtain a training capsule endoscope image block feature vector, and updating the convolutional neural network parameters to obtain a target convolutional neural network;
classifying the feature vectors of the training capsule endoscope image block through a classifier, and updating the parameters of the classifier to obtain a target classifier;
performing clustering iteration on feature vectors of an image block of the training capsule endoscope to obtain a target clustering center;
acquiring a target capsule endoscope image feature vector extracted by the target convolutional neural network and blocking the image feature map to obtain a feature block;
and calculating the distance between the feature block and each target cluster center to obtain a block class cluster containing the focus region features, and classifying the block class cluster containing the focus region features through a target classifier to obtain a classification result.
2. The method of claim 1, wherein said obtaining a pre-processed training capsule endoscopic image patch comprises:
acquiring a training capsule endoscopy image, and intercepting a target area on the corresponding capsule endoscopy image;
storing the target area on the capsule endoscope image into a folder with a label to obtain a capsule endoscope image block training sample library;
and sequentially extracting the target areas on the capsule endoscope images in the capsule endoscope image block training sample library, and performing the same cutting, rotating and overturning operations to obtain the capsule endoscope image blocks of the target areas with the same size.
3. The method of claim 1, wherein the classifying the feature vectors of the training capsule endoscopic image blocks by a classifier and updating classifier parameters to obtain a target classifier comprises:
performing Softmax classification on the characteristic vectors of the training capsule endoscope image blocks to obtain classification results, wherein a loss function adopted by Softmax is a cross entropy loss function;
processing the loss function through cross entropy loss, classification labels and softmax classification results to obtain classification loss;
back-propagating the gradient of the classification loss through the back propagation algorithm, and updating the classifier parameters;
and returning to the step of performing Softmax classification on the characteristic vectors of the training capsule endoscope image blocks to obtain classification results until the characteristic vectors of all the training capsule endoscope image blocks are classified, so as to obtain the target classifier.
4. The method of claim 2, wherein the performing clustering iteration on feature vectors of an image block of a training capsule endoscope to obtain a target clustering center comprises:
randomly extracting K image blocks in the capsule endoscope image block training sample library as initial clustering centers, wherein K is an integer greater than 1;
calculating the distance between the feature vector of each remaining image block in the capsule endoscope image block training sample library and the initial clustering centers;
dividing the image block feature vectors into K clusters according to the distance between each image block feature vector and the initial clustering center;
calculating the mean value of all the feature vectors of the K clusters, and taking the mean value as a new clustering center;
and returning to the step of randomly extracting K image blocks in the capsule endoscope image block training sample library as initial clustering centers until the position of the new clustering center is not changed any more, and stopping iteration to obtain a target clustering center.
5. The method of claim 1, wherein the obtaining a target capsule endoscope image feature vector extracted by a target convolutional neural network and blocking the image feature map to obtain a feature block comprises:
performing feature extraction on a target capsule endoscope image through the target convolutional neural network to obtain a target capsule endoscope image feature map;
and dividing the target capsule endoscope image characteristic diagram into characteristic blocks with the same size as the training capsule endoscope image block characteristic diagram.
6. The method of claim 1, wherein calculating the distance between the feature block and the center of each target cluster to obtain a cluster class of blocks containing the lesion region features, and classifying the cluster class of blocks containing the lesion region features by a target classifier to obtain a classification result comprises:
calculating the distance between each characteristic image block and each clustering center through the target clustering center;
judging the cluster to which the characteristic image block belongs according to the distance between each characteristic image block and each clustering center, and screening out the characteristic image block cluster containing a focus area;
and classifying the feature map block cluster which is judged to contain the focus area through a target classifier to obtain a classification result.
7. The method as claimed in any one of claims 1 to 6, wherein after the classifying the cluster of the segment classes containing the lesion region features by the target classifier to obtain the classification result, the method further comprises:
mapping the classification result to the original image according to a preset mapping relation;
and repeatedly calculating the distance between the feature block and each target clustering center to obtain a feature block cluster containing a focus area, classifying by using a target classifier to obtain a final classification result, and returning the final classification result to the original image according to a preset mapping relation until all capsule endoscope image detection is finished.
8. An apparatus for identifying a lesion in an image, the apparatus comprising:
the acquisition module is used for acquiring the preprocessed image blocks of the training capsule endoscope;
the feature extraction module is used for extracting features of the preprocessed training capsule endoscope image blocks through a convolutional neural network to obtain feature vectors of the training capsule endoscope image blocks, and updating parameters of the convolutional neural network to obtain a target convolutional neural network;
the classification module is used for classifying the feature vectors of the training capsule endoscope image block through a classifier and updating the parameters of the classifier to obtain a target classifier;
the clustering module is used for performing clustering iteration on the feature vectors of the training capsule endoscope image blocks to obtain a target clustering center;
the acquisition module is further configured to acquire the target capsule endoscope image feature vectors extracted by the target convolutional neural network and block the image feature maps to obtain feature blocks;
the clustering module is further used for calculating the distance between the feature block and each target clustering center to obtain a block class cluster containing the focus area features, and classifying the block class cluster containing the focus area features through a target classifier to obtain a classification result.
9. An apparatus for identifying a lesion in an image, the apparatus comprising a memory, a processor, and a program for identifying a lesion in an image stored in the memory and executable on the processor, wherein the program for identifying a lesion in an image when executed by the processor implements the method for identifying a lesion in an image according to any one of claims 1 to 7.
10. A storage medium having stored thereon a program for identifying a lesion in an image, the program for identifying a lesion in an image when executed by a processor implementing a method for identifying a lesion in an image according to any one of claims 1 to 7.
CN202211214860.6A 2022-09-30 2022-09-30 Method, device and equipment for identifying focus in image and storage medium Pending CN115690486A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211214860.6A CN115690486A (en) 2022-09-30 2022-09-30 Method, device and equipment for identifying focus in image and storage medium


Publications (1)

Publication Number Publication Date
CN115690486A true CN115690486A (en) 2023-02-03

Family

ID=85064312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211214860.6A Pending CN115690486A (en) 2022-09-30 2022-09-30 Method, device and equipment for identifying focus in image and storage medium

Country Status (1)

Country Link
CN (1) CN115690486A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116580254A (en) * 2023-07-12 2023-08-11 菲特(天津)检测技术有限公司 Sample label classification method and system and electronic equipment
CN116580254B (en) * 2023-07-12 2023-10-20 菲特(天津)检测技术有限公司 Sample label classification method and system and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination