CN114663661A - Space life science experimental object semantic segmentation method and device and storage medium - Google Patents

Space life science experimental object semantic segmentation method and device and storage medium

Info

Publication number
CN114663661A
Authority
CN
China
Prior art keywords
image
pixel
class
segmented
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210387409.8A
Other languages
Chinese (zh)
Inventor
刘康
李盛阳
杨简
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Technology and Engineering Center for Space Utilization of CAS
Original Assignee
Technology and Engineering Center for Space Utilization of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Technology and Engineering Center for Space Utilization of CAS filed Critical Technology and Engineering Center for Space Utilization of CAS
Priority to CN202210387409.8A priority Critical patent/CN114663661A/en
Publication of CN114663661A publication Critical patent/CN114663661A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a semantic segmentation method, device, and storage medium for space life science experimental objects. The method comprises the following steps: determining the image class and class score of an input image based on a class prediction model; generating a class activation map based on the class score of the image to be segmented and the pixel gradient weight of each pixel in the last convolutional feature map of the class prediction model; binarizing the class activation map to obtain a coarse segmentation result of the image to be segmented; obtaining a guided backpropagation map of the image to be segmented; and, taking the coarse segmentation result as the initial contour and the guided backpropagation map as the base map to be fitted, performing level-set-based iterative evolution to obtain a pixel-level semantic segmentation result of the image to be segmented. The method requires no high-precision pixel-level annotations as a basis for strongly supervised learning, retains semantic information, effectively improves segmentation accuracy, provides reliable and powerful support for scientists' analysis of experimental data, and lowers the data-processing threshold.

Description

Space life science experimental object semantic segmentation method and device and storage medium
Technical Field
The invention relates to the technical field of image data processing, in particular to a semantic segmentation method and device for space life science experimental objects and a storage medium.
Background
Large spacecraft such as space stations usually carry several types of space science payloads to conduct a variety of space science experiments, covering space life science, microgravity fluid physics, combustion, and other fields. Space life science experiments study the influence of microgravity on the growth, development, proliferation, and differentiation of organisms, tissues, and cells, helping people understand and explore the laws of biological growth and development in space and laying a foundation for further space exploration and utilization. Scientists conducting space life science experiments generate many cell images, and preprocessing these images, for example identifying the classes of cells and segmenting their extents, helps researchers explore the laws of life, draw scientific conclusions, and produce results. Space science experimental data are highly particular and specialized; developing dedicated AI designs based on such data provides auxiliary references for scientists' research and promotes scientific discovery and the output of results.
Faced with massive amounts of scientific experimental image data, the traditional manual annotation approach that depends on experts is time-consuming and labor-intensive, and the need for computer-vision-based image semantic segmentation is increasingly pressing. With growing data volumes and computing power, deep convolutional neural networks have been widely applied to segmentation tasks to extract relevant visual information. However, methods based on supervised learning require pixel-level annotated data for model training, so annotation costs are high; moreover, the strong specificity of space science experimental data increases the annotation difficulty and requires professionals to carry it out.
Disclosure of Invention
The invention aims to solve the technical problems of high annotation cost and low segmentation accuracy in the semantic segmentation of space life science experimental objects, and provides a semantic segmentation method, device, and storage medium for space life science experimental objects.
To solve this technical problem, the invention provides a semantic segmentation method for space life science experimental objects, comprising the following steps: determining the image class and class score of an input image with a convolutional-neural-network-based class prediction model, and taking input images whose image class contains a space life science experimental object as images to be segmented; generating a class activation map based on the class score of the image to be segmented and the pixel gradient weight of each pixel in the last convolutional feature map of the class prediction model, and binarizing the class activation map to obtain a coarse segmentation result of the image to be segmented; obtaining a guided backpropagation map of the image to be segmented by the guided backpropagation method; and, taking the coarse segmentation result as the initial contour and the guided backpropagation map as the base map to be fitted, performing level-set-based iterative evolution to obtain a pixel-level semantic segmentation result of the image to be segmented.
The beneficial effects of the invention are as follows. Images to be segmented that contain a space life science experimental object are screened by the class prediction model, and the corresponding class scores are obtained. A class activation map is generated from the class score of the image to be segmented and the pixel gradient weight of each pixel in the last convolutional feature map of the class prediction model, and binarizing the class activation map yields a coarse segmentation result; that is, a pixel-level segmentation task is accomplished with image-level annotation information, greatly reducing the annotation workload and difficulty. With the coarse segmentation result as the initial contour and the guided backpropagation map as the base map to be fitted, accurate pixel-level semantic segmentation is finally achieved by the level set method, avoiding the influence of local noise on segmentation accuracy and greatly improving it. The invention requires no high-precision pixel-level annotation information as a basis for strongly supervised learning, retains semantic information, effectively improves segmentation accuracy, provides reliable and powerful support for scientists' analysis of experimental data, and lowers the data-processing threshold.
On the basis of the technical scheme, the invention can be further improved as follows.
Further, the above technical solution also includes a step of pre-constructing the convolutional-neural-network-based class prediction model, comprising: acquiring space life science experimental data and labeling it by class, according to whether a space life science experimental object is present, to obtain a class dataset; training a neural-network-based classification model with the class dataset; and taking the optimal model obtained by the neural-network-based classification model through transfer learning as the class prediction model.
The benefit of this further scheme is as follows. Although the total volume of space science experimental data is large, the data from any specific experiment is relatively small, so large-scale annotated data cannot be acquired for supervised learning; by annotating a small amount of data at low cost for the actual application scenario, a class prediction model that performs well on the actual task can still be obtained. Here, 'small amount' means that the total number of annotated samples needed for the classification task is small, on the order of only thousands of samples per class, while classification accuracy can reach 90% or even higher. 'Low cost' means that class labels, supervision weaker than the pixel-level labels required by segmentation tasks, are used as the supervision information, reducing annotation difficulty and improving annotation efficiency. In general, image-level annotation takes only about 1 second, roughly 1/30 of the time needed for pixel-level annotation.
Further, the neural-network-based classification model includes: convolutional layers for feature extraction and fully connected layers for the classification task, wherein the convolutional layers are taken from a model pre-trained on ImageNet.
The benefit of this further scheme is that a pre-trained model from a large-scale dataset effectively reduces the training cost of the new model, specifically its computation and time costs. Deep neural network models have enormous numbers of parameters; a small-scale dataset cannot complete their training and easily causes overfitting, which reduces the model's generalization ability. Using the weights pre-trained on ImageNet still yields good feature extraction on new data; only the fully connected layer used for classification needs to be modified, changing its output dimension to the number of classes of the new task, after which a high-performance class prediction model supporting the specific task is obtained by training on small-scale data through transfer learning.
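As an illustrative sketch only (not part of the patent text), this transfer-learning setup can be expressed in PyTorch/torchvision roughly as follows, using the VGG16 backbone named in the embodiment below; the layer names and `weights` argument are torchvision's (version 0.13 or later), and `build_class_prediction_model` is a hypothetical helper name:

```python
import torch.nn as nn
from torchvision import models

def build_class_prediction_model(num_classes: int = 2) -> nn.Module:
    # Convolutional layers: VGG16 features pre-trained on ImageNet.
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    # Freeze the feature-extraction parameters; only the classifier is fine-tuned.
    for param in model.features.parameters():
        param.requires_grad = False
    # Change the output dimension of the last fully connected layer to the number
    # of classes of the new task (2 here: object present / object absent).
    model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, num_classes)
    return model
```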
Further, generating the class activation map based on the class score of the image to be segmented and the pixel gradient weight of each pixel in the last convolutional feature map of the class prediction model includes: determining the pixel gradient weight of each pixel in the sub-feature maps of the last convolutional feature map of the class prediction model according to the class score of the image to be segmented, wherein the last convolutional feature map comprises multiple sub-feature maps; performing a weighted summation of the pixel gradients greater than zero in each sub-feature map according to the pixel gradient weights to obtain the sub-feature-map weights; and weighting each sub-feature map by its sub-feature-map weight to obtain the class activation map of the last convolutional feature map.
The benefit of this further scheme is as follows. The class activation map represents the position information of the object to be segmented; the pixel gradient weight of each pixel on the feature map is computed individually, the weight of each sub-feature map is determined from these pixel gradient weights, and the class activation map of the last convolutional feature map is finally determined from the sub-feature-map weights. This fully accounts for the differing contributions of individual pixels to the class activation map, effectively overcomes the tendency of traditional class activation mapping to identify only the local information most influential for classification, which lowers segmentation accuracy, and yields better localization results for multiple objects.
Further, the pixel gradient weight $\alpha_{ij}^{k}$ of each pixel in the sub-feature maps of the last convolutional feature map of the class prediction model is determined according to the class score of the image to be segmented, and is calculated as:

$$\alpha_{ij}^{k} = \frac{\dfrac{\partial^{2} Y}{(\partial A_{ij}^{k})^{2}}}{2\,\dfrac{\partial^{2} Y}{(\partial A_{ij}^{k})^{2}} + \sum\limits_{a=1}^{M}\sum\limits_{b=1}^{N} A_{ab}^{k}\,\dfrac{\partial^{3} Y}{(\partial A_{ij}^{k})^{3}}}$$

where $\alpha_{ij}^{k}$ is the pixel gradient weight of the pixel with position coordinates (i, j) on the k-th sub-feature map, Y is the class score of the image to be segmented obtained by forward propagation, and A is the feature map of the last convolutional layer, with i = 1, 2, ..., M, j = 1, 2, ..., N, and k = 1, 2, ..., K; M, N, and K are the number of pixels in the length direction, the number of pixels in the width direction, and the number of channels of the feature map, respectively; $A_{ij}^{k}$ is the pixel with coordinates (i, j) on the k-th sub-feature map; $\partial^{2} Y/(\partial A_{ij}^{k})^{2}$ and $\partial^{3} Y/(\partial A_{ij}^{k})^{3}$ are the second and third partial derivatives of Y with respect to A.
The benefit of this further scheme is that obtaining the pixel gradient weight of each pixel on the feature map fully accounts for the differing contributions of individual pixels to the class activation map, effectively overcomes the tendency of traditional class activation mapping to identify only the local information most influential for classification, which lowers segmentation accuracy, and yields better localization results for multiple objects.
Further, the pixel gradients greater than zero in each sub-feature map are weighted and summed according to the pixel gradient weights to obtain the sub-feature-map weight $\omega_{k}$, calculated as:

$$\omega_{k} = \sum_{i=1}^{M}\sum_{j=1}^{N} \alpha_{ij}^{k}\,\mathrm{relu}\!\left(\frac{\partial Y}{\partial A_{ij}^{k}}\right)$$

where $\omega_{k}$ is the sub-feature-map weight of the k-th sub-feature map, $\alpha_{ij}^{k}$ is the pixel gradient weight of the pixel with position coordinates (i, j) on the k-th sub-feature map, and relu is the activation function, whose role is to keep the pixels whose pixel gradient $\partial Y/\partial A_{ij}^{k}$ is greater than 0 and to set the pixel gradients of the remaining pixels to zero.
The benefit of this further scheme is that the sub-feature-map weight is obtained by a weighted summation of the pixel gradients of the pixels that meet the gradient requirement, which gives a better effect for multi-target class activation.
Further, each sub-feature map is weighted by its sub-feature-map weight to obtain the class activation map of the last convolutional feature map, calculated as:

$$L = \sum_{k=1}^{K} \omega_{k} A^{k}$$

where L is the class activation map, $\omega_{k}$ is the sub-feature-map weight of the k-th sub-feature map, and $A^{k}$ is the k-th sub-feature map of the feature map A of the last convolutional layer.
The benefit of this further scheme is that the class activation map finally obtained through this weighting serves as the basis for coarse segmentation.
To solve the above technical problems, the invention provides a semantic segmentation device for space life science experimental objects, comprising: a class labeling module, a coarse segmentation module, a guided backpropagation module, and a semantic segmentation module.
The class labeling module determines the image class and class score of an input image with a convolutional-neural-network-based class prediction model and takes input images whose image class contains a space life science experimental object as images to be segmented; the coarse segmentation module generates a class activation map based on the class score of the image to be segmented and the pixel gradient weight of each pixel in the last convolutional feature map of the class prediction model, and binarizes the class activation map to obtain a coarse segmentation result of the image to be segmented; the guided backpropagation module obtains a guided backpropagation map of the image to be segmented by the guided backpropagation method; and the semantic segmentation module takes the coarse segmentation result as the initial contour and the guided backpropagation map as the base map to be fitted, and performs level-set-based iterative evolution to obtain a pixel-level semantic segmentation result of the image to be segmented.
To solve the above technical problems, the invention further provides a semantic segmentation device for space life science experimental objects, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the program, implements the semantic segmentation method for space life science experimental objects according to the above technical solution.
To solve the above technical problems, the invention also provides a computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to execute the semantic segmentation method for space life science experimental objects according to the above technical solution.
Additional aspects of the invention and its advantages will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
FIG. 1 is a flow chart of a semantic segmentation method for an experimental object of space life science according to an embodiment of the present invention;
FIG. 2 is a diagram showing an implementation process of the semantic segmentation method for the experimental object of space life science according to the embodiment of the present invention;
Fig. 3 is a structural block diagram of the semantic segmentation device for space life science experimental objects according to an embodiment of the present invention.
Detailed Description
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the disclosure in the specification. It is to be understood that the described embodiments are merely some embodiments of the disclosure, and not all embodiments. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details within the description without departing from the spirit of the disclosure. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
Space life science is the discipline that studies the phenomena and laws of life in space under the action of its special environmental factors (such as vacuum, high and low temperatures, weightlessness, and cosmic radiation). Space life science experimental objects include the cells, tissues, proteins, and individual animals and plants involved in scientific experiments conducted on spacecraft such as space stations.
Fig. 1 is a flowchart of a semantic segmentation method for a space life science experimental object according to an embodiment of the present invention. As shown in fig. 1, the method includes:
and S110, determining the image category and the category score of the input image by using a category prediction model based on a convolutional neural network, and taking the input image of which the image category comprises a space life science experimental object as an image to be segmented.
The class prediction model is the optimal model obtained through transfer learning from a neural-network-based classification model. The image classes used for class labeling may include: a first image class containing a space life science experimental object and a second image class not containing one. Further, depending on the actual application scenario, there may be multiple image classes: class 1, class 2, class 3, ..., class n (n a positive integer); setting the output layer of the class prediction model to n turns it from a two-class model into a multi-class model.
In this embodiment, the input image is labeled at the image level by the class prediction model; image-level labeling takes only about 1 second, roughly 1/30 of the time required for pixel-level labeling.
S120, generating a class activation map based on the class score of the image to be segmented and the pixel gradient weight of each pixel in the last convolutional feature map of the class prediction model, and binarizing the class activation map to obtain a coarse segmentation result of the image to be segmented.
For an image to be segmented that contains a space life science experimental object, a class activation map is generated from the class score and the pixel gradient weight of each pixel in the last convolutional feature map of the class prediction model, and binarizing the class activation map yields the coarse segmentation result. This embodiment of the invention accomplishes a pixel-level segmentation task with image-level annotation information, greatly reducing the annotation workload and difficulty.
S130, obtaining a guided backpropagation map of the image to be segmented by the guided backpropagation (Guided Backpropagation) method.
S140, taking the coarse segmentation result as the initial contour and the guided backpropagation map as the base map to be fitted, performing level-set-based iterative evolution to obtain a pixel-level semantic segmentation result of the image to be segmented.
In this embodiment of the invention, the execution order of S130 and S120 is not restricted; the step numbers serve only to distinguish the steps.
The semantic segmentation method for space life science experimental objects provided by this embodiment screens images to be segmented that contain a space life science experimental object with the class prediction model and obtains the corresponding class scores; generates a class activation map from the class score of the image to be segmented and the pixel gradient weight of each pixel in the last convolutional feature map of the class prediction model, and binarizes the class activation map to obtain a coarse segmentation result; takes the coarse segmentation result as the initial contour, providing initial localization information for the level set; and takes the guided backpropagation map as the base map to be fitted, finally achieving accurate pixel-level semantic segmentation through the level set method, avoiding the influence of local noise on segmentation accuracy and greatly improving it. Semantic segmentation with low annotation cost, high robustness, and high accuracy is thus realized.
This embodiment requires no high-precision pixel-level annotation information as a basis for strongly supervised learning, retains semantic information, effectively improves segmentation accuracy, provides reliable and powerful support for scientists' analysis of experimental data, and lowers the data-processing threshold.
Fig. 2 shows the implementation process of the semantic segmentation method for space life science experimental objects according to an embodiment of the present invention. The overall steps of a specific implementation are described as follows.
and S1, making a classification data set.
Specifically, class labeling is performed on space life science experimental data acquired from the space station; as needed, the labels may be a first image class and a second image class. Images of the first image class (which may be identified by the number 1) contain a life science experimental object to be segmented; images of the second image class (which may be identified by the number 0) contain no space life science experimental object, only background or other foreign matter. The final class dataset may use the JPG image format with an image size of 224 × 224 pixels. The training, validation, and test sets may be partitioned at a preset ratio (e.g., 7:2:1). The training and validation sets are used in S2 for classification-network training and selection of the optimal model; the test set serves as the input for the forward propagation in S3, and the semantic segmentation results are finally output through S3, S4, S5, and S6.
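A minimal sketch of this dataset preparation (not from the patent; the directory name is hypothetical, and an ImageFolder layout with one sub-directory per class is assumed):

```python
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),   # class-dataset images are 224 x 224 pixels
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("space_experiment_images", transform=transform)

# Partition into training, validation, and test sets at the preset 7:2:1 ratio.
n = len(dataset)
n_train, n_val = int(0.7 * n), int(0.2 * n)
train_set, val_set, test_set = torch.utils.data.random_split(
    dataset, [n_train, n_val, n - n_train - n_val])
```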
And S2, performing model training and optimization on the neural network-based classification model by using the classification data set to obtain a class prediction model.
Specifically, the neural-network-based classification model comprises convolutional layers for feature extraction and fully connected layers for the classification task. The convolutional layers may be taken from the feature-extraction part of a classical convolutional neural network pre-trained on ImageNet, such as VGG, ResNet, DenseNet, or InceptionV3, and the last output layer of the fully connected layers is set to 2 units. The network parameters are learned with transfer learning to obtain the classification model. The ImageNet project is a large visual database for visual object recognition research: more than 14 million images have been manually annotated to indicate the objects they contain, bounding boxes are provided for at least 1 million images, and ImageNet covers more than 20,000 typical categories, such as "balloon" or "strawberry", each containing several hundred images.
Taking the deep convolutional network VGG16 as an example, the convolutional layers and the first fully connected layers are retained, and the final output layer is set to 2 units. The network's initial parameters adopt those of the model pre-trained on ImageNet; the convolutional-layer parameters are fixed, and the fully connected layers are trained with the training set from S1 to fine-tune the model. The specific settings may be: network input size 224 × 224, ReLU as the activation function, cross-entropy as the loss function, Adam as the optimizer, learning rate 0.00001, and batch size (batch_size) 128; training continues until the accuracy rises and the loss falls gently, and the optimal model structure and parameters are saved to obtain the class prediction model.
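The fine-tuning loop with these settings might look as follows; this is a sketch reusing the hypothetical `build_class_prediction_model` and `train_set` from the earlier sketches, and the epoch count is an assumption, since the embodiment trains until accuracy and loss plateau:

```python
import torch
import torch.nn as nn

model = build_class_prediction_model(num_classes=2)
loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
criterion = nn.CrossEntropyLoss()                       # cross-entropy loss
optimizer = torch.optim.Adam(                           # Adam, learning rate 1e-5
    (p for p in model.parameters() if p.requires_grad), lr=1e-5)

for epoch in range(50):                                 # until accuracy/loss plateau
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
torch.save(model.state_dict(), "class_prediction_model.pt")  # save model parameters
```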
S3, determining the image class and class score of the input image with the class prediction model, and taking input images whose image class contains a space life science experimental object as images to be segmented.
Specifically, the images of the test set in the class dataset are input and propagated forward through the convolutional and fully connected layers of the class prediction model (i.e., the optimal model finally obtained in S2, with identical parameters). Forward propagation yields the class of the image to be segmented and the class score corresponding to that class, i.e., the value Y. If the class is 1, the input picture contains a space life science experimental object, and S4 is executed to obtain the coarse segmentation result; if the class is 0, S4 and the subsequent steps are not performed.
S4, for an image to be segmented that contains a space life science experimental object, generating a class activation map based on the class score and the pixel gradient weight of each pixel in the last convolutional feature map of the class prediction model, and binarizing the class activation map to obtain a coarse segmentation result of the image to be segmented.
Specifically, the method comprises the following steps:
S41, determining the pixel gradient weight of each pixel in the sub-feature maps of the last convolutional feature map of the class prediction model according to the class score of the image to be segmented, as given by formula (1); the last convolutional feature map comprises multiple sub-feature maps:

$$\alpha_{ij}^{k} = \frac{\dfrac{\partial^{2} Y}{(\partial A_{ij}^{k})^{2}}}{2\,\dfrac{\partial^{2} Y}{(\partial A_{ij}^{k})^{2}} + \sum\limits_{a=1}^{M}\sum\limits_{b=1}^{N} A_{ab}^{k}\,\dfrac{\partial^{3} Y}{(\partial A_{ij}^{k})^{3}}} \qquad (1)$$

where $\alpha_{ij}^{k}$ is the pixel gradient weight of the pixel with position coordinates (i, j) on the k-th sub-feature map, Y is the class score of the image to be segmented obtained by forward propagation, and A is the feature map of the last convolutional layer, with i = 1, 2, ..., M, j = 1, 2, ..., N, and k = 1, 2, ..., K; M, N, and K are the number of pixels in the length direction, the number of pixels in the width direction, and the number of channels of the feature map (i.e., the number of sub-feature maps), respectively; $A_{ij}^{k}$ is the pixel with coordinates (i, j) on the k-th sub-feature map; $\partial^{2} Y/(\partial A_{ij}^{k})^{2}$ and $\partial^{3} Y/(\partial A_{ij}^{k})^{3}$ are the second and third partial derivatives of Y with respect to A.
S42, performing a weighted summation of the pixel gradients greater than zero in each sub-feature map according to the pixel gradient weights to obtain the sub-feature-map weights, as given by formula (2):

$$\omega_{k} = \sum_{i=1}^{M}\sum_{j=1}^{N} \alpha_{ij}^{k}\,\mathrm{relu}\!\left(\frac{\partial Y}{\partial A_{ij}^{k}}\right) \qquad (2)$$

where $\omega_{k}$ is the sub-feature-map weight of the k-th sub-feature map, $\alpha_{ij}^{k}$ is the pixel gradient weight of the pixel with position coordinates (i, j) on the k-th sub-feature map, and relu is the activation function, whose role is to keep the pixels whose pixel gradient $\partial Y/\partial A_{ij}^{k}$ is greater than 0 and to set the pixel gradients of the remaining pixels to zero.
S43, weighting each sub-feature map by its sub-feature-map weight to obtain the class activation map of the last convolutional feature map, as given by formula (3):

$$L = \sum_{k=1}^{K} \omega_{k} A^{k} \qquad (3)$$

where L is the class activation map, $\omega_{k}$ is the sub-feature-map weight of the k-th sub-feature map, and $A^{k}$ is the k-th sub-feature map of the feature map A of the last convolutional layer.
In this embodiment of the invention, the weighted combination of the positive partial derivatives of the class score Y with respect to the last convolutional feature map A serves as the pixel gradient weight, generating a visual explanation for the corresponding class label, namely the class activation map, in which redder regions have a greater influence on the classification result and bluer regions a smaller one. The class activation map can then be binarized with Otsu's method to obtain the coarse segmentation result.
In this embodiment, the class activation map represents the position information of the object to be segmented: the pixel gradient weight of each pixel on the feature map is computed individually, the weight of each sub-feature map is determined from these pixel gradient weights, and the class activation map of the last convolutional feature map is finally determined from the sub-feature-map weights. This fully accounts for the differing contributions of individual pixels to the class activation map, so localization results for multiple objects can be obtained.
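In code, S41-S43 plus the binarization can be sketched as below (not from the patent). It assumes `feature_map` (the last convolutional feature map A, shape K x M x N) and `grads` (the gradients of Y with respect to A, same shape) have already been captured with forward/backward hooks, and it uses the common Grad-CAM++-style closed form in which the second and third partial derivatives of formula (1) are approximated by powers of the first derivative:

```python
import numpy as np
import cv2  # OpenCV, used here for Otsu binarization

def class_activation_map(feature_map: np.ndarray, grads: np.ndarray) -> np.ndarray:
    grads_2, grads_3 = grads ** 2, grads ** 3
    # S41: pixel gradient weights alpha_ij^k of formula (1).
    denom = 2.0 * grads_2 + feature_map.sum(axis=(1, 2), keepdims=True) * grads_3
    alpha = grads_2 / np.where(denom != 0.0, denom, 1e-8)
    # S42: weighted sum over pixels with positive gradients (formula (2)).
    omega = (alpha * np.maximum(grads, 0.0)).sum(axis=(1, 2))    # shape (K,)
    # S43: weight each sub-feature map and sum over channels (formula (3)).
    return (omega[:, None, None] * feature_map).sum(axis=0)      # shape (M, N)

def coarse_segmentation(cam: np.ndarray) -> np.ndarray:
    # Normalize the class activation map to 0-255 and binarize with Otsu's method.
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    _, mask = cv2.threshold((255 * cam).astype(np.uint8), 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```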
S5, obtaining a guided backpropagation map (i.e., a GB map) of the image to be segmented by the guided backpropagation method.
The guided backpropagation (Guided Backpropagation) method modifies the gradient backpropagation of the ReLU activation function so that the parts smaller than zero are not propagated backward and only the parts greater than zero are, yielding a fine-grained pixel-scale representation, namely the GB map.
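A sketch of this modification in PyTorch (illustrative, not the patent's code): a backward hook on every ReLU zeroes the negative gradients so that only parts greater than zero propagate backward. Note that backbones built with in-place ReLUs may need those replaced with out-of-place ones before registering the hooks:

```python
import torch
import torch.nn as nn

def _guided_relu_hook(module, grad_input, grad_output):
    # Guided backpropagation rule: do not propagate gradients smaller than zero.
    return (torch.clamp(grad_input[0], min=0.0),)

def guided_backprop_map(model: nn.Module, image: torch.Tensor) -> torch.Tensor:
    handles = [m.register_full_backward_hook(_guided_relu_hook)
               for m in model.modules() if isinstance(m, nn.ReLU)]
    image = image.detach().clone().requires_grad_(True)
    model(image).max().backward()    # backpropagate the class score Y
    for h in handles:
        h.remove()
    return image.grad.detach()       # fine-grained pixel-scale GB map
```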
S6, taking the coarse segmentation result as the initial contour and the guided backpropagation map as the base map to be fitted, performing level-set-based iterative evolution to obtain a pixel-level semantic segmentation result of the image to be segmented.
Specifically, based on the level set method, the coarse segmentation result of S4 is used as the initial contour of the level set and the GB map obtained by guided backpropagation in S5 as the base map to be fitted. The time step may be set to 5, the inner iteration count (iter_inner) to 5, the outer iteration count (iter_outer) to 40, the coefficient of the weighted length term to 5, the coefficient of the weighted area term to 1.5, and the scale parameter of the Gaussian kernel to 1.5, with a double-well potential function (double_well). Curve evolution is expressed implicitly through surface evolution, and an accurate segmentation result is obtained through multiple rounds of iterative evolution.
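A heavily simplified, illustrative level-set loop with these parameters is sketched below; it uses a plain curvature-plus-area update in place of the full distance-regularized double-well formulation, so it shows the structure of S6 rather than the exact embodiment:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def level_set_step(phi, g, dt, lam, alfa):
    # One simplified update: weighted length (curvature) and weighted area terms
    # driven by the edge indicator g; not the full distance-regularized scheme.
    gy, gx = np.gradient(phi)
    mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    curvature = np.gradient(gx / mag, axis=1) + np.gradient(gy / mag, axis=0)
    delta = (1.0 / np.pi) / (1.0 + phi ** 2)   # smoothed Dirac delta of phi
    return phi + dt * delta * (lam * g * curvature + alfa * g)

def fine_segmentation(coarse_mask, gb_map, time_step=5.0, iter_inner=5,
                      iter_outer=40, lam=5.0, alfa=1.5, sigma=1.5):
    # Initial contour: binary level-set function from the coarse segmentation.
    phi = np.where(coarse_mask > 0, -2.0, 2.0)
    # Edge indicator g from the GB map (the base map to be fitted).
    gy, gx = np.gradient(gaussian_filter(gb_map.astype(float), sigma))
    g = 1.0 / (1.0 + gx ** 2 + gy ** 2)
    for _ in range(iter_outer):            # 40 outer rounds
        for _ in range(iter_inner):        # 5 inner updates per round
            phi = level_set_step(phi, g, time_step, lam, alfa)
    return phi < 0                         # pixel-level segmentation mask
```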
To evaluate the segmentation performance of the semantic segmentation method for space life science experimental objects provided by this embodiment, the segmentation results on the test set are compared against the annotated ground truth, and performance is quantitatively evaluated with the intersection-over-union (IoU), which reaches 82.3%.
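For reference, the IoU metric used here can be computed as follows (a standard sketch, not from the patent):

```python
import numpy as np

def intersection_over_union(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union
```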
As shown in Fig. 3, an embodiment of the present invention provides a semantic segmentation device for space life science experimental objects, comprising: a class labeling module 310, a coarse segmentation module 320, a guided backpropagation module 330, and a semantic segmentation module 340. The class labeling module 310 determines the image class and class score of an input image with a convolutional-neural-network-based class prediction model and takes input images whose image class contains a space life science experimental object as images to be segmented; the coarse segmentation module 320 generates a class activation map based on the class score of the image to be segmented and the pixel gradient weight of each pixel in the last convolutional feature map of the class prediction model, and binarizes the class activation map to obtain a coarse segmentation result of the image to be segmented; the guided backpropagation module 330 obtains a guided backpropagation map of the image to be segmented by the guided backpropagation method; and the semantic segmentation module 340 takes the coarse segmentation result as the initial contour and the guided backpropagation map as the base map to be fitted, and performs level-set-based iterative evolution to obtain a pixel-level semantic segmentation result of the image to be segmented.
An embodiment of the invention provides a semantic segmentation device for space life science experimental objects comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the program, implements the semantic segmentation method for space life science experimental objects provided by the above embodiment.
An embodiment of the invention provides a computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to execute the semantic segmentation method for space life science experimental objects provided by the above embodiment.
Addressing the problems of high annotation cost and low segmentation accuracy in the semantic segmentation of space life science experimental objects, the embodiments of the invention provide the above semantic segmentation scheme and realize high-robustness, high-accuracy segmentation of space life science experimental objects. The method offers a new technical means for the intelligent analysis of large numbers of scientific experimental images and auxiliary support for scientists' further analysis and research. In addition, it can be applied to further fields such as animal and plant monitoring in agriculture, tumor-assisted diagnosis in medicine, and target recognition in the military domain, and so has broad application prospects.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partly contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A semantic segmentation method for space life science experimental objects, characterized by comprising the following steps:
determining the image class and class score of an input image with a convolutional-neural-network-based class prediction model, and taking input images whose image class contains a space life science experimental object as images to be segmented;
generating a class activation map based on the class score of the image to be segmented and the pixel gradient weight of each pixel in the last convolutional feature map of the class prediction model, and binarizing the class activation map to obtain a coarse segmentation result of the image to be segmented;
obtaining a guided backpropagation map of the image to be segmented by the guided backpropagation method;
and taking the coarse segmentation result as the initial contour and the guided backpropagation map as the base map to be fitted, performing level-set-based iterative evolution to obtain a pixel-level semantic segmentation result of the image to be segmented.
2. The method of claim 1, further comprising a step of pre-constructing the convolutional-neural-network-based class prediction model, comprising:
acquiring space life science experimental data and labeling it by class, according to whether a space life science experimental object is present, to obtain a class dataset;
training a neural-network-based classification model with the class dataset;
and taking the optimal model obtained by the neural-network-based classification model through transfer learning as the class prediction model.
3. The method of claim 2, wherein the neural-network-based classification model comprises: convolutional layers for feature extraction and fully connected layers for the classification task, wherein the convolutional layers are taken from a model pre-trained on ImageNet.
4. The method according to any one of claims 1 to 3, wherein generating the class activation map based on the class score of the image to be segmented and the pixel gradient weight of each pixel in the last convolutional feature map of the class prediction model comprises:
determining the pixel gradient weight of each pixel in the sub-feature maps of the last convolutional feature map of the class prediction model according to the class score of the image to be segmented, wherein the last convolutional feature map comprises multiple sub-feature maps;
performing a weighted summation of the pixel gradients greater than zero in each sub-feature map according to the pixel gradient weights to obtain the sub-feature-map weights;
and weighting each sub-feature map by its sub-feature-map weight to obtain the class activation map of the last convolutional feature map.
5. The method according to claim 4, wherein the pixel gradient weight $\alpha_{ij}^{k}$ of each pixel in the sub-feature maps of the last convolutional feature map of the class prediction model is determined according to the class score of the image to be segmented, calculated as:

$$\alpha_{ij}^{k} = \frac{\dfrac{\partial^{2} Y}{(\partial A_{ij}^{k})^{2}}}{2\,\dfrac{\partial^{2} Y}{(\partial A_{ij}^{k})^{2}} + \sum\limits_{a=1}^{M}\sum\limits_{b=1}^{N} A_{ab}^{k}\,\dfrac{\partial^{3} Y}{(\partial A_{ij}^{k})^{3}}}$$

where $\alpha_{ij}^{k}$ is the pixel gradient weight of the pixel with position coordinates (i, j) on the k-th sub-feature map, Y is the class score of the image to be segmented obtained by forward propagation, and A is the feature map of the last convolutional layer, with i = 1, 2, ..., M, j = 1, 2, ..., N, and k = 1, 2, ..., K; M, N, and K are the number of pixels in the length direction, the number of pixels in the width direction, and the number of channels of the feature map, respectively; $A_{ij}^{k}$ is the pixel with coordinates (i, j) on the k-th sub-feature map; $\partial^{2} Y/(\partial A_{ij}^{k})^{2}$ and $\partial^{3} Y/(\partial A_{ij}^{k})^{3}$ are the second and third partial derivatives of Y with respect to A.
6. The method of claim 5, wherein the pixel gradients greater than zero in each sub-feature map are weighted and summed according to the pixel gradient weights to obtain the sub-feature-map weight $\omega_{k}$, calculated as:

$$\omega_{k} = \sum_{i=1}^{M}\sum_{j=1}^{N} \alpha_{ij}^{k}\,\mathrm{relu}\!\left(\frac{\partial Y}{\partial A_{ij}^{k}}\right)$$

where $\omega_{k}$ is the sub-feature-map weight of the k-th sub-feature map, $\alpha_{ij}^{k}$ is the pixel gradient weight of the pixel with position coordinates (i, j) on the k-th sub-feature map, and relu is the activation function, whose role is to keep the pixels whose pixel gradient $\partial Y/\partial A_{ij}^{k}$ is greater than 0 and to set the pixel gradients of the remaining pixels to zero.
7. The method according to claim 6, wherein each sub-feature map is weighted by its sub-feature-map weight to obtain the class activation map of the last convolutional feature map, calculated as:

$$L = \sum_{k=1}^{K} \omega_{k} A^{k}$$

where L is the class activation map, $\omega_{k}$ is the sub-feature-map weight of the k-th sub-feature map, and $A^{k}$ is the k-th sub-feature map of the feature map A of the last convolutional layer.
8. A semantic segmentation device for space life science experimental objects, characterized by comprising:
a class labeling module for determining the image class and class score of an input image with a convolutional-neural-network-based class prediction model and taking input images whose image class contains a space life science experimental object as images to be segmented;
a coarse segmentation module for generating a class activation map based on the class score of the image to be segmented and the pixel gradient weight of each pixel in the last convolutional feature map of the class prediction model, and binarizing the class activation map to obtain a coarse segmentation result of the image to be segmented;
a guided backpropagation module for obtaining a guided backpropagation map of the image to be segmented by the guided backpropagation method;
and a semantic segmentation module for taking the coarse segmentation result as the initial contour and the guided backpropagation map as the base map to be fitted, and performing level-set-based iterative evolution to obtain a pixel-level semantic segmentation result of the image to be segmented.
9. A semantic segmentation device for space life science experimental objects, comprising a memory, a processor, and a computer program stored in the memory and operable on the processor, wherein the processor, when executing the program, implements the semantic segmentation method for space life science experimental objects according to any one of claims 1 to 7.
10. A computer-readable storage medium comprising instructions that, when run on a computer, cause the computer to perform the semantic segmentation method for space life science experimental objects according to any one of claims 1 to 7.
CN202210387409.8A 2022-04-13 2022-04-13 Space life science experimental object semantic segmentation method and device and storage medium Pending CN114663661A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210387409.8A CN114663661A (en) 2022-04-13 2022-04-13 Space life science experimental object semantic segmentation method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210387409.8A CN114663661A (en) 2022-04-13 2022-04-13 Space life science experimental object semantic segmentation method and device and storage medium

Publications (1)

Publication Number Publication Date
CN114663661A 2022-06-24

Family

ID=82034452

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210387409.8A Pending CN114663661A (en) 2022-04-13 2022-04-13 Space life science experimental object semantic segmentation method and device and storage medium

Country Status (1)

Country Link
CN (1) CN114663661A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200027002A1 (en) * 2018-07-20 2020-01-23 Google Llc Category learning neural networks
CN109712165A (en) * 2018-12-29 2019-05-03 安徽大学 A kind of similar foreground picture image set dividing method based on convolutional neural networks
CN112766147A (en) * 2021-01-16 2021-05-07 大连理工大学 Error action positioning method based on deep learning
CN112906867A (en) * 2021-03-03 2021-06-04 安徽省科亿信息科技有限公司 Convolutional neural network feature visualization method and system based on pixel gradient weighting

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Kristoffer Wickstrom: "Uncertainty and Interpretability in Convolutional Neural Networks for Semantic Segmentation of Colorectal Polyps", Medical Image Analysis *
霍冠英: "Side-Scan Sonar Image Target Segmentation", 30 May 2017 *
青晨 et al.: "Research Progress on Image Semantic Segmentation Based on Deep Convolutional Neural Networks", Journal of Image and Graphics *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116823807A (en) * 2023-08-02 2023-09-29 北京梦诚科技有限公司 Method and system for identifying cast-in-situ beam of bridge superstructure
CN116823807B (en) * 2023-08-02 2024-04-05 北京梦诚科技有限公司 Method and system for identifying cast-in-situ beam of bridge superstructure

Similar Documents

Publication Publication Date Title
CN109191476B (en) Novel biomedical image automatic segmentation method based on U-net network structure
CN110598029B (en) Fine-grained image classification method based on attention transfer mechanism
Li et al. Tobler’s First Law in GeoAI: A spatially explicit deep learning model for terrain feature detection under weak supervision
CN112668579A (en) Weak supervision semantic segmentation method based on self-adaptive affinity and class distribution
Liu et al. IOUC-3DSFCNN: Segmentation of brain tumors via IOU constraint 3D symmetric full convolution network with multimodal auto-context
CN112906867B (en) Convolutional neural network feature visualization method and system based on pixel gradient weighting
CN112215119A (en) Small target identification method, device and medium based on super-resolution reconstruction
CN108664986B 2020-11-20 Image classification method and system based on lp-norm regularized multi-task learning
Luo et al. OXnet: deep omni-supervised thoracic disease detection from chest X-rays
CN116522143B (en) Model training method, clustering method, equipment and medium
CN117437423A (en) Weak supervision medical image segmentation method and device based on SAM collaborative learning and cross-layer feature aggregation enhancement
Ettaouil Generalization Ability Augmentation and Regularization of Deep Convolutional Neural Networks Using l1/2 Pooling
CN115578353A (en) Multi-modal medical image segmentation method and device based on image flow distillation
Tan et al. Rapid fine-grained classification of butterflies based on FCM-KM and mask R-CNN fusion
Li et al. Robust blood cell image segmentation method based on neural ordinary differential equations
CN116229061A (en) Semantic segmentation method and system based on image generation
CN114663661A (en) Space life science experimental object semantic segmentation method and device and storage medium
CN117274754A (en) Gradient homogenization point cloud multi-task fusion method
CN117011640A (en) Model distillation real-time target detection method and device based on pseudo tag filtering
CN115690492A (en) Interpretable saliency map-based weak supervised learning method
Pei et al. FGO-Net: Feature and Gaussian Optimization Network for visual saliency prediction
CN111178174B (en) Urine formed component image identification method based on deep convolutional neural network
Patra et al. Hybrid deep CNN-LSTM network for breast histopathological image classification.
Lu et al. Weakly supervised retinal vessel segmentation algorithm without groundtruth
Weiyue et al. Facial Expression Recognition with Small Samples under Convolutional Neural Network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20220624)