CN115049836B - Image segmentation method, device, equipment and storage medium
- Publication number: CN115049836B (application CN202210978696.XA)
- Authority: CN (China)
- Prior art keywords: image, image segmentation, activation, confidence, label
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
- G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06V10/764: Image or video recognition using pattern recognition or machine learning: classification, e.g. of video objects
- G06V10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V10/82: Image or video recognition using neural networks
- G06T2207/20221: Image fusion; image merging
- G06V2201/08: Detecting or categorising vehicles
Abstract
The invention relates to the field of artificial intelligence and discloses an image segmentation method comprising the following steps: segmenting an unlabeled image set with a trained image segmentation model to obtain a first image confidence; performing a class activation operation on the first image confidence to obtain a class activation map; upsampling the class activation map to obtain a second image confidence; fusing the first image confidence and the second image confidence to obtain pseudo labels for the unlabeled image set; training a preset target image segmentation model with a training data set and final labels to obtain a trained target image segmentation model; and segmenting an image to be segmented with the trained target image segmentation model to obtain an image segmentation result. The invention also relates to blockchain technology, and the image to be segmented can be stored in a blockchain node. The invention further provides an image segmentation apparatus, a device, and a storage medium. The invention can improve the accuracy of image segmentation.
Description
Technical Field
The present invention relates to the field of artificial intelligence, and in particular, to an image segmentation method, apparatus, device, and storage medium.
Background
Currently, with the rapid development of deep learning, image segmentation technology based on deep learning is widely applied in fields such as insurance and finance thanks to its fast and accurate segmentation capability. Most existing image segmentation algorithms train a fully supervised learning model on a manually labeled image set, and the prediction labels obtained by that model on unlabeled images are used directly as pseudo labels for those images. However, the predictions of a fully supervised model are not completely accurate, and inaccurate pseudo labels degrade the segmentation results, so the accuracy of image segmentation is low.
Disclosure of Invention
The invention provides an image segmentation method, apparatus, device, and storage medium, aiming to improve the accuracy of image segmentation.
In order to achieve the above object, the present invention provides an image segmentation method, comprising:
acquiring an unlabeled image set, a labeled image set, and real image labels corresponding to the labeled image set, and segmenting the unlabeled image set with a trained image segmentation model to obtain a first image confidence;
performing a class activation operation on the first image confidence to obtain a class activation map of the unlabeled image set;
upsampling the class activation map to obtain a second image confidence;
fusing the first image confidence and the second image confidence to obtain pseudo labels for the unlabeled image set;
integrating the real image labels and the pseudo labels to obtain final labels, taking the unlabeled image set and the labeled image set as a training data set, and training a preset target image segmentation model with the training data set and the final labels to obtain a trained target image segmentation model;
and acquiring an image to be segmented, and segmenting the image to be segmented with the trained target image segmentation model to obtain an image segmentation result of the image to be segmented.
Optionally, performing a class activation operation on the first image confidence to obtain a class activation map of the unlabeled image set includes:
acquiring a convolution feature map of the unlabeled image set, and zeroing the activation values of the convolution feature map to obtain an activation image confidence of the unlabeled image set;
calculating the set of differences between the first image confidence and the activation image confidence, screening out the positive differences greater than zero, and dividing each positive difference by the number of pixel points corresponding to the class activation confidence to obtain fusion weights;
and weighting the convolution feature map with the fusion weights, and inputting the weighted feature map into a preset activation function to obtain the class activation map.
Optionally, fusing the first image confidence and the second image confidence to obtain pseudo labels for the unlabeled image set includes:
acquiring a first weight coefficient of the first image confidence, and multiplying the first weight coefficient by the first image confidence to obtain a first parameter;
acquiring a second weight coefficient of the second image confidence, and multiplying the second weight coefficient by the second image confidence to obtain a second parameter;
summing the first parameter and the second parameter to obtain a fusion parameter;
judging whether the fusion parameter is smaller than a preset threshold;
when the fusion parameter is smaller than the preset threshold, screening out the label corresponding to the fusion parameter;
and when the fusion parameter is not smaller than the preset threshold, taking the label corresponding to the fusion parameter as a pseudo label.
Optionally, upsampling the class activation map to obtain a second image confidence includes:
sequentially selecting four adjacent pixel values of the class activation map, and performing linear interpolation in the horizontal and vertical directions on each corresponding pixel value of the unlabeled image with the four adjacent pixel values to obtain the second image confidence.
Optionally, training a preset target image segmentation model with the training data set and the final labels to obtain a trained target image segmentation model includes:
extracting features of the training data set with a dilated convolution layer in the preset target image segmentation model and performing feature fusion to obtain a feature-fused image set;
performing target object recognition on the feature-fused image set with a pyramid pooling layer in the preset target image segmentation model to obtain a target object recognition image set;
inputting the target object recognition image set into an activation function in the preset target image segmentation model to obtain predicted image segmentation labels of the training data set output by the activation function;
and calculating a loss value between the predicted image segmentation labels and the final labels with a loss function in the preset target image segmentation model, and adjusting the parameters of the preset target image segmentation model according to the loss value until the loss value satisfies a preset condition, thereby obtaining the trained target image segmentation model.
Optionally, segmenting the unlabeled image set with the trained image segmentation model to obtain a first image confidence includes:
performing a convolution operation on the unlabeled image set with the trained image segmentation model to obtain an image feature data set;
pooling the image feature data set to obtain a pooled data set;
and performing an activation operation on the pooled data set with an activation function to obtain the first image confidence corresponding to the pooled data in the pooled data set.
In order to solve the above problem, the present invention also provides an image segmentation apparatus comprising:
a first image segmentation module for acquiring an unlabeled image set, a labeled image set, and real image labels corresponding to the labeled image set, and segmenting the unlabeled image set with a trained image segmentation model to obtain a first image confidence;
an image class activation module for performing a class activation operation on the first image confidence to obtain a class activation map of the unlabeled image set;
an image upsampling module for upsampling the class activation map to obtain a second image confidence;
an image pseudo-label generation module for fusing the first image confidence and the second image confidence to obtain pseudo labels for the unlabeled image set;
a target image segmentation model training module for integrating the real image labels and the pseudo labels to obtain final labels, taking the unlabeled image set and the labeled image set as a training data set, and training a preset target image segmentation model with the training data set and the final labels to obtain a trained target image segmentation model;
and an image segmentation module for acquiring an image to be segmented and segmenting the image to be segmented with the trained target image segmentation model to obtain an image segmentation result of the image to be segmented.
In order to solve the above problem, the present invention also provides an electronic device, including:
a memory storing at least one computer program; and
a processor executing the computer program stored in the memory to implement the image segmentation method described above.
In order to solve the above problem, the present invention also provides a computer-readable storage medium in which at least one computer program is stored, the at least one computer program being executed by a processor in an electronic device to implement the image segmentation method described above.
In the embodiment of the invention, the unlabeled image set is first segmented with the trained image segmentation model to obtain the first image confidence, which makes it convenient to judge later which image regions of the unlabeled image set are close to the true label distribution of the unlabeled images. Next, a class activation operation is performed on the first image confidence to determine the importance of each image pixel in the unlabeled segmentation map, and the class activation map is upsampled to obtain the second image confidence, restoring the class activation map to a size consistent with the unlabeled image and further refining the per-pixel importance, which improves the accuracy of subsequent pseudo-label screening. The first image confidence and the second image confidence are then fused to obtain the pseudo labels of the unlabeled image set, so that image pixel labels with low confidence can be screened out to remove inaccurate pseudo labels while the integrity of the image information is preserved, improving the accuracy of subsequent segmentation. Finally, the target image segmentation model is trained with the real image labels, the pseudo labels, the unlabeled image set, and the labeled image set; training the model on complete and accurate image information improves its segmentation accuracy, and segmenting the image to be segmented with this model yields the image segmentation result, so the accuracy of image segmentation can be improved. Therefore, the image segmentation method, apparatus, device, and storage medium provided by the embodiment of the invention can improve the accuracy of image segmentation.
Drawings
FIG. 1 is a schematic flowchart of an image segmentation method according to an embodiment of the present invention;
FIG. 2 is a detailed flowchart of one step of the image segmentation method according to an embodiment of the present invention;
FIG. 3 is a detailed flowchart of another step of the image segmentation method according to an embodiment of the present invention;
FIG. 4 is a block diagram of an image segmentation apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the internal structure of an electronic device for implementing the image segmentation method according to an embodiment of the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention provides an image segmentation method. The execution subject of the image segmentation method includes, but is not limited to, at least one of electronic devices such as a server and a terminal that can be configured to execute the method provided by the embodiments of the present application. In other words, the image segmentation method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like.
Referring to fig. 1, which shows a schematic flowchart of an image segmentation method provided by an embodiment of the present invention, the image segmentation method includes the following steps S1 to S6:
S1, acquiring an unlabeled image set, a labeled image set, and real image labels corresponding to the labeled image set, and segmenting the unlabeled image set with a trained image segmentation model to obtain a first image confidence.
In the embodiment of the invention, the unlabeled image set is a set of segmentation images without labels; the labeled image set is a set of segmentation images with labels; the real image labels are the true segmentation results obtained by manually annotating the labeled image set; and the first image confidence is the probability that a target object exists in the unlabeled image set. For example, in a vehicle insurance scene, it may be the probability of identifying scratches and dents on a target vehicle.
In the embodiment of the invention, the trained image segmentation model is a model trained on the labeled image set; it performs image segmentation on the unlabeled image set through a deep learning algorithm, thereby identifying the first image confidence of the target object in the unlabeled image set. The trained image segmentation model comprises convolution layers, pooling layers, and an activation function.
Segmenting the unlabeled image set with the trained image segmentation model to obtain the first image confidence makes it convenient to judge later which image regions of the unlabeled image set are close to the true label distribution of the unlabeled images, which improves the accuracy of subsequent image segmentation.
As an embodiment of the present invention, segmenting the unlabeled image set with the trained image segmentation model to obtain the first image confidence includes:
performing a convolution operation on the unlabeled image set with the trained image segmentation model to obtain an image feature data set; pooling the image feature data set to obtain a pooled data set; and performing an activation operation on the pooled data set with an activation function to obtain the first image confidence corresponding to the pooled data in the pooled data set.
The convolution operation extracts the image feature data of the unlabeled image set. The pooling operation reduces the dimensionality of the image feature data set, so that the amount of computation is reduced while the key information of the images is kept, which improves segmentation efficiency. The activation function may be a Sigmoid function, which activates the pooled data to obtain the first image confidence of the unlabeled image set output by the activation function.
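The inference path just described can be sketched as follows. This is a minimal illustration assuming a PyTorch-style model; the layer sizes and the class name `ConfidenceNet` are illustrative choices, not details from the patent.

```python
# Minimal sketch of the inference path: convolution -> pooling -> Sigmoid.
import torch
import torch.nn as nn

class ConfidenceNet(nn.Module):
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 16, kernel_size=3, padding=1)  # feature extraction
        self.pool = nn.MaxPool2d(kernel_size=2)                           # dimensionality reduction
        self.head = nn.Conv2d(16, 1, kernel_size=1)                       # per-pixel score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.pool(torch.relu(self.conv(x)))
        return torch.sigmoid(self.head(feats))  # first image confidence in [0, 1]

model = ConfidenceNet()
unlabeled_batch = torch.rand(4, 3, 128, 128)   # stand-in for the unlabeled image set
first_confidence = model(unlabeled_batch)      # shape (4, 1, 64, 64)
```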
S2, performing a class activation operation on the first image confidence to obtain a class activation map of the unlabeled image set.
In the embodiment of the present invention, the class activation map is a binary map whose size is consistent with that of the unlabeled segmentation image. The class activation operation may be handled by Ablation-CAM, an ablation-based class activation visualization technique.
Furthermore, performing the class activation operation on the first image confidence to obtain the class activation map of the unlabeled image set determines the importance of each image pixel in the unlabeled segmentation map, which facilitates improving the accuracy of subsequent image segmentation.
As an embodiment of the present invention, referring to fig. 2, performing a class activation operation on the first image confidence to obtain a class activation map of the unlabeled image set includes the following steps S21 to S23:
S21, acquiring a convolution feature map of the unlabeled image set, and zeroing the activation values of the convolution feature map to obtain an activation image confidence of the unlabeled image set;
S22, calculating the set of differences between the first image confidence and the activation image confidence, screening out the positive differences greater than zero, and dividing each positive difference by the number of pixel points corresponding to the class activation confidence to obtain fusion weights;
S23, weighting the convolution feature map with the fusion weights, and inputting the weighted feature map into a preset activation function to obtain the class activation map.
The convolution feature map can be obtained by extracting the feature map corresponding to the last convolution layer from the image feature data set produced by the convolution of the image segmentation model. Zeroing the activation values of the convolution feature map, i.e., setting the activation values of a convolution channel to 0, avoids distortion of the unlabeled segmentation image.
In an embodiment of the present invention, the fusion weight is used to determine the importance of each image position in the unlabeled segmentation map, and can be computed by the following formula:

$$w_k^i = \frac{\left(M_{pred} - M_k^i\right)^{+}}{N(P_i)}$$

where $w_k^i$ represents the fusion weight; $M_{pred}$ represents the first image confidence; $M_k^i$ represents the class activation confidence of the $i$-th image pixel in the $k$-th convolution channel; $\left(M_{pred} - M_k^i\right)^{+}$ represents the positive difference between the first image confidence and the activation image confidence; and $N(P_i)$ represents the number of pixel points $P_i$ corresponding to the class activation confidence, with $P_i$ the $i$-th image pixel value.
Further, the class activation map may be computed by the following formula:

$$L_i = \mathrm{ReLU}\left(\sum_k w_k^i A_k^i\right)$$

where $L_i$ represents the class activation map; $\mathrm{ReLU}$ denotes the activation function; $k$ indexes the convolution channels; $w_k^i$ represents the fusion weight of the $i$-th image pixel in the $k$-th convolution channel; and $A_k^i$ represents the feature map of the $i$-th image pixel in the $k$-th channel.
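Read together, the two formulas amount to an Ablation-CAM-style computation. The sketch below shows that reading in NumPy, assuming `m_k` holds the per-channel activation confidences obtained after zeroing each convolution channel and that the denominator is the pixel count, as described above; the function name and shapes are illustrative.

```python
# Sketch of the fusion-weight and class-activation-map formulas above.
import numpy as np

def class_activation_map(m_pred: float, m_k: np.ndarray, feats: np.ndarray) -> np.ndarray:
    """m_k: (K, H, W) activation confidences; feats: (K, H, W) feature maps."""
    positive_diff = np.maximum(m_pred - m_k, 0.0)   # keep only positive differences
    n_pixels = m_k.shape[1] * m_k.shape[2]          # number of pixel points
    weights = positive_diff / n_pixels              # per-pixel, per-channel fusion weight
    weighted = (weights * feats).sum(axis=0)        # weight feature maps, sum over channels k
    return np.maximum(weighted, 0.0)                # ReLU activation

cam = class_activation_map(0.9, np.random.rand(8, 16, 16), np.random.rand(8, 16, 16))
```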
S3, upsampling the class activation map to obtain a second image confidence.
In the embodiment of the present invention, the second image confidence is the probability that the target object exists in the class activation map. Further, since the class activation map is obtained from a convolution feature map, whose size is usually smaller than that of the original image, the image needs to be restored to its original size for subsequent segmentation; the process of restoring an image to its original size is called upsampling.
Upsampling the class activation map to obtain the second image confidence restores the class activation map to a size consistent with the unlabeled image and further determines the importance of each image pixel, which improves the accuracy of subsequent pseudo-label screening.
As an embodiment of the present invention, upsampling the class activation map to obtain the second image confidence includes:
sequentially selecting four adjacent pixel values of the class activation map, and performing linear interpolation in the horizontal and vertical directions on each corresponding pixel value of the unlabeled image with the four adjacent pixel values to obtain the second image confidence.
That is, linear interpolation is performed twice in the horizontal direction with the four adjacent pixel values, and the horizontal interpolation results are substituted into a linear interpolation in the vertical direction. This yields the weights corresponding to the four adjacent pixel points; multiplying each of the four adjacent pixels by its weight and summing the products gives the confidence corresponding to that pixel value.
Specifically, let one pixel of the unlabeled image set be $f(x, y)$, and let the four adjacent pixel positions in the class activation map be $Q_{11} = (x_1, y_1)$, $Q_{12} = (x_1, y_2)$, $Q_{21} = (x_2, y_1)$, and $Q_{22} = (x_2, y_2)$, with pixel values $f(Q_{11})$, $f(Q_{12})$, $f(Q_{21})$, and $f(Q_{22})$.
In an embodiment of the present invention, linear interpolation in the horizontal direction gives:

$$f(x, y_1) \approx \frac{x_2 - x}{x_2 - x_1} f(Q_{11}) + \frac{x - x_1}{x_2 - x_1} f(Q_{21}), \qquad f(x, y_2) \approx \frac{x_2 - x}{x_2 - x_1} f(Q_{12}) + \frac{x - x_1}{x_2 - x_1} f(Q_{22})$$

and linear interpolation in the vertical direction gives:

$$f(x, y) \approx \frac{y_2 - y}{y_2 - y_1} f(x, y_1) + \frac{y - y_1}{y_2 - y_1} f(x, y_2)$$

Since $x_2 - x_1 = 1$ and $y_2 - y_1 = 1$, the pixel value $f(x, y)$ can be expressed as:

$$f(x, y) = (x_2 - x)(y_2 - y)\, f(Q_{11}) + (x - x_1)(y_2 - y)\, f(Q_{21}) + (x_2 - x)(y - y_1)\, f(Q_{12}) + (x - x_1)(y - y_1)\, f(Q_{22})$$

where $(x_2 - x)(y_2 - y)$ is the weight of $f(Q_{11})$, $(x - x_1)(y_2 - y)$ the weight of $f(Q_{21})$, $(x_2 - x)(y - y_1)$ the weight of $f(Q_{12})$, and $(x - x_1)(y - y_1)$ the weight of $f(Q_{22})$.
Here $f(x, y)$ denotes any pixel point of the unlabeled image set, with $x$ its abscissa and $y$ its ordinate. $Q_{11}$ denotes the pixel point coinciding with the upper-left image pixel position in the class activation map, and its selection can be shifted step by step horizontally or vertically until it coincides with the lower-right image pixel position. $Q_{21}$ is the pixel point horizontally adjacent to $Q_{11}$, $Q_{12}$ is the pixel point vertically adjacent to $Q_{11}$, and $Q_{22}$ is the pixel point adjacent to both $Q_{21}$ and $Q_{12}$.
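A minimal sketch of the final weighted form follows, assuming unit spacing between the four neighbours as stated above; the function name is illustrative.

```python
# Bilinear interpolation at one point, following the formula above
# with x2 - x1 = 1 and y2 - y1 = 1.
def bilinear(f_q11, f_q21, f_q12, f_q22, x, y, x1, y1):
    x2, y2 = x1 + 1, y1 + 1
    return (f_q11 * (x2 - x) * (y2 - y)    # weight of Q11
          + f_q21 * (x - x1) * (y2 - y)    # weight of Q21
          + f_q12 * (x2 - x) * (y - y1)    # weight of Q12
          + f_q22 * (x - x1) * (y - y1))   # weight of Q22
```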
S4, fusing the first image confidence and the second image confidence to obtain pseudo labels for the unlabeled image set.
In the embodiment of the invention, a pseudo label is a segmentation image label corresponding to the unlabeled image set.
Fusing the first image confidence and the second image confidence to obtain the pseudo labels of the unlabeled image set makes it possible to screen out image pixel labels with low confidence and thereby remove inaccurate pseudo labels, while the pseudo label corresponding to each image pixel's confidence is finally determined from the fused confidence. Useful information in the image is thus not lost while low-confidence pixels are screened out, which preserves the integrity of the image information and improves the accuracy of subsequent image segmentation.
As an embodiment of the present invention, referring to fig. 3, fusing the first image confidence and the second image confidence to obtain the pseudo labels of the unlabeled image set includes the following steps S41 to S46:
S41, acquiring a first weight coefficient of the first image confidence, and multiplying the first weight coefficient by the first image confidence to obtain a first parameter;
S42, acquiring a second weight coefficient of the second image confidence, and multiplying the second weight coefficient by the second image confidence to obtain a second parameter;
S43, summing the first parameter and the second parameter to obtain a fusion parameter;
S44, judging whether the fusion parameter is smaller than a preset threshold;
S45, when the fusion parameter is smaller than the preset threshold, screening out the label corresponding to the fusion parameter;
S46, when the fusion parameter is not smaller than the preset threshold, taking the label corresponding to the fusion parameter as a pseudo label.
The first weight coefficient represents the importance of the first image confidence; the second weight coefficient represents the importance of the second image confidence; and the first and second weight coefficients sum to 1.
In an embodiment of the present invention, the preset threshold may be customized for the specific scene. For example, in a vehicle-picture recognition scene in the vehicle insurance field, the preset threshold may be 0.7: if the fusion parameter exceeds 0.7, a vehicle is identified in the unsegmented image set; if the fusion parameter does not exceed 0.7, no vehicle information is identified in the unsegmented image set.
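Steps S41 to S46 can be sketched as follows, assuming per-pixel confidence maps; the weight w1 = 0.6 is an illustrative choice (the patent fixes only that the two weight coefficients sum to 1, and gives 0.7 as one example threshold).

```python
# Sketch of the confidence-fusion and thresholding steps S41-S46.
import numpy as np

def fuse_pseudo_labels(conf1, conf2, labels, w1=0.6, threshold=0.7):
    w2 = 1.0 - w1                          # first and second weight coefficients sum to 1
    fused = w1 * conf1 + w2 * conf2        # fusion parameter per pixel
    keep = fused >= threshold              # labels below the threshold are screened out
    return np.where(keep, labels, -1)      # -1 marks discarded (low-confidence) pixels

pseudo = fuse_pseudo_labels(np.random.rand(64, 64), np.random.rand(64, 64),
                            np.ones((64, 64), dtype=int))
```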
S5, integrating the real image labels and the pseudo labels to obtain final labels, taking the unlabeled image set and the labeled image set as a training data set, and training a preset target image segmentation model with the training data set and the final labels to obtain a trained target image segmentation model.
In the embodiment of the present invention, the preset target image segmentation model may be a deep learning model such as a CNN or an RNN.
In an embodiment of the present invention, the pseudo labels among the final labels are the labels corresponding to the unlabeled image set. Since the image pixel labels with poor accuracy have been removed, the accuracy of the pseudo labels, and hence of the final labels, is ensured, which improves the accuracy of the model when the preset target image segmentation model is trained with the training data set and the final labels.
Integrating the real image labels and the pseudo labels into the final labels, taking the unlabeled image set and the labeled image set as the training data set, and training the preset target image segmentation model with them realizes semi-supervised training of the image segmentation model, which improves training efficiency; training the target image segmentation model on complete and accurate image information improves the model's accuracy in image segmentation.
As an embodiment of the present invention, training a preset target image segmentation model with the training data set and the final labels to obtain a trained target image segmentation model includes:
extracting features of the training data set with a dilated convolution layer in the preset target image segmentation model and performing feature fusion to obtain a feature-fused image set; performing target object recognition on the feature-fused image set with a pyramid pooling layer in the preset target image segmentation model to obtain a target object recognition image set; inputting the target object recognition image set into an activation function in the preset target image segmentation model to obtain predicted image segmentation labels of the training data set output by the activation function; and calculating a loss value between the predicted image segmentation labels and the final labels with a loss function in the preset target image segmentation model, and adjusting the parameters of the preset target image segmentation model according to the loss value until the loss value satisfies a preset condition, thereby obtaining the trained target image segmentation model.
Compared with an ordinary convolution layer, the dilated convolution layer has holes and a wider receptive field, so that when features are extracted, more global image information can be captured through feature fusion while the image is downscaled by the same factor. Further, the pyramid pooling layer can replace an ordinary pooling layer and outputs segmentation images of a uniform size regardless of the size of the input image. Further, in this embodiment of the present invention, the activation function may be a Sigmoid function, which activates the target object recognition image set to obtain the predicted image segmentation labels of the training data set output by the activation function.
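A rough sketch of such a head with parallel dilated convolutions and pyramid pooling is shown below; the channel counts, dilation rates, and pooling scales are assumptions for illustration, not values given in the patent.

```python
# Sketch of a segmentation head with dilated (atrous) convolutions and pyramid pooling.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedPyramidHead(nn.Module):
    def __init__(self, in_ch=3, mid_ch=32, scales=(1, 2, 4)):
        super().__init__()
        # parallel dilated convolutions widen the receptive field without extra downscaling
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, mid_ch, 3, padding=d, dilation=d) for d in (1, 2, 4))
        self.scales = scales
        self.classify = nn.Conv2d(mid_ch * (1 + len(scales)), 1, 1)

    def forward(self, x):
        fused = sum(F.relu(b(x)) for b in self.branches)   # feature fusion
        h, w = fused.shape[-2:]
        pooled = [F.interpolate(F.adaptive_avg_pool2d(fused, s), size=(h, w),
                                mode='bilinear', align_corners=False)
                  for s in self.scales]                    # pyramid pooling at several scales
        out = torch.cat([fused] + pooled, dim=1)
        return torch.sigmoid(self.classify(out))           # predicted segmentation labels

pred = DilatedPyramidHead()(torch.rand(2, 3, 64, 64))      # shape (2, 1, 64, 64)
```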
In the embodiment of the present invention, the preset condition may be set according to the actual model training scenario; for example, the preset condition may be that the loss value is smaller than a preset threshold.
In detail, calculating the loss value between the predicted image segmentation labels and the final labels with the loss function in the preset target image segmentation model includes:
calculating the loss value of the predicted image segmentation labels and the final labels with the following loss function:

$$L(s) = -\frac{1}{k} \sum_{j=1}^{k} \hat{y}_j \log y_j$$

where $L(s)$ represents the loss value; $k$ represents the number of predicted image segmentation labels; $j$ is the index running over the predicted image segmentation labels; $y_j$ represents the $j$-th predicted image segmentation label; and $\hat{y}_j$ represents the $j$-th final label.
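A small numeric check of the loss as reconstructed above, treating it as a cross-entropy averaged over the k labels; the clipping constant is an implementation detail added here, not part of the patent.

```python
# Numeric check of the loss formula above.
import numpy as np

def segmentation_loss(y_pred: np.ndarray, y_final: np.ndarray, eps: float = 1e-7) -> float:
    y_pred = np.clip(y_pred, eps, 1.0 - eps)     # avoid log(0)
    return float(-np.mean(y_final * np.log(y_pred)))

loss = segmentation_loss(np.array([0.9, 0.2, 0.8]), np.array([1.0, 0.0, 1.0]))
```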
S6, acquiring an image to be segmented, and segmenting the image to be segmented with the trained target image segmentation model to obtain an image segmentation result of the image to be segmented.
In the embodiment of the invention, the image to be segmented may be an image of a damaged vehicle involved in an insurance claim, and may be acquired from the database of a client platform.
In the embodiment of the invention, the trained target image segmentation model can segment the vehicle information in the vehicle damage image, such as scratches and impact positions, to obtain a detailed image of the vehicle damage.
In the embodiment of the invention, the unlabeled image set is first segmented with the trained image segmentation model to obtain the first image confidence, which makes it convenient to judge later which image regions of the unlabeled image set are close to the true label distribution of the unlabeled images. Next, a class activation operation is performed on the first image confidence to determine the importance of each image pixel in the unlabeled segmentation map, and the class activation map is upsampled to obtain the second image confidence, restoring the class activation map to a size consistent with the unlabeled image and further refining the per-pixel importance, which improves the accuracy of subsequent pseudo-label screening. The first image confidence and the second image confidence are then fused to obtain the pseudo labels of the unlabeled image set, so that image pixel labels with low confidence can be screened out to remove inaccurate pseudo labels while the integrity of the image information is preserved, improving the accuracy of subsequent segmentation. Finally, the target image segmentation model is trained with the real image labels, the pseudo labels, the unlabeled image set, and the labeled image set; training the model on complete and accurate image information improves its segmentation accuracy, and segmenting the image to be segmented with this model yields the image segmentation result. Therefore, the image segmentation method provided by the embodiment of the invention can improve the accuracy of image segmentation.
The image segmentation apparatus 100 according to the present invention may be installed in an electronic device. According to the implemented functions, the image segmentation apparatus may include a first image segmentation module 101, an image class activation module 102, an image upsampling module 103, an image pseudo-label generation module 104, a target image segmentation model training module 105, and an image segmentation module 106. The modules, which may also be referred to as units, are a series of computer program segments that can be executed by the processor of the electronic device to perform fixed functions and that are stored in the memory of the electronic device.
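Purely as an illustration of how the six modules compose, the hypothetical wiring below mirrors the module list; none of these class or method names come from the patent.

```python
# Illustrative wiring of the six modules into one apparatus.
class ImageSegmentationApparatus:
    def __init__(self, seg, cam, upsample, pseudo, trainer, final_seg):
        self.first_image_segmentation = seg        # module 101
        self.image_class_activation = cam          # module 102
        self.image_upsampling = upsample           # module 103
        self.pseudo_label_generation = pseudo      # module 104
        self.target_model_training = trainer       # module 105
        self.image_segmentation = final_seg        # module 106

    def run(self, unlabeled, labeled, real_labels, image):
        conf1 = self.first_image_segmentation(unlabeled)
        cam = self.image_class_activation(conf1)
        conf2 = self.image_upsampling(cam)
        pseudo = self.pseudo_label_generation(conf1, conf2)
        model = self.target_model_training(unlabeled, labeled, real_labels, pseudo)
        return self.image_segmentation(model, image)
```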
In the present embodiment, the functions of the respective modules/units are as follows:
the first image segmentation module 101 is configured to obtain an unlabeled image set, an labeled image set, and a real image label corresponding to the labeled image set, and segment the unlabeled image set by using a trained image segmentation model to obtain a first image confidence.
In the embodiment of the invention, the unlabeled image set refers to a label-free segmented image set; the labeled image set is a segmented image set with labels; the real image label is a real result of image segmentation obtained after the labeling image set is manually labeled; the first image confidence level refers to a probability that the target object exists in the unlabeled image set, for example, in a car insurance scene, the unlabeled image set may be a probability that a scratch, i.e., a concave-convex portion, of the target vehicle is identified.
In the embodiment of the invention, the trained image segmentation model is a model trained by a labeled image set, and the model mainly performs image segmentation on an unlabeled image set through a deep learning algorithm, so that a first image confidence coefficient of a target object in the unlabeled image set is identified. Wherein the trained image segmentation model comprises: convolutional layers, pooling layers, and activation functions.
According to the embodiment of the invention, the unmarked image set is segmented by utilizing the trained image segmentation model to obtain the confidence coefficient of the first image, which is convenient for subsequently judging which image areas in the unmarked image set are close to the real label distribution of the unmarked image, and the accuracy of subsequent image segmentation is improved.
As an embodiment of the present invention, the first image segmentation module 101 segments the unlabeled image set by using a trained image segmentation model by performing the following operations to obtain a first image confidence level, including:
performing convolution operation on the unmarked image set by using the trained image segmentation model to obtain an image characteristic data set; performing pooling operation on the image characteristic data set to obtain a pooled data set; and performing activation operation on the pooled data set by using an activation function to obtain a first image confidence corresponding to the pooled data in the pooled data set.
The convolution operation can extract image characteristic data in the unmarked image set; the pooling operation can perform dimension reduction operation on the image feature data set, so that the image segmentation efficiency can be improved by reducing the calculated amount while maintaining the key information of the image; the activation function may be a Sigmoid function, and may activate the unlabeled image set to obtain a first image confidence of the unlabeled image set output by the activation function.
The image class activation module 102 is configured to perform a class activation operation on the first image confidence to obtain a class activation map of the unlabeled image set.
In the embodiment of the present invention, the class activation map is a binary map whose size is consistent with that of the unlabeled segmentation image. The class activation operation may be handled by Ablation-CAM, an ablation-based class activation visualization technique.
Furthermore, performing the class activation operation on the first image confidence to obtain the class activation map of the unlabeled image set determines the importance of each image pixel in the unlabeled segmentation map, which facilitates improving the accuracy of subsequent image segmentation.
As an embodiment of the present invention, the image class activation module 102 performs the class activation operation on the first image confidence to obtain the class activation map of the unlabeled image set by performing the following operations:
acquiring a convolution feature map of the unlabeled image set, and zeroing the activation values of the convolution feature map to obtain an activation image confidence of the unlabeled image set;
calculating the set of differences between the first image confidence and the activation image confidence, screening out the positive differences greater than zero, and dividing each positive difference by the number of pixel points corresponding to the class activation confidence to obtain fusion weights;
and weighting the convolution feature map with the fusion weights, and inputting the weighted feature map into a preset activation function to obtain the class activation map.
The convolution feature map can be obtained by extracting the feature map corresponding to the last convolution layer from the image feature data set produced by the convolution of the image segmentation model. Zeroing the activation values of the convolution feature map, i.e., setting the activation values of a convolution channel to 0, avoids distortion of the unlabeled segmentation image.
In an embodiment of the present invention, the fusion weight is used to determine the importance of each image position in the unlabeled segmentation map, and can be computed by the following formula:

$$w_k^i = \frac{\left(M_{pred} - M_k^i\right)^{+}}{N(P_i)}$$

where $w_k^i$ represents the fusion weight; $M_{pred}$ represents the first image confidence; $M_k^i$ represents the class activation confidence of the $i$-th image pixel in the $k$-th convolution channel; $\left(M_{pred} - M_k^i\right)^{+}$ represents the positive difference between the first image confidence and the activation image confidence; and $N(P_i)$ represents the number of pixel points $P_i$ corresponding to the class activation confidence, with $P_i$ the $i$-th image pixel value.
Further, the class activation map may be computed by the following formula:

$$L_i = \mathrm{ReLU}\left(\sum_k w_k^i A_k^i\right)$$

where $L_i$ represents the class activation map; $\mathrm{ReLU}$ denotes the activation function; $k$ indexes the convolution channels; $w_k^i$ represents the fusion weight of the $i$-th image pixel in the $k$-th convolution channel; and $A_k^i$ represents the feature map of the $i$-th image pixel in the $k$-th channel.
The image upsampling module 103 is configured to upsample the class activation map to obtain a second image confidence.
In the embodiment of the present invention, the second image confidence is the probability that the target object exists in the class activation map. Further, since the class activation map is obtained from a convolution feature map, whose size is usually smaller than that of the original image, the image needs to be restored to its original size for subsequent segmentation; the process of restoring an image to its original size is called upsampling.
Upsampling the class activation map to obtain the second image confidence restores the class activation map to a size consistent with the unlabeled image and further determines the importance of each image pixel, which improves the accuracy of subsequent pseudo-label screening.
As an embodiment of the present invention, the image upsampling module 103 upsamples the class activation map to obtain the second image confidence by performing the following operations:
sequentially selecting four adjacent pixel values of the class activation map, and performing linear interpolation in the horizontal and vertical directions on each corresponding pixel value of the unlabeled image with the four adjacent pixel values to obtain the second image confidence.
That is, linear interpolation is performed twice in the horizontal direction with the four adjacent pixel values, and the horizontal interpolation results are substituted into a linear interpolation in the vertical direction. This yields the weights corresponding to the four adjacent pixel points; multiplying each of the four adjacent pixels by its weight and summing the products gives the confidence corresponding to that pixel value.
Specifically, let one pixel of the unlabeled image set be $f(x, y)$, and let the four adjacent pixel positions in the class activation map be $Q_{11} = (x_1, y_1)$, $Q_{12} = (x_1, y_2)$, $Q_{21} = (x_2, y_1)$, and $Q_{22} = (x_2, y_2)$, with pixel values $f(Q_{11})$, $f(Q_{12})$, $f(Q_{21})$, and $f(Q_{22})$.
In an embodiment of the present invention, linear interpolation in the horizontal direction gives:

$$f(x, y_1) \approx \frac{x_2 - x}{x_2 - x_1} f(Q_{11}) + \frac{x - x_1}{x_2 - x_1} f(Q_{21}), \qquad f(x, y_2) \approx \frac{x_2 - x}{x_2 - x_1} f(Q_{12}) + \frac{x - x_1}{x_2 - x_1} f(Q_{22})$$

and linear interpolation in the vertical direction gives:

$$f(x, y) \approx \frac{y_2 - y}{y_2 - y_1} f(x, y_1) + \frac{y - y_1}{y_2 - y_1} f(x, y_2)$$

Since $x_2 - x_1 = 1$ and $y_2 - y_1 = 1$, the pixel value $f(x, y)$ can be expressed as:

$$f(x, y) = (x_2 - x)(y_2 - y)\, f(Q_{11}) + (x - x_1)(y_2 - y)\, f(Q_{21}) + (x_2 - x)(y - y_1)\, f(Q_{12}) + (x - x_1)(y - y_1)\, f(Q_{22})$$

where $(x_2 - x)(y_2 - y)$ is the weight of $f(Q_{11})$, $(x - x_1)(y_2 - y)$ the weight of $f(Q_{21})$, $(x_2 - x)(y - y_1)$ the weight of $f(Q_{12})$, and $(x - x_1)(y - y_1)$ the weight of $f(Q_{22})$.
Here $f(x, y)$ denotes any pixel point of the unlabeled image set, with $x$ its abscissa and $y$ its ordinate. $Q_{11}$ denotes the pixel point coinciding with the upper-left image pixel position in the class activation map, and its selection can be shifted step by step horizontally or vertically until it coincides with the lower-right image pixel position. $Q_{21}$ is the pixel point horizontally adjacent to $Q_{11}$, $Q_{12}$ is the pixel point vertically adjacent to $Q_{11}$, and $Q_{22}$ is the pixel point adjacent to both $Q_{21}$ and $Q_{12}$.
The image pseudo-label generation module 104 is configured to fuse the first image confidence and the second image confidence to obtain pseudo labels for the unlabeled image set.
In the embodiment of the invention, a pseudo label is a segmentation image label corresponding to the unlabeled image set.
Fusing the first image confidence and the second image confidence to obtain the pseudo labels of the unlabeled image set makes it possible to screen out image pixel labels with low confidence and thereby remove inaccurate pseudo labels, while the pseudo label corresponding to each image pixel's confidence is finally determined from the fused confidence. Useful information in the image is thus not lost while low-confidence pixels are screened out, which preserves the integrity of the image information and improves the accuracy of subsequent image segmentation.
As an embodiment of the present invention, the image pseudo-label generation module 104 fuses the first image confidence and the second image confidence to obtain the pseudo labels of the unlabeled image set by performing the following operations:
acquiring a first weight coefficient of the first image confidence, and multiplying the first weight coefficient by the first image confidence to obtain a first parameter;
acquiring a second weight coefficient of the second image confidence, and multiplying the second weight coefficient by the second image confidence to obtain a second parameter;
summing the first parameter and the second parameter to obtain a fusion parameter;
judging whether the fusion parameter is smaller than a preset threshold;
when the fusion parameter is smaller than the preset threshold, screening out the label corresponding to the fusion parameter;
and when the fusion parameter is not smaller than the preset threshold, taking the label corresponding to the fusion parameter as a pseudo label.
The first weight coefficient represents the importance of the first image confidence; the second weight coefficient represents the importance of the second image confidence; and the first and second weight coefficients sum to 1.
In an embodiment of the present invention, the preset threshold may be customized for the specific scene. For example, in a vehicle-picture recognition scene in the vehicle insurance field, the preset threshold may be 0.7: if the fusion parameter exceeds 0.7, a vehicle is identified in the unsegmented image set; if the fusion parameter does not exceed 0.7, no vehicle information is identified in the unsegmented image set.
The target image segmentation model training module 105 is configured to integrate the real image labels and the pseudo labels to obtain final labels, take the unlabeled image set and the labeled image set as a training data set, and train a preset target image segmentation model with the training data set and the final labels to obtain a trained target image segmentation model.
In the embodiment of the present invention, the preset target image segmentation model may be a deep learning model such as a CNN or an RNN.
In an embodiment of the present invention, the pseudo labels among the final labels are the labels corresponding to the unlabeled image set. Since the image pixel labels with poor accuracy have been removed, the accuracy of the pseudo labels, and hence of the final labels, is ensured, which improves the accuracy of the model when the preset target image segmentation model is trained with the training data set and the final labels.
Integrating the real image labels and the pseudo labels into the final labels, taking the unlabeled image set and the labeled image set as the training data set, and training the preset target image segmentation model with them realizes semi-supervised training of the image segmentation model, which improves training efficiency; training the target image segmentation model on complete and accurate image information improves the model's accuracy in image segmentation.
As an embodiment of the present invention, the target image segmentation model training module 105 performs the following operations to train a preset target image segmentation model by using the training data set and the final label, so as to obtain a trained target image segmentation model, including:
extracting the characteristics of the training data set by using a cavity convolution layer in a preset target image segmentation model and carrying out characteristic fusion to obtain a characteristic fusion image set; performing target object identification on the feature fusion image set by using a pyramid pooling layer in the preset target image segmentation model to obtain a target object identification image set; inputting the target object identification image set into an activation function in the preset target image segmentation model to obtain a predicted image segmentation label of the training data set output by the activation function; calculating loss values of a predicted image segmentation label and the final label by using a loss function in the preset target image segmentation model, and adjusting parameters of the preset target image segmentation model according to the loss values until the loss values meet preset conditions to obtain the trained target image segmentation model.
Compared with an ordinary convolution layer, the hole (dilated) convolution layer has a wider receptive field, so that when features are extracted, more global image information can be captured through feature fusion even when the images are reduced by the same factor. Further, the pyramid pooling layer may replace an ordinary pooling layer and outputs segmented images of uniform size regardless of the size of the input image. Further, in this embodiment of the present invention, the activation function may be a Sigmoid function, which activates the target object identification image set to obtain the predicted image segmentation labels of the training data set output by the activation function.
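As an illustration of how these pieces fit together, below is a heavily simplified PyTorch sketch; the class name, channel widths, dilation rates and pooling scales are assumptions for exposition, not the patented configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedPyramidSegNet(nn.Module):
    """Sketch: dilated (hole) convolutions for feature extraction and fusion,
    a pyramid pooling head for target object identification, and a Sigmoid
    activation producing per-pixel segmentation labels."""

    def __init__(self, in_ch=3, mid_ch=64, pool_scales=(1, 2, 3, 6)):
        super().__init__()
        # Dilated convolutions widen the receptive field without extra downsampling.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, mid_ch, 3, padding=d, dilation=d) for d in (1, 2, 4)
        )
        self.fuse = nn.Conv2d(3 * mid_ch, mid_ch, 1)           # feature fusion
        self.pool_scales = pool_scales
        self.pool_proj = nn.ModuleList(
            nn.Conv2d(mid_ch, mid_ch // len(pool_scales), 1) for _ in pool_scales
        )
        self.head = nn.Conv2d(2 * mid_ch, 1, 1)                # 1-channel label map

    def forward(self, x):
        h, w = x.shape[2:]
        feats = torch.cat([F.relu(b(x)) for b in self.branches], dim=1)
        feats = F.relu(self.fuse(feats))                       # feature fusion image
        # Pyramid pooling: pool at several scales, project, resize back, concat.
        pooled = [feats]
        for scale, proj in zip(self.pool_scales, self.pool_proj):
            p = F.adaptive_avg_pool2d(feats, scale)
            p = F.interpolate(proj(p), size=feats.shape[2:],
                              mode='bilinear', align_corners=False)
            pooled.append(p)
        logits = self.head(torch.cat(pooled, dim=1))
        logits = F.interpolate(logits, size=(h, w),
                               mode='bilinear', align_corners=False)
        return torch.sigmoid(logits)           # predicted segmentation label map
```

Regardless of the input size, the final interpolation back to (h, w) aligns the output with the input, mirroring the uniform-size property attributed to the pyramid pooling layer above.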
In the embodiment of the present invention, the preset condition may be set according to the actual model training scene, for example, the preset condition may be that the loss value is smaller than a preset threshold.
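A minimal sketch of this loss-driven parameter adjustment, assuming the preset condition is a loss threshold and using an Adam optimizer (both illustrative choices; `loss_fn` stands for the loss function discussed below):

```python
import torch

def train_until_condition(model, images, final_labels, loss_fn,
                          loss_threshold=0.05, lr=1e-3, max_steps=1000):
    """Adjust model parameters according to the loss value until the preset
    condition (here assumed: loss below a threshold) is met."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(max_steps):
        pred = model(images)                   # predicted image segmentation labels
        loss = loss_fn(pred, final_labels)     # loss against the final labels
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if loss.item() < loss_threshold:       # preset condition met
            break
    return model
```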
In detail, calculating the loss value between the predicted image segmentation labels and the final labels by using the loss function in the preset target image segmentation model comprises:
calculating the loss value between the predicted image segmentation labels and the final labels using a loss function of the following cross-entropy form:

$$L(s) = -\frac{1}{K}\sum_{j=1}^{K}\left[\hat{y}_j \log y_j + \left(1-\hat{y}_j\right)\log\left(1-y_j\right)\right]$$

wherein $L(s)$ represents the loss value; $K$ represents the number of predicted image segmentation labels; $j$ is the index running over the predicted image segmentation labels; $y_j$ represents the $j$-th predicted image segmentation label; and $\hat{y}_j$ represents the $j$-th final label.
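A minimal sketch of this cross-entropy computation, assuming the predicted labels are stored as a tensor of Sigmoid probabilities and the final labels as a {0, 1} tensor of the same shape:

```python
import torch

def segmentation_loss(pred, target, eps=1e-7):
    """Cross-entropy L(s) averaged over the K predicted image segmentation
    labels; `pred` holds y_j in (0, 1), `target` holds the final labels."""
    pred = pred.clamp(eps, 1 - eps)            # avoid log(0)
    return -(target * torch.log(pred)
             + (1 - target) * torch.log(1 - pred)).mean()
```

This function can serve as the `loss_fn` in the training sketch above.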
The image segmentation module 106 is configured to obtain an image to be segmented, and segment the image to be segmented by using the trained target image segmentation model to obtain an image segmentation result of the image to be segmented.
In the embodiment of the invention, the image to be segmented may be an image of a damaged vehicle involved in a vehicle insurance claim, and may be acquired from a database of a client platform.
In the embodiment of the invention, the trained target image segmentation model can be used to segment the vehicle information in the vehicle damage image, such as vehicle scratches and impact positions, to obtain a detailed vehicle damage image.
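A short sketch of this inference step; the function name and the 0.5 mask threshold are illustrative assumptions:

```python
import torch

@torch.no_grad()
def segment_image(model, image_tensor, threshold=0.5):
    """Run the trained target image segmentation model on an image to be
    segmented and return a binary mask as the image segmentation result."""
    model.eval()
    probs = model(image_tensor.unsqueeze(0))   # add a batch dimension
    return (probs.squeeze(0) > threshold).to(torch.uint8)
```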
In the embodiment of the invention, firstly, the unlabeled image set is segmented by the trained image segmentation model to obtain the first image confidence coefficient, which makes it convenient to subsequently judge which image areas in the unlabeled image set are close to the true label distribution of the unlabeled images. Secondly, a class activation operation is performed on the first image confidence coefficient to determine the importance of each image pixel in the unlabeled segmented images; the class activation map is then up-sampled to obtain the second image confidence coefficient, which restores the class activation map to the same size as the unlabeled images and further determines the importance of each image pixel, improving the accuracy of subsequent pseudo-label screening. Then, the first image confidence coefficient and the second image confidence coefficient are fused to obtain the pseudo labels of the unlabeled image set, so that image pixel labels with low confidence can be screened out to remove pseudo labels with poor accuracy; screening out low-confidence pixels while preserving the integrity of the image information improves the accuracy of subsequent image segmentation. Finally, the target image segmentation model is trained with the real image labels, the pseudo labels, the unlabeled image set and the labeled image set; training with complete and accurate image information improves the accuracy of the model in image segmentation, and the model is then used to segment the image to be segmented to obtain the image segmentation result. Therefore, the image segmentation device provided by the embodiment of the invention can improve the accuracy of image segmentation.
Fig. 5 is a schematic structural diagram of an electronic device for implementing the image segmentation method according to the present invention.
The electronic device may comprise a processor 10, a memory 11, a communication bus 12 and a communication interface 13, and may further comprise a computer program stored in the memory 11 and executable on the processor 10, such as an image segmentation program.
The memory 11 includes at least one type of readable storage medium, including flash memory, removable hard disk, multimedia card, card-type memory (e.g., SD or DX memory), magnetic memory, local disk, optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device, for example a removable hard disk of the electronic device. In other embodiments, the memory 11 may also be an external storage device of the electronic device, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash memory Card (Flash Card) provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only to store application software installed in the electronic device and various types of data, such as the code of an image segmentation program, but also to temporarily store data that has been output or is to be output.
The processor 10 may in some embodiments be composed of an integrated circuit, for example a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors, and combinations of various control chips. The processor 10 is the control unit of the electronic device: it connects the various components of the whole electronic device by using various interfaces and lines, and executes the various functions of the electronic device and processes data by running or executing the programs or modules stored in the memory 11 (e.g., the image segmentation program) and calling the data stored in the memory 11.
The communication bus 12 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The bus may be divided into an address bus, a data bus, a control bus, etc. The communication bus 12 is arranged to enable connection and communication between the memory 11, the at least one processor 10 and other components. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
Fig. 5 shows only an electronic device having components, and those skilled in the art will appreciate that the structure shown in fig. 5 does not constitute a limitation of the electronic device, and may include fewer or more components than those shown, or some components may be combined, or a different arrangement of components.
For example, although not shown, the electronic device may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Optionally, the communication interface 13 may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used to establish a communication connection between the electronic device and other electronic devices.
Optionally, the communication interface 13 may further include a user interface, which may be a display (Display) or an input unit such as a keyboard (Keyboard); optionally, the user interface may also include a standard wired interface and a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is used for displaying information processed in the electronic device and for displaying a visualized user interface.
It is to be understood that the embodiments described are illustrative only and are not to be construed as limiting the scope of the claims.
The image segmentation program stored in the memory 11 of the electronic device is a combination of a plurality of computer programs which, when executed by the processor 10, can implement:
acquiring an unlabeled image set, a labeled image set and real image labels corresponding to the labeled image set, and segmenting the unlabeled image set by using a trained image segmentation model to obtain a first image confidence coefficient;
performing a class activation operation on the first image confidence coefficient to obtain a class activation map of the unlabeled image set;
up-sampling the class activation map to obtain a second image confidence coefficient;
fusing the first image confidence coefficient and the second image confidence coefficient to obtain a pseudo label of the unlabeled image set;
integrating the real image labels and the pseudo labels to obtain final labels, taking the unlabeled image set and the labeled image set as the training data set, and training a preset target image segmentation model by using the training data set and the final labels to obtain a trained target image segmentation model;
and acquiring an image to be segmented, and segmenting the image to be segmented by using the trained target image segmentation model to obtain an image segmentation result of the image to be segmented.
Specifically, for the implementation of the computer program by the processor 10, reference may be made to the description of the relevant steps in the embodiment corresponding to Fig. 1, which is not repeated here.
Further, if the integrated modules/units of the electronic device are implemented in the form of a software functional unit and sold or used as a separate product, they may be stored in a computer-readable medium. The computer-readable medium may be non-volatile or volatile. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, and a read-only memory (ROM).
Embodiments of the present invention may also provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor of an electronic device, the computer program may implement:
acquiring an unlabeled image set, a labeled image set and real image labels corresponding to the labeled image set, and segmenting the unlabeled image set by using a trained image segmentation model to obtain a first image confidence coefficient;
performing a class activation operation on the first image confidence coefficient to obtain a class activation map of the unlabeled image set;
up-sampling the class activation map to obtain a second image confidence coefficient;
fusing the first image confidence coefficient and the second image confidence coefficient to obtain a pseudo label of the unlabeled image set;
integrating the real image labels and the pseudo labels to obtain final labels, taking the unlabeled image set and the labeled image set as the training data set, and training a preset target image segmentation model by using the training data set and the final labels to obtain a trained target image segmentation model;
and acquiring an image to be segmented, and segmenting the image to be segmented by using the trained target image segmentation model to obtain an image segmentation result of the image to be segmented.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
In the embodiments provided by the present invention, it should be understood that the disclosed media, devices, apparatuses and methods may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks associated with one another by cryptographic methods, each data block containing the information of a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. Terms such as first and second are used to denote names, not any particular order.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the same, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions can be made to the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.
Claims (9)
1. An image segmentation method, characterized in that the method comprises:
acquiring an unlabeled image set, a labeled image set and real image labels corresponding to the labeled image set, and segmenting the unlabeled image set by using a trained image segmentation model to obtain a first image confidence coefficient;
performing a class activation operation on the first image confidence coefficient to obtain a class activation map of the unlabeled image set;
up-sampling the class activation map to obtain a second image confidence coefficient;
fusing the first image confidence coefficient and the second image confidence coefficient to obtain a pseudo label of the unlabeled image set;
integrating the real image labels and the pseudo labels to obtain final labels, taking the unlabeled image set and the labeled image set as the training data set, and training a preset target image segmentation model by using the training data set and the final labels to obtain a trained target image segmentation model;
and acquiring an image to be segmented, and segmenting the image to be segmented by using the trained target image segmentation model to obtain an image segmentation result of the image to be segmented;
wherein performing the class activation operation on the first image confidence coefficient to obtain the class activation map of the unlabeled image set comprises:
acquiring a convolution feature map of the unlabeled image set, and setting activation values of the convolution feature map to zero to obtain an activation image confidence coefficient of the unlabeled image set;
calculating a difference set between the first image confidence coefficient and the activation image confidence coefficient, screening out positive difference values greater than zero from the difference set, and dividing the positive difference values by the number of pixel points corresponding to the class activation confidence coefficient to obtain a fusion weight;
and weighting the convolution feature map with the fusion weight, and inputting the weighted feature map into a preset activation function to obtain the class activation map.
2. The image segmentation method according to claim 1, wherein the fusing the first image confidence coefficient and the second image confidence coefficient to obtain the pseudo label of the unlabeled image set comprises:
acquiring a first weight coefficient of the first image confidence coefficient, and multiplying the first weight coefficient by the first image confidence coefficient to obtain a first parameter;
acquiring a second weight coefficient of the second image confidence coefficient, and multiplying the second weight coefficient by the second image confidence coefficient to obtain a second parameter;
summing the first parameter and the second parameter to obtain a fusion parameter;
judging whether the fusion parameter is smaller than a preset threshold value or not;
when the fusion parameter is smaller than a preset threshold value, screening out a label corresponding to the fusion parameter;
and when the fusion parameter is not less than a preset threshold value, taking the label corresponding to the fusion parameter as the pseudo label.
3. The image segmentation method of claim 1, wherein the upsampling the class activation map to obtain a second image confidence level comprises:
sequentially selecting four adjacent pixel values of the class activation map, and performing linear interpolation in the horizontal direction and the vertical direction on each corresponding pixel value in the unlabeled image by using the four adjacent pixel values, to obtain the second image confidence coefficient.
4. The image segmentation method of claim 1, wherein the training a preset target image segmentation model using the training data set and the final label to obtain a trained target image segmentation model comprises:
extracting features of the training data set by using a hole (dilated) convolution layer in a preset target image segmentation model and performing feature fusion to obtain a feature fusion image set;
performing target object identification on the feature fusion image set by using a pyramid pooling layer in the preset target image segmentation model to obtain a target object identification image set;
inputting the target object identification image set into an activation function in the preset target image segmentation model to obtain a predicted image segmentation label of the training data set output by the activation function;
calculating a loss value between the predicted image segmentation labels and the final labels by using a loss function in the preset target image segmentation model, and adjusting parameters of the preset target image segmentation model according to the loss value until the loss value meets a preset condition, to obtain the trained target image segmentation model.
5. The image segmentation method of claim 1, wherein the segmenting the set of unlabeled images using the trained image segmentation model to obtain a first image confidence level comprises:
performing a convolution operation on the unlabeled image set by using the trained image segmentation model to obtain an image feature data set;
pooling the image feature data set to obtain a pooled data set;
and performing an activation operation on the pooled data set by using an activation function to obtain the first image confidence coefficient corresponding to the pooled data in the pooled data set.
6. An image segmentation apparatus, characterized in that the apparatus comprises:
the first image segmentation module is used for acquiring an unlabeled image set, a labeled image set and real image labels corresponding to the labeled image set, and segmenting the unlabeled image set by using a trained image segmentation model to obtain a first image confidence coefficient;
the image class activation module is used for performing a class activation operation on the first image confidence coefficient to obtain a class activation map of the unlabeled image set;
the image up-sampling module is used for up-sampling the class activation map to obtain a second image confidence coefficient;
the image pseudo label generating module is used for fusing the first image confidence coefficient and the second image confidence coefficient to obtain a pseudo label of the unlabeled image set;
the target image segmentation model training module is used for integrating the real image labels and the pseudo labels to obtain final labels, taking the unlabeled image set and the labeled image set as the training data set, and training a preset target image segmentation model by using the training data set and the final labels to obtain a trained target image segmentation model;
the image segmentation module is used for acquiring an image to be segmented, and segmenting the image to be segmented by using the trained target image segmentation model to obtain an image segmentation result of the image to be segmented;
wherein performing the class activation operation on the first image confidence coefficient to obtain the class activation map of the unlabeled image set comprises:
acquiring a convolution feature map of the unlabeled image set, and setting activation values of the convolution feature map to zero to obtain an activation image confidence coefficient of the unlabeled image set;
calculating a difference set between the first image confidence coefficient and the activation image confidence coefficient, screening out positive difference values greater than zero from the difference set, and dividing the positive difference values by the number of pixel points corresponding to the class activation confidence coefficient to obtain a fusion weight;
and weighting the convolution feature map with the fusion weight, and inputting the weighted feature map into a preset activation function to obtain the class activation map.
7. The image segmentation apparatus according to claim 6, wherein the fusing the first image confidence coefficient and the second image confidence coefficient to obtain the pseudo label of the unlabeled image set comprises:
acquiring a first weight coefficient of the first image confidence coefficient, and multiplying the first weight coefficient by the first image confidence coefficient to obtain a first parameter;
acquiring a second weight coefficient of the second image confidence coefficient, and multiplying the second weight coefficient by the second image confidence coefficient to obtain a second parameter;
summing the first parameter and the second parameter to obtain a fusion parameter;
judging whether the fusion parameter is smaller than a preset threshold value or not;
when the fusion parameter is smaller than a preset threshold value, screening out a label corresponding to the fusion parameter;
and when the fusion parameter is not less than a preset threshold value, taking the label corresponding to the fusion parameter as the pseudo label.
8. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the image segmentation method according to any one of claims 1 to 5.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the image segmentation method according to any one of claims 1 to 5.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202210978696.XA | 2022-08-16 | 2022-08-16 | Image segmentation method, device, equipment and storage medium
Publications (2)
Publication Number | Publication Date |
---|---|
CN115049836A CN115049836A (en) | 2022-09-13 |
CN115049836B true CN115049836B (en) | 2022-10-25 |
Family
ID=83168095
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210978696.XA Active CN115049836B (en) | 2022-08-16 | 2022-08-16 | Image segmentation method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115049836B (en) |
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant