CN116363365A - Image segmentation method based on semi-supervised learning and related equipment - Google Patents
- Publication number: CN116363365A (application CN202310332817.8A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V 10/26 (Image or video recognition or understanding; image preprocessing): Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
- G06N 20/00 (Computing arrangements based on specific computational models): Machine learning
- G06V 10/764 (Arrangements for image or video recognition or understanding using pattern recognition or machine learning): Classification, e.g. of video objects
- Y02T 10/40 (Climate change mitigation technologies related to transportation; internal combustion engine based vehicles): Engine management systems
Abstract
The application provides an image segmentation method based on semi-supervised learning, together with a related apparatus, electronic device, and storage medium. The method comprises: collecting labeled images as a first image set and unlabeled images as a second image set; training an image segmentation network on the first image set to obtain a first segmentation network, and segmenting the second image set with the first segmentation network to obtain a first pseudo label for each unlabeled image; voting based on the first pseudo label to obtain a voting map of the corresponding unlabeled image on each pixel class; calculating the credibility of each pixel point in the corresponding unlabeled image based on the first pseudo label; updating the first pseudo label based on the credibility and the voting maps to obtain a second pseudo label for each unlabeled image; and training the first segmentation network on the first image set and the second image set with the second pseudo labels to obtain a second segmentation network. The method improves the accuracy of the pseudo labels and, in turn, the accuracy of image segmentation.
Description
Technical Field
The application relates to the technical field of artificial intelligence, and in particular to an image segmentation method and apparatus, an electronic device, and a storage medium based on semi-supervised learning.
Background
Image segmentation is one of the basic tasks in the field of artificial intelligence and is widely applied in areas such as digital healthcare and smart cities. Fully supervised segmentation requires a large amount of manpower and material resources to label every image manually at the pixel level. Semi-supervised image segmentation, by contrast, needs only a small number of manually labeled images together with a large number of unlabeled images, which greatly reduces the labor and cost of image labeling in practical applications.
At present, semi-supervised image segmentation usually first trains a segmentation network with the labeled images, then feeds the unlabeled images into the network to generate pseudo labels, and finally adds the pseudo-labeled images to the training set to obtain the trained segmentation network. However, a segmentation network trained on a small amount of data cannot generate accurate pseudo labels for unlabeled images, and the significant distribution gap between labeled and unlabeled images further lowers pseudo-label accuracy, so the resulting segmentation accuracy is not high.
Disclosure of Invention
In view of the foregoing, it is necessary to provide an image segmentation method based on semi-supervised learning and related equipment to solve the technical problem of how to improve the accuracy of pseudo labels and, in turn, the accuracy of image segmentation. The related equipment comprises an image segmentation apparatus based on semi-supervised learning, an electronic device, and a storage medium.
The application provides an image segmentation method based on semi-supervised learning, which comprises the following steps:
collecting a label image with label data as a first image set, and collecting a label-free image without label data as a second image set, wherein the label data comprises pixel types of all pixel points in the label image;
training an image segmentation network based on the first image set to obtain a first segmentation network, and inputting the unlabeled images in the second image set into the first segmentation network to obtain first pseudo labels of each unlabeled image;
voting is carried out based on the first pseudo tag so as to obtain a voting graph of the corresponding unlabeled image on each pixel type, wherein the voting graph comprises voting results of the pixel type corresponding to the voting graph at each pixel point in the unlabeled image;
calculating the credibility of each pixel point in the corresponding label-free image based on the first pseudo label;
updating the first pseudo tag based on the confidence level and the voting map to obtain a second pseudo tag for each unlabeled image;
training the first segmentation network based on the first image set and a second image set with a second pseudo tag to obtain a second segmentation network, wherein the input of the second segmentation network is an image to be segmented, and the output is a segmentation result of the image to be segmented.
In some embodiments, the first pseudo tag includes a first probability vector for each pixel in the unlabeled image, the first probability vector including a first probability that the pixel belongs to each pixel class.
In some embodiments, the voting based on the first pseudo tag to obtain a voting map of the corresponding unlabeled image on each pixel class includes:
taking any pixel point in any one first pseudo tag as a target pixel point;
voting in the neighborhood of the target pixel point to obtain a voting result for each pixel class at the target pixel point, wherein the voting result satisfies the relation:

$$V_n(x,y)=\frac{1}{\mathrm{Num}\left(\mathrm{ROI}_{(x,y)}\right)}\sum_{(x^{*},y^{*})\in \mathrm{ROI}_{(x,y)}}\sum_{i=1}^{N}\alpha(i,n)\,P_i(x^{*},y^{*})$$

wherein $\mathrm{ROI}_{(x,y)}$ is the neighborhood of the target pixel point $(x,y)$, $N$ is the number of all pixel classes, $\alpha(i,n)$ is a voting coefficient, $P_i(x^{*},y^{*})$ is the first probability that the pixel $(x^{*},y^{*})$ inside $\mathrm{ROI}_{(x,y)}$ belongs to pixel class $i$, $\mathrm{Num}(\mathrm{ROI}_{(x,y)})$ is the number of all pixels in $\mathrm{ROI}_{(x,y)}$, and $V_n(x,y)$ is the voting result of pixel class $n$ at the target pixel point $(x,y)$; the voting coefficient satisfies the relation:

$$\alpha(i,n)=\begin{cases}1, & i=n\\ 0, & i\neq n\end{cases}$$
Traversing all pixel points in the first pseudo tag to obtain voting results of pixel types at each pixel point;
And selecting voting results of the same pixel type from all the pixel points to construct a voting graph of the unlabeled image corresponding to the first pseudo label on each pixel type.
In some embodiments, the calculating the credibility of each pixel point in the corresponding label-free image based on the first pseudo label includes:
taking any pixel point in any one first pseudo tag as a target pixel point;
calculating the credibility of the target pixel point based on the first probability vector of the target pixel point, wherein the credibility satisfies the relation:

$$\mathrm{Conf}(x,y)=1+\frac{1}{\log N}\sum_{i=1}^{N}P_i(x,y)\log P_i(x,y)$$

wherein $N$ is the number of all pixel classes, $P_i(x,y)$ is the first probability of belonging to pixel class $i$ in the first probability vector of the target pixel point $(x,y)$ (with the convention $0\log 0=0$), and $\mathrm{Conf}(x,y)$ is the credibility of the target pixel point $(x,y)$, with value range $[0,1]$;
Traversing all pixel points in the first pseudo tag to obtain the credibility of each pixel point in the corresponding non-tag image.
In some embodiments, the updating the first pseudo tag based on the confidence level and the voting map to obtain a second pseudo tag for each of the label-free images comprises:
for each unlabeled image, selecting a voting result of a pixel point to be updated from voting graphs of all pixel types as a voting result set, wherein the pixel point to be updated is any pixel point in the unlabeled image;
Selecting a first probability vector of the pixel point to be updated from a first pseudo tag of the label-free image;
updating the first probability vector based on the credibility of the pixel point to be updated and the voting result set to obtain a second probability vector of the pixel point to be updated, wherein the second probability vector comprises a second probability that the pixel point to be updated belongs to each pixel class, and the second probability satisfies the relation:

$$\tilde{P}_i(x',y')=\mathrm{Conf}(x',y')\,P_i(x',y')+\left(1-\mathrm{Conf}(x',y')\right)T_i(x',y')$$

wherein $\mathrm{Conf}(x',y')$ is the credibility of the pixel point $(x',y')$ to be updated, $P_i(x',y')$ is the first probability of belonging to pixel class $i$ in the first probability vector, $T_i(x',y')$ is the voting result of pixel class $i$ in said voting result set, and $\tilde{P}_i(x',y')$ is the second probability that the pixel point $(x',y')$ to be updated belongs to pixel class $i$;
traversing all pixel points in the label-free image to obtain a second probability vector of each pixel point, and taking the second probability vector of all pixel points as a second pseudo label of the label-free image.
In some embodiments, the training the first segmentation network to obtain a second segmentation network based on the first image set and a second image set with a second pseudo tag comprises:
Randomly selecting a preset number of training images from the first image set and the second image set with the second pseudo tag as a training batch, wherein the training images comprise tag images and unlabeled images;
inputting training images in the training batch into the first segmentation network to obtain segmentation results of all pixel points in each training image;
calculating the numerical value of a cost function based on the segmentation result of the pixel points;
updating the first segmentation network according to a gradient descent method to reduce the value of the cost function;
and continuously acquiring new training batches from the first image set and the second image set with the second pseudo tag, updating the first segmentation network until the value of the cost function is smaller than a preset threshold value, and obtaining a second segmentation network.
In some embodiments, the cost function satisfies the relation:

$$L=\frac{1}{N_1 WH}\sum_{u\in Q_1}\sum_{x=1}^{W}\sum_{y=1}^{H}L_{ce}\left(P_u(x,y),\,Y_u(x,y)\right)+\frac{1}{N_2 WH}\sum_{v\in Q_2}\sum_{x=1}^{W}\sum_{y=1}^{H}L_{ce}\left(P_v(x,y),\,\tilde{P}_v(x,y)\right)$$

wherein $Q_1$ and $Q_2$ denote the first image set and the second image set ($u$ and $v$ range over the training images of the batch belonging to each set), $N_1$ and $N_2$ denote the numbers of training images in the training batch belonging to the first image set and the second image set respectively, $W$ and $H$ are the width and height of the training images, $P_u(x,y)$ and $P_v(x,y)$ denote the segmentation results of training images $u$ and $v$ in the training batch at pixel point $(x,y)$, $Y_u(x,y)$ denotes the pixel class of pixel point $(x,y)$ in the label data of training image $u$, $\tilde{P}_v(x,y)$ denotes the second probability vector of pixel point $(x,y)$ in the second pseudo label of training image $v$, $L_{ce}(\cdot,\cdot)$ denotes the cross entropy loss between its two arguments, and $L$ is the value of the cost function.
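As an illustration of the two-term cost above, the following NumPy sketch computes a supervised cross-entropy term over the labeled images of a batch and a pseudo-label term over the unlabeled ones. It is a hedged sketch, not the patent's implementation: the function names are invented, label data is assumed to be one-hot encoded per pixel, and the loss is averaged over pixels.

```python
import numpy as np

def cross_entropy(pred, target, eps=1e-12):
    """Pixel-wise cross entropy between predicted and target distributions.

    pred, target: arrays of shape (H, W, N), per-pixel probability vectors
    over N pixel classes. For labeled images the target is a one-hot
    encoding of the label data; for unlabeled images it is the second
    pseudo label. Returns the mean over all W*H pixels.
    """
    return float(-np.mean(np.sum(target * np.log(pred + eps), axis=-1)))

def cost(labeled_preds, labels, unlabeled_preds, pseudo_labels):
    """Cost = supervised term over Q1 + pseudo-label term over Q2.

    Each argument is a list of (H, W, N) arrays; `labels` are one-hot
    label data, `pseudo_labels` are second pseudo labels.
    """
    sup = np.mean([cross_entropy(p, y) for p, y in zip(labeled_preds, labels)])
    unsup = np.mean([cross_entropy(p, q) for p, q in zip(unlabeled_preds, pseudo_labels)])
    return float(sup + unsup)
```

A perfect prediction drives both terms toward zero, while a uniform prediction against one-hot targets leaves a cross entropy of log N per pixel in each term.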
The embodiment of the application also provides an image segmentation device based on semi-supervised learning, which comprises:
a collection unit, configured to collect a labeled image with label data as a first image set and an unlabeled image without label data as a second image set, wherein the label data comprises the pixel class of each pixel point in the labeled image;
an obtaining unit, configured to train an image segmentation network based on the first image set to obtain a first segmentation network, and to input the unlabeled images in the second image set into the first segmentation network to obtain a first pseudo label for each unlabeled image;
a voting unit, configured to vote based on the first pseudo tag to obtain a voting graph of a corresponding unlabeled image on each pixel type, where the voting graph includes a voting result of the pixel type corresponding to the voting graph at each pixel point in the unlabeled image;
The calculating unit is used for calculating the credibility of each pixel point in the corresponding label-free image based on the first pseudo label;
an updating unit configured to update the first pseudo tag based on the credibility and the voting map to obtain a second pseudo tag of each label-free image;
the training unit is used for training the first segmentation network based on the first image set and the second image set with the second pseudo tag to obtain a second segmentation network, wherein the input of the second segmentation network is an image to be segmented, and the output of the second segmentation network is a segmentation result of the image to be segmented.
The embodiment of the application further provides an electronic device, comprising:
a memory storing at least one instruction;
and a processor that executes the instructions stored in the memory to implement the image segmentation method based on semi-supervised learning.
Embodiments of the present application also provide a computer-readable storage medium having at least one instruction stored therein, the at least one instruction being executed by a processor in an electronic device to implement the semi-supervised learning based image segmentation method.
In summary, the credibility of each pixel point is calculated from the first pseudo label to evaluate the accuracy of the label at that pixel point. At the same time, local information within the neighborhood of each pixel point is aggregated in the first pseudo label to obtain a voting result for each pixel point. Based on the voting result, a larger disturbance is applied to pixel points with lower credibility and a smaller disturbance to pixel points with higher credibility, thereby updating the first pseudo label and improving its accuracy. Finally, the updated pseudo labels are used to train the first segmentation network a second time, which further improves the accuracy of image segmentation.
Drawings
Fig. 1 is a flowchart of a preferred embodiment of the semi-supervised learning based image segmentation method of the present application.
Fig. 2 is a schematic diagram of the correspondence relationship between the label-free image, the first pseudo label, the voting map of the pixel type, and the credibility of the pixel point according to the present application.
Fig. 3 is a functional block diagram of a preferred embodiment of the semi-supervised learning based image segmentation apparatus of the present application.
Fig. 4 is a schematic structural diagram of an electronic device according to a preferred embodiment of the image segmentation method based on semi-supervised learning according to the present application.
Detailed Description
In order that the objects, features, and advantages of the present application may be more clearly understood, the application is described in detail below with reference to the accompanying drawings and specific embodiments. Where no conflict arises, the embodiments of the present application and the features of the embodiments may be combined with each other. Numerous specific details are set forth in the following description to facilitate a thorough understanding of the present application; the described embodiments are merely some, rather than all, of the embodiments of the present application.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
The embodiment of the application provides an image segmentation method based on semi-supervised learning, which can be applied to one or more electronic devices. An electronic device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.
The electronic device may be any electronic product that can interact with a customer in a human-machine manner, such as a personal computer, tablet, smart phone, personal digital assistant (Personal Digital Assistant, PDA), gaming machine, interactive web television (Internet Protocol Television, IPTV), smart wearable device, etc.
The electronic device may also include a network device and/or a client device, wherein the network device includes, but is not limited to, a single network server, a server group composed of a plurality of network servers, or a cloud computing platform composed of a large number of hosts or network servers.
The network in which the electronic device is located includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (Virtual Private Network, VPN), and the like.
As shown in fig. 1, a flowchart of a preferred embodiment of the image segmentation method based on semi-supervised learning is shown. The order of the steps in the flowchart may be changed and some steps may be omitted according to various needs.
The image segmentation method based on semi-supervised learning provided by the embodiment of the application can be applied to any scene needing image segmentation, and the method can be applied to products of the scenes, such as tumor segmentation in the medical field, lane line segmentation in the smart city field and the like.
S10, acquiring a label image with label data as a first image set, and acquiring an unlabeled image without label data as a second image set, wherein the label data comprises pixel types of all pixel points in the label image.
In an alternative embodiment, the number of the pixel types is at least two, the label data of the label images in the first image set is obtained by manual labeling, and the number of the label-free images in the second image set is greater than the number of the label images in the first image set.
Thus, a first image set and a second image set are obtained, wherein the first image set comprises a label image with label data, and a data basis is provided for realizing semi-supervised image segmentation.
S11, training an image segmentation network based on the first image set to obtain a first segmentation network, and inputting the unlabeled images in the second image set into the first segmentation network to obtain first pseudo labels of each unlabeled image.
In an optional embodiment, the training the image segmentation network based on the first image set to obtain a first segmentation network includes:
randomly selecting a label image from the first image set without replacement, and inputting the label image into the image segmentation network to obtain a segmentation result;
calculating the value of a cross entropy loss function according to the segmentation result and the label data, and updating the image segmentation network by using a gradient descent method;
and continuing to select label images from the first image set to update the image segmentation network, stopping when the value of the cross entropy loss function is smaller than a preset value or all the label images in the first image set have been traversed, to obtain the first segmentation network.
The image segmentation network may be an existing semantic segmentation network such as U-Net, FCN, or DeepLab, which is not limited by the application; the preset value is 0.001.
In an alternative embodiment, the first segmentation network learns the mapping between each pixel point in an image and the pixel classes. Because the number of labeled images in the first image set is limited, the segmentation results produced by the first segmentation network are not highly accurate; nevertheless, they can reflect, to a certain extent, the mapping relation between each pixel point in an unlabeled image and the different pixel classes.
In an alternative embodiment, the first pseudo tag includes a first probability vector for each pixel in the unlabeled image, the first probability vector including a first probability that the pixel belongs to each pixel class.
The first probability vectors are in one-to-one correspondence with the pixel points in the unlabeled image, and each first probability vector has N rows and 1 column, where N is the number of all pixel classes; the sum of all first probabilities in a first probability vector is 1.
For example, if the number of all pixel categories is 5, dividing all pixel points in the image into 5 categories; if the size of the label-free image is 3×3, the first pseudo labels corresponding to the label-free image include 9 first probability vectors, and the size of the first probability vectors is 5 rows and 1 column, including the first probabilities that the corresponding pixel points belong to 5 pixel categories.
In this way, by means of the first segmentation network trained in the first image set, a first pseudo label of each unlabeled image in the second image set is obtained, and the first pseudo label can reflect the mapping relation between each pixel point and different pixel types in the unlabeled image.
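As a concrete illustration of this structure, a per-pixel softmax over the segmentation network's raw scores yields the first probability vectors. The sketch below is an assumption for illustration (that the network outputs raw per-pixel logits, and the function name is invented), not the patent's exact formulation.

```python
import numpy as np

def make_first_pseudo_label(logits):
    """Turn raw per-pixel network scores into a first pseudo label.

    logits: array (H, W, N) of unnormalized scores from the first
    segmentation network. A per-pixel softmax yields, for every pixel
    point, a first probability vector over the N pixel classes whose
    entries sum to 1.
    """
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=-1, keepdims=True)
```

For a 3 x 3 unlabeled image and 5 pixel classes this yields 9 first probability vectors of 5 entries each, matching the example above.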
And S12, voting is carried out based on the first pseudo tag so as to obtain a voting graph of the corresponding unlabeled image on each pixel type, wherein the voting graph comprises voting results of the pixel type corresponding to the voting graph at each pixel point in the unlabeled image.
In an optional embodiment, the voting based on the first pseudo tag to obtain a voting map of the corresponding unlabeled image on each pixel type includes:
taking any pixel point in any one first pseudo tag as a target pixel point;
voting in the neighborhood of the target pixel point to obtain a voting result for each pixel class at the target pixel point, wherein the voting result satisfies the relation:

$$V_n(x,y)=\frac{1}{\mathrm{Num}\left(\mathrm{ROI}_{(x,y)}\right)}\sum_{(x^{*},y^{*})\in \mathrm{ROI}_{(x,y)}}\sum_{i=1}^{N}\alpha(i,n)\,P_i(x^{*},y^{*})$$

wherein $\mathrm{ROI}_{(x,y)}$ is the neighborhood of the target pixel point $(x,y)$, $N$ is the number of all pixel classes, $\alpha(i,n)$ is a voting coefficient, $P_i(x^{*},y^{*})$ is the first probability that the pixel $(x^{*},y^{*})$ inside $\mathrm{ROI}_{(x,y)}$ belongs to pixel class $i$, $\mathrm{Num}(\mathrm{ROI}_{(x,y)})$ is the number of all pixels in $\mathrm{ROI}_{(x,y)}$, and $V_n(x,y)$ is the voting result of pixel class $n$ at the target pixel point $(x,y)$; the voting coefficient satisfies the relation:

$$\alpha(i,n)=\begin{cases}1, & i=n\\ 0, & i\neq n\end{cases}$$
Traversing all pixel points in the first pseudo tag to obtain voting results of pixel types at each pixel point;
and selecting voting results of the same pixel type from all the pixel points to construct a voting graph of the unlabeled image corresponding to the first pseudo label on each pixel type.
Wherein the neighborhood of the target pixel point (x, y) is a rectangular area of height h and width w centered on the target pixel point (x, y), h and w being preset; the voting result is the probability that the pixel point belongs to each pixel class, obtained by referring to the local information within the neighborhood of the pixel point.
In this alternative embodiment, all the first pseudo labels are traversed to obtain a voting map for each unlabeled image in the second image set on each pixel class. Each unlabeled image corresponds to N voting maps, where N is the number of all pixel classes; each voting map has the same size as the unlabeled image, and the voting map of pixel class n contains the voting result for pixel class n at each pixel point of the unlabeled image.
Thus, based on the first pseudo label, a voting result for each pixel point on each pixel class is obtained, and in turn a voting map of each unlabeled image in the second image set on each pixel class; the voting result is the probability that the pixel point belongs to each pixel class, obtained by referring to the local information within the neighborhood of the pixel point.
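The neighborhood voting of S12 can be sketched as follows. This is a hedged illustration: it assumes the voting coefficient reduces to an indicator (alpha(i, n) = 1 when i = n, 0 otherwise), so the voting result for class n is the average first probability of class n over the h x w neighborhood, and it clips the window at image borders; the function name and array shapes are assumptions.

```python
import numpy as np

def voting_maps(first_pseudo_label, h=3, w=3):
    """Compute a voting map per pixel class from a first pseudo label.

    first_pseudo_label: array (H, W, N) of first probability vectors.
    For each pixel point (x, y) and class n, the voting result is the
    mean first probability of class n over the h x w rectangular
    neighborhood ROI_(x,y) centered on (x, y), clipped at image borders.
    Returns an array (H, W, N): N voting maps, one per pixel class.
    """
    H, W, N = first_pseudo_label.shape
    out = np.empty_like(first_pseudo_label, dtype=float)
    for x in range(H):
        for y in range(W):
            x0, x1 = max(0, x - h // 2), min(H, x + h // 2 + 1)
            y0, y1 = max(0, y - w // 2), min(W, y + w // 2 + 1)
            roi = first_pseudo_label[x0:x1, y0:y1, :]   # ROI_(x,y)
            out[x, y] = roi.mean(axis=(0, 1))           # vote per class
    return out
```

Under this assumption the voting results at each pixel point still sum to 1 over the pixel classes, so they can be blended with the first probability vector in S14 without renormalization.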
S13, calculating the credibility of each pixel point in the corresponding label-free image based on the first pseudo label.
In an optional embodiment, the calculating the credibility of each pixel point in the corresponding label-free image based on the first pseudo label includes:
taking any pixel point in any one first pseudo tag as a target pixel point;
calculating the credibility of the target pixel point based on the first probability vector of the target pixel point, wherein the credibility satisfies the relation:

$$\mathrm{Conf}(x,y)=1+\frac{1}{\log N}\sum_{i=1}^{N}P_i(x,y)\log P_i(x,y)$$

wherein $N$ is the number of all pixel classes, $P_i(x,y)$ is the first probability of belonging to pixel class $i$ in the first probability vector of the target pixel point $(x,y)$ (with the convention $0\log 0=0$), and $\mathrm{Conf}(x,y)$ is the credibility of the target pixel point $(x,y)$, with value range $[0,1]$;
Traversing all pixel points in the first pseudo tag to obtain the credibility of each pixel point in the corresponding non-tag image.
The credibility reflects the accuracy of the first probability vector of the pixel point in the first pseudo label; the larger the credibility, the higher the accuracy of the first probability vector. For example, assuming the number of all pixel classes is 4, if the first probability vector of the target pixel point (x, y) is [0.1, 0.3, 0, 0.6], the credibility of the target pixel point (x, y) is $\mathrm{Conf}(x,y)=1+\frac{0.1\log 0.1+0.3\log 0.3+0+0.6\log 0.6}{\log 4}\approx 0.35$.
in this alternative embodiment, all the first pseudo tags are traversed to obtain the credibility of each pixel point in the label-free image corresponding to each first pseudo tag.
Thus, the accurate quantification of the credibility of the first probability vector of each pixel point in the first pseudo tag is realized.
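Taking the credibility as one minus the normalized entropy of the first probability vector (an assumption consistent with the stated [0, 1] value range; the patent's exact formula is not reproduced here), the quantification of S13 can be sketched as:

```python
import numpy as np

def credibility(first_pseudo_label, eps=1e-12):
    """Per-pixel credibility of a first pseudo label.

    first_pseudo_label: array (H, W, N) of first probability vectors.
    Credibility is one minus the normalized entropy of the probability
    vector, so it lies in [0, 1]: close to 1 for a near-one-hot vector
    (high confidence), close to 0 for a near-uniform vector.
    """
    p = first_pseudo_label
    N = p.shape[-1]
    # eps guards log(0); entries equal to 0 contribute ~0 to the sum
    entropy = -np.sum(p * np.log(p + eps), axis=-1) / np.log(N)
    return 1.0 - entropy
```

For the example vector [0.1, 0.3, 0, 0.6] with N = 4 this gives a credibility of about 0.35.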
S14, updating the first pseudo tag based on the credibility and the voting graph to obtain a second pseudo tag of each label-free image.
In an alternative embodiment, each first pseudo tag corresponds to an unlabeled image, and at the same time, each first pseudo tag corresponds to a voting map of each pixel type and reliability of each pixel point, and correspondence among the unlabeled image, the first pseudo tag, the voting map of the pixel type and the reliability of the pixel point is shown in fig. 2.
In an alternative embodiment, updating the first pseudo tag based on the confidence level and the voting map to obtain a second pseudo tag for each of the label-free images includes:
for each unlabeled image, selecting a voting result of a pixel point to be updated from voting graphs of all pixel types as a voting result set, wherein the pixel point to be updated is any pixel point in the unlabeled image;
selecting a first probability vector of the pixel point to be updated from a first pseudo tag of the label-free image;
updating the first probability vector based on the reliability of the pixel to be updated and the voting result set to obtain a second probability vector of the pixel to be updated, wherein the second probability vector comprises a second probability that the pixel to be updated belongs to each pixel category, and the second probability satisfies a relation:
wherein Conf(x', y') is the credibility of the pixel point (x', y') to be updated, P_i(x', y') is the first probability of belonging to the pixel class i in the first probability vector, T_i(x', y') is the voting result of the pixel class i in the voting result set, and P̂_i(x', y') is the second probability that the pixel point (x', y') to be updated belongs to the pixel class i;
traversing all pixel points in the label-free image to obtain a second probability vector of each pixel point, and taking the second probability vector of all pixel points as a second pseudo label of the label-free image.
In this alternative embodiment, each unlabeled image in the second set of images corresponds to a second pseudo-label, and the second pseudo-label includes a second probability vector for each pixel in the unlabeled image.
In this way, the first pseudo label is updated based on the credibility of each pixel point in the label-free image and the voting map of each pixel type, so that the second pseudo label of each label-free image can be obtained without manual labeling. Because the second pseudo label jointly considers the first pseudo label and the local information in the neighborhood range of each pixel point, it can accurately reflect the mapping relation between the pixel points in the label-free image and each pixel type.
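The per-pixel update step can be sketched as follows. The exact relation is given only as an unreproduced formula, so the confidence-weighted convex combination below (and the name `update_pixel`) is an assumption that matches the described behavior: low-credibility pixels are perturbed strongly toward the voting result, high-credibility pixels only slightly.

```python
import numpy as np

def update_pixel(first_prob, votes, conf):
    """Blend one pixel's first probability vector with its voting results.

    Assumption: the patent states the relation only as an unreproduced
    formula; P2 = conf * P1 + (1 - conf) * T is a sketch consistent with
    the described disturbance behavior (larger disturbance for smaller
    credibility, smaller disturbance for larger credibility).
    """
    p1 = np.asarray(first_prob, dtype=float)    # first probability vector
    t = np.asarray(votes, dtype=float)          # voting result set for this pixel
    p2 = conf * p1 + (1.0 - conf) * t
    return p2 / p2.sum()                        # renormalize to a probability vector
```

Applying this to every pixel of a label-free image yields its second pseudo label.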
S15, training the first segmentation network based on the first image set and the second image set with the second pseudo tag to obtain a second segmentation network, wherein the input of the second segmentation network is an image to be segmented, and the segmentation result of the image to be segmented is output.
In an alternative embodiment, after the second pseudo tag of each label-free image is obtained, the first segmentation network is trained a second time based on the first image set and the second image set with the second pseudo tags, and the second pseudo tags constrain the first segmentation network to learn an accurate mapping relation between the pixel points and each pixel type.
In an alternative embodiment, the training the first segmentation network based on the first image set and a second image set with a second pseudo tag to obtain a second segmentation network includes:
randomly selecting a preset number of training images from the first image set and the second image set with the second pseudo tag as a training batch, wherein the training images comprise tag images and unlabeled images;
inputting training images in the training batch into the first segmentation network to obtain segmentation results of all pixel points in each training image;
Calculating a numerical value of a cost function based on the segmentation result of the pixel points, wherein the cost function satisfies a relation:
wherein Q_1 and Q_2 represent the first image set and the second image set, N_1 and N_2 respectively represent the number of training images in the training batch belonging to the first image set and the second image set, W and H are the width and height dimensions of the training images, P_u(x, y) and P_v(x, y) respectively represent the segmentation results of the training images u and v in the training batch at the pixel point (x, y), Y_u(x, y) represents the pixel type of the pixel point (x, y) in the label data of the training image u, Ŷ_v(x, y) represents the second probability vector of the pixel point (x, y) in the second pseudo tag of the training image v, CE(·, ·) represents the cross-entropy loss function between its two arguments, and Loss is the value of the cost function;
updating the first segmentation network according to a gradient descent method to reduce the value of the cost function;
and continuously acquiring new training batches from the first image set and the second image set with the second pseudo tag, updating the first segmentation network until the value of the cost function is smaller than a preset threshold value, and obtaining a second segmentation network.
Wherein the preset threshold value is 0.001 and the preset number is 32, namely one training batch includes 32 images, and the sum of N_2 and N_1 is equal to the preset number.
In this optional embodiment, the second segmentation network may learn an accurate mapping relationship between a pixel point and each pixel type in the image, and input the image to be segmented into the second segmentation network to obtain a segmentation result of the image to be segmented, where the segmentation result includes a pixel type of each pixel point in the image to be segmented.
In this way, the label image with the label data and the label-free image with the second pseudo label are utilized to train the first segmentation network for the second time to obtain a second segmentation network, and the second segmentation network can learn the accurate mapping relation between the pixel points and the pixel types in the image.
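The batch cost used in this second training can be sketched as follows. The exact relation is given only as an unreproduced formula, so the per-set normalization and all names here are assumptions: the sketch computes mean pixel-wise cross entropy against ground-truth labels for first-set images and against second pseudo labels for second-set images, then sums the two terms.

```python
import numpy as np

def batch_loss(labeled, unlabeled, eps=1e-12):
    """Cost over one training batch (sketch, names assumed).

    `labeled` is a list of (pred, onehot_label) pairs from the first
    image set; `unlabeled` is a list of (pred, second_pseudo_label)
    pairs from the second image set. Every array has shape (H, W, N).
    """
    def ce(pred, target):
        # mean pixel-wise cross entropy between prediction and target
        return float(-np.mean(np.sum(target * np.log(pred + eps), axis=-1)))

    sup = sum(ce(p, y) for p, y in labeled) / max(len(labeled), 1)
    unsup = sum(ce(p, y) for p, y in unlabeled) / max(len(unlabeled), 1)
    return sup + unsup
```

The training loop would repeatedly draw a 32-image batch, evaluate this cost, and apply gradient descent until the value falls below the preset threshold.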
According to the technical scheme, the reliability of each pixel point in the first pseudo tag is calculated on the basis of the first pseudo tag so as to evaluate the accuracy of each pixel point. Meanwhile, the local information in the neighborhood range of each pixel point is perceived in the first pseudo tag to obtain the voting result of each pixel point. On the basis of the voting result, a larger disturbance is applied to pixel points with smaller reliability in the first pseudo tag and a smaller disturbance is applied to pixel points with larger reliability, so that the first pseudo tag is updated and the accuracy of the pseudo tag is improved. Finally, the updated pseudo tag is used to train the first segmentation network a second time, which improves the accuracy of image segmentation.
Referring to fig. 3, fig. 3 is a functional block diagram of a preferred embodiment of the image segmentation apparatus based on semi-supervised learning in the present application. The image segmentation apparatus 11 based on semi-supervised learning includes an acquisition unit 110, an obtaining unit 111, a voting unit 112, a calculation unit 113, an updating unit 114, and a training unit 115. A module/unit referred to herein is a series of computer readable instructions stored in the memory 12, capable of being executed by the processor 13 and of performing a fixed function. In the present embodiment, the functions of the respective modules/units will be described in detail in the following embodiments.
In an alternative embodiment, the acquisition unit 110 is configured to acquire a label image with label data as a first image set, and acquire an unlabeled image without label data as a second image set, where the label data includes a pixel type of each pixel point in the label image.
In an alternative embodiment, the obtaining unit 111 is configured to train the image segmentation network based on the first image set to obtain a first segmentation network, and input the unlabeled images in the second image set into the first segmentation network to obtain the first pseudo label of each unlabeled image.
In an alternative embodiment, the first pseudo tag includes a first probability vector for each pixel in the unlabeled image, the first probability vector including a first probability that the pixel belongs to each pixel class.
In an alternative embodiment, the voting unit 112 is configured to vote based on the first pseudo tag to obtain a voting map of the corresponding unlabeled image on each pixel type, where the voting map includes a voting result of the pixel type corresponding to the voting map at each pixel point in the unlabeled image.
In an optional embodiment, the voting based on the first pseudo tag to obtain a voting map of the corresponding unlabeled image on each pixel type includes:
taking any pixel point in any one first pseudo tag as a target pixel point;
voting is carried out in the neighborhood range of the target pixel point so as to obtain voting results of various pixel types at the target pixel point, and the voting results meet the relation:
wherein ROI_(x, y) is the neighborhood range of the target pixel point (x, y), N is the number of all pixel types, α(i, n) is a voting coefficient, P_i(x*, y*) represents the first probability of the pixel point (x*, y*) within ROI_(x, y) belonging to the pixel class i, Num(ROI_(x, y)) represents the number of all pixel points within ROI_(x, y), and T_n(x, y) is the voting result of the pixel class n at the target pixel point (x, y); the voting coefficient satisfies a relation:
Traversing all pixel points in the first pseudo tag to obtain voting results of pixel types at each pixel point;
and selecting voting results of the same pixel type from all the pixel points to construct a voting graph of the unlabeled image corresponding to the first pseudo label on each pixel type.
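The voting procedure above can be sketched as follows. Because the relation and the voting coefficient α(i, n) are given only as unreproduced formulas, the plain neighborhood averaging below is an assumption (a uniform coefficient); it still shows the structure: one voting map per pixel type, each built from the first probabilities inside the ROI of every pixel.

```python
import numpy as np

def voting_maps(first_pseudo_label, radius=1):
    """Voting maps from a first pseudo label of shape (H, W, N).

    Assumption: the vote for class n at (x, y) is sketched here as the
    mean of the class-n first probabilities over the neighborhood
    ROI_(x, y); the patent's voting coefficient alpha(i, n) is not
    reproduced and is replaced by this uniform weighting.
    """
    h, w, n = first_pseudo_label.shape
    votes = np.zeros_like(first_pseudo_label)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            roi = first_pseudo_label[y0:y1, x0:x1, :]   # ROI_(x, y)
            votes[y, x] = roi.mean(axis=(0, 1))         # per-class voting result
    return votes                                        # votes[..., n] is the map of class n
```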
In an alternative embodiment, the calculating unit 113 is configured to calculate the credibility of each pixel point in the corresponding label-free image based on the first pseudo label.
In an optional embodiment, the calculating the credibility of each pixel point in the corresponding label-free image based on the first pseudo label includes:
taking any pixel point in any one first pseudo tag as a target pixel point;
calculating the credibility of the target pixel point based on the first probability vector of the target pixel point, wherein the credibility satisfies the relation:
where N is the number of all pixel types, P_i(x, y) is the first probability of belonging to the pixel class i in the first probability vector of the target pixel point (x, y), and Conf(x, y) is the credibility of the target pixel point (x, y), with value range [0, 1];
Traversing all pixel points in the first pseudo tag to obtain the credibility of each pixel point in the corresponding non-tag image.
In an alternative embodiment, the updating unit 114 is configured to update the first pseudo tag to obtain a second pseudo tag for each of the untag images based on the confidence level and the voting map.
In an alternative embodiment, said updating said first pseudo tag based on said confidence level and said voting map to obtain a second pseudo tag for each unlabeled image comprises:
for each unlabeled image, selecting a voting result of a pixel point to be updated from voting graphs of all pixel types as a voting result set, wherein the pixel point to be updated is any pixel point in the unlabeled image;
selecting a first probability vector of the pixel point to be updated from a first pseudo tag of the label-free image;
updating the first probability vector based on the reliability of the pixel to be updated and the voting result set to obtain a second probability vector of the pixel to be updated, wherein the second probability vector comprises a second probability that the pixel to be updated belongs to each pixel category, and the second probability satisfies a relation:
wherein Conf(x', y') is the credibility of the pixel point (x', y') to be updated, P_i(x', y') is the first probability of belonging to the pixel class i in the first probability vector, T_i(x', y') is the voting result of the pixel class i in the voting result set, and P̂_i(x', y') is the second probability that the pixel point (x', y') to be updated belongs to the pixel class i;
traversing all pixel points in the label-free image to obtain a second probability vector of each pixel point, and taking the second probability vector of all pixel points as a second pseudo label of the label-free image.
In an alternative embodiment, the training unit 115 is configured to train the first segmentation network to obtain a second segmentation network based on the first image set and a second image set with a second pseudo tag, where an input of the second segmentation network is an image to be segmented, and output a segmentation result of the image to be segmented.
In an alternative embodiment, the training the first segmentation network based on the first image set and a second image set with a second pseudo tag to obtain a second segmentation network includes:
randomly selecting a preset number of training images from the first image set and the second image set with the second pseudo tag as a training batch, wherein the training images comprise tag images and unlabeled images;
Inputting training images in the training batch into the first segmentation network to obtain segmentation results of all pixel points in each training image;
calculating the numerical value of a cost function based on the segmentation result of the pixel points;
updating the first segmentation network according to a gradient descent method to reduce the value of the cost function;
and continuously acquiring new training batches from the first image set and the second image set with the second pseudo tag, updating the first segmentation network until the value of the cost function is smaller than a preset threshold value, and obtaining a second segmentation network.
In an alternative embodiment, the cost function satisfies the relation:
wherein Q_1 and Q_2 represent the first image set and the second image set, N_1 and N_2 respectively represent the number of training images in the training batch belonging to the first image set and the second image set, W and H are the width and height dimensions of the training images, P_u(x, y) and P_v(x, y) respectively represent the segmentation results of the training images u and v in the training batch at the pixel point (x, y), Y_u(x, y) represents the pixel type of the pixel point (x, y) in the label data of the training image u, Ŷ_v(x, y) represents the second probability vector of the pixel point (x, y) in the second pseudo tag of the training image v, CE(·, ·) represents the cross-entropy loss function between its two arguments, and Loss is the value of the cost function.
According to the technical scheme, the reliability of each pixel point in the first pseudo tag is calculated on the basis of the first pseudo tag so as to evaluate the accuracy of each pixel point. Meanwhile, the local information in the neighborhood range of each pixel point is perceived in the first pseudo tag to obtain the voting result of each pixel point. On the basis of the voting result, a larger disturbance is applied to pixel points with smaller reliability in the first pseudo tag and a smaller disturbance is applied to pixel points with larger reliability, so that the first pseudo tag is updated and the accuracy of the pseudo tag is improved. Finally, the updated pseudo tag is used to train the first segmentation network a second time, which improves the accuracy of image segmentation.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 1 comprises a memory 12 and a processor 13. The memory 12 is used for storing computer readable instructions, and the processor 13 is used to execute the computer readable instructions stored in the memory to implement the semi-supervised learning based image segmentation method according to any of the above embodiments.
In an alternative embodiment, the electronic device 1 further comprises a bus, a computer program stored in said memory 12 and executable on said processor 13, such as an image segmentation program based on semi-supervised learning.
Fig. 4 shows only the electronic device 1 with a memory 12 and a processor 13. It will be understood by those skilled in the art that the structure shown in fig. 4 does not constitute a limitation of the electronic device 1, which may comprise fewer or more components than shown, combine certain components, or have a different arrangement of components.
In connection with fig. 1, the memory 12 in the electronic device 1 stores a plurality of computer readable instructions to implement a semi-supervised learning based image segmentation method, the processor 13 being executable to implement:
collecting a label image with label data as a first image set, and collecting a label-free image without label data as a second image set, wherein the label data comprises pixel types of all pixel points in the label image;
training an image segmentation network based on the first image set to obtain a first segmentation network, and inputting the unlabeled images in the second image set into the first segmentation network to obtain first pseudo labels of each unlabeled image;
Voting is carried out based on the first pseudo tag so as to obtain a voting graph of the corresponding unlabeled image on each pixel type, wherein the voting graph comprises voting results of the pixel type corresponding to the voting graph at each pixel point in the unlabeled image;
calculating the credibility of each pixel point in the corresponding label-free image based on the first pseudo label;
updating the first pseudo tag based on the confidence level and the voting map to obtain a second pseudo tag for each unlabeled image;
training the first segmentation network based on the first image set and a second image set with a second pseudo tag to obtain a second segmentation network, wherein the input of the second segmentation network is an image to be segmented, and the output is a segmentation result of the image to be segmented.
Specifically, the specific implementation method of the above instructions by the processor 13 may refer to the description of the relevant steps in the corresponding embodiment of fig. 1, which is not repeated herein.
It will be appreciated by those skilled in the art that the schematic diagram is merely an example of the electronic device 1 and does not constitute a limitation of the electronic device 1. The electronic device 1 may have a bus-type structure or a star-type structure, and may further comprise more or fewer hardware or software components than illustrated, or a different arrangement of components; for example, the electronic device 1 may further comprise an input-output device, a network access device, etc.
It should be noted that the electronic device 1 is only used as an example, and other electronic products that may be present in the present application or may be present in the future are also included in the scope of the present application and are incorporated herein by reference.
The memory 12 includes at least one type of readable storage medium, which may be non-volatile or volatile. The readable storage medium includes flash memory, a removable hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, etc. The memory 12 may in some embodiments be an internal storage unit of the electronic device 1, such as a mobile hard disk of the electronic device 1. The memory 12 may in other embodiments also be an external storage device of the electronic device 1, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the electronic device 1. The memory 12 may be used not only for storing application software installed in the electronic apparatus 1 and various types of data, such as codes of an image segmentation program based on semi-supervised learning, but also for temporarily storing data that has been output or is to be output.
The processor 13 may be comprised of integrated circuits in some embodiments, for example, a single packaged integrated circuit, or may be comprised of multiple integrated circuits packaged with the same or different functions, including one or more central processing units (Central Processing unit, CPU), microprocessors, digital processing chips, graphics processors, a combination of various control chips, and the like. The processor 13 is a Control Unit (Control Unit) of the electronic device 1, connects the respective components of the entire electronic device 1 using various interfaces and lines, executes various functions of the electronic device 1 and processes data by running or executing programs or modules stored in the memory 12 (for example, executing an image segmentation program based on semi-supervised learning, etc.), and calls data stored in the memory 12.
The processor 13 executes the operating system of the electronic device 1 and various types of applications installed. The processor 13 executes the application program to implement the steps of the various embodiments of the semi-supervised learning based image segmentation methods described above, such as those shown in FIG. 1.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory 12 and executed by the processor 13 to complete the present application. The one or more modules/units may be a series of computer readable instruction segments capable of performing the specified functions, which instruction segments describe the execution of the computer program in the electronic device 1. For example, the computer program may be divided into an acquisition unit 110, an obtaining unit 111, a voting unit 112, a calculation unit 113, an updating unit 114, and a training unit 115.
The integrated units implemented in the form of software functional modules described above may be stored in a computer readable storage medium. The software functional module is stored in a storage medium, and includes several instructions for causing a computer device (which may be a personal computer, a computer device, or a network device, etc.) or a Processor (Processor) to execute portions of the semi-supervised learning-based image segmentation methods described in various embodiments of the present application.
The integrated modules/units of the electronic device 1 may be stored in a computer readable storage medium if implemented in the form of software functional units and sold or used as a stand alone product. Based on such understanding, the present application may implement all or part of the flow of the method of the above embodiment, or may be implemented by instructing the relevant hardware device by a computer program, where the computer program may be stored in a computer readable storage medium, and the computer program may implement the steps of each method embodiment described above when executed by a processor.
Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory, other memories, and the like.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created from the use of blockchain nodes, and the like.
The blockchain referred to in the application is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanism, encryption algorithm and the like. The Blockchain (Blockchain), which is essentially a decentralised database, is a string of data blocks that are generated by cryptographic means in association, each data block containing a batch of information of network transactions for verifying the validity of the information (anti-counterfeiting) and generating the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
The bus may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, among others. The bus may be classified as an address bus, a data bus, a control bus, etc. For ease of illustration, only one arrow is shown in fig. 4, but this does not mean that there is only one bus or only one type of bus. The bus is arranged to enable connection and communication between the memory 12 and the at least one processor 13, etc.
The embodiment of the present application further provides a computer readable storage medium (not shown), where computer readable instructions are stored, where the computer readable instructions are executed by a processor in an electronic device to implement the image segmentation method based on semi-supervised learning according to any of the embodiments above.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units can be realized in a form of hardware or a form of hardware and a form of software functional modules.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. Several of the elements or devices described in the specification may be embodied by one and the same item of software or hardware. The terms first, second, etc. are used to denote a name, but not any particular order.
Finally, it should be noted that the above embodiments are merely for illustrating the technical solution of the present application and not for limiting, and although the present application has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that the technical solution of the present application may be modified or substituted without departing from the spirit and scope of the technical solution of the present application.
Claims (10)
1. An image segmentation method based on semi-supervised learning, the method comprising:
collecting a label image with label data as a first image set, and collecting a label-free image without label data as a second image set, wherein the label data comprises pixel types of all pixel points in the label image;
training an image segmentation network based on the first image set to obtain a first segmentation network, and inputting the unlabeled images in the second image set into the first segmentation network to obtain first pseudo labels of each unlabeled image;
Voting is carried out based on the first pseudo tag so as to obtain a voting graph of the corresponding unlabeled image on each pixel type, wherein the voting graph comprises voting results of the pixel type corresponding to the voting graph at each pixel point in the unlabeled image;
calculating the credibility of each pixel point in the corresponding label-free image based on the first pseudo label;
updating the first pseudo tag based on the confidence level and the voting map to obtain a second pseudo tag for each unlabeled image;
training the first segmentation network based on the first image set and a second image set with a second pseudo tag to obtain a second segmentation network, wherein the input of the second segmentation network is an image to be segmented, and the output is a segmentation result of the image to be segmented.
2. The semi-supervised learning based image segmentation method as set forth in claim 1, wherein the first pseudo tag includes a first probability vector for each pixel in the unlabeled image, the first probability vector including a first probability that the pixel belongs to each pixel class.
3. The semi-supervised learning based image segmentation method as set forth in claim 1, wherein the voting based on the first pseudo tag to obtain a voting map of the corresponding unlabeled image across pixel classes includes:
Taking any pixel point in any one first pseudo tag as a target pixel point;
voting is carried out in the neighborhood range of the target pixel point so as to obtain voting results of various pixel types at the target pixel point, and the voting results meet the relation:
wherein ROI_(x, y) is the neighborhood range of the target pixel point (x, y), N is the number of all pixel types, α(i, n) is a voting coefficient, P_i(x*, y*) represents the first probability of the pixel point (x*, y*) within ROI_(x, y) belonging to the pixel class i, Num(ROI_(x, y)) represents the number of all pixel points within ROI_(x, y), and T_n(x, y) is the voting result of the pixel class n at the target pixel point (x, y); the voting coefficient satisfies a relation:
Traversing all pixel points in the first pseudo tag to obtain voting results of pixel types at each pixel point;
and selecting voting results of the same pixel type from all the pixel points to construct a voting graph of the unlabeled image corresponding to the first pseudo label on each pixel type.
4. The method for image segmentation based on semi-supervised learning as set forth in claim 1, wherein the calculating the credibility of each pixel point in the corresponding unlabeled image based on the first pseudo label includes:
Taking any pixel point in any one first pseudo tag as a target pixel point;
calculating the credibility of the target pixel point based on the first probability vector of the target pixel point, wherein the credibility satisfies the relation:
where N is the number of all pixel types, P_i(x, y) is the first probability of belonging to the pixel class i in the first probability vector of the target pixel point (x, y), and Conf(x, y) is the credibility of the target pixel point (x, y), with value range [0, 1];
Traversing all pixel points in the first pseudo tag to obtain the credibility of each pixel point in the corresponding non-tag image.
5. The semi-supervised learning based image segmentation method as set forth in claim 1, wherein the updating the first pseudo tags based on the confidence level and the voting map to obtain second pseudo tags for each unlabeled image includes:
for each unlabeled image, selecting a voting result of a pixel point to be updated from voting graphs of all pixel types as a voting result set, wherein the pixel point to be updated is any pixel point in the unlabeled image;
selecting a first probability vector of the pixel point to be updated from a first pseudo tag of the label-free image;
Updating the first probability vector based on the reliability of the pixel to be updated and the voting result set to obtain a second probability vector of the pixel to be updated, wherein the second probability vector comprises a second probability that the pixel to be updated belongs to each pixel category, and the second probability satisfies a relation:
wherein Conf(x', y') is the credibility of the pixel point (x', y') to be updated, P_i(x', y') is the first probability of belonging to the pixel class i in the first probability vector, T_i(x', y') is the voting result of the pixel class i in the voting result set, and P̂_i(x', y') is the second probability that the pixel point (x', y') to be updated belongs to the pixel class i;
traversing all pixel points in the label-free image to obtain a second probability vector of each pixel point, and taking the second probability vector of all pixel points as a second pseudo label of the label-free image.
6. The semi-supervised learning based image segmentation method as set forth in claim 1, wherein the training the first segmentation network based on the first image set and a second image set with a second pseudo tag to obtain a second segmentation network comprises:
randomly selecting a preset number of training images from the first image set and the second image set with the second pseudo tag as a training batch, wherein the training images comprise tag images and unlabeled images;
inputting training images in the training batch into the first segmentation network to obtain segmentation results of all pixel points in each training image;
calculating the numerical value of a cost function based on the segmentation result of the pixel points;
updating the first segmentation network according to a gradient descent method to reduce the value of the cost function;
and continuously acquiring new training batches from the first image set and the second image set with the second pseudo tag, updating the first segmentation network until the value of the cost function is smaller than a preset threshold value, and obtaining a second segmentation network.
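The training loop of claim 6 can be sketched as follows. This is a minimal PyTorch sketch, not the patent's implementation: `loader_fn`, the learning rate, and the step cap are assumed helpers/parameters, and plain SGD stands in for "a gradient descent method":

```python
import torch
from torch import nn

def train_second_network(model, loader_fn, loss_threshold,
                         lr=1e-3, max_steps=10000):
    """Update the first segmentation network by gradient descent until the
    cost drops below a preset threshold, yielding the second network.

    loader_fn: assumed helper returning a (images, targets) training batch
    randomly drawn from the labeled set and the pseudo-labeled set.
    """
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(max_steps):
        images, targets = loader_fn()      # mixed labeled / pseudo-labeled batch
        logits = model(images)             # per-pixel segmentation results
        loss = nn.functional.cross_entropy(logits, targets)
        opt.zero_grad()
        loss.backward()
        opt.step()                         # gradient descent update
        if loss.item() < loss_threshold:   # stopping criterion of claim 6
            break
    return model
```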
7. The semi-supervised learning based image segmentation method as set forth in claim 6, wherein the cost function satisfies the relation:

Loss = (1 / (N1·W·H)) · Σ_{u∈Q1} Σ_{x=1}^{W} Σ_{y=1}^{H} L_ce(P_u(x, y), Y_u(x, y)) + (1 / (N2·W·H)) · Σ_{v∈Q2} Σ_{x=1}^{W} Σ_{y=1}^{H} L_ce(P_v(x, y), Ŷ_v(x, y))

wherein Q1 and Q2 represent the first image set and the second image set, N1 and N2 respectively represent the numbers of training images in the training batch belonging to the first image set and to the second image set, W and H are the width and height of the training images, P_u(x, y) and P_v(x, y) respectively represent the segmentation results of training images u and v in the training batch at pixel point (x, y), Y_u(x, y) is the pixel type of pixel point (x, y) in the label data of training image u, Ŷ_v(x, y) is the second probability vector of pixel point (x, y) in the second pseudo tag of training image v, L_ce(·, ·) is the cross-entropy loss function for its two arguments, and Loss is the value of the cost function.
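The two-term cost of claim 7, hard cross-entropy on labeled images plus cross-entropy against the soft second pseudo tags on unlabeled images, can be sketched as below. The per-image/per-pixel averaging is assumed to match PyTorch's `mean` reduction; the summation structure is reconstructed from the variable definitions, not reproduced verbatim from the patent:

```python
import torch
import torch.nn.functional as F

def semi_supervised_cost(preds_l, labels_l, preds_u, pseudo_u):
    """Reconstructed cost of claim 7 (sketch).

    preds_l:  (N1, C, H, W) logits for labeled training images.
    labels_l: (N1, H, W) hard pixel-class labels (label data).
    preds_u:  (N2, C, H, W) logits for unlabeled training images.
    pseudo_u: (N2, C, H, W) soft second-pseudo-tag probability vectors.
    """
    # Supervised term: mean pixel-wise cross-entropy against hard labels.
    sup = F.cross_entropy(preds_l, labels_l)
    # Unsupervised term: cross-entropy against the soft pseudo-label vectors.
    log_p = F.log_softmax(preds_u, dim=1)
    unsup = -(pseudo_u * log_p).sum(dim=1).mean()
    return sup + unsup
```

When the soft pseudo tags collapse to one-hot vectors, the second term reduces to ordinary hard-label cross-entropy, so both terms are on the same scale.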
8. An image segmentation apparatus based on semi-supervised learning, the apparatus comprising:
an acquisition unit, configured to acquire label images with label data as a first image set and to acquire unlabeled images without label data as a second image set, wherein the label data comprises the pixel types of all pixel points in the label images;
a generating unit, configured to train an image segmentation network based on the first image set to obtain a first segmentation network, and to input the unlabeled images in the second image set into the first segmentation network to acquire a first pseudo tag of each unlabeled image;
a voting unit, configured to vote based on the first pseudo tag to obtain a voting graph of a corresponding unlabeled image on each pixel type, where the voting graph includes a voting result of the pixel type corresponding to the voting graph at each pixel point in the unlabeled image;
the calculating unit is used for calculating the credibility of each pixel point in the corresponding label-free image based on the first pseudo label;
an updating unit, configured to update the first pseudo tag based on the credibility and the voting map to obtain a second pseudo tag of each unlabeled image;
the training unit is used for training the first segmentation network based on the first image set and the second image set with the second pseudo tag to obtain a second segmentation network, wherein the input of the second segmentation network is an image to be segmented, and the output of the second segmentation network is a segmentation result of the image to be segmented.
9. An electronic device, the electronic device comprising:
a memory storing computer readable instructions; and
a processor executing the computer readable instructions stored in the memory to implement the semi-supervised learning based image segmentation method as recited in any one of claims 1 to 7.
10. A computer readable storage medium having stored thereon computer readable instructions which when executed by a processor implement the semi-supervised learning based image segmentation method as recited in any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310332817.8A CN116363365A (en) | 2023-03-23 | 2023-03-23 | Image segmentation method based on semi-supervised learning and related equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116363365A true CN116363365A (en) | 2023-06-30 |
Family
ID=86906877
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310332817.8A Pending CN116363365A (en) | 2023-03-23 | 2023-03-23 | Image segmentation method based on semi-supervised learning and related equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116363365A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117197593A (en) * | 2023-11-06 | 2023-12-08 | 天河超级计算淮海分中心 | Medical image pseudo tag generation system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111898696B (en) | Pseudo tag and tag prediction model generation method, device, medium and equipment | |
CN115063632B (en) | Vehicle damage identification method, device, equipment and medium based on artificial intelligence | |
CN112132032B (en) | Traffic sign board detection method and device, electronic equipment and storage medium | |
CN111739016B (en) | Target detection model training method and device, electronic equipment and storage medium | |
CN113705462B (en) | Face recognition method, device, electronic equipment and computer readable storage medium | |
CN112232203B (en) | Pedestrian recognition method and device, electronic equipment and storage medium | |
CN111860377A (en) | Live broadcast method and device based on artificial intelligence, electronic equipment and storage medium | |
CN115049878B (en) | Target detection optimization method, device, equipment and medium based on artificial intelligence | |
CN115063589A (en) | Knowledge distillation-based vehicle component segmentation method and related equipment | |
CN112508078A (en) | Image multitask multi-label identification method, system, equipment and medium | |
CN116363365A (en) | Image segmentation method based on semi-supervised learning and related equipment | |
CN115205225A (en) | Training method, device and equipment of medical image recognition model and storage medium | |
CN113920382B (en) | Cross-domain image classification method based on class consistency structured learning and related device | |
CN112052409B (en) | Address resolution method, device, equipment and medium | |
CN111950707B (en) | Behavior prediction method, device, equipment and medium based on behavior co-occurrence network | |
CN117611569A (en) | Vehicle fascia detection method, device, equipment and medium based on artificial intelligence | |
CN117671254A (en) | Image segmentation method and device | |
CN116543460A (en) | Space-time action recognition method based on artificial intelligence and related equipment | |
CN116503608A (en) | Data distillation method based on artificial intelligence and related equipment | |
CN116416632A (en) | Automatic file archiving method based on artificial intelligence and related equipment | |
CN116229547A (en) | Micro-expression recognition method, device, equipment and storage medium based on artificial intelligence | |
CN112102205B (en) | Image deblurring method and device, electronic equipment and storage medium | |
CN114972761B (en) | Vehicle part segmentation method based on artificial intelligence and related equipment | |
CN114943865B (en) | Target detection sample optimization method based on artificial intelligence and related equipment | |
CN115223113B (en) | Training sample set cleaning method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||