WO2023207360A1 - Image segmentation method, apparatus, electronic device, and storage medium

Image segmentation method, apparatus, electronic device, and storage medium

Info

Publication number
WO2023207360A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
segmentation
model
normal vector
loss
Prior art date
Application number
PCT/CN2023/080694
Other languages
English (en)
French (fr)
Inventor
朱渊略
Original Assignee
北京字跳网络技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2023207360A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • Embodiments of the present disclosure relate to image processing technology, for example, to an image segmentation method, device, electronic device, and storage medium.
  • Image segmentation can currently be achieved through deep learning algorithms based on convolutional neural networks, or traditional algorithms based on edge detection and plane estimation information.
  • the present disclosure provides an image segmentation method, device, electronic equipment and storage medium to improve image segmentation accuracy and stability.
  • an embodiment of the present disclosure provides an image segmentation method, which includes:
  • obtaining an image to be segmented; determining a preliminary segmentation image and a target normal vector image corresponding to the image to be segmented; and performing image fusion on the preliminary segmentation image and the target normal vector image to obtain a target segmentation image.
  • embodiments of the present disclosure also provide an image segmentation device, which includes:
  • an acquisition module configured to obtain the image to be segmented;
  • a processing module configured to determine a preliminary segmentation image and a target normal vector image corresponding to the image to be segmented
  • a fusion module configured to perform image fusion on the preliminary segmentation image and the target normal vector image to obtain a target segmentation image.
  • embodiments of the present disclosure also provide an electronic device, where the electronic device includes:
  • a processor and a storage device storing a program; when the program is executed by the processor, the processor implements the image segmentation method described in any one of the embodiments of the present disclosure.
  • embodiments of the disclosure further provide a storage medium containing computer-executable instructions that, when executed by a computer processor, are used to perform the image segmentation method described in any one of the embodiments of the disclosure.
  • Figure 1 is a schematic flowchart of an image segmentation method provided by an embodiment of the present disclosure
  • Figure 2 is a schematic flowchart of another image segmentation method provided by an embodiment of the present disclosure.
  • FIG. 3 is a schematic flowchart of another image segmentation method provided by an embodiment of the present disclosure.
  • Figure 4 is a schematic flowchart of another image segmentation method provided by an embodiment of the present disclosure.
  • Figure 5 is a schematic structural diagram of an image segmentation device provided by an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • the term “include” and its variations are open-ended, i.e., “including but not limited to.”
  • the term “based on” means “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; and the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • a prompt message is sent to the user to clearly remind the user that the operation requested will require the acquisition and use of the user's personal information. Therefore, users can autonomously choose whether to provide personal information to software or hardware such as electronic devices, applications, servers or storage media that perform the operations of the technical solution of the present disclosure based on the prompt information.
  • the method of sending prompt information to the user may be, for example, a pop-up window, and the prompt information may be presented in the form of text in the pop-up window.
  • the pop-up window can also contain a selection control for the user to choose “agree” or “disagree” to provide personal information to the electronic device.
  • Figure 1 is a schematic flowchart of an image segmentation method provided by an embodiment of the present disclosure.
  • the embodiment of the present disclosure is suitable for performing image segmentation on a predetermined part to be segmented in an image.
  • This method can be executed by an image segmentation device.
  • the device can be implemented in the form of software and/or hardware, optionally in electronic equipment, and the electronic equipment can be a mobile terminal, a personal computer (Personal Computer, PC), a server, etc.
  • the method includes:
  • the image to be segmented may be an image with a part to be segmented, and the part to be segmented may be a part that needs to be segmented.
  • the part to be segmented may be a floor, a wall, a ceiling, etc.
  • the image to be segmented can be obtained based on the shooting device, or the image to be segmented can be obtained by user uploading or downloading.
  • the method of obtaining the image to be segmented can be set according to the actual situation.
  • the image to be segmented may also be a video frame to be segmented in a video, for example, each frame or some of the frames in the video.
  • obtaining the image to be segmented may include obtaining a target video frame in the target video, and using the target video frame as the image to be segmented.
  • obtaining the target video frame in the target video may include: obtaining video frames in the target video frame by frame as target video frames; obtaining a video frame in the target video every preset number of video frames as the target video frame; obtaining a video frame in the target video at a preset time interval as the target video frame; or obtaining a video frame containing the target object in the target video as the target video frame.
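  • For illustration only, the following Python sketch (assuming OpenCV is available; the function and parameter names are hypothetical, not from the patent) shows the fixed-interval sampling strategy described above; frame_step=1 corresponds to frame-by-frame acquisition.

    ```python
    import cv2  # OpenCV, assumed available

    def sample_target_frames(video_path: str, frame_step: int = 5):
        """Yield every frame_step-th frame of the target video as a target
        video frame (i.e., an image to be segmented)."""
        cap = cv2.VideoCapture(video_path)
        index = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break  # end of video
            if index % frame_step == 0:
                yield frame
            index += 1
        cap.release()
    ```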
  • the preliminary segmented image may be a segmented image obtained by preliminary segmentation of the image to be segmented, and the preliminary segmented image includes a roughly segmented portion to be segmented.
  • the preliminary segmented image may be an image obtained by segmenting the image to be segmented based on a segmentation model, or may be an image obtained by calculating the image to be segmented based on an image segmentation algorithm.
  • the target normal vector image may be an image obtained by extracting the normal vector of the image to be segmented.
  • the target normal vector image may be an image obtained based on a normal vector extraction model or a normal vector calculation method. The normal vector extraction model can be trained based on the sample segmentation image and the sample normal vector image corresponding to the sample segmentation image.
  • the pixel value of each pixel in the normal vector image represents the normal vector of the corresponding pixel in the image to be segmented.
  • the normal vector may be a value obtained based on normal-assisted stereoscopic depth estimation, or may be obtained based on other normal vector determination methods.
  • a preliminary segmentation process can be performed on the image to be segmented to obtain a preliminary segmented image, and a normal vector extraction process is performed on the image to be segmented to obtain a target normal vector image; alternatively, an overall image processing model can be pre-trained and used to extract the preliminary segmented image and the normal vector image.
  • the image to be segmented is processed through the overall image processing model to obtain the preliminary segmented image and the target normal vector image.
  • the target segmented image may be the segmented image of the part to be segmented that is finally obtained after segmentation.
  • the corresponding weight of each pixel can be determined based on the obtained target normal vector image; then the pixel value of each pixel in the preliminary segmentation image is weighted according to its corresponding weight, and the resulting weighted segmented image is the target segmented image.
  • the following steps can be used to fuse the preliminary segmentation image and the target normal vector image to obtain the target segmentation image:
  • Step 1: For each pixel in the preliminary segmented image, determine the prediction weight of the pixel based on the predicted pixel value of the pixel in the target normal vector image and a preset segmentation threshold.
  • the preset segmentation threshold may be a threshold used to identify the normal vector of the part to be segmented.
  • the predicted pixel value may be the value at the corresponding pixel point in the preset channel in the target normal vector image.
  • the prediction weight can be calculated from the predicted pixel value and the preset segmentation threshold, and is used to subsequently weight the pixel value of each pixel in the initial segmentation image.
  • the prediction weight can be the quotient of the predicted pixel value and the preset segmentation threshold.
  • the predicted pixel value corresponding to each pixel in the target normal vector image is determined.
  • the prediction weight of each pixel is calculated based on the predicted pixel value of each pixel and the preset segmentation threshold, so as to subsequently weight the pixel value of each pixel in the initial segmented image.
  • Step 2: Weight the pixel value of the pixel in the preliminary segmentation image based on the prediction weight to obtain the target pixel value of the pixel.
  • the target pixel value may be the product of the pixel value in the preliminary segmentation image and the prediction weight.
  • the product of the pixel value of the pixel in the preliminary segmented image and the prediction weight is used as the target pixel value of the pixel.
  • Step 3: Determine the target segmented image based on the target pixel value of each pixel in the preliminary segmented image.
  • the target pixel value of each pixel in the preliminary segmented image is integrated according to the position of each pixel to obtain the target segmented image.
  • For example, when the part to be segmented is the ground, the target normal vector image is a three-channel image; in it, the value for the ground is usually 255 and the value for the ceiling is usually 0. The preliminary segmented image of the floor can therefore be refined with this information, reducing the portion of the ceiling that is mistakenly segmented as floor.
  • That is, refined_mask = ground_mask * pred_normal / threshold, where refined_mask is the target segmentation image before normalization, ground_mask is the preliminary segmentation image, pred_normal is the second-channel value of the target normal vector image, and threshold is the preset segmentation threshold.
  • refined_mask can then be normalized, limiting its values to [0, 255], so that the pixel value of each pixel in the target segmentation image falls within the range 0-255.
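  • A minimal NumPy sketch of this fusion step follows. The formula is the one inferred above (quotient weight, then product, then limiting to [0, 255]); the default threshold value is illustrative only, not specified by the patent.

    ```python
    import numpy as np

    def fuse_segmentation(ground_mask: np.ndarray,
                          pred_normal: np.ndarray,
                          threshold: float = 128.0) -> np.ndarray:
        """Hedged reconstruction of the fusion step: weight each pixel of
        the preliminary segmentation (ground_mask) by pred_normal /
        threshold, then limit the result to [0, 255]."""
        weight = pred_normal.astype(np.float32) / threshold     # prediction weight
        refined_mask = ground_mask.astype(np.float32) * weight  # target pixel values
        return np.clip(refined_mask, 0, 255).astype(np.uint8)   # normalize to [0, 255]
    ```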
  • the part to be segmented in the image to be segmented is related to the shooting angle information of the image capturing device; for example, if the part to be segmented is the ground and the shooting angle is an elevation of 90 degrees, it can be considered that there is no ground in the image to be segmented. Therefore, an image adjustment process can be performed after image fusion of the preliminary segmentation image and the target normal vector image:
  • the image capturing device may be a device used to capture the image to be segmented, such as a smartphone, video camera, or digital camera.
  • the shooting angle information may be the elevation angle information of the image shooting device when shooting the image to be segmented.
  • the shooting angle information may be measured based on an inertial measurement unit (Inertial Measurement Unit, IMU).
  • the shooting angle information when shooting the image to be segmented can be obtained based on the IMU in the image shooting device. According to the shooting angle information, it is judged whether the target segmentation image contains the part to be segmented, and the target segmentation image is processed according to the judgment result to obtain the final target segmentation image. If it is determined based on the shooting angle information that the target segmented image does not contain the part to be segmented, each pixel in the target segmented image can be set to zero, and the zeroed image is used as the final target segmented image; if it is determined based on the shooting angle information that the target segmentation image contains the part to be segmented, the target segmentation image can be used as the final target segmentation image.
  • For example, the part to be segmented in the image to be segmented is the ground. If the shooting angle information is 30 degrees to 90 degrees, each pixel in the target segmented image is set to zero; if the shooting angle information is any other angle, the pixel value of each pixel in the target segmentation image is kept.
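  • A minimal sketch of this angle-based adjustment, assuming the IMU reports an elevation angle in degrees and using the 30-90 degree range from the example above:

    ```python
    import numpy as np

    def adjust_by_elevation(target_mask: np.ndarray,
                            elevation_deg: float) -> np.ndarray:
        """Zero the ground segmentation when the elevation angle indicates
        the ground cannot be in view (30 to 90 degrees in the example);
        otherwise keep the pixel values unchanged."""
        if 30.0 <= elevation_deg <= 90.0:
            return np.zeros_like(target_mask)  # no ground part in the image
        return target_mask
    ```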
  • the technical solution of the embodiment of the present disclosure obtains the image to be segmented, determines the preliminary segmentation image and the target normal vector image corresponding to the image to be segmented, and performs image fusion on the preliminary segmentation image and the target normal vector image to obtain the target segmentation image. This solves the problem of poor image segmentation accuracy and stability and achieves the technical effect of improving image segmentation accuracy and stability.
  • FIG. 2 is a schematic flowchart of another image segmentation method provided by an embodiment of the present disclosure. Based on the foregoing technical solutions, an implementation of determining the preliminary segmentation image and the target normal vector image corresponding to the image to be segmented is elaborated in this technical solution. Explanations of terms that are the same as or correspond to those in the above technical solutions are not repeated here.
  • the method includes:
  • the image segmentation model is trained based on the sample segmentation image, the segmentation annotation image corresponding to the sample segmentation image, and the sample normal vector image corresponding to the sample segmentation image.
  • the image segmentation model is used to process the image to obtain the preliminarily extracted segmentation image and normal vector image.
  • the image to be segmented is input into a pre-trained image segmentation model, the image to be segmented is processed through the pre-trained image segmentation model, and the output result of the image segmentation model is determined as the preliminary segmented image and target normal vector image corresponding to the image to be segmented.
  • Before using the pre-trained image segmentation model, the image segmentation model can be trained, for example, through the following steps:
  • Step 1: Use the sample segmentation image as the input image of a pre-established large model, use the segmentation annotation image and the sample normal vector image corresponding to the sample segmentation image as the expected output image of the large model, and train the large model to obtain a teacher model.
  • the pre-established large model can be an initial model used for detailed processing to obtain segmentation images and normal vector images, or a model with default model structure and model parameters.
  • the pre-built large model can be DeepLab v3 (a semantic segmentation network), etc.
  • the sample segmentation image may be a sample image including the portion to be segmented.
  • the segmented annotated image may be an image in which the parts to be segmented are annotated.
  • the sample normal vector image may be an image composed of the normal vector of each pixel in the sample segmentation image.
  • the teacher model can be a model obtained by training a pre-established large model.
  • a large model is established in advance, the sample segmentation image is used as the input image of the large model, and the segmentation annotation image and the sample normal vector image corresponding to the sample segmentation image are used as the expected output image of the large model.
  • the large model can be trained based on the input image and the expected output image, and the trained large model is used as the teacher model.
  • the pre-established large model can be trained in the following ways to obtain the teacher model:
  • the large model segmentation image may be a segmentation image output by the large model.
  • the large model normal vector image may be a normal vector image output by the large model.
  • the sample segmentation image is input into the pre-established large model, the sample segmentation image is processed through the large model, the segmentation image in the output is used as the large model segmentation image, and the normal vector image in the output is used as the large model normal vector image.
  • the large model segmentation loss may be the loss value between the large model segmentation image calculated based on a preset loss function and the segmentation annotation image corresponding to the sample segmentation image.
  • the large model normal vector loss may be a loss value between the large model normal vector image and the sample normal vector image calculated based on a preset loss function.
  • the two loss functions can be the same or different, and the loss function can be selected during actual use.
  • the large model segmentation loss can be calculated based on any of the following loss functions:
  • Method 1: Calculate the large model segmentation loss between the large model segmentation image and the segmentation annotation image corresponding to the sample segmentation image according to the binary cross-entropy loss function.
  • the loss value between the large model segmentation image and the segmentation annotation image corresponding to the sample segmentation image is calculated, which is the large model segmentation loss.
  • Method 2: Calculate the large model segmentation loss between the large model segmentation image and the segmentation annotation image corresponding to the sample segmentation image based on the binary cross-entropy loss function and the regional mutual information loss function.
  • the first loss value between the large model segmentation image and the segmentation annotation image corresponding to the sample segmentation image is calculated based on the binary cross-entropy loss (BCE Loss), and the second loss value between the large model segmentation image and the segmentation annotation image corresponding to the sample segmentation image is calculated based on the regional mutual information loss (RMI Loss); the large model segmentation loss can be obtained by processing the first loss value and the second loss value.
  • the processing method can be addition, weighting, etc., and the processing method can be determined according to the actual situation.
  • the large model normal vector loss can be calculated based on the following loss function:
  • the large model normal vector loss between the large model normal vector image and the sample normal vector image corresponding to the sample segmentation image is calculated according to the mean square error loss function.
  • the loss value between the large model normal vector image and the sample normal vector image corresponding to the sample segmentation image is calculated based on the mean square error loss function (Mean Square Error Loss, MSE Loss), which is the large model normal vector loss.
  • the model parameters of the large model are adjusted according to the large model segmentation loss and the large model normal vector loss.
  • When the loss functions of the large model have reached convergence, for example, when the large model segmentation loss and the large model normal vector loss are both less than a preset error, or the error change trend becomes stable, or the current number of iterations reaches the preset number, it can be considered that the effect of the large model meets the usage needs.
  • the model training is stopped and the current large model is used as the teacher model.
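  • A hedged PyTorch sketch of the teacher objective described in Step 1: a BCE segmentation loss (Method 1; an RMI term may be added per Method 2) plus an MSE normal vector loss. Combining the two losses by simple addition is an assumption; the patent only states that both losses are used to adjust the model parameters.

    ```python
    import torch
    import torch.nn.functional as F

    def large_model_loss(pred_seg: torch.Tensor, pred_normal: torch.Tensor,
                         gt_seg: torch.Tensor, gt_normal: torch.Tensor) -> torch.Tensor:
        """Teacher (large model) training loss: segmentation loss plus
        normal vector loss, as described above. A sketch, not the
        patent's exact formulation."""
        # large model segmentation loss (binary cross-entropy, Method 1)
        seg_loss = F.binary_cross_entropy_with_logits(pred_seg, gt_seg)
        # large model normal vector loss (mean square error, MSE Loss)
        normal_loss = F.mse_loss(pred_normal, gt_normal)
        return seg_loss + normal_loss  # addition assumed; weighting is also possible
    ```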
  • Step 2: Use the sample segmentation image as the input image of a pre-established small model, use the large model segmentation image and the large model normal vector image output by the teacher model corresponding to the sample segmentation image as the expected output of the small model, and train the small model to obtain the image segmentation model.
  • the pre-established small model can be the initial model used for rough processing to obtain segmentation images and normal vector images, or it can be a model with the default model structure and model parameters.
  • the structure of the small model is simpler than that of the large model.
  • the pre-built small model can be GhostNet (a lightweight neural network), etc.
  • a small model is established in advance, the sample segmentation image is used as the input image of the small model, and the large model segmentation image and the large model normal vector image output by the teacher model corresponding to the sample segmentation image are used as the expected output of the small model. image.
  • the small model can be trained based on the input image and the expected output image, and the trained small model is used as an image segmentation model.
  • the pre-established small model can be trained in the following way to obtain the image segmentation model:
  • the small model segmentation image may be a segmentation image output by the small model.
  • the small model normal vector image may be a normal vector image output by the small model.
  • the sample segmentation image is input into the pre-established small model, the sample segmentation image is processed through the small model, the segmentation image in the output is used as the small model segmentation image, and the normal vector image in the output is used as the small model normal vector image.
  • the small model segmentation output loss can be determined from two loss values calculated based on preset loss functions: the loss value between the small model segmentation image and the segmentation annotation image corresponding to the sample segmentation image, and the loss value between the small model segmentation image and the large model segmentation image output by the teacher model.
  • the two loss functions can be the same or different, and the loss function can be selected during actual use.
  • the loss value between the small model segmented image and the segmented annotated image, and the loss value between the small model segmented image and the large model segmented image are respectively calculated. After obtaining the two loss values, the small model segmentation output loss is comprehensively determined.
  • the small model segmentation output loss can be calculated based on the following method:
  • the small model first segmentation loss between the small model segmentation image and the segmentation annotation image of the sample segmentation image is calculated.
  • the first segmentation loss of the small model may be the loss value between the small model segmentation image of the sample segmentation image and the segmentation annotation image.
  • the loss value between the small model segmentation image and the segmentation annotation image corresponding to the sample segmentation image is calculated, which is the first segmentation loss.
  • calculate the first loss value between the small model segmentation image and the segmentation annotation image corresponding to the sample segmentation image based on BCE Loss, and calculate the second loss value between the small model segmentation image and the segmentation annotation image corresponding to the sample segmentation image based on RMI Loss.
  • the first segmentation loss of the small model can be obtained by processing based on the first loss value and the second loss value.
  • the processing method can be addition, weighting, etc., and the processing method can be determined according to the actual situation.
  • the small model second segmentation loss between the small model segmented image and the large model segmented image output by the teacher model is calculated according to the relative entropy loss function.
  • the second segmentation loss of the small model may be the loss value between the segmented image of the small model and the segmented image of the large model output by the teacher model.
  • the loss value between the small model segmentation image and the large model segmentation image output by the teacher model is calculated based on the relative entropy loss function (Kullback-Leibler Divergence Loss, KL Loss), which is the second segmentation loss of the small model.
  • the small model segmentation output loss is determined based on the first segmentation loss of the small model and the second segmentation loss of the small model.
  • the processing method can be addition, weighting, etc., and the processing method can be determined according to the actual situation.
  • the loss value between the small model normal vector image and the sample normal vector image, and the loss value between the small model normal vector image and the large model normal vector image are respectively calculated. After obtaining the two loss values, the small model normal vector output loss is comprehensively determined.
  • the small model normal vector output loss can be calculated based on the following method:
  • the small model first normal vector loss between the small model normal vector image of the sample segmentation image and the sample normal vector image is calculated according to the mean square error loss function.
  • the small model first normal vector loss may be the loss value between the small model normal vector image and the sample normal vector image.
  • the loss value between the small model normal vector image and the sample normal vector image of the sample segmentation image is calculated, which is the first normal vector loss of the small model.
  • the small model second normal vector loss between the small model normal vector image and the teacher model output large model normal vector image is calculated according to the relative entropy loss function.
  • the second normal vector loss of the small model may be the loss value between the normal vector image of the small model and the normal vector image of the large model output by the teacher model.
  • the loss value between the small model normal vector image and the large model normal vector image output by the teacher model is calculated, which is the second normal vector loss of the small model.
  • the small model normal vector output loss is determined based on the small model first normal vector loss and the small model second normal vector loss.
  • the small model normal vector output loss can be obtained by processing the small model first normal vector loss and the small model second normal vector loss.
  • the processing method can be addition, weighting, etc., and the processing method can be determined according to the actual situation.
  • the model parameters of the small model are adjusted according to the small model segmentation output loss and the small model normal vector output loss.
  • When each loss function of the small model reaches convergence, for example, when the small model segmentation output loss and the small model normal vector output loss are both less than a preset error, or the error trend tends to be stable, or the current number of iterations reaches the preset number, it can be considered that the effect of the small model meets the usage needs.
  • At this point, the model training is stopped and the current small model is used as the image segmentation model.
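  • A hedged sketch of the student losses described above: the segmentation output loss combines a BCE term against the segmentation annotation with a relative entropy (KL Loss) term against the teacher's segmentation, and the normal vector output loss combines an MSE term against the sample normal vector image with a KL term against the teacher's normal vectors. Treating the outputs as probability maps and adding the terms are assumptions; the text allows addition or weighting.

    ```python
    import torch
    import torch.nn.functional as F

    def small_model_losses(s_seg, s_normal, gt_seg, gt_normal, t_seg, t_normal):
        """Return (segmentation output loss, normal vector output loss)
        for the small model, per the distillation scheme sketched above."""
        # small model first segmentation loss: vs. segmentation annotation (BCE)
        seg_first = F.binary_cross_entropy_with_logits(s_seg, gt_seg)
        # small model second segmentation loss: vs. teacher output (KL Loss)
        seg_second = F.kl_div(F.logsigmoid(s_seg), torch.sigmoid(t_seg),
                              reduction="batchmean")
        seg_output_loss = seg_first + seg_second

        # small model first normal vector loss: vs. sample normal vector image (MSE)
        nv_first = F.mse_loss(s_normal, gt_normal)
        # small model second normal vector loss: vs. teacher output (KL Loss)
        nv_second = F.kl_div(F.log_softmax(s_normal, dim=1),
                             F.softmax(t_normal, dim=1), reduction="batchmean")
        nv_output_loss = nv_first + nv_second
        return seg_output_loss, nv_output_loss
    ```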
  • the technical solution of the embodiment of the present disclosure obtains the image to be segmented and inputs the image to be segmented into a pre-trained image segmentation model, so as to obtain the preliminary segmentation image and the target normal vector image corresponding to the image to be segmented through the image segmentation model.
  • Figure 3 is a schematic flow chart of another image segmentation method provided by an embodiment of the present disclosure.
  • a segmented image discriminator is added for discriminating between small model segmented images and large model segmented images.
  • For specific implementation methods please refer to the detailed description of this technical solution. The explanations of terms that are the same as or corresponding to each of the above technical solutions will not be repeated here.
  • the method includes:
  • S320 Input the sample segmentation image into the pre-established small model to obtain the small model segmentation image and the small model normal vector image.
  • the segmented image discriminator is trained using the large model segmented image corresponding to the sample segmented image output by the teacher model as the real sample, and the small model segmented image output by the small model as the false sample.
  • the segmentation discrimination result may be the result output by the segmentation image discriminator.
  • the expected discriminant result may be the result of the expected output of the segmented image discriminator. Normally, the expected discriminant result is that the image segmented by the large model and the image segmented by the small model cannot be distinguished, that is, the image segmented by the small model is recognized as a real sample.
  • the segmentation discrimination loss can be a loss value calculated based on a preset loss function in the segmentation image discriminator.
  • the preset loss function in the segmentation image discriminator can be one or more of L1 loss (absolute error), L2 loss (squared error), cross-entropy error, and KL divergence (Kullback-Leibler divergence, a metric used to measure the similarity of two probability distributions).
  • the small model segmented image output by the small model is input into a pre-trained segmented image discriminator to obtain a segmentation discrimination result.
  • the segmentation discrimination loss of the segmented image discriminator can be calculated based on the segmentation discrimination result and the expected discrimination result of the segmented image discriminator.
  • the small model segmentation image output by the small model is regarded as fake, and the large model segmentation image output by the large model is regarded as real for adversarial training.
  • For example, let the segmented image discriminator be D, the small model be G_s, and the large model be G_t; let x be the sample segmented image input to both the large model and the small model, and let MSE_loss(a, b) = (a - b)^2. The goal is for the segmented image generated by the small model to make the output of the segmented image discriminator be 1, i.e., to minimize MSE_loss(D(G_s(x)), 1), so as to pass itself off as real.
  • the segmented image discriminator also improves its ability to distinguish authenticity through training. As the number of training iterations increases, the small model and the segmented image discriminator learn while playing against each other and will eventually reach an equilibrium point: the small model can generate data very close to the large model's segmented image, the segmented image discriminator can no longer judge authenticity, and its final output is 0.5.
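  • With MSE_loss(a, b) = (a - b)^2 as defined above, the adversarial objectives can be sketched as follows (an LSGAN-style reading; the patent does not spell out the exact form):

    ```python
    import torch
    import torch.nn.functional as F

    def adversarial_losses(D, student_seg: torch.Tensor,
                           teacher_seg: torch.Tensor):
        """Return (generator loss, discriminator loss). The small model
        tries to drive D's output on its segmented image toward 1
        (passing as real); D is trained to output 1 for the teacher's
        image (real) and 0 for the student's image (fake)."""
        # generator side: MSE_loss(D(G_s(x)), 1)
        pred_fake = D(student_seg)
        g_loss = F.mse_loss(pred_fake, torch.ones_like(pred_fake))

        # discriminator side: real -> 1, fake -> 0 (inputs detached so only
        # the discriminator is updated by this loss)
        pred_real = D(teacher_seg.detach())
        pred_fake_d = D(student_seg.detach())
        d_loss = (F.mse_loss(pred_real, torch.ones_like(pred_real)) +
                  F.mse_loss(pred_fake_d, torch.zeros_like(pred_fake_d)))
        return g_loss, d_loss
    ```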
  • S360 Adjust the model segmentation parameters of the small model according to the small model segmentation output loss and the segmentation discrimination loss, and adjust the model normal vector parameters of the small model according to the small model normal vector output loss to obtain an image segmentation model.
  • the model segmentation parameters may be model parameters used in the small model to generate the segmented image part of the small model.
  • the model normal vector parameter may be a model parameter in the small model used to generate the normal vector image portion of the small model.
  • the model segmentation parameters of the small model are adjusted based on the small model segmentation output loss and the segmentation discrimination loss
  • the model normal vector parameters of the small model are adjusted based on the small model normal vector output loss.
  • S390 Perform image fusion on the preliminary segmentation image and the target normal vector image to obtain the target segmentation image.
  • the technical solution of the embodiment of the present disclosure uses the sample segmentation image as the input image of the pre-established large model, uses the segmentation annotation image and the sample normal vector image corresponding to the sample segmentation image as the expected output image of the large model, and trains the large model to obtain the teacher model. The sample segmentation image is input into the pre-established small model to obtain the small model segmentation image and the small model normal vector image. The small model segmentation output loss is calculated based on the small model segmentation image, the segmentation annotation image, and the large model segmentation image output by the teacher model, and the small model normal vector output loss is calculated based on the small model normal vector image, the sample normal vector image, and the large model normal vector image output by the teacher model. The small model segmented image is input into the pre-trained segmented image discriminator to obtain the segmentation discrimination result, and the segmentation discrimination loss is determined based on the segmentation discrimination result and the expected discrimination result. The model segmentation parameters of the small model are adjusted based on the small model segmentation output loss and the segmentation discrimination loss, and the model normal vector parameters of the small model are adjusted based on the small model normal vector output loss, to obtain the image segmentation model, improving the accuracy and stability of the image segmentation model through multiple loss calculations.
  • The image to be segmented is obtained and input into the pre-trained image segmentation model, the preliminary segmentation image and the target normal vector image corresponding to the image to be segmented are obtained, and the preliminary segmentation image and the target normal vector image are image fused to obtain the target segmentation image. This solves the problems of high complexity, poor accuracy, and poor stability of the image segmentation model, and achieves the technical effect of improving the accuracy and stability of model segmentation while reducing model complexity.
  • Figure 4 is a schematic flow chart of another image segmentation method provided by an embodiment of the present disclosure.
  • a normal vector image discriminator is added for discriminating between small model normal vector images and large model normal vector images. For the specific implementation, please refer to the detailed description of this technical solution. Explanations of terms that are the same as or correspond to those in the above technical solutions are not repeated here.
  • the method includes:
  • the normal vector image discriminator uses the large model normal vector image output by the teacher model corresponding to the sample segmentation image as a true sample, and trains the small model normal vector image output by the small model as a false sample.
  • the normal vector discrimination result may be a result output by a normal vector image discriminator.
  • the expected discriminant result may be the result of the expected output of the normal vector image discriminator.
  • the expected discrimination result is that the large model normal vector image and the small model normal vector image cannot be distinguished, that is, the small model normal vector image is recognized as a real sample.
  • the normal vector discrimination loss can be a loss value calculated based on a preset loss function in the normal vector image discriminator.
  • the preset loss function in the normal vector image discriminator can be one or more of L1 loss, L2 loss, cross-entropy error, and KL divergence.
  • the small model normal vector image output by the small model is input into the pre-trained normal vector image discriminator, and the normal vector discrimination result is obtained.
  • the normal vector discrimination loss of the normal vector image discriminator can be calculated based on the normal vector discrimination result and the expected discrimination result of the normal vector image discriminator.
  • S460 Adjust the model segmentation parameters of the small model based on the small model segmentation output loss, and adjust the model normal vector parameters of the small model based on the small model normal vector output loss and normal vector discrimination loss to obtain an image segmentation model.
  • the model segmentation parameters of the small model are adjusted based on the small model segmentation output loss
  • the model normal vector parameters of the small model are adjusted based on the small model normal vector output loss and normal vector discrimination loss.
  • When each loss function of the small model reaches convergence, for example, when the segmentation output loss of the small model is less than a preset error, the normal vector output loss of the small model is less than a preset error, the normal vector discrimination loss is greater than a preset error and its change trend tends to be stable, or the current number of iterations reaches the preset number, it can be considered that the effect of the small model meets the usage requirements.
  • the model training is stopped and the current small model is used as the image segmentation model.
  • S490 Perform image fusion on the preliminary segmentation image and the target normal vector image to obtain the target segmentation image.
  • The pre-trained segmentation image discriminator and the pre-trained normal vector image discriminator can also be used in combination to adjust the model segmentation parameters and model normal vector parameters of the small model.
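  • When both discriminators are used in combination, one training step might route the losses to the two parameter groups the text names (model segmentation parameters and model normal vector parameters). The optimizer split below is an assumption for illustration only:

    ```python
    import torch

    def combined_train_step(opt_seg: torch.optim.Optimizer,
                            opt_normal: torch.optim.Optimizer,
                            seg_output_loss: torch.Tensor,
                            seg_disc_loss: torch.Tensor,
                            nv_output_loss: torch.Tensor,
                            nv_disc_loss: torch.Tensor) -> None:
        """Adjust the segmentation parameters by the segmentation output
        loss plus the segmentation discrimination loss, and the normal
        vector parameters by the normal vector output loss plus the
        normal vector discrimination loss, as described above."""
        seg_total = seg_output_loss + seg_disc_loss
        opt_seg.zero_grad()
        seg_total.backward(retain_graph=True)  # the two heads may share a backbone
        opt_seg.step()

        nv_total = nv_output_loss + nv_disc_loss
        opt_normal.zero_grad()
        nv_total.backward()
        opt_normal.step()
    ```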
  • the technical solution of the embodiment of the present disclosure uses the sample segmentation image as the input image of the pre-established large model, uses the segmentation annotation image and the sample normal vector image corresponding to the sample segmentation image as the expected output image of the large model, and trains the large model to obtain the teacher model. The sample segmentation image is input into the pre-established small model to obtain the small model segmentation image and the small model normal vector image. The small model segmentation output loss is calculated based on the small model segmentation image, the segmentation annotation image, and the large model segmentation image output by the teacher model, and the small model normal vector output loss is calculated based on the small model normal vector image, the sample normal vector image, and the large model normal vector image output by the teacher model. The small model normal vector image is input into the pre-trained normal vector image discriminator to obtain the normal vector discrimination result, and the normal vector discrimination loss is determined based on the normal vector discrimination result and the expected discrimination result. The model segmentation parameters of the small model are adjusted according to the small model segmentation output loss, and the model normal vector parameters of the small model are adjusted according to the small model normal vector output loss and the normal vector discrimination loss, to obtain the image segmentation model, improving the accuracy and stability of the image segmentation model through multiple loss calculations.
  • FIG. 5 is a schematic structural diagram of an image segmentation device provided by an embodiment of the present disclosure. As shown in FIG. 5 , the device includes: an acquisition module 510, a processing module 520, and a fusion module 530.
  • the acquisition module 510 is configured to acquire the image to be segmented; the processing module 520 is configured to determine the preliminary segmentation image and the target normal vector image corresponding to the image to be segmented; the fusion module 530 is configured to combine the preliminary segmentation image with The target normal vector image is image fused to obtain a target segmentation image.
  • the technical solution of the embodiment of the present disclosure obtains the image to be segmented, determines the preliminary segmentation image and the target normal vector image corresponding to the image to be segmented, and performs image fusion on the preliminary segmentation image and the target normal vector image to obtain the target segmentation image. This solves the problem of poor image segmentation accuracy and stability and achieves the technical effect of improving image segmentation accuracy and stability.
  • the processing module 520 is configured to determine the preliminary segmentation image and the target normal vector image corresponding to the image to be segmented in the following manner: input the image to be segmented into a pre-trained image segmentation model, and obtain the preliminary segmentation image and the target normal vector image corresponding to the image to be segmented output by the image segmentation model.
  • the device further includes: a model training module, configured to use the sample segmentation image as the input image of the pre-established large model, use the segmentation annotation image and the sample normal vector image corresponding to the sample segmentation image as the expected output image of the large model, and train the large model to obtain the teacher model; and to use the sample segmentation image as the input image of the pre-established small model, use the large model segmentation image and the large model normal vector image output by the teacher model corresponding to the sample segmentation image as the expected output of the small model, and train the small model to obtain the image segmentation model.
  • the model training module is set to obtain the teacher model in the following ways: input the sample segmentation image into a pre-established large model to obtain the large model segmentation image and the large model normal vector image; calculate the large model segmentation image and The large model segmentation loss between the segmentation annotation images corresponding to the sample segmentation images is calculated, and the large model normal vector loss between the large model normal vector image and the sample normal vector image corresponding to the sample segmentation image is calculated; according to The large model segmentation loss and the large model normal vector loss adjust the model parameters of the large model to obtain the teacher model.
  • the model training module is configured to calculate the large model segmentation loss in the following manner: calculate the large model segmentation between the large model segmentation image and the segmentation annotation image corresponding to the sample segmentation image according to the binary cross-entropy loss function. loss; or, calculate the large model segmentation loss between the large model segmentation image and the segmentation annotation image corresponding to the sample segmentation image according to the binary cross entropy loss function and the regional mutual information loss function.
  • the model training module is configured to calculate the large model normal vector loss in the following manner: calculate the large model normal vector loss between the large model normal vector image and the sample normal vector image corresponding to the sample segmentation image according to the mean square error loss function.
  • the model training module is configured to obtain the image segmentation model in the following manner: input the sample segmentation image into the pre-established small model to obtain the small model segmentation image and the small model normal vector image; calculate the small model segmentation output loss based on the small model segmentation image, the segmentation annotation image of the sample segmentation image, and the large model segmentation image output by the teacher model; calculate the small model normal vector output loss based on the small model normal vector image, the sample normal vector image of the sample segmentation image, and the large model normal vector image output by the teacher model; and adjust the model parameters of the small model according to the small model segmentation output loss and the small model normal vector output loss to obtain the image segmentation model.
  • the device further includes: a first discrimination module, configured to input the small model segmented image output by the small model into a pre-trained segmented image discriminator, obtain a segmentation discrimination result, and determine the segmentation discrimination loss based on the segmentation discrimination result and the expected discrimination result, wherein the segmented image discriminator is trained using the large model segmented image output by the teacher model corresponding to the sample segmented image as a true sample and the small model segmented image output by the small model as a fake sample;
  • the model training module is configured to adjust the model parameters of the small model in the following manner: adjust the model segmentation parameters of the small model according to the small model segmentation output loss and the segmentation discrimination loss; adjust the model normal vector parameters of the small model according to the small model normal vector output loss.
  • the model training module is configured to calculate the small model segmentation output loss in the following manner: calculate the small model first segmentation loss between the small model segmentation image and the segmentation annotation image of the sample segmentation image according to the binary cross-entropy loss function, or according to the binary cross-entropy loss function and the regional mutual information loss function; calculate the small model second segmentation loss between the small model segmented image and the large model segmented image output by the teacher model according to the relative entropy loss function; and determine the small model segmentation output loss based on the small model first segmentation loss and the small model second segmentation loss.
  • the model training module is configured to calculate the small model normal vector output loss in the following manner: calculate the small model first normal vector loss between the small model normal vector image of the sample segmentation image and the sample normal vector image according to the mean square error loss function; calculate the small model second normal vector loss between the small model normal vector image and the large model normal vector image output by the teacher model according to the relative entropy loss function; and determine the small model normal vector output loss based on the small model first normal vector loss and the small model second normal vector loss.
  • the device further includes: a second discrimination module, configured to input the small model normal vector image output by the small model into the pre-trained normal vector image discriminator to obtain the normal vector discrimination result, and determine the normal vector discrimination loss based on the normal vector discrimination result and the expected discrimination result, wherein the normal vector image discriminator is trained using the large model normal vector image output by the teacher model corresponding to the sample segmentation image as a true sample and the small model normal vector image output by the small model as a fake sample; the model training module is also configured to adjust the model parameters of the small model in the following manner: adjust the model segmentation parameters of the small model according to the small model segmentation output loss; adjust the model normal vector parameters of the small model according to the small model normal vector output loss and the normal vector discrimination loss.
  • the fusion module 530 is configured to obtain the target segmented image in the following manner: for each pixel in the preliminary segmented image, determine the prediction weight of the pixel according to the predicted pixel value of the pixel in the target normal vector image and the preset segmentation threshold; weight the pixel value of the pixel in the preliminary segmentation image based on the prediction weight to obtain the target pixel value of the pixel; and determine the target segmentation image based on the target pixel value of each pixel in the preliminary segmentation image.
  • the device further includes: an adjustment module configured to obtain shooting angle information of an image shooting device used to shoot the image to be segmented, and adjust the target segmented image according to the shooting angle information.
  • the image segmentation device provided by the embodiments of the present disclosure can execute the image segmentation method provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the execution method.
  • FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • Terminal devices in embodiments of the present disclosure may include mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (Personal Digital Assistant, PDA), tablet computers (Portable Android Device, PAD), portable multimedia players (Portable Media Player, PMP), and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), as well as fixed terminals such as digital televisions (digital TVs) and desktop computers.
  • the electronic device shown in FIG. 6 is only an example.
  • the electronic device 600 may include a processing device (such as a central processing unit, a graphics processor, etc.) 601.
  • the processing device 601 may perform various appropriate actions and processes according to a program stored in a read-only memory (Read Only Memory, ROM) 602 or a program loaded from a storage device 608 into a random access memory (Random Access Memory, RAM) 603.
  • In the RAM 603, various programs and data required for the operation of the electronic device 600 are also stored.
  • the processing device 601, ROM 602 and RAM 603 are connected to each other via a bus 604.
  • An input/output (I/O) interface 605 is also connected to bus 604.
  • The following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 607 including, for example, a liquid crystal display (Liquid Crystal Display, LCD), a speaker, and a vibrator; storage devices 608 including, for example, a magnetic tape and a hard disk; and a communication device 609.
  • Communication device 609 may allow electronic device 600 to communicate wirelessly or wiredly with other devices to exchange data.
  • While FIG. 6 illustrates the electronic device 600 with various means, it should be understood that implementing or providing all of the illustrated means is not required; more or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product including a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via communication device 609, or from storage device 608, or from ROM 602.
  • when the computer program is executed by the processing device 601, the above-described functions defined in the methods of the embodiments of the present disclosure are performed.
  • the electronic device provided by the embodiments of the present disclosure belongs to the same inventive concept as the image segmentation method provided by the above embodiments; technical details not described in detail in this embodiment can be found in the above embodiments, and this embodiment has the same beneficial effects as the above embodiments.
  • Embodiments of the present disclosure provide a computer storage medium on which a computer program is stored.
  • when the computer program is executed by a processor, the image segmentation method provided in the above embodiments is implemented.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or a combination of the above two.
  • the computer-readable storage medium may be, for example, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
  • Examples of computer-readable storage media may include: an electrical connection having at least one conductor, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (such as electronically programmable read-only memory (EPROM) or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be a tangible medium that contains or stores a program that may be used by or in connection with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take a variety of forms, including electromagnetic signals, optical signals, or a suitable combination of the above.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code contained on a computer-readable medium can be transmitted using any appropriate medium, including: wires, optical cables, radio frequency (Radio Frequency, RF), etc., or appropriate combinations of the above.
  • the client and server can communicate using any currently known or future-developed network protocol, such as the HyperText Transfer Protocol (HTTP), and can be interconnected with digital data communication in any form or medium (e.g., a communications network).
  • Examples of communication networks include local area networks (LAN), wide area networks (WAN), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as networks currently known or developed in the future.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; it may also exist independently without being assembled into the electronic device.
  • the computer-readable medium carries at least one program; when the at least one program is executed by the electronic device, the electronic device: obtains the image to be segmented; determines the preliminary segmented image and the target normal vector image corresponding to the image to be segmented; and performs image fusion on the preliminary segmented image and the target normal vector image to obtain a target segmented image.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (such as through the Internet using an Internet service provider).
  • each block in the flowchart or block diagrams may represent a module, program segment, or portion of code that contains at least one executable instruction for implementing the specified logical function.
  • it should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or by a combination of special-purpose hardware and computer instructions.
  • the units involved in the embodiments of the present disclosure can be implemented in software or hardware, where the name of a unit does not, under certain circumstances, constitute a limitation on the unit itself.
  • exemplary types of hardware logic components include: Field-Programmable Gate Arrays (FPGA), Application Specific Integrated Circuits (ASIC), Application Specific Standard Parts (ASSP), Systems on Chip (SOC), Complex Programmable Logic Devices (CPLD), etc.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or suitable combinations of the foregoing. Examples of machine-readable storage media may include an electrical connection based on at least one wire, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or a suitable combination of the above.
  • Example 1 provides an image segmentation method, which includes: obtaining an image to be segmented; determining a preliminary segmented image and a target normal vector image corresponding to the image to be segmented; and performing image fusion on the preliminary segmented image and the target normal vector image to obtain a target segmented image.
  • Example 2 provides an image segmentation method, in which determining the preliminary segmented image and the target normal vector image corresponding to the image to be segmented includes: inputting the image to be segmented into a pre-trained image segmentation model to obtain the preliminary segmented image and the target normal vector image corresponding to the image to be segmented, wherein the image segmentation model is trained based on a sample segmentation image, the segmentation annotation image corresponding to the sample segmentation image, and the sample normal vector image corresponding to the sample segmentation image.
  • Example 3 provides an image segmentation method that, before the image to be segmented is input into the pre-trained image segmentation model, further includes: using a sample segmentation image as the input image of a pre-established large model, using the segmentation annotation image and the sample normal vector image corresponding to the sample segmentation image as the expected output images of the large model, and training the large model to obtain a teacher model; and using the sample segmentation image as the input image of a pre-established small model, using the large-model segmented image and the large-model normal vector image output by the teacher model for the sample segmentation image as the expected outputs of the small model, and training the small model to obtain the image segmentation model.
  • Example 4 provides an image segmentation method, in which training the large model to obtain the teacher model includes: inputting the sample segmentation image into the pre-established large model to obtain a large-model segmented image and a large-model normal vector image; calculating the large-model segmentation loss between the large-model segmented image and the segmentation annotation image corresponding to the sample segmentation image, and calculating the large-model normal vector loss between the large-model normal vector image and the sample normal vector image corresponding to the sample segmentation image; and adjusting the model parameters of the large model according to the large-model segmentation loss and the large-model normal vector loss to obtain the teacher model.
  • Example 5 provides an image segmentation method, in which calculating the large-model segmentation loss between the large-model segmented image and the segmentation annotation image corresponding to the sample segmentation image includes: calculating the large-model segmentation loss according to a binary cross-entropy loss function; or calculating the large-model segmentation loss according to a binary cross-entropy loss function and a regional mutual information loss function.
  • Example 6 provides an image segmentation method, in which calculating the large-model normal vector loss between the large-model normal vector image and the sample normal vector image corresponding to the sample segmentation image includes: calculating the large-model normal vector loss according to a mean square error loss function.
  • Example 7 provides an image segmentation method, in which training the small model includes: inputting the sample segmentation image into the pre-established small model to obtain a small-model segmented image and a small-model normal vector image; calculating the small-model segmentation output loss based on the small-model segmented image of the sample segmentation image, the segmentation annotation image, and the large-model segmented image output by the teacher model; calculating the small-model normal vector output loss based on the small-model normal vector image of the sample segmentation image, the sample normal vector image, and the large-model normal vector image output by the teacher model; and adjusting the model parameters of the small model according to the small-model segmentation output loss and the small-model normal vector output loss to obtain the image segmentation model.
  • Example 8 provides an image segmentation method, which further includes: inputting the small-model segmented image output by the small model into a pre-trained segmented-image discriminator to obtain a segmentation discrimination result, and determining a segmentation discrimination loss based on the segmentation discrimination result and the expected discrimination result, wherein the segmented-image discriminator is trained using the large-model segmented image output by the teacher model for the sample segmentation image as true samples and the small-model segmented image output by the small model as false samples; accordingly, adjusting the model parameters of the small model according to the small-model segmentation output loss and the small-model normal vector output loss includes: adjusting the model segmentation parameters of the small model according to the small-model segmentation output loss and the segmentation discrimination loss, and adjusting the model normal vector parameters of the small model according to the small-model normal vector output loss.
  • Example 9 provides an image segmentation method, in which calculating the small-model segmentation output loss based on the small-model segmented image, the segmentation annotation image of the sample segmentation image, and the large-model segmented image output by the teacher model includes: calculating the small-model first segmentation loss between the small-model segmented image and the segmentation annotation image of the sample segmentation image according to a binary cross-entropy loss function, or according to a binary cross-entropy loss function and a regional mutual information loss function; calculating the small-model second segmentation loss between the small-model segmented image and the large-model segmented image output by the teacher model according to a relative entropy loss function; and determining the small-model segmentation output loss from the small-model first segmentation loss and the small-model second segmentation loss.
  • Example 10 provides an image segmentation method, in which calculating the small-model normal vector output loss based on the small-model normal vector image, the sample normal vector image of the sample segmentation image, and the large-model normal vector image output by the teacher model includes: calculating the small-model first normal vector loss between the small-model normal vector image and the sample normal vector image according to a mean square error loss function; calculating the small-model second normal vector loss between the small-model normal vector image and the large-model normal vector image output by the teacher model according to a relative entropy loss function; and determining the small-model normal vector output loss from the small-model first normal vector loss and the small-model second normal vector loss.
  • Example 11 provides an image segmentation method, which further includes: inputting the small-model normal vector image output by the small model into a pre-trained normal-vector-image discriminator to obtain a normal vector discrimination result, and determining a normal vector discrimination loss based on the normal vector discrimination result and the expected discrimination result, wherein the normal-vector-image discriminator is trained using the large-model normal vector image output by the teacher model for the sample segmentation image as true samples and the small-model normal vector image output by the small model as false samples; accordingly, adjusting the model parameters of the small model includes: adjusting the model segmentation parameters of the small model according to the small-model segmentation output loss, and adjusting the model normal vector parameters of the small model according to the small-model normal vector output loss and the normal vector discrimination loss.
  • Example 12 provides an image segmentation method, in which the image fusion of the preliminary segmented image and the target normal vector image to obtain the target segmented image includes: for each pixel in the preliminary segmented image, determining a prediction weight for the pixel according to the pixel's predicted pixel value in the target normal vector image and a preset segmentation threshold; weighting the pixel's value in the preliminary segmented image by the prediction weight to obtain the pixel's target pixel value; and determining the target segmented image based on the target pixel values of the pixels in the preliminary segmented image.
  • Example 13 provides an image segmentation method that, after the image fusion of the preliminary segmented image and the target normal vector image, further includes: obtaining shooting angle information of the image capture device used to capture the image to be segmented, and adjusting the target segmented image according to the shooting angle information.
  • Example 14 provides an image segmentation device, which includes: an acquisition module configured to obtain an image to be segmented; a processing module configured to determine a preliminary segmented image and a target normal vector image corresponding to the image to be segmented; and a fusion module configured to perform image fusion on the preliminary segmented image and the target normal vector image to obtain a target segmented image.


Abstract

Embodiments of the present disclosure provide an image segmentation method and device, an electronic device, and a storage medium. The method includes: obtaining an image to be segmented; determining a preliminary segmented image and a target normal vector image corresponding to the image to be segmented; and performing image fusion on the preliminary segmented image and the target normal vector image to obtain a target segmented image.

Description

Image segmentation method and device, electronic device, and storage medium
The present disclosure claims priority to Chinese patent application No. 202210475990.9, filed with the China Patent Office on April 29, 2022, the entire contents of which are incorporated into the present disclosure by reference.
Technical Field
Embodiments of the present disclosure relate to image processing technology, for example, to an image segmentation method and device, an electronic device, and a storage medium.
Background
Image segmentation can currently be achieved by deep learning algorithms based on convolutional neural networks, or by traditional algorithms based on edge detection and plane estimation information.
However, deep learning algorithms based on convolutional neural networks may suffer from locally missed segmentation, leading to poor segmentation results. Traditional algorithms based on edge detection and plane estimation place high demands on the image to be segmented, for example requiring the segmented region to be smooth; they therefore struggle to segment images with blurred or irregular edges in a reasonable way.
Summary
The present disclosure provides an image segmentation method and device, an electronic device, and a storage medium, so as to improve the accuracy and stability of image segmentation.
In a first aspect, embodiments of the present disclosure provide an image segmentation method, which includes:
obtaining an image to be segmented;
determining a preliminary segmented image and a target normal vector image corresponding to the image to be segmented; and
performing image fusion on the preliminary segmented image and the target normal vector image to obtain a target segmented image.
In a second aspect, embodiments of the present disclosure further provide an image segmentation device, which includes:
an acquisition module configured to obtain an image to be segmented;
a processing module configured to determine a preliminary segmented image and a target normal vector image corresponding to the image to be segmented; and
a fusion module configured to perform image fusion on the preliminary segmented image and the target normal vector image to obtain a target segmented image.
In a third aspect, embodiments of the present disclosure further provide an electronic device, which includes:
a processor; and
a storage device configured to store a program,
wherein, when the program is executed by the processor, the processor implements the image segmentation method according to any embodiment of the present disclosure.
In a fourth aspect, embodiments of the present disclosure further provide a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform the image segmentation method according to any embodiment of the present disclosure.
Brief Description of the Drawings
Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.
FIG. 1 is a schematic flowchart of an image segmentation method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of another image segmentation method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic flowchart of another image segmentation method provided by an embodiment of the present disclosure;
FIG. 4 is a schematic flowchart of another image segmentation method provided by an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of an image segmentation device provided by an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described below with reference to the drawings. Although some embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be implemented in various forms and should not be construed as limited to the embodiments set forth here. It should be understood that the drawings and embodiments of the present disclosure are for illustrative purposes only.
It should be understood that the steps described in the method implementations of the present disclosure may be executed in a different order and/or in parallel. In addition, method implementations may include additional steps and/or omit executing the steps shown.
The term "include" and its variants as used herein are open-ended, i.e., "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different devices, modules, or units, and are not used to limit the order or interdependence of the functions performed by these devices, modules, or units.
It should be noted that the modifiers "one" and "multiple" mentioned in the present disclosure are illustrative; those skilled in the art should understand them as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between multiple devices in the implementations of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.
It can be understood that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user shall be informed, in an appropriate manner and in accordance with relevant laws and regulations, of the type, scope of use, and usage scenarios of the personal information involved in the present disclosure, and the user's authorization shall be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly remind the user that the requested operation will require obtaining and using the user's personal information, so that the user can autonomously choose, according to the prompt information, whether to provide personal information to the software or hardware (such as an electronic device, application, server, or storage medium) that performs the operations of the technical solution of the present disclosure.
As an optional implementation, in response to receiving an active request from the user, the prompt information may be sent to the user in the form of, for example, a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may also carry a selection control for the user to choose to "agree" or "disagree" to provide personal information to the electronic device.
It can be understood that the above process of notification and obtaining user authorization is only illustrative and does not limit the implementation of the present disclosure; other methods that satisfy relevant laws and regulations may also be applied to the implementation of the present disclosure.
It can be understood that the data involved in this technical solution (including the data itself and the acquisition or use of the data) shall comply with the requirements of applicable laws, regulations, and related provisions.
FIG. 1 is a schematic flowchart of an image segmentation method provided by an embodiment of the present disclosure. This embodiment is applicable to segmenting a predetermined portion to be segmented out of an image. The method may be executed by an image segmentation device, which may be implemented in software and/or hardware and, optionally, by an electronic device, which may be a mobile terminal, a personal computer (Personal Computer, PC), a server, or the like.
As shown in FIG. 1, the method includes:
S110. Obtain an image to be segmented.
The image to be segmented may be an image containing a portion to be segmented, where the portion to be segmented is the part that needs to be segmented out, for example the ground, a wall, or a ceiling.
Exemplarily, the image to be segmented may be acquired by a capture device, or obtained through user upload or download; the acquisition method may be set according to the actual situation.
Optionally, the image to be segmented may also be a video frame to be segmented in a video, for example every frame or some frames of the video. Exemplarily, obtaining the image to be segmented may include obtaining a target video frame in a target video and using the target video frame as the image to be segmented. For example, obtaining the target video frame may include: obtaining video frames of the target video frame by frame as target video frames; or obtaining a video frame of the target video every preset number of frames as a target video frame; or obtaining a video frame of the target video at preset time intervals as a target video frame; or obtaining video frames of the target video that contain a target object as target video frames.
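As a minimal sketch of the fixed-interval variant above (assuming OpenCV is available; the function name and the default interval are illustrative and not part of the disclosure):

    import cv2

    def sample_frames(video_path, every_n=5):
        # Yield every n-th frame of the target video as an image to be segmented.
        cap = cv2.VideoCapture(video_path)
        index = 0
        while True:
            ok, frame = cap.read()
            if not ok:  # end of video
                break
            if index % every_n == 0:
                yield frame  # a candidate "image to be segmented"
            index += 1
        cap.release()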
S120. Determine a preliminary segmented image and a target normal vector image corresponding to the image to be segmented.
The preliminary segmented image may be a segmented image obtained by preliminarily segmenting the image to be segmented, containing a coarsely segmented version of the portion to be segmented; it may be obtained by processing the image to be segmented with a segmentation model, or computed with an image segmentation algorithm. The target normal vector image may be an image obtained by extracting normal vectors from the image to be segmented; it may be obtained with a normal vector extraction model or a normal vector computation method, where the normal vector extraction model may be trained from sample segmentation images and the sample normal vector images corresponding to those sample segmentation images.
It should be noted that the pixel value of each pixel in a normal vector image represents the normal vector at the corresponding pixel of the image to be segmented. The normal vector may be a value obtained by normal-assisted stereo depth estimation, or obtained by another normal vector determination method.
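The disclosure does not fix how normal vectors are encoded as pixel values, but a convention consistent with the later ground example (an upward normal mapping to 255 in the second channel) is to map unit normals from [-1, 1] to [0, 255]; a minimal NumPy sketch under that assumption:

    import numpy as np

    def encode_normals(normals):
        # Map unit normal vectors of shape (H, W, 3) in [-1, 1] to a
        # 3-channel uint8 image in [0, 255]; an upward normal (0, 1, 0)
        # then maps to 255 in the second channel.
        return np.clip((normals + 1.0) * 0.5 * 255.0, 0, 255).astype(np.uint8)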
Exemplarily, after the image to be segmented is obtained, it may be preliminarily segmented to obtain the preliminary segmented image and subjected to normal vector extraction to obtain the target normal vector image; alternatively, an integrated image processing model may be pre-trained for preliminarily extracting the segmented image and the normal vector image, and the image to be segmented is processed by this integrated model to obtain the preliminary segmented image and the target normal vector image.
S130. Perform image fusion on the preliminary segmented image and the target normal vector image to obtain a target segmented image.
The target segmented image may be the finally obtained segmented image in which the portion to be segmented has been segmented out.
Exemplarily, a weight corresponding to each pixel may be determined from the obtained target normal vector image; the pixel value of each pixel in the preliminary segmented image is then weighted by the corresponding weight, yielding a weighted segmented image, which is the target segmented image.
Optionally, the image fusion of the preliminary segmented image and the target normal vector image to obtain the target segmented image may be performed through the following steps.
Step 1. For each pixel in the preliminary segmented image, determine the pixel's prediction weight according to the pixel's predicted pixel value in the target normal vector image and a preset segmentation threshold.
The preset segmentation threshold may be a threshold used to identify the normal vectors of the portion to be segmented. The predicted pixel value may be the value at the corresponding pixel in a preset channel of the target normal vector image. The prediction weight may be computed from the predicted pixel value and the preset segmentation threshold and is used for the subsequent weighting of the pixel values of the preliminary segmented image; for example, the prediction weight may be the quotient of the predicted pixel value and the preset segmentation threshold.
Exemplarily, for each pixel in the preliminary segmented image, the predicted pixel value corresponding to the pixel in the target normal vector image is determined, and the pixel's prediction weight is computed from its predicted pixel value and the preset segmentation threshold, for the subsequent weighting of the pixel values of the preliminary segmented image.
Step 2. Weight the pixel's value in the preliminary segmented image by the prediction weight to obtain the pixel's target pixel value.
The target pixel value may be the product of the pixel value in the preliminary segmented image and the prediction weight.
Exemplarily, for each pixel in the preliminary segmented image, the product of the pixel's value in the preliminary segmented image and its prediction weight is taken as the pixel's target pixel value.
Step 3. Determine the target segmented image based on the target pixel values of the pixels in the preliminary segmented image.
Exemplarily, assembling the target pixel values of the pixels in the preliminary segmented image according to the pixels' positions yields the target segmented image.
Exemplarily, suppose the portion to be segmented is the ground and the target normal vector image is a three-channel image. In the second channel of the target normal vector image, the ground is usually 255 while the ceiling is usually 0, so this information can be used to process the preliminary ground segmentation and reduce the ceiling regions mistakenly segmented as ground. Here the preset segmentation threshold may be threshold = 140.0, and the target segmented image may be determined by the following formula:
refined_mask = ground_mask * (pred_normal / threshold)
where refined_mask is the target segmented image before normalization, ground_mask is the preliminary segmented image, pred_normal is the second-channel value of the target normal vector image, and threshold is the preset segmentation threshold.
Then refined_mask can be normalized by limiting its values to [0, 255], so that the pixel value of every pixel in the target segmented image falls between 0 and 255.
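A minimal NumPy sketch of this fusion step (the array names follow the formula above; the threshold value is the example value given in this disclosure):

    import numpy as np

    def fuse_masks(ground_mask, normal_img, threshold=140.0):
        # Weight the preliminary mask by the second channel of the predicted
        # normal vector image, then clip the result to [0, 255].
        pred_normal = normal_img[..., 1].astype(np.float32)  # second channel
        refined = ground_mask.astype(np.float32) * (pred_normal / threshold)
        return np.clip(refined, 0.0, 255.0).astype(np.uint8)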
Considering that the portion to be segmented is related to the shooting angle of the image capture device (for example, if the portion to be segmented is the ground and the shooting angle is an elevation of 90 degrees, the image to be segmented can be assumed to contain no ground), the image can be adjusted after the preliminary segmented image and the target normal vector image have been fused:
Obtain shooting angle information of the image capture device used to capture the image to be segmented, and adjust the target segmented image according to the shooting angle information.
The image capture device may be a device used to capture the image to be segmented, such as a smartphone, a video camera, or a digital camera. The shooting angle information may be the elevation angle of the image capture device when capturing the image to be segmented and may be measured by an inertial measurement unit (Inertial Measurement Unit, IMU).
Exemplarily, the shooting angle information at capture time may be obtained from the IMU of the image capture device. Based on the shooting angle information, it is judged whether the target segmented image contains the portion to be segmented, and the target segmented image is processed according to the judgment to obtain the final target segmented image. If the shooting angle information indicates that the target segmented image does not contain the portion to be segmented, every pixel of the target segmented image may be set to zero and the zeroed image used as the final target segmented image; if it indicates that the portion to be segmented is present, the target segmented image may be used directly as the final target segmented image.
Exemplarily, if the portion to be segmented is the ground, then for shooting angles from 30 to 90 degrees every pixel of the target segmented image is set to zero, while for other angles the pixel values of the target segmented image are retained.
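A minimal sketch of this angle-based gate (the 30-90 degree range follows the ground example above; how the IMU pitch is read out is device-specific and assumed here):

    import numpy as np

    def gate_by_pitch(target_mask, pitch_deg):
        # Zero out a ground mask when the camera elevation angle makes
        # ground pixels implausible, following the 30-90 degree example.
        if 30.0 <= pitch_deg <= 90.0:
            return np.zeros_like(target_mask)
        return target_mask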
According to the technical solution of this embodiment of the present disclosure, an image to be segmented is obtained, a preliminary segmented image and a target normal vector image corresponding to the image to be segmented are determined, and the preliminary segmented image and the target normal vector image are fused into a target segmented image. This solves the problem of poor accuracy and stability in image segmentation and achieves the technical effect of improving segmentation accuracy and stability.
FIG. 2 is a schematic flowchart of another image segmentation method provided by an embodiment of the present disclosure. On the basis of the preceding technical solutions, the determination of the preliminary segmented image and the target normal vector image corresponding to the image to be segmented is elaborated in this technical solution. Explanations of terms identical or corresponding to those above are not repeated here.
As shown in FIG. 2, the method includes:
S210. Obtain an image to be segmented.
S220. Input the image to be segmented into a pre-trained image segmentation model to obtain a preliminary segmented image and a target normal vector image corresponding to the image to be segmented.
The image segmentation model is trained based on sample segmentation images, the segmentation annotation images corresponding to the sample segmentation images, and the sample normal vector images corresponding to the sample segmentation images; it is used to process an image to obtain a preliminarily extracted segmented image and normal vector image.
Exemplarily, the image to be segmented is input into the pre-trained image segmentation model, which processes it, and the model's output is determined as the preliminary segmented image and the target normal vector image corresponding to the image to be segmented.
Before the pre-trained image segmentation model is used, it may be trained, for example through the following steps.
Step 1. Use a sample segmentation image as the input image of a pre-established large model, use the segmentation annotation image and the sample normal vector image corresponding to the sample segmentation image as the expected output images of the large model, and train the large model to obtain a teacher model.
The pre-established large model may be an initial model used to produce segmented images and normal vector images through fine-grained processing, for example a model whose structure and parameters are in their default state; it may be, for example, DeepLab v3 (a semantic segmentation network). A sample segmentation image may be a sample image containing a portion to be segmented. A segmentation annotation image may be an image in which the portion to be segmented has been annotated. A sample normal vector image may be an image composed of the normal vectors of the pixels of the sample segmentation image. The teacher model may be the model obtained after training the pre-established large model.
Exemplarily, a large model is established in advance, the sample segmentation image is used as its input image, and the segmentation annotation image and sample normal vector image corresponding to the sample segmentation image are used as its expected output images. The large model can be trained based on the input image and the expected output images, and the trained large model is taken as the teacher model.
Optionally, the pre-established large model may be trained as follows to obtain the teacher model.
1. Input the sample segmentation image into the pre-established large model to obtain a large-model segmented image and a large-model normal vector image.
The large-model segmented image may be the segmented image output by the large model; the large-model normal vector image may be the normal vector image output by the large model.
Exemplarily, the sample segmentation image is input into the pre-established large model, which processes it; the segmented image among the outputs is taken as the large-model segmented image, and the normal vector image among the outputs is taken as the large-model normal vector image.
2. Calculate the large-model segmentation loss between the large-model segmented image and the segmentation annotation image corresponding to the sample segmentation image, and calculate the large-model normal vector loss between the large-model normal vector image and the sample normal vector image corresponding to the sample segmentation image.
The large-model segmentation loss may be the loss value between the large-model segmented image and the segmentation annotation image corresponding to the sample segmentation image, computed with a preset loss function; the large-model normal vector loss may be the loss value between the large-model normal vector image and the sample normal vector image, computed with a preset loss function. The two loss functions may be the same or different and may be chosen in actual use.
Optionally, the large-model segmentation loss may be computed with either of the following loss functions.
Option 1. Compute the large-model segmentation loss between the large-model segmented image and the segmentation annotation image corresponding to the sample segmentation image according to a binary cross-entropy loss function.
Exemplarily, the loss value between the large-model segmented image and the segmentation annotation image computed with the binary cross-entropy loss function (Binary Cross Entropy Loss, BCE Loss) is the large-model segmentation loss.
Option 2. Compute the large-model segmentation loss between the large-model segmented image and the segmentation annotation image corresponding to the sample segmentation image according to a binary cross-entropy loss function and a regional mutual information loss function.
Exemplarily, a first loss value between the large-model segmented image and the segmentation annotation image is computed with BCE Loss, a second loss value between them is computed with the regional mutual information loss function (Regional Mutual Information Loss, RMI Loss), and the large-model segmentation loss is obtained by processing the first and second loss values, for example by summation or weighting, as determined by the actual situation.
Optionally, the large-model normal vector loss may be computed with the following loss function.
Compute the large-model normal vector loss between the large-model normal vector image and the sample normal vector image corresponding to the sample segmentation image according to a mean square error loss function.
Exemplarily, the loss value between the large-model normal vector image and the sample normal vector image computed with the mean square error loss function (Mean Square Error Loss, MSE Loss) is the large-model normal vector loss.
3. Adjust the model parameters of the large model according to the large-model segmentation loss and the large-model normal vector loss to obtain the teacher model.
Exemplarily, the model parameters of the large model are adjusted according to the large-model segmentation loss and the large-model normal vector loss. When the loss functions of the large model have converged, for example when both the large-model segmentation loss and the large-model normal vector loss are below a preset error or the error trend has stabilized, or when the current number of iterations reaches a preset number, the large model can be considered good enough for use; at this point training is stopped, and the current large model is taken as the teacher model.
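A minimal PyTorch sketch of the teacher objective above (it assumes the large model has a segmentation head and a normal vector head; rmi_loss stands in for a regional mutual information implementation, which this disclosure names but does not define, and the equal weighting of the terms is an assumption):

    import torch.nn.functional as F

    def teacher_loss(seg_logits, normal_pred, seg_gt, normal_gt, rmi_loss=None):
        # BCE (optionally plus RMI) on the segmentation output, MSE on the normals.
        loss_seg = F.binary_cross_entropy_with_logits(seg_logits, seg_gt)
        if rmi_loss is not None:  # hypothetical RMI Loss implementation
            loss_seg = loss_seg + rmi_loss(seg_logits, seg_gt)
        loss_normal = F.mse_loss(normal_pred, normal_gt)
        return loss_seg + loss_normal  # assumed equal weighting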
Step 2. Use the sample segmentation image as the input image of a pre-established small model, use the large-model segmented image and the large-model normal vector image output by the teacher model for the sample segmentation image as the expected outputs of the small model, and train the small model to obtain the image segmentation model.
The pre-established small model may be an initial model used to produce segmented images and normal vector images through coarse processing, for example a model whose structure and parameters are in their default state; its structure is simpler than that of the large model. The pre-established small model may be, for example, GhostNet (a lightweight neural network).
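A minimal sketch of such a two-headed small model (the backbone and channel sizes are illustrative assumptions; the disclosure only requires that one lightweight model emit both a segmentation map and a normal vector map):

    import torch.nn as nn

    class StudentSegNormalNet(nn.Module):
        # Lightweight encoder with one segmentation head and one normal head.
        def __init__(self, width=32):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, width, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            )
            self.seg_head = nn.Conv2d(width, 1, 1)     # 1-channel mask logits
            self.normal_head = nn.Conv2d(width, 3, 1)  # 3-channel normal map

        def forward(self, x):
            feat = self.encoder(x)
            return self.seg_head(feat), self.normal_head(feat)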
Exemplarily, a small model is established in advance, the sample segmentation image is used as its input image, and the large-model segmented image and large-model normal vector image output by the teacher model for the sample segmentation image are used as its expected output images. The small model can be trained based on the input image and the expected output images, and the trained small model is taken as the image segmentation model.
Optionally, the pre-established small model may be trained as follows to obtain the image segmentation model.
1. Input the sample segmentation image into the pre-established small model to obtain a small-model segmented image and a small-model normal vector image.
The small-model segmented image may be the segmented image output by the small model; the small-model normal vector image may be the normal vector image output by the small model.
Exemplarily, the sample segmentation image is input into the pre-established small model, which processes it; the segmented image among the outputs is taken as the small-model segmented image, and the normal vector image among the outputs is taken as the small-model normal vector image.
2. Calculate the small-model segmentation output loss based on the small-model segmented image of the sample segmentation image, the segmentation annotation image, and the large-model segmented image output by the teacher model.
The small-model segmentation output loss may be a combination of the loss value between the small-model segmented image and the segmentation annotation image and the loss value between the small-model segmented image and the large-model segmented image output by the teacher model, each computed with a preset loss function. The two loss functions may be the same or different and may be chosen in actual use.
Exemplarily, the loss value between the small-model segmented image and the segmentation annotation image and the loss value between the small-model segmented image and the large-model segmented image are computed separately; after the two loss values are obtained, the small-model segmentation output loss is determined from both.
Optionally, the small-model segmentation output loss may be computed as follows.
Compute the small-model first segmentation loss between the small-model segmented image of the sample segmentation image and the segmentation annotation image according to a binary cross-entropy loss function, or according to a binary cross-entropy loss function and a regional mutual information loss function.
The small-model first segmentation loss may be the loss value between the small-model segmented image of the sample segmentation image and the segmentation annotation image.
Exemplarily, the loss value between the small-model segmented image and the segmentation annotation image computed with BCE Loss is the first segmentation loss; alternatively, a first loss value between them is computed with BCE Loss and a second loss value with RMI Loss, and the small-model first segmentation loss is obtained by processing the first and second loss values, for example by summation or weighting, as determined by the actual situation.
Compute the small-model second segmentation loss between the small-model segmented image and the large-model segmented image output by the teacher model according to a relative entropy loss function.
The small-model second segmentation loss may be the loss value between the small-model segmented image and the large-model segmented image output by the teacher model.
Exemplarily, the loss value between the small-model segmented image and the large-model segmented image output by the teacher model, computed with the relative entropy loss function (Kullback-Leibler Divergence Loss, KL Loss), is the small-model second segmentation loss.
Determine the small-model segmentation output loss from the small-model first segmentation loss and the small-model second segmentation loss.
Exemplarily, the small-model segmentation output loss may be obtained by processing the small-model first segmentation loss and the small-model second segmentation loss, for example by summation or weighting, as determined by the actual situation.
3. Calculate the small-model normal vector output loss based on the small-model normal vector image of the sample segmentation image, the sample normal vector image, and the large-model normal vector image output by the teacher model.
Exemplarily, the loss value between the small-model normal vector image and the sample normal vector image and the loss value between the small-model normal vector image and the large-model normal vector image are computed separately; after the two loss values are obtained, the small-model normal vector output loss is determined from both.
Optionally, the small-model normal vector output loss may be computed as follows.
Compute the small-model first normal vector loss between the small-model normal vector image of the sample segmentation image and the sample normal vector image according to a mean square error loss function.
The small-model first normal vector loss may be the loss value between the small-model normal vector image and the sample normal vector image.
Exemplarily, the loss value between the small-model normal vector image of the sample segmentation image and the sample normal vector image computed with MSE Loss is the small-model first normal vector loss.
Compute the small-model second normal vector loss between the small-model normal vector image and the large-model normal vector image output by the teacher model according to a relative entropy loss function.
The small-model second normal vector loss may be the loss value between the small-model normal vector image and the large-model normal vector image output by the teacher model.
Exemplarily, the loss value between the small-model normal vector image and the large-model normal vector image output by the teacher model computed with KL Loss is the small-model second normal vector loss.
Determine the small-model normal vector output loss from the small-model first normal vector loss and the small-model second normal vector loss.
Exemplarily, the small-model normal vector output loss may be obtained by processing the small-model first normal vector loss and the small-model second normal vector loss, for example by summation or weighting, as determined by the actual situation.
4. Adjust the model parameters of the small model according to the small-model segmentation output loss and the small-model normal vector output loss to obtain the image segmentation model.
Exemplarily, the model parameters of the small model are adjusted according to the small-model segmentation output loss and the small-model normal vector output loss. When all loss functions of the small model have converged, for example when both losses are below a preset error or the error trend has stabilized, or when the current number of iterations reaches a preset number, the small model can be considered good enough for use; at this point training is stopped, and the current small model is taken as the image segmentation model.
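A minimal PyTorch sketch of the student objective above (treating the per-pixel sigmoid outputs as distributions for the segmentation KL term is one reading; normalizing each normal channel over spatial positions for the normal KL term, the equal weighting, and rmi_loss are assumptions, as in the teacher sketch):

    import torch
    import torch.nn.functional as F

    def student_loss(s_seg_logits, s_normal, seg_gt, normal_gt,
                     t_seg_logits, t_normal, rmi_loss=None):
        # First losses: against the annotations.
        seg_first = F.binary_cross_entropy_with_logits(s_seg_logits, seg_gt)
        if rmi_loss is not None:  # hypothetical RMI Loss implementation
            seg_first = seg_first + rmi_loss(s_seg_logits, seg_gt)
        normal_first = F.mse_loss(s_normal, normal_gt)
        # Second losses: relative entropy (KL) against the teacher outputs.
        s_prob = torch.sigmoid(s_seg_logits)
        t_prob = torch.sigmoid(t_seg_logits).detach()
        seg_second = F.kl_div((s_prob + 1e-8).log(), t_prob, reduction="batchmean")
        s_n = F.log_softmax(s_normal.flatten(2), dim=-1)
        t_n = F.softmax(t_normal.detach().flatten(2), dim=-1)
        normal_second = F.kl_div(s_n, t_n, reduction="batchmean")
        seg_out = seg_first + seg_second           # segmentation output loss
        normal_out = normal_first + normal_second  # normal vector output loss
        return seg_out + normal_out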
S230. Perform image fusion on the preliminary segmented image and the target normal vector image to obtain a target segmented image.
According to the technical solution of this embodiment of the present disclosure, the image to be segmented is obtained and input into a pre-trained image segmentation model to obtain the corresponding preliminary segmented image and target normal vector image, so that a suitable preliminary segmented image and target normal vector image are produced by the image segmentation model; the preliminary segmented image and the target normal vector image are then fused into the target segmented image. This solves the problem of poor segmentation accuracy and stability and realizes the technical effect of improving segmentation accuracy and stability.
FIG. 3 is a schematic flowchart of another image segmentation method provided by an embodiment of the present disclosure. On the basis of the preceding technical solutions, a segmented-image discriminator is added for discriminating between small-model segmented images and large-model segmented images; for specific implementations, refer to the detailed elaboration of this technical solution. Explanations of terms identical or corresponding to those above are not repeated here.
As shown in FIG. 3, the method includes:
S310. Use the sample segmentation image as the input image of a pre-established large model, use the segmentation annotation image and the sample normal vector image corresponding to the sample segmentation image as the expected output images of the large model, and train the large model to obtain a teacher model.
S320. Input the sample segmentation image into the pre-established small model to obtain a small-model segmented image and a small-model normal vector image.
S330. Calculate the small-model segmentation output loss based on the small-model segmented image of the sample segmentation image, the segmentation annotation image, and the large-model segmented image output by the teacher model.
S340. Calculate the small-model normal vector output loss based on the small-model normal vector image of the sample segmentation image, the sample normal vector image, and the large-model normal vector image output by the teacher model.
S350. Input the small-model segmented image output by the small model into a pre-trained segmented-image discriminator to obtain a segmentation discrimination result, and determine a segmentation discrimination loss based on the segmentation discrimination result and the expected discrimination result.
The segmented-image discriminator is trained using the large-model segmented image output by the teacher model for the sample segmentation image as true samples and the small-model segmented image output by the small model as false samples. The segmentation discrimination result may be the result output by the segmented-image discriminator; the expected discrimination result may be the expected output of the segmented-image discriminator. Usually, the expected discrimination result is that the large-model segmented image and the small-model segmented image cannot be distinguished, that is, the small-model segmented image is recognized as a true sample. The segmentation discrimination loss may be a loss value computed with a loss function preset in the segmented-image discriminator, which may be one or more of L1 loss (absolute error), L2 loss (squared error), cross-entropy error, and KL divergence (Kullback-Leibler Divergence, a metric for measuring the similarity of two probability distributions).
Exemplarily, the small-model segmented image output by the small model is input into the pre-trained segmented-image discriminator to obtain the segmentation discrimination result; the segmentation discrimination loss of the segmented-image discriminator can then be computed from its segmentation discrimination result and the expected discrimination result.
Exemplarily, the small-model segmented image output by the small model is treated as fake and the large-model segmented image output by the large model as real for adversarial training. Let the segmented-image discriminator be D, the small model G_s, the large model G_t, the sample segmentation image input to the large and small models input, and let MSE_loss(a, b) = (a - b)^2; the discriminator's loss function loss_D can then take the following form:
loss_D = 0.5 * MSE_loss(D(G_s(input)), 0) + 0.5 * MSE_loss(D(G_t(input)), 1).
It should be noted that, through training, the small model aims to generate small-model segmented images that make the segmented-image discriminator output 1, passing fake off as real, while the discriminator improves its ability to tell real from fake through training. As the number of training iterations increases, the small model and the segmented-image discriminator learn while competing against each other and eventually reach an equilibrium: the small model can generate data very close to the large-model segmented images, the segmented-image discriminator can no longer tell real from fake, and its final output is 0.5.
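A minimal PyTorch sketch of this least-squares adversarial step (D, G_s, G_t, and input follow the notation above; the separate generator-side term mirrors the stated goal of making D output 1 on the student's images, and the optimizer wiring is omitted):

    import torch
    import torch.nn.functional as F

    def discriminator_loss(D, G_s, G_t, x):
        # loss_D = 0.5*MSE(D(G_s(input)), 0) + 0.5*MSE(D(G_t(input)), 1)
        fake_score = D(G_s(x).detach())  # student output treated as fake
        real_score = D(G_t(x).detach())  # teacher output treated as real
        return 0.5 * F.mse_loss(fake_score, torch.zeros_like(fake_score)) + \
               0.5 * F.mse_loss(real_score, torch.ones_like(real_score))

    def student_adversarial_loss(D, G_s, x):
        # Student side: push D's score on the student's image toward 1.
        score = D(G_s(x))
        return F.mse_loss(score, torch.ones_like(score))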
S360. Adjust the model segmentation parameters of the small model according to the small-model segmentation output loss and the segmentation discrimination loss, and adjust the model normal vector parameters of the small model according to the small-model normal vector output loss, to obtain the image segmentation model.
The model segmentation parameters may be the parameters of the part of the small model that generates the small-model segmented image; the model normal vector parameters may be the parameters of the part of the small model that generates the small-model normal vector image.
Exemplarily, the model segmentation parameters of the small model are adjusted according to the small-model segmentation output loss and the segmentation discrimination loss, and the model normal vector parameters are adjusted according to the small-model normal vector output loss. When all loss functions of the small model have converged, for example when the small-model segmentation output loss is below a preset error, the small-model normal vector output loss is below a preset error, the segmentation discrimination loss is above a preset error, and the error trends have stabilized, or when the current number of iterations reaches a preset number, the small model can be considered good enough for use; at this point training is stopped, and the current small model is taken as the image segmentation model.
S370. Obtain an image to be segmented.
S380. Input the image to be segmented into the pre-trained image segmentation model to obtain a preliminary segmented image and a target normal vector image corresponding to the image to be segmented.
S390. Perform image fusion on the preliminary segmented image and the target normal vector image to obtain a target segmented image.
According to the technical solution of this embodiment of the present disclosure, the sample segmentation image is used as the input image of a pre-established large model and the corresponding segmentation annotation image and sample normal vector image as its expected output images to train a teacher model; the sample segmentation image is input into a pre-established small model to obtain a small-model segmented image and a small-model normal vector image; the small-model segmentation output loss is computed from the small-model segmented image, the segmentation annotation image, and the large-model segmented image output by the teacher model, and the small-model normal vector output loss from the small-model normal vector image, the sample normal vector image, and the large-model normal vector image output by the teacher model; the small-model segmented image output by the small model is input into a pre-trained segmented-image discriminator to obtain a segmentation discrimination result, from which, together with the expected discrimination result, the segmentation discrimination loss is determined; and the model segmentation parameters of the small model are adjusted according to the small-model segmentation output loss and the segmentation discrimination loss while the model normal vector parameters are adjusted according to the small-model normal vector output loss, yielding the image segmentation model, whose accuracy and stability are improved through the multiple loss computations. The image to be segmented is then obtained and input into the pre-trained image segmentation model to obtain the corresponding preliminary segmented image and target normal vector image, which are fused into the target segmented image. This solves the problems of high complexity, poor accuracy, and poor stability of image segmentation models, realizing the technical effect of improving segmentation accuracy and stability while reducing model complexity.
FIG. 4 is a schematic flowchart of another image segmentation method provided by an embodiment of the present disclosure. On the basis of the preceding technical solutions, a normal-vector-image discriminator is added for discriminating between small-model normal vector images and large-model normal vector images; for specific implementations, refer to the detailed elaboration of this technical solution. Explanations of terms identical or corresponding to those above are not repeated here.
As shown in FIG. 4, the method includes:
S410. Use the sample segmentation image as the input image of a pre-established large model, use the segmentation annotation image and the sample normal vector image corresponding to the sample segmentation image as the expected output images of the large model, and train the large model to obtain a teacher model.
S420. Input the sample segmentation image into the pre-established small model to obtain a small-model segmented image and a small-model normal vector image.
S430. Calculate the small-model segmentation output loss based on the small-model segmented image of the sample segmentation image, the segmentation annotation image, and the large-model segmented image output by the teacher model.
S440. Calculate the small-model normal vector output loss based on the small-model normal vector image of the sample segmentation image, the sample normal vector image, and the large-model normal vector image output by the teacher model.
S450. Input the small-model normal vector image output by the small model into a pre-trained normal-vector-image discriminator to obtain a normal vector discrimination result, and determine a normal vector discrimination loss based on the normal vector discrimination result and the expected discrimination result.
The normal-vector-image discriminator is trained using the large-model normal vector image output by the teacher model for the sample segmentation image as true samples and the small-model normal vector image output by the small model as false samples. The normal vector discrimination result may be the result output by the normal-vector-image discriminator; the expected discrimination result may be the expected output of the normal-vector-image discriminator. Usually, the expected discrimination result is that the large-model normal vector image and the small-model normal vector image cannot be distinguished, that is, the small-model normal vector image is recognized as a true sample. The normal vector discrimination loss may be a loss value computed with a loss function preset in the normal-vector-image discriminator, which may be one or more of L1 loss, L2 loss, cross-entropy error, and KL divergence.
Exemplarily, the small-model normal vector image output by the small model is input into the pre-trained normal-vector-image discriminator to obtain the normal vector discrimination result; the normal vector discrimination loss of the normal-vector-image discriminator can then be computed from its normal vector discrimination result and the expected discrimination result.
It should be noted that the working principle of the normal-vector-image discriminator in S450 is similar to that of the segmented-image discriminator in S350 and is not repeated here.
S460. Adjust the model segmentation parameters of the small model according to the small-model segmentation output loss, and adjust the model normal vector parameters of the small model according to the small-model normal vector output loss and the normal vector discrimination loss, to obtain the image segmentation model.
Exemplarily, the model segmentation parameters of the small model are adjusted according to the small-model segmentation output loss, and the model normal vector parameters are adjusted according to the small-model normal vector output loss and the normal vector discrimination loss. When all loss functions of the small model have converged, for example when the small-model segmentation output loss is below a preset error, the small-model normal vector output loss is below a preset error, the normal vector discrimination loss is above a preset error, and the error trends have stabilized, or when the current number of iterations reaches a preset number, the small model can be considered good enough for use; at this point training is stopped, and the current small model is taken as the image segmentation model.
S470. Obtain an image to be segmented.
S480. Input the image to be segmented into the pre-trained image segmentation model to obtain a preliminary segmented image and a target normal vector image corresponding to the image to be segmented.
S490. Perform image fusion on the preliminary segmented image and the target normal vector image to obtain a target segmented image.
It should be noted that the pre-trained segmented-image discriminator and the pre-trained normal-vector-image discriminator may be used jointly to adjust the model segmentation parameters and the model normal vector parameters of the small model.
According to the technical solution of this embodiment of the present disclosure, the teacher model is trained as above; the sample segmentation image is input into a pre-established small model to obtain a small-model segmented image and a small-model normal vector image; the small-model segmentation output loss and the small-model normal vector output loss are computed as above; the small-model normal vector image output by the small model is input into a pre-trained normal-vector-image discriminator to obtain a normal vector discrimination result, from which, together with the expected discrimination result, the normal vector discrimination loss is determined; and the model segmentation parameters of the small model are adjusted according to the small-model segmentation output loss while the model normal vector parameters are adjusted according to the small-model normal vector output loss and the normal vector discrimination loss, yielding the image segmentation model, whose accuracy and stability are improved through the multiple loss computations. The image to be segmented is then obtained and input into the pre-trained image segmentation model to obtain the corresponding preliminary segmented image and target normal vector image, which are fused into the target segmented image. This solves the problems of high complexity, poor accuracy, and poor stability of image segmentation models, realizing the technical effect of improving segmentation accuracy and stability while reducing model complexity.
FIG. 5 is a schematic structural diagram of an image segmentation device provided by an embodiment of the present disclosure. As shown in FIG. 5, the device includes: an acquisition module 510, a processing module 520, and a fusion module 530.
The acquisition module 510 is configured to obtain an image to be segmented; the processing module 520 is configured to determine a preliminary segmented image and a target normal vector image corresponding to the image to be segmented; the fusion module 530 is configured to perform image fusion on the preliminary segmented image and the target normal vector image to obtain a target segmented image.
According to the technical solution of this embodiment of the present disclosure, an image to be segmented is obtained, a preliminary segmented image and a target normal vector image corresponding to the image to be segmented are determined, and the two are fused into a target segmented image, which solves the problem of poor segmentation accuracy and stability and realizes the technical effect of improving segmentation accuracy and stability.
Optionally, the processing module 520 is configured to determine the preliminary segmented image and the target normal vector image corresponding to the image to be segmented as follows: inputting the image to be segmented into a pre-trained image segmentation model to obtain the preliminary segmented image and the target normal vector image corresponding to the image to be segmented, wherein the image segmentation model is trained based on a sample segmentation image, the segmentation annotation image corresponding to the sample segmentation image, and the sample normal vector image corresponding to the sample segmentation image.
Optionally, the device further includes a model training module configured to: use a sample segmentation image as the input image of a pre-established large model, use the segmentation annotation image and the sample normal vector image corresponding to the sample segmentation image as the expected output images of the large model, and train the large model to obtain a teacher model; and use the sample segmentation image as the input image of a pre-established small model, use the large-model segmented image and the large-model normal vector image output by the teacher model for the sample segmentation image as the expected outputs of the small model, and train the small model to obtain the image segmentation model.
Optionally, the model training module is configured to obtain the teacher model as follows: inputting the sample segmentation image into the pre-established large model to obtain a large-model segmented image and a large-model normal vector image; calculating the large-model segmentation loss between the large-model segmented image and the segmentation annotation image corresponding to the sample segmentation image, and calculating the large-model normal vector loss between the large-model normal vector image and the sample normal vector image corresponding to the sample segmentation image; and adjusting the model parameters of the large model according to the large-model segmentation loss and the large-model normal vector loss to obtain the teacher model.
Optionally, the model training module is configured to calculate the large-model segmentation loss as follows: calculating the large-model segmentation loss between the large-model segmented image and the segmentation annotation image corresponding to the sample segmentation image according to a binary cross-entropy loss function; or calculating it according to a binary cross-entropy loss function and a regional mutual information loss function.
Optionally, the model training module is configured to calculate the large-model normal vector loss as follows: calculating the large-model normal vector loss between the large-model normal vector image and the sample normal vector image corresponding to the sample segmentation image according to a mean square error loss function.
Optionally, the model training module is configured to obtain the image segmentation model as follows: inputting the sample segmentation image into the pre-established small model to obtain a small-model segmented image and a small-model normal vector image; calculating the small-model segmentation output loss based on the small-model segmented image of the sample segmentation image, the segmentation annotation image, and the large-model segmented image output by the teacher model; calculating the small-model normal vector output loss based on the small-model normal vector image of the sample segmentation image, the sample normal vector image, and the large-model normal vector image output by the teacher model; and adjusting the model parameters of the small model according to the small-model segmentation output loss and the small-model normal vector output loss to obtain the image segmentation model.
Optionally, the device further includes a first discrimination module configured to input the small-model segmented image output by the small model into a pre-trained segmented-image discriminator to obtain a segmentation discrimination result, and to determine a segmentation discrimination loss based on the segmentation discrimination result and the expected discrimination result, wherein the segmented-image discriminator is trained using the large-model segmented image output by the teacher model for the sample segmentation image as true samples and the small-model segmented image output by the small model as false samples; the model training module is configured to adjust the model parameters of the small model as follows: adjusting the model segmentation parameters of the small model according to the small-model segmentation output loss and the segmentation discrimination loss; and adjusting the model normal vector parameters of the small model according to the small-model normal vector output loss.
Optionally, the model training module is configured to calculate the small-model segmentation output loss as follows: calculating the small-model first segmentation loss between the small-model segmented image of the sample segmentation image and the segmentation annotation image according to a binary cross-entropy loss function, or according to a binary cross-entropy loss function and a regional mutual information loss function; calculating the small-model second segmentation loss between the small-model segmented image and the large-model segmented image output by the teacher model according to a relative entropy loss function; and determining the small-model segmentation output loss from the small-model first segmentation loss and the small-model second segmentation loss.
Optionally, the model training module is configured to calculate the small-model normal vector output loss as follows: calculating the small-model first normal vector loss between the small-model normal vector image of the sample segmentation image and the sample normal vector image according to a mean square error loss function; calculating the small-model second normal vector loss between the small-model normal vector image and the large-model normal vector image output by the teacher model according to a relative entropy loss function; and determining the small-model normal vector output loss from the small-model first normal vector loss and the small-model second normal vector loss.
Optionally, the device further includes a second discrimination module configured to input the small-model normal vector image output by the small model into a pre-trained normal-vector-image discriminator to obtain a normal vector discrimination result, and to determine a normal vector discrimination loss based on the normal vector discrimination result and the expected discrimination result, wherein the normal-vector-image discriminator is trained using the large-model normal vector image output by the teacher model for the sample segmentation image as true samples and the small-model normal vector image output by the small model as false samples; the model training module is further configured to adjust the model parameters of the small model as follows: adjusting the model segmentation parameters of the small model according to the small-model segmentation output loss; and adjusting the model normal vector parameters of the small model according to the small-model normal vector output loss and the normal vector discrimination loss.
Optionally, the fusion module 530 is configured to obtain the target segmented image as follows: for each pixel in the preliminary segmented image, determining a prediction weight for the pixel according to the pixel's predicted pixel value in the target normal vector image and a preset segmentation threshold; weighting the pixel's value in the preliminary segmented image by the prediction weight to obtain the pixel's target pixel value; and determining the target segmented image based on the target pixel values of the pixels in the preliminary segmented image.
Optionally, the device further includes an adjustment module configured to obtain shooting angle information of the image capture device used to capture the image to be segmented and to adjust the target segmented image according to the shooting angle information.
The image segmentation device provided by the embodiments of the present disclosure can execute the image segmentation method provided by any embodiment of the present disclosure and has the functional modules and beneficial effects corresponding to the executed method.
It is worth noting that the units and modules included in the above device are only divided according to functional logic, but the division is not limited to the above as long as the corresponding functions can be realized; in addition, the names of the functional units are only for the convenience of distinguishing them from each other.
FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. Referring now to FIG. 6, a schematic structural diagram of an electronic device (e.g., the terminal device or server in FIG. 6) 600 suitable for implementing embodiments of the present disclosure is shown. Terminal devices in embodiments of the present disclosure may include mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (Personal Digital Assistant, PDA), tablet computers (Portable Android Device, PAD), portable multimedia players (Portable Media Player, PMP), and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), as well as fixed terminals such as digital televisions (that is, digital TVs) and desktop computers. The electronic device shown in FIG. 6 is only an example.
As shown in FIG. 6, the electronic device 600 may include a processing device (such as a central processing unit or a graphics processor) 601, which may perform various appropriate actions and processes according to a program stored in a read-only memory (Read Only Memory, ROM) 602 or a program loaded from a storage device 608 into a random access memory (Random Access Memory, RAM) 603. Various programs and data required for the operation of the electronic device 600 are also stored in the RAM 603. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (Input/Output, I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 607 including, for example, a liquid crystal display (Liquid Crystal Display, LCD), a speaker, and a vibrator; storage devices 608 including, for example, a magnetic tape and a hard disk; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 6 shows the electronic device 600 with various devices, it should be understood that implementing or providing all of the illustrated devices is not required; more or fewer devices may alternatively be implemented or provided.
In an embodiment, according to embodiments of the present disclosure, the process described above with reference to the flowcharts may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 609, or installed from the storage device 608, or installed from the ROM 602. When the computer program is executed by the processing device 601, the above-described functions defined in the methods of the embodiments of the present disclosure are performed.
The electronic device provided by the embodiments of the present disclosure belongs to the same inventive concept as the image segmentation method provided by the above embodiments; technical details not described in detail in this embodiment can be found in the above embodiments, and this embodiment has the same beneficial effects as the above embodiments.
Embodiments of the present disclosure provide a computer storage medium on which a computer program is stored; when the computer program is executed by a processor, the image segmentation method provided by the above embodiments is implemented.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. Examples of computer-readable storage media may include: an electrical connection having at least one conductor, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (such as electronically programmable read-only memory (Electronic Programable Read Only Memory, EPROM) or flash memory), optical fiber, portable compact disc read-only memory (Compact Disc-Read Only Memory, CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program, where the program may be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code; such a propagated data signal may take a variety of forms, including an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including: a wire, an optical cable, radio frequency (Radio Frequency, RF), etc., or any suitable combination of the above.
In some implementations, the client and the server may communicate using any currently known or future-developed network protocol, such as the HyperText Transfer Protocol (HTTP), and may be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network (Local Area Network, LAN), a wide area network (Wide Area Network, WAN), an internetwork (for example, the Internet), and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any network currently known or developed in the future.
The above computer-readable medium may be included in the above electronic device, or it may exist independently without being assembled into the electronic device.
The above computer-readable medium carries at least one program; when the at least one program is executed by the electronic device, the electronic device: obtains an image to be segmented; determines a preliminary segmented image and a target normal vector image corresponding to the image to be segmented; and performs image fusion on the preliminary segmented image and the target normal vector image to obtain a target segmented image.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code, which contains at least one executable instruction for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings; for example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented in software or in hardware, where the name of a unit does not, under certain circumstances, constitute a limitation on the unit itself.
The functions described herein above may be executed at least in part by at least one hardware logic component. For example, exemplary types of hardware logic components that can be used include: field-programmable gate arrays (Field-Programmable Gate Array, FPGA), application-specific integrated circuits (Application Specific Integrated Circuit, ASIC), application-specific standard products (Application Specific Standard Parts, ASSP), systems on chip (System on Chip, SOC), complex programmable logic devices (Complex Programmable Logic Device, CPLD), and the like.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. Examples of machine-readable storage media may include an electrical connection based on at least one wire, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, [Example 1] provides an image segmentation method, which includes:
obtaining an image to be segmented;
determining a preliminary segmented image and a target normal vector image corresponding to the image to be segmented; and
performing image fusion on the preliminary segmented image and the target normal vector image to obtain a target segmented image.
According to one or more embodiments of the present disclosure, [Example 2] provides an image segmentation method, which further includes:
Optionally, determining the preliminary segmented image and the target normal vector image corresponding to the image to be segmented includes:
inputting the image to be segmented into a pre-trained image segmentation model to obtain the preliminary segmented image and the target normal vector image corresponding to the image to be segmented, wherein the image segmentation model is trained based on a sample segmentation image, the segmentation annotation image corresponding to the sample segmentation image, and the sample normal vector image corresponding to the sample segmentation image.
According to one or more embodiments of the present disclosure, [Example 3] provides an image segmentation method, which further includes:
Optionally, before the image to be segmented is input into the pre-trained image segmentation model, the method further includes:
using a sample segmentation image as the input image of a pre-established large model, using the segmentation annotation image and the sample normal vector image corresponding to the sample segmentation image as the expected output images of the large model, and training the large model to obtain a teacher model; and
using the sample segmentation image as the input image of a pre-established small model, using the large-model segmented image and the large-model normal vector image output by the teacher model for the sample segmentation image as the expected outputs of the small model, and training the small model to obtain the image segmentation model.
According to one or more embodiments of the present disclosure, [Example 4] provides an image segmentation method, which further includes:
Optionally, using the sample segmentation image as the input image of the pre-established large model, using the segmentation annotation image and the sample normal vector image corresponding to the sample segmentation image as the expected output images of the large model, and training the large model to obtain the teacher model includes:
inputting the sample segmentation image into the pre-established large model to obtain a large-model segmented image and a large-model normal vector image;
calculating the large-model segmentation loss between the large-model segmented image and the segmentation annotation image corresponding to the sample segmentation image, and calculating the large-model normal vector loss between the large-model normal vector image and the sample normal vector image corresponding to the sample segmentation image; and
adjusting the model parameters of the large model according to the large-model segmentation loss and the large-model normal vector loss to obtain the teacher model.
According to one or more embodiments of the present disclosure, [Example 5] provides an image segmentation method, which further includes:
Optionally, calculating the large-model segmentation loss between the large-model segmented image and the segmentation annotation image corresponding to the sample segmentation image includes:
calculating the large-model segmentation loss between the large-model segmented image and the segmentation annotation image corresponding to the sample segmentation image according to a binary cross-entropy loss function; or
calculating the large-model segmentation loss between the large-model segmented image and the segmentation annotation image corresponding to the sample segmentation image according to a binary cross-entropy loss function and a regional mutual information loss function.
According to one or more embodiments of the present disclosure, [Example 6] provides an image segmentation method, which further includes:
Optionally, calculating the large-model normal vector loss between the large-model normal vector image and the sample normal vector image corresponding to the sample segmentation image includes:
calculating the large-model normal vector loss between the large-model normal vector image and the sample normal vector image corresponding to the sample segmentation image according to a mean square error loss function.
According to one or more embodiments of the present disclosure, [Example 7] provides an image segmentation method, which further includes:
Optionally, using the sample segmentation image as the input image of the pre-established small model, using the large-model segmented image and the large-model normal vector image output by the teacher model for the sample segmentation image as the expected outputs of the small model, and training the small model includes:
inputting the sample segmentation image into the pre-established small model to obtain a small-model segmented image and a small-model normal vector image;
calculating the small-model segmentation output loss based on the small-model segmented image of the sample segmentation image, the segmentation annotation image, and the large-model segmented image output by the teacher model;
calculating the small-model normal vector output loss based on the small-model normal vector image of the sample segmentation image, the sample normal vector image, and the large-model normal vector image output by the teacher model; and
adjusting the model parameters of the small model according to the small-model segmentation output loss and the small-model normal vector output loss to obtain the image segmentation model.
According to one or more embodiments of the present disclosure, [Example 8] provides an image segmentation method, which further includes:
Optionally, the method further includes:
inputting the small-model segmented image output by the small model into a pre-trained segmented-image discriminator to obtain a segmentation discrimination result, and determining a segmentation discrimination loss based on the segmentation discrimination result and the expected discrimination result, wherein the segmented-image discriminator is trained using the large-model segmented image output by the teacher model for the sample segmentation image as true samples and the small-model segmented image output by the small model as false samples;
adjusting the model parameters of the small model according to the small-model segmentation output loss and the small-model normal vector output loss includes:
adjusting the model segmentation parameters of the small model according to the small-model segmentation output loss and the segmentation discrimination loss; and
adjusting the model normal vector parameters of the small model according to the small-model normal vector output loss.
According to one or more embodiments of the present disclosure, [Example 9] provides an image segmentation method, which further includes:
Optionally, calculating the small-model segmentation output loss based on the small-model segmented image of the sample segmentation image, the segmentation annotation image, and the large-model segmented image output by the teacher model includes:
calculating the small-model first segmentation loss between the small-model segmented image of the sample segmentation image and the segmentation annotation image according to a binary cross-entropy loss function, or according to a binary cross-entropy loss function and a regional mutual information loss function;
calculating the small-model second segmentation loss between the small-model segmented image and the large-model segmented image output by the teacher model according to a relative entropy loss function; and
determining the small-model segmentation output loss from the small-model first segmentation loss and the small-model second segmentation loss.
According to one or more embodiments of the present disclosure, [Example 10] provides an image segmentation method, which further includes:
Optionally, calculating the small-model normal vector output loss based on the small-model normal vector image of the sample segmentation image, the sample normal vector image, and the large-model normal vector image output by the teacher model includes:
calculating the small-model first normal vector loss between the small-model normal vector image of the sample segmentation image and the sample normal vector image according to a mean square error loss function;
calculating the small-model second normal vector loss between the small-model normal vector image and the large-model normal vector image output by the teacher model according to a relative entropy loss function; and
determining the small-model normal vector output loss from the small-model first normal vector loss and the small-model second normal vector loss.
According to one or more embodiments of the present disclosure, [Example 11] provides an image segmentation method, which further includes:
Optionally, the method further includes:
inputting the small-model normal vector image output by the small model into a pre-trained normal-vector-image discriminator to obtain a normal vector discrimination result, and determining a normal vector discrimination loss based on the normal vector discrimination result and the expected discrimination result, wherein the normal-vector-image discriminator is trained using the large-model normal vector image output by the teacher model for the sample segmentation image as true samples and the small-model normal vector image output by the small model as false samples;
adjusting the model parameters of the small model according to the small-model segmentation output loss and the small-model normal vector output loss includes:
adjusting the model segmentation parameters of the small model according to the small-model segmentation output loss; and
adjusting the model normal vector parameters of the small model according to the small-model normal vector output loss and the normal vector discrimination loss.
According to one or more embodiments of the present disclosure, [Example 12] provides an image segmentation method, which further includes:
Optionally, performing image fusion on the preliminary segmented image and the target normal vector image to obtain the target segmented image includes:
for each pixel in the preliminary segmented image, determining a prediction weight for the pixel according to the pixel's predicted pixel value in the target normal vector image and a preset segmentation threshold;
weighting the pixel's value in the preliminary segmented image by the prediction weight to obtain the pixel's target pixel value; and
determining the target segmented image based on the target pixel values of the pixels in the preliminary segmented image.
According to one or more embodiments of the present disclosure, [Example 13] provides an image segmentation method, which further includes:
Optionally, after the image fusion of the preliminary segmented image and the target normal vector image, the method further includes:
obtaining shooting angle information of the image capture device used to capture the image to be segmented, and adjusting the target segmented image according to the shooting angle information.
According to one or more embodiments of the present disclosure, [Example 14] provides an image segmentation device, which includes:
an acquisition module configured to obtain an image to be segmented;
a processing module configured to determine a preliminary segmented image and a target normal vector image corresponding to the image to be segmented; and
a fusion module configured to perform image fusion on the preliminary segmented image and the target normal vector image to obtain a target segmented image.

Claims (16)

  1. An image segmentation method, comprising:
    obtaining an image to be segmented;
    determining a preliminary segmented image and a target normal vector image corresponding to the image to be segmented; and
    performing image fusion on the preliminary segmented image and the target normal vector image to obtain a target segmented image.
  2. The method according to claim 1, wherein determining the preliminary segmented image and the target normal vector image corresponding to the image to be segmented comprises:
    inputting the image to be segmented into a pre-trained image segmentation model to obtain the preliminary segmented image and the target normal vector image corresponding to the image to be segmented, wherein the image segmentation model is trained based on a sample segmentation image, a segmentation annotation image corresponding to the sample segmentation image, and a sample normal vector image corresponding to the sample segmentation image.
  3. The method according to claim 2, before inputting the image to be segmented into the pre-trained image segmentation model, further comprising:
    using a sample segmentation image as the input image of a pre-established large model, using the segmentation annotation image and the sample normal vector image corresponding to the sample segmentation image as the expected output images of the large model, and training the large model to obtain a teacher model; and
    using the sample segmentation image as the input image of a pre-established small model, using the large-model segmented image and the large-model normal vector image output by the teacher model for the sample segmentation image as the expected outputs of the small model, and training the small model to obtain the image segmentation model.
  4. The method according to claim 3, wherein using the sample segmentation image as the input image of the pre-established large model, using the segmentation annotation image and the sample normal vector image corresponding to the sample segmentation image as the expected output images of the large model, and training the large model to obtain the teacher model comprises:
    inputting the sample segmentation image into the pre-established large model to obtain a large-model segmented image and a large-model normal vector image;
    calculating the large-model segmentation loss between the large-model segmented image and the segmentation annotation image corresponding to the sample segmentation image, and calculating the large-model normal vector loss between the large-model normal vector image and the sample normal vector image corresponding to the sample segmentation image; and
    adjusting the model parameters of the large model according to the large-model segmentation loss and the large-model normal vector loss to obtain the teacher model.
  5. The method according to claim 4, wherein calculating the large-model segmentation loss between the large-model segmented image and the segmentation annotation image corresponding to the sample segmentation image comprises:
    calculating the large-model segmentation loss between the large-model segmented image and the segmentation annotation image corresponding to the sample segmentation image according to a binary cross-entropy loss function; or
    calculating the large-model segmentation loss between the large-model segmented image and the segmentation annotation image corresponding to the sample segmentation image according to a binary cross-entropy loss function and a regional mutual information loss function.
  6. The method according to claim 4, wherein calculating the large-model normal vector loss between the large-model normal vector image and the sample normal vector image corresponding to the sample segmentation image comprises:
    calculating the large-model normal vector loss between the large-model normal vector image and the sample normal vector image corresponding to the sample segmentation image according to a mean square error loss function.
  7. The method according to claim 3, wherein using the sample segmentation image as the input image of the pre-established small model, using the large-model segmented image and the large-model normal vector image output by the teacher model for the sample segmentation image as the expected outputs of the small model, and training the small model comprises:
    inputting the sample segmentation image into the pre-established small model to obtain a small-model segmented image and a small-model normal vector image;
    calculating the small-model segmentation output loss based on the small-model segmented image of the sample segmentation image, the segmentation annotation image, and the large-model segmented image output by the teacher model;
    calculating the small-model normal vector output loss based on the small-model normal vector image of the sample segmentation image, the sample normal vector image, and the large-model normal vector image output by the teacher model; and
    adjusting the model parameters of the small model according to the small-model segmentation output loss and the small-model normal vector output loss to obtain the image segmentation model.
  8. The method according to claim 7, further comprising:
    inputting the small-model segmented image output by the small model into a pre-trained segmented-image discriminator to obtain a segmentation discrimination result, and determining a segmentation discrimination loss based on the segmentation discrimination result and the expected discrimination result, wherein the segmented-image discriminator is trained using the large-model segmented image output by the teacher model for the sample segmentation image as true samples and the small-model segmented image output by the small model as false samples;
    wherein adjusting the model parameters of the small model according to the small-model segmentation output loss and the small-model normal vector output loss comprises:
    adjusting the model segmentation parameters of the small model according to the small-model segmentation output loss and the segmentation discrimination loss; and
    adjusting the model normal vector parameters of the small model according to the small-model normal vector output loss.
  9. The method according to claim 7, wherein calculating the small-model segmentation output loss based on the small-model segmented image of the sample segmentation image, the segmentation annotation image, and the large-model segmented image output by the teacher model comprises:
    calculating the small-model first segmentation loss between the small-model segmented image of the sample segmentation image and the segmentation annotation image according to a binary cross-entropy loss function, or according to a binary cross-entropy loss function and a regional mutual information loss function;
    calculating the small-model second segmentation loss between the small-model segmented image and the large-model segmented image output by the teacher model according to a relative entropy loss function; and
    determining the small-model segmentation output loss from the small-model first segmentation loss and the small-model second segmentation loss.
  10. The method according to claim 7, wherein calculating the small-model normal vector output loss based on the small-model normal vector image of the sample segmentation image, the sample normal vector image, and the large-model normal vector image output by the teacher model comprises:
    calculating the small-model first normal vector loss between the small-model normal vector image of the sample segmentation image and the sample normal vector image according to a mean square error loss function;
    calculating the small-model second normal vector loss between the small-model normal vector image and the large-model normal vector image output by the teacher model according to a relative entropy loss function; and
    determining the small-model normal vector output loss from the small-model first normal vector loss and the small-model second normal vector loss.
  11. The method according to claim 7, further comprising:
    inputting the small-model normal vector image output by the small model into a pre-trained normal-vector-image discriminator to obtain a normal vector discrimination result, and determining a normal vector discrimination loss based on the normal vector discrimination result and the expected discrimination result, wherein the normal-vector-image discriminator is trained using the large-model normal vector image output by the teacher model for the sample segmentation image as true samples and the small-model normal vector image output by the small model as false samples;
    wherein adjusting the model parameters of the small model according to the small-model segmentation output loss and the small-model normal vector output loss comprises:
    adjusting the model segmentation parameters of the small model according to the small-model segmentation output loss; and
    adjusting the model normal vector parameters of the small model according to the small-model normal vector output loss and the normal vector discrimination loss.
  12. The method according to claim 1, wherein performing image fusion on the preliminary segmented image and the target normal vector image to obtain the target segmented image comprises:
    for each pixel in the preliminary segmented image, determining a prediction weight for the pixel according to the pixel's predicted pixel value in the target normal vector image and a preset segmentation threshold;
    weighting the pixel's value in the preliminary segmented image by the prediction weight to obtain the pixel's target pixel value; and
    determining the target segmented image based on the target pixel values of the pixels in the preliminary segmented image.
  13. The method according to claim 1, after performing the image fusion of the preliminary segmented image and the target normal vector image, further comprising:
    obtaining shooting angle information of the image capture device used to capture the image to be segmented, and adjusting the target segmented image according to the shooting angle information.
  14. An image segmentation device, comprising:
    an acquisition module configured to obtain an image to be segmented;
    a processing module configured to determine a preliminary segmented image and a target normal vector image corresponding to the image to be segmented; and
    a fusion module configured to perform image fusion on the preliminary segmented image and the target normal vector image to obtain a target segmented image.
  15. An electronic device, comprising:
    a processor; and
    a storage device configured to store a program,
    wherein, when the program is executed by the processor, the processor implements the image segmentation method according to any one of claims 1-13.
  16. A storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform the image segmentation method according to any one of claims 1-13.
PCT/CN2023/080694 2022-04-29 2023-03-10 Image segmentation method and device, electronic device and storage medium WO2023207360A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210475990.9 2022-04-29
CN202210475990.9A CN117036212A (zh) Image segmentation method and device, electronic device and storage medium

Publications (1)

Publication Number Publication Date
WO2023207360A1 true WO2023207360A1 (zh) 2023-11-02

Family

ID=88517289

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/080694 WO2023207360A1 (zh) 2022-04-29 2023-03-10 图像分割方法、装置、电子设备及存储介质

Country Status (2)

Country Link
CN (1) CN117036212A (zh)
WO (1) WO2023207360A1 (zh)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140198979A1 (en) * 2011-09-19 2014-07-17 Oxipita Inc. Methods and systems for interactive 3d image segmentation
CN107871321A (zh) * 2016-09-23 2018-04-03 南开大学 图像分割方法及装置
US20200357384A1 (en) * 2019-05-09 2020-11-12 Samsung Electronics Co., Ltd. Model training method and apparatus
CN112465111A (zh) * 2020-11-17 2021-03-09 大连理工大学 一种基于知识蒸馏和对抗训练的三维体素图像分割方法
CN113901903A (zh) * 2021-09-30 2022-01-07 北京百度网讯科技有限公司 道路识别方法和装置

Also Published As

Publication number Publication date
CN117036212A (zh) 2023-11-10


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23794796

Country of ref document: EP

Kind code of ref document: A1