WO2021179820A1 - Image processing method, apparatus, storage medium, and electronic device - Google Patents

Image processing method, apparatus, storage medium, and electronic device

Info

Publication number
WO2021179820A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature map
module
image
layer
feature
Prior art date
Application number
PCT/CN2021/073842
Other languages
English (en)
French (fr)
Inventor
刘钰安
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority date
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Publication of WO2021179820A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20016 - Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform

Definitions

  • This application belongs to the field of electronic technology, and in particular relates to an image processing method, device, storage medium, and electronic equipment.
  • Portrait segmentation is a technology that separates the portrait in the image from the background. Portrait segmentation is one of the basic topics in the field of computer vision, and it has received extensive attention in both academia and industry.
  • the embodiments of the present application provide an image processing method, device, storage medium, and electronic equipment, which can improve the prediction accuracy of a portrait segmentation model, so that a portrait can be better segmented from an image.
  • an image processing method including:
  • the pre-trained portrait segmentation model includes: an encoding module, a feature pyramid module, and a decoding module;
  • a portrait is segmented from the image to be segmented.
  • an image processing device including:
  • the first acquisition module is used to acquire an image to be segmented that requires portrait segmentation;
  • the second acquisition module is used to acquire a pre-trained portrait segmentation model, where the pre-trained portrait segmentation model includes: an encoding module, a feature pyramid module, and a decoding module;
  • the first calling module is configured to call the encoding module to perform encoding processing on the image to be segmented to obtain a first feature map set
  • An input module configured to input the first feature map set into the feature pyramid module to obtain a second feature map set
  • the second calling module is configured to call the decoding module to perform decoding processing on the second feature map set to obtain a portrait segmentation mask
  • the segmentation module is used to segment the portrait from the image to be segmented according to the portrait segmentation mask.
  • an embodiment of the present application provides a storage medium on which a computer program is stored.
  • the computer program is executed on a computer, the computer is caused to execute the process in the image processing method provided by the embodiment of the present application.
  • an embodiment of the present application further provides an electronic device, including a memory and a processor, where the processor is configured to execute the process in the image processing method provided by the embodiments of the present application by calling a computer program stored in the memory.
  • Fig. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of the first structure of a portrait segmentation model provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of the structure of the first network block, the second network block, and the third network block provided by an embodiment of the present application.
  • Fig. 4 is a schematic diagram of a second structure of a portrait segmentation model provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a third structure of a portrait segmentation model provided by an embodiment of the present application.
  • Fig. 6 is a schematic structural diagram of an image processing device provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of the first structure of an electronic device provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a second structure of an electronic device provided by an embodiment of the present application.
  • Fig. 9 is a schematic structural diagram of an image processing circuit provided by an embodiment of the present application.
  • An embodiment of the application provides an image processing method, including:
  • the pre-trained portrait segmentation model includes: an encoding module, a feature pyramid module, and a decoding module;
  • a portrait is segmented from the image to be segmented.
  • the decoding module includes a first sub-module and a second sub-module
  • the invoking the decoding module to perform decoding processing on the second feature map set to obtain a portrait segmentation mask includes:
  • the second sub-module is called to perform fusion, convolution and sampling processing on the adjusted feature map to obtain a portrait segmentation mask.
  • the first submodule includes a first network block, a second network block, and a third network block, and calling the first submodule to adjust the sizes of the feature maps in the second feature map set to the preset size to obtain the adjusted feature map includes:
  • the first network block includes a convolutional layer, a normalization layer, and an activation layer that are sequentially connected;
  • the second network block includes a convolutional layer, a normalization layer, an activation layer, and an upsampling layer that are sequentially connected;
  • the third network block includes a convolutional layer, a normalization layer, an activation layer, and an up-sampling layer that are sequentially connected, and the number of input channels of the third network block is the same as the number of output channels of the third network block.
  • the second submodule includes a first fusion layer, a first convolutional layer, and a first upsampling layer, and calling the second submodule to perform fusion, convolution, and sampling processing on the adjusted feature map to obtain the portrait segmentation mask includes:
  • before the acquiring of the image to be segmented that requires portrait segmentation, the method further includes:
  • the portrait segmentation model is trained by using the sample image, the labeling mask corresponding to the sample image, and the supervision module.
  • the training of the portrait segmentation model by using the sample image, the annotation mask corresponding to the sample image, and the supervision module includes:
  • the supervision module includes a fourth convolutional layer and a third upsampling layer, and calling the supervision module to perform restoration processing on the fourth feature map set to obtain multiple supervision masks includes:
  • the acquiring a sample image includes:
  • Data enhancement processing is performed on the original image to obtain a sample image.
  • the execution subject of the embodiments of the present application may be an electronic device such as a smart phone or a tablet computer.
  • FIG. 1 is a schematic diagram of the first flow of an image processing method provided by an embodiment of the present application.
  • the flow may include:
  • the image to be segmented is the object on which portrait segmentation is performed.
  • the image to be segmented may include a portrait.
  • the embodiment of the present application uses a model to perform portrait segmentation on the image to be segmented.
  • the model usually has some requirements for the attributes of the input image, and the image to be segmented should meet these requirements so that the model can process it normally.
  • the electronic device may preprocess the image to make the image meet the requirements of the model.
  • the model requires the size of the input image to be a preset size, such as 256×256. If the image acquired by the electronic device is not of the preset size, the electronic device needs to adjust the size of the image to the preset size to obtain the image to be segmented.
  • the model requires that the pixel value of the input image should be normalized.
  • the pixel value should be a real number in [0,1]. If the image acquired by the electronic device is not normalized, the electronic device should normalize it to obtain the image to be segmented.
  • the pixel value of an image is typically expressed as an integer in [0,255] and can be normalized by dividing by 255. It is understandable that normalization can have different definitions. For example, in another normalization definition, the pixel value should be a real number in [-1,1]. For different normalization definitions, the normalization approach should be adjusted accordingly.
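As a minimal illustration (not part of the claimed method), the two normalization definitions above can be sketched as follows, assuming 8-bit integer pixel values:

```python
def normalize_01(pixels):
    # Map 8-bit integer pixel values in [0, 255] to real numbers in [0, 1].
    return [p / 255.0 for p in pixels]

def normalize_pm1(pixels):
    # Map 8-bit integer pixel values in [0, 255] to real numbers in [-1, 1].
    return [p / 127.5 - 1.0 for p in pixels]

row = [0, 255]
print(normalize_01(row))   # [0.0, 1.0]
print(normalize_pm1(row))  # [-1.0, 1.0]
```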
  • the electronic device may use a camera to shoot a shooting scene containing a human body to obtain an image to be segmented.
  • the scene to which the camera of the electronic device is aimed is the shooting scene.
  • the shooting scene does not specifically refer to a specific scene, but a scene that is aligned in real time following the direction of the camera.
  • a pre-trained portrait segmentation model is obtained, and the pre-trained portrait segmentation model includes: an encoding module, a feature pyramid module, and a decoding module.
  • FIG. 2 is a schematic diagram of the first structure of the pre-trained portrait segmentation model provided by an embodiment of the application.
  • the pre-trained portrait segmentation model may include an encoding module, a feature pyramid module, and a decoding module.
  • the feature pyramid module is respectively connected with the encoding module and the decoding module.
  • the encoding module is called to perform encoding processing on the image to be segmented to obtain the first feature map set.
  • the electronic device can call the encoding module of the pre-trained portrait segmentation model to encode the image to be segmented to extract feature maps of different sizes.
  • the feature maps of different sizes constitute a first feature map set.
  • the number of channels corresponding to the feature maps in the first feature map set may be different.
  • the encoding module may be a multi-scale encoder.
  • the backbone network of this multi-scale encoder may be the MobileNetV2 network. Because this network has a strong feature extraction capability, selecting it allows image features to be better extracted from the image to be segmented to form feature maps. And because it is a lightweight network, selecting it keeps the amount of computation small, so it can be deployed on electronic devices such as smart phones.
  • the multi-scale encoder can include a five-layer structure.
  • the first layer can receive the image to be segmented and then output it to the second layer.
  • the second layer can determine a feature map of the first size according to the image to be segmented.
  • the size of the feature map of the first size may be 1/4 of the size of the image to be segmented, and the number of channels corresponding to the feature map of the first size may be 24; that is, the number of feature maps of the first size is 24.
  • the third layer can receive the feature map of the first size output by the second layer, and determine the feature map of the second size according to the feature map of the first size.
  • the feature map of the second size may be 1/8 of the size of the image to be segmented, and the number of channels corresponding to the feature map of the second size may be 32.
  • the fourth layer can receive the feature map of the second size output by the third layer, and determine the feature map of the third size according to the feature map of the second size.
  • the feature map of the third size may be 1/16 of the size of the image to be segmented, and the number of channels corresponding to the feature map of the third size may be 64.
  • the fifth layer may receive the feature map of the third size output by the fourth layer, and determine the feature map of the fourth size based on the feature map of the third size.
  • the feature map of the fourth size may be 1/32 of the size of the image to be segmented, and the number of channels corresponding to the feature map of the fourth size may be 320.
  • the feature map of the first size, the feature map of the second size, the feature map of the third size, and the feature map of the fourth size constitute the first feature map set.
  • the above process is only an example of obtaining the first feature map set, and is not used to limit the application.
  • the number of layers of the multi-scale encoder can also be increased according to requirements to obtain feature maps of more sizes.
  • the second layer determines the feature map of the first size according to the image to be segmented may include: the second layer performs convolution and downsampling processing on the image to be segmented to obtain the feature map of the first size. It is understandable that the feature map of the second size, the feature map of the third size, and the feature map of the fourth size can also be obtained in the above manner, and will not be repeated here.
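As a hypothetical illustration of the scales described above (assuming a 256×256 image to be segmented; the fractions and channel counts are the examples given in this embodiment), the shapes of the first feature map set can be tabulated as:

```python
input_size = 256  # assumed size of the image to be segmented

# (divisor of the input size, number of channels) for each encoder output
scales = {
    "first":  (4,  24),   # 1/4 of the input size, 24 channels
    "second": (8,  32),   # 1/8, 32 channels
    "third":  (16, 64),   # 1/16, 64 channels
    "fourth": (32, 320),  # 1/32, 320 channels
}

first_feature_map_set = {
    name: (input_size // divisor, input_size // divisor, channels)
    for name, (divisor, channels) in scales.items()
}
print(first_feature_map_set["first"])   # (64, 64, 24)
print(first_feature_map_set["fourth"])  # (8, 8, 320)
```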
  • the electronic device can call the encoding module to output the first feature map set to the feature pyramid module.
  • the first feature map set is input to the feature pyramid module to obtain the second feature map set.
  • the feature pyramid module can perform feature fusion, convolution and other processing on the feature maps in the first feature map set to obtain the second feature map set.
  • the number of channels corresponding to the feature maps in the second feature map set may be the same.
  • the number of channels corresponding to the feature maps in the second feature map set may all be 64, 128, or 256.
  • the electronic device can call the feature pyramid module to output the second feature map set to the decoding module.
  • the decoding module is called to perform decoding processing on the second feature map set to obtain a portrait segmentation mask.
  • the decoding module may be a multi-scale decoder.
  • the electronic device can call the multi-scale decoder to perform decoding processing such as fusion and sampling on the feature maps in the second feature map set to obtain a portrait segmentation mask.
  • the portrait segmentation mask can be a binary image.
  • each pixel value can only take the value 0 or 1.
  • when the value of a certain pixel in the portrait segmentation mask is 1, it means that the pixel belongs to the foreground;
  • when the value of a certain pixel in the portrait segmentation mask is 0, it means that the pixel belongs to the background.
  • the foreground is a portrait.
  • the portrait is segmented from the image to be segmented.
  • the electronic device can segment the portrait from the image to be segmented according to the portrait segmentation mask.
  • the electronic device can determine the positions of the pixels whose value is 1 in the portrait segmentation mask. Then, the electronic device can retain the pixels at the corresponding positions of the image to be segmented according to those positions.
  • the electronic device may first adjust the size of the portrait segmentation mask so that it is the same as the size of the image to be segmented. Then, the electronic device can determine the positions of the pixels whose value is 1 in the adjusted portrait segmentation mask. Then, the electronic device can retain the pixels at the corresponding positions of the image to be segmented according to those positions.
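A minimal sketch of this step (illustrative only; real images would be arrays rather than nested lists): keep the pixels of the image to be segmented where the mask is 1, and zero out the rest.

```python
def apply_mask(image, mask):
    # image and mask are equal-sized 2D grids; mask values are 0 or 1.
    # Pixels where the mask is 1 (foreground/portrait) are retained;
    # pixels where the mask is 0 (background) are set to 0.
    return [
        [pixel if m == 1 else 0 for pixel, m in zip(image_row, mask_row)]
        for image_row, mask_row in zip(image, mask)
    ]

image = [[10, 20], [30, 40]]
mask = [[1, 0], [0, 1]]
print(apply_mask(image, mask))  # [[10, 0], [0, 40]]
```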
  • the first feature map set obtained by encoding the image can be input to the feature pyramid module to obtain the second feature map set; and the decoding module can be called to decode the second feature map set to obtain the portrait segmentation mask. Therefore, the first feature map set can be fully utilized to better extract semantic information, and the prediction accuracy of the portrait segmentation model can be improved, and the portrait can be better segmented from the image.
  • the second feature map set includes the feature map of the first size, the feature map of the second size, the feature map of the third size, and the feature map of the fourth size;
  • the number of channels corresponding to each of the feature map of the first size, the feature map of the second size, the feature map of the third size, and the feature map of the fourth size is 128; the preset size is the fifth size.
  • the electronic device can adjust the sizes of the feature map of the first size, the feature map of the second size, the feature map of the third size, and the feature map of the fourth size corresponding to each channel from the first size, the second size, the third size, and the fourth size, respectively, to the fifth size, obtaining the adjusted feature maps corresponding to the multiple channels.
  • for example, after the electronic device adjusts the size of the feature map of the first size, it may still obtain a feature map of the first size; the two may differ only in the features they represent.
  • the electronic device may also call the first sub-module according to actual needs to adjust the sizes of the feature maps in the second feature map set to other sizes, which is not specifically limited here.
  • the electronic device can call the second sub-module to perform fusion processing on the feature maps corresponding to each channel to obtain the fused feature maps corresponding to multiple channels. Then, the electronic device may perform convolution processing on the fused feature maps corresponding to the multiple channels to obtain the convolved feature maps. Wherein, the number of channels corresponding to the convolved feature map may be 2. Then, the electronic device can perform up-sampling processing on the convolved feature map to obtain the up-sampled feature map. Wherein, the number of channels corresponding to the up-sampled feature map may be 2.
  • the pixel value of each pixel in the up-sampled feature map of one channel (assuming it is channel C1) represents the probability that the pixel belongs to the portrait, and the pixel value of each pixel in the up-sampled feature map of the other channel represents the probability that the pixel belongs to the background.
  • the electronic device can determine the pixel value of each pixel in the portrait segmentation mask according to the pixel value of each pixel in the up-sampled feature map of the channel C1, so as to obtain the portrait segmentation mask.
  • if the pixel value of a certain pixel in the up-sampled feature map is not less than 0.5, the pixel value of the corresponding pixel in the portrait segmentation mask is 1; when the pixel value of a certain pixel in the up-sampled feature map is less than 0.5, the pixel value of the corresponding pixel in the portrait segmentation mask is 0.
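The 0.5 thresholding described above can be sketched as follows (a hypothetical helper; the channel-C1 probabilities are illustrative values):

```python
def binarize(prob_map, threshold=0.5):
    # Pixels whose portrait probability is not less than the threshold are
    # marked 1 (foreground); the rest are marked 0 (background).
    return [[1 if p >= threshold else 0 for p in row] for row in prob_map]

probabilities = [[0.9, 0.4], [0.5, 0.1]]  # channel C1: probability of "portrait"
print(binarize(probabilities))  # [[1, 0], [1, 0]]
```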
  • the upsampling of the convolved feature map is to make the size of the finally obtained portrait segmentation mask the same as the size of the image to be segmented, so that the portrait can be better segmented from the image to be segmented.
  • the first submodule includes a first network block, a second network block, and a third network block, "call the first submodule to adjust the size of the feature map in the second feature map set to a preset size, Get the adjusted feature map", including:
  • the first network block, the second network block, and/or the third network block are called to adjust the feature maps in the second feature map set to a preset size to obtain an adjusted feature map.
  • the electronic device can pass the feature map of the first size through the first network block, thereby adjusting the size of the feature map of the first size to the preset size.
  • the electronic device may cause the feature map of the second size to pass through the second network block, thereby adjusting the size of the feature map of the second size to a preset size.
  • the electronic device can cause the feature map of the third size to pass through the sequentially connected third network block and second network block, thereby adjusting the size of the feature map of the third size to the preset size.
  • the electronic device may cause the feature map of the fourth size to pass through the sequentially connected third network block N31, the third network block N32, and the second network block, thereby adjusting the size of the feature map of the fourth size to the preset size.
  • the preset size can be set according to actual needs.
  • the preset size may be 1/4 or 1/8 of the size of the image to be segmented.
  • the preset size is 64×64.
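Under the example figures above (a preset size of 64 for a 256×256 image to be segmented, and feature maps of sizes 64, 32, 16, and 8 for the four scales), one can check how many 2-times up-sampling layers each scale must pass through to reach the preset size; this is consistent with the routing through the network blocks described in this embodiment:

```python
import math

preset_size = 64  # example preset size for a 256x256 image to be segmented
feature_sizes = {"first": 64, "second": 32, "third": 16, "fourth": 8}

def upsamplings_needed(size, preset=preset_size):
    # Each 2x up-sampling layer doubles the feature-map size.
    return int(math.log2(preset // size))

route = {name: upsamplings_needed(s) for name, s in feature_sizes.items()}
print(route)  # {'first': 0, 'second': 1, 'third': 2, 'fourth': 3}
```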
  • the first network block may include a convolutional layer, a normalization layer, and an activation layer that are sequentially connected.
  • the second network block includes a convolutional layer, a normalization layer, an activation layer, and an up-sampling layer that are sequentially connected.
  • the third network block includes a convolutional layer, a normalization layer, an activation layer, and an up-sampling layer that are sequentially connected.
  • the number of input channels of the third network block is the same as the number of output channels of the third network block.
  • the composition structure of the first network block, the second network block, and the third network block may be as shown in FIG. 3.
  • the number of input channels of the first network block may be 128, and the number of output channels may be 64.
  • the number of input channels of the second network block can be 128, and the number of output channels can be 64.
  • the number of input and output channels of the third network block may both be 128.
  • the up-sampling layers in both the second network block and the third network block can be 2-times bilinear interpolation up-sampling layers.
  • the electronic device can pass the feature map of the first size through the sequentially connected convolutional layer, normalization layer, and activation layer of the first network block, thereby adjusting the size of the feature map of the first size to the preset size.
  • the electronic device can pass the feature map of the second size through the sequentially connected convolutional layer, normalization layer, activation layer, and up-sampling layer of the second network block, thereby adjusting the size of the feature map of the second size to the preset size.
  • the electronic device can pass the feature map of the third size through the sequentially connected convolutional layer, normalization layer, activation layer, and up-sampling layer of the third network block, and then through the convolutional layer, normalization layer, activation layer, and up-sampling layer of the second network block, thereby adjusting the size of the feature map of the third size to the preset size.
  • the electronic device can pass the feature map of the fourth size through the sequentially connected convolutional layer, normalization layer, activation layer, and up-sampling layer of the third network block N31, then those of the third network block N32, and then those of the second network block, thereby adjusting the size of the feature map of the fourth size to the preset size.
  • the size of the feature map obtained after the feature map of the first size is adjusted by the first network block is still the first size.
  • because the first network block includes a convolutional layer, a normalization layer, and so on, the feature map obtained after the feature map of the first size is adjusted by the first network block is still of the first size, but it is different from the original feature map of the first size.
  • similarly, the feature maps of other sizes obtained after adjustment are different from the feature maps before the adjustment.
  • FIG. 3 is only an example proposed in the embodiment of the present application, and is not used to limit the present application.
  • the composition structures of the first network block, the second network block, and the third network block may also take other forms, which are not specifically limited here.
  • the second sub-module includes a first fusion layer, a first convolutional layer, and a first upsampling layer.
  • the first fusion layer, the first convolutional layer, and the first upsampling layer are connected in sequence, and "calling the second sub-module to perform fusion, convolution, and sampling processing on the adjusted feature map to obtain a portrait segmentation mask" can include:
  • the first up-sampling layer is called to perform up-sampling processing on the convolved feature map to obtain a portrait segmentation mask.
  • the electronic device may call the first fusion layer to perform fusion processing on the feature maps corresponding to each channel to obtain fused feature maps corresponding to multiple channels. Then, the electronic device may call the first convolution layer to perform convolution processing on the fused feature maps corresponding to the multiple channels to obtain the convolved feature maps. Wherein, the number of channels corresponding to the convolved feature map may be 2. Then, the electronic device can call the first up-sampling layer to perform up-sampling processing on the convolved feature map to obtain a dual-channel feature map.
  • the pixel value of each pixel in the feature map of one channel indicates the probability that the pixel belongs to the portrait
  • the pixel value of each pixel in the feature map of the other channel indicates that the pixel belongs to the background The probability.
  • the electronic device can replace the pixel values of pixels with pixel values not less than 0.5 in the feature map of channel C1 with 1, and replace the pixel values of pixels with pixel values less than 0.5 with 0 to obtain a portrait segmentation mask.
  • the upsampling of the convolved feature map is to make the size of the finally obtained portrait segmentation mask the same as the size of the image to be segmented, so that the portrait can be better segmented from the image to be segmented.
  • the multiple feature maps are fused as follows: based on overlapping the multiple feature maps, the pixel values at the same position in the multiple feature maps are added and averaged, and the average value is used as the pixel value at the corresponding position of the fused feature map.
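This averaging fusion can be sketched as follows (illustrative only, over small nested lists rather than tensors):

```python
def fuse(feature_maps):
    # Overlap equal-sized feature maps and average the pixel values at
    # each position; the average becomes the fused map's pixel value.
    count = len(feature_maps)
    rows, cols = len(feature_maps[0]), len(feature_maps[0][0])
    return [
        [sum(fm[r][c] for fm in feature_maps) / count for c in range(cols)]
        for r in range(rows)
    ]

a = [[1.0, 2.0]]
b = [[3.0, 4.0]]
print(fuse([a, b]))  # [[2.0, 3.0]]
```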
  • the feature pyramid module may include a second convolutional layer, a third convolutional layer, a second upsampling layer, and a second fusion layer.
  • "inputting the first feature map set into the feature pyramid module to obtain the second feature map set" can include:
  • the feature map in the ninth feature map set and the feature map with the smallest size in the sixth feature map set constitute the second feature map set.
  • the first feature map set includes a first size feature map, a second size feature map, a third size feature map, and a fourth size feature map
  • the number of channels corresponding to the first size feature map is 24,
  • the number of channels corresponding to the second size feature map is 32
  • the number of channels corresponding to the third size feature map is 64
  • the number of channels corresponding to the fourth size feature map is 320
  • the second convolution layer includes 128 convolution kernels
  • the second up-sampling layer is a 2-times linear interpolation up-sampling layer.
  • the size of the feature map of the first size is twice the size of the feature map of the second size, the size of the feature map of the second size is twice the size of the feature map of the third size, and the size of the feature map of the third size is twice the size of the feature map of the fourth size.
  • the electronic device can call the 128 convolution kernels of the second convolution layer to perform convolution processing on the feature map of the first size, the feature map of the second size, the feature map of the third size, and the feature map of the fourth size.
  • the number of channels corresponding to the feature map of the first size, the feature map of the second size, the feature map of the third size, and the feature map of the fourth size are unified to 128, thereby forming a sixth feature map set.
  • the electronic device can call the 2-times linear interpolation up-sampling layer to perform 2-times up-sampling processing on the feature map of the second size, the feature map of the third size, and the feature map of the fourth size after the number of channels is unified, to obtain a target feature map of the first size, a target feature map of the second size, and a target feature map of the third size.
  • the target feature map of the first size, the target feature map of the second size, and the target feature map of the third size constitute a seventh feature map set.
  • the electronic device can call the second fusion layer to perform fusion processing on the feature maps of the same size of each channel to obtain the fusion feature map of the first size, the fusion feature map of the second size, and the fusion feature map of the third size.
  • the electronic device may perform fusion processing on the feature map of the first size and the target feature map of the first size of each channel to obtain the fusion feature map of the first size.
  • the fusion feature map of the first size, the fusion feature map of the second size, and the fusion feature map of the third size constitute an eighth feature map set.
  • the electronic device may call the third convolution layer to perform convolution processing on the feature maps in the eighth feature map set again to obtain the ninth feature map set.
  • the feature maps in the ninth feature map set and the feature maps with the smallest size in the sixth feature map set can constitute the second feature map set.
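The pyramid flow described above (unify channels to 128, 2x up-sample, fuse same-size maps, convolve again, then keep the smallest map unchanged) can be sketched in PyTorch. The class name, the 1x1/3x3 kernel sizes, and the average-based fusion are illustrative assumptions; the channel counts follow the embodiment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeaturePyramid(nn.Module):
    """Minimal sketch of the feature pyramid module described above."""
    def __init__(self, in_channels=(24, 32, 64, 320), out_channels=128):
        super().__init__()
        # Second convolution layer: unify every scale to 128 channels
        # (the "sixth feature map set").
        self.lateral = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels)
        # Third convolution layer: convolve each fused map again
        # (the "ninth feature map set").
        self.smooth = nn.ModuleList(
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
            for _ in range(len(in_channels) - 1))

    def forward(self, feats):
        # feats: maps at 1/4, 1/8, 1/16, 1/32 of the input size
        laterals = [conv(f) for conv, f in zip(self.lateral, feats)]
        fused = []
        for i in range(len(laterals) - 1):
            # linear-interpolation 2-times up-sampling of the next-smaller map
            up = F.interpolate(laterals[i + 1], scale_factor=2,
                               mode='bilinear', align_corners=False)
            # fusion with the same-size map (element-wise average, per the text)
            fused.append(self.smooth[i]((laterals[i] + up) / 2))
        # ninth feature map set + smallest map of the sixth set
        return fused + [laterals[-1]]
```

Feeding encoder maps with channels (24, 32, 64, 320) yields four 128-channel maps at the original 1/4, 1/8, 1/16, and 1/32 scales.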
  • the structure of the portrait segmentation model may include an encoding module, a feature pyramid module, a decoding module, and a supervision module.
  • the feature pyramid module is respectively connected with the encoding module, the decoding module and the supervision module.
  • the sample image may be an image in the supervisely data set.
  • the electronic device can obtain a sample image from the supervisely data set, and an annotation mask corresponding to the sample image. Subsequently, the electronic device can use the sample image and the labeling mask corresponding to the sample image to train the portrait segmentation model.
  • the electronic device can use the supervisely data set, based on the PyTorch framework, and use a suitable graphics processor (GPU) to train the portrait segmentation model.
  • the loss function can adopt the cross entropy loss function
  • the evaluation function can adopt the intersection over union (IoU).
  • “training the portrait segmentation model using the sample image, the label mask corresponding to the sample image, and the supervision module” may include:
  • the electronic device can obtain the desired mask corresponding to each sample image from that sample image, in the same manner as described above for obtaining the portrait segmentation mask from the image to be segmented.
  • the electronic device may also obtain the fourth feature map set corresponding to each sample image from that sample image, in the same manner as described above for obtaining the second feature map set from the image to be segmented.
  • the electronic device can call the supervision module to perform restoration processing such as convolution and up-sampling on the feature maps in the fourth feature map set corresponding to each sample image, to obtain multiple supervision masks corresponding to each sample image.
  • the desired mask and the supervision mask are 2-channel images: the pixel value of each pixel in one channel represents the probability that the pixel belongs to the portrait, and the pixel value of each pixel in the other channel represents the probability that the pixel belongs to the background.
  • the electronic device can use the cross-entropy loss function to calculate the loss value between the desired mask and the label mask corresponding to each sample image, as well as the loss value between each supervision mask and the label mask, obtaining multiple loss values. Subsequently, the electronic device can calculate the sum of these loss values to obtain the total loss value corresponding to each sample image. Then, the electronic device can calculate the average of the total loss values over the multiple sample images and use it as the total loss value of the portrait segmentation model. When this total loss value converges, the electronic device can save the parameters of the portrait segmentation model to obtain the trained portrait segmentation model. When deploying the trained portrait segmentation model, the supervision module is removed to reduce the amount of computation.
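The per-sample loss computation just described can be sketched in plain Python. The masks are flattened to lists of per-pixel foreground probabilities, an illustrative simplification of the 2-channel masks, and the `eps` clamp is an assumption to guard against log(0).

```python
import math

def pixel_bce(p, y):
    """Cross-entropy of one pixel: p = predicted probability of class 1,
    y = true label (0 or 1)."""
    eps = 1e-7  # guard against log(0); value is an assumption
    p = min(max(p, eps), 1 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def mask_loss(pred_mask, label_mask):
    """Mean cross-entropy over all pixels of one mask."""
    losses = [pixel_bce(p, y) for p, y in zip(pred_mask, label_mask)]
    return sum(losses) / len(losses)

def sample_loss(desired_mask, supervision_masks, label_mask):
    """Total loss for one sample: the desired-mask loss plus the loss of
    every supervision mask against the same annotation mask."""
    total = mask_loss(desired_mask, label_mask)
    for sup in supervision_masks:
        total += mask_loss(sup, label_mask)
    return total
```

Averaging `sample_loss` over all sample images then gives the model's total loss value described above.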
  • the formula of the loss function for a single sample image can be: L = -Σ_i [y_i·log(p_i) + (1 - y_i)·log(1 - p_i)]
  • L represents the loss value of the sample image
  • y_i represents the true category of the i-th pixel of the sample image, with a value of 0 or 1, where 0 indicates that the pixel is the background, and 1 indicates that the pixel is the foreground.
  • p_i represents the probability that the i-th pixel of the sample image belongs to class 1.
  • the electronic device can use the intersection-over-union (IoU) evaluation function to calculate the evaluation value corresponding to each sample image. Subsequently, the electronic device can calculate the average of the evaluation values of the multiple sample images and use it as the evaluation value corresponding to the portrait segmentation model, so that the portrait segmentation model can be evaluated by this value.
  • the formula of the evaluation function of a single sample image can be: IoU = |X ∩ Y| / |X ∪ Y|
  • IoU represents the evaluation value of the sample image
  • X represents the expected mask of the sample image
  • Y represents the annotation mask of the sample image.
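The evaluation formula above can be sketched for flattened binary masks; treating both-empty masks as a perfect score of 1.0 is an assumption, since the source does not define that edge case.

```python
def iou(expected_mask, label_mask):
    """IoU = |X ∩ Y| / |X ∪ Y| over binary 0/1 masks (flattened lists)."""
    inter = sum(1 for x, y in zip(expected_mask, label_mask) if x == 1 and y == 1)
    union = sum(1 for x, y in zip(expected_mask, label_mask) if x == 1 or y == 1)
    return inter / union if union else 1.0  # both empty: define IoU as 1
```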
  • the fourth feature map set includes a feature map of the first size, a feature map of the second size, a feature map of the third size, and a feature map of the fourth size;
  • the size of the feature map of the first size is 1/4 of the size of the image to be segmented;
  • the size of the feature map of the second size is 1/8 of the size of the image to be segmented;
  • the size of the feature map of the third size is 1/16 of the size of the image to be segmented;
  • the size of the feature map of the fourth size is 1/32 of the size of the image to be segmented;
  • the number of channels corresponding to the feature map of the first size, the feature map of the second size, the feature map of the third size, and the feature map of the fourth size is 128 in each case; that is, there are 128 feature maps at each of the first, second, third, and fourth sizes;
  • the number of fourth convolutional layers can be 4, and each fourth convolutional layer can include 2 convolution kernels.
  • the number of third up-sampling layers can be 4.
  • the first of the third up-sampling layers corresponds to a sampling multiple of 4;
  • the second of the third up-sampling layers corresponds to a sampling multiple of 8;
  • the third of the third up-sampling layers corresponds to a sampling multiple of 16;
  • the fourth of the third up-sampling layers corresponds to a sampling multiple of 32.
  • the electronic device can call the two convolution kernels of the first of the fourth convolutional layers to perform convolution processing on the feature map of the first size, obtaining a feature map of the first size with 2 channels. Then, the electronic device can call the first of the third up-sampling layers to perform 4-times up-sampling processing on this feature map to obtain the first supervision mask.
  • the electronic device can call the second of the fourth convolutional layers and the second of the third up-sampling layers, and determine the second supervision mask based on the feature map of the second size.
  • the electronic device can call the third of the fourth convolutional layers and the third of the third up-sampling layers, and determine the third supervision mask according to the feature map of the third size.
  • the electronic device can call the fourth of the fourth convolutional layers and the fourth of the third up-sampling layers, and determine the fourth supervision mask based on the feature map of the fourth size. In this way, deep features can be supervised at multiple scales, and the additional gradients provided to deep features can improve the effect of portrait segmentation and reduce false-positive predictions.
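One supervision branch can be sketched in PyTorch as follows. The 2 output channels and the per-level scale factors (4/8/16/32) follow the description above; the 1x1 kernel size, bilinear interpolation mode, and class name are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupervisionHead(nn.Module):
    """Sketch of one supervision branch: a 2-kernel convolution reduces a
    128-channel pyramid map to 2 channels (portrait / background), then
    linear-interpolation up-sampling restores the input-image size."""
    def __init__(self, in_channels=128, scale=4):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 2, kernel_size=1)  # 2 conv kernels
        self.scale = scale  # 4, 8, 16, or 32 depending on the pyramid level

    def forward(self, x):
        x = self.conv(x)
        return F.interpolate(x, scale_factor=self.scale,
                             mode='bilinear', align_corners=False)
```

A branch with `scale=4` maps a 1/4-size feature map back to a full-size, 2-channel supervision mask.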
  • "acquiring a sample image” may include:
  • the electronic device can obtain some original images from the supervisely data set; then, the electronic device can perform data enhancement processing such as random rotation, random left-right flips, random cropping, and gamma transformation on these original images to obtain sample images. This can increase the amount of training data to improve the generalization ability of the model, and can add noise data to improve the robustness of the model.
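An augmentation pipeline of this kind can be sketched with NumPy. The flip probability, crop ratio, and gamma range are assumptions (the source names the operations but not their parameters), and random rotation is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Illustrative augmentation: random left-right flip, random crop,
    gamma transform. `image` is an HxWxC float array in [0, 1]."""
    # random left-right flip
    if rng.random() < 0.5:
        image = image[:, ::-1, :]
    # random crop to 90% of each side at a random offset
    h, w = image.shape[:2]
    ch, cw = int(h * 0.9), int(w * 0.9)
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    image = image[top:top + ch, left:left + cw, :]
    # gamma transform: out = in ** gamma, gamma drawn around 1.0
    gamma = rng.uniform(0.7, 1.5)
    return np.clip(image, 0, 1) ** gamma
```

The same geometric operations (flip, crop, rotation) must of course also be applied to the annotation mask so that it stays aligned with the sample image.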
  • the image processing method provided by the embodiments of the present application can provide accurate portrait segmentation masks for image processing algorithms such as beautification and background replacement, which can make the background blur of portraits more accurate, and can serve as the core algorithm of a quick ID-photo generator.
  • the embodiment of the present application does not limit the number of feature maps in each feature map set, and can be flexibly adjusted according to the specific data set situation.
  • FIG. 5 is a schematic diagram of a third structure of the portrait segmentation model provided by an embodiment of the present application.
  • the electronic device can use the supervisely data set, based on the PyTorch framework, and use a suitable graphics processor (GPU) to train the portrait segmentation model.
  • the electronic device can divide the supervisely data set into a test set and a training set at a ratio of 2:8, and perform data enhancement processing such as random rotation, random left-right flips, random cropping, and gamma transformation on the images in the training set to obtain sample images.
  • the electronic device may also obtain the labeling mask M6 corresponding to the sample image, and construct a portrait segmentation model, which may include an encoding module, a feature pyramid module, a supervision module, and a decoding module.
  • the feature pyramid module is respectively connected with the encoding module, the supervision module, and the decoding module.
  • the electronic device can input the sample image to the first layer of the encoding module.
  • the electronic device can call the first layer of the encoding module to output the sample image to the second layer of the encoding module.
  • the electronic device calls the second layer of the encoding module to determine a feature map F1 whose size is 1/4 of the size of the sample image according to the sample image, where the number of channels corresponding to the feature map F1 is 24.
  • the electronic device can call the second layer of the encoding module to output the feature map F1 to the third layer of the encoding module.
  • the electronic device calls the third layer of the encoding module to determine the feature map F2 whose size is 1/8 of the size of the sample image according to the feature map F1, where the number of channels corresponding to the feature map F2 is 32.
  • the electronic device can call the third layer of the encoding module to output the feature map F2 to the fourth layer of the encoding module.
  • the electronic device calls the fourth layer of the encoding module to determine the feature map F3 whose size is 1/16 of the size of the sample image according to the feature map F2, where the number of channels corresponding to the feature map F3 is 64.
  • the electronic device can call the fourth layer of the encoding module to output the feature map F3 to the fifth layer of the encoding module.
  • the electronic device calls the fifth layer of the encoding module to determine the feature map F4 whose size is 1/32 of the size of the sample image according to the feature map F3, where the number of channels corresponding to the feature map F4 is 320.
  • the feature maps F1, F2, F3, and F4 may constitute the first feature map set.
  • the electronic device can also call the second, third, fourth, and fifth layers of the encoding module to output the feature map F1, the feature map F2, the feature map F3, and the feature map F4 to the feature pyramid module, respectively.
  • the electronic device can call the convolution layer c1 of the feature pyramid module to perform convolution processing on the feature map F1 to obtain a feature map F5 whose size is 1/4 of the size of the sample image, wherein the number of channels corresponding to the feature map F5 is 128.
  • the electronic device can call the convolution layer c2 of the feature pyramid module to perform convolution processing on the feature map F2 to obtain a feature map F6 whose size is 1/8 of the size of the sample image, wherein the number of channels corresponding to the feature map F6 is 128.
  • the electronic device can call the convolution layer c3 of the feature pyramid module to perform convolution processing on the feature map F3 to obtain a feature map F7 with a size of 1/16 of the size of the sample image, wherein the number of channels corresponding to the feature map F7 is 128.
  • the electronic device can call the convolution layer c4 of the feature pyramid module to perform convolution processing on the feature map F4 to obtain a feature map F8 whose size is 1/32 of the size of the sample image, wherein the number of channels corresponding to the feature map F8 is 128.
  • the electronic device can call the linear interpolation 2-times up-sampling layer u1 of the feature pyramid module to perform 2-times up-sampling processing on the feature map F6 to obtain a feature map F9 whose size is 1/4 of the size of the sample image, where the number of channels corresponding to the feature map F9 is 128.
  • the electronic device can call the linear interpolation 2-times up-sampling layer u2 of the feature pyramid module to perform 2-times up-sampling processing on the feature map F7 to obtain a feature map F10 whose size is 1/8 of the size of the sample image, where the number of channels corresponding to the feature map F10 is 128.
  • the electronic device can call the linear interpolation 2-times up-sampling layer u3 of the feature pyramid module to perform 2-times up-sampling processing on the feature map F8 to obtain a feature map F11 whose size is 1/16 of the size of the sample image, where the number of channels corresponding to the feature map F11 is 128.
  • the electronic device can perform fusion processing on the feature maps F5 and F9 corresponding to each channel to obtain the feature map F12 corresponding to each channel. Among them, the number of channels corresponding to the feature map F12 is 128.
  • the electronic device may perform fusion processing on the feature maps F6 and F10 corresponding to each channel to obtain the feature map F13 corresponding to each channel. Among them, the number of channels corresponding to the feature map F13 is 128.
  • the electronic device can perform fusion processing on the feature maps F7 and F11 corresponding to each channel to obtain the feature map F14 corresponding to each channel. Among them, the number of channels corresponding to the feature map F14 is 128.
  • fusing two feature maps to obtain the target feature map may include: overlapping the two feature maps, adding the pixel values at the same position in the two feature maps, calculating their average value, and using the average value as the pixel value at the corresponding position of the target feature map.
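The fusion step just described is a per-position average of two equal-size maps; a minimal sketch on plain nested lists:

```python
def fuse(a, b):
    """Fusion as described above: overlap two equal-size maps and take the
    per-position average of their pixel values."""
    return [[(x + y) / 2 for x, y in zip(row_a, row_b)]
            for row_a, row_b in zip(a, b)]
```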
  • the electronic device may call the convolution layer c5 of the feature pyramid module to perform convolution processing on the feature map F12 to obtain the feature map F15. Among them, the number of channels corresponding to the feature map F15 is 128.
  • the electronic device may call the convolution layer c6 of the feature pyramid module to perform convolution processing on the feature map F13 to obtain the feature map F16. Among them, the number of channels corresponding to the feature map F16 is 128.
  • the electronic device can call the convolution layer c7 of the feature pyramid module to perform convolution processing on the feature map F14 to obtain the feature map F17. Among them, the number of channels corresponding to the feature map F17 is 128.
  • Feature maps F8, F15, F16, and F17 can form a second feature map set.
  • the electronic device can call the feature pyramid module to output the feature maps F8, F15, F16, and F17 to the supervision module and the decoding module.
  • the electronic device can call the convolution layer c8 and the up-sampling layer u4 of the supervision module to perform convolution processing and 4-times up-sampling processing on the feature map F15 to obtain the supervision mask M1, where the number of channels corresponding to the supervision mask M1 is 2 and its size is the same as the size of the sample image.
  • the electronic device can call the convolution layer c9 and the up-sampling layer u5 of the supervision module to perform convolution processing and 8-times up-sampling processing on the feature map F16 to obtain the supervision mask M2, where the number of channels corresponding to the supervision mask M2 is 2 and its size is the same as the size of the sample image.
  • the electronic device can call the convolution layer c10 and the up-sampling layer u6 of the supervision module to perform convolution processing and 16-times up-sampling processing on the feature map F17 to obtain the supervision mask M3, where the number of channels corresponding to the supervision mask M3 is 2 and its size is the same as the size of the sample image.
  • the electronic device can call the convolution layer c11 and the up-sampling layer u7 of the supervision module to perform convolution processing and 32-times up-sampling processing on the feature map F8 to obtain the supervision mask M4, where the number of channels corresponding to the supervision mask M4 is 2 and its size is the same as the size of the sample image.
  • the electronic device can call the first network block sgr of the decoding module to adjust the size of the feature map F15 to 1/4 of the size of the sample image to obtain the adjusted feature map F18. It can be understood that, in this embodiment, since the size of the feature map F15 is 1/4 of the size of the sample image, the size of the adjusted feature map F18 is still 1/4 of the size of the sample image.
  • the electronic device can call the second network block sgr2x1 of the decoding module to adjust the size of the feature map F16 to 1/4 of the size of the sample image to obtain the adjusted feature map F19.
  • the electronic device can call the sequentially connected third network block cgr2x1 and second network block sgr2x2 of the decoding module to adjust the size of the feature map F17 to 1/4 of the size of the sample image to obtain the adjusted feature map F20.
  • the electronic device can call the sequentially connected third network blocks cgr2x3, cgr2x2, and second network block sgr2x3 of the decoding module to adjust the size of the feature map F8 to 1/4 of the size of the sample image to obtain the adjusted feature map F21.
  • the first network block sgr includes a convolutional layer, a normalization layer, and an activation layer that are sequentially connected.
  • the second network blocks sgr2x1, sgr2x2, and sgr2x3 each include a convolutional layer, a normalization layer, an activation layer, and a linear interpolation 2 times upsampling layer that are sequentially connected.
  • the third network blocks cgr2x1, cgr2x2, and cgr2x3 each include a convolutional layer, a normalization layer, an activation layer, and a linear interpolation 2 times up-sampling layer that are sequentially connected.
  • the third network blocks cgr2x1, cgr2x2, and cgr2x3 each have the same number of input and output channels.
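The sgr and cgr2x/sgr2x blocks described above can be sketched as small PyTorch factories. The conv + norm + activation (+ 2x up-sampling) ordering follows the text; the 3x3 kernel, BatchNorm, and ReLU choices are assumptions, as the source does not name them.

```python
import torch
import torch.nn as nn

def sgr(in_ch, out_ch):
    """First network block: convolution + normalization + activation,
    sequentially connected."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True))

def sgr2x(in_ch, out_ch):
    """Second/third network block: conv + norm + activation followed by a
    linear-interpolation 2x up-sampling layer. For the third blocks
    (cgr2x1..cgr2x3), in_ch == out_ch, matching the equal input/output
    channel counts noted above."""
    return nn.Sequential(
        sgr(in_ch, out_ch),
        nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False))
```

Chaining e.g. `sgr2x(128, 128)` twice after a cgr2x block reproduces the 1/16 → 1/4 size adjustment applied to F17 in the embodiment.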
  • the electronic device can perform fusion processing on the adjusted feature maps F18, F19, F20, and F21 to obtain the fused feature map F22.
  • the electronic device can call the two convolution kernels of the convolution layer c12 and the up-sampling layer u8 to perform convolution processing and 4 times up-sampling processing on the fused feature map F22 to obtain the desired mask M5.
  • the number of channels corresponding to the mask M5 is 2, and the size is the same as the size of the sample image.
  • the electronic device can calculate the cross-entropy loss values between the labeling mask M6 and each of the supervision masks M1, M2, M3, M4 and the desired mask M5, obtaining multiple loss values, and use the average of these loss values as the loss value of a single sample image. Then, the electronic device may use the average of the loss values over multiple sample images as the current total loss value of the portrait segmentation model. The electronic device can execute a back-propagation algorithm according to the total loss value to update the parameters of the portrait segmentation model until the total loss value converges, at which point the electronic device can save the resulting portrait segmentation model.
  • the electronic device can obtain the above-mentioned saved portrait segmentation model and remove its supervision module. Subsequently, the electronic device can obtain the image from which a portrait needs to be segmented and input it into the portrait segmentation model, thereby obtaining a portrait segmentation mask. After obtaining the portrait segmentation mask, the electronic device can segment the portrait from the image to be segmented according to the portrait segmentation mask.
  • the prediction process of a model is usually similar to its training process. Therefore, for how the electronic device uses the portrait segmentation model to obtain the portrait segmentation mask from the image to be segmented, reference may be made to the above training process of the portrait segmentation model, which will not be repeated here.
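The deployment flow (supervision module removed, forward pass, mask applied to the image) can be sketched as follows; `segment_portrait` and the 2-channel logit layout are illustrative assumptions about the model's interface.

```python
import torch

def segment_portrait(model, image):
    """Deployment-time sketch: run the trained model (supervision module
    already removed) on a 1x3xHxW image tensor; the model is assumed to
    return 1x2xHxW logits (channel 0: background, channel 1: portrait)."""
    model.eval()
    with torch.no_grad():
        logits = model(image)                  # 1 x 2 x H x W
    mask = logits.argmax(dim=1, keepdim=True)  # 1 where the portrait wins
    return image * mask                        # zero out the background
```

A hard argmax is used here for simplicity; a soft (probability-weighted) mask could be used instead for smoother edges.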
  • FIG. 6 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the application.
  • the image processing device 200 includes: a first acquisition module 201, a second acquisition module 202, a first calling module 203, an input module 204, a second calling module 205, and a segmentation module 206.
  • the first acquisition module 201 is used to acquire the image to be segmented that needs to be segmented.
  • the second acquisition module 202 is configured to acquire a pre-trained portrait segmentation model.
  • the pre-trained portrait segmentation model includes: an encoding module, a feature pyramid module, and a decoding module.
  • the first calling module 203 is configured to call the encoding module to perform encoding processing on the image to be divided to obtain a first feature map set.
  • the input module 204 is configured to input the first feature map set into the feature pyramid module to obtain a second feature map set.
  • the second calling module 205 is configured to call the decoding module to perform decoding processing on the second feature map set to obtain a portrait segmentation mask.
  • the segmentation module 206 is configured to segment the portrait from the image to be segmented according to the portrait segmentation mask.
  • the decoding module includes a first submodule and a second submodule.
  • the second calling module 205 may be used to: call the first submodule to adjust the size of the feature maps in the second feature map set to a preset size to obtain an adjusted feature map; and call the second submodule to perform fusion, convolution, and up-sampling processing on the adjusted feature map to obtain a portrait segmentation mask.
  • the first submodule includes a first network block, a second network block, and a third network block.
  • the second calling module 205 may be used to: call the first network block, the second network block, and/or the third network block to adjust the feature maps in the second feature map set to a preset size to obtain an adjusted feature map.
  • the first network block includes a convolutional layer, a normalization layer, and an activation layer that are sequentially connected;
  • the second network block includes a convolutional layer, a normalization layer, an activation layer, and an up-sampling layer that are sequentially connected.
  • the third network block includes a convolutional layer, a normalization layer, an activation layer, and an up-sampling layer that are sequentially connected, and the number of input channels of the third network block is the same as its number of output channels.
  • the second sub-module includes a first fusion layer, a first convolutional layer, and a first up-sampling layer
  • the second calling module 205 may be used to: call the first fusion layer to perform fusion processing on the adjusted feature map to obtain a fused feature map; call the first convolutional layer to perform convolution processing on the fused feature map to obtain a convolved feature map; and call the first up-sampling layer to perform up-sampling processing on the convolved feature map to obtain a portrait segmentation mask.
  • the first acquisition module 201 may be used to: obtain a sample image and an annotation mask corresponding to the sample image; obtain a supervision module; and train the portrait segmentation model using the sample image, the annotation mask corresponding to the sample image, and the supervision module.
  • the first acquisition module 201 may be used to: call the encoding module to perform encoding processing on the sample image to obtain a third feature map set; input the third feature map set into the feature pyramid module to obtain a fourth feature map set; call the decoding module to decode the fourth feature map set to obtain the desired mask; call the supervision module to restore the fourth feature map set to obtain multiple supervision masks; and adjust the parameters of the portrait segmentation model according to the difference between the desired mask and the annotation mask and the differences between each supervision mask and the annotation mask.
  • the supervision module includes a fourth convolutional layer and a third up-sampling layer
  • the first acquisition module 201 can be used to: call the fourth convolutional layer to perform convolution processing on the feature maps in the fourth feature map set to obtain a fifth feature map set; and call the third up-sampling layer to perform up-sampling processing on the feature maps in the fifth feature map set to obtain multiple supervision masks.
  • the first acquisition module 201 may be used to: acquire an original image; perform data enhancement processing on the original image to obtain a sample image.
  • the embodiment of the present application provides a computer-readable storage medium on which a computer program is stored.
  • when the computer program is executed on a computer, the computer is caused to execute the process of the image processing method provided in this embodiment.
  • An embodiment of the present application also provides an electronic device, including a memory and a processor, and the processor is configured to execute a process in the image processing method provided in this embodiment by calling a computer program stored in the memory.
  • the above-mentioned electronic device may be a mobile terminal such as a tablet computer or a smart phone.
  • FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
  • the electronic device 300 may include components such as a camera module 301, a memory 302, and a processor 303.
  • FIG. 7 does not constitute a limitation on the electronic device, and may include more or fewer components than those shown in the figure, or a combination of certain components, or different component arrangements.
  • the camera module 301 may include a lens, an image sensor, and an image signal processor.
  • the lens is used to collect an external light source signal and provide it to the image sensor.
  • the image sensor senses the light source signal from the lens, converts it into a digitized original image, namely a RAW image, and provides the RAW image to the image signal processor for processing.
  • the image signal processor can perform format conversion and noise reduction on the RAW image to obtain a YUV image.
  • RAW is an unprocessed and uncompressed format, which can be vividly called a "digital negative".
  • YUV is a color encoding method, where Y represents luminance (brightness) and U and V represent chrominance. Human eyes can intuitively perceive the natural features contained in YUV images.
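The RGB-to-YUV conversion the image signal processor performs can be sketched per pixel. The BT.601 full-range coefficients used here are an assumption, since the source only names the color space.

```python
def rgb_to_yuv(r, g, b):
    """BT.601 full-range RGB -> YUV for one pixel; Y is luminance,
    U and V carry the chrominance."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)  # scaled (B - Y) difference
    v = 0.877 * (r - y)  # scaled (R - Y) difference
    return y, u, v
```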
  • the memory 302 can be used to store application programs and data.
  • the application program stored in the memory 302 contains executable code.
  • Application programs can be composed of various functional modules.
  • the processor 303 executes various functional applications and data processing by running application programs stored in the memory 302.
  • the processor 303 is the control center of the electronic device. It uses various interfaces and lines to connect the various parts of the entire electronic device, and performs the various functions of the electronic device and processes data by running or executing the application programs stored in the memory 302 and calling the data stored in the memory 302, thereby monitoring the electronic device as a whole.
  • the processor 303 in the electronic device will load the executable code corresponding to the processes of one or more application programs into the memory 302 according to the following instructions, and the processor 303 will run the executable code stored in the memory 302, thereby implementing the following:
  • the pre-trained portrait segmentation model includes: an encoding module, a feature pyramid module, and a decoding module;
  • a portrait is segmented from the image to be segmented.
  • the electronic device 300 may include components such as a camera module 301, a memory 302, a processor 303, a touch screen 304, a speaker 305, and a microphone 306.
  • the camera module 301 may include an image processing circuit, which may be implemented by hardware and/or software components, and may include various processing units that define an image signal processing (Image Signal Processing) pipeline.
  • the image processing circuit may at least include a camera, an image signal processor (Image Signal Processor, ISP processor), a control logic, an image memory, a display, and so on.
  • the camera may at least include one or more lenses and image sensors.
  • the image sensor may include a color filter array (such as a Bayer filter). The image sensor can obtain the light intensity and wavelength information captured by each imaging pixel of the image sensor, and provide a set of raw image data that can be processed by the image signal processor.
  • the image signal processor can process the original image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the image signal processor may perform one or more image processing operations on the original image data and collect statistical information about the image data. Among them, the image processing operations can be performed with the same or different bit depth accuracy.
  • the original image data can be stored in the image memory after being processed by the image signal processor.
  • the image signal processor can also receive image data from the image memory.
  • the image memory may be a part of a memory device, a storage device, or an independent dedicated memory in an electronic device, and may include DMA (Direct Memory Access) features.
  • the image signal processor can perform one or more image processing operations, such as temporal filtering.
  • the processed image data can be sent to the image memory for additional processing before being displayed.
  • the image signal processor may also receive processed data from the image memory, and perform image data processing in the original domain and in the RGB and YCbCr color spaces on the processed data.
  • the processed image data can be output to a display for viewing by the user and/or further processed by a graphics engine or GPU (Graphics Processing Unit, image processor).
  • the output of the image signal processor can also be sent to the image memory, and the display can read image data from the image memory.
  • the image memory may be configured to implement one or more frame buffers.
  • the statistical data determined by the image signal processor can be sent to the control logic.
  • statistical data may include image sensor statistical information such as automatic exposure, automatic white balance, automatic focus, flicker detection, black level compensation, and lens shading correction.
  • the control logic may include a processor and/or microcontroller that executes one or more routines (such as firmware).
  • these routines can determine the control parameters of the camera and the ISP control parameters based on the received statistical data.
  • the control parameters of the camera may include camera flash control parameters, lens control parameters (for example, focal length for focusing or zooming), or a combination of these parameters.
  • ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (for example, during RGB processing).
  • FIG. 9 is a schematic diagram of the structure of the image processing circuit in this embodiment. As shown in FIG. 9, for ease of description, only various aspects of the image processing technology related to the embodiments of the present application are shown.
  • the image processing circuit may include: a camera, an image signal processor, a control logic, an image memory, and a display.
  • the camera may include one or more lenses and image sensors.
  • the camera may be either a telephoto camera or a wide-angle camera.
  • the first image collected by the camera is transmitted to the image signal processor for processing.
  • the image signal processor may send statistical data of the first image (such as the brightness of the image, the contrast value of the image, the color of the image, etc.) to the control logic.
  • the control logic can determine the control parameters of the camera according to the statistical data, so that the camera can perform operations such as autofocus and automatic exposure according to the control parameters.
  • the first image can be stored in the image memory after being processed by the image signal processor.
  • the image signal processor can also read the image stored in the image memory for processing.
  • the first image can be directly sent to the display for display after being processed by the image signal processor.
  • the display can also read the image in the image memory for display.
  • the electronic device may also include a CPU and a power supply module.
  • the CPU is connected to the control logic, the image signal processor, the image memory, and the display, and is used to implement global control.
  • the power supply module is used to supply power to each module.
  • the application program stored in the memory 302 contains executable code.
  • Application programs can be composed of various functional modules.
  • the processor 303 executes various functional applications and data processing by running application programs stored in the memory 302.
  • the processor 303 is the control center of the electronic device. It connects the various parts of the entire electronic device through various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing the application programs stored in the memory 302 and calling the data stored in the memory 302, thereby monitoring the electronic device as a whole.
  • the touch screen 304 may be used to receive a user's touch control operation on the electronic device.
  • the speaker 305 can play sound signals.
  • the sensor 306 may include a gyroscope sensor, an acceleration sensor, a direction sensor, a magnetic field sensor, etc., which may be used to obtain the current posture of the electronic device 300.
  • the processor 303 in the electronic device will load the executable code corresponding to the processes of one or more application programs into the memory 302 according to the following instructions, and the processor 303 will run the application programs stored in the memory 302 to execute the method.
  • the pre-trained portrait segmentation model includes: an encoding module, a feature pyramid module, and a decoding module;
  • a portrait is segmented from the image to be segmented.
  • the decoding module includes a first sub-module and a second sub-module. When the processor 303 executes calling the decoding module to decode the second feature map set to obtain a portrait segmentation mask, it may execute: calling the first sub-module to adjust the size of the feature maps in the second feature map set to a preset size to obtain adjusted feature maps; and calling the second sub-module to perform fusion, convolution, and sampling processing on the adjusted feature maps to obtain a portrait segmentation mask.
  • the first sub-module includes a first network block, a second network block, and a third network block. When the processor 303 executes calling the first sub-module to adjust the size of the feature maps in the second feature map set to a preset size to obtain adjusted feature maps, it may execute: calling the first network block, the second network block, and/or the third network block to adjust the feature maps in the second feature map set to the preset size to obtain adjusted feature maps.
  • the first network block includes a convolutional layer, a normalization layer, and an activation layer that are sequentially connected;
  • the second network block includes a convolutional layer, a normalization layer, an activation layer, and an up-sampling layer that are sequentially connected;
  • the third network block includes a convolutional layer, a normalization layer, an activation layer, and an up-sampling layer that are sequentially connected, and the number of input channels of the third network block is the same as the number of output channels of the third network block.
  • the second sub-module includes a first fusion layer, a first convolutional layer, and a first up-sampling layer. When the processor 303 executes calling the second sub-module to perform fusion, convolution, and sampling processing on the adjusted feature maps to obtain a portrait segmentation mask, it may execute: calling the first fusion layer to fuse the adjusted feature maps to obtain a fused feature map; calling the first convolutional layer to convolve the fused feature map to obtain a convolved feature map; and calling the first up-sampling layer to up-sample the convolved feature map to obtain a portrait segmentation mask.
  • before the processor 303 executes acquiring the image to be segmented, it may also execute: acquiring a sample image and an annotation mask corresponding to the sample image; acquiring a supervision module; and training the portrait segmentation model using the sample image, the annotation mask corresponding to the sample image, and the supervision module.
  • when the processor 303 executes training the portrait segmentation model using the sample image, the annotation mask corresponding to the sample image, and the supervision module, it may execute: calling the encoding module to encode the sample image to obtain a third feature map set; inputting the third feature map set into the feature pyramid module to obtain a fourth feature map set; calling the decoding module to decode the fourth feature map set to obtain a desired mask; calling the supervision module to restore the fourth feature map set to obtain multiple supervision masks; and adjusting the parameters of the portrait segmentation model according to the difference between the desired mask and the annotation mask and the difference between each supervision mask and the annotation mask.
  • the supervision module includes a fourth convolutional layer and a third up-sampling layer. When the processor 303 executes calling the supervision module to restore the fourth feature map set to obtain multiple supervision masks, it may execute: calling the fourth convolutional layer to convolve the feature maps in the fourth feature map set to obtain a fifth feature map set; and calling the third up-sampling layer to up-sample the feature maps in the fifth feature map set to obtain multiple supervision masks.
  • when the processor 303 executes acquiring a sample image, it may execute: acquiring an original image; and performing data enhancement processing on the original image to obtain a sample image.
  • the image processing device provided in the embodiments of this application belongs to the same concept as the image processing method in the above embodiments, and any method provided in the image processing method embodiments can run on the image processing device.
  • the computer program may be stored in a computer readable storage medium, such as stored in a memory, and executed by at least one processor.
  • the execution process may include the process of the embodiment of the image processing method.
  • the storage medium may be a magnetic disk, an optical disc, a read only memory (ROM, Read Only Memory), a random access memory (RAM, Random Access Memory), etc.
  • as for the image processing device of the embodiments of the present application, its functional modules may be integrated into one processing chip, each module may exist physically alone, or two or more modules may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules. If an integrated module is implemented as a software functional module and sold or used as an independent product, it can also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Image Analysis (AREA)

Abstract

图像处理方法、装置、存储介质及电子设备。方法包括:获取图像;获取人像分割模型,包括编码模块,特征金字塔模块和解码模块;调用编码模块根据图像得到第一特征图集合;将第一特征图集合输入特征金字塔模块得到第二特征图集合;调用解码模块根据第二特征图集合得到人像分割掩膜;根据人像分割掩膜从图像中分割出人像。

Description

图像处理方法、装置、存储介质及电子设备
本申请要求于2020年3月12日提交中国专利局、申请号为202010171398.0、申请名称为“图像处理方法、装置、存储介质及电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请属于电子技术领域,尤其涉及一种图像处理方法、装置、存储介质及电子设备。
背景技术
人像分割,是一种将图像中的人像从背景中分离出来的技术。人像分割是计算机视觉领域的基础课题之一,在学术界与工业界都受到了广泛的重视。
相关技术中,当需要进行人像分割处理时,通常会将图像输入预先训练好的人像分割模型中,以使该人像分割模型对图像进行人像分割处理。
发明内容
本申请实施例提供一种图像处理方法、装置、存储介质及电子设备,可以提高人像分割模型预测的精度,从而可更好的从图像中分割出人像。
第一方面,本申请实施例提供一种图像处理方法,包括:
获取需要进行人像分割的待分割图像;
获取预训练的人像分割模型,所述预训练的人像分割模型包括:编码模块,特征金字塔模块和解码模块;
调用所述编码模块对所述待分割图像进行编码处理,得到第一特征图集合;
将所述第一特征图集合输入所述特征金字塔模块中,得到第二特征图集合;
调用所述解码模块对所述第二特征图集合进行解码处理,得到人像分割掩膜;
根据所述人像分割掩膜,从所述待分割图像中分割出人像。
第二方面,本申请实施例提供一种图像处理装置,包括:
第一获取模块,用于获取需要进行人像分割的待分割图像;
第二获取模块,用于获取预训练的人像分割模型,所述预训练的人像分割模型包括:编码模块,特征金字塔模块和解码模块;
第一调用模块,用于调用所述编码模块对所述待分割图像进行编码处理,得到第一特征图集合;
输入模块,用于将所述第一特征图集合输入所述特征金字塔模块中,得到第二特征图集合;
第二调用模块,用于调用所述解码模块对所述第二特征图集合进行解码处理,得到人像分割掩膜;
分割模块,用于根据所述人像分割掩膜,从所述待分割图像中分割出人像。
第三方面,本申请实施例提供一种存储介质,其上存储有计算机程序,当所述计算机程序在计算机上执行时,使得所述计算机执行本申请实施例提供的图像处理方法中的流程。
第四方面,本申请实施例还提供一种电子设备,包括存储器,处理器,所述处理器通过调用所述存储器中存储的计算机程序,用于执行本申请实施例提供的图像处理方法中的流程。
附图说明
下面结合附图,通过对本申请的具体实施方式详细描述,将使本申请的技术方案及其有益效果显而易见。
图1是本申请实施例提供的图像处理方法的流程示意图。
图2是本申请实施例提供的人像分割模型的第一种结构示意图。
图3是本申请实施例提供的第一网络块、第二网络块和第三网络块的结构示意图。
图4是本申请实施例提供的人像分割模型的第二种结构示意图。
图5是本申请实施例提供的人像分割模型的第三种结构示意图。
图6是本申请实施例提供的图像处理装置的结构示意图。
图7是本申请实施例提供的电子设备的第一种结构示意图。
图8是本申请实施例提供的电子设备的第二种结构示意图。
图9是本申请实施例提供的图像处理电路的结构示意图。
具体实施方式
请参照图示,其中相同的组件符号代表相同的组件,本申请的原理是以实施在一适当的运算环境中来举例说明。以下的说明是基于所例示的本申请具体实施例,其不应被视为限制本申请未在此详述的其它具体实施例。
本申请实施例提供一种图像处理方法,包括:
获取需要进行人像分割的待分割图像;
获取预训练的人像分割模型,所述预训练的人像分割模型包括:编码模块,特征金字塔模块和解码模块;
调用所述编码模块对所述待分割图像进行编码处理,得到第一特征图集合;
将所述第一特征图集合输入所述特征金字塔模块中,得到第二特征图集合;
调用所述解码模块对所述第二特征图集合进行解码处理,得到人像分割掩膜;
根据所述人像分割掩膜,从所述待分割图像中分割出人像。
在一种实施方式中,所述解码模块包括第一子模块和第二子模块,所述调用所述解码模块对所述第二特征图集合进行解码处理,得到人像分割掩膜,包括:
调用所述第一子模块将所述第二特征图集合中的特征图的尺寸调整至预设尺寸,得到调整后的特征图;
调用所述第二子模块对所述调整后的特征图进行融合、卷积及采样处理,得到人像分割掩膜。
在一种实施方式中,所述第一子模块包括第一网络块、第二网络块和第三网络块,所述调用所述第一子模块将所述第二特征图集合中的特征图的尺寸调整至预设尺寸,得到调整后的特征图,包括:
调用所述第一网络块、所述第二网络块和/或所述第三网络块将所述第二特征图集合中的特征图调整至预设尺寸,得到调整后的特征图。
在一种实施方式中,所述第一网络块包括依次连接的卷积层、归一化层和激活层;
所述第二网络块包括依次连接的卷积层、归一化层、激活层和上采样层;
所述第三网络块包括依次连接的卷积层、归一化层、激活层和上采样层,所述第三网络块的输入通道数与所述第三网络块的输出通道数相同。
在一种实施方式中,所述第二子模块包括第一融合层、第一卷积层和第一上采样层,所述调用所述第二子模块对所述调整后的特征图进行融合、卷积及采样处理,得到人像分割掩膜,包括:
调用所述第一融合层对所述调整后的特征图进行融合处理,得到融合后的特征图;
调用所述第一卷积层对所述融合后的特征图进行卷积处理,得到卷积后的特征图;
调用所述第一上采样层对所述卷积后的特征图进行上采样处理,得到人像分割掩膜。
在一种实施方式中,所述获取需要进行人像分割的待分割图像之前,还包括:
获取样本图像,以及所述样本图像对应的标注掩膜;
获取监督模块;
利用所述样本图像、所述样本图像对应的标注掩膜以及所述监督模块对人像分割模型进行训练。
在一种实施方式中,所述利用所述样本图像、所述样本图像对应的标注掩膜以及所述监督模块对人像分割模型进行训练,包括:
调用所述编码模块对所述样本图像进行编码处理,得到第三特征图集合;
将所述第三特征图集合输入所述特征金字塔模块,得到第四特征图集合;
调用所述解码模块对所述第四特征图集合进行解码处理,得到期望掩膜;
调用所述监督模块对所述第四特征图集合进行还原处理,得到多个监督掩膜;
根据所述期望掩膜与所述标注掩膜的差异,以及每个监督掩模与所述标注掩膜的差异,调整所述人像分割模型的参数。
在一种实施方式中,所述监督模块包括第四卷积层和第三上采样层,所述调用所述监督模块对所述第四特征图集合进行还原处理,得到多个监督掩膜,包括:
调用所述第四卷积层对所述第四特征图集合中的特征图分别进行卷积处理,得到第五特征图集合;
调用所述第三上采样层对所述第五特征图集合中的特征图分别进行上采样处理,得到多个监督掩膜。
在一种实施方式中,所述获取样本图像,包括:
获取原始图像;
对所述原始图像进行数据增强处理,得到样本图像。
可以理解的是,本申请实施例的执行主体可以是诸如智能手机或平板电脑等电子设备。
请参阅图1,图1是本申请实施例提供的图像处理方法的第一种流程示意图,流程可以包括:
在101中,获取需要进行人像分割的待分割图像。
其中,待分割图像是用于人像分割的对象。该待分割图像可包括人像。由于本申请实施例采用模型对待分割图像进行人像分割。而模型通常对输入的图像的属性有一些要求,待分割图像应当符合这些要求,以使模型能够正常处理。
可以理解的是,当电子设备获取的图像为不符合模型要求的图像时,电子设备可对该图像进行预处理,以使该图像符合模型的要求。
例如,模型要求输入图像的尺寸为预设尺寸,例如256×256。若电子设备获取的图像不为预设尺寸,那么,电子设备需将该图像的尺寸调整为预设尺寸,得到待分割图像。
又例如,模型要求输入图像的像素值应当归一化,例如,像素值应为[0,1]之间的实数,若电子设备获取的图像未归一化,电子设备应当将其归一化,得到待分割图像。例如,某图像的像素值表示为[0,255]之间的整数,可以通过除以255的方式进行归一化。可以理解的是,归一化可以有不同的定义,例如在另一种归一化的定义中,像素值应当为[-1,1]之间的实数,针对不同的归一化定义,归一化的方式应当相应地调整。
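上述两种归一化定义可以用如下示意代码表示（仅为示例草图，假设像素值为[0,255]之间的整数，使用numpy实现，并非本申请的实际实现）：

```python
import numpy as np

def normalize_01(img):
    # 将[0, 255]之间的整数像素值归一化到[0, 1]
    return img.astype(np.float32) / 255.0

def normalize_pm1(img):
    # 另一种归一化定义：归一化到[-1, 1]
    return img.astype(np.float32) / 127.5 - 1.0

img = np.array([[0, 128, 255]], dtype=np.uint8)
a = normalize_01(img)   # 值域[0, 1]
b = normalize_pm1(img)  # 值域[-1, 1]
```

可见针对不同的归一化定义，归一化方式需相应调整。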
在一些实施例中,电子设备可采用摄像头对包含人体的拍摄场景进行拍摄,得到待分割图像。
其中,电子设备在根据用户操作启动拍摄类应用程序(比如电子设备的系统应用“相机”)后,其摄像头所对准的场景即为拍摄场景。比如,用户通过手指点击电子设备上“相机”应用的图标启动“相机应用”后,若用户使用电子设备的摄像头对准某一场景,则该场景即为拍摄场景。根据以上描述,本领域技术人员应当理解的是,拍摄场景并非特指某一特定场景,而是跟随摄像头的指向所实时对准的场景。
在102中,获取预训练的人像分割模型,该预训练的人像分割模型包括:编码模块,特征金字塔模块和解码模块。
其中,如图2所示,图2为本申请实施例提供的预训练的人像分割模型的第一种结构示意图。在预测阶段,该预训练的人像分割模型可包括编码模块、特征金字塔模块和解码模块。其中,特征金字塔模块分别与编码模块和解码模块连接。
在103中,调用编码模块对待分割图像进行编码处理,得到第一特征图集合。
比如,当获取到预训练的人像分割模型之后,电子设备可调用该预训练的人像分割模型的编码模块对待分割图像进行编码处理,以提取出不同尺寸的特征图。该不同尺寸的特征图构成第一特征图集合。其中,该第一特征图集合中的特征图对应的通道数可以不同。
其中,该编码模块可以为多尺度编码器。该多尺度编码器的基础网络可选用MobileNetV2网络。由于该网络的特征提取能力较强,因此选用该网络可以更好地从待分割图像中提取出图像特征,形成特征图。且由于该网络为轻量级网络,因此选用该网络可实现较小的计算量,从而可部署到智能手机等电子设备中。
多尺度编码器可包括五层结构。第一层可接收该待分割图像,然后将该待分割图像输出至第二层。该第二层可根据该待分割图像,确定第一尺寸的特征图。例如,该第一尺寸的特征图的尺寸可以为待分割图像的尺寸的1/4,该第一尺寸的特征图对应的通道数可以为24,即该第一尺寸的特征图的数量为24。
第三层可接收第二层输出的第一尺寸的特征图,并根据该第一尺寸的特征图,确定第二尺寸的特征图。例如,该第二尺寸的特征图可以为待分割图像的尺寸的1/8,该第二尺寸的特征图对应的通道数可以为32。
第四层可接收第三层输出的第二尺寸的特征图,并根据该第二尺寸的特征图,确定第三尺寸的特征图。例如,该第三尺寸的特征图可以为待分割图像的尺寸的1/16,该第三尺寸的特征图对应的通道数可以为64。
第五层可接收第四层输出的第三尺寸的特征图,并根据该第三尺寸的特征图,确定第四尺寸的特征图。例如,该第四尺寸的特征图可以为待分割图像的尺寸的1/32,该第四尺寸的特征图对应的通道数可以为320。
第一尺寸的特征图、第二尺寸的特征图、第三尺寸的特征图和第四尺寸的特征图组成第一特征图集合。
需要说明的是，上述过程仅仅是得到第一特征图集合的示例，并不用于限制本申请。在实际应用中，还可以根据需求增加多尺度编码器的层数，以获取到更多尺寸的特征图。
其中,“第二层根据该待分割图像,确定第一尺寸的特征图”,可以包括:第二层对该待分割图像进行卷积、下采样处理,得到第一尺寸的特征图。可以理解的是,第二尺寸的特征图、第三尺寸的特征图和第四尺寸的特征图也可以按照上述方式得到,此处不再赘述。
在本申请实施例中,当得到该第一特征图集合之后,电子设备可调用该编码模块将该第一特征图集合输出至特征金字塔模块。
在104中,将第一特征图集合输入特征金字塔模块,得到第二特征图集合。
其中,该特征金字塔模块可对第一特征图集合中的特征图进行特征融合、卷积等处理,得到第二特征图集合。该第二特征图集合中的特征图对应的通道数可以相同。例如,该第二特征图集合中的特征图对应的通道数均可以为64、128或256等。
在本申请实施例中,当得到该第二特征图集合之后,电子设备可调用该特征金字塔模块将该第二特征图集合输出至解码模块。
在105中,调用解码模块对第二特征图集合进行解码处理,得到人像分割掩膜。
比如,该解码模块可以为多尺度解码器。电子设备可调用该多尺度解码器对第二特征图集合中的特征图进行融合以及采样等解码处理,得到人像分割掩膜。
需要说明的是,人像分割掩膜可以为二值图像。例如,每个像素值只能取值为0或1。其中,当人像分割掩膜中的某个像素值取值为1时,表示该像素值属于前景。当人像分割掩膜中的某个像素值取值为0时,表示该像素值属于背景。在本申请实施例中,前景即为人像。
在106中,根据人像分割掩膜,从待分割图像中分割出人像。
比如,当得到人像分割掩膜之后,电子设备可根据该人像分割掩膜,从待分割图像中分割出人像。
例如,当人像分割掩膜的尺寸与待分割图像的尺寸相同时,电子设备可确定人像分割掩膜中像素值为1的像素点所在的位置。然后,电子设备可根据该位置将待分割图像对应位置的像素点保留。当人像分割掩膜的尺寸与待分割图像的尺寸不相同时,电子设备可先对人像分割掩膜的尺寸进行调整,以使得人像分割掩膜的尺寸与待分割图像的尺寸相同。然后,电子设备可确定调整后的人像分割掩膜中像素值为1的像素点所在的位置。接着,电子设备可根据该位置将待分割图像对应位置的像素点保留。
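上述“根据掩膜保留对应位置像素点”的操作可用如下示意代码表示（示例假设掩膜与图像尺寸相同、背景像素置0，numpy草图，非实际实现）：

```python
import numpy as np

def apply_mask(image, mask):
    # image: (H, W, 3)的图像；mask: (H, W)的二值掩膜，取值0或1
    # 掩膜中像素值为1的位置保留原像素，为0的位置视为背景（此处置0）
    return image * mask[..., None]

image = np.ones((4, 4, 3), dtype=np.float32)
mask = np.zeros((4, 4), dtype=np.float32)
mask[1:3, 1:3] = 1.0  # 假设中间2x2区域为人像
portrait = apply_mask(image, mask)
```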
本申请实施例中,可将对图像进行编码处理得到的第一特征图集合输入特征金字塔模块,得到第二特征图集合;并调用解码模块对第二特征图集合进行解码处理,得到人像分割掩膜,从而可充分利用第一特征图集合,以更好的提取语义信息,进而可提高人像分割模型预测的精度,进而可更好的从图像中分割出人像。
在一些实施例中,该解码模块可包括第一子模块和第二子模块,“调用解码模块对第二特征图集合进行解码处理,得到人像分割掩膜”,可以包括:
(1)调用第一子模块将第二特征图集合中的特征图的尺寸调整至预设尺寸,得到调整后的特征图;
(2)调用第二子模块对调整后的特征图进行融合、卷积及采样处理,得到人像分割掩膜。
例如,假设第二特征图集合包括第一尺寸的特征图、第二尺寸的特征图、第三尺寸的特征图和第四尺寸的特征图,第一尺寸的特征图、第二尺寸的特征图、第三尺寸的特征图和第四尺寸的特征图对应的通道数均为128;预设尺寸为第五尺寸。那么,电子设备可将每个通道对应的第一尺寸的特征图、第二尺寸的特征图、第三尺寸的特征图和第四尺寸的特征图的尺寸分别由第一尺寸、第二尺寸、第三尺寸和第四尺寸调整至第五尺寸,得到多个通道对应的调整后的特征图。
可以理解的是，若预设尺寸为第一尺寸，那么，电子设备对第一尺寸的特征图进行尺寸调整之后所得到的仍然是第一尺寸的特征图，只是两者所表示的特征可能有所不同。
需要说明的是，电子设备还可根据实际需要调用第一子模块将第二特征图集合中的特征图的尺寸调整至其他尺寸，此处不作具体限制。
随后,电子设备可调用第二子模块对每个通道对应的特征图进行融合处理,得到多个通道对应的融合后的特征图。接着,电子设备可对多个通道对应的融合后的特征图进行卷积处理,得到卷积后的特征图。其中,该卷积后的特征图对应的通道数可以为2。然后,电子设备可对卷积后的特征图进行上采样处理,得到上采样后的特征图。其中,该上采样后的特征图对应的通道数可以为2。其中,一个通道(假设其为通道C1)的上采样后的特征图中每个像素点的像素值表示该像素点属于人像的概率,另一个通道的上采样后的特征图中的每个像素点的像素值表示该像素点属于背景的概率。电子设备可根据通道C1的上采样后的特征图中每个像素点的像素值来确定人像分割掩膜中每个像素点的像素值,从而得到人像分割掩膜。其中,当上采样后的特征图中的某个像素点的像素值不小于0.5时,则人像分割掩膜中对应像素点的像素值为1;当上采样后的特征图中的某个像素点的像素值小于0.5时,则人像分割掩膜中对应像素点的像素值为0。
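上述由概率图按0.5阈值生成二值掩膜的操作可表示为（numpy示意草图，非实际实现）：

```python
import numpy as np

def to_mask(prob_map, threshold=0.5):
    # 概率不小于阈值的像素记为前景1，其余记为背景0
    return (np.asarray(prob_map) >= threshold).astype(np.uint8)

probs = np.array([[0.9, 0.4], [0.5, 0.1]])  # 通道C1中每个像素属于人像的概率
mask = to_mask(probs)  # [[1, 0], [1, 0]]
```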
需要说明的是,对卷积后的特征图进行上采样处理是为了使得最终得到的人像分割掩膜的尺寸与待分割图像的尺寸相同,从而可更好地从待分割图像中分割出人像。
在一些实施例中,第一子模块包括第一网络块、第二网络块和第三网络块,“调用第一子模块将第二特征图集合中的特征图的尺寸调整至预设尺寸,得到调整后的特征图”,包括:
调用第一网络块、第二网络块和/或第三网络块将所述第二特征图集合中的特征图调整至预设尺寸,得到调整后的特征图。
比如，假设第二特征图集合中包括第一尺寸的特征图、第二尺寸的特征图、第三尺寸的特征图和第四尺寸的特征图，电子设备可使得第一尺寸的特征图经过第一网络块，从而将第一尺寸的特征图的尺寸调整至预设尺寸。电子设备可使得第二尺寸的特征图经过第二网络块，从而将第二尺寸的特征图的尺寸调整至预设尺寸。电子设备可使得第三尺寸的特征图经过依次连接的第三网络块和第二网络块，从而将第三尺寸的特征图的尺寸调整至预设尺寸。电子设备可使得第四尺寸的特征图经过依次连接的第三网络块N31、第三网络块N32和第二网络块，从而将第四尺寸的特征图的尺寸调整至预设尺寸。其中，预设尺寸可根据实际需求设置。例如，预设尺寸可以为待分割图像的尺寸的1/4或1/8等。例如，假设待分割图像的尺寸为256×256，则预设尺寸为64×64。
在一些实施例中,第一网络块可包括依次连接的卷积层、归一化层和激活层。
第二网络块包括依次连接的卷积层、归一化层、激活层和上采样层。
第三网络块包括依次连接的卷积层、归一化层、激活层和上采样层。第三网络块的输入通道数与第三网络块的输出通道数相同。
例如，第一网络块、第二网络块和第三网络块的组成结构可以如图3所示。其中，第一网络块的输入通道数可以为128，输出通道数可以为64。第二网络块的输入通道数可以为128，输出通道数可以为64。第三网络块的输入输出通道数可以均为128。第二网络块和第三网络块中的上采样层都可以为2倍双线性插值上采样层。
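按上述描述与图3的示例取值，三种网络块可用PyTorch大致表示如下（这只是按说明书文字给出的示意实现，层的具体超参数如卷积核大小为示例假设，并非本申请公开的确切网络定义）：

```python
import torch
import torch.nn as nn

def sgr(in_ch=128, out_ch=64):
    # 第一网络块：依次连接的卷积层、归一化层和激活层
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

def sgr2x(in_ch=128, out_ch=64):
    # 第二网络块：卷积、归一化、激活之后接2倍双线性插值上采样层
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
    )

def cgr2x(ch=128):
    # 第三网络块：结构与第二网络块相同，但输入输出通道数相同
    return sgr2x(ch, ch)

x = torch.randn(1, 128, 8, 8)
y1 = sgr()(x)    # 通道数变为64，空间尺寸不变
y2 = sgr2x()(x)  # 通道数变为64，空间尺寸放大2倍
y3 = cgr2x()(x)  # 通道数保持128，空间尺寸放大2倍
```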
比如，假设第二特征图集合包括第一尺寸的特征图、第二尺寸的特征图、第三尺寸的特征图和第四尺寸的特征图，电子设备可使得第一尺寸的特征图经过第一网络块的依次连接的卷积层、归一化层和激活层，从而将第一尺寸的特征图的尺寸调整至预设尺寸。电子设备可使得第二尺寸的特征图经过第二网络块的依次连接的卷积层、归一化层、激活层和上采样层，从而将第二尺寸的特征图的尺寸调整至预设尺寸。电子设备可使得第三尺寸的特征图经过第三网络块的依次连接的卷积层、归一化层、激活层和上采样层，以及第二网络块的依次连接的卷积层、归一化层、激活层和上采样层，从而将第三尺寸的特征图的尺寸调整至预设尺寸。电子设备可使得第四尺寸的特征图经过第三网络块N31的依次连接的卷积层、归一化层、激活层和上采样层、第三网络块N32的依次连接的卷积层、归一化层、激活层和上采样层，以及第二网络块的依次连接的卷积层、归一化层、激活层和上采样层，从而将第四尺寸的特征图的尺寸调整至预设尺寸。
需要说明的是,由于上述第一网络块并未包括上采样层,因此第一尺寸的特征图经过该第一网络块进行尺寸调整之后,所得到的特征图的尺寸仍为第一尺寸。然而,由于该第一网络块还包括卷积层、归一化层等,因此,虽然第一尺寸的特征图经过该第一网络块进行尺寸调整之后,所得到的特征图的尺寸虽然仍为第一尺寸,但所得到的特征图与第一尺寸的特征图已有所不同。以此类推,其他尺寸的特征图相对于尺寸调整前的特征图来说,尺寸和特征图中的特征均与尺寸调整前的特征图不同。
可以理解的是,图3仅仅是本申请实施例提出的一种示例,并不用于限制本申请,第一网络块、第二网络块和第三网络块的组成结构还可以是其他形式,此处不作具体限制。
在一些实施例中,第二子模块包括第一融合层、第一卷积层和第一上采样层,第一融合层、第一卷积层和第一上采样层依次连接,“调用第二子模块对调整后的特征图进行融合、卷积及采样处理,得到人像分割掩膜”,可以包括:
调用第一融合层对调整后的特征图进行融合处理,得到融合后的特征图;
调用第一卷积层对融合后的特征图进行卷积处理,得到卷积后的特征图;
调用第一上采样层对卷积后的特征图进行上采样处理,得到人像分割掩膜。
电子设备可调用第一融合层对每个通道对应的特征图进行融合处理,得到多个通道对应的融合后的特征图。接着,电子设备可调用第一卷积层对多个通道对应的融合后的特征图进行卷积处理,得到卷积后的特征图。其中,该卷积后的特征图对应的通道数可以为2。然后,电子设备可调用第一上采样层对卷积后的特征图进行上采样处理,得到一个双通道的特征图。其中,一个通道(假设为通道C1)的特征图中每个像素点的像素值表示该像素点属于人像的概率,另一个通道的特征图中每个像素点的像素值表示该像素点属于背景的概率。然后,电子设备可将通道C1的特征图中像素值不小于0.5的像素点的像素值更换为1,将像素值小于0.5的像素点的像素值更换为0,得到人像分割掩膜。需要说明的是,对卷积后的特征图进行上采样处理是为了使得最终得到的人像分割掩膜的尺寸与待分割图像的尺寸相同,从而可更好地从待分割图像中分割出人像。
其中,对多个特征图进行融合处理,即在多个特征图重合的基础上,将多个特征图中相同位置的像素值相加再计算平均值,将该平均值作为融合后的特征图的对应位置的像素值。
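上述“相同位置像素值相加再求平均”的融合处理可用如下示意代码表示（numpy草图，非实际实现）：

```python
import numpy as np

def fuse(feature_maps):
    # 对若干尺寸相同的特征图逐位置求平均，得到融合后的特征图
    return np.mean(np.stack(feature_maps, axis=0), axis=0)

f1 = np.full((2, 2), 1.0)
f2 = np.full((2, 2), 3.0)
fused = fuse([f1, f2])  # 每个位置为 (1+3)/2 = 2
```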
在一些实施例中,特征金字塔模块可包括第二卷积层、第三卷积层、第二上采样层和第二融合层,“将第一特征图集合输入特征金字塔模块,得到第二特征图集合”,可以包括:
(1)调用第二卷积层对第一特征图集合中的特征图分别进行卷积处理,得到第六特征图集合;
(2)调用第二上采样层对第六特征图集合中除最大尺寸的特征图之外的特征图分别进行上采样处理,得到第七特征图集合;
(3)调用第二融合层对第六特征图集合中除最小尺寸的特征图之外的每个特征图与第七特征图集合中的相应特征图进行融合处理,得到第八特征图集合;
(4)调用第三卷积层对第八特征图集合中的特征图分别进行卷积处理,得到第九特征图集合;
(5)第九特征图集合中的特征图和第六特征图集合中最小尺寸的特征图构成第二特征图集合。
例如,假设第一特征图集合包括第一尺寸的特征图、第二尺寸的特征图、第三尺寸的特征图和第四尺寸的特征图;第一尺寸的特征图对应的通道数为24、第二尺寸的特征图对应的通道数为32、第三尺寸的特征图对应的通道数为64、第四尺寸的特征图对应的通道数为320;第二卷积层包括128个卷积核,第二上采样层为线性插值2倍上采样层。第一尺寸的特征图的尺寸为第二尺寸的特征图的尺寸的2倍,第二尺寸的特征图的尺寸为第三尺寸的特征图的尺寸的2倍,第三尺寸的特征图的尺寸为第四尺寸的特征图的尺寸的2倍。
电子设备可调用该第二卷积层的128个卷积核对第一尺寸的特征图、第二尺寸的特征图、第三尺寸的特征图和第四尺寸的特征图分别进行卷积处理,以将第一尺寸的特征图、第二尺寸的特征图、第三尺寸的特征图和第四尺寸的特征图对应的通道数统一为128,从而构成第六特征图集合。
随后,电子设备可调用线性插值2倍上采样层对统一通道数之后的第二尺寸的特征图、第三尺寸的特征图和第四尺寸的特征图分别进行2倍的上采样处理,得到第一尺寸的目标特征图、第二尺寸的目标特征图和第三尺寸的目标特征图。第一尺寸的目标特征图、第二尺寸的目标特征图和第三尺寸的目标特征图组成第七特征图集合。
接着,电子设备可调用第二融合层对每个通道的相同尺寸的特征图进行融合处理,得到第一尺寸的融合特征图、第二尺寸的融合特征图和第三尺寸的融合特征图。例如,电子设备可对每个通道的第一尺寸的特征图和第一尺寸的目标特征图进行融合处理,得到第一尺寸的融合特征图。第一尺寸的融合特征图、第二尺寸的融合特征图和第三尺寸的融合特征图构成第八特征图集合。
然后,电子设备可调用第三卷积层对第八特征图集合中的特征图再次进行卷积处理,得到第九特征图集合。第九特征图集合中的特征图和第六特征图集合中最小尺寸的特征图可构成第二特征图集合。
在一些实施例中,“获取需要进行人像分割的待分割图像”之前,还可以包括:
(1)获取样本图像,以及样本图像对应的标注掩膜;
(2)获取监督模块;
(3)利用样本图像、样本图像对应的标注掩膜以及监督模块对人像分割模型进行训练。
如图4所示,在训练阶段,该人像分割模型的结构可包括编码模块、特征金字塔模块、解码模块和监督模块。其中,特征金字塔模块分别与编码模块、解码模块和监督模块连接。
其中,该样本图像可以为supervisely数据集中的图像。比如,电子设备可从supervisely数据集中获取样本图像,以及样本图像对应的标注掩膜。随后,电子设备可利用该样本图像,以及样本图像对应的标注掩膜对人像分割模型进行训练。
在一些实施例中,电子设备可采用该supervisely数据集,基于PyTorch框架,使用一个合适的图像处理器对该人像分割模型进行训练。其中,损失函数可采用交叉熵损失函数,评价函数可采用交并比IoU。
在一些实施例中,“利用样本图像、样本图像对应的标注掩膜以及监督模块对人像分割模型进行训练”,可以包括:
(1)调用编码模块对样本图像进行编码处理,得到第三特征图集合;
(2)将第三特征图集合输入特征金字塔模块,得到第四特征图集合;
(3)调用解码模块对第四特征图集合进行解码处理,得到期望掩膜;
(4)调用监督模块对第四特征图集合进行还原处理,得到多个监督掩膜;
(5)根据期望掩膜与标注掩膜的差异,以及每个监督掩模与标注掩膜的差异,调整人像分割模型的参数。
可以理解的是，模型的训练过程和模型的预测过程在一定程度上是较为相似的，因此，电子设备可以按照上述描述的如何根据待分割图像得到人像分割掩膜的方式根据每个样本图像得到每个样本图像对应的期望掩膜。另外，电子设备也可以按照上述描述的如何根据待分割图像得到第二特征图集合的方式根据每个样本图像得到每个样本图像对应的第四特征图集合。随后，电子设备可调用监督模块对每个样本图像对应的第四特征图集合中的特征图分别进行卷积和上采样等还原处理，得到每个样本图像对应的多个监督掩膜。其中，该期望掩膜和监督掩膜为2通道的图像，一个通道的图像中的每个像素点的像素值表示该像素点属于人像的概率，另一个通道的图像中的每个像素点的像素值表示该像素点属于背景的概率。
当得到每个样本对应的期望掩膜和多个监督掩膜之后,电子设备可采用交叉熵损失函数计算每个样本图像对应的期望掩膜与标注掩膜的损失值,以及每个样本图像对应的每个监督掩膜与标注掩膜的损失值,得到多个损失值。随后,电子设备可计算该多个损失值的和,得到每个样本图像对应的总损失值。然后,电子设备可计算多个样本图像对应的总损失值的平均值,将其作为人像分割模型对应的总损失值。当人像分割模型对应的总损失值收敛时,电子设备可保存人像分割模型的参数,得到训练后的人像分割模型。并在部署该训练后的人像分割模型时,移除监督模块,以减少计算量。
单个样本图像的交叉熵损失函数的公式可以为:
$L = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i\log P_i + (1-y_i)\log(1-P_i)\right]$
其中，L表示样本图像的损失值，N表示样本图像的像素点总数，y_i表示样本图像的第i个像素点的真实类别，取值为0或1，其中，0表示该像素点为背景，1表示该像素点为前景。P_i表示样本图像的第i个像素点属于类别1的概率。
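该交叉熵损失函数可直接按公式实现如下（numpy示意草图，仅用于说明公式含义；实际训练中可使用PyTorch内置的交叉熵损失函数）：

```python
import numpy as np

def cross_entropy(y, p, eps=1e-7):
    # y: 每个像素点的真实类别（0或1）；p: 每个像素点属于类别1的概率
    y = np.asarray(y, dtype=np.float64)
    p = np.clip(np.asarray(p, dtype=np.float64), eps, 1 - eps)  # 防止log(0)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

# 两个像素均预测较准，损失约为 -ln(0.8) ≈ 0.2231
loss = cross_entropy([1, 0], [0.8, 0.2])
```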
当得到每个样本对应的期望掩膜之后,电子设备可采用交并比IoU评价函数计算每个样本图像对应的评价值。随后,电子设备可计算多个样本图像的总评价值的平均值,将其作为人像分割模型对应的评价值,从而可通过该评价值对该人像分割模型进行评估。
单个样本图像的评价函数的公式可以为:
$IoU = \frac{|X \cap Y|}{|X \cup Y|}$
其中,IoU表示样本图像的评价值,X表示样本图像的期望掩膜,Y表示样本图像的标注掩膜。
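该交并比评价函数可用如下示意代码实现（numpy草图，假设掩膜为0/1二值数组）：

```python
import numpy as np

def iou(x, y, eps=1e-7):
    # x: 期望掩膜；y: 标注掩膜，均为0/1二值数组
    x = np.asarray(x, dtype=bool)
    y = np.asarray(y, dtype=bool)
    inter = np.logical_and(x, y).sum()  # 交集 |X ∩ Y|
    union = np.logical_or(x, y).sum()   # 并集 |X ∪ Y|
    return float(inter) / (float(union) + eps)

m1 = np.array([[1, 1], [0, 0]])
m2 = np.array([[1, 0], [0, 0]])
score = iou(m1, m2)  # 交集为1，并集为2，IoU = 0.5
```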
在一些实施例中,监督模块可包括第四卷积层和第三上采样层,“调用监督模块对第四特征图集合进行还原处理,得到多个监督掩膜”,可以包括:
(1)调用第四卷积层对第四特征图集合中的特征图分别进行卷积处理,得到第五特征图集合;
(2)调用第三上采样层对第五特征图集合中的特征图分别进行上采样处理,得到多个监督掩膜。
例如,假设第四特征图集合包括第一尺寸的特征图、第二尺寸的特征图、第三尺寸的特征图和第四尺寸的特征图;第一尺寸的特征图的尺寸为待分割图像的尺寸的1/4;第二尺寸的特征图的尺寸为待分割图像的尺寸的1/8;第三尺寸的特征图的尺寸为待分割图像的尺寸的1/16;第四尺寸的特征图的尺寸为待分割图像的尺寸的1/32;第一尺寸的特征图、第二尺寸的特征图、第三尺寸的特征图和第四尺寸的特征图对应的通道数均为128,即第一尺寸的特征图、第二尺寸的特征图、第三尺寸的特征图和第四尺寸的特征图的数量均为128;第四卷积层的数量可以为4,每个第四卷积层均可包括2个卷积核。第三上采样层的数量可以为4。第一个第三上采样层对应的采样倍数为4倍,第二个第三上采样层对应的采样倍数为8倍,第三个第三上采样层对应的采样倍数为16倍,第四个上采样层对应的倍数为32倍。
电子设备可调用第一个第四卷积层的2个卷积核对第一尺寸的特征图进行卷积处理,使得其得到一对应的通道数为2的第一尺寸的特征图。然后,电子设备可调用第一个第三上采样层对该第一尺寸的特征图进行4倍的上采样处理,得到第一个监督掩膜。
以此类推,电子设备可调用第二个第四卷积层和第二个第三上采样层,根据第二尺寸的特征图,确定第二个监督掩膜。电子设备可调用第三个第四卷积层和第三个第三上采样层,根据第三尺寸的特征图,确定第三个监督掩膜。电子设备可调用第四个第四卷积层和第四个第三上采样层,根据第四尺寸的特征图,确定第四个监督掩膜,从而可从多个尺度对深层特征进行监督,并对深层的特征提供额外的梯度,可以提升人像分割的效果,降低假阳性的预测。
在一些实施例中,“获取样本图像”,可以包括:
(1)获取原始图像;
(2)对原始图像进行数据增强处理,得到样本图像。
例如,电子设备可从supervisely数据集中获取一些原始图像;然后,电子设备可对这些原始图像进行随机旋转、随机左右翻转、随机裁剪、Gamma变换等数据增强处理,得到样本图像,从而可增加训练的数据量,提高模型的泛化能力,并且可增加噪声数据,提升模型的鲁棒性。
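上述数据增强中的随机左右翻转与Gamma变换可用如下示意代码表示（numpy草图，假设像素值已归一化到[0,1]；随机旋转与随机裁剪可用图像处理库实现，此处从略）：

```python
import numpy as np

def random_flip_lr(img, rng):
    # 以0.5的概率随机左右翻转
    return img[:, ::-1] if rng.random() < 0.5 else img

def gamma_transform(img, gamma):
    # Gamma变换：对归一化到[0, 1]的像素值做幂运算
    return np.power(img, gamma)

rng = np.random.default_rng(0)
img = np.linspace(0.0, 1.0, 16).reshape(4, 4)
aug = gamma_transform(random_flip_lr(img, rng), gamma=0.5)
```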
可以理解的是,本申请实施例所提供的图像处理方法可以为美颜,替换背景等图像处理算法提供精确的人像分割掩模,可以使得人像背景虚化更为精准,可以作为快速证件照生成的核心算法。
还可以理解的是,本申请实施例并不对各特征图集合的特征图的数量进行限制,可以根据具体数据集情况灵活调整。
请参阅图5,图5是本申请实施例提供的人像分割模型的第三种结构示意图。
在训练阶段,电子设备可采用supervisely数据集,基于PyTorch框架,使用一个合适的图像处理器对该人像分割模型进行训练。
首先，电子设备可将supervisely数据集按2:8的比例划分为测试集和训练集，并对训练集中的图像进行随机旋转、随机左右翻转、随机裁剪、Gamma变换等数据增强处理，得到样本图像。电子设备还可获取样本图像对应的标注掩膜M6，并构建人像分割模型，该人像分割模型可包括编码模块、特征金字塔模块、监督模块和解码模块。特征金字塔模块分别与编码模块、监督模块和解码模块连接。
然后,电子设备可将样本图像输入至编码模块的第一层。电子设备可调用该编码模块的第一层将该样本图像输出至该编码模块的第二层。电子设备调用该编码模块的第二层根据该样本图像,确定尺寸为样本图像的尺寸的1/4的特征图F1,其中,特征图F1对应的通道数为24。电子设备可调用该编码模块的第二层将特征图F1输出至该编码模块的第三层。电子设备调用该编码模块的第三层根据特征图F1,确定尺寸为样本图像的尺寸的1/8的特征图F2,其中,特征图F2对应的通道数为32。电子设备可调用该编码模块的第三层将特征图F2输出至该编码模块的第四层。电子设备调用该编码模块的第四层根据特征图F2,确定尺寸为样本图像的尺寸的1/16的特征图F3,其中,特征图F3对应的通道数为64。电子设备可调用该编码模块的第四层将特征图F3输出至该编码模块的第五层。电子设备调用该编码模块的第五层根据特征图F3,确定尺寸为样本图像的尺寸的1/32的特征图F4,其中,特征图F4对应的通道数为320。
特征图F1、F2、F3和F4可构成第一特征图集合。
电子设备还可调用该编码模块的第二层、第三层、第四层和第五层分别将特征图F1、特征图F2、特征图F3和特征图F4输出至特征金字塔模块。电子设备可调用该特征金字塔模块的卷积层c1对特征图F1进行卷积处理，得到尺寸为样本图像的尺寸的1/4的特征图F5，其中特征图F5对应的通道数为128。电子设备可调用该特征金字塔模块的卷积层c2对特征图F2进行卷积处理，得到尺寸为样本图像的尺寸的1/8的特征图F6，其中特征图F6对应的通道数为128。电子设备可调用该特征金字塔模块的卷积层c3对特征图F3进行卷积处理，得到尺寸为样本图像的尺寸的1/16的特征图F7，其中特征图F7对应的通道数为128。电子设备可调用该特征金字塔模块的卷积层c4对特征图F4进行卷积处理，得到尺寸为样本图像的尺寸的1/32的特征图F8，其中特征图F8对应的通道数为128。
电子设备可调用该特征金字塔模块的线性插值2倍上采样层u1对特征图F6进行2倍的上采样处理,得到尺寸为样本图像的尺寸的1/4的特征图F9,其中,特征图F9对应的通道数为128。电子设备可调用该特征金字塔模块的线性插值2倍上采样层u2对特征图F7进行2倍的上采样处理,得到尺寸为样本图像的尺寸的1/8的特征图F10,其中,特征图F10对应的通道数为128。电子设备可调用该特征金字塔模块的线性插值2倍上采样层u3对特征图F8进行2倍的上采样处理,得到尺寸为样本图像的尺寸的1/16的特征图F11,其中,特征图F11对应的通道数为128。
电子设备可对每个通道对应的特征图F5和F9进行融合处理，得到每个通道对应的特征图F12。其中，特征图F12对应的通道数为128。电子设备可对每个通道对应的特征图F6和F10进行融合处理，得到每个通道对应的特征图F13。其中，特征图F13对应的通道数为128。电子设备可对每个通道对应的特征图F7和F11进行融合处理，得到每个通道对应的特征图F14。其中，特征图F14对应的通道数为128。
其中,对两个特征图进行融合处理,得到目标特征图,可以包括:在两个特征图重合的基础上,将两个特征图中的相同位置的像素值相加再计算平均值,将该平均值作为目标特征图的相应位置的像素值。
电子设备可调用特征金字塔模块的卷积层c5对特征图F12进行卷积处理,得到特征图F15。其中,特征图F15对应的通道数为128。电子设备可调用特征金字塔模块的卷积层c6对特征图F13进行卷积处理,得到特征图F16。其中,特征图F16对应的通道数为128。电子设备可调用特征金字塔模块的卷积层c7对特征图F14进行卷积处理,得到特征图F17。其中,特征图F17对应的通道数为128。
特征图F8、F15、F16和F17可构成第二特征图集合。
电子设备可调用特征金字塔模块将特征图F8、F15、F16和F17输出至监督模块和解码模块。
电子设备可调用监督模块的卷积层c8和上采样层u4对特征图F15进行卷积处理及4倍的上采样处理,得到监督掩膜M1,其中,监督掩膜M1对应的通道数为2,尺寸与样本图像的尺寸相同。电子设备可调用监督模块的卷积层c9和上采样层u5对特征图F16进行卷积处理及8倍的上采样处理,得到监督掩膜M2,其中,监督掩膜M2对应的通道数为2,尺寸与样本图像的尺寸相同。电子设备可调用监督模块的卷积层c10和上采样层u6对特征图F17进行卷积处理及16倍的上采样处理,得到监督掩膜M3,其中,监督掩膜M3对应的通道数为2,尺寸与样本图像的尺寸相同。电子设备可调用监督模块的卷积层c11和上采样层u7对特征图F8进行卷积处理及32倍的上采样处理,得到监督掩膜M4,其中,监督掩膜M4对应的通道数为2,尺寸与样本图像的尺寸相同。
电子设备可调用解码模块的第一网络块sgr将特征图F15的尺寸调整至样本图像的尺寸的1/4,得到调整后的特征图F18。可以理解的是,在本实施例中,由于该特征图F15的尺寸为样本图像的尺寸的1/4,因此,调整后的特征图F18的尺寸仍为样本图像的尺寸的1/4。电子设备可调用解码模块的第二网络块sgr2x1将特征图F16的尺寸调整至样本图像的尺寸的1/4,得到调整后的特征图F19。电子设备可调用解码模块的依次连接的第三网络块cgr2x1和第二网络块sgr2x2将特征图F17的尺寸调整至样本图像的尺寸的1/4,得到调整后的特征图F20。电子设备可调用解码模块的依次连接的第三网络块cgr2x3、cgr2x2和第二网络块sgr2x3将特征图F8的尺寸调整至样本图像的尺寸的1/4,得到调整后的特征图F21。
其中,第一网络块sgr包括依次连接的卷积层、归一化层和激活层。第二网络块sgr2x1、sgr2x2和sgr2x3均包括依次连接的卷积层、归一化层、激活层和线性插值2倍上采样层。第三网络块cgr2x1、cgr2x2和cgr2x3均包括依次连接的卷积层、归一化层、激活层和线性插值2倍上采样层。其中,第三网络块cgr2x1、cgr2x2和cgr2x3的输入输出通道数相同。
电子设备可对调整后的特征图F18、F19、F20、F21进行融合处理,得到融合后的特征图F22。电子设备可调用卷积层c12的2个卷积核和上采样层u8对融合后的特征图F22进行卷积处理及4倍的上采样处理,得到期望掩膜M5。其中,期望掩膜M5对应的通道数为2,尺寸与样本图像的尺寸相同。
随后,电子设备可计算标注掩膜M6分别与监督掩膜M1、M2、M3和M4以及期望掩膜M5的交叉熵损失值,得到多个损失值,并将该多个损失值的平均值作为单个样本图像的损失值。然后,电子设备可将多个样本图像的损失值的平均值作为人像分割模型当前的总损失值。电子设备可根据该总损失值执行反向传播算法,以更新该人像分割模型的参数,直至总损失值收敛,电子设备可保存所得到的人像分割模型。
在预测阶段,电子设备可获取上述保存的人像分割模型,并移除该人像分割模型的监督模块。随后,电子设备可获取需要进行人像分割的待分割图像输入该人像分割模型中,从而得到人像分割掩膜。当得到该人像分割掩膜之后,电子设备即可根据该人像分割掩膜从待分割图像中分割出人像。
可以理解的是,模型的预测过程通常与模型的训练过程相似,因此电子设备如何利用该人像分割模型根据待分割图像得到人像分割掩膜可参考上述人像分割模型的训练过程,在此不再赘述。
请参阅图6，图6为本申请实施例提供的图像处理装置的结构示意图。该图像处理装置200包括：第一获取模块201、第二获取模块202、第一调用模块203、输入模块204、第二调用模块205及分割模块206。
第一获取模块201,用于获取需要进行人像分割的待分割图像。
第二获取模块202,用于获取预训练的人像分割模型,所述预训练的人像分割模型包括:编码模块,特征金字塔模块和解码模块。
第一调用模块203,用于调用所述编码模块对所述待分割图像进行编码处理,得到第一特征图集合。
输入模块204,用于将所述第一特征图集合输入所述特征金字塔模块中,得到第二特征图集合。
第二调用模块205,用于调用所述解码模块对所述第二特征图集合进行解码处理,得到人像分割掩膜。
分割模块206,用于根据所述人像分割掩膜,从所述待分割图像中分割出人像。
在一些实施例中,所述解码模块包括第一子模块和第二子模块,所述第二调用模块205,可以用于:调用所述第一子模块将所述第二特征图集合中的特征图的尺寸调整至预设尺寸,得到调整后的特征图;调用所述第二子模块对所述调整后的特征图进行融合、卷积及采样处理,得到人像分割掩膜。
在一些实施例中,所述第一子模块包括第一网络块、第二网络块和第三网络块,所述第二调用模块205,可以用于:调用所述第一网络块、所述第二网络块和/或所述第三网络块将所述第二特征图集合中的特征图调整至预设尺寸,得到调整后的特征图。
在一些实施例中,所述第一网络块包括依次连接的卷积层、归一化层和激活层;所述第二网络块包括依次连接的卷积层、归一化层、激活层和上采样层;所述第三网络块包括依次连接的卷积层、归一化层、激活层和上采样层,所述第三网络块的输入通道数与所述第三网络块的输出通道数相同。
在一些实施例中,所述第二子模块包括第一融合层、第一卷积层和第一上采样层,所述第二调用模块205,可以用于:调用所述第一融合层对所述调整后的特征图进行融合处理,得到融合后的特征图;调用所述第一卷积层对所述融合后的特征图进行卷积处理,得到卷积后的特征图;调用所述第一上采样层对所述卷积后的特征图进行上采样处理,得到人像分割掩膜。
在一些实施例中,所述第一获取模块201,可以用于:获取样本图像,以及所述样本图像对应的标注掩膜;获取监督模块;利用所述样本图像、所述样本图像对应的标注掩膜和所述监督模块对人像分割模型进行训练。
在一些实施例中,所述第一获取模块201,可以用于:调用所述编码模块对所述样本图像进行编码处理,得到第三特征图集合;将所述第三特征图集合输入所述特征金字塔模块,得到第四特征图集合;调用所述解码模块对所述第四特征图集合进行解码处理,得到期望掩膜;调用所述监督模块对所述第四特征图集合进行还原处理,得到多个监督掩膜;根据所述期望掩膜与所述标注掩膜的差异,以及每个监督掩模与所述标注掩膜的差异,调整所述人像分割模型的参数。
在一些实施例中,所述监督模块包括第四卷积层和第三上采样层,所述第一获取模块201,可以用于:调用所述第四卷积层对所述第四特征图集合中的特征图分别进行卷积处理,得到第五特征图集合;调用所述第三上采样层对所述第五特征图集合中的特征图分别进行上采样处理,得到多个监督掩膜。
在一些实施例中,所述第一获取模块201,可以用于:获取原始图像;对所述原始图像进行数据增强处理,得到样本图像。
本申请实施例提供一种计算机可读的存储介质,其上存储有计算机程序,当所述计算机程序在计算机上执行时,使得所述计算机执行如本实施例提供的图像处理方法中的流程。
本申请实施例还提供一种电子设备,包括存储器,处理器,所述处理器通过调用所述存储器中存储的计算机程序,用于执行本实施例提供的图像处理方法中的流程。
例如,上述电子设备可以是诸如平板电脑或者智能手机等移动终端。请参阅图7,图7为本申请实施例提供的电子设备的结构示意图。
该电子设备300可以包括摄像模组301、存储器302、处理器303等部件。本领域技术人员可以理解,图7中示出的电子设备结构并不构成对电子设备的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
摄像模组301可以包括透镜、图像传感器和图像信号处理器,其中透镜用于采集外部的光源信号提供给图像传感器,图像传感器感应来自于透镜的光源信号,将其转换为数字化的原始图像,即RAW图像,并将该RAW图像提供给图像信号处理器处理。图像信号处理器可以对该RAW图像进行格式转换,降噪等处理,得到YUV图像。其中,RAW是未经处理、也未经压缩的格式,可以将其形象地称为“数字底片”。YUV是一种颜色编码方法,其中Y表示亮度,U表示色度,V表示浓度,人眼从YUV图像中可以直观的感受到其中所包含的自然特征。
存储器302可用于存储应用程序和数据。存储器302存储的应用程序中包含有可执行代码。应用程序可以组成各种功能模块。处理器303通过运行存储在存储器302的应用程序,从而执行各种功能应用以及数据处理。
处理器303是电子设备的控制中心,利用各种接口和线路连接整个电子设备的各个部分,通过运行或执行存储在存储器302内的应用程序,以及调用存储在存储器302内的数据,执行电子设备的各种功能和处理数据,从而对电子设备进行整体监控。
在本实施例中,电子设备中的处理器303会按照如下的指令,将一个或一个以上的应用程序的进程对应的可执行代码加载到存储器302中,并由处理器303来运行存储在存储器302中的应用程序,从而执行:
获取需要进行人像分割的待分割图像;
获取预训练的人像分割模型,所述预训练的人像分割模型包括:编码模块,特征金字塔模块和解码模块;
调用所述编码模块对所述待分割图像进行编码处理,得到第一特征图集合;
将所述第一特征图集合输入所述特征金字塔模块,得到第二特征图集合;
调用所述解码模块对所述第二特征图集合进行解码处理,得到人像分割掩膜;
根据所述人像分割掩膜,从所述待分割图像中分割出人像。
请参阅图8,电子设备300可以包括摄像模组301、存储器302、处理器303、触摸显示屏304、扬声器305、麦克风306等部件。
摄像模组301可以包括图像处理电路,图像处理电路可以利用硬件和/或软件组件实现,可包括定义图像信号处理(Image Signal Processing)管线的各种处理单元。图像处理电路至少可以包括:摄像头、图像信号处理器(Image Signal Processor,ISP处理器)、控制逻辑器、图像存储器以及显示器等。其中摄像头至少可以包括一个或多个透镜和图像传感器。图像传感器可包括色彩滤镜阵列(如Bayer滤镜)。图像传感器可获取用图像传感器的每个成像像素捕捉的光强度和波长信息,并提供可由图像信号处理器处理的一组原始图像数据。
图像信号处理器可以按多种格式逐个像素地处理原始图像数据。例如,每个图像像素可具有8、10、12或14比特的位深度,图像信号处理器可对原始图像数据进行一个或多个图像处理操作、收集关于图像数据的统计信息。其中,图像处理操作可按相同或不同的位深度精度进行。原始图像数据经过图像信号处理器处理后可存储至图像存储器中。图像信号处理器还可从图像存储器处接收图像数据。
图像存储器可为存储器装置的一部分、存储设备、或电子设备内的独立的专用存储器，并可包括DMA（Direct Memory Access，直接存储器存取）特征。
当接收到来自图像存储器的图像数据时,图像信号处理器可进行一个或多个图像处理操作,如时域滤波。处理后的图像数据可发送给图像存储器,以便在被显示之前进行另外的处理。图像信号处理器还可从图像存储器接收处理数据,并对所述处理数据进行原始域中以及RGB和YCbCr颜色空间中的图像数据处理。处理后的图像数据可输出给显示器,以供用户观看和/或由图形引擎或GPU(Graphics Processing Unit,图像处理器)进一步处理。此外,图像信号处理器的输出还可发送给图像存储器,且显示器可从图像存储器读取图像数据。在一种实施方式中,图像存储器可被配置为实现一个或多个帧缓冲器。
图像信号处理器确定的统计数据可发送给控制逻辑器。例如，统计数据可包括自动曝光、自动白平衡、自动聚焦、闪烁检测、黑电平补偿、透镜阴影校正等图像传感器的统计信息。
控制逻辑器可包括执行一个或多个例程(如固件)的处理器和/或微控制器。一个或多个例程可根据接收的统计数据,确定摄像头的控制参数以及ISP控制参数。例如,摄像头的控制参数可包括照相机闪光控制参数、透镜的控制参数(例如聚焦或变焦用焦距)、或这些参数的组合。ISP控制参数可包括用于自动白平衡和颜色调整(例如,在RGB处理期间)的增益水平和色彩校正矩阵等。
请参阅图9,图9为本实施例中图像处理电路的结构示意图。如图9所示,为便于说明,仅示出与本申请实施例相关的图像处理技术的各个方面。
例如图像处理电路可以包括:摄像头、图像信号处理器、控制逻辑器、图像存储器、显示器。其中,摄像头可以包括一个或多个透镜和图像传感器。在一些实施例中,摄像头可为长焦摄像头或广角摄像头中的任一者。
摄像头采集的第一图像传输给图像信号处理器进行处理。图像信号处理器处理第一图像后,可将第一图像的统计数据(如图像的亮度、图像的反差值、图像的颜色等)发送给控制逻辑器。控制逻辑器可根据统计数据确定摄像头的控制参数,从而摄像头可根据控制参数进行自动对焦、自动曝光等操作。第一图像经过图像信号处理器进行处理后可存储至图像存储器中。图像信号处理器也可以读取图像存储器中存储的图像以进行处理。另外,第一图像经过图像信号处理器进行处理后可直接发送至显示器进行显示。显示器也可以读取图像存储器中的图像以进行显示。
此外,图中没有展示的,电子设备还可以包括CPU和供电模块。CPU和逻辑控制器、图像信号处理器、图像存储器和显示器均连接,CPU用于实现全局控制。供电模块用于为各个模块供电。
存储器302存储的应用程序中包含有可执行代码。应用程序可以组成各种功能模块。处理器303通过运行存储在存储器302的应用程序,从而执行各种功能应用以及数据处理。
处理器303是电子设备的控制中心,利用各种接口和线路连接整个电子设备的各个部分,通过运行或执行存储在存储器302内的应用程序,以及调用存储在存储器302内的数据,执行电子设备的各种功能和处理数据,从而对电子设备进行整体监控。
触摸显示屏304可以用于接收用户对电子设备的触摸控制操作。扬声器305可以播放声音信号。传感器306可包括陀螺仪传感器、加速度传感器、方向传感器、磁场传感器等,其可用于获取电子设备300的当前姿态。
在本实施例中,电子设备中的处理器303会按照如下的指令,将一个或一个以上的应用程序的进程对应的可执行代码加载到存储器302中,并由处理器303来运行存储在存储器302中的应用程序,从而执行:
获取需要进行人像分割的待分割图像;
获取预训练的人像分割模型,所述预训练的人像分割模型包括:编码模块,特征金字塔模块和解码模块;
调用所述编码模块对所述待分割图像进行编码处理,得到第一特征图集合;
将所述第一特征图集合输入所述特征金字塔模块,得到第二特征图集合;
调用所述解码模块对所述第二特征图集合进行解码处理,得到人像分割掩膜;
根据所述人像分割掩膜,从所述待分割图像中分割出人像。
在一种实施方式中,所述解码模块包括第一子模块和第二子模块,处理器303执行调用所述解码模块对所述第二特征图集合进行解码处理,得到人像分割掩膜时,可以执行:调用所述第一子模块将所述第二特征图集合中的特征图的尺寸调整至预设尺寸,得到调整后的特征图;调用所述第二子模块对所述调整后的特征图进行融合、卷积及采样处理,得到人像分割掩膜。
在一种实施方式中,所述第一子模块包括第一网络块、第二网络块和第三网络块,处理器303执行调用所述第一子模块将所述第二特征图集合中的特征图的尺寸调整至预设尺寸,得到调整后的特征图时,可以执行:调用所述第一网络块、所述第二网络块和/或所述第三网络块将所述第二特征图集合中的特征图调整至预设尺寸,得到调整后的特征图。
在一种实施方式中,所述第一网络块包括依次连接的卷积层、归一化层和激活层;所述第二网络块包括依次连接的卷积层、归一化层、激活层和上采样层;所述第三网络块包括依次连接的卷积层、归一化层、激活层和上采样层,所述第三网络块的输入通道数与所述第三网络块的输出通道数相同。
在一种实施方式中,所述第二子模块包括第一融合层、第一卷积层和第一上采样层,处理器303执行调用所述第二子模块对所述调整后的特征图进行融合、卷积及采样处理,得到人像分割掩膜时,可以执行:调用所述第一融合层对所述调整后的特征图进行融合处理,得到融合后的特征图;调用所述第一卷积层对所述融合后的特征图进行卷积处理,得到卷积后的特征图;调用所述第一上采样层对所述卷积后的特征图进行上采样处理,得到人像分割掩膜。
In one embodiment, before obtaining the to-be-segmented image on which portrait segmentation is to be performed, the processor 303 may further execute: obtaining a sample image and an annotated mask corresponding to the sample image; obtaining a supervision module; and training the portrait segmentation model using the sample image, the annotated mask corresponding to the sample image, and the supervision module.
In one embodiment, when training the portrait segmentation model using the sample image, the annotated mask corresponding to the sample image, and the supervision module, the processor 303 may execute: invoking the encoding module to encode the sample image to obtain a third feature map set; inputting the third feature map set into the feature pyramid module to obtain a fourth feature map set; invoking the decoding module to decode the fourth feature map set to obtain an expected mask; invoking the supervision module to restore the fourth feature map set to obtain a plurality of supervision masks; and adjusting the parameters of the portrait segmentation model according to the difference between the expected mask and the annotated mask and the difference between each supervision mask and the annotated mask.
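The training objective above combines a main difference (expected mask vs. annotated mask) with one auxiliary difference per supervision mask, a pattern commonly called deep supervision. A numpy sketch follows; binary cross-entropy as the difference measure and equal weighting of the auxiliary terms are assumptions — the embodiment only says both kinds of difference drive the parameter adjustment.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy between a predicted mask and the annotated mask."""
    p = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

def total_loss(expected_mask, supervision_masks, label):
    """Main loss on the decoder's expected mask, plus one equally weighted
    auxiliary loss per supervision mask (hypothetical weighting)."""
    loss = bce(expected_mask, label)
    for m in supervision_masks:
        loss += bce(m, label)
    return loss

rng = np.random.default_rng(0)
label = rng.integers(0, 2, (8, 8)).astype(float)     # annotated mask
main = np.clip(label * 0.9 + 0.05, 0, 1)             # near-perfect expected mask
aux = [np.full((8, 8), 0.5) for _ in range(3)]       # uninformative supervision masks
```

The auxiliary terms only exist at training time; at inference the supervision module is dropped and only the decoder's mask is used.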
In one embodiment, the supervision module includes a fourth convolution layer and a third upsampling layer. When invoking the supervision module to restore the fourth feature map set to obtain a plurality of supervision masks, the processor 303 may execute: invoking the fourth convolution layer to separately convolve the feature maps in the fourth feature map set to obtain a fifth feature map set; and invoking the third upsampling layer to separately upsample the feature maps in the fifth feature map set to obtain a plurality of supervision masks.
In one embodiment, when obtaining the sample image, the processor 303 may execute: obtaining an original image; and performing data augmentation on the original image to obtain the sample image.
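The data-augmentation step can be sketched as follows. The embodiment only says the original image is augmented to obtain the sample image; the two specific transforms here (random horizontal flip and random brightness jitter) are illustrative choices of mine.

```python
import numpy as np

def augment(image, rng):
    """Produce one augmented sample image from an original image.
    image: (H, W, 3) float array in [0, 1]; rng: numpy Generator."""
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1, :]                  # random horizontal flip
    scale = rng.uniform(0.8, 1.2)              # random brightness factor
    return np.clip(out * scale, 0.0, 1.0)

rng = np.random.default_rng(0)
sample = augment(np.full((4, 4, 3), 0.5), rng)
```

When a flip or crop is applied, the annotated mask must be transformed identically so that the label stays aligned with the sample image.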
In the above embodiments, the description of each embodiment has its own emphasis. For parts not detailed in a given embodiment, reference may be made to the detailed description of the image processing method above, which is not repeated here.
The image processing apparatus provided in the embodiments of this application belongs to the same concept as the image processing method in the above embodiments. Any method provided in the image processing method embodiments can run on the image processing apparatus; for its specific implementation process, see the image processing method embodiments, which is not repeated here.
It should be noted that, regarding the image processing method of the embodiments of this application, a person of ordinary skill in the art can understand that all or part of the flow of implementing the image processing method of the embodiments of this application may be completed by a computer program controlling the relevant hardware. The computer program may be stored in a computer-readable storage medium, for example in a memory, and executed by at least one processor; the execution process may include the flow of the embodiments of the image processing method. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM, Read Only Memory), a random access memory (RAM, Random Access Memory), or the like.
Regarding the image processing apparatus of the embodiments of this application, its functional modules may be integrated in one processing chip, or each module may exist physically alone, or two or more modules may be integrated in one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.
The image processing method, apparatus, storage medium, and electronic device provided in the embodiments of this application have been described in detail above. Specific examples are used herein to explain the principles and implementations of this application, and the description of the above embodiments is only intended to help understand the method of this application and its core idea. Meanwhile, those skilled in the art will make changes to the specific implementations and the scope of application in accordance with the idea of this application. In summary, the content of this specification should not be construed as limiting this application.

Claims (20)

  1. An image processing method, comprising:
    obtaining a to-be-segmented image on which portrait segmentation is to be performed;
    obtaining a pre-trained portrait segmentation model, the pre-trained portrait segmentation model comprising an encoding module, a feature pyramid module, and a decoding module;
    invoking the encoding module to encode the to-be-segmented image to obtain a first feature map set;
    inputting the first feature map set into the feature pyramid module to obtain a second feature map set;
    invoking the decoding module to decode the second feature map set to obtain a portrait segmentation mask; and
    segmenting a portrait from the to-be-segmented image according to the portrait segmentation mask.
  2. The image processing method according to claim 1, wherein the decoding module comprises a first sub-module and a second sub-module, and the invoking the decoding module to decode the second feature map set to obtain a portrait segmentation mask comprises:
    invoking the first sub-module to resize the feature maps in the second feature map set to a preset size to obtain adjusted feature maps; and
    invoking the second sub-module to perform fusion, convolution, and sampling on the adjusted feature maps to obtain a portrait segmentation mask.
  3. The image processing method according to claim 2, wherein the first sub-module comprises a first network block, a second network block, and a third network block, and the invoking the first sub-module to resize the feature maps in the second feature map set to a preset size to obtain adjusted feature maps comprises:
    invoking the first network block, the second network block, and/or the third network block to adjust the feature maps in the second feature map set to a preset size to obtain adjusted feature maps.
  4. The image processing method according to claim 3, wherein the first network block comprises a convolution layer, a normalization layer, and an activation layer connected in sequence;
    the second network block comprises a convolution layer, a normalization layer, an activation layer, and an upsampling layer connected in sequence; and
    the third network block comprises a convolution layer, a normalization layer, an activation layer, and an upsampling layer connected in sequence, the number of input channels of the third network block being equal to the number of output channels of the third network block.
  5. The image processing method according to claim 2, wherein the second sub-module comprises a first fusion layer, a first convolution layer, and a first upsampling layer, and the invoking the second sub-module to perform fusion, convolution, and sampling on the adjusted feature maps to obtain a portrait segmentation mask comprises:
    invoking the first fusion layer to fuse the adjusted feature maps to obtain a fused feature map;
    invoking the first convolution layer to convolve the fused feature map to obtain a convolved feature map; and
    invoking the first upsampling layer to upsample the convolved feature map to obtain a portrait segmentation mask.
  6. The image processing method according to claim 1, further comprising, before the obtaining a to-be-segmented image on which portrait segmentation is to be performed:
    obtaining a sample image and an annotated mask corresponding to the sample image;
    obtaining a supervision module; and
    training a portrait segmentation model using the sample image, the annotated mask corresponding to the sample image, and the supervision module.
  7. The image processing method according to claim 6, wherein the training a portrait segmentation model using the sample image, the annotated mask corresponding to the sample image, and the supervision module comprises:
    invoking the encoding module to encode the sample image to obtain a third feature map set;
    inputting the third feature map set into the feature pyramid module to obtain a fourth feature map set;
    invoking the decoding module to decode the fourth feature map set to obtain an expected mask;
    invoking the supervision module to restore the fourth feature map set to obtain a plurality of supervision masks; and
    adjusting parameters of the portrait segmentation model according to the difference between the expected mask and the annotated mask and the difference between each supervision mask and the annotated mask.
  8. The image processing method according to claim 7, wherein the supervision module comprises a fourth convolution layer and a third upsampling layer, and the invoking the supervision module to restore the fourth feature map set to obtain a plurality of supervision masks comprises:
    invoking the fourth convolution layer to separately convolve the feature maps in the fourth feature map set to obtain a fifth feature map set; and
    invoking the third upsampling layer to separately upsample the feature maps in the fifth feature map set to obtain a plurality of supervision masks.
  9. The image processing method according to claim 6, wherein the obtaining a sample image comprises:
    obtaining an original image; and
    performing data augmentation on the original image to obtain the sample image.
  10. An image processing apparatus, comprising:
    a first obtaining module, configured to obtain a to-be-segmented image on which portrait segmentation is to be performed;
    a second obtaining module, configured to obtain a pre-trained portrait segmentation model, the pre-trained portrait segmentation model comprising an encoding module, a feature pyramid module, and a decoding module;
    a first invoking module, configured to invoke the encoding module to encode the to-be-segmented image to obtain a first feature map set;
    an input module, configured to input the first feature map set into the feature pyramid module to obtain a second feature map set;
    a second invoking module, configured to invoke the decoding module to decode the second feature map set to obtain a portrait segmentation mask; and
    a segmentation module, configured to segment a portrait from the to-be-segmented image according to the portrait segmentation mask.
  11. A storage medium storing a computer program, wherein, when the computer program runs on a computer, the computer is caused to execute the image processing method according to claim 1.
  12. An electronic device, comprising a processor and a memory, the memory storing a computer program, wherein the processor, by invoking the computer program stored in the memory, is configured to execute: obtaining a to-be-segmented image on which portrait segmentation is to be performed;
    obtaining a pre-trained portrait segmentation model, the pre-trained portrait segmentation model comprising an encoding module, a feature pyramid module, and a decoding module;
    invoking the encoding module to encode the to-be-segmented image to obtain a first feature map set;
    inputting the first feature map set into the feature pyramid module to obtain a second feature map set;
    invoking the decoding module to decode the second feature map set to obtain a portrait segmentation mask; and
    segmenting a portrait from the to-be-segmented image according to the portrait segmentation mask.
  13. The electronic device according to claim 12, wherein the processor is configured to execute:
    invoking the first sub-module to resize the feature maps in the second feature map set to a preset size to obtain adjusted feature maps; and
    invoking the second sub-module to perform fusion, convolution, and sampling on the adjusted feature maps to obtain a portrait segmentation mask.
  14. The electronic device according to claim 13, wherein the processor is configured to execute:
    invoking the first network block, the second network block, and/or the third network block to adjust the feature maps in the second feature map set to a preset size to obtain adjusted feature maps.
  15. The electronic device according to claim 14, wherein the first network block comprises a convolution layer, a normalization layer, and an activation layer connected in sequence;
    the second network block comprises a convolution layer, a normalization layer, an activation layer, and an upsampling layer connected in sequence; and
    the third network block comprises a convolution layer, a normalization layer, an activation layer, and an upsampling layer connected in sequence, the number of input channels of the third network block being equal to the number of output channels of the third network block.
  16. The electronic device according to claim 13, wherein the processor is configured to execute:
    invoking the first fusion layer to fuse the adjusted feature maps to obtain a fused feature map;
    invoking the first convolution layer to convolve the fused feature map to obtain a convolved feature map; and
    invoking the first upsampling layer to upsample the convolved feature map to obtain a portrait segmentation mask.
  17. The electronic device according to claim 12, wherein the processor is configured to execute:
    obtaining a sample image and an annotated mask corresponding to the sample image;
    obtaining a supervision module; and
    training a portrait segmentation model using the sample image, the annotated mask corresponding to the sample image, and the supervision module.
  18. The electronic device according to claim 17, wherein the processor is configured to execute:
    invoking the encoding module to encode the sample image to obtain a third feature map set;
    inputting the third feature map set into the feature pyramid module to obtain a fourth feature map set;
    invoking the decoding module to decode the fourth feature map set to obtain an expected mask;
    invoking the supervision module to restore the fourth feature map set to obtain a plurality of supervision masks; and
    adjusting parameters of the portrait segmentation model according to the difference between the expected mask and the annotated mask and the difference between each supervision mask and the annotated mask.
  19. The electronic device according to claim 18, wherein the processor is configured to execute:
    invoking the fourth convolution layer to separately convolve the feature maps in the fourth feature map set to obtain a fifth feature map set; and
    invoking the third upsampling layer to separately upsample the feature maps in the fifth feature map set to obtain a plurality of supervision masks.
  20. The electronic device according to claim 17, wherein the processor is configured to execute:
    obtaining an original image; and
    performing data augmentation on the original image to obtain the sample image.
PCT/CN2021/073842 2020-03-12 2021-01-26 Image processing method and apparatus, storage medium, and electronic device WO2021179820A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010171398.0A CN111402258A (zh) 2020-03-12 2020-03-12 Image processing method and apparatus, storage medium, and electronic device
CN202010171398.0 2020-03-12

Publications (1)

Publication Number Publication Date
WO2021179820A1 true WO2021179820A1 (zh) 2021-09-16



Also Published As

Publication number Publication date
CN111402258A (zh) 2020-07-10

