CN116266337A - Image background blurring method, device, equipment and storage medium - Google Patents

Image background blurring method, device, equipment and storage medium Download PDF

Info

Publication number
CN116266337A
CN116266337A (application number CN202111532136.3A)
Authority
CN
China
Prior art keywords
image
depth
background
value
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111532136.3A
Other languages
Chinese (zh)
Inventor
朱志鹏
王钊
汪路超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Douku Software Technology Co Ltd
Original Assignee
Hangzhou Douku Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Douku Software Technology Co Ltd
Priority to CN202111532136.3A
Publication of CN116266337A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image


Abstract

The application discloses an image background blurring method, device, equipment and storage medium, wherein the method comprises the following steps: collecting an image; image segmentation is carried out on the image to obtain an image segmentation mask; the image segmentation mask is used for dividing the image into a foreground part and a background part; performing depth prediction on the image to obtain a first depth image of the image; re-segmenting a foreground part and a background part of the image based on the image segmentation mask and the first depth image to obtain a non-blurred part and a blurred part of the image; and carrying out background blurring processing on the blurred part of the image to obtain a background blurring image of the image. Thus, based on the image segmentation mask and the first depth image of the image, the image is precisely segmented, and the non-blurred part (namely the real foreground part) and the blurred part (namely the real background part) determined after segmentation are closer to the real scene of the image, so that the background blurring effect of the image is improved.

Description

Image background blurring method, device, equipment and storage medium
Technical Field
The present disclosure relates to image processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for blurring an image background.
Background
With the widespread use of smartphones, the camera, as an important module of the phone, has replaced the compact camera as the most commonly used portable photographing device for the general public. However, limited by its volume, a mobile phone camera has a small aperture and a short focal length, so the blurring effect of the shot background is poor, which is an obvious limitation in application. For this reason, the prior art uses software to simulate the background blurring effect shot by the large-aperture lens of a professional single-lens reflex camera, but certain drawbacks still exist, such as an insufficiently natural transition between the foreground and the background and a weak sense of realism in the blurred background.
Disclosure of Invention
In order to solve the above technical problems, the present application is expected to provide an image background blurring method, device, equipment and storage medium.
The technical scheme of the application is realized as follows:
in a first aspect, there is provided a method of image background blurring, the method comprising:
collecting an image;
performing image segmentation on the image to obtain an image segmentation mask; wherein the image segmentation mask is used to divide the image into a foreground portion and a background portion;
performing depth prediction on the image to obtain a first depth image of the image;
Re-segmenting a foreground portion and a background portion of the image based on the image segmentation mask and the first depth image to obtain a non-blurred portion and a blurred portion of the image;
and carrying out background blurring processing on the blurred part of the image to obtain a background blurring image of the image.
In a second aspect, there is provided an image background blurring apparatus, comprising:
the acquisition unit is used for acquiring images;
the segmentation unit is used for carrying out image segmentation on the image to obtain an image segmentation mask; wherein the image segmentation mask is used to divide the image into a foreground portion and a background portion;
the prediction unit is used for carrying out depth prediction on the image to obtain a first depth image of the image;
the segmentation unit is further used for re-segmenting the foreground part and the background part of the image based on the image segmentation mask and the first depth image to obtain a non-blurred part and a blurred part of the image;
and the processing unit is used for carrying out background blurring processing on the blurred part of the image to obtain a background blurring image of the image.
In a third aspect, an electronic device is provided, comprising: a processor and a memory configured to store a computer program capable of running on the processor, wherein the processor is configured to perform the steps of the aforementioned method when the computer program is run.
In a fourth aspect, a computer readable storage medium is provided, on which a computer program is stored, wherein the computer program, when being executed by a processor, carries out the steps of the aforementioned method.
The embodiment of the application provides an image background blurring method, device, equipment and storage medium, which are used for accurately dividing an image based on an image dividing mask and a first depth image of the image, wherein a non-blurred part (namely a real foreground part) and a blurred part (namely a real background part) determined after division are closer to the real scene of the image, so that the background blurring effect of the image is improved.
Drawings
FIG. 1 is a schematic flow chart of a method for blurring an image background according to an embodiment of the present application;
fig. 2 is a detailed schematic diagram of a PortraitNet network in an embodiment of the present application;
FIG. 3 is a schematic diagram of an image segmentation process in an embodiment of the present application;
FIG. 4 is a schematic diagram of a depth prediction process in an embodiment of the present application;
FIG. 5 is a detailed diagram of a Resnext50 network in an embodiment of the present application;
FIG. 6 is a detailed schematic diagram of a U-Net network in an embodiment of the present application;
FIG. 7 is a second flow chart of a method for blurring an image background according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a bokeh convolution kernel in an embodiment of the present application;
FIG. 9 is a schematic flowchart of a bokeh blurring process in an embodiment of the present application;
FIG. 10 is a schematic diagram of an image background blurring component structure in an embodiment of the present application;
fig. 11 is a schematic diagram of a composition structure of an electronic device in an embodiment of the present application.
Detailed Description
For a more complete understanding of the features and technical content of the embodiments of the present application, reference should be made to the following detailed description of the embodiments of the present application, taken in conjunction with the accompanying drawings, which are for purposes of illustration only and not intended to limit the embodiments of the present application.
In the prior art, software is used to simulate the background blurring effect shot by the large-aperture lens of a professional single-lens reflex camera, but the transition between the foreground and the background of the image is not natural enough, so the blurred background lacks realism. In view of this, the embodiment of the present application provides an image background blurring method, so that the transition between the foreground and the background is more natural and the blurred background is closer to a real blurring effect.
Fig. 1 is a first flow chart of an image background blurring method according to an embodiment of the present application, as shown in fig. 1, the image background blurring method may specifically include:
step 101: an image is acquired.
In the embodiment of the present application, the image is obtained by photographing a subject with an electronic device having a photographing function. By way of example, the electronic device may be a hardware device with any of various operating systems, such as a smart phone, a personal computer (e.g., a tablet, desktop, notebook, netbook or palmtop computer) or a wearable device.
In one possible implementation, the electronic device may include a visible light image sensor, and the image may be acquired based on the visible light image sensor in the electronic device. Specifically, a visible light camera included in the visible light sensor captures visible light reflected by a shooting object to image, and an image is obtained.
In another possible implementation, the electronic device may include a structured light image sensor, and the image may be acquired based on the structured light image sensor in the electronic device. Specifically, a laser camera included in the structured light image sensor captures structured light reflected by a shooting object to image, and an image is obtained.
Step 102: performing image segmentation on the image to obtain an image segmentation mask; wherein the image segmentation mask is used to divide the image into a foreground portion and a background portion.
Here, the image segmentation mask is a binary image composed of 0s and 1s; for example, 0 indicates that a pixel belongs to the background portion and 1 indicates that a pixel belongs to the foreground (e.g., portrait) portion, so that the foreground portion and the background portion of the image are distinguished.
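For illustration only, the following minimal NumPy sketch shows how such a 0/1 mask can be used to separate the two parts; the array shapes and the helper name split_by_mask are assumptions of this example, not part of the disclosure.

```python
import numpy as np

def split_by_mask(image: np.ndarray, mask: np.ndarray):
    """Split an H x W x 3 image into foreground and background parts
    using a binary segmentation mask (1 = foreground, 0 = background)."""
    mask3 = mask[..., None].astype(bool)      # H x W x 1, broadcast over the colour channels
    foreground = np.where(mask3, image, 0)    # keep foreground pixels, zero elsewhere
    background = np.where(mask3, 0, image)    # keep background pixels, zero elsewhere
    return foreground, background

# Example: a 4 x 4 dummy image with the left half marked as foreground.
img = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
msk = np.zeros((4, 4), dtype=np.uint8)
msk[:, :2] = 1
fg, bg = split_by_mask(img, msk)
```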
In the embodiment of the present application, an image segmentation network is used to perform image segmentation on the image and obtain the image segmentation mask. Image segmentation networks generally include semantic segmentation networks and instance segmentation networks. A semantic segmentation network performs pixel-level segmentation: each pixel is assigned to a corresponding category, but instances of the same category are not distinguished, which is the essential difference from an instance segmentation network. For example, on an image, a semantic segmentation network assigns all people to one category without distinguishing between different people, while an instance segmentation network distinguishes the different instance objects belonging to different people.
Here, the boundary between the foreground (such as a portrait) and the background has a large influence on the blurring effect, but the boundary obtained after segmenting the image with a general semantic segmentation network such as FCN or DeepLab is not natural enough, so the blurring effect is poor. Therefore, the embodiment of the present application performs image segmentation with a PortraitNet network.
Fig. 2 is a detailed schematic diagram of the PortraitNet network in the embodiment of the present application. As shown in fig. 2, the image first passes through an input layer formed by a conv 1×1 convolution network; the encoders encoder block1, encoder block2, encoder block3, encoder block4 and encoder block5 then encode in sequence, downsampling according to the corresponding downsampling rates 2, 4, 8, 16 and 32 and outputting the downsampling results; the decoders decoder block5, decoder block4, decoder block3, decoder block2 and decoder block1 take the downsampling results as input and decode in sequence, upsampling according to the corresponding upsampling rates 32, 16, 8, 4 and 2 and outputting the upsampling results; finally, the image segmentation mask is output through an output layer formed by a conv 1×1 convolution network. A decoder block has two branches: one branch comprises two depthwise-separable conv 1×1 convolution networks and two conv dw 3×3 convolution networks, performing convolution calculation in the order conv dw 3×3, conv 1×1, conv dw 3×3, conv 1×1; the other branch contains a single conv 1×1 convolution network to adjust the number of channels. An encoder block performs convolution calculation in the order conv 1×1, conv dw 3×3, conv 1×1.
Before image segmentation is performed on the image, the PortraitNet network is constructed and trained. In the training process, in addition to the conventional cross-entropy loss, an auxiliary loss is added to measure the difference of the segmentation boundaries. The overall loss function is as follows:
L_1 = -\sum_i \left[ y_i \log p_i + (1 - y_i) \log (1 - p_i) \right]

L_2 = -\sum_i \left[ y_i \log p_i + (1 - y_i) \log (1 - p_i) \right]

L = L_1 + \lambda L_2
where L_1 is the cross-entropy loss, in which p_i denotes the predicted value and y_i the portrait label value; L_2 is the auxiliary boundary loss, in which p_i denotes the predicted boundary value and y_i the boundary label value, i.e. whether the pixel belongs to a boundary. The boundary labels are obtained by applying a Canny operator to the image segmentation mask.
As for the setting of λ: because only one convolution layer is used to generate the boundary mask, the mask features and the boundary features may compete ineffectively within the feature representation, and λ is set to avoid this. The boundary loss improves the sensitivity of the model to the portrait boundary and thereby improves the segmentation accuracy.
Here, the trained PortraitNet network is obtained by adjusting the parameters of the PortraitNet network according to the value of the loss function L until the convergence condition is satisfied.
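A rough sketch of the training objective described above (cross-entropy on the mask plus a boundary term weighted by λ, with boundary labels derived from the mask by a Canny operator) is given below; the Canny thresholds, the value of λ and the binary cross-entropy form of the boundary term are assumptions for illustration, not the exact formulation of the filing.

```python
import cv2
import numpy as np
import torch
import torch.nn.functional as F

def boundary_labels(mask: np.ndarray) -> np.ndarray:
    """Derive 0/1 boundary labels from a 0/1 segmentation mask with the Canny operator."""
    edges = cv2.Canny((mask * 255).astype(np.uint8), 100, 200)   # thresholds are illustrative
    return (edges > 0).astype(np.float32)

def segmentation_loss(pred_mask: torch.Tensor, pred_boundary: torch.Tensor,
                      gt_mask: torch.Tensor, gt_boundary: torch.Tensor,
                      lam: float = 0.1) -> torch.Tensor:
    """L = L1 + lambda * L2: mask cross-entropy plus the auxiliary boundary loss."""
    l1 = F.binary_cross_entropy(pred_mask, gt_mask)            # L1: portrait mask term
    l2 = F.binary_cross_entropy(pred_boundary, gt_boundary)    # L2: boundary term (assumed BCE form)
    return l1 + lam * l2
```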
In some embodiments, step 102 may specifically include: and performing image segmentation processing on the image by using a PortraitNet network to obtain the image segmentation mask.
Here, the PortraitNet network in this embodiment is a trained network.
In this embodiment, an image segmentation process flow is given, and fig. 3 is a schematic diagram of the image segmentation process flow in this embodiment, as shown in fig. 3, an image 30 is taken as an input of a PortraitNet network 31, and after the image 30 is subjected to the image segmentation process, an image segmentation mask 32 of the image 30 is output.
Step 103: and carrying out depth prediction on the image to obtain a first depth image of the image.
Here, the first depth image refers to an image containing the depth value of each pixel point of the image, i.e. its distance from the camera module of the electronic device.
It should be noted that the size of the image affects the depth prediction result: a small-size image gives better depth prediction on flat regions but worse prediction of edge details, while a large-size image gives better prediction of edge details but worse prediction on flat regions. Therefore, in the embodiment of the present application, the image is scaled to N different sizes, depth prediction is performed on each, and the results are then fused. This combines the advantages of the different sizes and yields a first depth image that is smooth on flat regions while preserving edge details, ensuring the accuracy of the estimated image depth; a more accurate depth makes the background blurring more natural.
Illustratively, in some embodiments, step 103 may specifically include: scaling the image to N different scales to obtain N scaled images, where N is a positive integer; and performing depth prediction on the N scaled images to obtain the first depth image.
In some embodiments, the depth prediction on the N scaled images comprises: performing depth prediction on each of the N scaled images with a monocular depth prediction network to obtain a corresponding second depth image; and performing depth fusion on the N second depth images with a multi-scale depth fusion network to obtain the first depth image.
An exemplary depth prediction process flow is provided in the embodiment of the present application. Fig. 4 is a schematic diagram of the depth prediction process flow in the embodiment of the present application. As shown in fig. 4, with N taking the value 2, the image 30 is scaled into images of different sizes, namely a first image 40 and a second image 41, which are respectively input into a monocular depth prediction network 42 for depth prediction, yielding second depth image A 43 and second depth image B 44; the two second depth images are then input into a multi-scale depth fusion network 45 for fusion processing to obtain a first depth image 46.
Here, both the monocular depth prediction network and the multi-scale depth fusion network in this embodiment are trained networks. The monocular depth prediction network may be a ResNeXt50 network or a VGG network, and the multi-scale depth fusion network may be a U-Net network.
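The multi-scale prediction and fusion described above can be sketched as follows; the two scale factors, the bilinear resizing and the channel-wise concatenation of the second depth images before fusion are assumptions for illustration, and depth_net / fusion_net stand for any trained monocular depth prediction network (e.g. ResNeXt50-based) and multi-scale depth fusion network (e.g. a U-Net).

```python
import torch
import torch.nn.functional as F

def predict_depth(image: torch.Tensor, depth_net, fusion_net,
                  scales=(0.5, 1.0)) -> torch.Tensor:
    """Run monocular depth prediction at several scales and fuse the results.

    image:      1 x 3 x H x W tensor
    depth_net:  monocular depth prediction network (assumed to return 1 x 1 x h x w)
    fusion_net: multi-scale depth fusion network (assumed to take concatenated depth maps)
    """
    _, _, h, w = image.shape
    depths = []
    for s in scales:
        scaled = F.interpolate(image, scale_factor=s, mode='bilinear', align_corners=False)
        d = depth_net(scaled)                                   # second depth image at this scale
        d = F.interpolate(d, size=(h, w), mode='bilinear', align_corners=False)
        depths.append(d)
    stacked = torch.cat(depths, dim=1)                          # concatenate along the channel axis
    return fusion_net(stacked)                                  # fused 1 x 1 x H x W first depth image
```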
Fig. 5 is a detailed schematic diagram of the ResNeXt50 network in the embodiment of the present application. As shown in fig. 5, the image is first subjected to dimension reduction by 64 7×7 convolution networks and 3×3 convolution networks; it then passes sequentially through 3 blocks of 3×3 convolution networks with 256 channels (downsampling rate 2), 4 blocks of 3×3 convolution networks with 512 channels (downsampling rate 4), 6 blocks of 3×3 convolution networks with 1024 channels (downsampling rate 16) and 3 blocks of 3×3 convolution networks with 2048 channels (downsampling rate 32), and the downsampling results are output. The downsampling results are then input to the decoders decoder block1, decoder block2, decoder block3, decoder block4 and decoder block5, which upsample according to the corresponding upsampling rates 2, 4, 8, 16 and 32, and the upsampling result is output as a depth image through an output layer formed by a conv 1×1 convolution network. The block with Cout 256 shows the convolution networks used when a block with 256 output channels performs its convolution calculation, namely 4 conv 1×1 convolution networks, 4 conv 3×3 convolution networks and 256 conv 1×1 convolution networks. A decoder block performs convolution calculation based on a conv 1×1 convolution network, a conv dw 3×3 convolution network and a conv 1×1 convolution network in sequence.
Fig. 6 is a detailed schematic diagram of the U-Net network in the embodiment of the present application. As shown in fig. 6, the two depth images of different sizes are encoded in sequence by the encoders encoder block1, encoder block2, encoder block3 and encoder block4, downsampled according to the corresponding downsampling rates 2, 4, 8 and 16, and the downsampling results are output; the decoders decoder block5, decoder block4, decoder block3 and decoder block2 take the downsampling results as input and decode in sequence, upsampling according to the corresponding upsampling rates 32, 16, 8 and 4, and the upsampling result is output; finally, the final depth image is output through the output layer. A decoder block has two branches: one branch comprises two conv 1×1 convolution networks and a depthwise-separable conv dw 3×3 convolution network, performing convolution calculation in the order conv 1×1, conv dw 3×3, conv 1×1; the other branch contains a single conv 1×1 convolution network to adjust the number of channels. An encoder block performs convolution calculation based on a conv 1×1 convolution network, a conv dw 3×3 convolution network and a conv 1×1 convolution network in sequence.
Before the depth prediction is performed on the image, the ResNeXt50 network and the U-Net network are constructed and jointly trained, so that the whole network is an end-to-end network, which is more convenient to train and use.
For the training of the ResNeXt50 network, the image is scaled to two different sizes and the two scaled images are input to the ResNeXt50 network respectively, which outputs two depth prediction maps of different sizes; a loss is then calculated for each of the two depth prediction maps. The loss functions are as follows:
L_1 = \sum_i (p_i - y_i)^2 + \sum_i \sum_{j \in N(i)} (p_i - p_j)^2

L_2 = \sum_i (\Delta p_i - \Delta y_i)^2
L_1 is the small-size depth prediction loss, where y denotes the label value and p the predicted value. The loss consists of two parts: one is the square of the difference between the predicted value and the label value, which constrains the accuracy of the absolute depth prediction; the other is the difference between each point and its neighbourhood in the predicted values, a smoothing term that makes the output smoother on flat regions.
L_2 is the large-size depth prediction loss, where Δy and Δp denote the gradients of the label values and the predicted values, respectively; this term is mainly used to improve the accuracy of the predicted boundaries and details.
The large-size and small-size depth prediction maps are then scaled to the same size and merged together as the input of the U-Net network, which outputs the final depth prediction map. Its loss function is as follows:
L_3: the loss of the final depth prediction map output by the U-Net network.
finally, the network is jointly trained, and the total loss function is as follows:
L = \alpha_1 L_1 + \alpha_2 L_2 + \alpha_3 L_3
Here, the trained ResNeXt50 network and U-Net network are obtained by adjusting the parameters of the whole network according to the value of the loss function L until the convergence condition is satisfied.
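A hedged sketch of the joint training objective is given below. Because the individual loss terms are shown above only as equation images, the concrete forms used here (squared depth error plus a neighbourhood smoothness term for L_1, a gradient error for L_2, and a squared error on the fused map for L_3) and the α weights are assumptions that merely follow the textual description.

```python
import torch
import torch.nn.functional as F

def joint_depth_loss(p_small, y_small, p_large, y_large, p_fused, y_full,
                     alphas=(1.0, 1.0, 1.0)) -> torch.Tensor:
    """Joint loss L = a1*L1 + a2*L2 + a3*L3 for the ResNeXt50 + U-Net pipeline (illustrative forms)."""
    # L1: small-size prediction -- absolute accuracy plus a smoothness term over neighbours.
    acc = F.mse_loss(p_small, y_small)
    dx = (p_small[..., :, 1:] - p_small[..., :, :-1]).abs().mean()
    dy = (p_small[..., 1:, :] - p_small[..., :-1, :]).abs().mean()
    l1 = acc + dx + dy
    # L2: large-size prediction -- penalise gradient differences to sharpen edges and details.
    gx = lambda t: t[..., :, 1:] - t[..., :, :-1]
    gy = lambda t: t[..., 1:, :] - t[..., :-1, :]
    l2 = F.l1_loss(gx(p_large), gx(y_large)) + F.l1_loss(gy(p_large), gy(y_large))
    # L3: loss on the fused depth prediction map output by the U-Net.
    l3 = F.mse_loss(p_fused, y_full)
    a1, a2, a3 = alphas
    return a1 * l1 + a2 * l2 + a3 * l3
```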
Step 104: based on the image segmentation mask and the first depth image, a foreground portion and a background portion of the image are re-segmented, and the image is re-segmented into a non-blurred portion and a blurred portion.
It should be noted that the foreground portion and the background portion segmented by the image segmentation mask alone are not accurate enough: part of the real foreground of the image may be mistakenly assigned to the background portion, and part of the real background may be mistakenly assigned to the foreground portion, which easily makes the blurring effect unrealistic. Therefore, the embodiment of the present application re-segments the image by combining the image segmentation mask with the first depth image, so that the segmentation result is closer to the real scene of the image.
Step 105: and carrying out background blurring processing on the blurred part of the image to obtain a background blurring image of the image.
Illustratively, gaussian blur processing is performed on the blurred portion of the image to obtain a background blurred image of the image. The blurring technique produces an image that visually appears to be viewed through a frosted glass.
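A minimal OpenCV sketch of this Gaussian variant is shown below; blurring the whole image once and compositing the result with the blur mask, as well as the kernel size of 21, are assumptions for illustration rather than the exact implementation.

```python
import cv2
import numpy as np

def gaussian_background_blur(image: np.ndarray, blur_mask: np.ndarray,
                             ksize: int = 21) -> np.ndarray:
    """Blur only the pixels marked as the blurred (background) part.

    image:     H x W x 3 uint8 image
    blur_mask: H x W array, 1 where the pixel belongs to the blurred part
    """
    blurred = cv2.GaussianBlur(image, (ksize, ksize), 0)   # frosted-glass style blur of the whole image
    m = blur_mask[..., None].astype(bool)
    return np.where(m, blurred, image)                     # keep the non-blurred part unchanged
```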
Illustratively, bokeh blurring processing is performed on the blurred portion of the image to obtain a background blurred image of the image. Here, bokeh blurring is a special kind of blurring: the light spots in the blurred portion are blurred into attractive circular spots, which creates a more beautiful hazy feeling and simulates the look of a large aperture.
Here, the execution subject of steps 101 to 105 may be a processor of the electronic device.
By adopting the technical scheme, the image is accurately segmented based on the image segmentation mask and the first depth image, and the non-blurred part (namely the real foreground part) and the blurred part (namely the real background part) determined after segmentation are closer to the real scene of the image, so that the background blurring effect of the image is improved.
Based on the above embodiments, the embodiments of the present application specifically propose an image background blurring method for describing how to re-segment an image based on an image segmentation mask and a first depth image.
Fig. 7 is a second flow chart of an image background blurring method according to an embodiment of the present application, and as shown in fig. 7, the image background blurring method may specifically include:
step 701: an image is acquired.
Step 702: performing image segmentation on the image to obtain an image segmentation mask; wherein the image segmentation mask is used to divide the image into a foreground portion and a background portion.
Step 703: and carrying out depth prediction on the image to obtain a first depth image of the image.
Step 704: determining depth related information based on the image segmentation mask and the first depth image; wherein the depth-related information includes first depth difference information of the background portion and the foreground portion, and first depth information of the foreground portion.
It should be noted that the foreground portion and the background portion segmented by the image segmentation mask alone are not accurate enough: part of the real foreground of the image may be mistakenly assigned to the background portion, and part of the real background may be mistakenly assigned to the foreground portion, which easily makes the blurring effect unrealistic. By re-segmenting the image based on both the image segmentation mask and the first depth image, the segmentation result is closer to the real scene of the image, so that the blurred background is closer to a real blurring effect.
Here, the depth-related information includes first depth difference information of the background portion and the foreground portion, and first depth information of the foreground portion. The depth difference information comprises identification information of each pixel point of the background part and a depth difference value of the background part and the foreground part, and the first depth information comprises identification information and a depth value of each pixel point of the foreground part.
For how to determine the first depth difference information and the first depth information based on the image segmentation mask and the first depth image: illustratively, in some embodiments, first depth information of the foreground portion and second depth information of the background portion are determined based on the image segmentation mask and the first depth image; a depth value of each pixel point of the foreground portion is determined based on the first depth information, and a maximum depth value and a minimum depth value are selected from these depth values; a depth value of each pixel point of the background portion is determined based on the second depth information; a first absolute value of the difference between the depth value of each pixel point of the background portion and the maximum depth value is calculated; a second absolute value of the difference between the depth value of each pixel point of the background portion and the minimum depth value is calculated; and the minimum of the first absolute value and the second absolute value corresponding to each pixel point of the background portion is selected as the depth difference value of that pixel point of the background portion, obtaining the first depth difference information.
First, the foreground portion and the background portion of the image are distinguished by the image segmentation mask, and the first depth information of the foreground portion and the second depth information of the background portion are then determined by combining the depth values of the pixel points contained in the first depth image, i.e. their distances from the camera module of the electronic device. The second depth information includes the identification information and depth value of each pixel point of the background portion.
And secondly, selecting a maximum depth value and a minimum depth value from the depth values of all the pixels of the foreground part, calculating a first absolute value of a difference value between the depth value of each pixel of the background part and the maximum depth value of the foreground part, calculating a second absolute value of a difference value between the depth value of each pixel of the background part and the minimum depth value of the foreground part, and selecting a minimum absolute value from the first absolute value and the second absolute value corresponding to each pixel of the background part as the depth difference value of each pixel of the background part to obtain first depth difference information.
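The computation of the first depth difference information described in the two preceding paragraphs can be sketched as follows; setting the difference to 0 on foreground pixels and the array layout are assumptions of this example.

```python
import numpy as np

def first_depth_difference(depth: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Per-pixel depth difference of the background part relative to the foreground.

    depth: H x W first depth image (float)
    mask:  H x W segmentation mask, 1 = foreground, 0 = background
    Returns an H x W array equal to min(|d - d_max_fg|, |d - d_min_fg|) on background pixels.
    """
    fg = mask.astype(bool)
    d_max, d_min = depth[fg].max(), depth[fg].min()            # foreground depth extrema
    diff = np.minimum(np.abs(depth - d_max), np.abs(depth - d_min))
    diff[fg] = 0.0                                             # only defined for background pixels here
    return diff
```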
Step 705: and re-segmenting the foreground part and the background part of the image based on the depth related information to obtain the non-blurred part and the blurred part of the image.
Here, the depth-related information includes first depth difference information of the background portion and the foreground portion, and first depth information of the foreground portion. The depth difference information comprises identification information of each pixel point of the background part and a depth difference value of the background part and the foreground part, and the first depth information comprises identification information and a depth value of each pixel point of the foreground part.
For how to re-segment the foreground portion and the background portion of the image based on the depth-related information, illustratively, in some embodiments, when the depth difference value in the first depth difference information is less than or equal to a depth difference threshold value, a corresponding pixel point in the background portion is marked as a non-blurred portion; when the depth difference value in the first depth difference information is larger than the depth difference threshold value, marking the corresponding pixel point in the background part as a fuzzy part; dividing a non-target foreground part from the foreground part based on the depth value of each pixel point in the first depth information; calculating second depth difference information of the non-target foreground part and the foreground part; when the depth difference value in the second depth difference information is smaller than or equal to the depth difference threshold value, marking the corresponding pixel point in the non-target foreground part as a non-fuzzy part; and when the depth difference value in the second depth difference information is larger than the depth difference threshold value, marking the corresponding pixel point in the non-target foreground part as a fuzzy part.
Here, for the segmentation of the background portion of the image, the non-blurred portion and the blurred portion in the background portion are determined from the comparison result of the depth difference value of the background portion pixel points and the depth difference threshold value. Particularly, when the depth difference value of the background part pixel points is smaller than or equal to a depth difference threshold value, marking the background part pixel points as non-fuzzy parts; and when the depth difference value of the background part pixel points is larger than the depth difference threshold value, marking the background part pixel points as fuzzy parts.
Here, for the segmentation of the foreground portion of the image, firstly, according to the depth value of each pixel point of the foreground portion, a non-target foreground portion (such as a non-human image portion) and a target foreground portion (such as a human image portion) are segmented from the foreground portion, wherein the target foreground portion is a non-blurred portion, and according to the comparison result of the depth difference value and the depth difference threshold value in the second depth difference information of the non-target foreground portion and the foreground portion, the non-blurred portion and the blurred portion in the non-target foreground portion are determined. Particularly, when the depth difference value of the non-target foreground part pixel points is smaller than or equal to a depth difference threshold value, marking the non-target foreground part pixel points as non-fuzzy parts; and when the depth difference value of the non-target foreground part pixel points is larger than the depth difference threshold value, marking the non-target foreground part pixel points as fuzzy parts.
And for the determination of the second depth difference information, specifically, calculating a third absolute value of a difference value between the depth value of each pixel point of the non-target foreground part and the maximum depth value of the foreground part, calculating a fourth absolute value of a difference value between the depth value of each pixel point of the non-target foreground part and the minimum depth value of the foreground part, and selecting the minimum absolute value from the third absolute value and the fourth absolute value corresponding to each pixel point of the non-target foreground part as the depth difference value of the corresponding pixel point of the non-target foreground part to obtain the second depth difference information.
For example, the depth difference threshold is 0.5, and when the depth difference of the pixels in the background part or the depth difference of the pixels in the non-target foreground part is less than or equal to 0.5, the corresponding pixels are marked as non-blurred parts; and when the depth difference value of the pixels in the background part or the depth difference value of the pixels in the non-target foreground part is larger than 0.5, marking the corresponding pixels as a fuzzy part.
In some embodiments, the segmenting the non-target foreground portion from the foreground portion based on the depth value of each pixel point in the first depth information includes: ordering the depth value of each pixel point in the first depth information; and marking the pixel points in the preset range as non-target foreground parts based on the sorting result.
Here, the non-target foreground portion may refer to a non-portrait portion.
Illustratively, the depth values of the pixel points in the first depth information are sorted in descending or ascending order. According to the sorting result, the pixel points whose depth values fall within the first five percent of the order and those whose depth values fall beyond the ninety-fifth percent are marked as the non-target foreground portion, and the pixel points with the remaining depth values are marked as the target foreground portion (such as the portrait portion).
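A sketch of this re-segmentation of the foreground portion is given below, reusing the 5% / 95% ordering example and the depth difference threshold of 0.5 from the description; the NumPy formulation and the returned boolean maps are assumptions for illustration.

```python
import numpy as np

def resegment_foreground(depth: np.ndarray, mask: np.ndarray,
                         low_pct: float = 5.0, high_pct: float = 95.0,
                         diff_thr: float = 0.5):
    """Split the foreground into target / non-target parts by sorted depth values and
    mark each non-target pixel as blurred or non-blurred."""
    fg = mask.astype(bool)
    fg_depth = depth[fg]
    lo, hi = np.percentile(fg_depth, [low_pct, high_pct])
    d_min, d_max = fg_depth.min(), fg_depth.max()

    non_target = fg & ((depth <= lo) | (depth >= hi))      # e.g. non-portrait pixels
    target = fg & ~non_target                              # e.g. the portrait, never blurred

    # Second depth difference information for the non-target foreground pixels.
    diff = np.minimum(np.abs(depth - d_max), np.abs(depth - d_min))
    blurred = non_target & (diff > diff_thr)
    non_blurred = target | (non_target & (diff <= diff_thr))
    return non_blurred, blurred
```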
Step 706: and carrying out bokeh blurring processing on the blurred part of the image to obtain a background blurring image of the image.
For how to perform bokeh blurring processing on the blurred part: illustratively, in some embodiments, a convolution kernel radius corresponding to the depth difference value of a pixel point in the blurred part is determined based on the correspondence between the depth difference value and the convolution kernel radius; and bokeh blurring processing is carried out on each pixel point of the blurred part according to the convolution kernel radius to obtain the background blurring image.
Here, the blurred portion of the image includes a blurred portion of the background portion and a blurred portion of the non-target foreground portion.
In order to be closer to the actual blurring effect, the embodiment of the application specifically adopts progressive blurring, namely different convolution kernel sizes are set, the larger the convolution kernel is, the higher the blurring degree is, the smaller the convolution kernel is, the lower the blurring degree is, so that the gradual blurring effect is achieved, and the blurring effect is more natural and real. The size of the convolution kernel is determined by the depth difference value, and if the depth difference value is d and the radius of the convolution kernel is r, the two have the following relationship:
r = 1 when d ≤ 0.5, and r > 1, increasing with d, when d > 0.5.
when the depth difference value in the first depth difference information or in the second depth difference information is less than or equal to 0.5, the convolution kernel radius r is equal to 1, and the corresponding pixel point is considered to belong to the non-blurred part (i.e. the real foreground part); when the depth difference value in the first depth difference information or in the second depth difference information is greater than 0.5, the convolution kernel radius r is greater than 1, and the corresponding pixel point is considered to belong to the blurred part (i.e. the real background part).
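Since the exact mapping from depth difference to radius is given above only qualitatively (r = 1 at or below the threshold of 0.5 and a progressively larger r above it), the following function is purely an illustrative assumption of one such mapping, not the formula of the filing.

```python
def kernel_radius(depth_diff: float, r_max: int = 10) -> int:
    """Map a depth difference d to a bokeh convolution kernel radius r (illustrative only)."""
    if depth_diff <= 0.5:
        return 1                                   # non-blurred part: radius 1, no blurring
    # Assumed linear ramp: r grows with d and saturates at r_max for d >= 1.0.
    return min(r_max, 1 + round((depth_diff - 0.5) * 2 * (r_max - 1)))
```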
For how each pixel point of the blurred part is subjected to bokeh blurring processing according to the convolution kernel radius, illustratively, in some embodiments, a convolution kernel weight of each pixel point of the blurred part is determined; each pixel point of the blurred part is taken as a central pixel point, and all peripheral pixel points of the central pixel point are searched for according to the convolution kernel radius; and a weighted sum of the central pixel point and all the peripheral pixel points is determined based on the pixel value and convolution kernel weight of each of the central pixel point and all the peripheral pixel points, so as to carry out bokeh blurring on the central pixel point.
Here, the convolution kernel weights for bokeh blurring are distributed in a circle centred on the central pixel point. For example, fig. 8 is a schematic diagram of a bokeh convolution kernel: w7 is the central pixel point, the convolution kernel radius is 2, and the pixel points around w7 include w1, w2, w3, w4, w5, w6, w8, w9, w10, w11, w12 and w13. The pixel value of each pixel point is multiplied by its convolution kernel weight and the products are summed, thereby performing bokeh blurring on the central pixel w7. The bokeh blurring of the other pixel points is performed in a similar way and is not described again.
For how to determine the convolution kernel weight for each pixel of the blurred portion, illustratively, in some embodiments, a corresponding convolution kernel weight is determined based on the brightness of each pixel of the blurred portion; wherein the convolution kernel weight is linearly related to the brightness.
In the embodiment of the present application, the weight of the convolution kernel is not fixed, but is linearly related to the brightness of the pixel point, that is, the weight of the convolution kernel is different in each convolution, and if the weight of the convolution kernel is w and the brightness of the pixel point is b, the two have the following relationship:
w_{ij} \propto b_{ij}

\sum_{i,j} w_{ij} = 1
where the sum of the weights is 1, and the weights are related only to the brightness of the pixel points within the convolution kernel.
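Putting the circular support and the brightness-proportional, normalised weights together, bokeh blurring of a single central pixel can be sketched as follows; taking the channel mean as the brightness b and adding a small epsilon to avoid division by zero are assumptions of this example.

```python
import numpy as np

def bokeh_blur_pixel(image: np.ndarray, y: int, x: int, r: int) -> np.ndarray:
    """Bokeh-blur one central pixel: the weights are distributed in a circle of radius r
    around the centre and are proportional to pixel brightness, normalised to sum to 1.
    image is an H x W x 3 float array; brightness is taken as the channel mean."""
    h, w, _ = image.shape
    ys = np.arange(max(0, y - r), min(h, y + r + 1))
    xs = np.arange(max(0, x - r), min(w, x + r + 1))
    yy, xx = np.meshgrid(ys, xs, indexing='ij')
    inside = (yy - y) ** 2 + (xx - x) ** 2 <= r ** 2        # circular support of the kernel
    patch = image[yy[inside], xx[inside]]                   # K x 3 neighbourhood pixels (incl. centre)
    brightness = patch.mean(axis=1) + 1e-6                  # avoid an all-zero patch
    weights = brightness / brightness.sum()                 # sum of weights is 1
    return (weights[:, None] * patch).sum(axis=0)           # weighted sum = new centre pixel value
```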
Based on the above embodiments, the present application provides a processing flow of the bokeh blurring process, and fig. 9 is a schematic diagram of the bokeh blurring processing flow in the embodiment of the present application.
As shown in fig. 9, a depth extremum 90 is determined based on the image segmentation mask 32 and the first depth image 46. Specifically, first depth information of the foreground portion and second depth information of the background portion are determined based on the image segmentation mask 32 and the first depth image 46, and a maximum depth value and a minimum depth value (i.e., the depth extremum 90) of the foreground portion pixel point are determined based on the first depth information.
Depth related information 91 is determined based on the first depth image 46 and the depth extremum 90. Specifically, based on the second depth information of the background portion in the first depth image 46, a first absolute value of a difference between a depth value of each pixel point of the background portion and a maximum depth value is calculated, a second absolute value of a difference between the depth value of each pixel point of the background portion and a minimum depth value is calculated, and a minimum absolute value is selected from the first absolute value and the second absolute value corresponding to each pixel point of the background portion as the depth difference of each pixel point of the background portion, so as to obtain first depth difference information. Wherein the first depth information and the first depth difference information constitute depth related information 91.
The foreground portion and the background portion of the image are re-segmented based on the depth related information 91 into a non-blurred portion 92 and a blurred portion 93. For the segmentation of the background portion of the image, the non-blurred part and the blurred part in the background portion are determined according to the comparison between the depth difference values of the background pixel points and the depth difference threshold. For the segmentation of the foreground portion of the image, a non-target foreground part (such as a non-portrait part) is first segmented from the foreground portion according to the depth value of each pixel point of the foreground portion, and then the non-blurred part and the blurred part in the non-target foreground part are determined according to the comparison between the depth difference values in the second depth difference information of the non-target foreground part relative to the foreground portion and the depth difference threshold. The blurred part of the background portion and the blurred part of the non-target foreground part together constitute the blurred portion 93.
When bokeh blurring processing is performed on the blurred portion 93, the convolution kernel radius corresponding to the depth difference value of the pixel points in the blurred portion 93 is first determined based on the correspondence between the depth difference value and the convolution kernel radius; bokeh blurring processing is then carried out on each pixel point of the blurred portion 93 according to the convolution kernel radius to obtain a background blurring image.
When bokeh blurring 95 is performed on a pixel point of the blurred portion 93, the pixel point is taken as the central pixel point, all peripheral pixel points of the central pixel point are found based on the convolution kernel radius, and the corresponding convolution kernel weight is determined according to the brightness of each pixel point, giving the bokeh convolution kernel 94; the pixel value of each pixel point is multiplied by its convolution kernel weight and the products are summed, thereby performing bokeh blurring on the central pixel point. The bokeh blurring of the remaining pixel points is performed in the same way. When the convolution kernel weight of each pixel point of the blurred portion 93 is determined, specifically, the corresponding convolution kernel weight is determined based on the brightness of each pixel point of the blurred portion 93, the convolution kernel weight being linearly related to the brightness.
By adopting the technical scheme, the depth related information is determined based on the image segmentation mask and the first depth image of the image, the image is precisely segmented based on the depth related information, and the non-blurred part (namely the real foreground part) and the blurred part (namely the real background part) determined after segmentation are closer to the real scene of the image, so that the background blurring effect of the image is improved.
In order to implement the method of the embodiment of the present application, based on the same inventive concept, an image background blurring device is further provided in the embodiment of the present application, and fig. 10 is a schematic diagram of a composition structure of the image background blurring device in the embodiment of the present application, as shown in fig. 10, where the image background blurring device 10 includes:
An acquisition unit 1001 for acquiring an image;
a segmentation unit 1002, configured to perform image segmentation on the image to obtain an image segmentation mask; wherein the image segmentation mask is used to divide the image into a foreground portion and a background portion;
a prediction unit 1003, configured to perform depth prediction on the image, so as to obtain a first depth image of the image;
the segmentation unit 1002 is further configured to re-segment the foreground portion and the background portion of the image based on the image segmentation mask and the first depth image, to obtain a non-blurred portion and a blurred portion of the image;
the processing unit 1004 is configured to perform background blurring processing on the blurred portion of the image, so as to obtain a background blurring image of the image.
By adopting the scheme, the image is accurately segmented based on the image segmentation mask and the first depth image, and the non-blurred part (namely the real foreground part) and the blurred part (namely the real background part) determined after segmentation are closer to the real scene of the image, so that the background blurring effect of the image is improved.
In some embodiments, the segmentation unit 1002 is specifically configured to determine depth related information based on the image segmentation mask and the first depth image; wherein the depth-related information includes first depth difference information of the background portion and the foreground portion, and first depth information of the foreground portion; the image is re-segmented into the non-blurred portion and the blurred portion based on the depth related information.
In some embodiments, when determining depth related information based on the image segmentation mask and the first depth image, the segmentation unit 1002 is specifically further configured to determine first depth information of the foreground portion and second depth information of the background portion based on the image segmentation mask and the first depth image; determining a depth value of each pixel point of the foreground part based on the first depth information, and determining a maximum depth value and a minimum depth value from the depth value; determining a depth value of each pixel point of the background part based on the second depth information; calculating a first absolute value of a difference value between the depth value of each pixel point of the background part and the maximum depth value; calculating a second absolute value of the difference value between the depth value of each pixel point of the background part and the minimum depth value; and selecting a minimum absolute value from the first absolute value and the second absolute value corresponding to each pixel point of the background part as a depth difference value of the pixel point corresponding to the background part, and obtaining the first depth difference information.
In some embodiments, when the foreground portion and the background portion of the image are re-segmented based on the depth related information, the segmentation unit 1002 is specifically further configured to mark a corresponding pixel point in the background portion as a non-blurred portion when a depth difference value in the first depth difference information is less than or equal to a depth difference threshold; when the depth difference value in the first depth difference information is larger than the depth difference threshold value, marking the corresponding pixel point in the background part as a fuzzy part; dividing a non-target foreground part from the foreground part based on the depth value of each pixel point in the first depth information; calculating second depth difference information of the non-target foreground part and the foreground part; when the depth difference value in the second depth difference information is smaller than or equal to the depth difference threshold value, marking the corresponding pixel point in the non-target foreground part as a non-fuzzy part; and when the depth difference value in the second depth difference information is larger than the depth difference threshold value, marking the corresponding pixel point in the non-target foreground part as a fuzzy part.
In some embodiments, when the non-target foreground portion is segmented from the foreground portion based on the depth value of each pixel point in the first depth information, the segmentation unit 1002 is further specifically configured to sort the depth value of each pixel point in the first depth information; and marking the pixel points in the preset range as non-target foreground parts based on the sorting result.
In some embodiments, when background blurring processing is performed on the blurred part of the image, the processing unit 1004 is specifically further configured to determine a convolution kernel radius corresponding to the depth difference value of the pixel point in the blurred part based on a correspondence between the depth difference value and the convolution kernel radius; and carry out bokeh blurring processing on each pixel point of the blurred part according to the convolution kernel radius to obtain the background blurring image.
In some embodiments, when bokeh blurring processing is performed on each pixel point of the blurred part according to the convolution kernel radius, the processing unit 1004 is specifically further configured to determine a convolution kernel weight value of each pixel point of the blurred part; take each pixel point of the blurred part as a central pixel point, and search for all peripheral pixel points of the central pixel point according to the convolution kernel radius; and determine a weighted sum of the central pixel point and all the peripheral pixel points based on the pixel value and the convolution kernel weight value of each of the central pixel point and all the peripheral pixel points, so as to carry out bokeh blurring on the central pixel point.
In some embodiments, when determining the convolution kernel weight of each pixel point of the blurred portion, the processing unit 1004 is specifically further configured to determine a corresponding convolution kernel weight based on the brightness of each pixel point of the blurred portion; wherein the convolution kernel weight is linearly related to the brightness.
In some embodiments, the prediction unit 1003 is specifically configured to perform N scalings of different scales on the image to obtain N scaled images, where N is a positive integer; and perform depth prediction on the N scaled images to obtain the first depth image.
In some embodiments, the prediction unit 1003 is specifically further configured to perform depth prediction on each of the N scaled images by using a monocular depth prediction network to obtain a corresponding second depth image; and perform depth fusion on the N second depth images by using a multi-scale depth fusion network to obtain the first depth image.
The embodiment of the application also provides another electronic device, fig. 11 is a schematic diagram of a composition structure of the electronic device in the embodiment of the application, and as shown in fig. 11, the electronic device includes: a processor 1101 and a memory 1102 configured to store a computer program capable of running on the processor;
Wherein the processor 1101 is configured to execute the method steps of the previous embodiments when running a computer program.
Of course, in actual practice, the various components of the electronic device would be coupled together via bus system 1103 as shown in FIG. 11. It is appreciated that the bus system 1103 serves to facilitate connected communications between these components. The bus system 1103 includes a power bus, a control bus, and a status signal bus in addition to the data bus. But for clarity of illustration, the various buses are labeled as bus system 1103 in fig. 11.
In practical applications, the processor may be at least one of an application-specific integrated circuit (ASIC), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a controller, a microcontroller, and a microprocessor. It can be understood that, for different apparatuses, the electronic device implementing the above processor function may also be other devices, which is not specifically limited in the embodiments of the present application.
The memory may be a volatile memory, such as a random access memory (RAM); or a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD) or a solid state drive (SSD); or a combination of the above types of memories, and provides instructions and data to the processor.
In an exemplary embodiment, the present application also provides a computer-readable storage medium for storing a computer program.
Optionally, the computer readable storage medium may be applied to any one of the methods in the embodiments of the present application, and the computer program causes a computer to execute a corresponding flow implemented by a processor in each method in the embodiments of the present application, which is not described herein for brevity.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation, for example: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the components shown or discussed may be coupled, directly coupled, or communicatively connected to each other through some interfaces, and the indirect coupling or communicative connection between devices or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated in one processing module, or each unit may serve as a separate unit, or two or more units may be integrated in one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be completed by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; and the foregoing storage medium includes various media capable of storing program code, such as a removable storage device, a read-only memory (ROM, Read-Only Memory), a random-access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
The features disclosed in the several product embodiments provided in the present application may be combined arbitrarily without conflict to obtain new product embodiments.
The features disclosed in the several method or apparatus embodiments provided in the present application may be arbitrarily combined without conflict to obtain new method embodiments or apparatus embodiments.
The foregoing is merely a specific implementation of the present invention, but the protection scope of the present invention is not limited thereto; any variation or substitution readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (13)

1. A method of blurring an image background, the method comprising:
collecting an image;
performing image segmentation on the image to obtain an image segmentation mask; wherein the image segmentation mask is used to divide the image into a foreground portion and a background portion;
performing depth prediction on the image to obtain a first depth image of the image;
re-segmenting a foreground portion and a background portion of the image based on the image segmentation mask and the first depth image to obtain a non-blurred portion and a blurred portion of the image;
and carrying out background blurring processing on the blurred part of the image to obtain a background blurring image of the image.
2. The method of claim 1, wherein the re-segmenting the foreground portion and the background portion of the image based on the image segmentation mask and the first depth image comprises:
determining depth related information based on the image segmentation mask and the first depth image; wherein the depth-related information includes first depth difference information of the background portion and the foreground portion, and first depth information of the foreground portion;
and re-segmenting the foreground part and the background part of the image based on the depth related information to obtain the non-blurred part and the blurred part of the image.
3. The method of claim 2, wherein the determining depth related information based on the image segmentation mask and the first depth image comprises:
determining first depth information of the foreground portion and second depth information of the background portion based on the image segmentation mask and the first depth image;
determining a depth value of each pixel point of the foreground part based on the first depth information, and determining a maximum depth value and a minimum depth value from the depth values;
determining a depth value of each pixel point of the background part based on the second depth information;
calculating a first absolute value of a difference value between the depth value of each pixel point of the background part and the maximum depth value;
calculating a second absolute value of the difference value between the depth value of each pixel point of the background part and the minimum depth value;
and selecting a minimum absolute value from the first absolute value and the second absolute value corresponding to each pixel point of the background part as a depth difference value of the pixel point corresponding to the background part, and obtaining the first depth difference information.
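For illustration only (not part of the claim), the computation of claim 3 can be sketched in NumPy as follows; the representation of the first depth image as a float array and of the image segmentation mask as a boolean foreground mask is an assumption.

```python
import numpy as np

def first_depth_difference(depth, fg_mask):
    """Per-pixel depth difference of the background part, as described in claim 3.

    `depth` is the first depth image (H x W float array) and `fg_mask` is a
    boolean array that is True on the foreground part of the image.
    """
    fg_depth = depth[fg_mask]                    # first depth information (foreground part)
    d_max, d_min = fg_depth.max(), fg_depth.min()
    bg_depth = depth[~fg_mask]                   # second depth information (background part)
    abs_to_max = np.abs(bg_depth - d_max)        # first absolute value
    abs_to_min = np.abs(bg_depth - d_min)        # second absolute value
    # For each background pixel, keep the smaller of the two absolute values
    # as its depth difference value.
    return np.minimum(abs_to_max, abs_to_min)
```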
4. The method of claim 2, wherein the re-segmenting the foreground portion and the background portion of the image based on the depth related information comprises:
when the depth difference value in the first depth difference information is smaller than or equal to a depth difference threshold value, marking the corresponding pixel point in the background part as a non-blurred part;
when the depth difference value in the first depth difference information is larger than the depth difference threshold value, marking the corresponding pixel point in the background part as a blurred part;
dividing a non-target foreground part from the foreground part based on the depth value of each pixel point in the first depth information;
calculating second depth difference information of the non-target foreground part and the foreground part;
when the depth difference value in the second depth difference information is smaller than or equal to the depth difference threshold value, marking the corresponding pixel point in the non-target foreground part as a non-blurred part;
and when the depth difference value in the second depth difference information is larger than the depth difference threshold value, marking the corresponding pixel point in the non-target foreground part as a blurred part.
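A possible reading of claim 4 is sketched below. The claim does not spell out how the second depth difference information is computed; the sketch assumes a rule analogous to claim 3, measured against the depth range of the target (remaining) foreground, and treats the threshold as a caller-supplied parameter.

```python
import numpy as np

def resegment(depth, fg_mask, non_target, bg_depth_diff, threshold):
    """Return a boolean map where True marks pixel points to be blurred.

    `bg_depth_diff` is the first depth difference information (one value per
    background pixel, e.g. from `first_depth_difference`), and `non_target`
    is a boolean mask of the non-target foreground part (see claim 5).
    """
    blur = np.zeros(fg_mask.shape, dtype=bool)
    # Background part: blur the pixel points whose depth difference exceeds the threshold.
    blur[~fg_mask] = bg_depth_diff > threshold
    # Second depth difference of the non-target foreground part, assumed here to be
    # the distance to the depth range of the target foreground (min of two absolute values).
    target = fg_mask & ~non_target
    d_max, d_min = depth[target].max(), depth[target].min()
    diff2 = np.minimum(np.abs(depth[non_target] - d_max),
                       np.abs(depth[non_target] - d_min))
    blur[non_target] = diff2 > threshold
    return blur
```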
5. The method of claim 4, wherein the segmenting the non-target foreground portion from the foreground portion based on the depth value of each pixel point in the first depth information comprises:
ordering the depth value of each pixel point in the first depth information;
and marking, based on the sorting result, the pixel points within a preset range as the non-target foreground part.
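A minimal sketch of one way to realize claim 5 follows. The claim only requires sorting the foreground depth values and marking a preset range; the choice of the deepest 20% (i.e. a larger depth value meaning a farther pixel) is an assumption for illustration.

```python
import numpy as np

def non_target_foreground(depth, fg_mask, ratio=0.2):
    """Mark the foreground pixel points falling in a preset (deepest) depth range."""
    fg_depth = np.sort(depth[fg_mask])                 # ordering of the foreground depth values
    cutoff = fg_depth[int((1.0 - ratio) * (len(fg_depth) - 1))]
    non_target = np.zeros(fg_mask.shape, dtype=bool)
    non_target[fg_mask] = depth[fg_mask] >= cutoff     # preset range: the deepest `ratio` fraction
    return non_target
```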
6. The method of claim 4, wherein said performing background blurring processing on the blurred part of the image comprises:
determining the convolution kernel radius corresponding to the depth difference value of each pixel point in the blurred part based on the correspondence between depth difference values and convolution kernel radii;
and performing bokeh blurring processing on each pixel point of the blurred part according to the convolution kernel radius to obtain the background blurring image.
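The correspondence between depth difference values and convolution kernel radii is not fixed by the claim; the sketch below assumes a simple linear mapping clipped to a maximum radius.

```python
import numpy as np

def kernel_radius(depth_diff, max_radius=15, max_diff=None):
    """Map each blurred pixel's depth difference value to a convolution kernel radius."""
    if max_diff is None:
        max_diff = max(float(depth_diff.max()), 1e-6)
    # Larger depth differences get larger radii, i.e. a stronger blur.
    radius = np.round(max_radius * np.clip(depth_diff / max_diff, 0.0, 1.0))
    return radius.astype(int)
```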
7. The method of claim 6, wherein said performing the bokeh blurring processing on each pixel point of said blurred part according to said convolution kernel radius comprises:
determining a convolution kernel weight of each pixel point of the blurred part;
taking each pixel point of the blurred part as a central pixel point, and searching all peripheral pixel points of the central pixel point according to the convolution kernel radius;
and determining a weighted sum of the central pixel point and all the peripheral pixel points based on the pixel value and the convolution kernel weight of each of the central pixel point and all the peripheral pixel points, so as to perform the bokeh blurring on the central pixel point.
8. The method of claim 7, wherein the determining the convolution kernel weight of each pixel point of the blurred part comprises:
determining a corresponding convolution kernel weight based on the brightness of each pixel point of the blurred part; wherein the convolution kernel weight is linearly related to the brightness.
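Claims 7 and 8 together describe a weighted-average blur whose per-pixel weights grow linearly with brightness, so that bright highlights dominate their neighbourhood and produce the typical bokeh discs. The sketch below is illustrative only: the luminance proxy (channel mean), the weight formula `1 + brightness/255`, and the square neighbourhood are assumptions.

```python
import numpy as np

def bokeh_blur_pixel(image, radii, y, x):
    """Blur one pixel point of the blurred part by a brightness-weighted average.

    `image` is an H x W x 3 uint8 array and `radii` holds the per-pixel
    convolution kernel radius (e.g. from `kernel_radius`).
    """
    h, w, _ = image.shape
    r = int(radii[y, x])
    y0, y1 = max(0, y - r), min(h, y + r + 1)          # peripheral pixel points within
    x0, x1 = max(0, x - r), min(w, x + r + 1)          # the convolution kernel radius
    patch = image[y0:y1, x0:x1].astype(np.float64)     # central pixel plus neighbours
    brightness = patch.mean(axis=2)                    # simple luminance proxy
    weights = 1.0 + brightness / 255.0                 # linearly related to brightness (claim 8)
    weighted = (patch * weights[..., None]).sum(axis=(0, 1)) / weights.sum()
    return weighted.astype(image.dtype)
```

Applying this per-pixel operation to every pixel point marked as blurred (claim 4), with the radii of claim 6, yields the background blurring image of claim 1.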
9. The method of claim 1, wherein the performing depth prediction on the image to obtain a first depth image of the image comprises:
scaling the image at N different scales to obtain N scaled images; wherein N is a positive integer;
and performing depth prediction on the N scaled images to obtain the first depth image.
10. The method of claim 9, wherein the performing depth prediction on the N scaled images comprises:
performing depth prediction on each of the N scaled images by using a monocular depth prediction network to obtain a corresponding second depth image;
and carrying out depth fusion on the N second depth images by utilizing a multi-scale depth fusion network to obtain the first depth image.
11. An image background blurring apparatus, the apparatus comprising:
the acquisition unit is used for acquiring images;
the segmentation unit is used for carrying out image segmentation on the image to obtain an image segmentation mask; wherein the image segmentation mask is used to divide the image into a foreground portion and a background portion;
the prediction unit is used for carrying out depth prediction on the image to obtain a first depth image of the image;
The segmentation unit is further used for re-segmenting the foreground part and the background part of the image based on the image segmentation mask and the first depth image to obtain a non-blurred part and a blurred part of the image;
and the processing unit is used for carrying out background blurring processing on the blurred part of the image to obtain a background blurring image of the image.
12. An electronic device, the electronic device comprising: a processor and a memory configured to store a computer program capable of running on the processor,
wherein the processor is configured to perform the steps of the method of any of claims 1 to 10 when the computer program is run.
13. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 10.
CN202111532136.3A 2021-12-14 2021-12-14 Image background blurring method, device, equipment and storage medium Pending CN116266337A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111532136.3A CN116266337A (en) 2021-12-14 2021-12-14 Image background blurring method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111532136.3A CN116266337A (en) 2021-12-14 2021-12-14 Image background blurring method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116266337A true CN116266337A (en) 2023-06-20

Family

ID=86742950

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111532136.3A Pending CN116266337A (en) 2021-12-14 2021-12-14 Image background blurring method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116266337A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117541590A (en) * 2024-01-10 2024-02-09 腾讯科技(深圳)有限公司 Image processing method and device, storage medium and electronic equipment
CN117541590B (en) * 2024-01-10 2024-04-09 腾讯科技(深圳)有限公司 Image processing method and device, storage medium and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination