CN110349080B - Image processing method and device

Image processing method and device

Info

Publication number
CN110349080B
Authority
CN
China
Prior art keywords
depth
target
background area
pixel point
image
Prior art date
Legal status
Active
Application number
CN201910498181.8A
Other languages
Chinese (zh)
Other versions
CN110349080A (en)
Inventor
刘江宇
熊鹏飞
黄怡菲
陈书全
Current Assignee
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN201910498181.8A priority Critical patent/CN110349080B/en
Publication of CN110349080A publication Critical patent/CN110349080A/en
Priority to PCT/CN2020/090822 priority patent/WO2020248774A1/en
Application granted granted Critical
Publication of CN110349080B publication Critical patent/CN110349080B/en

Classifications

    • G06T3/04
    • G06T7/11 Region-based segmentation
    • G06T7/194 Segmentation involving foreground-background segmentation
    • G06T7/50 Depth or shape recovery
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20024 Filtering details

Abstract

The invention provides an image processing method and apparatus, wherein the image processing method includes: acquiring an image to be processed; determining a target depth change type of a target background area of the image to be processed, wherein the target depth change type characterizes the variation pattern of the depth values of the pixel points in the target background area; determining, based on the target depth change type, a filtering parameter corresponding to each pixel point in the target background area, wherein pixel points with different depth values correspond to different filtering parameters; and, for each pixel point in the target background area, filtering the pixel point using its corresponding filtering parameter. With the technical solution provided by the embodiments of the invention, pixel points with different depth values in the background area are blurred to different degrees, making the depth-of-field blurring of the image more realistic.

Description

Image processing method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
Simulating the depth-of-field effect of a lens is a topic in computational photography, and depth-of-field blurring, with the aesthetic appeal it carries, is also one of the important functions of photographing applications on smart terminals.
Taking a smartphone as an example of a smart terminal: because of the limited aperture of its lens, a smartphone responds almost uniformly to objects at different depths of field; that is, in an image captured by a smartphone, the foreground and the background are essentially equally sharp, as shown in fig. 1.
Currently, the depth of field of an image captured by a smartphone is typically blurred as follows: after the smartphone captures the image, a filtering operation with a single fixed filter radius is applied to the background of the image. All areas of the background are therefore blurred to the same degree, so the resulting depth-of-field blur looks unrealistic; for example, as shown in fig. 2, the foreground (a portrait) appears to float above the background.
Disclosure of Invention
In order to solve the technical problems, the invention discloses an image processing method and an image processing device.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring an image to be processed;
determining a target depth change type of a target background area of the image to be processed, wherein the target depth change type characterizes: the variation pattern of the depth values of the pixel points in the target background area;
determining a filtering parameter corresponding to each pixel point in the target background area based on the target depth change type, wherein pixel points with different depth values correspond to different filtering parameters;
and for each pixel point in the target background area, filtering the pixel point by utilizing the filtering parameter corresponding to the pixel point.
Optionally, the filtering parameter is a filtering radius;
the size of the filter radius corresponding to any pixel point in the target background area is in direct proportion to the depth value of the pixel point.
Optionally, the determining, based on the target depth change type, a filtering parameter corresponding to each pixel point in the target background area includes:
determining a target depth template corresponding to the target depth change type by using a pre-generated correspondence between depth change types and depth templates, wherein any depth template is a gray level image generated based on a sample image of one depth change type, the gray value of any pixel point in the depth template is in direct proportion to the depth value of a target pixel point in the sample image, and the position of the target pixel point in the sample image is consistent with the position of that pixel point in the gray level image;
and determining a filter radius corresponding to each pixel point in the target background area based on the gray value corresponding to the pixel point in the target depth template, wherein the size of the filter radius corresponding to one pixel point is in direct proportion to the gray value corresponding to the pixel point in the target depth template.
Optionally, the depth template is generated from a sample image as follows:
determining a depth change type of a background area of the sample image;
determining the depth value size relation of four pixel points positioned at four vertexes of the background area based on the determined depth change type;
respectively determining gray values corresponding to pixel points positioned at four vertexes of the depth template based on the determined depth value size relation;
and obtaining gray values of other pixel points in the depth template by utilizing an interpolation method based on the determined four vertex gray values.
Optionally, the determining, based on the determined depth value size relationship, gray values corresponding to pixels located at four vertices of the depth template respectively includes:
determining a preset maximum gray value as a first gray value corresponding to a first pixel point corresponding to the maximum depth value in the pixel points of four vertexes of the depth template;
and determining a preset minimum gray value as a second gray value corresponding to a second pixel point corresponding to the minimum depth value among the pixel points of the four vertexes of the depth template.
Optionally, for each pixel point in the target background area, determining the filter radius corresponding to the pixel point based on the gray value corresponding to the pixel point in the target depth template includes:
acquiring a preset maximum filter radius and a preset minimum filter radius;
determining the maximum filter radius as the filter radius of the pixel point corresponding to the maximum gray value in the target background area, and determining the minimum filter radius as the filter radius of the pixel point corresponding to the minimum gray value in the target background area;
and determining the filter radius of each other pixel point in the target background area according to the gray values corresponding to each other pixel point in the target background area.
Optionally, the determining the target depth change type of the target background area of the image to be processed includes:
inputting the image to be processed into a pre-trained convolutional neural network to obtain a target depth change type of a target background area of the image to be processed, wherein the convolutional neural network is obtained by training based on a plurality of sample images and the depth change type of the background area of the sample images;
The depth change type of the background area of the plurality of sample images includes: no depth change, longitudinal depth change, lateral depth change, and longitudinal and lateral mixed depth change.
Optionally, the target depth change type is no depth change;
the determining, based on the target depth change type, a filtering parameter corresponding to each pixel point in the target background area includes:
acquiring a preset maximum filter radius and a preset minimum filter radius;
calculating the area proportion of the foreground area of the image to be processed to the whole image area to be processed;
determining a target filter radius according to the area proportion, wherein the target filter radius is in direct proportion to the area proportion, and the target filter radius is not smaller than the minimum filter radius and not larger than the maximum filter radius;
and determining the target filter radius as the filter radius of each pixel point in the target background area.
Optionally, the method further comprises:
and performing foreground detection on the image to be processed to obtain the target foreground region of the image to be processed.
In a second aspect, the present invention provides an image processing apparatus, comprising:
the image acquisition module is used for acquiring an image to be processed;
the type determining module is used for determining a target depth change type of a target background area of the image to be processed, wherein the target depth change type characterizes: the variation pattern of the depth values of the pixel points in the target background area;
the filtering parameter determining module is used for determining the filtering parameters corresponding to all pixel points in the target background area based on the target depth change type, wherein pixel points with different depth values correspond to different filtering parameters;
and the filtering module is used for filtering each pixel point in the target background area by utilizing the filtering parameter corresponding to the pixel point.
Optionally, the filtering parameter is a filtering radius;
the size of the filter radius corresponding to any pixel point in the target background area is in direct proportion to the depth value of the pixel point.
Optionally, the filtering parameter determining module includes:
the depth template generation unit is used for determining a target depth template corresponding to the target depth change type by utilizing the corresponding relation between the depth change type and the depth template, wherein any depth template is a gray level image generated based on a sample image of one depth change type, the gray level value of any pixel point in the depth template is in direct proportion to the depth value of a target pixel point in the sample image, and the position of the target pixel point in the sample image is consistent with the position of the pixel point in the gray level image;
and the filter radius determining unit is used for determining the filter radius corresponding to each pixel point in the target background area based on the gray value corresponding to the pixel point in the target depth template, wherein the size of the filter radius corresponding to one pixel point is in direct proportion to the size of the gray value corresponding to the pixel point in the target depth template.
Optionally, the depth template generating unit generates the depth template from a sample image as follows:
determining a depth change type of a background area of the sample image;
determining the depth value size relation of four pixel points positioned at four vertexes of the background area based on the determined depth change type;
respectively determining gray values corresponding to pixel points positioned at four vertexes of the depth template based on the determined depth value size relation;
and obtaining gray values of other pixel points in the depth template by utilizing an interpolation method based on the determined four vertex gray values.
Optionally, the depth template generating unit is specifically configured to:
determining a preset maximum gray value as a first gray value corresponding to a first pixel point corresponding to the maximum depth value in the pixel points of four vertexes of the depth template;
and determining a preset minimum gray value as a second gray value corresponding to a second pixel point corresponding to the minimum depth value among the pixel points of the four vertexes of the depth template.
Optionally, the filter radius determining unit is specifically configured to:
acquiring a preset maximum filter radius and a preset minimum filter radius;
determining the maximum filter radius as the filter radius of the pixel point corresponding to the maximum gray value in the target background area, and determining the minimum filter radius as the filter radius of the pixel point corresponding to the minimum gray value in the target background area;
and determining the filter radius of each other pixel point in the target background area according to the gray values corresponding to each other pixel point in the target background area.
Optionally, the type determining module is specifically configured to:
inputting the image to be processed into a pre-trained convolutional neural network to obtain a target depth change type of a target background area of the image to be processed, wherein the convolutional neural network is obtained by training based on a plurality of sample images and the depth change type of the background area of the sample images;
the depth change type of the background area of the plurality of sample images includes: no depth change, longitudinal depth change, lateral depth change, and longitudinal and lateral mixed depth change.
Optionally, the target depth change type is no depth change;
the filtering parameter determining module is specifically configured to:
acquiring a preset maximum filter radius and a preset minimum filter radius;
calculating the area proportion of the foreground area of the image to be processed to the whole image area to be processed;
determining a target filter radius according to the area proportion, wherein the target filter radius is in direct proportion to the area proportion, and the target filter radius is not smaller than the minimum filter radius and not larger than the maximum filter radius;
and determining the target filter radius as the filter radius of each pixel point in the target background area.
Optionally, the apparatus further includes:
and the foreground detection module is used for performing foreground detection on the image to be processed to obtain the target foreground region of the image to be processed.
In a third aspect, the present invention provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the image processing method of the first aspect when executing the program.
In a fourth aspect, the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image processing method according to the first aspect.
According to the technical solution provided by the embodiments of the present invention, after the image to be processed is acquired, a target depth change type of a target background area of the image to be processed is determined, wherein the target depth change type characterizes the variation pattern of the depth values of the pixel points in the target background area; the filtering parameter corresponding to each pixel point in the target background area is determined based on the target depth change type, wherein pixel points with different depth values correspond to different filtering parameters; and each pixel point in the target background area is filtered using the filtering parameter corresponding to that pixel point. In this way, pixel points with different depth values in the background area are blurred to different degrees, making the depth-of-field blurring of the image more realistic.
Drawings
Fig. 1 is an image photographed by a conventional smart phone;
FIG. 2 is a prior art depth-of-field blurred image;
FIG. 3 is a flow chart of steps of an image processing method according to an embodiment of the present invention;
FIG. 4 is an image whose background region has the depth change type of no depth change;
FIG. 5 is an image whose background region has the depth change type of longitudinal depth change;
FIG. 6 is an image whose background region has the depth change type of lateral depth change;
FIG. 7 is an image whose background region has the depth change type of longitudinal and lateral mixed depth change;
FIG. 8 is a flowchart of steps of another image processing method according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a corresponding depth template when the depth change type is no depth change according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a corresponding depth template when the depth change type is a longitudinal depth change in accordance with an embodiment of the present invention;
FIG. 11 is a schematic diagram of a corresponding depth template when the depth variation type is a lateral depth variation according to an embodiment of the present invention;
FIG. 12 is a schematic illustration of a corresponding depth template when the depth change type is a hybrid longitudinal and lateral depth change in accordance with an embodiment of the present invention;
fig. 13 is a block diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
In order to solve the technical problems described in the background art, the embodiment of the invention provides an image processing method and an image processing device.
It should be noted that the execution subject of the image processing method provided by the invention may be a smart terminal with a photographing function, such as a smartphone or a tablet; the embodiments of the present invention do not specifically limit the smart terminal.
Referring to fig. 3, a flowchart illustrating steps of an image processing method of the present invention may specifically include the steps of:
s310, acquiring an image to be processed.
The image to be processed may be any image captured by the smart terminal itself, or any image the smart terminal acquires from another smart terminal; both cases are reasonable.
It will be appreciated that the image to be processed generally includes a foreground region and a background region. The foreground region may be a portrait, an image of an object, or the like; the background region is the area of the image to be processed other than the foreground region.
S320, determining a target depth change type of a target background area of the image to be processed, wherein the target depth change type characterizes: the variation pattern of the depth values of the pixel points in the target background area.
After the image to be processed is acquired, in order to blur pixel points of different depths in the background area to different degrees in subsequent steps, and thereby make the depth-of-field blurring of the image more realistic, the depth change type of the background area of the image to be processed may be determined. For clarity of description, the background area of the image to be processed is called the target background area, and the depth change type of that background area is called the target depth change type.
The target depth change type characterizes the variation pattern of the depth values of the pixel points in the target background area. In practical application scenarios, target depth change types can generally be divided into four kinds. The first kind is no depth change, i.e., all pixel points in the background area have the same depth value; fig. 4 shows an image whose background area has this type. The second kind is longitudinal depth change: in the background area, the depth values of the pixel points increase from bottom to top, as in the image shown in fig. 5, or decrease from bottom to top. The third kind is lateral depth change: in the background area, the depth values of the pixel points increase from left to right, as in the image shown in fig. 6, or decrease from left to right. The fourth kind is longitudinal and lateral mixed depth change: the depth values of the pixel points change both from top to bottom and from left to right, as in the image shown in fig. 7.
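As a purely illustrative aid (the enumeration and its names are ours, not the patent's), the four depth change types can be written down as a simple data type:

```python
from enum import Enum

class DepthChangeType(Enum):
    """The four depth change types of a background area described above."""
    NO_CHANGE = 0     # all background pixel points share one depth value (fig. 4)
    LONGITUDINAL = 1  # depth values grow or shrink from bottom to top (fig. 5)
    LATERAL = 2       # depth values grow or shrink from left to right (fig. 6)
    MIXED = 3         # depth values vary both vertically and horizontally (fig. 7)
```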
In one embodiment, determining the target depth variation type of the target background area of the image to be processed may include:
inputting the image to be processed into a convolutional neural network trained in advance to obtain the target depth change type of the target background area of the image to be processed.
The convolutional neural network is trained based on a plurality of sample images and depth change types of background areas of the plurality of sample images.
In this embodiment, when training the convolutional neural network, a plurality of sample images may be acquired; the depth change types of the background areas of these sample images may be roughly classified into four kinds: no depth change, longitudinal depth change, lateral depth change, and longitudinal and lateral mixed depth change.
For each of the plurality of sample images, the depth change type of its background area can be calibrated. The calibrated sample images are input into the convolutional neural network; for each sample image, it is judged whether the output depth change type is consistent with the calibrated depth change type, and if not, the model parameters are adjusted. This continues until, for every sample image input into the convolutional neural network, the output depth change type is consistent with the calibrated one; the convolutional neural network with the adjusted model parameters is then taken as the trained convolutional neural network.
After the convolutional neural network is trained, the image to be processed is input into the trained convolutional neural network to obtain the depth change type of the background area of the image to be processed.
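For illustration only, a minimal sketch of such a four-class classifier is given below in PyTorch. The patent does not specify the network architecture, loss function, or optimizer, so every concrete choice here (layer sizes, the cross-entropy loss, the function names) is an assumption:

```python
import torch
import torch.nn as nn

class DepthChangeClassifier(nn.Module):
    """Assumed architecture; the patent only says 'convolutional neural network'."""
    def __init__(self, num_types: int = 4):  # the four depth change types
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_types)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

def train_step(model, optimizer, images, labels) -> float:
    """One supervised step; labels are the calibrated depth change types."""
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time, `model(image_batch).argmax(dim=1)` would then yield the predicted depth change type of each image to be processed.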
In order to ensure that the depth-of-field blurred image remains focused on the foreground region, in one embodiment the image processing method may further include: performing foreground detection on the image to be processed to obtain the target foreground region of the image to be processed. The target foreground region does not participate in the filtering operation in subsequent steps, i.e., no blurring is applied to the target foreground region.
S330, determining the corresponding filtering parameters of each pixel point in the target background area based on the target depth change type, wherein the corresponding filtering parameters of the pixel points with different depth values are different.
It will be appreciated that in a truly depth-of-field blurred image, background areas at different depths (depths of field) typically show different degrees of diffusion, i.e., background areas at different depths should be blurred to different degrees. Therefore, to make the depth-of-field blurred image more realistic, pixel points with different depth values correspond to different filtering parameters. The filtering parameter may be a filter radius or the like; the embodiments of the present invention do not specifically limit the filtering parameter.
In one embodiment, the filtering parameter is a filtering radius, and at this time, the size of the filtering radius corresponding to any pixel point in the target background area is proportional to the depth value of the pixel point.
To make the depth-of-field blurred image more realistic, a background area with a smaller depth (shallower depth of field) should use a smaller filter radius, giving a lower degree of blurring, while a background area with a larger depth (deeper depth of field) should use a larger filter radius, giving a higher degree of blurring, so as to form a depth-of-field blurring effect with a depth gradient.
Therefore, after the target depth change type of the image to be processed is determined, in order to obtain a more realistic depth-of-field blurred image, within the target background area a pixel point with a larger depth value is given a larger filter radius and a pixel point with a smaller depth value a smaller filter radius, so that in subsequent steps pixel points with larger depth values are blurred more and pixel points with smaller depth values are blurred less.
It should be noted that, in an embodiment, the target depth change type of the image to be processed may be no depth change. In this case, determining the filtering parameters corresponding to each pixel point in the target background area based on the target depth change type may include the following steps:
acquiring a preset maximum filter radius and a preset minimum filter radius;
calculating the area proportion of a foreground area of the image to be processed to the whole image area to be processed;
determining a target filter radius according to the area proportion, wherein the target filter radius is in direct proportion to the area proportion, and is not smaller than the minimum filter radius and not larger than the maximum filter radius;
and determining the target filter radius as the filter radius of each pixel point in the target background area.
In this embodiment, since the target depth change type of the image to be processed is no depth change, the filter radius should be the same for every pixel point of the target background area; for clarity of description, this shared filter radius is referred to as the target filter radius.
In this case, the target filter radius may be determined as follows: acquire the maximum filter radius and the minimum filter radius, and calculate the proportion of the area of the foreground region to that of the whole image to be processed. If this proportion is larger, a larger filter radius may be determined as the target filter radius, so that the focus falls on the foreground region; if this proportion is smaller, a smaller filter radius may be determined as the target filter radius.
The maximum filter radius and the minimum filter radius may either be set by the system or be selected by the user according to actual needs; both are reasonable.
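A minimal sketch of this radius selection, assuming a simple linear mapping from the area proportion to the radius (the patent states only that the target filter radius is proportional to the proportion and lies between the minimum and maximum radii; the function name and the linear form are our assumptions):

```python
def target_filter_radius(foreground_area: float, image_area: float,
                         r_min: float, r_max: float) -> float:
    """One filter radius for the whole background when the depth change
    type is 'no depth change'. The linear form is an assumption."""
    ratio = foreground_area / image_area   # proportion of the foreground, in [0, 1]
    r = r_min + ratio * (r_max - r_min)    # proportional to the area proportion
    return max(r_min, min(r, r_max))       # never below r_min or above r_max
```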
S340, for each pixel point in the target background area, filtering the pixel point by using the filtering parameter corresponding to the pixel point.
After determining the filter radius corresponding to each pixel point in the target background area, a filtering operation may be performed on each pixel point in the target background area.
In one embodiment, when the filtering parameter is a filter radius, the process of filtering a pixel point with its corresponding filter radius may be: determine a circle with the pixel point as the center and the pixel point's filter radius as the radius; take a weighted average of the pixel values of all pixel points inside the circle; and use the resulting value as the new pixel value of the pixel point.
It should be noted that no filtering operation is performed on the foreground region of the image to be processed. Furthermore, when filtering any pixel point in the background area of the image to be processed, if the circle determined with that pixel point as the center and its corresponding filter radius as the radius covers part of the foreground region, the pixel values of the pixel points in the foreground region are excluded from the weighted average.
Of course, the process of filtering the pixel point by using the filtering radius may be any process understood by those skilled in the art, which is not particularly limited in the embodiment of the present invention.
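The following sketch illustrates the per-pixel circular filtering described above, using NumPy. It is deliberately unoptimized, and it substitutes a uniform (equal-weight) average for the weighted average the text mentions, since the patent does not specify the weights; both the function name and that simplification are our assumptions:

```python
import numpy as np

def blur_background(image: np.ndarray, radius_map: np.ndarray,
                    foreground_mask: np.ndarray) -> np.ndarray:
    """image: H x W x 3; radius_map: per-pixel filter radius;
    foreground_mask: True where a pixel belongs to the foreground."""
    h, w = image.shape[:2]
    out = image.astype(np.float64).copy()
    for y in range(h):
        for x in range(w):
            if foreground_mask[y, x]:
                continue                     # the foreground region is not filtered
            r = int(radius_map[y, x])
            if r <= 0:
                continue
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # keep only pixels inside the circle and outside the foreground
            inside = ((yy - y) ** 2 + (xx - x) ** 2 <= r * r) \
                     & ~foreground_mask[y0:y1, x0:x1]
            if inside.any():
                out[y, x] = image[y0:y1, x0:x1][inside].mean(axis=0)
    return out.astype(image.dtype)
```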
According to the technical solution provided by the embodiments of the present invention, after the image to be processed is acquired, a target depth change type of a target background area of the image to be processed is determined, wherein the target depth change type characterizes the variation pattern of the depth values of the pixel points in the target background area; the filtering parameter corresponding to each pixel point in the target background area is determined based on the target depth change type, wherein pixel points with different depth values correspond to different filtering parameters; and each pixel point in the target background area is filtered using the filtering parameter corresponding to that pixel point. In this way, pixel points with different depth values in the background area are blurred to different degrees, making the depth-of-field blurring of the image more realistic.
Referring to fig. 8, a flowchart illustrating steps of an image processing method of the present invention may specifically include the steps of:
s810, acquiring an image to be processed.
Step S810 corresponds to step S310, and step S310 has already been described in detail in fig. 3, and the detailed description of step S810 is omitted here.
S820, determining a target depth change type of a target background area of the image to be processed, wherein the target depth change type characterizes: the variation pattern of the depth values of the pixel points in the target background area.
Step S820 corresponds to step S320, and step S320 has already been described in detail in fig. 3, and the detailed description of step S820 is omitted here.
S830, determining a target depth template corresponding to the target depth change type by using the pre-generated correspondence between depth change types and depth templates.
Any depth template is a gray level image generated based on a sample image of one depth change type; the gray value of any pixel point in the depth template is in direct proportion to the depth value of a target pixel point in the sample image, where the position of the target pixel point in the sample image is consistent with the position of that pixel point in the gray level image.
In this step, after the target depth change type of the image to be processed is determined, the target depth template corresponding to the target depth change type can be looked up in the pre-generated correspondence between depth change types and depth templates. It will be appreciated that since there are roughly four target depth change types, there are typically also four kinds of target depth template.
Specifically, when the depth change type is no depth change, i.e., the depth values of all pixel points in the target background area are the same, the gray values of all pixel points in the corresponding depth template are also the same. For example, as shown in fig. 9, the depth value of each pixel point in the background area is 0, and the gray value of each pixel point in the depth template is 0.
When the depth change type is longitudinal depth change, the depth values of the pixel points in the background area either increase or decrease from top to bottom. When the depth values increase from top to bottom, the gray values in the corresponding depth template also increase from top to bottom, as shown in fig. 10, where 0 indicates the minimum depth value and 1 the maximum. When the depth values decrease from top to bottom, the gray values in the corresponding depth template likewise decrease from top to bottom.
When the depth change type is lateral depth change, the depth values of the pixel points in the background area either increase or decrease from left to right. When the depth values increase from left to right, the gray values in the corresponding depth template also increase from left to right, as shown in fig. 11, where 0 indicates the minimum depth value and 1 the maximum. When the depth values decrease from left to right, the gray values in the corresponding depth template likewise decrease from left to right.
When the depth change type is longitudinal and lateral mixed depth change, the depth values of the pixel points in the background area change both from top to bottom and from left to right; accordingly, in the corresponding depth template, the gray values of the pixel points change both from top to bottom and from left to right. As shown in fig. 12, 0 indicates the minimum depth value and 1 the maximum.
In one embodiment, the depth template is generated from the sample image as follows:
determining a depth change type of a background area of the sample image;
determining the depth value size relation of four pixel points positioned at four vertexes of a background area based on the determined depth change type;
respectively determining gray values corresponding to pixel points positioned at four vertexes of the depth template based on the determined depth value size relation;
and obtaining the gray values of the other pixel points in the depth template by interpolation based on the determined gray values of the four vertices.
As can be seen from the above description, the depth change types of the background area of a sample image fall roughly into four kinds, namely no depth change, longitudinal depth change, lateral depth change, and longitudinal and lateral mixed depth change. For these four depth change types, the depth value corresponding to each of the four pixel points located at the four vertices of the background area is typically 0 or 1, with 0 representing the minimum depth value and 1 the maximum. Therefore, when the sample image is used to generate the depth template, the magnitude relation of the depth values of the four pixel points located at the four vertices of the background area may be determined first, and the gray values corresponding to the pixel points located at the four vertices of the depth template determined based on that relation. Specifically, a preset maximum gray value, for example 255, is determined as the first gray value, corresponding to the first pixel point with the maximum depth value among the pixel points at the four vertices of the depth template; and a preset minimum gray value, for example 0, is determined as the second gray value, corresponding to the second pixel point with the minimum depth value. Of course, the maximum gray value and the minimum gray value may be set according to the actual situation; the embodiments of the present invention do not specifically limit them.
After the gray values corresponding to the pixel points at the four vertices of the depth template are determined, the gray values of the other pixel points of the depth template can be obtained by interpolation. The interpolation may follow the principle that the gradient of the depth template is kept as smooth as possible.
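One concrete way to realize this, assuming bilinear interpolation from the four vertex gray values (bilinear is our choice; the patent only asks that the gradient be as smooth as possible, and the function name is hypothetical):

```python
import numpy as np

def make_depth_template(tl: int, tr: int, bl: int, br: int,
                        height: int, width: int) -> np.ndarray:
    """Depth template from the gray values at the four vertices
    (tl/tr/bl/br = top-left, top-right, bottom-left, bottom-right, 0..255)."""
    u = np.linspace(0.0, 1.0, width)[None, :]   # horizontal coordinate, 0 -> 1
    v = np.linspace(0.0, 1.0, height)[:, None]  # vertical coordinate, 0 -> 1
    top = (1 - u) * tl + u * tr                 # interpolate along the top edge
    bottom = (1 - u) * bl + u * br              # interpolate along the bottom edge
    return ((1 - v) * top + v * bottom).astype(np.uint8)

# Example: longitudinal depth change with the larger depth at the top (cf. fig. 10):
template = make_depth_template(tl=255, tr=255, bl=0, br=0, height=480, width=640)
```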
S840, for each pixel point in the target background area, determining a filter radius corresponding to the pixel point based on the gray value corresponding to the pixel point in the target depth template.
The size of the filtering radius corresponding to one pixel point is in direct proportion to the corresponding gray value of the pixel point in the target depth template.
In one embodiment, for each pixel point in the target background area, determining the filter radius corresponding to the pixel point based on the gray value corresponding to the pixel point in the target depth template may include:
acquiring a preset maximum filter radius and a preset minimum filter radius;
determining the maximum filter radius as the filter radius of the pixel point corresponding to the maximum gray value in the target background area, and determining the minimum filter radius as the filter radius of the pixel point corresponding to the minimum gray value in the target background area;
And determining the filter radius of each other pixel point in the target background area according to the gray values corresponding to each other pixel point in the target background area.
Since the depth value corresponding to each of the four pixel points at the four vertices of the background area is usually the maximum or the minimum depth value, the gray value corresponding to each of those four pixel points is usually the maximum or the minimum gray value; the maximum gray value corresponds to the maximum filter radius and the minimum gray value to the minimum filter radius. Therefore, when determining the filter radius of each pixel point in the target background area, the filter radii of the four pixel points located at the four vertices of the target background area may be determined first. Then, the filter radii of the other pixel points in the target background area are determined according to their corresponding gray values.
It should be noted that in practical applications the size of the image to be processed may differ from the size of the depth template. In that case, the filter radii corresponding to the pixel points located at the four vertices of the target background area may be determined first, and the filter radii of the other pixel points then interpolated according to the depth change type of the target background area.
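A sketch of this gray-value-to-radius mapping, assuming a linear interpolation of the filter radius between the preset minimum and maximum as the gray value runs from its minimum to its maximum (the linear form is our reading of the stated proportionality, and the function name is hypothetical):

```python
import numpy as np

def radii_from_grays(gray: np.ndarray, r_min: float, r_max: float) -> np.ndarray:
    """Per-pixel filter radii for the background from template gray values."""
    g = gray.astype(np.float64)
    g_min, g_max = g.min(), g.max()
    if g_max == g_min:
        # degenerate case; the patent handles 'no depth change' separately
        # via the foreground area proportion (see step S330 above)
        return np.full(g.shape, r_min)
    scale = (g - g_min) / (g_max - g_min)   # 0 at the min gray, 1 at the max
    return r_min + scale * (r_max - r_min)  # min radius -> max radius
```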
S850, for each pixel point in the target background area, filtering the pixel point by using the corresponding filtering radius of the pixel point.
Step S850 corresponds to step S340, and step S340 has already been described in detail in fig. 3, and the detailed description of step S850 is omitted here.
According to the technical solution provided by the embodiments of the present invention, after the image to be processed is acquired, a target depth change type of a target background area of the image to be processed is determined, wherein the target depth change type characterizes the variation pattern of the depth values of the pixel points in the target background area; the filtering parameter corresponding to each pixel point in the target background area is determined based on the target depth change type, wherein pixel points with different depth values correspond to different filtering parameters; and each pixel point in the target background area is filtered using the filtering parameter corresponding to that pixel point. In this way, pixel points with different depth values in the background area are blurred to different degrees, making the depth-of-field blurring of the image more realistic.
It should be noted that, for simplicity of explanation, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are all alternative embodiments and that the actions involved are not necessarily required for the present invention.
Referring to fig. 13, there is shown a block diagram of an image processing apparatus of the present invention, which may include:
an image acquisition module 1310, configured to acquire an image to be processed;
a type determining module 1320, configured to determine a target depth change type of a target background area of the image to be processed, wherein the target depth change type characterizes: the variation pattern of the depth values of the pixel points in the target background area;
a filtering parameter determining module 1330, configured to determine, based on the target depth change type, a filtering parameter corresponding to each pixel point in the target background area, wherein pixel points with different depth values correspond to different filtering parameters;
the filtering module 1340 is configured to, for each pixel point in the target background area, filter the pixel point by using a filtering parameter corresponding to the pixel point.
According to the technical solution provided by the embodiments of the present invention, after the image to be processed is acquired, a target depth change type of a target background area of the image to be processed is determined, wherein the target depth change type characterizes the variation pattern of the depth values of the pixel points in the target background area; the filtering parameter corresponding to each pixel point in the target background area is determined based on the target depth change type, wherein pixel points with different depth values correspond to different filtering parameters; and each pixel point in the target background area is filtered using the filtering parameter corresponding to that pixel point. In this way, pixel points with different depth values in the background area are blurred to different degrees, making the depth-of-field blurring of the image more realistic.
Optionally, the filtering parameter is a filtering radius;
the size of the filter radius corresponding to any pixel point in the target background area is in direct proportion to the depth value of the pixel point.
Optionally, the filtering parameter determining module includes:
the depth template generation unit is used for determining a target depth template corresponding to the target depth change type by utilizing the corresponding relation between the depth change type and the depth template, wherein any depth template is a gray level image generated based on a sample image of one depth change type, the gray level value of any pixel point in the depth template is in direct proportion to the depth value of a target pixel point in the sample image, and the position of the target pixel point in the sample image is consistent with the position of the pixel point in the gray level image;
and the filter radius determining unit is used for determining the filter radius corresponding to each pixel point in the target background area based on the gray value corresponding to the pixel point in the target depth template, wherein the size of the filter radius corresponding to one pixel point is in direct proportion to the size of the gray value corresponding to the pixel point in the target depth template.
Optionally, the depth template generating unit generates the depth template from a sample image as follows:
determining a depth change type of a background area of the sample image;
determining the depth value size relation of four pixel points positioned at four vertexes of the background area based on the determined depth change type;
respectively determining gray values corresponding to pixel points positioned at four vertexes of the depth template based on the determined depth value size relation;
and obtaining gray values of other pixel points in the depth template by utilizing an interpolation method based on the determined four vertex gray values.
Optionally, the depth template generating unit is specifically configured to:
determining a preset maximum gray value as a first gray value corresponding to a first pixel point corresponding to the maximum depth value in the pixel points of four vertexes of the depth template;
and determining a preset minimum gray value as a second gray value corresponding to a second pixel point corresponding to the minimum depth value among the pixel points of the four vertexes of the depth template.
Optionally, the filter radius determining unit is specifically configured to:
acquiring a preset maximum filter radius and a preset minimum filter radius;
determining the maximum filter radius as the filter radius of the pixel point corresponding to the maximum gray value in the target background area, and determining the minimum filter radius as the filter radius of the pixel point corresponding to the minimum gray value in the target background area;
and determining the filter radius of each other pixel point in the target background area according to the gray values corresponding to each other pixel point in the target background area.
Optionally, the type determining module is specifically configured to:
inputting the image to be processed into a pre-trained convolutional neural network to obtain a target depth change type of a target background area of the image to be processed, wherein the convolutional neural network is obtained by training based on a plurality of sample images and the depth change type of the background area of the sample images;
the depth change type of the background area of the plurality of sample images includes: no depth change, longitudinal depth change, lateral depth change, and longitudinal and lateral mixed depth change.
Optionally, the target depth change type is no depth change;
the filtering parameter determining module is specifically configured to:
acquiring a preset maximum filter radius and a preset minimum filter radius;
Calculating the area proportion of the foreground area of the image to be processed to the whole image area to be processed;
determining a target filter radius according to the area proportion, wherein the target filter radius is in direct proportion to the area proportion, and the target filter radius is not smaller than the minimum filter radius and not larger than the maximum filter radius;
and determining the target filter radius as the filter radius of each pixel point in the target background area.
Optionally, the apparatus further includes:
and the foreground detection module is used for performing foreground detection on the image to be processed to obtain the target foreground region of the image to be processed.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
An embodiment of the present invention also provides an electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the image processing method provided by the embodiments of the present invention when executing the program.
According to the technical solution provided by the embodiments of the present invention, after the image to be processed is acquired, a target depth change type of a target background area of the image to be processed is determined, wherein the target depth change type characterizes the variation pattern of the depth values of the pixel points in the target background area; the filtering parameter corresponding to each pixel point in the target background area is determined based on the target depth change type, wherein pixel points with different depth values correspond to different filtering parameters; and each pixel point in the target background area is filtered using the filtering parameter corresponding to that pixel point. In this way, pixel points with different depth values in the background area are blurred to different degrees, making the depth-of-field blurring of the image more realistic.
An embodiment of the present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the image processing method provided by the embodiments of the present invention.
According to the technical solution provided by the embodiments of the present invention, after the image to be processed is acquired, a target depth change type of a target background area of the image to be processed is determined, wherein the target depth change type characterizes the variation pattern of the depth values of the pixel points in the target background area; the filtering parameter corresponding to each pixel point in the target background area is determined based on the target depth change type, wherein pixel points with different depth values correspond to different filtering parameters; and each pixel point in the target background area is filtered using the filtering parameter corresponding to that pixel point. In this way, pixel points with different depth values in the background area are blurred to different degrees, making the depth-of-field blurring of the image more realistic.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described by differences from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
Finally, it should also be noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or terminal device that comprises the element.
The image processing method and the image processing apparatus provided by the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and embodiments of the present invention, and the above description of the embodiments is intended only to help in understanding the method and its core ideas. Meanwhile, since those skilled in the art may, following the ideas of the present invention, make changes to the specific embodiments and the scope of application, the contents of this description should not be construed as limiting the present invention.

Claims (11)

1. An image processing method, the method comprising:
acquiring an image to be processed;
determining a target depth change type of a target background area of the image to be processed, wherein the target depth change type is used for representing: a size change rule of depth values of pixel points in the target background area;
determining a filtering parameter corresponding to each pixel point in the target background area based on the target depth change type, wherein the corresponding filtering parameters are different for the pixel points with different depth values;
for each pixel point in the target background area, filtering the pixel point by utilizing a filtering parameter corresponding to the pixel point;
wherein determining the target depth change type of the target background area of the image to be processed comprises:
inputting the image to be processed into a pre-trained convolutional neural network to obtain the target depth change type of the target background area of the image to be processed, wherein the convolutional neural network is obtained by training based on a plurality of sample images and the depth change types of the background areas of the sample images; the depth change types of the background areas of the plurality of sample images include: no depth change, longitudinal depth change, lateral depth change, and mixed longitudinal and lateral depth change.
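Claim 1 leaves the network architecture unspecified; the toy PyTorch classifier below is only one plausible shape for a four-way classifier over the listed depth change types, and is not the network the applicant trained.

import torch
import torch.nn as nn

class DepthChangeClassifier(nn.Module):
    """Illustrative four-way classifier over the depth change types of
    claim 1: no change, longitudinal, lateral, and mixed (assumed)."""
    def __init__(self, num_types: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_types)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x).flatten(1)  # (N, 64) pooled features
        return self.head(f)              # (N, 4) class logits

# Usage: the argmax over the logits selects the target depth change type.
logits = DepthChangeClassifier()(torch.randn(1, 3, 224, 224))
target_type_index = logits.argmax(dim=1)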
2. The method of claim 1, wherein the filter parameter is a filter radius;
the size of the filter radius corresponding to any pixel point in the target background area is in direct proportion to the depth value of the pixel point.
3. The method according to claim 2, wherein determining the filter parameters corresponding to the pixel points in the target background area based on the target depth change type comprises:
determining a target depth template corresponding to the target depth change type by utilizing a pre-generated correspondence between depth change types and depth templates, wherein each depth template is a grayscale image generated based on a sample image of one depth change type, the gray value of any pixel point in the depth template is in direct proportion to the depth value of a target pixel point in the sample image, and the position of the target pixel point in the sample image is consistent with the position of that pixel point in the grayscale image;
and for each pixel point in the target background area, determining the filter radius corresponding to the pixel point based on the gray value corresponding to the pixel point in the target depth template, wherein the filter radius corresponding to a pixel point is in direct proportion to the gray value corresponding to that pixel point in the target depth template.
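As a sketch of the lookup in this claim, depth change types can be keyed to pre-generated grayscale depth templates; the dictionary below and its toy gradient templates are assumptions (the template construction itself is specified in claim 4).

import numpy as np

# Assumed shapes and values: a pre-generated correspondence between
# depth change types and grayscale depth templates (toy examples).
H, W = 480, 640
DEPTH_TEMPLATES = {
    "none":         np.full((H, W), 128, dtype=np.uint8),
    "longitudinal": np.tile(np.linspace(0, 255, H, dtype=np.uint8)[:, None], (1, W)),
    "lateral":      np.tile(np.linspace(0, 255, W, dtype=np.uint8)[None, :], (H, 1)),
}

def lookup_template(target_type: str) -> np.ndarray:
    """Pick the target depth template for the classified depth change
    type; per claim 3, the template gray then drives the filter radius."""
    return DEPTH_TEMPLATES[target_type]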
4. The method according to claim 3, characterized in that the depth template is generated from a sample image as follows:
determining the depth change type of the background area of the sample image;
determining, based on the determined depth change type, the magnitude relation among the depth values of the four pixel points located at the four vertices of the background area;
respectively determining, based on the determined magnitude relation of the depth values, the gray values corresponding to the pixel points located at the four vertices of the depth template;
and obtaining the gray values of the other pixel points in the depth template by interpolation, based on the determined gray values of the four vertices.
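Claims 4 and 5 fix only the four corner gray values and leave the interpolation method open; the NumPy sketch below uses bilinear interpolation as one plausible choice, and the corner assignment shown for a mixed depth change is an assumed example.

import numpy as np

def make_depth_template(h, w, corner_grays):
    """Build an (h, w) grayscale depth template from four corner gray
    values via bilinear interpolation (claims 4-5).

    corner_grays = (top_left, top_right, bottom_left, bottom_right);
    per claim 5, the deepest corner gets the preset maximum gray value
    (e.g. 255) and the shallowest the preset minimum (e.g. 0).
    """
    tl, tr, bl, br = (float(g) for g in corner_grays)
    ys = np.linspace(0.0, 1.0, h)[:, None]  # vertical blend weights
    xs = np.linspace(0.0, 1.0, w)[None, :]  # horizontal blend weights
    top = tl * (1 - xs) + tr * xs           # top edge gradient
    bot = bl * (1 - xs) + br * xs           # bottom edge gradient
    return ((1 - ys) * top + ys * bot).astype(np.uint8)

# Assumed corners for a mixed longitudinal and lateral change:
# depth grows toward the top-right, so gray grows the same way.
template = make_depth_template(480, 640, (128, 255, 0, 128))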
5. The method according to claim 4, wherein respectively determining the gray values corresponding to the pixel points located at the four vertices of the depth template based on the determined magnitude relation of the depth values comprises:
determining a preset maximum gray value as a first gray value corresponding to a first pixel point, the first pixel point being the one with the maximum depth value among the pixel points at the four vertices of the depth template;
and determining a preset minimum gray value as a second gray value corresponding to a second pixel point, the second pixel point being the one with the minimum depth value among the pixel points at the four vertices of the depth template.
6. The method according to claim 3, wherein, for each pixel point in the target background area, determining the filter radius corresponding to the pixel point based on the gray value corresponding to the pixel point in the target depth template comprises:
acquiring a preset maximum filter radius and a preset minimum filter radius;
determining the maximum filter radius as the filter radius of the pixel point corresponding to the maximum gray value in the target background area, and determining the minimum filter radius as the filter radius of the pixel point corresponding to the minimum gray value in the target background area;
and determining the filter radius of each remaining pixel point in the target background area according to the gray value corresponding to that pixel point.
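This claim pins the filter radii only at the extreme gray values; the sketch below fills in the remaining pixel points by linear interpolation, which matches the proportionality of claim 2 but is still our assumption.

import numpy as np

def radii_from_grays(grays: np.ndarray, r_min: float, r_max: float) -> np.ndarray:
    """Map template gray values of background pixels to filter radii
    (claim 6): maximum gray -> preset maximum radius, minimum gray ->
    preset minimum radius, the rest linearly in between (assumed)."""
    g = grays.astype(np.float64)
    g_min, g_max = g.min(), g.max()
    if g_max == g_min:                 # flat template: uniform radius
        return np.full_like(g, r_min)
    t = (g - g_min) / (g_max - g_min)  # normalize gray to [0, 1]
    return r_min + t * (r_max - r_min)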
7. The method of claim 1, wherein the target depth change type is no depth change;
the determining, based on the target depth change type, a filtering parameter corresponding to each pixel point in the target background area includes:
acquiring a preset maximum filter radius and a preset minimum filter radius;
calculating the area proportion of the foreground area of the image to be processed to the whole area of the image to be processed;
determining a target filter radius according to the area proportion, wherein the target filter radius is in direct proportion to the area proportion, and the target filter radius is not smaller than the minimum filter radius and not larger than the maximum filter radius;
and determining the target filter radius as the filter radius of each pixel point in the target background area.
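For the no-depth-change branch, the claim fixes the proportionality and the [minimum, maximum] bounds but not the exact scaling; the sketch below assumes a linear scaling and clips to the preset bounds.

def uniform_radius(foreground_area: int, image_area: int,
                   r_min: float, r_max: float) -> float:
    """Claim 7: one shared radius for the whole background, growing
    with the foreground's share of the frame (a larger subject reads
    as closer, hence a stronger background blur)."""
    ratio = foreground_area / image_area  # area proportion
    r = r_min + ratio * (r_max - r_min)   # assumed linear scaling
    return min(max(r, r_min), r_max)      # keep within preset bounds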
8. The method according to any one of claims 1 to 6, further comprising:
and performing foreground detection on the image to be processed to obtain the target foreground area of the image to be processed.
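Claim 8 does not name a detection method; purely as a stand-in, the sketch below splits foreground from background by a depth threshold (a real system might instead use a trained portrait segmentation model).

import numpy as np

def split_foreground(depth: np.ndarray, threshold: float):
    """Assumed foreground detection: nearer pixels (smaller depth)
    are foreground, the rest are the target background area."""
    foreground = depth < threshold
    background = ~foreground
    return foreground, background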
9. An image processing apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring an image to be processed;
the type determining module is used for determining a target depth change type of a target background area of the image to be processed, wherein the target depth change type is used for representing: a size change rule of depth values of pixel points in the target background area;
the filtering parameter determining module is used for determining the filtering parameters corresponding to all the pixel points in the target background area based on the target depth change type, wherein the pixel points with different depth values have different corresponding filtering parameters;
the filtering module is used for filtering each pixel point in the target background area by utilizing the filtering parameter corresponding to the pixel point;
wherein the type determining module is specifically configured to determine the target depth change type of the target background area of the image to be processed by:
inputting the image to be processed into a pre-trained convolutional neural network to obtain the target depth change type of the target background area of the image to be processed, wherein the convolutional neural network is obtained by training based on a plurality of sample images and the depth change types of the background areas of the sample images; the depth change types of the background areas of the plurality of sample images include: no depth change, longitudinal depth change, lateral depth change, and mixed longitudinal and lateral depth change.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the image processing method according to any one of claims 1 to 8 when executing the program.
11. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 8.
CN201910498181.8A 2019-06-10 2019-06-10 Image processing method and device Active CN110349080B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910498181.8A CN110349080B (en) 2019-06-10 2019-06-10 Image processing method and device
PCT/CN2020/090822 WO2020248774A1 (en) 2019-06-10 2020-05-18 Image processing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910498181.8A CN110349080B (en) 2019-06-10 2019-06-10 Image processing method and device

Publications (2)

Publication Number Publication Date
CN110349080A (en) 2019-10-18
CN110349080B (en) 2023-07-04

Family

ID=68181738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910498181.8A Active CN110349080B (en) 2019-06-10 2019-06-10 Image processing method and device

Country Status (2)

Country Link
CN (1) CN110349080B (en)
WO (1) WO2020248774A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110349080B (en) * 2019-06-10 2023-07-04 北京迈格威科技有限公司 Image processing method and device
CN110910304B (en) * 2019-11-08 2023-12-22 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and medium
CN110866946A (en) * 2019-11-25 2020-03-06 歌尔股份有限公司 Image processing method and device for depth module, storage medium and depth camera
CN115311147A (en) * 2021-05-06 2022-11-08 影石创新科技股份有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113436304B (en) * 2021-06-22 2023-05-23 青岛小鸟看看科技有限公司 Image rendering method and device and head-mounted display equipment
CN114125296A (en) * 2021-11-24 2022-03-01 广东维沃软件技术有限公司 Image processing method, image processing device, electronic equipment and readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106331492A (en) * 2016-08-29 2017-01-11 广东欧珀移动通信有限公司 Image processing method and terminal
CN106530241A (en) * 2016-10-31 2017-03-22 努比亚技术有限公司 Image blurring processing method and apparatus
CN107370958A (en) * 2017-08-29 2017-11-21 广东欧珀移动通信有限公司 Image virtualization processing method, device and camera terminal
CN107945105A (en) * 2017-11-30 2018-04-20 广东欧珀移动通信有限公司 Background blurring processing method, device and equipment
CN108076286A (en) * 2017-11-30 2018-05-25 广东欧珀移动通信有限公司 Image weakening method, device, mobile terminal and storage medium
CN108234826A (en) * 2018-01-15 2018-06-29 厦门美图之家科技有限公司 Image processing method and device
CN109712067A (en) * 2018-12-03 2019-05-03 北京航空航天大学 A kind of virtual viewpoint rendering method based on depth image

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101587586B (en) * 2008-05-20 2013-07-24 株式会社理光 Device and method for processing images
US9418400B2 (en) * 2013-06-18 2016-08-16 Nvidia Corporation Method and system for rendering simulated depth-of-field visual effect
CN106993112B (en) * 2017-03-09 2020-01-10 Oppo广东移动通信有限公司 Background blurring method and device based on depth of field and electronic device
CN108230234B (en) * 2017-05-19 2019-08-20 深圳市商汤科技有限公司 Image blurs processing method, device, storage medium and electronic equipment
CN110349080B (en) * 2019-06-10 2023-07-04 北京迈格威科技有限公司 Image processing method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Automatic matting algorithm based on fuzzy estimation fused with saliency detection; Pei Xiaokang et al.; Application Research of Computers; 2012-10-15; Vol. 29, No. 10; pp. 3945-3947, 3955 *

Also Published As

Publication number Publication date
CN110349080A (en) 2019-10-18
WO2020248774A1 (en) 2020-12-17

Similar Documents

Publication Publication Date Title
CN110349080B (en) Image processing method and device
US9639945B2 (en) Depth-based application of image effects
KR101855224B1 (en) Image processing method and apparatus
US9444991B2 (en) Robust layered light-field rendering
US10410327B2 (en) Shallow depth of field rendering
CN106027851B (en) Method and system for processing images
CN108848367B (en) Image processing method and device and mobile terminal
CN108234858B (en) Image blurring processing method and device, storage medium and electronic equipment
CN108230234B (en) Image blurs processing method, device, storage medium and electronic equipment
JP2023515654A (en) Image optimization method and device, computer storage medium, computer program, and electronic equipment
US10992845B1 (en) Highlight recovery techniques for shallow depth of field rendering
CN106952247B (en) Double-camera terminal and image processing method and system thereof
TWI777098B (en) Method, apparatus and electronic device for image processing and storage medium thereof
JP5911292B2 (en) Image processing apparatus, imaging apparatus, image processing method, and image processing program
CN111127309A (en) Portrait style transfer model training method, portrait style transfer method and device
Zheng et al. Constrained predictive filters for single image bokeh rendering
WO2023081399A1 (en) Integrated machine learning algorithms for image filters
CN110570375B (en) Image processing method, device, electronic device and storage medium
CN115496925A (en) Image processing method, apparatus, storage medium, and program product
CN113793257A (en) Image processing method and device, electronic equipment and computer readable storage medium
CN111105370B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
US20220398704A1 (en) Intelligent Portrait Photography Enhancement System
CN109727193B (en) Image blurring method and device and electronic equipment
CN105574818B (en) Depth-of-field rendering method and device
CN114372931A (en) Target object blurring method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant