CN115115683A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN115115683A
Authority
CN
China
Prior art keywords
pixel point
depth
image
target
value
Prior art date
Legal status
Pending
Application number
CN202110302683.6A
Other languages
Chinese (zh)
Inventor
余冲
雷磊
王晓涛
李雅楠
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202110302683.6A
Publication of CN115115683A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure provides an image processing method and apparatus. The method includes: acquiring an initial depth image corresponding to an original image, acquiring the pixel value similarity between adjacent pixel points in the original image, and adjusting the depth values in the initial depth image according to the pixel value similarity to obtain a target depth image corresponding to the original image. Compared with the initial depth image, the original image has characteristics such as clear boundaries of the photographed objects. By using the pixel value similarity between adjacent pixel points in the original image to adjust the depth values in the initial depth image, the depth information at the boundaries of the photographed objects in the initial depth image can be refined, yielding a high-quality target depth image.

Description

Image processing method and device
Technical Field
The present disclosure relates to the field of computer communication technologies, and in particular, to an image processing method and apparatus.
Background
With the development of computer vision technology, especially 3D vision, computer vision systems increasingly demand high-quality depth information in order to realize functions such as background blurring, AR rulers, and AR special effects.
At present, technologies such as TOF (Time of Flight), structured light, and binocular vision are used to obtain depth images. However, depth images obtained in these ways suffer from many problems, such as missing depth information at the boundaries of photographed objects, so a high-quality depth image cannot be obtained.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an image processing method and apparatus.
According to a first aspect of embodiments of the present disclosure, there is provided an image processing method, the method including:
acquiring an initial depth image corresponding to an original image;
acquiring pixel value similarity between adjacent pixel points in the original image;
and adjusting the depth value in the initial depth image according to the pixel value similarity to obtain a target depth image corresponding to the original image.
Optionally, the initial depth image comprises a set of pixel points whose depth values are zero; the adjusting the depth value in the initial depth image according to the pixel value similarity includes:
and adjusting the depth value of the pixel point in the pixel point set according to the pixel value similarity.
Optionally, the adjusting the depth value in the initial depth image according to the pixel value similarity to obtain a target depth image corresponding to the original image includes:
aiming at a pixel point in the initial depth image, determining the weight used by the pixel point in the initial depth image according to the pixel value similarity corresponding to a target pixel point in the original image, wherein the target pixel point and the pixel point are positioned at the same pixel point position;
constructing an objective function, wherein the objective function includes a first data item, a second data item and a third data item, wherein the first data item is used for representing a first difference, the first difference is a difference between an optimized depth value and an actual depth value of the pixel point in the initial depth image, the second data item is used for representing a second difference, the second difference is a difference between the optimized depth value of the pixel point in the initial depth image and an optimized depth value of an adjacent pixel point, and the third data item is used for representing a weight used by the pixel point in the initial depth image;
and adjusting the optimized depth value of each pixel point in the objective function, and in response to the adjusted objective function meeting a preset condition, obtaining the target depth image according to the current optimized depth value of each pixel point.
Optionally, the weight used by the pixel point is negatively correlated with the pixel value similarity corresponding to the target pixel point.
Optionally, the obtaining the target depth image according to the current optimized depth value of each pixel point includes:
determining the deviation between the current optimized depth value and the actual depth value of each pixel point in the initial depth image;
in response to the deviation not falling within a preset deviation range, adjusting the current optimized depth value of the pixel point to the actual depth value;
and obtaining the target depth image according to the actual depth value of the pixel point.
Optionally, the adjusting the depth value in the initial depth image according to the pixel value similarity to obtain a target depth image corresponding to the original image includes:
adjusting the depth value in the initial depth image according to the pixel value similarity to obtain an intermediate depth image;
and denoising the intermediate depth image to obtain the target depth image.
Optionally, the method further comprises:
after a group of target depth images corresponding to a group of continuously captured original images is obtained, the group of original images including the original image: for a first target depth image corresponding to a first original image in the group of original images, acquiring a first depth value of a first pixel point in the first target depth image used for representing a photographed object, and a second depth value of a second pixel point in a second target depth image used for representing the photographed object, wherein the second target depth image is the target depth image corresponding to a second original image in the group of original images, and the second original image is captured adjacent to the first original image;
and determining the target depth value of the first pixel point in the first target depth image according to the first depth value and the second depth value or according to the second depth value.
Optionally, the obtaining a second depth value of a second pixel point in a second target depth image, where the second pixel point is used to characterize the photographic object, includes:
acquiring a third pixel point in the first original image and a fourth pixel point in the second original image, wherein the third pixel point and the fourth pixel point are both used for representing the shooting object;
determining the position relation between the third pixel point and the fourth pixel point;
determining a target pixel point position in the second target depth image according to the pixel point position of the first pixel point in the first target depth image and the position relation;
determining the second pixel point in the second target depth image at the target pixel point position;
and acquiring a second depth value of the second pixel point.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including:
the image acquisition module is configured to acquire an initial depth image corresponding to the original image;
the similarity acquisition module is configured to acquire pixel value similarity between adjacent pixel points in the original image;
and the depth value adjusting module is configured to adjust the depth value in the initial depth image according to the pixel value similarity, so as to obtain a target depth image corresponding to the original image.
Optionally, the initial depth image comprises a set of pixels having a depth value of zero;
the depth value adjusting module is configured to adjust the depth value of the pixel point in the pixel point set according to the pixel value similarity.
Optionally, the depth value adjusting module includes:
the weight determining submodule is configured to determine, for a pixel point in the initial depth image, a weight used by the pixel point in the initial depth image according to a pixel value similarity corresponding to a target pixel point in the original image, wherein the target pixel point and the pixel point are located at the same pixel point position;
a function construction sub-module configured to construct an objective function, the objective function including a first data item, a second data item and a third data item, wherein the first data item is used for representing a first difference, the first difference is a difference between an optimized depth value and an actual depth value of the pixel point in the initial depth image, the second data item is used for representing a second difference, the second difference is a difference between the optimized depth value of the pixel point in the initial depth image and an optimized depth value of an adjacent pixel point, and the third data item is used for representing a weight used by the pixel point in the initial depth image;
the optimized depth value adjusting submodule is configured to adjust the optimized depth value of each pixel point in the objective function;
and the image obtaining submodule is configured to, in response to the adjusted objective function meeting a preset condition, obtain the target depth image according to the current optimized depth value of each pixel point.
Optionally, the weight used by the pixel point is negatively correlated with the pixel value similarity corresponding to the target pixel point.
Optionally, the image obtaining sub-module includes:
a deviation determining unit configured to determine, for each pixel point in the initial depth image, a deviation of a current optimized depth value and an actual depth value of the pixel point;
a depth value adjusting unit configured to adjust the current optimized depth value of the pixel point to the actual depth value in response to the deviation not falling within a preset deviation range;
an image obtaining unit configured to obtain the target depth image according to the actual depth values of the pixel points.
Optionally, the depth value adjusting module includes:
a depth value adjusting submodule configured to adjust a depth value in the initial depth image according to the pixel value similarity, so as to obtain an intermediate depth image;
an image denoising module configured to denoise the intermediate depth image to obtain the target depth image.
Optionally, the apparatus further comprises:
a depth value obtaining module configured to: after a group of target depth images corresponding to a group of continuously captured original images is obtained, the group of original images including the original image, obtain, for a first target depth image corresponding to a first original image in the group of original images, a first depth value of a first pixel point in the first target depth image used for representing a photographed object, and a second depth value of a second pixel point in a second target depth image used for representing the photographed object, the second target depth image being the target depth image corresponding to a second original image in the group of original images, the second original image being captured adjacent to the first original image;
a depth value determination module configured to determine a target depth value of the first pixel point in the first target depth image according to the first depth value and the second depth value, or according to the second depth value.
Optionally, the depth value obtaining module includes:
the pixel point obtaining submodule is configured to obtain a third pixel point in the first original image and a fourth pixel point in the second original image, and the third pixel point and the fourth pixel point are both used for representing the shooting object;
a position relation determination submodule configured to determine a position relation between the third pixel point and the fourth pixel point;
a pixel point position determining submodule configured to determine a target pixel point position in the second target depth image according to the pixel point position of the first pixel point in the first target depth image and the positional relationship;
a pixel point determination submodule configured to determine the second pixel point located at the target pixel point position in the second target depth image;
and the second depth value acquisition submodule is configured to acquire a second depth value of the second pixel point.
According to a third aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any one of the above first aspects.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of any of the first aspect above.
The technical solutions provided by the embodiments of the present disclosure can have the following beneficial effects:
The embodiments of the present disclosure provide an image processing method. Compared with the initial depth image, the original image has characteristics such as clear boundaries of the photographed objects; adjusting the depth values in the initial depth image by using the pixel value similarity between adjacent pixel points in the original image can refine the depth information at the boundaries of the photographed objects in the initial depth image and produce a high-quality target depth image.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
FIG. 1 is a flow diagram illustrating a method of image processing according to an exemplary embodiment;
FIG. 2 is a flow diagram illustrating another method of image processing according to an exemplary embodiment;
FIG. 3 is a block diagram of an image processing apparatus according to an exemplary embodiment;
FIG. 4 is a schematic structural diagram of an electronic device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "upon" or "when" or "in response to determining", depending on the context.
The image processing method provided by the disclosure can be applied to electronic equipment. For example, the electronic device is provided with a camera module, and after an image is acquired by the camera module, a depth image corresponding to the image is obtained by using the image processing method provided by the present disclosure.
The image processing method provided by the disclosure can be applied to a server. For example, the server acquires an image uploaded by the electronic device, and obtains a depth image corresponding to the image by using the image processing method provided by the present disclosure. Further, the server side can send the obtained depth image to the electronic device for the electronic device to use.
The following describes an image processing method provided by the present disclosure, taking an example in which the image processing method is applied to an electronic device.
FIG. 1 is a flow diagram illustrating a method of image processing according to an exemplary embodiment, the method illustrated in FIG. 1 including:
in step 101, an initial depth image corresponding to an original image is acquired.
The original image is an image to be processed, and the original image needs to be processed to obtain a depth image corresponding to the original image. The original image may be an RGB image or other suitable type of image.
The method in the related art can be adopted to obtain the initial depth image corresponding to the original image.
In one embodiment, the electronic device provides specific functions such as an AR ruler and AR special effects; when such a function is turned on, the electronic device starts to execute the image processing method provided by this embodiment.
In step 102, the pixel value similarity between adjacent pixel points in the original image is obtained.
For a pixel point in the original image, there are multiple pixel points adjacent to the pixel point, for example, there are eight pixel points adjacent to the pixel point. In this step, the pixel value similarity between the pixel point and a plurality of adjacent pixel points may be calculated, for example, the pixel value similarity between the pixel point and each adjacent pixel point is calculated.
There are various methods for calculating the pixel value similarity between the pixel point and each adjacent pixel point. For example, the pixel value of the pixel point may be divided by the pixel value of each adjacent pixel point to obtain the pixel value similarity. As another example, after the division, the magnitudes of the two pixel values may additionally be taken into account and the quotient adjusted accordingly. A method from the related art may also be used to determine the pixel value similarity.
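As an illustration of the division-based option, the following Python sketch computes a similarity between a pixel and one of its eight neighbors. It is a minimal sketch under stated assumptions: a single-channel image and a symmetric variant of the division approach (larger value divided by smaller), so the result is 1.0 for identical values and grows as the values diverge; this convention is consistent with the weight introduced later being negatively correlated with the similarity value.

```python
import numpy as np

def pixel_similarity(image, y, x, ny, nx):
    # Ratio of the two pixel values, larger over smaller: 1.0 means the
    # values are identical; the further apart they are, the larger the
    # result. A small epsilon guards against division by zero.
    a = float(image[y, x]) + 1e-6
    b = float(image[ny, nx]) + 1e-6
    return max(a, b) / min(a, b)

# Example: similarity between pixel (10, 10) and its right-hand neighbor.
img = np.random.randint(0, 256, size=(64, 64))
sim = pixel_similarity(img, 10, 10, 10, 11)
```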
In step 103, the depth values in the initial depth image are adjusted according to the pixel value similarity, and a target depth image corresponding to the original image is obtained.
In one embodiment, depth images obtained by existing methods lose depth information in local areas to varying degrees, so the depth image contains hole-like areas. In this case, the initial depth image includes a pixel point set whose depth values are zero. The set includes abnormal pixel points, i.e., pixel points for which the actual distance between the photographed object they represent and the camera module is not zero.
In order to solve the above problem, when the electronic device executes this step, the depth value of the pixel point in the pixel point set included in the initial depth image may be adjusted according to the pixel value similarity obtained in step 102.
By adopting the method, the number of pixel points with zero depth values in the depth image is reduced, and the number of pixel points with non-zero depth values in the depth image is increased, so that a dense depth image is obtained.
In one embodiment, step 103 may be implemented by:
the first step is as follows: and aiming at the pixel points in the initial depth image, determining the weight used by the pixel points in the initial depth image according to the pixel value similarity corresponding to the target pixel point in the original image, wherein the target pixel point and the pixel points in the initial depth image are positioned at the same pixel point position.
The pixel value similarity corresponding to the target pixel point in the original image can be understood as the pixel value similarity between the target pixel point and its adjacent pixel points in the original image. The method described in step 102 may be used to obtain this similarity.
For example: the initial depth image comprises adjacent pixel points 1 and 2, and the coordinate of the pixel point 1 is (x) 1 ,y 1 ) The coordinate of the pixel point 2 is (x) 2 ,y 2 ) The original image comprises pixel 3 and pixel 4, and the coordinate of pixel 3 is (x) 1 ,y 1 ) The coordinate of the pixel point 4 is (x) 2 ,y 2 )。
And determining the weight suitable for the pixel point 2 used by the pixel point 1 in the initial depth image according to the pixel value similarity between the pixel point 3 and the adjacent pixel point 4.
In one case, the weight used by a pixel point in the initial depth image is negatively correlated with the pixel value similarity corresponding to the target pixel point in the original image.
The specific relationship between the weight and the pixel value similarity can be set as needed; for example, the weight may be equal to the reciprocal of the pixel value similarity.
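A minimal sketch of such a negatively correlated mapping, assuming the reciprocal form mentioned above and the ratio-style similarity from the earlier sketch (the function name is illustrative):

```python
def weight_from_similarity(similarity):
    # With the ratio convention above (similarity >= 1.0, where 1.0 means
    # identical values), the reciprocal is largest for identical neighbors
    # and decays as the pixel values diverge, i.e. the weight is negatively
    # correlated with the similarity value, as required.
    return 1.0 / similarity
```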
The second step: construct an objective function.
The objective function comprises a first data item, a second data item and a third data item, wherein the first data item is used for representing a first difference, the first difference is a difference between an optimized depth value and an actual depth value of a pixel point in the initial depth image, the second data item is used for representing a second difference, the second difference is a difference between the optimized depth value of the pixel point in the initial depth image and an optimized depth value of an adjacent pixel point, and the third data item is used for representing a weight used by the pixel point in the initial depth image.
The depth value of a pixel point in the initial depth image is referred to as the actual depth value.
The optimized depth value of a pixel point in the initial depth image is the depth value obtained by optimizing the actual depth value of that pixel point; it is the depth value to be determined.
Provided that each data item performs its characterization role described above, the specific forms of the first data item, the second data item, and the third data item can be set according to needs and experience.
For example, the expression of the objective function is as follows:

E(x) = λ ∑_{i∈A} (x_i − d_i)² + ∑_{i∈A} ∑_{j∈N(i)} W_ij (x_i − x_j)²
where λ is a penalty coefficient, A is the set of pixel point coordinates in the initial depth image, i is a pixel point coordinate in the initial depth image, x_i is the optimized depth value of the pixel point a at i in the initial depth image, d_i is the actual depth value of the pixel point a at i, N(i) is the set of coordinates of all pixel points adjacent to pixel point a in the initial depth image, j is the coordinate of one pixel point adjacent to pixel point a, x_j is the optimized depth value of the pixel point b at j, and W_ij is the weight that pixel point a applies to pixel point b in the initial depth image.
A pixel value similarity exists between the pixel point at i and the pixel point at j in the original image; W_ij is negatively correlated with this pixel value similarity.
Typically λ is a fixed value on the order of one thousandth or one ten-thousandth, and it can be set according to needs and experience.
Usually, the initial depth image and the original image have the same pixel point distribution, and the pixel points located at the same pixel point coordinate are used for representing the same shooting object. For example, a pixel point a at the position i in the initial depth image and a pixel point at the position i in the original image are used for representing the same photographic object, and a pixel point b at the position j in the initial depth image and a pixel point at the position j in the original image are used for representing the same photographic object.
When using the above objective function, the corresponding term is computed for each j in N(i) and the results over all j are summed; likewise, the term is computed for each i in A and the results over all i are summed.
In the above expression of the objective function, the first data item includes (x_i − d_i)², the second data item includes (x_i − x_j)², and the third data item includes W_ij.
On the basis of this expression of the objective function, further terms may be added, such as a non-global term, a first-derivative term of the optimized depth values of the pixel points, or a first-derivative term of the actual depth values of the pixel points.
This expression is given only as an example of the objective function; any suitable expression may be used.
The third step: and adjusting the optimized depth value of each pixel point in the objective function, responding to the adjusted objective function meeting the preset condition, and obtaining a target depth image according to the current optimized depth value of each pixel point.
For the objective function in the above example, each time an adjustment of the optimized depth values of the pixel points is completed, the function value of the objective function is calculated from the current optimized depth values; if the function value is at its minimum, the adjusted objective function is determined to meet the preset condition.
The objective function in the above example includes W_ij, which is determined according to the pixel value similarity between adjacent pixel points in the original image; the purpose of using W_ij is to introduce pixel value information from the original image. Compared with the initial depth image, the original image has characteristics such as clear boundaries of the photographed objects. Adjusting the depth values in the initial depth image with the help of W_ij can therefore refine the depth information at the boundaries of the photographed objects in the initial depth image, yielding a high-quality target depth image.
The objective function in the above example is used to adjust the depth value of every pixel point in the initial depth image, thereby implementing global optimization. Using the objective function, zero depth values in the initial depth image are adjusted to non-zero depth values, removing the hole-like areas formed by runs of zero depth values, increasing the number of non-zero depth values, and finally producing a dense target depth image.
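As a concrete illustration of the three steps, the following Python sketch minimizes an objective of the form reconstructed above by solving the sparse linear system obtained by setting the gradient of the quadratic objective to zero; this reaches the same minimum that repeatedly adjusting the optimized depth values would converge to. The 4-neighborhood (instead of the eight neighbors mentioned earlier), the weight layout, the function names, and the conjugate-gradient solver are all assumptions for illustration, not the patent's prescribed implementation.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def refine_depth(init_depth, weights, lam=1e-3):
    """Minimize lam*sum_i (x_i - d_i)^2 + sum_(i,j) W_ij*(x_i - x_j)^2.

    init_depth : (H, W) array of actual depth values d (zero at holes).
    weights    : {'x': (H, W) array, 'y': (H, W) array}; weights['x'][y, x]
                 couples pixel (y, x) with (y, x + 1), and similarly for 'y'.
    """
    H, W = init_depth.shape
    n = H * W
    d = init_depth.ravel().astype(np.float64)

    rows, cols, vals = [], [], []
    diag = np.full(n, lam)  # data term contributes lam on the diagonal

    def couple(i, j, w):
        # Each smoothness term w*(x_i - x_j)^2 adds w to both diagonal
        # entries and -w to both off-diagonal entries of the system matrix.
        diag[i] += w
        diag[j] += w
        rows.extend([i, j])
        cols.extend([j, i])
        vals.extend([-w, -w])

    for y in range(H):
        for x in range(W):
            i = y * W + x
            if x + 1 < W:
                couple(i, i + 1, weights['x'][y, x])
            if y + 1 < H:
                couple(i, i + W, weights['y'][y, x])

    A = sp.csr_matrix((vals, (rows, cols)), shape=(n, n)) + sp.diags(diag)
    x_opt, _ = spla.cg(A, lam * d, x0=d, maxiter=500)  # solve A x = lam*d
    return x_opt.reshape(H, W)
```

Because the objective is a convex quadratic, the single sparse solve reaches the same minimum that iterative adjustment of the optimized depth values would converge to.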
Regarding the third step: after the current optimized depth value of a pixel point is determined using the objective function, it may deviate from the actual depth value owing to various factors; if the deviation is too large, the quality of the final target depth image suffers.
Based on this, the electronic device may determine, for each pixel point in the initial depth image, a deviation between a current optimized depth value and an actual depth value of the pixel point, adjust the current optimized depth value of the pixel point to the actual depth value in response to that the deviation does not fall within a preset deviation range, and obtain the target depth image according to the actual depth value of the pixel point.
For example, in response to the deviation being greater than the preset deviation, the current optimized depth value of the pixel point is adjusted to the actual depth value, and a target depth image including the actual depth value of the pixel point is obtained.
In this process, the constraint of the actual depth values is added in order to remove depth values with large deviations and obtain an accurate target depth image.
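A minimal sketch of this fallback constraint, assuming a relative deviation threshold (the preset deviation range is not specified in the text) and leaving hole pixels, whose actual depth of zero is not a reliable reference, untouched:

```python
import numpy as np

def clamp_to_actual(optimized, actual, max_rel_dev=0.2):
    # Where the current optimized depth strays too far from the actual
    # (measured) depth, fall back to the actual value. Pixels whose actual
    # depth is zero (holes) keep their optimized values.
    out = optimized.copy()
    valid = actual > 0
    too_far = valid & (np.abs(optimized - actual) > max_rel_dev * actual)
    out[too_far] = actual[too_far]
    return out
```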
In one embodiment, step 103 may be implemented by: firstly, adjusting the depth value in the initial depth image according to the pixel value similarity to obtain an intermediate depth image; and secondly, denoising the intermediate depth image to obtain a target depth image.
Denoising can be performed by using a median filtering method, for example, denoising is performed by using a 3 × 3 median filtering method, so as to reduce the discrete noise of the initial depth image.
In the embodiment, a denoising operation is additionally arranged to obtain a low-noise target depth image.
In one embodiment, if the initial depth image is relatively noisy, the initial depth image may be denoised first, for example, a median filtering method is used to denoise the initial depth image, so as to remove discrete noise in the initial depth image. And then, according to the pixel value similarity, adjusting the depth value in the denoised initial depth image to obtain a target depth image corresponding to the original image.
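In either ordering, the denoising step itself can be as simple as the 3 × 3 median filter mentioned above; a minimal sketch assuming SciPy is available:

```python
from scipy.ndimage import median_filter

def denoise_depth(depth):
    # 3x3 median filtering suppresses discrete (salt-and-pepper) noise
    # while largely preserving depth edges.
    return median_filter(depth, size=3)
```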
The embodiments of the present disclosure provide an image processing method. Compared with the initial depth image, the original image has characteristics such as clear boundaries of the photographed objects; using the pixel value similarity between adjacent pixel points in the original image to adjust the depth values in the initial depth image refines the depth information at the boundaries of the photographed objects and yields a high-quality target depth image.
In one embodiment, a set of target depth images corresponding to a set of continuously captured original images is obtained using the method shown in FIG. 1. In this case, the electronic apparatus can also operate as follows.
FIG. 2 is a flow diagram illustrating another method of image processing according to an exemplary embodiment, the method illustrated in FIG. 2 including:
in step 201, for a first target depth image corresponding to a first original image in a group of original images, a first depth value of a first pixel point in the first target depth image for representing a photographic object and a second depth value of a second pixel point in a second target depth image for representing the photographic object are obtained, where the second target depth image is a target depth image corresponding to a second original image in the group of original images, and the second original image is photographed adjacent to the first original image.
In step 202, a target depth value of a first pixel point in the first target depth image is determined according to the first depth value and the second depth value, or according to the second depth value.
In this embodiment, the depth values in the second target depth image, which corresponds to the adjacently captured second original image, are used to constrain the depth values in the first target depth image corresponding to the first original image. This keeps the depth values at the same pixel position relatively stable across the two target depth images corresponding to two adjacently captured original frames, avoiding the visual flicker caused by large depth differences at the same pixel position and achieving stability in the time domain.
Step 201 may be implemented by the following steps:
step 1, acquiring a third pixel point in the first original image and a fourth pixel point in the second original image, wherein the third pixel point and the fourth pixel point are both used for representing the shooting object.
The first pixel point, the second pixel point, the third pixel point and the fourth pixel point are used for representing the same shooting object.
And 2, determining the position relation between the third pixel point and the fourth pixel point.
And 3, determining the position of the target pixel point in the second target depth image according to the position of the pixel point of the first pixel point in the first target depth image and the position relation determined in the step 2.
And 4, determining a second pixel point positioned at the target pixel point in the second target depth image.
And 5, acquiring a second depth value of a second pixel point in the second target depth image.
For example, in the process of shooting a video, a set of original images which are continuously shot is obtained, and a set of target depth images corresponding to the set of original images is obtained.
First, one frame of the group of original images is taken as a base image, and at least one frame of original image captured adjacent to the base image is taken as a reference image.
There are various ways of selecting the reference images. For example, the group of original images is arranged in shooting order, and the m frames before the base image and the n frames after it are taken as reference images, where m and n may be equal or unequal; for example, m and n are each integers from 3 to 5.
For a base image ranked near the front, such as the first frame, a certain number of the original images after it can be used as reference images. For a base image ranked near the end, such as the last frame, a certain number of the original images before it can be used as reference images.
Secondly, a first position relation is determined between the pixel points in the base image and the pixel points in the reference image that represent the same photographed object.
Suppose the base image corresponds to target depth image 1 and the reference image corresponds to target depth image 2. For the same photographed object, the pixel points representing it in the base image and in the reference image have a first position relation, and the pixel points representing it in target depth image 1 and in target depth image 2 should have the same first position relation.
Thirdly, the position of the pixel point representing the photographed object in target depth image 2 is determined according to the position of the pixel point representing the same object in target depth image 1 and the first position relation.
In practice, the base image contains a plurality of photographed objects, and a first position relation is obtained for each of them by the above method. According to the positions in the base image of the pixel points representing the objects, the plurality of first position relations can be arranged into a homography matrix, which is then used to determine the positions in target depth image 2 of the pixel points representing the various objects.
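A minimal sketch of this mapping, assuming OpenCV is available: the homography is estimated from matched pixel positions in the base and reference original images, and then used to carry a pixel position from target depth image 1 into target depth image 2 (function and variable names are illustrative):

```python
import numpy as np
import cv2

def map_position(pts_base, pts_ref, point):
    # Estimate the homography from pixel positions in the base original
    # image to the corresponding positions in the reference original image,
    # then map one (x, y) position through it. pts_base and pts_ref are
    # (N, 2) float32 arrays of matched positions, N >= 4.
    h_matrix, _ = cv2.findHomography(pts_base, pts_ref, cv2.RANSAC)
    src = np.float32([[point]])                    # shape (1, 1, 2)
    dst = cv2.perspectiveTransform(src, h_matrix)  # apply the 3x3 homography
    return tuple(dst[0, 0])                        # position in the reference frame
```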
In step 202, in one case, a statistic of the first depth value and the second depth value is computed, and the result is determined as the target depth value of the first pixel point.
Here the first original image may be understood as the base image, and the second original image as a reference image. When there are a plurality of second original images, there are a plurality of second depth values; a statistic of the first depth value and the plurality of second depth values is computed, and the result is determined as the target depth value of the first pixel point.
In another case, the target depth value of the first pixel point in the first target depth image is determined according to the second depth value.
When there is one second original image, there is one second depth value. The second depth value may be determined directly as the target depth value of the first pixel point, or it may be processed, for example multiplied by a coefficient, with the result determined as the target depth value of the first pixel point.
When there are a plurality of second original images, there are a plurality of second depth values; in that case, a statistic of the plurality of second depth values may be computed and the result determined as the target depth value of the first pixel point.
The statistic over the plurality of second depth values may be a weighted sum of them. For example, for the first, second, third, and fourth pixel points described above, the pixel value similarity between the fourth pixel point and the third pixel point may be determined, and the weight applied to the second depth value of the second pixel point determined from that similarity; in this example, the weight is positively correlated with the pixel value similarity.
In practice, a weighted median filtering algorithm may be used to aggregate the plurality of depth values, obtaining an optimal depth value for the first pixel point in the first target depth image.
For a video stream with a higher input frame rate, a plain median filtering algorithm may be used to aggregate the multiple depth values, likewise obtaining an optimal depth value for the first pixel point in the first target depth image.
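A minimal sketch of such a statistic, assuming a plain median when no weights are given and a simple weighted-median rule otherwise; both the fusion rule and the weight handling are illustrative assumptions:

```python
import numpy as np

def fuse_depth_values(depths, weights=None):
    # Combine the depth values found for the same photographed object
    # across adjacent frames. With weights (e.g. from pixel value
    # similarity, larger for more similar pixels), pick the value at
    # which the cumulative weight first reaches half the total.
    values = np.asarray(depths, dtype=np.float64)
    if weights is None:
        return float(np.median(values))
    w = np.asarray(weights, dtype=np.float64)
    order = np.argsort(values)
    cum = np.cumsum(w[order])
    k = int(np.searchsorted(cum, 0.5 * cum[-1]))
    return float(values[order][k])
```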
Using the method provided by this embodiment, each frame of a continuously captured sequence of original images can in turn be taken as the base image, and the target depth image corresponding to each frame optimized, so as to obtain high-quality depth images.
In practice, the frames in a group of original images may be taken as the base image in order of shooting time, from earliest to latest.
While, for purposes of simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combinations of acts, it will be appreciated by those skilled in the art that the present disclosure is not limited by the order of acts described, as some steps may, in accordance with the present disclosure, occur in other orders or concurrently.
Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that acts and modules referred to are not necessarily required by the disclosure.
Corresponding to the foregoing embodiments of the image processing method, the present disclosure further provides an image processing apparatus and corresponding embodiments.
Fig. 3 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment, referring to fig. 3, the apparatus including:
an image obtaining module 31 configured to obtain an initial depth image corresponding to the original image;
a similarity obtaining module 32 configured to obtain pixel value similarities between adjacent pixel points in the original image;
and a depth value adjusting module 33 configured to adjust a depth value in the initial depth image according to the pixel value similarity, so as to obtain a target depth image corresponding to the original image.
In an alternative embodiment, on the basis of the image processing apparatus shown in fig. 3, the initial depth image includes a set of pixel points whose depth values are zero;
the depth value adjusting module 33 may be configured to adjust the depth value of the pixel point in the pixel point set according to the pixel value similarity.
In an alternative embodiment, on the basis of the image processing apparatus shown in fig. 3, the depth value adjusting module 33 may include:
the weight determining submodule is configured to determine, for a pixel point in the initial depth image, a weight used by the pixel point in the initial depth image according to a pixel value similarity corresponding to a target pixel point in the original image, wherein the target pixel point and the pixel point are located at the same pixel point position;
a function construction sub-module configured to construct an objective function, the objective function including a first data item, a second data item and a third data item, wherein the first data item is used for representing a first difference, the first difference is a difference between an optimized depth value and an actual depth value of the pixel point in the initial depth image, the second data item is used for representing a second difference, the second difference is a difference between the optimized depth value of the pixel point in the initial depth image and an optimized depth value of an adjacent pixel point, and the third data item is used for representing a weight used by the pixel point in the initial depth image;
the optimized depth value adjusting submodule is configured to adjust the optimized depth value of each pixel point in the objective function;
and the image obtaining submodule is configured to, in response to the adjusted objective function meeting a preset condition, obtain the target depth image according to the current optimized depth value of each pixel point.
In an optional embodiment, the weight used by the pixel point is inversely related to the pixel value similarity corresponding to the target pixel point.
In an alternative embodiment, the image obtaining sub-module includes:
a deviation determining unit configured to determine, for each pixel point in the initial depth image, a deviation of a current optimized depth value and an actual depth value of the pixel point;
a depth value adjusting unit configured to adjust the current optimized depth value of the pixel point to the actual depth value in response to the deviation not falling within a preset deviation range;
an image obtaining unit configured to obtain the target depth image according to the actual depth values of the pixel points.
In an alternative embodiment, on the basis of the image processing apparatus shown in fig. 3, the depth value adjusting module 33 may include:
a depth value adjusting submodule configured to adjust a depth value in the initial depth image according to the pixel value similarity, so as to obtain an intermediate depth image;
an image denoising module configured to denoise the intermediate depth image to obtain the target depth image.
In an alternative embodiment, on the basis of the image processing apparatus shown in fig. 3, the apparatus may further include:
a depth value obtaining module configured to: after a group of target depth images corresponding to a group of continuously captured original images is obtained, the group of original images including the original image, obtain, for a first target depth image corresponding to a first original image in the group of original images, a first depth value of a first pixel point in the first target depth image used for representing a photographed object, and a second depth value of a second pixel point in a second target depth image used for representing the photographed object, the second target depth image being the target depth image corresponding to a second original image in the group of original images, the second original image being captured adjacent to the first original image;
a depth value determination module configured to determine a target depth value of the first pixel point in the first target depth image according to the first depth value and the second depth value, or according to the second depth value.
In an optional embodiment, the depth value obtaining module may include:
the pixel point obtaining submodule is configured to obtain a third pixel point in the first original image and a fourth pixel point in the second original image, and the third pixel point and the fourth pixel point are both used for representing the shooting object;
a position relation determination submodule configured to determine a position relation between the third pixel point and the fourth pixel point;
a pixel point position determining submodule configured to determine a target pixel point position in the second target depth image according to the pixel point position of the first pixel point in the first target depth image and the positional relationship;
a pixel point determination submodule configured to determine the second pixel point located at the target pixel point position in the second target depth image;
and the second depth value acquisition sub-module is configured to acquire a second depth value of the second pixel point.
Fig. 4 is a schematic diagram illustrating a structure of an electronic device 1600 according to an example embodiment. For example, the electronic device 1600 may be a user device such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or a wearable device such as a smart watch, smart glasses, a smart bracelet, or smart running shoes.
Referring to fig. 4, electronic device 1600 may include one or more of the following components: processing component 1602, memory 1604, power component 1606, multimedia component 1608, audio component 1610, input/output (I/O) interface 1612, sensor component 1614, and communications component 1616.
The processing component 1602 generally controls overall operation of the electronic device 1600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1602 may include one or more processors 1620 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 1602 can include one or more modules that facilitate interaction between the processing component 1602 and other components. For example, the processing component 1602 can include a multimedia module to facilitate interaction between the multimedia component 1608 and the processing component 1602.
The memory 1604 is configured to store various types of data to support operation at the device 1600. Examples of such data include instructions for any application or method operating on the electronic device 1600, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1604 may be implemented by any type or combination of volatile or non-volatile storage devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 1606 provides power to the various components of the electronic device 1600. The power components 1606 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 1600.
The multimedia component 1608 includes a screen that provides an output interface between the electronic device 1600 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1608 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 1600 is in an operation mode, such as a photographing mode or a video mode. Each front-facing camera and rear-facing camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 1610 is configured to output and/or input an audio signal. For example, audio component 1610 includes a Microphone (MIC) configured to receive external audio signals when electronic device 1600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 1604 or transmitted via the communications component 1616. In some embodiments, audio component 1610 further includes a speaker for outputting audio signals.
The I/O interface 1612 provides an interface between the processing component 1602 and peripheral interface modules, such as keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 1614 includes one or more sensors for providing various aspects of status evaluation for the electronic device 1600. For example, sensor assembly 1614 may detect an open/closed state of device 1600, the relative positioning of components, such as a display and keypad of device 1600, a change in position of device 1600 or a component of device 1600, the presence or absence of user contact with device 1600, orientation or acceleration/deceleration of device 1600, and a change in temperature of device 1600. The sensor assembly 1614 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 1614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communications component 1616 is configured to facilitate communications between the electronic device 1600 and other devices in a wired or wireless manner. The electronic device 1600 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the aforementioned communication component 1616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 1600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium, such as the memory 1604 including instructions that, when executed by the processor 1620 of the electronic device 1600, enable the electronic device 1600 to perform an image processing method, the method comprising: acquiring an initial depth image corresponding to an original image; acquiring pixel value similarity between adjacent pixel points in the original image; and adjusting the depth value in the initial depth image according to the pixel value similarity to obtain a target depth image corresponding to the original image.
The non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (18)

1. An image processing method, characterized in that the method comprises:
acquiring an initial depth image corresponding to an original image;
acquiring pixel value similarity between adjacent pixel points in the original image;
and adjusting the depth value in the initial depth image according to the pixel value similarity to obtain a target depth image corresponding to the original image.
2. The method of claim 1, wherein the initial depth image comprises a set of pixels having depth values of zero; the adjusting the depth value in the initial depth image according to the pixel value similarity includes:
and adjusting the depth value of the pixel point in the pixel point set according to the pixel value similarity.
3. The method according to claim 1, wherein the adjusting the depth values in the initial depth image according to the pixel value similarity to obtain a target depth image corresponding to the original image comprises:
aiming at a pixel point in the initial depth image, determining the weight used by the pixel point in the initial depth image according to the pixel value similarity corresponding to a target pixel point in the original image, wherein the target pixel point and the pixel point are positioned at the same pixel point position;
constructing an objective function, wherein the objective function includes a first data item, a second data item and a third data item, wherein the first data item is used for representing a first difference, the first difference is a difference between an optimized depth value and an actual depth value of the pixel point in the initial depth image, the second data item is used for representing a second difference, the second difference is a difference between the optimized depth value of the pixel point in the initial depth image and an optimized depth value of an adjacent pixel point, and the third data item is used for representing a weight used by the pixel point in the initial depth image;
and adjusting the optimized depth value of each pixel point in the objective function and, in response to the adjusted objective function meeting a preset condition, obtaining the target depth image according to the current optimized depth value of each pixel point.
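One plausible formal reading of the objective function in claim 3 (the claim fixes neither the algebra nor the neighbourhood, so the quadratic penalties, the 4-neighbourhood N(p), and the balance factor λ below are assumptions):

```latex
E(\hat{d}) = \sum_{p} \bigl(\hat{d}_p - d_p\bigr)^2
           + \lambda \sum_{p} \sum_{q \in \mathcal{N}(p)} w_{p,q} \, \bigl(\hat{d}_p - \hat{d}_q\bigr)^2
```

The first sum corresponds to the first data item (optimized depth versus actual depth of each pixel point), the second sum to the second data item (optimized depth differences between adjacent pixel points), and w_{p,q} plays the role of the third data item; attaching the weight to the neighbour term is only one reading of the claim. Minimizing E until a preset condition (e.g. convergence) is met yields the current optimized depth values.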
4. The method of claim 3, wherein the weight used by the pixel point is inversely related to the similarity of the pixel values corresponding to the target pixel point.
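Claim 4 constrains only the direction of the relationship: the weight must fall as the pixel value similarity rises. A reciprocal mapping is one simple choice (the exact form is an assumption):

```python
def weight_from_similarity(sim):
    """Monotone-decreasing mapping from pixel value similarity to the
    weight of claim 4; the reciprocal form is only one possibility."""
    return 1.0 / (1.0 + sim)
```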
5. The method according to claim 3 or 4, wherein the obtaining the target depth image according to the current optimized depth value of each pixel point comprises:
determining the deviation between the current optimized depth value and the actual depth value of each pixel point in the initial depth image;
in response to the deviation not falling within a preset deviation range, adjusting the current optimized depth value of the pixel point to the actual depth value;
and obtaining the target depth image according to the actual depth value of the pixel point.
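A sketch of claim 5's deviation check (the threshold is hypothetical; the claim speaks only of a preset deviation range):

```python
import numpy as np

def revert_outliers(optimized, actual, max_dev=0.1):
    """Where the optimized depth deviates from the actual depth by
    more than the preset range, fall back to the actual depth."""
    out = optimized.copy()
    too_far = np.abs(optimized - actual) > max_dev
    out[too_far] = actual[too_far]
    return out
```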
6. The method according to claim 1, wherein the adjusting the depth values in the initial depth image according to the pixel value similarity to obtain a target depth image corresponding to the original image comprises:
adjusting the depth value in the initial depth image according to the pixel value similarity to obtain an intermediate depth image;
and denoising the intermediate depth image to obtain the target depth image.
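Claim 6 does not name a denoiser; a median filter is a common choice for depth maps, sketched here with OpenCV (the kernel size is an assumption):

```python
import cv2
import numpy as np

def denoise_depth(intermediate):
    """Median-filter the intermediate depth image to obtain the target
    depth image; cv2.medianBlur accepts single-channel float32 input
    for kernel sizes 3 and 5."""
    return cv2.medianBlur(intermediate.astype(np.float32), 5)
```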
7. The method of claim 1, further comprising:
after obtaining a group of target depth images corresponding to a group of continuously captured original images, the group of original images including the original image, acquiring, for a first target depth image corresponding to a first original image in the group of original images, a first depth value of a first pixel point used for representing a photographed object in the first target depth image and a second depth value of a second pixel point used for representing the photographed object in a second target depth image, wherein the second target depth image is the target depth image corresponding to a second original image in the group of original images, and the second original image is captured adjacent to the first original image;
and determining the target depth value of the first pixel point in the first target depth image according to the first depth value and the second depth value or according to the second depth value.
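For the fusion step of claim 7, a fixed blend of the two frames' depth values is one minimal realization (alpha is an assumption; alpha = 0 reproduces the claim's alternative of using the second depth value alone):

```python
def fuse_depths(first_depth, second_depth, alpha=0.5):
    """Blend the depths of the same photographed object observed in two
    adjacently captured frames to stabilize the first frame's value."""
    return alpha * first_depth + (1.0 - alpha) * second_depth
```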
8. The method according to claim 7, wherein the acquiring a second depth value of a second pixel point used for representing the photographed object in a second target depth image comprises:
acquiring a third pixel point in the first original image and a fourth pixel point in the second original image, wherein the third pixel point and the fourth pixel point are both used for representing the photographed object;
determining the position relation between the third pixel point and the fourth pixel point;
determining a target pixel point position in the second target depth image according to the pixel point position of the first pixel point in the first target depth image and the position relation;
determining the second pixel point at the position of the target pixel point in the second target depth image;
and acquiring a second depth value of the second pixel point.
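Claim 8 as a sketch: the displacement between the matched third and fourth pixel points is applied to the first pixel point's position to locate the second pixel point; how the match is obtained (e.g. feature matching or optical flow) is left open by the claim.

```python
def lookup_second_depth(first_pos, third_pos, fourth_pos, second_depth_map):
    """Shift the first pixel's (row, col) position by the offset from
    the third to the fourth pixel point and read the second target
    depth image there; bounds checking is omitted for brevity."""
    dy = fourth_pos[0] - third_pos[0]
    dx = fourth_pos[1] - third_pos[1]
    return float(second_depth_map[first_pos[0] + dy, first_pos[1] + dx])
```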
9. An image processing apparatus, characterized in that the apparatus comprises:
the image acquisition module is configured to acquire an initial depth image corresponding to the original image;
the similarity obtaining module is configured to obtain pixel value similarity between adjacent pixel points in the original image;
and the depth value adjusting module is configured to adjust the depth value in the initial depth image according to the pixel value similarity, so as to obtain a target depth image corresponding to the original image.
10. The apparatus of claim 9, wherein the initial depth image comprises a set of pixels having a depth value of zero;
the depth value adjusting module is configured to adjust the depth value of the pixel point in the pixel point set according to the pixel value similarity.
11. The apparatus of claim 9, wherein the depth value adjustment module comprises:
the weight determining submodule is configured to determine, for a pixel point in the initial depth image, a weight used by the pixel point in the initial depth image according to a pixel value similarity corresponding to a target pixel point in the original image, wherein the target pixel point and the pixel point are located at the same pixel point position;
a function construction sub-module configured to construct an objective function, the objective function including a first data item, a second data item and a third data item, wherein the first data item is used for representing a first difference, the first difference is a difference between an optimized depth value and an actual depth value of the pixel point in the initial depth image, the second data item is used for representing a second difference, the second difference is a difference between the optimized depth value of the pixel point in the initial depth image and an optimized depth value of an adjacent pixel point, and the third data item is used for representing a weight used by the pixel point in the initial depth image;
the optimized depth value adjusting submodule is configured to adjust the optimized depth value of each pixel point in the objective function;
and the image obtaining submodule is configured to, in response to the adjusted objective function meeting a preset condition, obtain the target depth image according to the current optimized depth value of each pixel point.
12. The apparatus of claim 11, wherein the weight used by the pixel point is inversely related to the similarity of the pixel values corresponding to the target pixel point.
13. The apparatus of claim 11 or 12, wherein the image acquisition sub-module comprises:
a deviation determining unit configured to determine, for each pixel point in the initial depth image, the deviation between the current optimized depth value and the actual depth value of the pixel point;
a depth value adjusting unit configured to adjust the current optimized depth value of the pixel point to the actual depth value in response to the deviation not falling within a preset deviation range;
an image obtaining unit configured to obtain the target depth image according to the actual depth values of the pixel points.
14. The apparatus of claim 9, wherein the depth value adjustment module comprises:
a depth value adjusting submodule configured to adjust a depth value in the initial depth image according to the pixel value similarity, so as to obtain an intermediate depth image;
an image denoising submodule configured to denoise the intermediate depth image to obtain the target depth image.
15. The apparatus of claim 9, further comprising:
a depth value obtaining module configured to, after a group of target depth images corresponding to a group of continuously captured original images is obtained, the group of original images including the original image, acquire, for a first target depth image corresponding to a first original image in the group of original images, a first depth value of a first pixel point used for representing a photographed object in the first target depth image and a second depth value of a second pixel point used for representing the photographed object in a second target depth image, wherein the second target depth image is the target depth image corresponding to a second original image in the group of original images, and the second original image is captured adjacent to the first original image;
a depth value determination module configured to determine a target depth value of the first pixel point in the first target depth image according to the first depth value and the second depth value, or according to the second depth value.
16. The apparatus of claim 15, wherein the depth value obtaining module comprises:
the pixel point obtaining submodule is configured to obtain a third pixel point in the first original image and a fourth pixel point in the second original image, wherein the third pixel point and the fourth pixel point are both used for representing the photographed object;
a position relation determination submodule configured to determine a position relation between the third pixel point and the fourth pixel point;
a pixel point position determining submodule configured to determine a target pixel point position in the second target depth image according to the pixel point position of the first pixel point in the first target depth image and the position relationship;
a pixel point determination submodule configured to determine the second pixel point located at the target pixel point position in the second target depth image;
and the second depth value acquisition submodule is configured to acquire the second depth value of the second pixel point.
17. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the method of any one of claims 1-8.
18. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of any one of claims 1-8.
CN202110302683.6A 2021-03-22 2021-03-22 Image processing method and device Pending CN115115683A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110302683.6A CN115115683A (en) 2021-03-22 2021-03-22 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110302683.6A CN115115683A (en) 2021-03-22 2021-03-22 Image processing method and device

Publications (1)

Publication Number Publication Date
CN115115683A (en) 2022-09-27

Family

ID=83324271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110302683.6A Pending CN115115683A (en) 2021-03-22 2021-03-22 Image processing method and device

Country Status (1)

Country Link
CN (1) CN115115683A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070206678A1 (en) * 2006-03-03 2007-09-06 Satoshi Kondo Image processing method and image processing device
JP2007322403A (en) * 2006-06-05 2007-12-13 Topcon Corp Image processing apparatus and its processing method
CN108885778A (en) * 2016-04-06 2018-11-23 索尼公司 Image processing equipment and image processing method
CN110097590A (en) * 2019-04-24 2019-08-06 成都理工大学 Color depth image repair method based on depth adaptive filtering
CN111091592A (en) * 2018-10-24 2020-05-01 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, electronic device, and readable storage medium
CN111968238A (en) * 2020-08-22 2020-11-20 晋江市博感电子科技有限公司 Human body color three-dimensional reconstruction method based on dynamic fusion algorithm

Similar Documents

Publication Publication Date Title
CN107798669B (en) Image defogging method and device and computer readable storage medium
CN106331504B (en) Shooting method and device
CN110557547B (en) Lens position adjusting method and device
CN108154465B (en) Image processing method and device
CN107330868B (en) Picture processing method and device
KR20160021737A (en) Method, apparatus and device for image segmentation
CN107944367B (en) Face key point detection method and device
CN108154466B (en) Image processing method and device
CN106651777B (en) Image processing method and device and electronic equipment
CN110769147B (en) Shooting method and electronic equipment
CN109784327B (en) Boundary box determining method and device, electronic equipment and storage medium
CN114500821B (en) Photographing method and device, terminal and storage medium
CN105678296B (en) Method and device for determining character inclination angle
EP3813010B1 (en) Facial image enhancement method, device and electronic device
CN111968052A (en) Image processing method, image processing apparatus, and storage medium
CN113313788A (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN111741187A (en) Image processing method, device and storage medium
CN108596957B (en) Object tracking method and device
CN107730443B (en) Image processing method and device and user equipment
CN106469446B (en) Depth image segmentation method and segmentation device
CN111325674A (en) Image processing method, device and equipment
CN115115683A (en) Image processing method and device
CN117616774A (en) Image processing method, device and storage medium
CN115118950B (en) Image processing method and device
CN109447929B (en) Image synthesis method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination