CN115908527A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN115908527A
Authority
CN
China
Prior art keywords
depth map
local
expression
depth
constant
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110885387.3A
Other languages
Chinese (zh)
Inventor
张超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202110885387.3A
Publication of CN115908527A
Legal status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

The present disclosure provides an image processing method and apparatus. The method is applied to an electronic device that includes a depth camera, and comprises: acquiring a depth map with the depth camera; for each image area in the depth map, adjusting the depth information in the local depth map within the image area according to compensation information set for that image area, to obtain a target local depth map; and obtaining a target depth map from the target local depth maps. Using this method, the structure in the depth map is optimized so that it is closer to the real structure.

Description

Image processing method and device
Technical Field
The present disclosure relates to the field of computer communication technologies, and in particular, to an image processing method and apparatus.
Background
Compared with a traditional camera, a depth camera adds a depth measurement function, which makes it convenient to obtain an accurate picture of the surrounding environment and its changes. The ranging principle of the depth camera is as follows: light pulses are continuously transmitted toward the target, a sensor receives the light pulses reflected by the object, and the distance to the target is determined from the measured time of flight of the pulses.
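As a concrete illustration (not part of the patent text), the time-of-flight relation is distance = speed of light × round-trip time / 2; the function name below is ours:

```python
# Illustrative time-of-flight distance calculation; not taken from the patent.
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the target given the measured round-trip time of a light pulse."""
    return C * round_trip_time_s / 2.0

# A pulse returning after 10 ns corresponds to roughly 1.5 m.
print(tof_distance(10e-9))  # ~1.499 m
```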
The depth camera outputs a depth map after completing the depth measurement, and pixel coordinates in the depth map comprise: position information of the pixel in a horizontal direction, position information of the pixel in a vertical direction, and a depth value indicating a distance of an object displayed by the pixel from the camera.
However, in some cases, the structure displayed by the depth map is different from the actual structure, and the display effect of the depth map is not good.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an image processing method and apparatus.
According to a first aspect of the embodiments of the present disclosure, there is provided an image processing method applied to an electronic device including a depth camera, the method including:
acquiring a depth map using the depth camera;
for each image area in the depth map, compensating the depth information in the local depth map in the image area according to the compensation information set for the image area to obtain a target local depth map;
and obtaining a target depth map according to each target local depth map.
Optionally, the compensating the depth information in the local depth map displayed in the image region according to the compensation information set for the image region includes:
determining a first expression of depth information in the local depth map, wherein an argument in the first expression comprises pixel position information;
compensating the constant in the first expression according to a compensation constant set for the image area to obtain a second expression;
and determining target depth information in the local depth map according to the pixel position information in the local depth map and the second expression.
Optionally, the method comprises:
acquiring a real-time depth map of a test surface by using the depth camera, wherein the test surface and an imaging plane of the depth camera meet a preset parallel condition;
determining a third expression of depth information in the local real-time depth map for the local real-time depth map within each image region in the real-time depth map;
determining a local standard depth map in the standard depth map of the test surface, wherein the local standard depth map and the local real-time depth map are located in the same image area;
determining a fourth expression of depth information in the local standard depth map;
and compensating the constant in the third expression according to the constant in the fourth expression to obtain a compensation constant set for an image area where the local real-time depth map is located.
Optionally, the compensating the constant in the third expression according to the constant in the fourth expression includes:
determining a first difference between a constant in the third expression and a constant in the fourth expression;
compensating for constants in the third expression according to a set of differences including the first difference, the set of differences including differences determined for the same image region when image acquisitions of the test surface at different distances are made by the depth camera.
Optionally, the compensating the constant in the third expression according to the difference set including the first difference includes:
determining a current shooting scene;
determining a difference calculation mode suitable for the shooting scene;
calculating the difference in the difference set according to the difference calculation mode;
and compensating the constant in the third expression according to the calculation result.
Optionally, the difference calculation means includes a weight coefficient set for a difference in the difference set; calculating the difference in the difference set according to the difference calculation mode, including:
and performing weight calculation on the differences in the difference set by using the weight coefficient.
Optionally, the method comprises:
determining target depth information of a pixel according to pixel position information of the pixel in each local real-time depth map and a compensated third expression aiming at least one local real-time depth map in the real-time depth maps, wherein the compensated third expression comprises the compensation constant;
determining the flatness of the test surface according to the target depth information determined for the pixels in the at least one local real-time depth map;
determining whether the flatness meets a preset flatness condition;
and if not, adjusting the compensation constant in the compensated third expression.
Optionally, the determining the flatness of the test surface according to the target depth information determined for the pixels in each local real-time depth map includes:
for each local real-time depth map of the at least one local real-time depth map, determining a flatness of a local test surface according to target depth information determined for pixels in the local real-time depth map;
and carrying out statistics on the flatness of at least one local test surface to obtain the flatness of the test surface.
Optionally, the target depth information comprises target depth values; determining the flatness of the local test surface according to the target depth information determined for the pixels in the local real-time depth map, including:
calculating an average of the target depth values determined for all pixels in the local real-time depth map;
for each pixel in the local real-time depth map, determining a difference between a target depth value of the pixel and the average value;
and performing statistics on a plurality of differences determined for the local real-time depth map to obtain the flatness of the local test surface.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus applied to an electronic device including a depth camera, the apparatus including:
a depth map acquisition module configured to acquire a depth map using the depth camera;
the depth information compensation module is configured to compensate depth information in a local depth map in the image area according to compensation information set for the image area for each image area in the depth map to obtain a target local depth map;
and the target depth map obtaining module is configured to obtain a target depth map according to each target local depth map.
Optionally, the depth information compensation module includes:
a first expression determination submodule configured to determine a first expression of depth information in the partial depth map, an argument in the first expression including pixel position information;
a constant compensation submodule configured to compensate a constant in the first expression according to a compensation constant set for the image area, resulting in a second expression, wherein the compensation information includes the compensation constant;
a target depth information determination submodule configured to determine target depth information in the partial depth map according to the pixel position information in the partial depth map and the second expression.
Optionally, the apparatus comprises:
a real-time depth map acquisition module configured to acquire a real-time depth map of a test surface using the depth camera, the test surface and an imaging plane of the depth camera satisfying a preset parallel condition;
a third expression determination module configured to determine, for a local real-time depth map within each image region in the real-time depth map, a third expression of depth information in the local real-time depth map;
the local standard depth map determining module is configured to determine a local standard depth map in the standard depth map of the test surface, wherein the local standard depth map is located in the same image area as the local real-time depth map;
a fourth expression determination module configured to determine a fourth expression of depth information in the local standard depth map;
and the constant compensation module is configured to compensate the constant in the third expression according to the constant in the fourth expression to obtain a compensation constant set for an image area where the local real-time depth map is located.
Optionally, the constant compensation module includes:
a difference determination submodule configured to determine a first difference between a constant in the third expression and a constant in the fourth expression;
a difference set use sub-module configured to compensate for the constant in the third expression according to a difference set comprising the first difference, the difference set comprising differences determined for the same image area when image acquisitions of the test face at different distances are made by the depth camera.
Optionally, the difference set uses a sub-module comprising:
a scene determination unit configured to determine a current shooting scene;
a manner determination unit configured to determine a difference calculation manner suitable for the shooting scene;
a difference calculation unit configured to calculate differences in the difference set in the difference calculation manner;
a constant adjusting unit configured to compensate the constant in the third expression according to a calculation result.
Optionally, the difference calculation means includes a weight coefficient set for a difference in the difference set;
the difference calculation unit is configured to perform weight calculation on the differences in the difference set by using the weight coefficient.
Optionally, the apparatus comprises:
an information determination module configured to determine, for at least one local real-time depth map in the real-time depth map, target depth information of a pixel according to pixel position information of the pixel in each local real-time depth map and a compensated third expression, the compensated third expression comprising the compensation constant;
a planarity determination module configured to determine a planarity of the test face from target depth information determined for pixels in the at least one local real-time depth map;
a flatness determination module configured to determine whether the flatness satisfies a preset flatness condition;
a constant adjustment module configured to adjust the compensation constant in the compensated third expression if the flatness does not satisfy the preset flatness condition.
Optionally, the flatness determination module includes:
a first planarity determination submodule configured to determine, for each of the at least one local real-time depth map, a planarity of a local test surface from target depth information determined for pixels in the local real-time depth map;
and the second flatness determination submodule is configured to count the flatness of at least one local test surface to obtain the flatness of the test surface.
Optionally, the target depth information comprises target depth values; the first flatness determination submodule includes:
an average value calculation unit configured to calculate an average value of the target depth values determined for all pixels in the local real-time depth map;
a difference determination unit configured to determine, for each pixel in the local real-time depth map, a difference between a target depth value of the pixel and the average value;
a difference counting unit configured to count a plurality of differences determined for the local real-time depth map to obtain the flatness of the local test surface.
According to a third aspect of embodiments of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any one of the above first aspects.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of any one of the first aspect above.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
in the embodiment of the disclosure, a depth camera is used for collecting a depth map, for each image area in the depth map, compensation information set for the image area is used for adjusting depth information in a local depth map in the image area to obtain a target local depth map, and the target depth map is obtained according to each target local depth map. The structure in the depth map is optimized by using the method, so that the structure in the depth map is closer to a real structure.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
FIG. 1 is a flow diagram illustrating a method of image processing according to an exemplary embodiment;
FIG. 2 is a schematic diagram illustrating a window movement according to an exemplary embodiment;
FIG. 3 is a flow diagram illustrating another method of image processing according to an exemplary embodiment;
FIG. 4 is a flow diagram illustrating another method of image processing according to an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating another window movement in accordance with an exemplary embodiment;
FIG. 6 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment;
FIG. 7 is a block diagram of another image processing apparatus according to an exemplary embodiment;
FIG. 8 is a block diagram of another image processing apparatus according to an exemplary embodiment;
FIG. 9 is a block diagram of another image processing apparatus according to an exemplary embodiment;
fig. 10 is a schematic structural diagram of an electronic device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if," as used herein, may be interpreted as "when" or "upon" or "in response to determining," depending on the context.
Fig. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment, in which the method illustrated in fig. 1 is applied to an electronic device including a depth camera, the method including:
in step 101, a depth map is acquired using a depth camera.
And acquiring an image of the target object by using a depth camera to obtain a depth map of the target object. The depth map is similar to a grayscale map, with the pixel values (which may be understood as depth values) of the pixels in the depth map indicating the actual distance of the sensor in the depth camera from the object.
The object is a subject to be photographed. The object may be a person, a building, etc.
The electronic device may be a cell phone, tablet, notebook, wearable device, or the like.
In step 102, for each image region in the depth map, the compensation information set for the image region is used to compensate the depth information in the local depth map in the image region, so as to obtain a target local depth map.
Corresponding compensation information is set for each image region in the depth map, and the compensation information set for different image regions may be the same or different.
Through the arrangement, the depth information in different local depth images can be compensated to different degrees, the compensation effect of the depth information is improved, and the finally obtained structure displayed by the target depth image is closer to a real structure.
In some embodiments, adjusting the depth information in the partial depth map displayed in the image region using the compensation information set for the image region may include: the first step is as follows: determining a first expression of depth information in the local depth map, wherein an argument in the first expression comprises pixel position information; the second step is as follows: compensating the constant in the first expression according to a compensation constant set for the image area to obtain a second expression, wherein the compensation information comprises the compensation constant; the third step: and determining target depth information in the local depth map according to the pixel position information in the local depth map and the second expression.
The method realizes indirect compensation of the depth information in the local depth map in the image area according to the compensation constant set for the image area.
For the first step, the partial depth map displays a photographic subject, which may include a target and/or a background. An equation for representing the surface of the shot object in the local depth map exists, the equation comprises pixel position information, depth information and constants in the local depth map, and the equation can be deformed to obtain an expression of the depth information in the local depth map.
The independent variable in the expression includes pixel position information, and specifically may include pixel position information in the x direction and pixel position information in the y direction; dependent variables in the expression include depth information.
For example, for the equation of the plane on which the shot object lies in the local depth map, the constants in the equation are fitted, and the fitted equation is transformed to obtain an expression of the depth information in the local depth map: z = ax² + by² + cxy + d, where z is the depth information, x is the pixel coordinate in the x direction, y is the pixel coordinate in the y direction, and a, b, c and d are constants.
For the second step, a compensation constant is set for each image area. The compensation constants set for different image areas may be the same or different.
Compensating the constant in the first expression according to the compensation constant set for the image area to obtain a second expression may include: and replacing the constant in the first expression by using the compensation constant to obtain a second expression.
For example, the first expression of the depth information in the local depth map is z = a₁x² + b₁y² + c₁xy + d₁, and the compensation constants set for the image area include: a compensation constant a′ for a₁, a compensation constant b′ for b₁, a compensation constant c′ for c₁, and a compensation constant d′ for d₁. Replacing a₁ in the expression with a′, b₁ with b′, c₁ with c′, and d₁ with d′ yields the second expression z = a′x² + b′y² + c′xy + d′.
For the third step, for each pixel in the depth map, the pixel coordinates include pixel position information and a depth value, the pixel position information including pixel coordinates in the x-direction and pixel coordinates in the y-direction.
And substituting the pixel position information in the pixel coordinates of the pixel into the second expression for each pixel in the depth map, and obtaining the target depth information of the pixel through calculation.
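A minimal sketch of these three steps, assuming the surface model z = ax² + by² + cxy + d, window-local pixel coordinates, and per-region compensation constants; the function names, the least-squares fitting routine and the data layout are illustrative assumptions, not mandated by the disclosure:

```python
import numpy as np

def fit_first_expression(local_depth: np.ndarray) -> np.ndarray:
    """Step one: least-squares fit of z = a*x**2 + b*y**2 + c*x*y + d
    over one local depth map, returning the constants (a, b, c, d)."""
    h, w = local_depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x, y, z = xs.ravel(), ys.ravel(), local_depth.ravel().astype(float)
    design = np.column_stack([x**2, y**2, x * y, np.ones_like(x)])
    constants, *_ = np.linalg.lstsq(design, z, rcond=None)
    return constants

def compensate_region(local_depth: np.ndarray, comp_constants) -> np.ndarray:
    """Steps two and three: replace the constants with the compensation
    constants (a', b', c', d') and evaluate the second expression
    z = a'*x**2 + b'*y**2 + c'*x*y + d' at every pixel position."""
    a2, b2, c2, d2 = comp_constants
    h, w = local_depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return a2 * xs**2 + b2 * ys**2 + c2 * xs * ys + d2
```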
In some embodiments, the window is moved on the depth map according to a preset movement rule, for example, the window is moved in a row-by-row and column-by-column manner.
When the window is moved to a position, the window limits an image area, and the depth information in the local depth image in the image area is compensated by using the compensation information set for the image area limited by the window, so that a target local depth map is obtained.
The number of windows may be one or more. By arranging a plurality of windows, the time of the windows passing through the whole depth map can be shortened, and the image processing efficiency is improved.
The shape of the window may be square or rectangular, etc.
Windows of different sizes may be set for different shooting scenes. For example, when a person is photographed, the size of the window is 30 × 30 pixels.
For example, fig. 2 is a schematic diagram illustrating a window movement according to an exemplary embodiment, referring to fig. 2, a square window is provided on the depth map, the direction of the arrow is the movement direction of the window, and the window moves row by row and column by column.
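A sketch of this traversal under the same assumptions, reusing compensate_region from the previous sketch; the 30-pixel window and the lookup table keyed by each window's top-left corner are illustrative choices:

```python
def process_depth_map(depth_map: np.ndarray, compensation_lut: dict, win: int = 30) -> np.ndarray:
    """Move a win x win window over the depth map row by row and column by
    column, compensate each local depth map with the constants set for that
    image area, and assemble the target depth map."""
    target = np.empty_like(depth_map, dtype=float)
    h, w = depth_map.shape
    for top in range(0, h, win):
        for left in range(0, w, win):
            # The window at the image border may be truncated; slicing handles it.
            local = depth_map[top:top + win, left:left + win]
            constants = compensation_lut[(top, left)]  # compensation info for this area
            target[top:top + win, left:left + win] = compensate_region(local, constants)
    return target
```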
In step 103, a target depth map is obtained from each target local depth map.
In the embodiment of the disclosure, a depth camera is used for collecting a depth map, for each image area in the depth map, compensation information set for the image area is used for adjusting depth information in a local depth map in the image area to obtain a target local depth map, and the target depth map is obtained according to each target local depth map. The structure in the depth map is optimized by using the method, so that the structure in the depth map is closer to a real structure.
FIG. 3 is a flow diagram illustrating another method of image processing according to an example embodiment, the method illustrated in FIG. 3 being applied to an electronic device, the electronic device including a depth camera, the method illustrated in FIG. 3 including:
in step 201, a real-time depth map of a test surface is acquired by using a depth camera, and imaging planes of the test surface and the depth camera satisfy a preset parallel condition.
The test face may be a flat face.
The test surface and the imaging plane of the depth camera meet the preset parallel condition, which can be understood as follows: the included angle between the test surface and the imaging plane of the depth camera is smaller than a preset angle, and the preset angle can be 0 degree, 3 degrees or other smaller angles.
In step 202, a third expression of depth information in the partial real-time depth map is determined for the partial real-time depth map within each image region in the real-time depth map.
The third expression may be determined with reference to the method of determining the first expression or other methods.
In step 203, a local standard depth map in the standard depth map of the test surface, which is located in the same image area as the local real-time depth map, is determined.
The standard depth map may show a test surface with the same structure as the actual structure of the test surface, or the difference may be very small, e.g. the difference is smaller than a difference threshold.
In step 204, a fourth expression of depth information in the local standard depth map is determined.
The fourth expression may be determined with reference to the method of determining the first expression or other methods.
In step 205, the constant in the third expression is compensated according to the constant in the fourth expression, so as to obtain a compensation constant set for the image area where the local real-time depth map is located.
In some embodiments, step 1: determining a first difference between the constants in the third expression and the constants in the fourth expression; step 2: compensating the constants in the third expression according to a difference set, where the difference set includes the first difference as well as the differences determined for the same image region when the depth camera captures images of the test surface at different distances.
For step 1, for example, the third expression is z = a₃x² + b₃y² + c₃xy + d₃; when the third expression is determined, table 3 is generated, in which a₃, b₃, c₃ and d₃ are recorded. The fourth expression is z = a₄x² + b₄y² + c₄xy + d₄; when the fourth expression is determined, table 4 is generated, in which a₄, b₄, c₄ and d₄ are recorded.
The difference calculation is performed on the constants recorded for the same data item in table 3 and table 4, specifically: Δa = a₃ − a₄, Δb = b₃ − b₄, Δc = c₃ − c₄ and Δd = d₃ − d₄, yielding a first difference that includes Δa, Δb, Δc and Δd.
With respect to step 2, in some embodiments, a current shooting scene may be determined, a difference calculation manner suitable for the current shooting scene may be determined, differences in the difference set may be calculated according to the determined difference calculation manner, and a constant in the third expression may be adjusted according to a calculation result.
Different difference calculation modes can be set for different shooting scenes.
The difference calculation method may include a weight coefficient set for the differences in the difference set, and the differences in the difference set may be weighted using the weight coefficient.
Examples are as follows: the difference set includes a first difference determined for image area A when the depth camera captures the test surface at distance L1, a second difference determined for image area A when it captures the test surface at distance L2, and a third difference determined for image area A when it captures the test surface at distance L3.
Weight calculation is performed on Δa₁ in the first difference, Δa₂ in the second difference and Δa₃ in the third difference to obtain a compensation constant A; on Δb₁, Δb₂ and Δb₃ to obtain a compensation constant B; on Δc₁, Δc₂ and Δc₃ to obtain a compensation constant C; and on Δd₁, Δd₂ and Δd₃ to obtain a compensation constant D.
According to the calculation result, the constants in the third expression are compensated to obtain a fifth expression: z = Ax² + By² + Cxy + D.
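A hedged sketch of this calibration step; following the literal example above, the weighted differences themselves serve as the compensation constants (A, B, C, D), and the scene-dependent weights are assumed to be supplied by the caller:

```python
import numpy as np

def compensation_constants(real_time_constants, standard_constants, weights):
    """Derive (A, B, C, D) for one image area.

    real_time_constants: per-distance constants (a3, b3, c3, d3) of the third expression
    standard_constants:  per-distance constants (a4, b4, c4, d4) of the fourth expression
    weights:             one scene-dependent weight per capture distance
    """
    # First/second/third differences: (Δa, Δb, Δc, Δd) at each capture distance.
    diffs = [np.subtract(rt, std) for rt, std in zip(real_time_constants, standard_constants)]
    # Weight calculation over the difference set.
    return sum(w * d for w, d in zip(weights, diffs))
```

For example, with captures at three distances and weights (0.5, 0.3, 0.2), the call returns the four compensation constants used to form the fifth expression z = Ax² + By² + Cxy + D.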
In the embodiment, the flatness difference under different shooting distances is balanced, so that the structure displayed by the processed depth map is closer to a real structure.
In some embodiments, the constants in the fourth expression may be directly determined as compensation constants, simplifying the operation of determining the compensation constants.
FIG. 4 is a flow diagram illustrating another method of image processing according to an example embodiment, the method illustrated in FIG. 4 being applied to an electronic device, the electronic device including a depth camera, the method illustrated in FIG. 4 including:
in step 301, for at least one partial real-time depth map of the real-time depth maps, target depth information of a pixel is determined according to pixel position information of the pixel in each partial real-time depth map and the compensated third expression.
The compensated third expression may be referred to as a fifth expression, which includes a compensation constant.
And for each pixel in the local real-time depth map, the pixel coordinate of the pixel comprises pixel position information and depth information, and the pixel position information in the pixel coordinate of the pixel is substituted into the fifth expression to obtain the target depth information of the pixel.
For example, the at least one local real-time depth map may comprise all local real-time depth maps in the real-time depth map. For each local real-time depth map in the real-time depth map, the target depth information of the pixels in the local real-time depth map is determined according to the pixel position information of the pixels in the local real-time depth map and the compensated third expression determined for that local real-time depth map.
In step 302, a flatness of the test surface is determined from the determined target depth information for the pixels in the at least one local real-time depth map.
In some embodiments, this may include: step 1), for each local real-time depth map of the at least one local real-time depth map, determining the flatness of a local test surface according to the target depth information determined for the pixels in that local real-time depth map; and step 2), performing statistics on the flatness of the at least one local test surface to obtain the flatness of the test surface.
For step 1), the target depth information may comprise target depth values.
Determining the flatness of the local test surface according to the target depth information determined for the local real-time depth map may include: first, calculating the average of the target depth values determined for all pixels in the local real-time depth map; second, determining, for each pixel in the local real-time depth map, the difference between the target depth value of the pixel and the average value; third, performing statistics on the plurality of differences determined for the local real-time depth map to obtain the flatness of the local test surface.
With respect to step 2), when the at least one local test surface includes more than two local test surfaces, an average value of the flatness of the at least one local test surface may be calculated, and the calculated average value of the flatness may be determined as the flatness of the test surface.
In step 303, it is determined whether the flatness of the test surface satisfies a predetermined flatness condition.
For example, whether the flatness of the test surface is greater than or equal to a flatness threshold value is determined, if yes, the flatness of the test surface is determined to meet a preset flatness condition, and if not, the flatness of the test surface is determined not to meet the preset flatness condition.
In step 304, if not, the compensation constant in the compensated third expression is adjusted.
The compensated third expression may be referred to as a fifth expression. The compensation constant in the fifth expression may be adjusted according to a preset adjustment manner. For example, the same value is increased or decreased for each compensation constant in the fifth expression, or the compensation constant in the fifth expression is increased or decreased by different magnitudes depending on the shooting scene, or only a part of the compensation constants in the fifth expression is increased or decreased depending on the shooting scene, or the like.
Fig. 5 is a schematic diagram illustrating another window movement according to an exemplary embodiment; fig. 5 is a side view. Referring to fig. 5, reference numeral 1 denotes the test surface, reference numeral 2 denotes the structure of the test surface in the depth map, and reference numeral 3 denotes the window. The window 3 moves from position w₁ to position w₂, from position w₂ to position w₃, ..., and finally to position wₙ.
Each time the window 3 moves to a position, the local real-time depth map located under the window 3 is determined, and the average of all target depth values determined for that local real-time depth map is calculated as

z̄ = (1/N) · Σₖ zₖ

where zₖ is the target depth value determined for pixel k in the local real-time depth map, N is the number of pixels in the local real-time depth map, and z̄ is the average of all target depth values determined for the local real-time depth map.
The flatness of the local test surface is then determined according to the target depth information determined for the local real-time depth map, as a statistic of the differences zₖ − z̄ over the local real-time depth map (the original formula image is not reproduced here), where P is the flatness of the local test surface i.
The flatness values determined for the local test surfaces are then aggregated to obtain the flatness of the test surface:

P̄ = (1/n) · Σᵢ Pᵢ

where n is the number of local test surfaces indicated by the depth map and Pᵢ is the flatness of local test surface i; n can also be understood as the number of movements of the window 3.
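A hedged sketch of this flatness evaluation; since the per-window formula image is not reproduced in the text, the local statistic below is an assumption (root-mean-square deviation of the target depth values from their mean), while the overall flatness follows the description's average over the n windows:

```python
import numpy as np

def local_flatness(target_local_depth: np.ndarray) -> float:
    """Flatness P_i of one local test surface: a statistic of the differences
    z_k - z_bar; RMS deviation is an assumed choice, not the patent's formula."""
    z = target_local_depth.ravel().astype(float)
    z_bar = z.mean()  # z_bar = (1/N) * sum(z_k)
    return float(np.sqrt(np.mean((z - z_bar) ** 2)))

def test_surface_flatness(local_maps) -> float:
    """Overall flatness: the average of the n local flatness values P_i."""
    return float(np.mean([local_flatness(m) for m in local_maps]))
```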
The embodiment provides a novel method for determining the flatness of the test surface, and the compensation constant in the fifth expression is automatically adjusted under the condition that the flatness of the test surface is determined not to meet the preset flatness condition, so that the image processing effect is optimized.
While, for purposes of simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present disclosure is not limited by the order of acts, as some steps may, in accordance with the present disclosure, occur in other orders and concurrently.
Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that acts and modules referred to are not necessarily required by the disclosure.
Corresponding to the embodiment of the application function implementation method, the disclosure also provides an embodiment of an application function implementation device and corresponding electronic equipment.
Fig. 6 is a block diagram of an image processing apparatus according to an exemplary embodiment, which is applied to an electronic device including a depth camera, and the apparatus may include:
a depth map acquisition module 41 configured to acquire a depth map using the depth camera;
a depth information compensation module 42 configured to compensate, for each image region in the depth map, depth information in a local depth map within the image region according to compensation information set for the image region, so as to obtain a target local depth map;
and a target depth map obtaining module 43 configured to obtain a target depth map according to each target local depth map.
In some embodiments, fig. 7 is a block diagram of another image processing apparatus according to an exemplary embodiment, and the depth information compensation module 42 may include:
a first expression determination submodule 421 configured to determine a first expression of the depth information in the partial depth map, wherein an argument in the first expression includes pixel position information;
a constant compensation sub-module 422, configured to compensate the constant in the first expression according to a compensation constant set for the image area, to obtain a second expression, where the compensation information includes the compensation constant;
a target depth information determining submodule 423 configured to determine target depth information in the partial depth map according to the pixel position information in the partial depth map and the second expression.
In some embodiments, FIG. 8 is a block diagram illustrating another image processing apparatus according to an example embodiment, the apparatus comprising:
a real-time depth map acquisition module 44 configured to acquire a real-time depth map of a test surface using the depth camera, the test surface and an imaging plane of the depth camera satisfying a preset parallel condition;
a third expression determination module 45 configured to determine, for a local real-time depth map within each image region in the real-time depth map, a third expression of depth information in the local real-time depth map;
a local standard depth map determination module 46 configured to determine a local standard depth map in the standard depth map of the test surface, which is located in the same image area as the local real-time depth map;
a fourth expression determination module 47 configured to determine a fourth expression of depth information in the local standard depth map;
and the constant compensation module 48 is configured to compensate the constant in the third expression according to the constant in the fourth expression, so as to obtain a compensation constant set for the image area where the local real-time depth map is located.
In some embodiments, referring to fig. 8, the constant compensation module 48 may include:
a difference determination submodule 481 configured to determine a first difference between a constant in the third expression and a constant in the fourth expression;
a difference set usage submodule 482 configured to compensate for constants in the third expression from a difference set comprising the first difference, the difference set comprising differences determined for the same image region when image acquisitions of the test surface at different distances by the depth camera.
In some embodiments, referring to fig. 8, the difference set usage sub-module 482 may include:
a scene determination unit 4821 configured to determine a current shooting scene;
a manner determining unit 4822 configured to determine a difference calculation manner suitable for the shooting scene;
a difference calculating unit 4823 configured to calculate differences in the difference set in the difference calculating manner;
a constant adjustment unit 4824 configured to compensate the constant in the third expression according to the calculation result.
In some embodiments, referring to fig. 8, the variance calculation includes a weighting factor set for the variances in the variance set;
the difference calculating unit 4823 configured to perform weight calculation on the differences in the difference set using the weight coefficient.
In some embodiments, fig. 9 is a block diagram illustrating another image processing apparatus according to an example embodiment, the apparatus may include:
an information determination module 49 configured to determine, for at least one local real-time depth map in the real-time depth map, target depth information of a pixel according to pixel position information of the pixel in each local real-time depth map and a compensated third expression, the compensated third expression comprising the compensation constant;
a planarity determination module 410 configured to determine a planarity of the test face from the determined target depth information for pixels in the at least one local real-time depth map;
a flatness determination module 411 configured to determine whether the flatness satisfies a preset flatness condition;
a constant adjustment module 412 configured to adjust the compensation constant in the compensated third expression if the flatness does not satisfy the preset flatness condition.
In some embodiments, referring to fig. 9, the flatness determination module 410 may include:
a first planarity determination submodule 4101 configured to determine, for each of the at least one local real-time depth map, a planarity of a local test surface from target depth information determined for pixels in the local real-time depth map;
a second flatness determination submodule 4102 configured to count the flatness of at least one local test surface, to obtain the flatness of the test surface.
In some embodiments, referring to fig. 9, the target depth information includes a target depth value; the first flatness determination submodule 4101 may include:
an average value calculation unit 41011 configured to calculate an average value of the target depth values determined for all pixels in the local real-time depth map;
a disparity determining unit 41012 configured to determine, for each pixel in the local real-time depth map, a disparity between a target depth value of the pixel and the average value;
a difference statistics unit 41013 configured to count a plurality of differences determined for the local real-time depth map, resulting in a flatness of the local test surface.
Fig. 10 is a schematic diagram illustrating a structure of an electronic device 1600 according to an example embodiment. Referring to fig. 10, electronic device 1600 may include one or more of the following components: processing component 1602, memory 1604, power component 1606, multimedia component 1608, audio component 1610, input/output (I/O) interface 1612, sensor component 1614, and communications component 1616.
The processing component 1602 generally controls overall operation of the electronic device 1600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1602 may include one or more processors 1620 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 1602 can include one or more modules that facilitate interaction between the processing component 1602 and other components. For example, the processing component 1602 can include a multimedia module to facilitate interaction between the multimedia component 1608 and the processing component 1602.
The memory 1604 is configured to store various types of data to support operation at the electronic device 1600. Examples of such data include instructions for any application or method operating on the electronic device 1600, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1604 may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 1606 provides power to the various components of the electronic device 1600. The power components 1606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 1600.
The multimedia component 1608 includes a screen that provides an output interface between the electronic device 1600 and a user as described above. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of the touch or slide action but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1608 comprises a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 1600 is in an operating mode, such as an adjustment mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 1610 is configured to output and/or input an audio signal. For example, audio component 1610 includes a Microphone (MIC) configured to receive external audio signals when electronic device 1600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 1604 or transmitted via the communications component 1616. In some embodiments, audio component 1610 further comprises a speaker for outputting audio signals.
The I/O interface 1612 provides an interface between the processing component 1602 and a peripheral interface module, which can be a keyboard, click wheel, button, or the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
Sensor assembly 1614 includes one or more sensors for providing various aspects of status assessment for electronic device 1600. For example, sensor assembly 1614 may detect an open/closed state of electronic device 1600, the relative positioning of components, such as a display and keypad of electronic device 1600, a change in position of electronic device 1600 or a component of electronic device 1600, the presence or absence of user contact with electronic device 1600, orientation or acceleration/deceleration of electronic device 1600, and a change in temperature of electronic device 1600. The sensor assembly 1614 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 1614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communications component 1616 is configured to facilitate communications between the electronic device 1600 and other devices in a wired or wireless manner. The electronic device 1600 may access a wireless network based on a communication standard, such as WiFi,2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the aforementioned communication component 1616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 1600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium, such as the memory 1604 comprising instructions that, when executed by the processor 1620 of the electronic device 1600, enable the electronic device 1600 to perform an image processing method, the method comprising: acquiring a depth map using the depth camera; for each image area in the depth map, compensating the depth information in the local depth map in the image area according to the compensation information set for the image area to obtain a target local depth map; and obtaining a target depth map according to each target local depth map.
The non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (20)

1. An image processing method applied to an electronic device including a depth camera, the method comprising:
acquiring a depth map using the depth camera;
for each image area in the depth map, compensating the depth information in the local depth map in the image area according to the compensation information set for the image area to obtain a target local depth map;
and obtaining a target depth map according to each target local depth map.
2. The method according to claim 1, wherein the compensating the depth information in the partial depth map displayed in the image area according to the compensation information set for the image area comprises:
determining a first expression of depth information in the local depth map, wherein an argument in the first expression comprises pixel position information;
compensating the constant in the first expression according to a compensation constant set for the image area to obtain a second expression;
and determining target depth information in the local depth map according to the pixel position information in the local depth map and the second expression.
3. The method of claim 2, wherein the method comprises:
acquiring a real-time depth map of a test surface by using the depth camera, wherein the test surface and an imaging plane of the depth camera meet a preset parallel condition;
determining a third expression of depth information in the local real-time depth map for the local real-time depth map within each image region in the real-time depth map;
determining a local standard depth map in the standard depth map of the test surface, wherein the local standard depth map and the local real-time depth map are located in the same image area;
determining a fourth expression of depth information in the local standard depth map;
and compensating the constant in the third expression according to the constant in the fourth expression to obtain a compensation constant set for an image area where the local real-time depth map is located.
4. The method of claim 3, wherein the compensating the constant in the third expression according to the constant in the fourth expression comprises:
determining a first difference between a constant in the third expression and a constant in the fourth expression;
compensating for a constant in the third expression according to a set of differences including the first difference, the set of differences including differences determined for the same image region when image acquisitions of the test face at different distances are made by the depth camera.
5. The method of claim 4, wherein compensating the constant in the third expression according to the difference set comprising the first difference comprises:
determining a current shooting scene;
determining a difference calculation mode suitable for the shooting scene;
calculating the difference in the difference set according to the difference calculation mode;
and compensating the constant in the third expression according to the calculation result.
6. The method according to claim 5, wherein the variance calculation means comprises a weight coefficient set for the variance in the variance set; calculating the difference in the difference set according to the difference calculation mode, including:
and performing weight calculation on the differences in the difference set by using the weight coefficient.
7. The method of claim 3, wherein the method comprises:
determining target depth information of a pixel according to pixel position information of the pixel in each local real-time depth map and a compensated third expression aiming at least one local real-time depth map in the real-time depth maps, wherein the compensated third expression comprises the compensation constant;
determining the flatness of the test surface according to the target depth information determined for the pixels in the at least one local real-time depth map;
determining whether the flatness meets a preset flatness condition;
and if not, adjusting the compensation constant in the compensated third expression.
8. The method of claim 7, wherein determining the flatness of the test surface from the determined target depth information for the pixels in the at least one local real-time depth map comprises:
for each local real-time depth map of the at least one local real-time depth map, determining a flatness of a local test surface according to target depth information determined for pixels in the local real-time depth map;
and carrying out statistics on the flatness of at least one local test surface to obtain the flatness of the test surface.
9. The method of claim 8, wherein the target depth information comprises a target depth value; determining the flatness of the local test surface according to the target depth information determined for the pixels in the local real-time depth map, including:
calculating an average of the target depth values determined for all pixels in the local real-time depth map;
for each pixel in the local real-time depth map, determining a difference between a target depth value of the pixel and the average value;
and counting a plurality of differences determined for the local real-time depth map to obtain the flatness of the local test surface.
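Claims 8-9 admit the following straightforward reading; the root-mean-square reduction and the mean aggregation are illustrative choices, since the claims say only that the differences, and then the local flatness values, are counted.

```python
import numpy as np

def local_flatness(target_local):
    """Claim 9: average the target depth values in the local real-time
    depth map, take each pixel's difference from that average, and
    reduce the differences to one number (RMS here)."""
    diffs = target_local - target_local.mean()
    return float(np.sqrt(np.mean(diffs ** 2)))

def surface_flatness(target_locals):
    """Claim 8: aggregate the flatness of each local test surface into
    the flatness of the whole test surface (here, their mean)."""
    return float(np.mean([local_flatness(t) for t in target_locals]))
```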
10. An image processing apparatus applied to an electronic device including a depth camera, the apparatus comprising:
a depth map acquisition module configured to acquire a depth map using the depth camera;
the depth information compensation module is configured to, for each image area in the depth map, compensate depth information in the local depth map within the image area according to compensation information set for the image area, to obtain a target local depth map;
and the target depth map obtaining module is configured to obtain a target depth map according to each target local depth map.
11. The apparatus of claim 10, wherein the depth information compensation module comprises:
a first expression determination submodule configured to determine a first expression of depth information in the partial depth map, an argument in the first expression including pixel position information;
a constant compensation submodule configured to compensate a constant in the first expression according to a compensation constant set for the image area, resulting in a second expression, wherein the compensation information includes the compensation constant;
a target depth information determination submodule configured to determine target depth information in the partial depth map according to the pixel position information in the partial depth map and the second expression.
12. The apparatus of claim 11, wherein the apparatus further comprises:
a real-time depth map acquisition module configured to acquire a real-time depth map of a test surface using the depth camera, the test surface and an imaging plane of the depth camera satisfying a preset parallel condition;
a third expression determination module configured to determine, for a local real-time depth map within each image region in the real-time depth map, a third expression of depth information in the local real-time depth map;
the local standard depth map determining module is configured to determine a local standard depth map in the standard depth map of the test surface, wherein the local standard depth map is located in the same image area as the local real-time depth map;
a fourth expression determination module configured to determine a fourth expression for depth information in the local standard depth map;
and the constant compensation module is configured to compensate the constant in the third expression according to the constant in the fourth expression to obtain a compensation constant set for an image area where the local real-time depth map is located.
13. The apparatus of claim 12, wherein the constant compensation module comprises:
a difference determination submodule configured to determine a first difference between a constant in the third expression and a constant in the fourth expression;
a difference set use sub-module configured to compensate the constant in the third expression according to a difference set comprising the first difference, the difference set comprising differences determined for the same image area when the depth camera captures images of the test surface at different distances.
14. The apparatus of claim 13, wherein the difference set use sub-module comprises:
a scene determination unit configured to determine a current shooting scene;
a manner determination unit configured to determine a difference calculation manner suitable for the shooting scene;
a difference calculation unit configured to calculate differences in the difference set in the difference calculation manner;
a constant adjusting unit configured to compensate the constant in the third expression according to a calculation result.
15. The apparatus according to claim 14, wherein the difference calculation manner comprises a weight coefficient set for the differences in the difference set;
the difference calculation unit is configured to perform weight calculation on the differences in the difference set by using the weight coefficient.
16. The apparatus of claim 12, wherein the apparatus further comprises:
an information determination module configured to determine, for at least one local real-time depth map in the real-time depth map, target depth information of a pixel in each local real-time depth map according to pixel position information of the pixel and a compensated third expression, the compensated third expression comprising the compensation constant;
a flatness determination module configured to determine the flatness of the test surface according to target depth information determined for pixels in the at least one local real-time depth map;
a flatness judgment module configured to determine whether the flatness satisfies a preset flatness condition;
a constant adjusting module configured to adjust the compensation constant in the compensated third expression if the flatness does not satisfy the preset flatness condition.
17. The apparatus of claim 16, wherein the flatness determination module comprises:
a first flatness determination submodule configured to determine, for each local real-time depth map of the at least one local real-time depth map, the flatness of a local test surface according to target depth information determined for pixels in the local real-time depth map;
and the second flatness determination submodule is configured to count the flatness of at least one local test surface to obtain the flatness of the test surface.
18. The apparatus of claim 17, wherein the target depth information comprises a target depth value; the first flatness determination submodule includes:
an average value calculation unit configured to calculate an average value of the target depth values determined for all pixels in the local real-time depth map;
a difference determination unit configured to determine, for each pixel in the local real-time depth map, a difference between the target depth value of the pixel and the average value;
a difference counting unit configured to count a plurality of differences determined for the local real-time depth map to obtain the flatness of the local test surface.
19. A non-transitory computer readable storage medium having stored thereon a computer program, characterized in that the program, when executed by a processor, implements the method of any one of claims 1-9.
20. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of any one of claims 1-9.
CN202110885387.3A 2021-08-03 2021-08-03 Image processing method and device Pending CN115908527A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110885387.3A CN115908527A (en) 2021-08-03 2021-08-03 Image processing method and device

Publications (1)

Publication Number Publication Date
CN115908527A (en) 2023-04-04

Family

ID=86479936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110885387.3A Pending CN115908527A (en) 2021-08-03 2021-08-03 Image processing method and device

Country Status (1)

Country Link
CN (1) CN115908527A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination