CN117294829A - Depth compensation method and device thereof - Google Patents

Depth compensation method and device thereof

Info

Publication number: CN117294829A
Application number: CN202311232127.1A
Authority: CN (China)
Prior art keywords: depth, image, point cloud, measurement interval, sensor
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 谢根华 (Xie Genhua)
Current assignee: Vivo Mobile Communication Co Ltd
Original assignee: Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd; priority to CN202311232127.1A

Classifications

    • H04N 13/271: Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • H04N 13/106: Processing image signals (stereoscopic or multi-view image signals)
    • H04N 13/204: Image signal generators using stereoscopic image cameras
    • H04N 23/632: Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N 25/705: Pixels for depth measurement, e.g. RGBZ

Landscapes

  • Engineering & Computer Science
  • Multimedia
  • Signal Processing
  • Human Computer Interaction
  • Image Processing

Abstract

The application discloses a depth compensation method and a depth compensation device, belonging to the technical field of image processing. The method includes: receiving a shooting operation on a shooting preview interface to obtain a first 2D image and a first depth image, where the first depth image is acquired through a depth sensor; performing depth conversion on a first point cloud image corresponding to the first 2D image based on the first depth image to obtain a second depth image; and performing weighted compensation on the first depth image and the second depth image using N error weights corresponding to the depth sensor to obtain a third depth image. Each error weight corresponds to one depth measurement interval of the depth sensor and represents the measurement error of its corresponding depth measurement interval, where N is an integer greater than 1.

Description

Depth compensation method and device thereof
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a depth compensation method and a device thereof.
Background
Currently, an electronic device may perform editing processing, such as background blurring, on a captured image by acquiring depth information of the captured image. In the related art, an electronic device may directly acquire the depth information of a captured image through a depth sensor, for example a Time of Flight (ToF) sensor.
However, since a ToF sensor generally has a limited measurement range, if the depth of an object in the captured image exceeds the measurement range of the ToF sensor, the depth map obtained from the ToF sensor contains hole regions with meaningless values, and thus the accuracy of the depth map that the electronic device obtains for the captured image is poor.
Disclosure of Invention
The embodiments of the application aim to provide a depth compensation method and device, which can improve the accuracy of the depth map obtained by an electronic device.
In a first aspect, an embodiment of the present application provides a depth compensation method, including: receiving a shooting operation on a shooting preview interface to obtain a first 2D image and a first depth image, where the first depth image is acquired through a depth sensor; performing depth conversion on a first point cloud image corresponding to the first 2D image based on the first depth image to obtain a second depth image; and performing weighted compensation on the first depth image and the second depth image according to N error weights corresponding to the depth sensor to obtain a third depth image. Each error weight corresponds to one depth measurement interval of the depth sensor and represents the measurement error of its corresponding depth measurement interval, where N is an integer greater than 1.
In a second aspect, an embodiment of the present application provides a depth compensation apparatus, including a shooting module, a processing module, and a compensation module. The shooting module is configured to receive a shooting operation on a shooting preview interface and obtain a first 2D image and a first depth image, where the first depth image is acquired through a depth sensor. The processing module is configured to perform depth conversion on a first point cloud image corresponding to the first 2D image based on the first depth image acquired by the shooting module, to obtain a second depth image. The compensation module is configured to perform weighted compensation on the first depth image and the second depth image according to N error weights corresponding to the depth sensor, to obtain a third depth image. Each error weight corresponds to one depth measurement interval of the depth sensor and represents the measurement error of its corresponding depth measurement interval, where N is an integer greater than 1.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiments of the application, the electronic device may receive a shooting operation on a shooting preview interface to obtain a first 2D image and a first depth image, where the first depth image is acquired by a depth sensor during the shooting operation; perform depth conversion on a first point cloud image corresponding to the first 2D image based on the first depth image to obtain a second depth image; and perform weighted compensation on the first depth image and the second depth image according to N error weights corresponding to the depth sensor to obtain a third depth image. Each error weight corresponds to one depth measurement interval of the depth sensor and represents the measurement error of that interval, where N is an integer greater than 1. By dividing the measurement range of the depth sensor into depth measurement intervals and setting a corresponding error weight for each interval, whichever depth measurement interval the depth of a photographed object falls in, the electronic device can perform weighted compensation on the first depth image and the second depth image through the error weight corresponding to that interval, and then compensate the depth values in the hole regions with meaningless values in the weighted first depth image using the weighted second depth image to obtain the third depth image, thereby improving the accuracy of the depth image acquired by the electronic device.
Drawings
FIG. 1 is a first flowchart of a depth compensation method provided in an embodiment of the present application;
FIG. 2 is a second flowchart of a depth compensation method provided in an embodiment of the present application;
FIG. 3 is an exemplary diagram of the depth measurement intervals of a depth sensor provided in an embodiment of the present application;
FIG. 4 is a third flowchart of a depth compensation method provided in an embodiment of the present application;
FIG. 5 is a first schematic structural diagram of a depth compensation device provided in an embodiment of the present application;
FIG. 6 is a second schematic structural diagram of a depth compensation device provided in an embodiment of the present application;
FIG. 7 is a first schematic diagram of the hardware structure of an electronic device provided in an embodiment of the present application;
FIG. 8 is a second schematic diagram of the hardware structure of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar objects and not necessarily to describe a particular sequence or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. Moreover, the objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The term "at least one" and the like in the description and claims of the present application mean any one, any two, or any combination of two or more of the listed objects. For example, "at least one of a, b, and c" may represent: "a", "b", "c", "a and b", "a and c", "b and c", or "a, b and c", where a, b, and c may each be single or plural. Similarly, "at least two" means two or more, expressed analogously to "at least one".
The depth compensation method and the device provided by the embodiment of the application are described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Currently, an electronic device may obtain a depth image of a photographic subject through a depth sensor in order to edit the captured image, for example to blur the background. For instance, the electronic device may perform image depth estimation on the photographic subject through the depth sensor to obtain its depth image. Image depth estimation refers to computing, in a specific manner, the distance from the real-world point corresponding to each pixel in an image to the camera's imaging plane.
In the related art, an electronic device can perform image depth estimation on a photographic subject in the following two ways.
Mode one: direct depth estimation. Direct depth estimation refers to distance measurement by a depth sensor, such as a ToF sensor, a structured light sensor, or a lidar sensor. Depth maps computed by ToF sensors have low resolution and a limited measurement range; structured light sensors and lidar sensors are expensive and bulky, making them ill-suited to mobile terminal devices.
Mode two: indirect depth estimation. Indirect depth estimation refers to estimating scene depth from a single two-dimensional image or a sequence of two-dimensional images, for example deep-learning-based monocular depth estimation or binocular stereo vision. Deep-learning-based monocular depth estimation depends heavily on the accuracy of the label data used to train the model, so the data set is difficult and costly to acquire. Binocular depth estimation takes two images of the same scene and estimates scene depth from their disparity; this approach relies on high-quality image pairs, and the depth result is limited by the pixel-matching accuracy between the two images, which performs poorly in weakly textured scenes.
Most existing direct or indirect depth estimation methods use the output of only one sensor, while mobile terminal devices (e.g., smartphones) typically carry several sensors, such as 2D image sensors, infrared sensors, and ToF sensors, any of which can feed depth estimation. However, because each sensor has different working characteristics, their depth estimation accuracy differs, and existing direct or indirect depth estimation methods can hardly avoid insufficient depth estimation accuracy in some scenes.
According to the depth compensation method and device of the embodiments of the application, the measurement range of the depth sensor is divided into depth measurement intervals and a corresponding error weight is set for each interval, so that whichever depth measurement interval the depth of a photographed object falls in, the electronic device can perform weighted compensation on the first depth image and the second depth image through the error weight corresponding to that interval, and then compensate the depth values in the hole regions with meaningless values in the weighted first depth image using the weighted second depth image to obtain the third depth image, thereby improving the accuracy of the depth image acquired by the electronic device.
The execution body of the depth compensation method provided in the embodiment of the present application may be a depth compensation device, and the depth compensation device may be an electronic device or a functional module in the electronic device. The technical solution provided in the embodiments of the present application will be described below by taking an electronic device as an example.
The embodiment of the application provides a depth compensation method, and fig. 1 shows a flowchart of the depth compensation method provided by the embodiment of the application. As shown in fig. 1, the depth compensation method provided in the embodiment of the present application may include the following steps 201 to 203.
Step 201, the electronic device receives a shooting operation on a shooting preview interface, and obtains a first 2D image and a first depth image.
In this embodiment of the present application, the first depth image is acquired by a depth sensor during a photographing operation.
It can be understood that the first 2D image is a two-dimensional image acquired by the electronic device through the image sensor.
Alternatively, in the embodiment of the present application, the image sensor may be a charge-coupled device (CCD) sensor or a complementary metal-oxide-semiconductor (CMOS) sensor.
Optionally, in the embodiment of the present application, when the shooting preview interface of a shooting application is displayed, the electronic device may obtain a second 2D image in response to a first input by the user on a shooting control in the shooting preview interface. The electronic device may then perform low-pass filtering on the second 2D image to obtain a noise-reduced 2D image, i.e., the first 2D image.
For example, the first input may be a click input, a preset-track input, a slide input, or a long-press input on the shooting control by the user. The specific form may be determined according to actual use conditions and is not limited in the embodiments of the application.
The image format of the first 2D image may be, for example, a Bayer-domain, RGB-domain, or YUV-domain image format.
It will be appreciated that YUV is a color encoding method adopted by European television systems (as part of PAL) and is the color space used by the PAL and SECAM analog color television standards. In modern color television systems, a three-tube color camera or a color CCD camera is generally used to capture the image; the resulting color image signal is color-separated and amplitude-corrected to obtain RGB, from which a matrix conversion circuit derives a luminance signal Y and two color-difference signals B-Y (U) and R-Y (V); finally, the transmitter encodes the luminance signal and the color-difference signals separately and sends them over the same channel. This representation of color is the so-called YUV color space. The significance of the YUV color space is that its luminance signal Y and chrominance signals U and V are separate.
Optionally, in this embodiment of the present application, the electronic device may call the depth sensor while capturing the first 2D image, so as to obtain a first depth image corresponding to the first 2D image.
Optionally, in the embodiment of the present application, after obtaining the first 2D image, the electronic device may obtain a first depth image corresponding to the first 2D image through a depth sensor.
In one example, where the first 2D image is a noise-reduced image, the electronic device may input the noise-reduced image into the depth sensor to obtain the corresponding depth image; because noise exists in this depth image, further noise reduction processing may be performed on it, and the noise-reduced depth image is determined as the first depth image.
In another example, where the first 2D image is an original 2D image that has not undergone noise reduction, the electronic device may input the original 2D image into the depth sensor to obtain the corresponding depth image; because noise exists in this depth image, further noise reduction processing may be performed on it, and the noise-reduced depth image is determined as the first depth image.
For example, after obtaining the depth image, the electronic device may perform noise reduction on it with a low-pass filter to obtain the first depth image.
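As one hedged illustration of the low-pass filtering steps above, a minimal sketch assuming a Gaussian kernel via OpenCV; the patent only specifies "a low-pass filter", so the kernel choice and size are assumptions.

```python
# Minimal sketch of the noise-reduction steps above, assuming a Gaussian
# low-pass filter via OpenCV; the kernel size is an illustrative choice.
import cv2
import numpy as np

def low_pass_denoise(image: np.ndarray, ksize: int = 5) -> np.ndarray:
    """Apply a Gaussian low-pass filter. Used on the second 2D image to
    obtain the first 2D image, and on the raw depth map from the depth
    sensor to obtain the first depth image."""
    return cv2.GaussianBlur(image.astype(np.float32), (ksize, ksize), 0)
```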
Step 202, the electronic device performs depth conversion on the first point cloud image corresponding to the first 2D image based on the first depth image, so as to obtain a second depth image.
In implementations of the application, after the electronic device obtains the first 2D image, the electronic device may perform three-dimensional reconstruction processing on the first 2D image through a three-dimensional reconstruction algorithm to obtain a three-dimensional image corresponding to the first 2D image, and obtain the first point cloud image from the three-dimensional image.
For example, after obtaining the first 2D image, the electronic device may input the first 2D image into a single-frame three-dimensional reconstruction algorithm to reconstruct the real-time scene, so as to obtain the corresponding three-dimensional image. Then, the electronic device may collect the coordinate information and depth information of each pixel point in the three-dimensional image to obtain the first point cloud image, which records the three-dimensional coordinates (X, Y, Z) of each pixel relative to the 2D image sensor coordinate system, where Z is the depth from the point to the imaging plane of the 2D image sensor.
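The reconstruction algorithm itself is not specified by the patent; under that assumption, a minimal sketch of collecting a hypothetical per-pixel (X, Y, Z) reconstruction output into the first point cloud image could look like this:

```python
import numpy as np

def build_first_point_cloud(xyz: np.ndarray) -> np.ndarray:
    """Flatten a hypothetical (H, W, 3) per-pixel (X, Y, Z) output of a
    single-frame three-dimensional reconstruction into an (N, 3) point
    cloud in the 2D image sensor coordinate system; column 2 (Z) holds
    the depth to the 2D image sensor's imaging plane."""
    h, w, _ = xyz.shape
    return xyz.reshape(h * w, 3)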
In the embodiment of the application, after obtaining the first point cloud image, the electronic device may convert the first point cloud image into the second depth image under the coordinate system of the depth sensor through the first depth image.
Step 203, the electronic device performs weighted compensation on the first depth image and the second depth image according to the N error weights corresponding to the depth sensor, so as to obtain a third depth image.
In this embodiment of the present application, each of the N error weights corresponds to one depth measurement interval of the depth sensor, and any one of the N error weights is used to represent a measurement error of the corresponding depth measurement interval, where N is an integer greater than 1.
It will be appreciated that each of the N error weights described above is different.
In this embodiment of the present application, the depth measurement interval is obtained by dividing a measurement range corresponding to a depth value of an object measured by the depth sensor. In other words, the measurement range of the depth sensor includes the depth measurement intervals corresponding to the N error weights.
Illustratively, the accuracy of the depth value in the third depth image is higher than the accuracy of the depth value in the first depth image, and the accuracy of the depth value in the third depth image is higher than the accuracy of the depth value in the second depth image.
Optionally, in an embodiment of the present application, as shown in fig. 2 in conjunction with fig. 1, before step 203, the depth compensation method provided in an embodiment of the present application further includes the following step 301.
Step 301, the electronic device determines an error weight corresponding to each depth measurement interval of the N depth measurement intervals of the depth sensor according to a first rule.
In this embodiment of the present application, after the above steps, the electronic device holds two depth images in the depth camera coordinate system: the first depth image and the second depth image. The first depth image, obtained from the depth sensor, directly reflects the measured scene depth information, but because the measurement range is limited, part of its values are meaningless; moreover, the measurement error differs at different measurement distances. Fig. 3 shows possible "distance-error" curves of a depth sensor at different distances. The variables in the figure have the following meanings:
[l_min, l_max]: the limit measurement interval of the depth sensor; outside this interval, the output value of the depth sensor has no practical meaning.
[lp_min, lp_max]: the optimal working interval of the depth sensor, within which the measurement error usually behaves stably.
error1: the measurement error of the depth sensor within the optimal working interval.
error2: the maximum measurement error of the depth sensor within the interval [l_min, lp_min]; the closer the measurement distance is to l_min, the larger the measurement error.
error3: the maximum measurement error of the depth sensor within the interval [lp_max, l_max]; the closer the measurement distance is to l_max, the larger the measurement error.
The line segments in Fig. 3 form the "distance-error" curve over the limit measurement interval and the optimal working interval; l_min and l_max are the minimum and maximum measurement boundaries of the limit measurement interval, and lp_min and lp_max are the minimum and maximum measurement boundaries of the optimal working interval.
In this embodiment of the present application, the first rule includes the following rules:
When the depth measurement interval is within the first measurement interval, the error weight corresponding to the depth measurement interval is 0; the first measurement interval is the limit region of the range the depth sensor can measure.
When the depth measurement interval is within the second measurement interval, the error weight corresponding to the depth measurement interval is a preset weight; within the second measurement interval, the measurement error of the depth sensor is stable.
The preset weight is set by the user, for example 0.95.
For example, when the depth measurement interval lies in neither the first measurement interval nor the second measurement interval, the electronic device may determine the error weight corresponding to the depth measurement interval based on the interval boundary of the first measurement interval, the interval boundary of the second measurement interval, and a target depth value.
In this embodiment of the present application, the target depth value is the average of the depth value of the object in the first depth image and the depth value of the object in the second depth image.
In the embodiment of the present application, the target depth value may be obtained by the following formula (1):
d_avg = (P_D2(u, v) + P_D3(u, v)) / 2    (1)
where d_avg is the target depth value, P_D2(u, v) is the depth value of pixel (u, v) in the first depth image, and P_D3(u, v) is the depth value of pixel (u, v) in the second depth image.
Illustratively, within the [l_min, lp_min] working interval of the depth sensor, the depth information output by the depth sensor and the depth information estimated by three-dimensional reconstruction are combined by weighted compensation. The weighting coefficient changes dynamically: the closer the measured distance d_avg is to lp_min, the greater the weight given to the depth information of the depth sensor.
For example, the electronic device may calculate the error weight for this depth measurement interval by formula (2):
α = (d_avg - l_min) / (lp_min - l_min)    (2)
where α is the error weight, l_min is the minimum measurement boundary of the limit measurement interval, and lp_min is the minimum measurement boundary of the optimal working interval.
Illustratively, within the [lp_max, l_max] working interval, the depth information output by the depth sensor and the depth information estimated by three-dimensional reconstruction are combined by weighted compensation. The weighting coefficient changes dynamically: the closer the measured distance d_avg is to lp_max, the greater the weight given to the depth information of the depth sensor.
For example, the electronic device may calculate the error weight for this depth measurement interval by formula (3):
α = (l_max - d_avg) / (l_max - lp_max)    (3)
where α is the error weight, l_max is the maximum measurement boundary of the limit measurement interval, and lp_max is the maximum measurement boundary of the optimal working interval.
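Combining the first rule with formulas (2) and (3), a minimal sketch of the per-distance weight computation; the concrete boundary values below are illustrative assumptions, not sensor data, and only the preset weight 0.95 comes from the text.

```python
# Sketch of the error-weight rule (first rule plus formulas (2)/(3)).
L_MIN, LP_MIN, LP_MAX, L_MAX = 0.2, 0.5, 4.0, 6.0  # meters, assumed values
ALPHA_PRESET = 0.95                                 # preset weight from text

def error_weight(d_avg: float) -> float:
    """Error weight alpha for a target depth value d_avg (formula (1))."""
    if d_avg < L_MIN or d_avg > L_MAX:          # outside the limit interval
        return 0.0
    if LP_MIN <= d_avg <= LP_MAX:               # optimal working interval
        return ALPHA_PRESET
    if d_avg < LP_MIN:                          # [l_min, lp_min], formula (2)
        return (d_avg - L_MIN) / (LP_MIN - L_MIN)
    return (L_MAX - d_avg) / (L_MAX - LP_MAX)   # [lp_max, l_max], formula (3)
```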
In this embodiment of the present application, the electronic device may perform weighted compensation on the first depth image of the depth sensor according to the "distance-error" relationship of the depth sensor under different measurement distances, to obtain a third depth image that is finally output.
In the depth compensation method provided by the embodiments of the application, the electronic device receives a shooting operation on a shooting preview interface to obtain a first 2D image and a first depth image, where the first depth image is acquired by a depth sensor during the shooting operation; performs depth conversion on a first point cloud image corresponding to the first 2D image based on the first depth image to obtain a second depth image; and performs weighted compensation on the first depth image and the second depth image according to N error weights corresponding to the depth sensor to obtain a third depth image. Each error weight corresponds to one depth measurement interval of the depth sensor and represents the measurement error of that interval, where N is an integer greater than 1. By dividing the measurement range of the depth sensor into depth measurement intervals and setting a corresponding error weight for each interval, whichever depth measurement interval the depth of a photographed object falls in, the electronic device can perform weighted compensation on the first depth image and the second depth image through the error weight corresponding to that interval, and then compensate the depth values in the hole regions with meaningless values in the weighted first depth image using the weighted second depth image to obtain the third depth image, thereby improving the accuracy of the depth image acquired by the electronic device.
Alternatively, in the embodiment of the present application, as shown in fig. 4 in conjunction with fig. 1, the above step 202 may be specifically implemented by the following steps 202a to 202d.
Step 202a, the electronic device converts the first depth image into a second point cloud image under a coordinate system corresponding to the depth sensor based on the imaging focal length of the depth sensor, the distance between pixels in the depth sensor, and the center point coordinates of the depth sensor.
In the embodiment of the application, the electronic device can obtain the imaging focal length of the depth sensor, the distance between pixels in the depth sensor, and the center point coordinates of the depth sensor from the intrinsic parameter information of the depth sensor.
Illustratively, the distance between pixels in the depth sensor refers to the physical pixel pitch of each pixel in the X-axis and Y-axis directions.
Optionally, in the embodiment of the present application, the intrinsic parameter information of the depth sensor may be stored in a read-only memory (ROM) of the depth sensor, so that the electronic device can read it from the ROM at any time.
For example, the electronic device may convert the first depth image into the second point cloud image in the coordinate system corresponding to the depth sensor through the following formula (4), the standard pinhole back-projection:
z = P_D2(u, v)
x = (u - u_0) × d_x × z / f
y = (v - v_0) × d_y × z / f    (4)
where d_x and d_y are the physical pixel pitches in the x-axis and y-axis directions, (u_0, v_0) are the center point coordinates of the depth image, P_D2 is the first depth image, f is the imaging focal length of the depth sensor, (u, v) are the coordinates of each pixel on the first depth image, P_D2(u, v) is the depth value of that pixel, and (x, y, z) is the three-dimensional position of pixel (u, v) in the depth camera coordinate system.
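A minimal sketch of formula (4) in Python with NumPy, vectorized over all pixels; the parameter names mirror the intrinsics listed above.

```python
import numpy as np

def depth_to_point_cloud(p_d2: np.ndarray, f: float, dx: float, dy: float,
                         u0: float, v0: float) -> np.ndarray:
    """Back-project the first depth image P_D2 into the second point
    cloud image in the depth sensor coordinate system (formula (4))."""
    h, w = p_d2.shape
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)  # pixel grids
    z = p_d2.astype(np.float64)
    x = (u - u0) * dx * z / f
    y = (v - v0) * dy * z / f
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)  # (N, 3) points
```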
Step 202b, the electronic device obtains a first matrix based on the second point cloud image and the first point cloud image.
In this embodiment of the present application, the first matrix is used to characterize a relative pose relationship of the same object in the second point cloud image and the first point cloud image.
The relative pose relationship is the position-and-orientation relationship of the same object between the second point cloud image and the first point cloud image.
In this embodiment of the present application, the electronic device may input the second point cloud image and the first point cloud image into the point cloud registration module to obtain the first matrix.
Optionally, in an embodiment of the present application, the point cloud registration module may be any one of the following: a PCL point cloud registration module, an NDT point cloud registration module, or a TrICP (trimmed ICP) point cloud registration module.
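The patent leaves the registration backend open; as one hedged illustration, the sketch below uses Open3D's point-to-point ICP as a stand-in for the PCL/NDT/TrICP modules named above, estimating the 4×4 first matrix M. The correspondence threshold is an assumption.

```python
import numpy as np
import open3d as o3d

def register_point_clouds(pc_s1: np.ndarray, pc_d: np.ndarray,
                          max_dist: float = 0.05) -> np.ndarray:
    """Estimate the first matrix M (relative pose of the same object in
    both clouds) by registering the first point cloud image pc_s1 to the
    second point cloud image pc_d. Open3D ICP is one possible backend."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pc_s1))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pc_d))
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 first matrix M
```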
Step 202c, the electronic device obtains a third point cloud image mapped by the first point cloud image under the coordinate system of the depth sensor based on the first matrix.
In this embodiment of the present application, the electronic device may obtain the third point cloud image, i.e., the first point cloud image mapped into the coordinate system of the depth sensor, according to the following formula (5), which applies the first matrix in homogeneous coordinates:
PC_D2 = M × PC_S1, i.e., [x'_i, y'_i, z'_i, 1]^T = M × [PC_S1_xi, PC_S1_yi, PC_S1_zi, 1]^T    (5)
where PC_D2 is the third point cloud image, M is the first matrix, PC_S1 is the first point cloud image, and (PC_S1_xi, PC_S1_yi, PC_S1_zi) are the coordinates of each point in the first point cloud image.
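A minimal sketch of formula (5), applying M to every point of the first point cloud image in homogeneous coordinates:

```python
import numpy as np

def map_point_cloud(pc_s1: np.ndarray, m: np.ndarray) -> np.ndarray:
    """Map the (N, 3) first point cloud image PC_S1 into the depth sensor
    coordinate system with the 4x4 first matrix M (formula (5))."""
    ones = np.ones((pc_s1.shape[0], 1))
    homogeneous = np.hstack([pc_s1, ones])  # each row [x_i, y_i, z_i, 1]
    return (m @ homogeneous.T).T[:, :3]     # drop the homogeneous 1
```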
Alternatively, in the embodiment of the present application, the above step 202c may be specifically implemented by the following steps 202c1 and 202c2.
Step 202c1, the electronic device performs spatial transformation on the coordinate system of the first point cloud image based on the first matrix, so as to obtain a fourth point cloud image of the first point cloud image mapped under the coordinate system of the depth sensor.
In this embodiment of the present application, the electronic device may obtain the fourth point cloud image through the above formula 5.
It should be noted that, the specific implementation process may be detailed in the above embodiments, and in order to avoid repetition, the description is omitted here.
Step 202c2, the electronic device removes, from the fourth point cloud image, the points that fall outside the field-of-view boundary of the second point cloud image, based on the field-of-view boundary of the second point cloud image and the coordinate information of each point in the fourth point cloud image, so as to obtain the third point cloud image, i.e., the first point cloud image mapped under the coordinate system of the depth sensor.
In this embodiment of the present application, the field-of-view boundary of the second point cloud image is determined based on the coordinate information of each point in the second point cloud image, and the field-of-view boundary of the second depth image is smaller than or equal to the field-of-view boundary of the first depth image.
In the embodiment of the application, the electronic device may traverse the coordinate information of each point in the second point cloud image to compute the minimum and maximum coordinates on the x axis and the y axis, thereby determining the field-of-view boundary of the second point cloud image.
In this embodiment of the present application, for each point in the fourth point cloud image, if its coordinates do not lie within the field-of-view boundary, the point is removed from the fourth point cloud image.
For example, a first point is removed from the fourth point cloud image if its X-axis coordinate is smaller than the minimum X coordinate of the field-of-view boundary of the second depth image, or larger than the maximum X coordinate of that boundary, or if its Y-axis coordinate is smaller than the minimum Y coordinate of the boundary or larger than the maximum Y coordinate of the boundary.
In this embodiment of the present application, the electronic device may determine, through the field-of-view boundary of the second point cloud image and the coordinate information of each point in the fourth point cloud image, the field-of-view boundary of the third point cloud image mapped under the coordinate system of the depth sensor, and the electronic device may thereby obtain a second depth image whose image size matches that of the first depth image.
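A minimal sketch of step 202c2, deriving the field-of-view boundary from the second point cloud image and culling the fourth point cloud image to it:

```python
import numpy as np

def cull_to_field_of_view(pc4: np.ndarray, pc2: np.ndarray) -> np.ndarray:
    """Remove from the fourth point cloud image pc4 every point whose x/y
    coordinates fall outside the field-of-view boundary of the second
    point cloud image pc2; the result is the third point cloud image."""
    x_min, x_max = pc2[:, 0].min(), pc2[:, 0].max()
    y_min, y_max = pc2[:, 1].min(), pc2[:, 1].max()
    inside = ((pc4[:, 0] >= x_min) & (pc4[:, 0] <= x_max) &
              (pc4[:, 1] >= y_min) & (pc4[:, 1] <= y_max))
    return pc4[inside]
```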
Step 202d, the electronic device converts the third point cloud image into a second depth image based on the imaging focal length of the depth sensor, the distance between pixels in the depth sensor, and the depth value and coordinate information of each pixel point in the third point cloud image.
In this embodiment of the present application, the electronic device may convert the third point cloud image into the second depth image through the following formula (6), the projection inverse to formula (4):
u = f × x_i / (d_x × z_i) + u_0
v = f × y_i / (d_y × z_i) + v_0
P_D3(u, v) = z_i    (6)
where (x_i, y_i, z_i) are the coordinates of each point in the third point cloud image, P_D3(u, v) is the depth value of pixel (u, v) in the second depth image, u is the X coordinate of the pixel in the second depth image, and v is the Y coordinate of the pixel in the second depth image.
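A minimal sketch of formula (6), projecting the culled points back onto the depth sensor's pixel grid; rounding to the nearest pixel is an assumption, since the patent does not specify the rasterization.

```python
import numpy as np

def point_cloud_to_depth(pc3: np.ndarray, f: float, dx: float, dy: float,
                         u0: float, v0: float, h: int, w: int) -> np.ndarray:
    """Project the third point cloud image to the second depth image P_D3
    (formula (6)): u = f*x/(dx*z) + u0, v = f*y/(dy*z) + v0, P_D3(u,v) = z."""
    depth = np.zeros((h, w), dtype=np.float64)
    x, y, z = pc3[:, 0], pc3[:, 1], pc3[:, 2]
    valid = z > 0                                   # skip degenerate points
    u = np.round(f * x[valid] / (dx * z[valid]) + u0).astype(int)
    v = np.round(f * y[valid] / (dy * z[valid]) + v0).astype(int)
    keep = (u >= 0) & (u < w) & (v >= 0) & (v < h)  # stay on the pixel grid
    depth[v[keep], u[keep]] = z[valid][keep]
    return depth
```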
In the embodiment of the application, the electronic device can obtain the second depth image through the third point cloud image, and the electronic device can then compensate the depth values in the hole regions with meaningless values in the first depth image through the depth values in the second depth image, improving the accuracy of the depth image acquired by the electronic device.
Alternatively, in the embodiment of the present application, the above step 203 may be specifically implemented by the following steps 203a to 203c.
In step 203a, the electronic device performs weighted compensation on the first depth value measured for the first depth measurement interval in the first depth image according to the first error weight corresponding to the first depth measurement interval, so as to obtain a weighted first depth image.
In this embodiment of the present application, the first depth measurement interval is one of N depth measurement intervals of the depth sensor.
Step 203b, the electronic device performs weighted compensation on the second depth value measured for the first depth measurement interval in the second depth image according to the first error weight corresponding to the first depth measurement interval, so as to obtain a weighted second depth image.
It should be noted that, the step 203a and the step 203b describe only a weighted compensation process of the depth value corresponding to one depth measurement interval, and the weighted depth values of the corresponding depth measurement intervals may be obtained by referring to the step 203a and the step 203b for other depth measurement intervals of the N depth measurement intervals.
In this embodiment of the present application, for the first depth image, the electronic device may apply step 203a above to the depth values corresponding to each of the N depth measurement intervals, obtaining a weighted and compensated depth value for each depth measurement interval in the first depth image, so that the electronic device obtains the weighted first depth image.
In this embodiment of the present application, for the second depth image, the electronic device may determine the N depth measurement intervals in the second depth image through the correspondence between the first depth image and the second depth image, and then apply step 203b above to obtain a weighted and compensated depth value for each depth measurement interval in the second depth image, so that the electronic device obtains the weighted second depth image.
It will be appreciated that since the field of view boundary between the first depth image and the second depth image is consistent, the electronic device may determine N depth measurement intervals from the second depth image through the first depth image.
Step 203c, the electronic device obtains a third depth image based on the weighted first depth image and the weighted second depth image.
In this embodiment of the present application, after obtaining the weighted first depth image and the weighted second depth image, the electronic device may superimpose depth values in the weighted first depth image and the weighted second depth image to obtain the third depth image.
Illustratively, outside the limit working interval of the depth sensor, the depth values without actual meaning are replaced by the depth results of the three-dimensional reconstruction estimation. The weighted compensation formula and weight coefficient α are shown in formula (7):
P_D4(u, v) = α × P_D2(u, v) + (1 - α) × P_D3(u, v), with α = 0    (7)
where P_D4(u, v) is the depth value of pixel (u, v) in the third depth image.
For example, in the optimal working interval of the depth sensor, the depth information output by the depth sensor and the depth information estimated by three-dimensional reconstruction are combined by weighted compensation, as shown in formula (8):
P_D4(u, v) = α × P_D2(u, v) + (1 - α) × P_D3(u, v), with α = 0.95    (8)
where P_D4(u, v) is the depth value of pixel (u, v) in the third depth image.
Illustratively, within the [l_min, lp_min] working interval of the depth sensor, the depth information output by the depth sensor and the depth information estimated by three-dimensional reconstruction are combined by weighted compensation, as shown in formula (9):
P_D4(u, v) = α × P_D2(u, v) + (1 - α) × P_D3(u, v), with α = (d_avg - l_min) / (lp_min - l_min)    (9)
where P_D4(u, v) is the depth value of pixel (u, v) in the third depth image.
Illustratively, within the [lp_max, l_max] working interval, the depth information output by the depth sensor and the depth information estimated by three-dimensional reconstruction are combined by weighted compensation, as shown in formula (10):
P_D4(u, v) = α × P_D2(u, v) + (1 - α) × P_D3(u, v), with α = (l_max - d_avg) / (l_max - lp_max)    (10)
where P_D4(u, v) is the depth value of pixel (u, v) in the third depth image.
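Putting formulas (1) and (7) to (10) together, a minimal sketch of the fusion step; it reuses the error_weight() helper sketched after formula (3), whose interval boundaries were assumed values.

```python
import numpy as np

def fuse_depth_images(p_d2: np.ndarray, p_d3: np.ndarray) -> np.ndarray:
    """Weighted compensation of the first depth image P_D2 (sensor output)
    with the second depth image P_D3 (three-dimensional reconstruction),
    per formulas (7)-(10): P_D4 = alpha*P_D2 + (1-alpha)*P_D3."""
    d_avg = (p_d2 + p_d3) / 2.0                   # formula (1), per pixel
    alpha = np.vectorize(error_weight)(d_avg)     # interval-dependent weight
    return alpha * p_d2 + (1.0 - alpha) * p_d3    # third depth image P_D4
```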
In this embodiment of the present application, the electronic device may perform weighted compensation on the first depth image and the second depth image through the error weight corresponding to each depth measurement interval, and then compensate the depth values in the hole regions with meaningless values in the weighted first depth image using the weighted second depth image, so as to obtain the third depth image, thereby improving the accuracy of the depth image acquired by the electronic device.
It should be noted that, in the depth compensation method provided in the embodiment of the present application, the execution body may be a depth compensation device, or an electronic device, or may also be a functional module or entity in the electronic device. In the embodiment of the present application, a depth compensation device is used as an example to execute a depth compensation method.
Fig. 5 shows a schematic diagram of one possible structure of the depth compensation device according to an embodiment of the present application. As shown in fig. 5, the depth compensation device 70 may include: a shooting module 71, a processing module 72 and a compensation module 73.
The shooting module 71 is configured to receive a shooting operation on a shooting preview interface and obtain a first 2D image and a first depth image, where the first depth image is acquired by a depth sensor. The processing module 72 is configured to perform depth conversion on a first point cloud image corresponding to the first 2D image based on the first depth image acquired by the shooting module 71, to obtain a second depth image. The compensation module 73 is configured to perform weighted compensation on the first depth image and the second depth image according to N error weights corresponding to the depth sensor, to obtain a third depth image. Each error weight corresponds to one depth measurement interval of the depth sensor and represents the measurement error of its corresponding depth measurement interval, where N is an integer greater than 1.
In one possible implementation, the processing module 72 is specifically configured to convert the first depth image into a second point cloud image under a coordinate system corresponding to the depth sensor based on an imaging focal length of the depth sensor, a distance between pixels in the depth sensor, and a center point coordinate of the depth sensor; obtaining a first matrix based on the second point cloud image and the first point cloud image, wherein the first matrix is used for representing the relative pose relation of the same object in the second point cloud image and the first point cloud image; acquiring a third point cloud image of the first point cloud image mapped under a coordinate system of the depth sensor based on the first matrix; the third point cloud image is converted to a second depth image based on an imaging focal length of the depth sensor, a distance between pixels in the depth sensor, and depth value and coordinate information of each pixel point in the third point cloud image.
In one possible implementation manner, the processing module 72 is specifically configured to spatially transform the coordinate system of the first point cloud image based on the first matrix, so as to obtain a fourth point cloud image of the first point cloud image mapped under the coordinate system of the depth sensor; removing pixel points larger than the view field boundary of the second point cloud image from the fourth point cloud image based on the view field boundary of the second point cloud image and the coordinate information of each pixel in the fourth point cloud image to obtain a third point cloud image of the first point cloud image mapped under the coordinate system of the depth sensor; the view field boundary of the second point cloud image is determined based on the coordinate information of each pixel in the second point cloud image, and the view field boundary of the second depth image is smaller than or equal to the view field boundary of the first depth image.
In one possible implementation, as shown in fig. 6 in conjunction with fig. 5, the apparatus further includes a determining module 74. The determining module 74 is configured to determine, according to a first rule, the error weight corresponding to each of the N depth measurement intervals of the depth sensor before the compensation module 73 performs weighted compensation on the first depth image and the second depth image using the N error weights corresponding to the depth sensor to obtain the third depth image. The first rule includes the following rules: when the depth measurement interval is within the first measurement interval, the corresponding error weight is 0, the first measurement interval being the limit region of the range the depth sensor can measure; when the depth measurement interval is within the second measurement interval, the corresponding error weight is a preset weight, the measurement error of the depth sensor being stable within the second measurement interval; and when the depth measurement interval lies in neither the first measurement interval nor the second measurement interval, the error weight corresponding to the depth measurement interval is determined based on the interval boundary of the first measurement interval, the interval boundary of the second measurement interval, and a target depth value. The target depth value is the average of the depth value of the object in the first depth image and the depth value of the object in the second depth image.
In a possible implementation manner, the compensation module 73 is specifically configured to perform weighted compensation on a first depth value measured for the first depth measurement interval in the first depth image according to a first error weight corresponding to the first depth measurement interval, so as to obtain a weighted first depth image; according to the first error weight corresponding to the first depth measurement interval, performing weighted compensation on a second depth value measured for the first depth measurement interval in the second depth image to obtain a weighted second depth image, wherein the first depth measurement interval is one of N depth measurement intervals of the depth sensor; and obtaining a third depth image based on the weighted first depth image and the weighted second depth image.
According to the depth compensation device provided by the embodiment of the application, the measurement range of the depth sensor is divided into depth measurement intervals and a corresponding error weight is set for each interval, so that whichever depth measurement interval the depth of a photographed object falls in, the depth compensation device can perform weighted compensation on the first depth image and the second depth image through the error weight corresponding to that interval, and then compensate the depth values in the hole regions with meaningless values in the weighted first depth image using the weighted second depth image to obtain the third depth image, thereby improving the accuracy of the depth image obtained by the depth compensation device.
The depth compensation device in the embodiment of the application may be an electronic device, or may be a component in the electronic device, for example an integrated circuit or a chip. The electronic device may be a terminal, or a device other than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palmtop computer, vehicle-mounted electronic device, mobile internet device (MID), augmented reality (AR)/virtual reality (VR) device, robot, wearable device, ultra-mobile personal computer (UMPC), netbook, or personal digital assistant (PDA), and may also be a server, network attached storage (NAS), personal computer (PC), television (TV), teller machine, self-service machine, or the like; the embodiments of the present application are not specifically limited in this respect.
The depth compensation device in the embodiments of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
The depth compensation device provided in the embodiment of the present application can implement each process implemented by the above method embodiment, and in order to avoid repetition, details are not repeated here.
Optionally, as shown in fig. 7, the embodiment of the present application further provides an electronic device 90, including a processor 91 and a memory 92, where a program or an instruction capable of being executed on the processor 91 is stored in the memory 92, and the program or the instruction implements each step of the foregoing depth compensation method embodiment when executed by the processor 91, and the steps achieve the same technical effects, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 8 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, and processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further include a power source (e.g., a battery) for powering the various components, and that the power source may be logically coupled to the processor 110 via a power management system to perform functions such as managing charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
The sensor 105 is configured to receive a photographing operation on a photographing preview interface, and obtain a first 2D image and a first depth image, where the first depth image is acquired by the depth sensor. The processor 110 is configured to perform depth conversion on a first point cloud image corresponding to the first 2D image based on the first depth image, to obtain a second depth image; according to N error weights corresponding to the depth sensor, carrying out weighted compensation on the first depth image and the second depth image to obtain a third depth image; each error weight corresponds to one depth measurement interval of the depth sensor, and any error weight of the N error weights is used for representing the measurement error of the corresponding depth measurement interval, wherein N is an integer greater than 1.
With this electronic device, the depth measurement range of the depth sensor is divided into depth measurement intervals and a corresponding error weight is set for each interval. Whichever depth measurement interval the depth of the photographed object falls into, the electronic device can perform weighted compensation on the first depth image and the second depth image using the error weight corresponding to that interval, so that the weighted second depth image fills in the depth values of the void areas (regions holding meaningless values) in the weighted first depth image, yielding the third depth image. This improves the accuracy of the depth image acquired by the electronic device.
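To make this flow concrete, the following is a minimal orchestration sketch in Python. The patent publishes no reference implementation, so every name here is hypothetical: the sensor intrinsics fields (f, pitch, cx, cy), the interval boundaries and weights, and the helper functions, which are sketched after the corresponding paragraphs below; estimate_pose (point-cloud registration, e.g. ICP) is left abstract because the patent does not name an algorithm.

import numpy as np

def depth_compensation(first_pc, first_depth, sensor, edges, weights):
    """Hypothetical end-to-end flow of the depth compensation method.

    first_pc    : first point cloud image corresponding to the first 2D image
    first_depth : first depth image acquired by the depth sensor (0 = void)
    sensor      : intrinsics with assumed fields f (imaging focal length),
                  pitch (distance between pixels), cx, cy (centre point)
    edges, weights : boundaries and error weights of the N depth
                  measurement intervals
    """
    # 1. first depth image -> second point cloud in the sensor frame
    second_pc = depth_to_points(first_depth, sensor.f, sensor.pitch,
                                sensor.cx, sensor.cy)
    # 2. first matrix: relative pose of the same object in both clouds
    T = estimate_pose(second_pc, first_pc)   # registration left abstract
    # 3. map the first point cloud into the sensor frame and crop to the
    #    second cloud's field-of-view boundary -> third point cloud
    third_pc = map_to_sensor_frame(first_pc, T, fov_bounds(second_pc))
    # 4. third point cloud -> second depth image
    second_depth = points_to_depth(third_pc, sensor.f, sensor.pitch,
                                   sensor.cx, sensor.cy, first_depth.shape)
    # 5. weighted compensation per depth measurement interval -> third depth image
    return blend_by_interval(first_depth, second_depth, edges, weights)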
Optionally, in the embodiment of the present application, the processor 110 is specifically configured to convert the first depth image into the second point cloud image under the coordinate system corresponding to the depth sensor based on an imaging focal length of the depth sensor, a distance between pixels in the depth sensor, and a center point coordinate of the depth sensor; obtaining a first matrix based on the second point cloud image and the first point cloud image, wherein the first matrix is used for representing the relative pose relation of the same object in the second point cloud image and the first point cloud image; acquiring a third point cloud image of the first point cloud image mapped under a coordinate system of the depth sensor based on the first matrix; the third point cloud image is converted to a second depth image based on an imaging focal length of the depth sensor, a distance between pixels in the depth sensor, and depth value and coordinate information of each pixel point in the third point cloud image.
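One plausible reading of these two conversions is an ordinary pinhole camera model, in which the imaging focal length divided by the pixel spacing gives a focal length expressed in pixels. The sketch below rests on that assumption; the patent names the quantities involved but does not give the exact formulas.

import numpy as np

def depth_to_points(depth, f, pixel_pitch, cx, cy):
    """Back-project a depth image into a point cloud (pinhole model sketch)."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]               # pixel coordinates
    fpx = f / pixel_pitch                   # focal length in pixels (assumed)
    x = (u - cx) * depth / fpx
    y = (v - cy) * depth / fpx
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]               # drop void pixels (depth 0)

def points_to_depth(points, f, pixel_pitch, cx, cy, shape):
    """Project a point cloud back into a depth image, nearest depth winning."""
    h, w = shape
    fpx = f / pixel_pitch
    z = points[:, 2]
    u = np.round(points[:, 0] * fpx / z + cx).astype(int)
    v = np.round(points[:, 1] * fpx / z + cy).astype(int)
    keep = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth = np.zeros((h, w))
    order = np.argsort(-z[keep])            # far points first, near overwrite
    depth[v[keep][order], u[keep][order]] = z[keep][order]
    return depth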
Optionally, in the embodiment of the present application, the processor 110 is specifically configured to perform spatial transformation on the coordinate system of the first point cloud image based on the first matrix, to obtain a fourth point cloud image, which is the first point cloud image mapped under the coordinate system of the depth sensor; and to remove, from the fourth point cloud image, pixel points beyond the field-of-view boundary of the second point cloud image, based on that boundary and the coordinate information of each pixel in the fourth point cloud image, thereby obtaining the third point cloud image of the first point cloud image mapped under the coordinate system of the depth sensor. The field-of-view boundary of the second point cloud image is determined based on the coordinate information of each pixel in the second point cloud image, and the field-of-view boundary of the second depth image is smaller than or equal to that of the first depth image.
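A sketch of this mapping step, assuming the first matrix is a 4x4 rigid transform in homogeneous coordinates and the field-of-view boundary is the axis-aligned x/y extent of the second point cloud (the patent states only that the boundary is determined from the second point cloud's coordinate information):

import numpy as np

def fov_bounds(pc):
    """Field-of-view boundary as the x/y extent of a point cloud (assumption)."""
    return pc[:, 0].min(), pc[:, 0].max(), pc[:, 1].min(), pc[:, 1].max()

def map_to_sensor_frame(first_pc, T, bounds):
    """Map the first point cloud into the depth sensor's frame and crop it."""
    homo = np.hstack([first_pc, np.ones((len(first_pc), 1))])
    fourth_pc = (homo @ T.T)[:, :3]         # fourth point cloud (transformed)
    xmin, xmax, ymin, ymax = bounds
    keep = ((fourth_pc[:, 0] >= xmin) & (fourth_pc[:, 0] <= xmax) &
            (fourth_pc[:, 1] >= ymin) & (fourth_pc[:, 1] <= ymax))
    return fourth_pc[keep]                  # third point cloud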
Optionally, in this embodiment of the present application, the processor 110 is further configured to determine, according to a first rule, an error weight corresponding to each of the N depth measurement intervals of the depth sensor before the first depth image and the second depth image are weighted and compensated using the N error weights corresponding to the depth sensor to obtain the third depth image. The first rule includes the following rules: when the depth measurement interval is in a first measurement interval, the error weight corresponding to the depth measurement interval is 0, the first measurement interval being a limit region that the depth sensor is able to measure; when the depth measurement interval is in a second measurement interval, the error weight corresponding to the depth measurement interval is a preset weight, the measurement error of the depth sensor being stable in the second measurement interval; and when the depth measurement interval is in neither the first measurement interval nor the second measurement interval, the error weight corresponding to the depth measurement interval is determined based on the interval boundary of the first measurement interval, the interval boundary of the second measurement interval, and a target depth value. The target depth value is the average of the depth value of the object in the second depth image and the depth value of the object in the first depth image.
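The first rule might be coded as below. The linear ramp in the transition region is an assumption, since the patent says only that the weight there is determined from the two interval boundaries and the target depth value; the interval bounds and the preset weight are free parameters.

def first_rule_weight(d, limit_lo, limit_hi, stable_lo, stable_hi,
                      preset_w, target_depth):
    """Error weight for the depth measurement interval containing depth d.

    limit_lo/limit_hi   : bounds of the sensor's measurable limit region
    stable_lo/stable_hi : bounds of the region with stable measurement error
    preset_w            : preset weight used inside the stable region
    target_depth        : mean of the object's depth in the first and
                          second depth images
    """
    if d <= limit_lo or d >= limit_hi:
        return 0.0                          # first measurement interval
    if stable_lo <= d <= stable_hi:
        return preset_w                     # second measurement interval
    # transition region: ramp using the target depth's position between
    # the limit-region and stable-region boundaries (assumed linear)
    if target_depth < stable_lo:
        w = preset_w * (target_depth - limit_lo) / (stable_lo - limit_lo)
    else:
        w = preset_w * (limit_hi - target_depth) / (limit_hi - stable_hi)
    return max(0.0, min(preset_w, w))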
Optionally, in this embodiment of the present application, the processor 110 is specifically configured to perform weighted compensation on a first depth value measured for a first depth measurement interval in a first depth image according to a first error weight corresponding to the first depth measurement interval, so as to obtain a weighted first depth image; according to the first error weight corresponding to the first depth measurement interval, performing weighted compensation on a second depth value measured for the first depth measurement interval in the second depth image to obtain a weighted second depth image, wherein the first depth measurement interval is one of N depth measurement intervals of the depth sensor; and obtaining a third depth image based on the weighted first depth image and the weighted second depth image.
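The weighted compensation itself might then look like the following sketch. Giving the second depth image the complementary weight 1 - w is an assumption; it reproduces the behaviour described above, where void pixels (weight 0) are filled entirely from the converted second depth image.

import numpy as np

def blend_by_interval(first_depth, second_depth, edges, weights):
    """Per-interval weighted compensation producing the third depth image.

    edges   : N+1 ascending boundaries of the N depth measurement intervals
    weights : N error weights, weights[i] for [edges[i], edges[i+1])
    """
    # pick the interval index per pixel from the sensor's own measurement
    idx = np.clip(np.searchsorted(edges, first_depth, side='right') - 1,
                  0, len(weights) - 1)
    w = np.asarray(weights, dtype=float)[idx]
    w = np.where(first_depth > 0, w, 0.0)    # void areas get weight 0
    weighted_first = w * first_depth          # weighted first depth image
    weighted_second = (1.0 - w) * second_depth
    return weighted_first + weighted_second   # third depth image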
The electronic device provided in the embodiment of the present application can implement each process implemented by the above method embodiment, and can achieve the same technical effects, so that repetition is avoided, and details are not repeated here.
For the beneficial effects of the various implementations in this embodiment, reference may be made to the beneficial effects of the corresponding implementations in the foregoing method embodiment; to avoid repetition, they are not described here again.
It should be appreciated that, in embodiments of the present application, the input unit 104 may include a graphics processing unit (Graphics Processing Unit, GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 107 includes at least one of a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may include a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here.
Memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.). Further, the memory 109 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchronous link DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). The memory 109 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 110 may include one or more processing units. Optionally, the processor 110 integrates an application processor, which primarily handles operations involving the operating system, user interface, application programs, and the like, and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor may alternatively not be integrated into the processor 110.
An embodiment of the present application further provides a readable storage medium storing a program or instructions. When executed by a processor, the program or instructions implement each process of the foregoing method embodiment and achieve the same technical effects; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present application further provides a chip, including a processor and a communication interface coupled to the processor. The processor is configured to run a program or instructions to implement each process of the above method embodiment and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-level chip, a chip system, or a system-on-chip.
An embodiment of the present application provides a computer program product stored in a storage medium. The program product is executed by at least one processor to implement the respective processes of the foregoing depth compensation method embodiment and achieve the same technical effects, which are not repeated here.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware alone, though in many cases the former is the preferred implementation. Based on such understanding, the technical solutions of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and comprising several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Guided by the present application, those of ordinary skill in the art may devise many further forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (10)

1. A depth compensation method, the method comprising:
receiving shooting operation on a shooting preview interface to obtain a first 2D image and a first depth image, wherein the first depth image is acquired by a depth sensor;
performing depth conversion on a first point cloud image corresponding to the first 2D image based on the first depth image to obtain a second depth image;
according to N error weights corresponding to the depth sensor, performing weighted compensation on the first depth image and the second depth image to obtain a third depth image;
each error weight corresponds to one depth measurement interval of the depth sensor, any error weight of the N error weights is used for representing a measurement error of the corresponding depth measurement interval, and N is an integer greater than 1.
2. The method of claim 1, wherein performing depth conversion on the first point cloud image corresponding to the first 2D image based on the first depth image to obtain a second depth image, comprises:
converting the first depth image into a second point cloud image under a coordinate system corresponding to the depth sensor based on an imaging focal length of the depth sensor, a distance between pixels in the depth sensor and a center point coordinate of the depth sensor;
obtaining a first matrix based on the second point cloud image and the first point cloud image, wherein the first matrix is used for representing the relative pose relationship of the same object in the second point cloud image and the first point cloud image;
acquiring a third point cloud image of the first point cloud image mapped under the coordinate system of the depth sensor based on the first matrix;
the third point cloud image is converted into the second depth image based on an imaging focal length of the depth sensor, a distance between pixels in the depth sensor, and depth value and coordinate information of each pixel point in the third point cloud image.
3. The method of claim 2, wherein the acquiring a third point cloud image of the first point cloud image mapped under the coordinate system of the depth sensor based on the first matrix comprises:
based on the first matrix, performing space transformation on the coordinate system of the first point cloud image to obtain a fourth point cloud image of the first point cloud image mapped under the coordinate system of the depth sensor;
removing, from the fourth point cloud image, pixel points beyond the field of view boundary of the second point cloud image based on the field of view boundary of the second point cloud image and the coordinate information of each pixel in the fourth point cloud image, to obtain the third point cloud image of the first point cloud image mapped under the coordinate system of the depth sensor;
wherein a field of view boundary of the second point cloud image is determined based on coordinate information of each pixel in the second point cloud image, the field of view boundary of the second depth image being less than or equal to the field of view boundary of the first depth image.
4. The method of claim 1, wherein before the weighted compensation is performed on the first depth image and the second depth image according to the N error weights corresponding to the depth sensor to obtain the third depth image, the method further comprises:
according to a first rule, determining an error weight corresponding to each depth measurement interval in N depth measurement intervals of the depth sensor;
wherein the first rule includes the following rules:
when the depth measurement interval is in a first measurement interval, the error weight corresponding to the depth measurement interval is 0, the first measurement interval being a limit region that the depth sensor is able to measure;
when the depth measurement interval is in a second measurement interval, the error weight corresponding to the depth measurement interval is a preset weight, the measurement error of the depth sensor being stable in the second measurement interval;
determining an error weight corresponding to the depth measurement interval based on an interval boundary of the first measurement interval, an interval boundary of the second measurement interval, and a target depth value when the depth measurement interval is in neither the first measurement interval nor the second measurement interval;
the target depth value is an average value of a depth value of the object in the second depth image and a depth value of the object in the first depth image.
5. The method of claim 1, wherein the performing weighted compensation on the first depth image and the second depth image according to the N error weights corresponding to the depth sensor to obtain a third depth image comprises:
according to a first error weight corresponding to a first depth measurement interval, performing weighted compensation on a first depth value measured for the first depth measurement interval in the first depth image to obtain a weighted first depth image;
according to the first error weight corresponding to the first depth measurement interval, performing weighted compensation on a second depth value measured for the first depth measurement interval in the second depth image to obtain a weighted second depth image, wherein the first depth measurement interval is one of N depth measurement intervals of the depth sensor;
and obtaining the third depth image based on the weighted first depth image and the weighted second depth image.
6. A depth compensation device, the device comprising: the device comprises a shooting module, a processing module and a compensation module;
the shooting module is used for receiving shooting operation on a shooting preview interface to obtain a first 2D image and a first depth image, wherein the first depth image is acquired through a depth sensor;
the processing module is used for performing depth conversion on a first point cloud image corresponding to the first 2D image based on the first depth image acquired by the shooting module to acquire a second depth image;
the compensation module is used for carrying out weighted compensation on the first depth image and the second depth image according to N error weights corresponding to the depth sensor to obtain a third depth image;
each error weight corresponds to one depth measurement interval of the depth sensor, any error weight of the N error weights is used for representing a measurement error of the corresponding depth measurement interval, and N is an integer greater than 1.
7. The apparatus according to claim 6, wherein the processing module is configured to convert the first depth image into a second point cloud image in a corresponding coordinate system of the depth sensor based on an imaging focal length of the depth sensor, a distance between pixels in the depth sensor, and a center point coordinate of the depth sensor; obtaining a first matrix based on the second point cloud image and the first point cloud image, wherein the first matrix is used for representing the relative pose relationship of the same object in the second point cloud image and the first point cloud image; acquiring a third point cloud image of the first point cloud image mapped under the coordinate system of the depth sensor based on the first matrix; the third point cloud image is converted into the second depth image based on an imaging focal length of the depth sensor, a distance between pixels in the depth sensor, and depth value and coordinate information of each pixel point in the third point cloud image.
8. The apparatus of claim 7, wherein the processing module is specifically configured to spatially transform a coordinate system of the first point cloud image based on the first matrix to obtain a fourth point cloud image that maps the first point cloud image under the coordinate system of the depth sensor; and to remove, from the fourth point cloud image, pixel points beyond the field of view boundary of the second point cloud image based on the field of view boundary of the second point cloud image and the coordinate information of each pixel in the fourth point cloud image, obtaining the third point cloud image of the first point cloud image mapped under the coordinate system of the depth sensor; wherein a field of view boundary of the second point cloud image is determined based on coordinate information of each pixel in the second point cloud image, the field of view boundary of the second depth image being less than or equal to the field of view boundary of the first depth image.
9. The apparatus of claim 6, further comprising a determination module; the determination module is configured to determine, according to a first rule, an error weight corresponding to each of the N depth measurement intervals of the depth sensor before the weighted compensation is performed on the first depth image and the second depth image according to the N error weights corresponding to the depth sensor to obtain the third depth image;
wherein the first rule includes the following rules:
when the depth measurement interval is in a first measurement interval, the error weight corresponding to the depth measurement interval is 0, the first measurement interval being a limit region that the depth sensor is able to measure;
when the depth measurement interval is in a second measurement interval, the error weight corresponding to the depth measurement interval is a preset weight, the measurement error of the depth sensor being stable in the second measurement interval;
determining an error weight corresponding to the depth measurement interval based on an interval boundary of the first measurement interval, an interval boundary of the second measurement interval, and a target depth value when the depth measurement interval is in neither the first measurement interval nor the second measurement interval;
the target depth value is an average value of a depth value of the object in the second depth image and a depth value of the object in the first depth image.
10. The apparatus of claim 6, wherein the compensation module is specifically configured to perform weighted compensation on a first depth value measured for a first depth measurement interval in the first depth image according to a first error weight corresponding to the first depth measurement interval, so as to obtain a weighted first depth image; to perform, according to the first error weight corresponding to the first depth measurement interval, weighted compensation on a second depth value measured for the first depth measurement interval in the second depth image to obtain a weighted second depth image, the first depth measurement interval being one of N depth measurement intervals of the depth sensor; and to obtain the third depth image based on the weighted first depth image and the weighted second depth image.
CN202311232127.1A 2023-09-21 2023-09-21 Depth compensation method and device thereof Pending CN117294829A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311232127.1A CN117294829A (en) 2023-09-21 2023-09-21 Depth compensation method and device thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311232127.1A CN117294829A (en) 2023-09-21 2023-09-21 Depth compensation method and device thereof

Publications (1)

Publication Number Publication Date
CN117294829A true CN117294829A (en) 2023-12-26

Family

ID=89247418

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311232127.1A Pending CN117294829A (en) 2023-09-21 2023-09-21 Depth compensation method and device thereof

Country Status (1)

Country Link
CN (1) CN117294829A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination