CN110335211A - Correction method for depth image, terminal device, and computer storage medium - Google Patents

Correction method for depth image, terminal device, and computer storage medium

Info

Publication number
CN110335211A
CN110335211A
Authority
CN
China
Prior art keywords
depth information
color image
depth
pixel
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910550733.5A
Other languages
Chinese (zh)
Other versions
CN110335211B (en)
Inventor
Yang Xin (杨鑫)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910550733.5A priority Critical patent/CN110335211B/en
Publication of CN110335211A publication Critical patent/CN110335211A/en
Application granted granted Critical
Publication of CN110335211B publication Critical patent/CN110335211B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G06T 5/80
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Abstract

The embodiments of the present application disclose a correction method for a depth image, a terminal device, and a computer storage medium. The method comprises: obtaining an original image corresponding to a target object and a main color image and a secondary color image corresponding to the target object, wherein the original image is collected from the target object by a time-of-flight (TOF) sensor, and the main and secondary color images are collected from the target object by a dual camera; determining, from the main color image and the secondary color image, first depth information and a first confidence corresponding to the target object using a preset dual-camera algorithm; determining, from the original image, second depth information corresponding to the target object; determining, based on the first depth information, the second depth information, and the first confidence, an erroneous data region in the first depth information; and correcting the erroneous data region using the second depth information and the main color image to obtain target depth information, from which the depth image is obtained.

Description

Correction method for depth image, terminal device, and computer storage medium
Technical field
The present application relates to the field of image processing technology, and in particular to a correction method for a depth image, a terminal device, and a computer storage medium.
Background technique
With the rapid development of intelligent terminals, terminal devices such as mobile phones, palmtop computers, digital cameras, and video cameras have become indispensable tools in users' daily lives and bring great convenience to many aspects of life. Almost all existing terminal devices have a camera function, and users can use them to shoot various images.
When shooting an image with a bokeh (background-blur) effect, the terminal device usually needs to be equipped with a dual camera. Obtaining depth (depth) information through a dual camera offers a simple structure, low hardware power consumption, and high resolution, but it still has defects: its adaptability to scenes that are texture-free, have repeated textures, or are over- or under-exposed is poor, so the accuracy of the acquired depth information is relatively low, which degrades the portrait-bokeh effect.
Summary of the invention
The main purpose of the present application is to provide a correction method for a depth image, a terminal device, and a computer storage medium, which can repair the depth errors that dual-camera depth information exhibits in portrait mode in texture-free, repeated-texture, over-exposed, and under-exposed regions, thereby improving the accuracy of dual-camera depth in portrait mode and, in turn, the accuracy of portrait bokeh.
To achieve the above objectives, the technical solution of the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides a correction method for a depth image, the method comprising:
obtaining an original image corresponding to a target object and a main color image and a secondary color image corresponding to the target object; wherein the original image is collected from the target object by a time-of-flight (TOF) sensor, and the main color image and the secondary color image are collected from the target object by a dual camera;
determining, from the main color image and the secondary color image, first depth information and a first confidence corresponding to the target object using a preset dual-camera algorithm; and determining, from the original image, second depth information corresponding to the target object;
determining, based on the first depth information, the second depth information, and the first confidence, an erroneous data region in the first depth information;
correcting the erroneous data region using the second depth information and the main color image to obtain target depth information, and obtaining a depth image from the target depth information.
In a second aspect, an embodiment of the present application provides a terminal device comprising an acquiring unit, a determining unit, and a correcting unit, wherein:
the acquiring unit is configured to obtain an original image corresponding to a target object and a main color image and a secondary color image corresponding to the target object; wherein the original image is collected from the target object by a TOF sensor, and the main color image and the secondary color image are collected from the target object by a dual camera;
the determining unit is configured to determine, from the main color image and the secondary color image, first depth information and a first confidence corresponding to the target object using a preset dual-camera algorithm; to determine, from the original image, second depth information corresponding to the target object; and is further configured to determine, based on the first depth information, the second depth information, and the first confidence, an erroneous data region in the first depth information;
the correcting unit is configured to correct the erroneous data region using the second depth information and the main color image to obtain target depth information, and to obtain a depth image from the target depth information.
In a third aspect, an embodiment of the present application provides a terminal device comprising a memory and a processor, wherein:
the memory is configured to store a computer program executable on the processor;
the processor is configured to execute, when running the computer program, the correction method for a depth image described in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer storage medium storing a depth-image correction program which, when executed by at least one processor, implements the correction method for a depth image described in the first aspect.
The correction method for a depth image, terminal device, and computer storage medium provided by the embodiments of the present application obtain an original image corresponding to a target object and a main color image and a secondary color image corresponding to the target object, wherein the original image is collected from the target object by a TOF sensor and the main and secondary color images are collected by a dual camera; then determine, from the main and secondary color images, first depth information and a first confidence corresponding to the target object using a preset dual-camera algorithm, and determine, from the original image, second depth information corresponding to the target object; next determine, based on the first depth information, the second depth information, and the first confidence, an erroneous data region in the first depth information; and finally correct the erroneous data region using the second depth information and the main color image to obtain target depth information, from which a depth image is obtained. In this way, by optimizing the first depth information with the second depth information, the depth errors that dual-camera depth information exhibits in portrait mode in texture-free, repeated-texture, over-exposed, and under-exposed regions can be repaired, improving the accuracy of dual-camera depth in portrait mode. In addition, since the target depth information is mainly used for the bokeh processing of the main color image, the accuracy of portrait bokeh is also optimized and the bokeh effect improved.
Detailed description of the invention
Fig. 1 is an exploded structural schematic diagram of a TOF camera provided by an embodiment of the present application;
Fig. 2 is a structural schematic diagram of a dual-camera bokeh process provided by an embodiment of the present application;
Fig. 3 is a schematic flowchart of a correction method for a depth image provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of the hardware structure of a terminal device provided by an embodiment of the present application;
Fig. 5 is a schematic diagram comparing the effect of epipolar rectification provided by an embodiment of the present application;
Fig. 6 is a schematic diagram of the effect of dual-camera disparity computation provided by an embodiment of the present application;
Fig. 7 is a schematic diagram of a model for computing depth information provided by an embodiment of the present application;
Fig. 8 is a detailed schematic flowchart of a correction method for a depth image provided by an embodiment of the present application;
Fig. 9 is a schematic diagram comparing the effect of portrait bokeh provided by an embodiment of the present application;
Fig. 10 is a schematic diagram of the composition of a terminal device provided by an embodiment of the present application;
Fig. 11 is a schematic diagram of the hardware structure of another terminal device provided by an embodiment of the present application.
Specific embodiment
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings.
In recent years, with the rapid development of time-of-flight (Time of Flight, TOF) technology, research on optical ranging has become increasingly thorough. As a three-dimensional (Three Dimension, 3D) imaging technique, TOF is widely applied in terminal devices such as smartphones, palmtop computers, tablet computers, and digital cameras. It enables applications such as range measurement, three-dimensional modeling, photographic bokeh, and motion-sensing games, and can also cooperate with augmented reality (Augmented Reality, AR) technology to implement related AR applications.
In general, a TOF camera consists of an optical transmitter module and an optical receiver module. The optical transmitter module is also called a laser emitter, TOF transmitter, or illumination module; the optical receiver module is also called a detector, TOF receiver, or photosensitive receiving module. Specifically, the optical transmitter module emits modulated near-infrared light, which is reflected back after hitting the subject; the optical receiver module then computes the time difference or phase difference between emission and reflection, which is converted into the distance to the subject to generate depth information.
Referring to Fig. 1, which shows an exploded structural schematic diagram of a TOF camera provided by an embodiment of the present application. As shown in Fig. 1, the TOF camera 10 includes an optical transmitter module 110 and an optical receiver module 120. The optical transmitter module 110 consists of a diffuser, a photodiode (Photo-Diode, PD), a vertical-cavity surface-emitting laser (Vertical Cavity Surface Emitting Laser, VCSEL), a ceramic package, and the like; the optical receiver module 120 consists of a lens, a 940 nm narrow-band filter, a TOF sensor (TOF Sensor), and the like. Those skilled in the art will understand that the structure shown in Fig. 1 does not limit the TOF camera, which may include more or fewer components than illustrated, combine certain components, or arrange components differently.
It should be understood that, according to the signal result acquired, TOF can be divided into direct time-of-flight (Direct-TOF, D-TOF) and indirect time-of-flight (Indirect-TOF, I-TOF). D-TOF obtains a time difference, while I-TOF obtains the phase offset of the return signal from the target (for example, the proportion of charge or voltage in different phases), from which the distance to the subject is computed and depth information is generated.
In addition, according to the modulation scheme, I-TOF can be further divided into pulsed modulation (Pulsed Modulation) and continuous-wave modulation (Continuous Wave Modulation) schemes. The mainstream approach currently used in the terminal devices of most manufacturers is the continuous-wave-modulated indirect TOF scheme (denoted CW-I-TOF). In the CW-I-TOF scheme, each pixel contains two capacitors; the optical transmitter module emits four segments of square-wave pulses with pulse period Δt, and the optical receiver module receives the pulses with a phase delay of 90° per window, i.e. a quarter of the pulse period (Δt/4), so the phase delays are 0°, 180°, 90°, and 270° respectively; this is also called the four-phase method. During exposure, the two capacitors of each pixel charge in turn with equal exposure times, and the differences in the exposures of the two capacitors are respectively recorded as Q1, Q2, Q3, and Q4. Using the relationship between the charge differences and the flight phase, the phase difference can be computed as Δφ = arctan((Q3 − Q4)/(Q1 − Q2)), and the phase difference is then converted into the distance to the subject, D = c·Δφ/(4πf), where c is the speed of light and f the modulation frequency.
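As an illustrative sketch (not part of the patent disclosure), the four-phase computation described above can be written directly; the exact formula used here is the standard CW-iToF four-phase relation, which the text names but does not spell out:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def four_phase_depth(q1, q2, q3, q4, f_mod):
    """Recover distance from the four exposures Q1..Q4 sampled at
    0, 180, 90 and 270 degrees (continuous-wave indirect ToF).

    Sketch assuming the common four-phase relation; Q values and the
    modulation frequency f_mod are illustrative inputs."""
    # Phase offset between emitted and received modulation.
    phase = math.atan2(q3 - q4, q1 - q2) % (2 * math.pi)
    # One full phase cycle corresponds to half the modulation wavelength.
    return (C * phase) / (4 * math.pi * f_mod)
```

At a modulation frequency of 60 MHz, for example, the unambiguous range is c/(2f) ≈ 2.5 m; distances beyond that wrap around, which motivates the dual-frequency disambiguation described next.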
When the phase angle corresponding to the distance of the subject exceeds 2π, phases measured at two different frequencies are needed to resolve the true distance. Suppose the two phase values obtained are φ1 and φ2; φ1 is extended to the candidate set φ1 + 2πk1 and φ2 to φ2 + 2πk2 (k1, k2 non-negative integers). There is then one true distance for which the difference between the two corresponding candidate distances is minimal, from which the true distance can be determined.
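The dual-frequency disambiguation just described can be sketched as follows. This is a minimal illustration under assumed parameter names; a real implementation would bound the search by the extended unambiguous range c/(2·gcd(f1, f2)) and handle noise more carefully:

```python
import math

def unwrap_two_freq(phi1, f1, phi2, f2, d_max=7.4, c=299_792_458.0):
    """Pick the distance consistent with both wrapped phases.

    Each frequency alone gives candidates d = c*(phi + 2*pi*k)/(4*pi*f);
    the true distance is the pair of candidates that agree most closely.
    d_max must stay below the extended unambiguous range c/(2*gcd(f1, f2))."""
    def candidates(phi, f):
        out, k = [], 0
        while True:
            d = c * (phi + 2 * math.pi * k) / (4 * math.pi * f)
            if d > d_max:
                return out
            out.append(d)
            k += 1
    best = min(
        ((d1, d2) for d1 in candidates(phi1, f1) for d2 in candidates(phi2, f2)),
        key=lambda p: abs(p[0] - p[1]),
    )
    return (best[0] + best[1]) / 2
```

For 60 MHz and 80 MHz modulation, each frequency alone wraps at about 2.5 m and 1.9 m respectively, but the pair jointly resolves distances out to roughly 7.5 m.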
As an active depth sensor, TOF has been widely used on terminal devices such as mobile phones; for example, some manufacturers use it as a rear depth sensor. However, the resolution of a TOF sensor is low, so it cannot directly serve applications such as bokeh and matting that place very high requirements on foreground edge accuracy. Current solutions therefore still rely on the dual camera.
A dual-camera scheme may include a main camera and a secondary camera, both of which can be RGB cameras, where RGB denotes the three color channels red (Red, R), green (Green, G), and blue (Blue, B); mixing or superimposing these three channels in different proportions yields all the colors perceivable by human vision in an image. As a portrait-bokeh application, the dual-camera scheme has become a standard feature of terminal devices. As shown in Fig. 2, the dual-camera bokeh process comprises four steps: dual-camera calibration, epipolar rectification, stereo matching, and bokeh rendering. Although obtaining depth information through the dual-camera scheme has many advantages, such as a simple structure, low hardware power consumption, high-resolution depth information, and applicability both indoors and in most outdoor scenes, its adaptability to texture-free, repeated-texture, over-exposed, and under-exposed scenes is poor, so the depth information it generates in such regions may be wrong, leading to bokeh errors.
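For reference, the stereo-matching step above ends with a disparity map, and (as in the depth-computation model of Fig. 7) depth follows from the pinhole relation Z = f·B/d once the images are rectified. A minimal sketch, with illustrative parameter names not taken from the patent:

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """After epipolar rectification, a match displaced by `disparity_px`
    pixels between the rectified main and secondary images corresponds
    to depth Z = f * B / d (pinhole model).

    focal_px: focal length in pixels; baseline_m: camera spacing in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

This relation also explains the failure modes the text lists: in texture-free or repeated-texture regions the matcher cannot find a unique disparity, so the computed depth is unreliable there.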
An embodiment of the present application provides a correction method for a depth image, applied to a terminal device. The method obtains an original image corresponding to a target object and a main color image and a secondary color image corresponding to the target object, wherein the original image is collected from the target object by a TOF sensor and the main and secondary color images are collected by a dual camera; then determines, from the main and secondary color images, first depth information and a first confidence corresponding to the target object using a preset dual-camera algorithm, and determines, from the original image, second depth information corresponding to the target object; next determines, based on the first depth information, the second depth information, and the first confidence, an erroneous data region in the first depth information; and finally corrects the erroneous data region using the second depth information and the main color image to obtain target depth information, from which a depth image is obtained. By optimizing the first depth information with the second depth information in this way, the depth errors that dual-camera depth information exhibits in portrait mode in texture-free, repeated-texture, over-exposed, and under-exposed regions can be repaired, improving the accuracy of dual-camera depth in portrait mode. In addition, since the target depth information is mainly used for the bokeh processing of the main color image, the accuracy and effect of portrait bokeh are also improved.
The embodiments of the application are described in detail below with reference to the accompanying drawings.
Referring to Fig. 3, which shows a schematic flowchart of a correction method for a depth image provided by an embodiment of the present application. As shown in Fig. 3, the method may include:
S301: obtaining an original image corresponding to a target object and a main color image and a secondary color image corresponding to the target object; wherein the original image is collected from the target object by a TOF sensor, and the main color image and the secondary color image are collected from the target object by a dual camera;
It should be noted that the method is applied to a terminal device that includes components such as a TOF sensor and a dual camera (a main camera and a secondary camera). In this way, the original image corresponding to the target object can be collected by the TOF sensor, and the main and secondary color images corresponding to the target object can be collected by the dual camera, facilitating the subsequent computation of depth information.
It should also be noted that the terminal device can be implemented in various forms. For example, the terminal device described in this application may include mobile terminals such as mobile phones, tablet computers, laptops, palmtop computers, personal digital assistants (Personal Digital Assistant, PDA), wearable devices, digital cameras, and video cameras, as well as fixed terminals such as digital TVs and desktop computers; the embodiments of the present application are not specifically limited in this respect.
In some embodiments, for S301, obtaining the original image corresponding to the target object and the main color image and secondary color image corresponding to the target object may include:
S301a: collecting the target object with the TOF sensor to obtain the original image corresponding to the target object;
S301b: collecting the target object with the dual camera to obtain the main color image corresponding to the target object under the main camera and the secondary color image corresponding to the target object under the secondary camera; wherein the dual camera includes the main camera and the secondary camera.
It should be noted that collecting the target object with the TOF sensor yields the original image corresponding to the target object, for example a group of RAW frames; collecting the target object with the dual camera yields the main color image under the main camera, for example an RGB main image, and the secondary color image under the secondary camera, for example an RGB secondary image. The main and secondary color images collected by the dual camera yield the depth information of the dual-camera mode, denoted in the embodiments of the present application as the first depth information; the original image collected by the TOF sensor yields the depth information of the TOF mode, denoted as the second depth information.
Illustratively, referring to Fig. 4, which shows a schematic diagram of the hardware structure of a terminal device provided by an embodiment of the present application. As shown in Fig. 4, the terminal device may include an application processor (Application Processor, AP), a main camera, a secondary camera, a TOF sensor, and a laser (Laser) transmitter; the AP side includes a first image signal processor (First Image Signal Processor, ISP1), a second image signal processor (Second Image Signal Processor, ISP2), and a Mobile Industry Processor Interface (Mobile Industry Processor Interface, MIPI). In addition, preset algorithms are deployed on the AP side, such as the preset dual-camera algorithm and a preset calibration algorithm; the embodiments of the present application are not specifically limited in this respect.
With the terminal device shown in Fig. 4, in dual-camera mode, when capturing images of the target object, the AP side connects the two cameras through the two ISPs to obtain two streams of RGB data while guaranteeing frame synchronization and 3A synchronization in dual-camera mode, where 3A synchronization covers auto focus (Automatic Focus, AF), auto exposure (Automatic Exposure, AE), and auto white balance (Automatic White Balance, AWB). In Fig. 4, the main color image (one stream of RGB data) collected by the main camera is fed into ISP1, and the secondary color image (the other stream of RGB data) collected by the secondary camera is fed into ISP2. In addition, the terminal device can use a driver integrated circuit (Integrated Circuit, IC) to guarantee the exposure timing of the laser (Laser) and infrared (Infrared Radiation, IR) light, and requires the IR exposure to be synchronized with the RGB exposure of the main camera; this can be implemented in software or hardware synchronization mode. Combined with the preset algorithm deployed on the AP side, the first depth information corresponding to the target object can be computed. In TOF mode, the terminal device collects a group of RAW frames through the TOF sensor to obtain the second depth information corresponding to the target object. Subsequently, by fusing depths in the main-camera coordinate system according to the first and second depth information, the erroneous regions of the first depth information in dual-camera mode can be corrected, achieving the goal of improving depth accuracy.
S302: determining, from the main color image and the secondary color image, first depth information and a first confidence corresponding to the target object using a preset dual-camera algorithm; and determining, from the original image, second depth information corresponding to the target object;
It should be noted that the first confidence characterizes the accuracy of the first depth information, and the preset dual-camera algorithm denotes a pre-configured algorithm or model based on dual-camera stereo matching. Specifically, computing depth from the main and secondary color images with the preset dual-camera algorithm yields the first depth information and first confidence in dual-camera mode, and computing depth from the original image yields the second depth information in TOF mode; the first depth information in dual-camera mode can then subsequently be optimized using the second depth information from TOF mode.
It should also be noted that the first depth information is computed from the main and secondary color images and is therefore already in the main-camera coordinate system, so no coordinate conversion is needed for it; the second depth information, however, is computed from the original image and is in the TOF coordinate system, so it must be converted and aligned to the main-camera coordinate system.
Usually the main camera's resolution is large, so the depth information it generates has high resolution; taking a 4-megapixel camera as an example, the resolution is 2584 × 1938. The TOF resolution is lower, so the depth information it generates has comparatively low resolution, for example 320 × 240. That is, the resolution of the first depth information is higher than that of the second depth information. Consequently, after the second depth information is aligned to the main-camera coordinate system, its corresponding pixels are sparse, providing some sparse valid pixels for the subsequent steps.
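The alignment step just described (converting a TOF depth pixel into the main-camera coordinate system) can be sketched per pixel as back-projection, rigid transform, and re-projection. All calibration values below are stand-ins, not figures from the patent:

```python
def tof_pixel_to_main(u, v, depth, k_tof, r, t, k_main):
    """Map one TOF depth pixel into the main camera image.

    k_tof / k_main are (fx, fy, cx, cy) intrinsics; r is a 3x3 rotation
    (row-major nested lists) and t a translation from the TOF frame to
    the main-camera frame. Returns (u', v', depth in main frame)."""
    fx, fy, cx, cy = k_tof
    # Back-project to a 3D point in the TOF camera frame.
    p = ((u - cx) / fx * depth, (v - cy) / fy * depth, depth)
    # Rigid transform into the main camera frame.
    q = [sum(r[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]
    # Project with the main camera intrinsics.
    fx2, fy2, cx2, cy2 = k_main
    return (fx2 * q[0] / q[2] + cx2, fy2 * q[1] / q[2] + cy2, q[2])
```

Because the 320 × 240 TOF grid projects into a 2584 × 1938 image, the mapped samples land only on scattered pixels, which is exactly the sparsity the text mentions.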
S303: determining, based on the first depth information, the second depth information, and the first confidence, an erroneous data region in the first depth information;
It should be noted that the erroneous data region refers to the region formed by pixels with wrong depth in the first depth information, and it generally lies within the low-confidence region of the first depth information, where the low-confidence region is determined by the first confidence.
In dual-camera mode, the first depth information contains depth errors in regions such as texture-free and repeated-texture areas; to improve the accuracy of depth in dual-camera mode, the embodiment of the present application must determine the erroneous data region so that depth correction can later be applied to it. After the first and second depth information are both aligned to the main-camera coordinate system, the low-confidence region of the first depth information can be determined from the first confidence; then, within the low-confidence region, the erroneous data region in the first depth information can be computed from the first and second depth information, so that it can subsequently be corrected.
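One plausible reading of S303 is a per-pixel test: flag pixels that are both low-confidence and inconsistent with the TOF depth. The thresholds and the shape of the consistency test below are illustrative assumptions, not values from the patent:

```python
def find_error_pixels(depth_dual, depth_tof, confidence,
                      conf_thresh=0.5, rel_tol=0.2):
    """Flag pixels of the dual-camera depth map that are low-confidence
    AND disagree with the (sparse) TOF depth.

    Inputs are 2D nested lists; None in depth_tof marks pixels with no
    projected TOF sample. Returns a boolean mask of the same shape."""
    h, w = len(depth_dual), len(depth_dual[0])
    mask = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            tof = depth_tof[i][j]
            # Only low-confidence pixels with a TOF sample can be flagged.
            if tof is None or confidence[i][j] >= conf_thresh:
                continue
            if abs(depth_dual[i][j] - tof) > rel_tol * tof:
                mask[i][j] = True
    return mask
```

High-confidence pixels are trusted as-is, matching the text's observation that the erroneous data region generally lies inside the low-confidence region.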
S304: correcting the erroneous data region using the second depth information and the main color image to obtain target depth information, and obtaining a depth image from the target depth information.
It should be noted that after the erroneous data region in the first depth information is obtained, the pixels in the erroneous data region can be interpolated and repaired using the second depth information to obtain new depth information. The erroneous data region of the first depth information is thus filled by interpolation and repair from the second depth information, while the non-erroneous region retains the original first depth information; the new depth information is therefore obtained by fusing the first and second depth information. To attenuate artificial synthesis traces, the new depth information can additionally be filtered with guidance from the main color image; the final output is the target depth information, from which the required depth image is obtained. This resolves the depth errors of dual-camera depth information in texture-free, repeated-texture, over-exposed, and under-exposed regions and improves the accuracy of the depth information.
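The fusion in S304 can be sketched as follows. The nearest-sample fill used here is a deliberately simple stand-in for the interpolation/repair step the text describes, and the colour-guided filtering that would normally follow is omitted:

```python
def repair_depth(depth_dual, depth_tof, error_mask):
    """Keep dual-camera depth outside the error region; fill error pixels
    from the nearest available projected TOF sample.

    depth_tof uses None where no TOF sample landed; inputs are 2D
    nested lists of equal shape."""
    h, w = len(depth_dual), len(depth_dual[0])
    samples = [(i, j, depth_tof[i][j]) for i in range(h) for j in range(w)
               if depth_tof[i][j] is not None]
    out = [row[:] for row in depth_dual]  # non-error pixels pass through
    for i in range(h):
        for j in range(w):
            if error_mask[i][j] and samples:
                _, _, d = min(samples,
                              key=lambda s: (s[0] - i) ** 2 + (s[1] - j) ** 2)
                out[i][j] = d
    return out
```

A production pipeline would replace the nearest-sample fill with proper interpolation of the sparse TOF points and then smooth the fused map with an edge-preserving filter guided by the main colour image, as the paragraph above indicates.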
Further, in some embodiments, after S304 the method may also include:
performing blurring processing on the main color image according to the depth image to obtain a target image.
It should be noted that blurring the main color image according to the acquired depth image yields the required target image, which may be a bokeh image. In addition, because the algorithm of the embodiment of the present application has relatively high complexity, the depth-image correction method is mainly applied in the dual-camera portrait-blurring photographing mode and is not applied to preview mode. Specifically, the embodiment of the present application is primarily directed at applications such as blurring and matting, using the advantages of TOF to optimize the depth information of the dual-camera portrait. Since the target image is obtained by blurring the main color image according to the acquired depth image, the depth errors of the dual-camera portrait mode in textureless, repeated-texture, over-exposed and under-exposed regions are repaired in the target image, the accuracy of depth under the dual-camera portrait mode is improved, and the accuracy of portrait blurring is thereby optimized.
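Illustratively, the depth-guided blurring described above can be sketched for a single image row in Python. This is an illustrative sketch only, not the blurring of the embodiment: the 3-tap box blur standing in for bokeh and the `focus_depth`/`tolerance` parameters are assumptions.

```python
def depth_bokeh_row(row, depth, focus_depth, tolerance):
    """Keep pixels whose corrected depth is near the subject sharp;
    average the rest with their neighbours (a 3-tap box blur as a
    crude stand-in for real bokeh rendering)."""
    out = list(row)
    for i in range(1, len(row) - 1):
        if abs(depth[i] - focus_depth) > tolerance:
            out[i] = (row[i - 1] + row[i] + row[i + 1]) / 3
    return out
```

Pixels at the subject's depth pass through unchanged; background pixels are smoothed.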
This embodiment provides a method for correcting a depth image, applied to a terminal device. A raw image corresponding to a target object, and a main color image and a secondary color image corresponding to the target object, are acquired; the raw image is collected from the target object by a TOF sensor, and the main and secondary color images are collected from the target object by a dual camera. Then, according to the main color image and the secondary color image, first depth information and a first confidence corresponding to the target object are determined using a preset dual-camera algorithm; second depth information corresponding to the target object is determined according to the raw image; based on the first depth information, the second depth information and the first confidence, an erroneous data region in the first depth information is determined; finally, the erroneous data region is corrected by means of the second depth information and the main color image to obtain target depth information, and a depth image is obtained according to the target depth information. Since the second depth information is obtained by TOF while the first depth information is obtained by the dual camera, optimizing the first depth information with the second depth information repairs the depth errors of the dual-camera portrait mode in textureless, repeated-texture, over-exposed and under-exposed regions, realizing the TOF-based optimization of the dual-camera portrait mode and improving the accuracy of depth under it. In addition, the target depth information is mainly used for blurring the main color image, so the accuracy of portrait blurring can also be optimized and the portrait-blurring effect improved.
In another embodiment of the present application, because the lens precision and manufacturing process of a camera introduce distortion and thereby cause image distortion, and because the optical axes of the main camera and the secondary camera are not parallel under dual-camera mode, distortion correction and epipolar rectification must be performed on the main color image and the secondary color image before the first depth information is calculated under dual-camera mode. Therefore, in some embodiments, for S302, determining the first depth information and the first confidence corresponding to the target object according to the main color image and the secondary color image using the preset dual-camera algorithm may include:
S302a: performing distortion correction processing on the main color image to obtain a corrected main color image;
S302b: performing distortion correction and epipolar rectification processing on the secondary color image to obtain a corrected secondary color image;
It should be noted that the imaging process of the main or secondary camera is in effect a transformation of coordinate points from the world coordinate system into the camera coordinate system. Because the lens precision and manufacturing process of a camera introduce distortion (that is, a straight line in the world coordinate system is no longer straight after the transformation into another coordinate system), the image is distorted, and distortion correction must therefore be applied to the main and secondary color images. In addition, in order to make the optical axes of the main camera and the secondary camera fully parallel, so that the same pixel of the target object has a consistent height in the main color image and in the secondary color image, epipolar rectification must also be applied to the secondary color image, for example using the Bouguet rectification algorithm. Specifically, before correction the optical axes (the baseline) of the main and secondary cameras are not parallel, and the goal of epipolar rectification is to make them substantially parallel; after distortion correction and epipolar rectification, standard parallel binocular vision can be constructed from images with the same field of view (FOV).
It should also be noted that after the main and secondary color images are acquired, they may be scaled according to a preset ratio to obtain low-resolution color images; distortion correction and epipolar rectification are then applied to the low-resolution color images according to the calibration parameters of the dual camera, yielding the corrected main color image and the corrected secondary color image. The calibration parameters of the dual camera may be calculated according to a preset calibration algorithm, such as Zhang Zhengyou's calibration method, or may be provided directly by the manufacturer or supplier of the dual camera. The preset ratio is a ratio value set in advance according to the target resolution; in practical applications it is set according to actual conditions, and the embodiment of the present application does not specifically limit it.
Because the epipolar lines between the main camera and the secondary camera on the terminal device are not parallel, the same pixel of the target object sits at different heights in the main color image and in the secondary color image; after epipolar rectification, the heights of the same pixel in the two images become consistent. In this way, when the main and secondary color images are stereo-matched, matching pixels only need to be searched for along the same row.
Illustratively, Fig. 5 shows a comparison of the effect of epipolar rectification provided by an embodiment of the present application. In Fig. 5, before rectification of the main and secondary cameras their optical axes are not parallel, as shown in (a); at this time, for pixel 1 of the target object, its height in the main color image and in the secondary color image is inconsistent, as shown in (b). To compute the first depth information, epipolar rectification is applied to the main and secondary cameras; after rectification their optical axes are substantially parallel, as shown in (c), and pixel 1 sits at the same height in the main color image as in the secondary color image, as shown in (d). Stereo matching then only has to search for matching pixels along the same row, which greatly improves efficiency.
S302c: for each pixel of the target object, determining, based on the corrected main color image and the corrected secondary color image, first depth information and a first confidence corresponding to each pixel using the preset dual-camera algorithm; wherein the first confidence is used to characterize the accuracy of the first depth information.
It should be noted that after the corrected main color image and the corrected secondary color image are obtained, the first depth information and the first confidence corresponding to each pixel of the target object can be determined from them; both the first depth information and the first confidence are per-pixel quantities.
Further, in some embodiments, for S302c, determining the first depth information corresponding to each pixel based on the corrected main color image and the corrected secondary color image may include:
performing disparity matching calculation on the corrected main color image and the corrected secondary color image by means of a dual-camera matching algorithm to obtain a disparity value corresponding to each pixel;
performing depth conversion on the disparity value by means of a first preset conversion model to obtain the first depth information corresponding to each pixel.
It should be noted that the dual-camera matching algorithm is a preset algorithm or model for disparity calculation, the classic disparity-calculation component of the preset dual-camera algorithm; it may be the Semi-Global Matching (SGM) algorithm, the Cross-Scale Cost Aggregation (CSCA) algorithm, or the like, and the embodiment of the present application does not specifically limit it. Illustratively, Fig. 6 shows the effect of dual-camera disparity calculation provided by an embodiment of the present application: disparity matching calculation is performed on the two images (a) and (b), finally producing the disparity map shown in (c).
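Illustratively, while SGM and CSCA are left unspecified here, the core idea of matching along a rectified scanline can be sketched with a minimal winner-take-all matcher using a sum-of-absolute-differences (SAD) cost. This is an illustrative stand-in only, not the matching algorithm of the embodiment; the window size and maximum-disparity parameters are assumptions.

```python
def sad_disparity(left_row, right_row, window, max_disp):
    """Winner-take-all disparity for one epipolar-rectified scanline:
    for each left pixel, slide a window over the right row and keep
    the shift (disparity) with the smallest SAD cost."""
    half = window // 2
    disp = [0] * len(left_row)
    for x in range(half, len(left_row) - half):
        best_cost, best_d = float("inf"), 0
        for d in range(0, min(max_disp, x - half) + 1):
            cost = sum(abs(left_row[x + k] - right_row[x - d + k])
                       for k in range(-half, half + 1))
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp
```

Because the images are rectified, the search stays on a single row, exactly as described above.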
In addition, the first preset conversion model is a preset model for disparity-to-depth conversion, typically a triangulation model that calculates depth information from the disparity value and preset imaging parameters; in the embodiment of the present application the preset imaging parameters may include the baseline distance and the focal length. For example, the first preset conversion model may be Z = Baseline * focal / Disparity, where Z denotes the depth information, Baseline denotes the distance between the baselines or optical axes, focal denotes the focal length, and Disparity denotes the disparity value; however, the embodiment of the present application does not specifically limit the first preset conversion model.
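Illustratively, the conversion model Z = Baseline * focal / Disparity can be sketched directly in Python. Mapping zero (invalid) disparities to `None` is an assumption for illustration; the embodiment does not specify how invalid disparities are handled.

```python
def disparity_to_depth(disparities, baseline, focal):
    """First preset conversion model: Z = Baseline * focal / Disparity.
    Zero disparity has no finite depth and is mapped to None here."""
    return [baseline * focal / d if d else None for d in disparities]
```

For example, with a 50 mm baseline and an 800-pixel focal length, a disparity of 2 pixels gives a depth of 20000 mm.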
Illustratively, Fig. 7 shows a model for calculating depth information provided by an embodiment of the present application. As shown in Fig. 7, O_R is the position of the main camera, O_T is the position of the secondary camera, and the distance between O_R and O_T is the baseline distance, denoted b; P is the position of the target object, P1 is the image point obtained when the terminal device captures the target object P through the main camera, P1' is the image point obtained when the terminal device captures the target object P through the secondary camera, x_R is the coordinate of the image point P1 in the main color image, x_T is the coordinate of the image point P1' in the secondary color image, and f is the focal length of the main and secondary cameras. From similar triangles, (b - (x_R - x_T)) / b = (Z - f) / Z, and hence Z = b*f / (x_R - x_T) = b*f / d, where d is the disparity value. Therefore, once the terminal device knows the baseline distance b, the focal length f and the disparity value d, it can calculate the first depth information corresponding to each pixel according to the first preset conversion model (for example Z = b*f/d).
After the corrected main color image and the corrected secondary color image are obtained, the first confidence can also be determined. In the embodiment of the present application, the first confidence may be calculated from the matching-similarity cost between the corrected main color image and the corrected secondary color image, or from the texture-gradient difference between the main color image and the secondary color image; the embodiment of the present application does not specifically limit this.
Further, in some embodiments, for S302c, determining the first confidence corresponding to each pixel based on the corrected main color image and the corrected secondary color image may include:
performing matching-similarity calculation on the corrected main color image and the corrected secondary color image to obtain a matching-similarity cost corresponding to each pixel;
determining the first confidence corresponding to each pixel based on the matching-similarity cost.
It should be noted that by performing matching-similarity calculation on the corrected main and secondary color images, the matching-similarity cost corresponding to each pixel of the target object can be obtained; the specific way of calculating the matching-similarity cost can be configured according to actual conditions in practical applications, and the embodiment of the present application does not specifically limit it.
It should also be noted that after the matching-similarity cost is obtained, the first confidence corresponding to each pixel can be further determined from it. Specifically, the terminal device may set a cost threshold and compare each pixel's matching-similarity cost with it to determine the first confidence; for example, when the matching-similarity cost of a pixel is greater than the cost threshold, the corrected main and secondary color images still have a relatively high probability of mismatching at that pixel, and the pixel's first confidence is accordingly low. In addition, for each pixel the smallest and the second-smallest matching-similarity costs can be obtained by matching-similarity calculation; if the smallest cost is close to the second-smallest, this may also indicate that the pixel's first confidence is relatively low.
Further, in some embodiments, for S302c, determining the first confidence corresponding to each pixel based on the corrected main color image and the corrected secondary color image may include:
calculating a first texture gradient corresponding to each pixel under the corrected main color image;
determining the first confidence corresponding to each pixel based on the first texture gradient.
It should be noted that the first confidence may also be related to texture richness. From the corrected main color image, the first texture gradient corresponding to each pixel can be calculated; the first confidence can then be determined from the first texture gradient. The specific way of calculating the texture gradient can be configured according to actual conditions in practical applications, and the embodiment of the present application does not specifically limit it.
This embodiment provides a method for correcting a depth image, applied to a terminal device, and elaborates on the specific implementation of the foregoing embodiment. It can be seen that the technical solution of this embodiment can repair the depth errors of the dual-camera portrait mode in textureless, repeated-texture, over-exposed and under-exposed regions, realizing the TOF-based optimization of the dual-camera portrait mode and improving the accuracy of depth under it; in addition, the target depth information is mainly used for blurring the main color image, so the accuracy of portrait blurring can also be optimized and the portrait-blurring effect improved.
In yet another embodiment of the present application, the first depth information is calculated from the main color image and the secondary color image and is therefore already in the main-camera coordinate system, so no further coordinate-system conversion is required for it; the second depth information, however, is calculated from the raw image and is in the TOF coordinate system, so it must undergo coordinate-system conversion and be aligned into the main-camera coordinate system. Therefore, in some embodiments, for S302, determining the second depth information corresponding to the target object according to the raw image may include:
S302d: according to the raw image, obtaining initial depth information of each pixel of the target object under the TOF coordinate system;
It should be noted that the TOF sensor captures the target object to obtain the raw image corresponding to it (for example a set of RAW frames); depth calculation is then performed on the raw image to obtain the initial depth information of each pixel of the target object under the TOF coordinate system, for example using the four-phase method.
S302e: performing coordinate-system conversion on the initial depth information by means of a second preset conversion model to obtain second depth information of each pixel under the main-camera coordinate system.
It should be noted that the second preset conversion model is a preset model for coordinate-system conversion, for example transforming coordinates from the TOF coordinate system into the main-camera coordinate system. In this way, according to the second preset conversion model, the initial depth information under the TOF coordinate system can be converted into the second depth information under the main-camera coordinate system, realizing pixel alignment.
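Illustratively, the coordinate-system conversion can be sketched as a rigid transform followed by a pinhole projection. This is an illustrative sketch under assumed conventions: `R`, `t` stand for the extrinsic calibration between the TOF sensor and the main camera, and `K_main` for the main-camera intrinsics; the embodiment does not fix this parameterization.

```python
import numpy as np

def tof_to_main(points_tof, R, t, K_main):
    """Second preset conversion model (sketch): rigidly transform 3-D
    points from the TOF coordinate system into the main-camera
    coordinate system with (R, t), then project them with the
    main-camera intrinsics K_main. points_tof: (N, 3) array."""
    pts_main = points_tof @ R.T + t    # align to the main-camera frame
    z = pts_main[:, 2]                 # second depth information per point
    uv = pts_main @ K_main.T
    uv = uv[:, :2] / uv[:, 2:3]        # sparse pixel positions in the main image
    return uv, z
```

The projected pixel positions are sparse relative to the main image grid, which is why the aligned second depth information only provides scattered valid pixels.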
Further, in some embodiments, before S302e the method may also include:
calibrating between the TOF sensor and the dual camera according to a preset calibration algorithm to obtain calibration parameters;
correspondingly, performing coordinate-system conversion on the initial depth information by means of the second preset conversion model to obtain the second depth information of each pixel under the main-camera coordinate system may include:
transforming the initial depth information into the main-camera coordinate system based on the calibration parameters and the second preset conversion model to obtain the second depth information of each pixel under the main-camera coordinate system.
It should be noted that before pixel alignment is performed, the TOF sensor and the dual camera must first be jointly calibrated. The calibration parameters may be calculated according to a preset calibration algorithm, such as Zhang Zhengyou's calibration method, or may be provided directly by the manufacturer or supplier of the dual camera; the embodiment of the present application does not specifically limit this.
In this way, after the calibration parameters are obtained, the initial depth information can be transformed into the main-camera coordinate system according to the calibration parameters and the second preset conversion model, yielding the second depth information of each pixel under the main-camera coordinate system. This realizes the pixel alignment of the first depth information and the second depth information, which facilitates their subsequent fusion and thereby the correction of the erroneous data region in the first depth information. It should be noted that the resolution of the main camera is usually large, so the resolution of the depth information it produces is generally greater than that of the depth information produced by TOF; after the initial depth information is aligned to the main-camera coordinate system, the pixels of the resulting second depth information are therefore sparse, providing some sparse valid pixels for the subsequent depth-information fusion.
This embodiment provides a method for correcting a depth image, applied to a terminal device, and elaborates on the specific implementation of the foregoing embodiment. It can be seen that the technical solution of this embodiment can repair the depth errors of the dual-camera portrait mode in textureless, repeated-texture, over-exposed and under-exposed regions, realizing the TOF-based optimization of the dual-camera portrait mode and improving the accuracy of depth under it; in addition, the target depth information is mainly used for blurring the main color image, so the accuracy of portrait blurring can also be optimized and the portrait-blurring effect improved.
In yet another embodiment of the present application, the erroneous data region in the first depth information is usually located within the low-confidence region of the first depth information, and it can be specifically determined by judging the difference between the first depth information and the second depth information. Therefore, in some embodiments, for S303, determining the erroneous data region in the first depth information based on the first depth information, the second depth information and the first confidence may include:
S303a: determining a low-confidence region in the first depth information according to the first confidence;
It should be noted that the low-confidence region in the first depth information may be determined by the first confidence, and the first confidence is related to the matching-similarity cost and to texture richness. That is, the low-confidence region in the first depth information may be determined from the matching-similarity cost between pixels of the first depth information and pixels of the second depth information, or from the texture gradients of pixels in the first depth information and the second depth information.
Further, assume the first confidence threshold is the decision value for judging whether the first confidence counts as low confidence; the judgment can then be made against a first confidence threshold obtained in advance. Specifically, the first confidence is compared with the first confidence threshold; when the first confidence is smaller than the first confidence threshold, the pixel corresponding to that first confidence belongs to the low-confidence region, and the low-confidence region in the first depth information is thereby obtained.
S303b: for each pixel to be judged in the low-confidence region, calculating the difference between the first depth information corresponding to that pixel and the corresponding second depth information within the effective neighborhood of that pixel;
It should be noted that the effective neighborhood in the second depth information refers to the region closest to the pixel to be judged, and its size is limited, for example 5*5 or 7*7; in practical applications the size of the effective neighborhood can be configured according to actual conditions, and the embodiment of the present application does not specifically limit it.
In this way, when there are valid pixels associated with the second depth information within the effective neighborhood, the difference between the first depth information of the pixel to be judged and the second depth information of those valid pixels can be calculated, so that whether the pixel to be judged is an erroneous point can subsequently be determined from the magnitude of that difference.
S303c: comparing the difference with a preset difference threshold;
It should be noted that the preset difference threshold is a preset decision value for judging whether the pixel to be judged is an erroneous point. After step S303c, according to the comparison result between the difference and the preset difference threshold: when the difference is greater than the preset difference threshold, step S303d is executed; when the difference is not greater than the preset difference threshold, step S303e is executed.
S303d: when the difference is greater than the preset difference threshold, marking the pixel to be judged as an erroneous point, and obtaining the erroneous data region in the first depth information from the marked erroneous points;
S303e: when the difference is not greater than the preset difference threshold, retaining the first depth information corresponding to the pixel to be judged, and obtaining the retained data region in the first depth information.
It should be noted that under dual-camera mode the first depth information is prone to depth errors in textureless or repeated-texture regions; to improve depth accuracy under dual-camera mode, the embodiment of the present application needs to determine the erroneous data region, dividing the first depth information into an erroneous data region and a correct data region. Specifically, after the first depth information and the second depth information have been aligned to the main-camera coordinate system, the low-confidence region in the first depth information is determined according to the first confidence; then, within the low-confidence region, the difference between the first depth information of a pixel to be judged and the second depth information within its effective neighborhood is calculated, and the difference is compared with the preset difference threshold. When the difference is greater than the preset difference threshold, the pixel to be judged is marked as an erroneous point, and the erroneous data region in the first depth information is obtained from the marked erroneous points; when the difference is not greater than the preset difference threshold, the first depth information of the pixel to be judged is correct and is retained, giving the retained data region, i.e. the correct data region, in the first depth information. The first depth information is thereby divided into an erroneous data region and a correct data region.
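Illustratively, the decision logic of S303a–S303e can be sketched over a flat list of pixels. This is a sketch under assumptions: the per-pixel `depth2` values stand for the second depth information already gathered from each pixel's effective neighborhood, with `None` marking pixels whose neighborhood holds no valid TOF sample.

```python
def mark_error_points(depth1, depth2, confidence, conf_thresh, diff_thresh):
    """Inside the low-confidence region, compare the first (dual-camera)
    depth with the aligned second (TOF) depth; a difference above the
    preset difference threshold marks the pixel as an erroneous point."""
    error_points = []
    for i, (d1, d2, c) in enumerate(zip(depth1, depth2, confidence)):
        if c >= conf_thresh:       # outside the low-confidence region
            continue
        if d2 is None:             # S303f: no valid pixel in the neighborhood
            continue
        if abs(d1 - d2) > diff_thresh:
            error_points.append(i)  # erroneous point to be corrected later
    return error_points
```

Pixels that fail either precondition (high confidence, or no valid TOF neighbor) simply retain their first depth information.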
Further, in some embodiments, the method may also include:
S303f: if there is no valid pixel associated with the second depth information within the effective neighborhood of the pixel to be judged, not executing the step of comparing the difference with the preset difference threshold.
It should be noted that after the initial depth information generated by TOF is aligned to the main-camera coordinate system, the pixels of the resulting second depth information are sparse; for the second depth information there may therefore be no valid pixels within the effective neighborhood of a pixel to be judged. That is, when there are no valid pixels corresponding to the second depth information within the effective neighborhood of the pixel to be judged, the step of comparing the difference with the preset difference threshold cannot be executed, and the first depth information corresponding to the pixel to be judged can be retained.
Further, in some embodiments, for S304, correcting the erroneous data region by means of the second depth information and the main color image to obtain the target depth information may include:
S304a: for each erroneous point in the erroneous data region, performing weighted interpolation calculation on the erroneous data region by means of the second depth information corresponding to each erroneous point, and replacing the erroneous data region in the first depth information with the calculated depth information to obtain new depth information;
S304b: filtering the new depth information according to the main color image to obtain the target depth information.
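Illustratively, the weighted interpolation of S304a can be sketched in one dimension. The Gaussian form of the spatial and color weights and the sigma values are assumptions; the embodiment only states that color-similarity and spatial-distance weights are combined.

```python
import math

def weighted_interpolation(idx, valid_samples, color, sigma_s=2.0, sigma_c=10.0):
    """Fill one erroneous point from the valid TOF samples in its
    effective neighborhood, weighting each sample by spatial distance
    and by color similarity in the main color image.
    valid_samples: list of (position, depth) pairs."""
    num = den = 0.0
    for pos, depth in valid_samples:
        w_s = math.exp(-((idx - pos) ** 2) / (2 * sigma_s ** 2))
        w_c = math.exp(-((color[idx] - color[pos]) ** 2) / (2 * sigma_c ** 2))
        num += w_s * w_c * depth
        den += w_s * w_c
    return num / den if den else None
```

Samples that are close in space and similar in color dominate the interpolated depth; with no valid samples the function returns `None`, matching S303f.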
It should be noted that the erroneous point for including described in erroneous data regions, can use effective picture in effective neighborhood Corresponding second depth information of vegetarian refreshments, while can be combined with the weight of color similarity and space length, to wrong data Region is weighted interpolation calculation, can obtain the depth information being calculated at this time;Recycle the depth information being calculated Replace the erroneous data regions in the first depth information, available new depth information;As it can be seen that the new depth information is It is merged by the first depth information and the second depth information;It, can also be with master in order to weaken artificial synthesized trace Color image is filtered new depth information, as guidance so that artificial synthesized trace is reduced, according to what is exported The available depth image of target depth information, is also achieved that the correction process to the erroneous data regions.
It should also be noted that the filtering modes include guided filtering (Guided Filter), domain transform filtering (Domain Transform Filter), weighted median filtering (Weighted Median Filter), and the like. In practical applications, the mode may be set according to actual conditions, and the embodiments of the present application are not specifically limited in this regard.
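Of these options, guided filtering is the classic formulation of filtering a depth map with the color image as guidance. The sketch below is a minimal single-channel guided filter (naive box means for clarity, not speed) and is not the patent's implementation; production code would typically use an optimized library routine.

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window, clipped at the borders."""
    h, w = img.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out

def guided_filter(I, p, r=1, eps=1e-4):
    """Filter depth map p using (grayscale) guide image I: the output is a
    locally linear function of the guide, so depth edges follow color edges."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    corr_Ip, corr_II = box_mean(I * p, r), box_mean(I * I, r)
    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)          # per-window linear coefficients
    b = mean_p - a * mean_I
    return box_mean(a, r) * I + box_mean(b, r)
```

With a tiny `eps`, filtering a textured guide by itself reproduces the guide, which is a convenient sanity check of the implementation.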
Illustratively, referring to Fig. 8, a detailed flow diagram of a depth image correction method provided by an embodiment of the present application is shown. As shown in Fig. 8, the terminal device collects an original image, which may consist of a group of RAW frames, through the TOF sensor; it collects a main color image through the main camera and a secondary color image through the secondary camera. Distortion correction processing is then applied to the main color image, and distortion correction and epipolar rectification processing are applied to the secondary color image, yielding a corrected main color image and a corrected secondary color image. A dual-camera matching algorithm then matches the corrected main color image against the corrected secondary color image, yielding the first depth information and the first confidence. Depth calculation is also performed on the original image to obtain initial depth information; because the initial depth information is in the TOF coordinate system, it must be converted into the main-camera coordinate system to obtain the second depth information, so that the second depth information is pixel-aligned with the first depth information. According to the first confidence, a low-confidence region in the first depth information is determined, and the difference between the first depth information and the second depth information is calculated within the low-confidence region. Whether the difference exceeds the preset difference threshold is then judged, that is, whether the difference is safe. When the difference is greater than the preset difference threshold, the difference is unsafe; the pixel to be judged is marked as an error point, yielding the error data region in the first depth information, and the error data region is then corrected (for example, by repair, filling, or interpolation processing) using the second depth information. When the difference does not exceed the preset difference threshold, the difference is safe, and the first depth information corresponding to the pixel to be judged may be retained. Finally the two are fused and the final depth image is output.
After the depth image is obtained, blurring processing may further be applied to the main color image according to the depth image, which improves the accuracy of portrait blurring. Referring to Fig. 9, a comparison of portrait blurring effects provided by an embodiment of the present application is shown. In Fig. 9, the background of both (a) and (b) is a repeated-texture region; (a) shows the portrait blurring effect in dual-camera mode, and (b) shows the portrait blurring effect in dual-camera mode combined with TOF mode. It can be seen that the portrait blurring effect in the combined dual-camera + TOF mode is better.
This embodiment provides a depth image correction method applied to a terminal device, and elaborates the specific implementation of the preceding embodiments. It can be seen that, through the technical solution of this embodiment, depth errors in dual-camera portrait mode in textureless regions, repeated-texture regions, over-exposed regions, under-exposed regions, and the like can be repaired, thereby achieving a TOF-based optimization of dual-camera portrait mode and improving the accuracy of depth in that mode. In addition, the target depth information is mainly used for blurring processing of the main color image, which also improves the accuracy and the effect of portrait blurring.
In another embodiment of the present application, because TOF mode performs poorly outdoors, the second depth information may contain a large number of holes, making it unsuitable to execute the depth image correction method of the embodiments of the present application. Therefore, in some embodiments, after determining the second depth information corresponding to the target object according to the original image, the method further includes:

determining, according to the original image, a second confidence corresponding to the target object, where the second confidence is used to characterize the accuracy of the second depth information;

determining, based on the second confidence, the number of holes in the second depth information; and

if the number of holes is greater than a preset hole threshold, not executing the depth image correction method.
It should be noted that, since TOF mode performs poorly outdoors, a judgment on the number of holes or the hole rate may be added here. If the second depth information obtained in TOF mode contains a large number of holes or low-confidence regions, the depth image correction method of the embodiments of the present application (specifically, the fusion of the first depth information and the second depth information) may be skipped, and the depth image is acquired using the normal dual-camera mode instead. Specifically, assuming the preset hole threshold is a decision value for measuring whether the number of holes is excessive, the number of holes in the second depth information may be determined from the second confidence; if that number is greater than the preset hole threshold, the number of holes is excessive, the correction method is not executed, and the depth image is obtained using the normal dual-camera mode only.
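The gating decision above can be sketched in a few lines. Counting holes as low-second-confidence pixels, and the particular threshold values, are illustrative assumptions; the patent leaves both to the implementation.

```python
import numpy as np

def should_run_correction(conf2, conf_thresh=0.5, hole_thresh=100):
    """Gate the depth-correction method on the second (TOF) confidence map:
    a pixel whose second confidence falls below conf_thresh is counted as a
    hole, and too many holes (e.g. outdoors) means the fusion is skipped and
    the normal dual-camera depth image is used instead."""
    holes = int((conf2 < conf_thresh).sum())
    return holes <= hole_thresh
```

The same check could equally be expressed as a hole rate (holes divided by total pixels) against a fractional threshold.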
In addition, the resolution of the depth information generated by the TOF sensor in the terminal device is relatively low, so it is mainly used to correct depth errors over relatively large regions; for hollowed-out scenes or scenes very rich in depth levels, it does not yield a good correction effect. Therefore, the depth image correction method of the embodiments of the present application is mainly intended for use in dual-camera portrait mode.
This embodiment provides a depth image correction method applied to a terminal device, and elaborates the specific implementation of the preceding embodiments. It can be seen that, through the technical solution of this embodiment, depth errors in dual-camera portrait mode in textureless regions, repeated-texture regions, over-exposed regions, under-exposed regions, and the like can be repaired, thereby achieving a TOF-based optimization of dual-camera portrait mode and improving the accuracy of depth in that mode. In addition, the target depth information is mainly used for blurring processing of the main color image, which also improves the accuracy and the effect of portrait blurring.
Based on the same inventive concept as the preceding embodiments, referring to Fig. 10, a schematic structural diagram of another terminal device 100 provided by an embodiment of the present application is shown. As shown in Fig. 10, the terminal device 100 may include an acquiring unit 1001, a determination unit 1002, and a correction unit 1003, wherein:
the acquiring unit 1001 is configured to obtain an original image corresponding to a target object and a main color image and a secondary color image corresponding to the target object, where the original image is collected from the target object by a TOF sensor, and the main color image and the secondary color image are collected from the target object by a dual camera;
the determination unit 1002 is configured to determine, according to the main color image and the secondary color image, first depth information and a first confidence corresponding to the target object using a preset dual-camera algorithm; to determine, according to the original image, second depth information corresponding to the target object; and is further configured to determine, based on the first depth information, the second depth information, and the first confidence, an error data region in the first depth information; and
the correction unit 1003 is configured to correct the error data region by means of the second depth information and the main color image to obtain target depth information, and to obtain a depth image according to the target depth information.
In the above scheme, referring to Fig. 10, the terminal device 100 may further include a blurring unit 1004 configured to perform blurring processing on the main color image according to the depth image to obtain a target image.
In the above scheme, referring to Fig. 10, the terminal device 100 may further include an acquisition unit 1005 configured to acquire the target object through the TOF sensor to obtain the original image corresponding to the target object, and further configured to acquire the target object through the dual camera to obtain the main color image corresponding to the target object under the main camera and the secondary color image corresponding to the target object under the secondary camera, where the dual camera includes the main camera and the secondary camera.
In the above scheme, the correction unit 1003 is further configured to perform distortion correction processing on the main color image to obtain a corrected main color image, and to perform distortion correction and epipolar rectification processing on the secondary color image to obtain a corrected secondary color image; and

the determination unit 1002 is specifically configured to determine, for each pixel in the target object and based on the corrected main color image and the corrected secondary color image, first depth information corresponding to each pixel and a first confidence corresponding to each pixel using the preset dual-camera algorithm, where the first confidence is used to characterize the accuracy of the first depth information.
In the above scheme, referring to Fig. 10, the terminal device 100 may further include a computing unit 1006 and a converting unit 1007, wherein:

the computing unit 1006 is configured to perform disparity matching calculation on the corrected main color image and the corrected secondary color image through a dual-camera matching algorithm to obtain a disparity value corresponding to each pixel; and

the converting unit 1007 is configured to perform depth conversion on the disparity value through a first preset transformation model to obtain the first depth information corresponding to each pixel.
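For a rectified dual-camera pair, the standard disparity-to-depth relation is Z = f·B/d, with f the focal length in pixels, B the baseline, and d the disparity in pixels. The patent does not disclose the exact form of the first preset transformation model, so the sketch below uses this standard relation as an assumption.

```python
def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a disparity value (pixels) to depth (meters) for a rectified
    stereo pair: Z = f * B / d.  A standard stereo relation offered as a
    plausible instance of the 'first preset transformation model'."""
    if disparity <= 0:
        return float("inf")  # zero/negative disparity: invalid or at infinity
    return focal_px * baseline_m / disparity
```

For example, with a 1000 px focal length and a 2 cm baseline, a 10 px disparity corresponds to a depth of 2 m.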
In the above scheme, the computing unit 1006 is further configured to perform matching similarity calculation on the corrected main color image and the corrected secondary color image to obtain a matching similarity cost corresponding to each pixel; and

the determination unit 1002 is further configured to determine, based on the matching similarity cost, the first confidence corresponding to each pixel.
In the above scheme, the computing unit 1006 is further configured to calculate a first texture gradient of each pixel under the corrected main color image; and

the determination unit 1002 is further configured to determine, based on the first texture gradient, the first confidence corresponding to each pixel.
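The texture-gradient route to the first confidence rests on the observation that dual-camera matching is unreliable in flat (textureless) regions. A minimal sketch, assuming the confidence is simply the normalized gradient magnitude of the corrected main color image (the patent does not fix the normalization):

```python
import numpy as np

def texture_confidence(gray):
    """First-confidence sketch from the local texture gradient of the
    corrected main color image: textureless regions get confidence near 0,
    strongly textured regions near 1."""
    gy, gx = np.gradient(gray.astype(float))  # per-axis finite differences
    mag = np.hypot(gx, gy)                    # gradient magnitude
    m = mag.max()
    return mag / m if m > 0 else mag          # normalize into [0, 1]
```

A matching-cost-based confidence (previous scheme) and this gradient-based confidence could also be combined, but the patent presents them as alternatives.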
In the above scheme, the converting unit 1007 is further configured to obtain, according to the original image, initial depth information of each pixel in the target object under the TOF coordinate system, and to perform coordinate system conversion on the initial depth information through a second preset transformation model to obtain second depth information of each pixel under the main-camera coordinate system.
In the above scheme, the computing unit 1006 is further configured to perform calibration between the TOF sensor and the dual camera according to a preset calibration algorithm to obtain calibration parameters; and

the converting unit 1007 is specifically configured to transform, based on the calibration parameters and the second preset transformation model, the initial depth information into the main-camera coordinate system to obtain the second depth information of each pixel under the main-camera coordinate system.
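Given calibrated intrinsics for both sensors and the rigid transform (R, t) between them, the coordinate system conversion per pixel is a back-projection, rigid motion, and re-projection. This is a standard pinhole-camera sketch of what the second preset transformation model could be; the patent does not disclose its exact form, and all parameter names here are assumptions.

```python
import numpy as np

def tof_depth_to_main(u, v, z, K_tof, R, t, K_main):
    """Convert one TOF depth sample to the main-camera coordinate system:
    back-project pixel (u, v) with depth z using the TOF intrinsics K_tof,
    apply the calibrated rotation R and translation t, then re-project with
    the main-camera intrinsics K_main.  Returns the main-camera pixel
    coordinates and the second-depth value there."""
    p = np.linalg.inv(K_tof) @ np.array([u * z, v * z, z])  # TOF 3-D point
    q = R @ p + t                                           # main-camera frame
    uv = K_main @ q                                         # re-project
    return uv[0] / uv[2], uv[1] / uv[2], q[2]
```

Because the TOF resolution is lower than the main camera's, the re-projected samples land sparsely on the main-camera grid, which is exactly why the valid-neighborhood handling described earlier is needed.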
In the above scheme, referring to Fig. 10, the terminal device 100 may further include a judging unit 1008, wherein:

the determination unit 1002 is further configured to determine, according to the first confidence, a low-confidence region in the first depth information;

the computing unit 1006 is further configured to calculate, for each pixel to be judged in the low-confidence region, a difference between the first depth information corresponding to that pixel and the second depth information corresponding to its valid neighborhood; and

the judging unit 1008 is configured to compare the difference with a preset difference threshold, and, when the difference is greater than the preset difference threshold, to mark the pixel to be judged as an error point and obtain, according to the marked error points, the error data region in the first depth information.
In the above scheme, the judging unit 1008 is further configured to retain, when the difference is not greater than the preset difference threshold, the first depth information corresponding to the pixel to be judged, obtaining a retained data region in the first depth information.
In the above scheme, the judging unit 1008 is further configured not to execute the step of comparing the difference with the preset difference threshold if no valid pixel associated with the second depth information exists in the valid neighborhood of the pixel to be judged.
In the above scheme, the correction unit 1003 is specifically configured to perform, for each error point in the error data region, weighted interpolation on the error data region through the second depth information corresponding to each error point; to replace the error data region in the first depth information with the calculated depth information, obtaining new depth information; and to filter the new depth information according to the main color image to obtain the target depth information.
In the above scheme, the determination unit 1002 is further configured to determine, according to the original image, a second confidence corresponding to the target object, where the second confidence is used to characterize the accuracy of the second depth information, and to determine, based on the second confidence, the number of holes in the second depth information; and

the judging unit 1008 is further configured not to execute the depth image correction method if the number of holes is greater than a preset hole threshold.
It should be appreciated that, in this embodiment, a "unit" may be part of a circuit, part of a processor, part of a program or software, and so on; it may of course also be a module, or it may be non-modular. The component parts in this embodiment may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software function module.
If the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this embodiment in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or some of the steps of the method of this embodiment. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
Accordingly, this embodiment provides a computer storage medium storing a depth image correction program which, when executed by at least one processor, implements the method of any of the preceding embodiments.
Based on the composition of the above terminal device 100 and the computer storage medium, referring to Fig. 11, a specific hardware structure of the terminal device 100 provided by an embodiment of the present application is shown, which may include a communication interface 1101, a memory 1102, and a processor 1103, the components being coupled together through a bus system 1104. It can be understood that the bus system 1104 is used to realize connection and communication among these components; in addition to a data bus, the bus system 1104 also includes a power bus, a control bus, and a status signal bus, but for clarity of explanation the various buses are all labeled as the bus system 1104 in Fig. 11. The communication interface 1101 is used for receiving and sending signals in the course of receiving information from and sending information to other external devices;
the memory 1102 is used for storing a computer program runnable on the processor 1103; and

the processor 1103 is used for executing, when running the computer program:

obtaining an original image corresponding to a target object and a main color image and a secondary color image corresponding to the target object, where the original image is collected from the target object by a TOF sensor, and the main color image and the secondary color image are collected from the target object by a dual camera;

determining, according to the main color image and the secondary color image, first depth information and a first confidence corresponding to the target object using a preset dual-camera algorithm, and determining, according to the original image, second depth information corresponding to the target object;

determining, based on the first depth information, the second depth information, and the first confidence, an error data region in the first depth information; and

correcting the error data region by means of the second depth information and the main color image to obtain target depth information, and obtaining a depth image according to the target depth information.
It can be appreciated that the memory 1102 in the embodiments of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), used as an external cache. By way of exemplary but non-restrictive illustration, many forms of RAM are available, such as static RAM (Static RAM, SRAM), dynamic RAM (Dynamic RAM, DRAM), synchronous DRAM (Synchronous DRAM, SDRAM), double data rate SDRAM (Double Data Rate SDRAM, DDR SDRAM), enhanced SDRAM (Enhanced SDRAM, ESDRAM), synchlink DRAM (Synchlink DRAM, SLDRAM), and direct rambus RAM (Direct Rambus RAM, DRRAM). The memory 1102 of the systems and methods described herein is intended to include, without being limited to, these and any other suitable types of memory.
The processor 1103 may be an integrated circuit chip with signal processing capability. In the course of implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 1103 or by instructions in the form of software. The processor 1103 may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being completed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1102, and the processor 1103 reads the information in the memory 1102 and completes the steps of the above method in combination with its hardware.
It can be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof. For a hardware implementation, the processing unit may be implemented in one or more application-specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field-programmable gate arrays (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for executing the functions described herein, or a combination thereof.

For a software implementation, the techniques described herein may be implemented through modules (e.g., procedures, functions, and so on) that perform the functions described herein. Software code may be stored in a memory and executed by a processor. The memory may be implemented in the processor or outside the processor.
Optionally, as another embodiment, the processor 1103 is further configured to execute, when running the computer program, the method of any of the preceding embodiments.

Optionally, as another embodiment, the terminal device 100 may include an application processor, a main camera, a secondary camera, an infrared transmitter, and a laser transmitter, where the application processor may be configured to execute, when running the computer program, the method of any of the preceding embodiments.
It should be noted that, in the present application, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article, or device. In the absence of further limitations, an element defined by the sentence "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device including that element.

The serial numbers of the above embodiments of the present application are for description only and do not represent the superiority or inferiority of the embodiments.

The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily, in the absence of conflict, to obtain new method embodiments.

The features disclosed in the several product embodiments provided in the present application may be combined arbitrarily, in the absence of conflict, to obtain new product embodiments.

The features disclosed in the several method or device embodiments provided in the present application may be combined arbitrarily, in the absence of conflict, to obtain new method or device embodiments.

The above is only the specific implementation of the present application, but the protection scope of the present application is not limited thereto. Any change or replacement readily conceivable by those skilled in the art within the technical scope disclosed in the present application shall be covered within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

1. A depth image correction method, characterized in that the method comprises:

obtaining an original image corresponding to a target object and a main color image and a secondary color image corresponding to the target object, wherein the original image is collected from the target object by a time-of-flight (TOF) sensor, and the main color image and the secondary color image are collected from the target object by a dual camera;

determining, according to the main color image and the secondary color image, first depth information and a first confidence corresponding to the target object using a preset dual-camera algorithm; and determining, according to the original image, second depth information corresponding to the target object;

determining, based on the first depth information, the second depth information, and the first confidence, an error data region in the first depth information; and

correcting the error data region by means of the second depth information and the main color image to obtain target depth information, and obtaining a depth image according to the target depth information.
2. The method according to claim 1, characterized in that, after obtaining the depth image according to the target depth information, the method further comprises:

performing blurring processing on the main color image according to the depth image to obtain a target image.
3. The method according to claim 1, characterized in that obtaining the original image corresponding to the target object and the main color image and the secondary color image corresponding to the target object comprises:

acquiring the target object through the TOF sensor to obtain the original image corresponding to the target object; and

acquiring the target object through the dual camera to obtain the main color image corresponding to the target object under a main camera and the secondary color image corresponding to the target object under a secondary camera, wherein the dual camera comprises the main camera and the secondary camera.
4. The method according to claim 1, characterized in that determining, according to the main color image and the secondary color image, the first depth information and the first confidence corresponding to the target object using the preset dual-camera algorithm comprises:

performing distortion correction processing on the main color image to obtain a corrected main color image;

performing distortion correction and epipolar rectification processing on the secondary color image to obtain a corrected secondary color image; and

determining, for each pixel in the target object and based on the corrected main color image and the corrected secondary color image, first depth information corresponding to each pixel and a first confidence corresponding to each pixel using the preset dual-camera algorithm, wherein the first confidence is used to characterize the accuracy of the first depth information.
5. The method according to claim 4, characterized in that determining, based on the corrected main color image and the corrected secondary color image, the first depth information corresponding to each pixel comprises:

performing disparity matching calculation on the corrected main color image and the corrected secondary color image through a dual-camera matching algorithm to obtain a disparity value corresponding to each pixel; and

performing depth conversion on the disparity value through a first preset transformation model to obtain the first depth information corresponding to each pixel.
6. The method according to claim 4, characterized in that determining, based on the corrected main color image and the corrected secondary color image, the first confidence corresponding to each pixel comprises:

performing matching similarity calculation on the corrected main color image and the corrected secondary color image to obtain a matching similarity cost corresponding to each pixel; and

determining, based on the matching similarity cost, the first confidence corresponding to each pixel.
7. The method according to claim 4, characterized in that determining, based on the corrected main color image and the corrected secondary color image, the first confidence corresponding to each pixel comprises:

calculating a first texture gradient of each pixel under the corrected main color image; and

determining, based on the first texture gradient, the first confidence corresponding to each pixel.
8. The method according to claim 1, characterized in that determining, according to the original image, the second depth information corresponding to the target object comprises:

obtaining, according to the original image, initial depth information of each pixel in the target object under a TOF coordinate system; and

performing coordinate system conversion on the initial depth information through a second preset transformation model to obtain second depth information of each pixel under a main-camera coordinate system.
9. The method according to claim 8, characterized in that, before performing the coordinate system conversion on the initial depth information through the second preset transformation model to obtain the second depth information of each pixel under the main-camera coordinate system, the method further comprises:

performing calibration between the TOF sensor and the dual camera according to a preset calibration algorithm to obtain calibration parameters;

correspondingly, performing the coordinate system conversion on the initial depth information through the second preset transformation model to obtain the second depth information of each pixel under the main-camera coordinate system comprises:

transforming, based on the calibration parameters and the second preset transformation model, the initial depth information into the main-camera coordinate system to obtain the second depth information of each pixel under the main-camera coordinate system.
10. The method according to claim 1, wherein determining the erroneous data region in the first depth information based on the first depth information, the second depth information and the first confidence comprises:
determining a low-confidence region in the first depth information according to the first confidence;
for each pixel to be judged in the low-confidence region, calculating the difference between the first depth information corresponding to the pixel to be judged and the second depth information corresponding to the effective neighborhood of the pixel to be judged;
comparing the difference with a preset difference threshold;
when the difference is greater than the preset difference threshold, marking the pixel to be judged as an erroneous point, and obtaining the erroneous data region in the first depth information according to the marked erroneous points.
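The neighborhood comparison of claim 10 (and the skip condition of claim 12) can be sketched as below. The window size, both thresholds, and the use of 0 as an invalid-TOF-depth marker are assumptions for illustration, not taken from the claims:

```python
import numpy as np

def mark_erroneous_points(depth1, depth2, confidence,
                          conf_thresh=0.3, diff_thresh=0.05, radius=1):
    """Flag pixels whose stereo depth (depth1) disagrees with the TOF depth
    (depth2) inside the low-confidence region. Returns a boolean error mask.

    Pixels with no valid TOF value in their neighborhood are skipped, i.e.
    the threshold comparison is not executed for them (claim 12).
    """
    h, w = depth1.shape
    error = np.zeros((h, w), dtype=bool)
    low_conf = confidence < conf_thresh
    ys, xs = np.nonzero(low_conf)
    for y, x in zip(ys, xs):
        # Effective neighborhood: valid TOF pixels in a (2*radius+1)^2 window.
        win = depth2[max(0, y - radius):y + radius + 1,
                     max(0, x - radius):x + radius + 1]
        valid = win[win > 0]
        if valid.size == 0:
            continue  # no associated effective pixel: skip the comparison
        diff = abs(depth1[y, x] - valid.mean())
        if diff > diff_thresh:
            error[y, x] = True  # mark as erroneous point
    return error
```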
11. The method according to claim 10, wherein after comparing the difference with the preset difference threshold, the method further comprises:
when the difference is not greater than the preset difference threshold, retaining the first depth information corresponding to the pixel to be judged, to obtain a retained data region in the first depth information.
12. The method according to claim 10, wherein the method further comprises:
if no effective pixel associated with the second depth information exists in the effective neighborhood of the pixel to be judged, not executing the step of comparing the difference with the preset difference threshold.
13. The method according to claim 1, wherein correcting the erroneous data region through the second depth information and the main color image to obtain target depth information comprises:
for each erroneous point in the erroneous data region, performing weighted interpolation on the erroneous data region using the second depth information corresponding to each erroneous point, and replacing the erroneous data region in the first depth information with the calculated depth information to obtain new depth information;
filtering the new depth information according to the main color image to obtain the target depth information.
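The first step of claim 13 — replacing each erroneous point with a weighted interpolation of nearby TOF depth — can be sketched as follows. The inverse-distance weights and window size are illustrative assumptions; the subsequent color-guided filtering step (for which a joint bilateral or guided filter would be a typical choice) is omitted:

```python
import numpy as np

def correct_error_region(depth1, depth2, error_mask, radius=2):
    """Replace erroneous stereo-depth points with a distance-weighted
    interpolation of valid TOF depth values (depth2 > 0) in their
    neighborhood. Weights and window size are illustrative.
    """
    out = depth1.copy()
    ys, xs = np.nonzero(error_mask)
    for y, x in zip(ys, xs):
        num, den = 0.0, 0.0
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy < depth2.shape[0] and 0 <= xx < depth2.shape[1] \
                        and depth2[yy, xx] > 0:
                    w = 1.0 / (1.0 + abs(dy) + abs(dx))  # inverse-distance weight
                    num += w * depth2[yy, xx]
                    den += w
        if den > 0:
            out[y, x] = num / den  # replace the erroneous value
    return out
```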
14. The method according to any one of claims 1 to 13, wherein after determining the second depth information corresponding to the target object according to the original image, the method further comprises:
determining a second confidence corresponding to the target object according to the original image, wherein the second confidence is used to characterize the accuracy of the second depth information;
determining, based on the second confidence, the number of holes in the second depth information;
if the number of holes is greater than a preset hole threshold, not executing the correction method of the depth image.
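Claim 14 gates the whole correction on the quality of the TOF depth itself: if the second confidence indicates too many holes, the correction is skipped. A minimal sketch, with both thresholds being assumed values not given in the patent:

```python
import numpy as np

def should_run_correction(second_confidence: np.ndarray,
                          conf_thresh: float = 0.2,
                          hole_thresh: int = 1000) -> bool:
    """Count holes (pixels whose TOF confidence falls below conf_thresh) and
    skip the depth-image correction when there are too many of them.
    Both thresholds are illustrative, not from the patent.
    """
    holes = int(np.count_nonzero(second_confidence < conf_thresh))
    return holes <= hole_thresh
```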
15. A terminal device, wherein the terminal device comprises an acquiring unit, a determining unit and a correcting unit, wherein
the acquiring unit is configured to acquire an original image corresponding to a target object, and a main color image and a secondary color image corresponding to the target object, wherein the original image is acquired from the target object by a TOF sensor, and the main color image and the secondary color image are acquired from the target object by dual cameras;
the determining unit is configured to determine, according to the main color image and the secondary color image, first depth information and a first confidence corresponding to the target object using a preset dual-camera algorithm, and to determine second depth information corresponding to the target object according to the original image; and is further configured to determine an erroneous data region in the first depth information based on the first depth information, the second depth information and the first confidence;
the correcting unit is configured to correct the erroneous data region through the second depth information and the main color image to obtain target depth information, and to obtain a depth image according to the target depth information.
16. A terminal device, wherein the terminal device comprises a memory and a processor, wherein
the memory is configured to store a computer program runnable on the processor;
the processor is configured to execute the method according to any one of claims 1 to 14 when running the computer program.
17. A computer storage medium storing a correction program of a depth image, wherein the correction program of the depth image, when executed by at least one processor, implements the method according to any one of claims 1 to 14.
CN201910550733.5A 2019-06-24 2019-06-24 Method for correcting depth image, terminal device and computer storage medium Active CN110335211B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910550733.5A CN110335211B (en) 2019-06-24 2019-06-24 Method for correcting depth image, terminal device and computer storage medium


Publications (2)

Publication Number Publication Date
CN110335211A true CN110335211A (en) 2019-10-15
CN110335211B CN110335211B (en) 2021-07-30

Family

ID=68142681

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910550733.5A Active CN110335211B (en) 2019-06-24 2019-06-24 Method for correcting depth image, terminal device and computer storage medium

Country Status (1)

Country Link
CN (1) CN110335211B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102609941A (en) * 2012-01-31 2012-07-25 北京航空航天大学 Three-dimensional registering method based on ToF (Time-of-Flight) depth camera
CN106993112A (en) * 2017-03-09 2017-07-28 广东欧珀移动通信有限公司 Background-blurring method and device and electronic installation based on the depth of field
CN109300151A (en) * 2018-07-02 2019-02-01 浙江商汤科技开发有限公司 Image processing method and device, electronic equipment
CN109615652A (en) * 2018-10-23 2019-04-12 西安交通大学 A kind of depth information acquisition method and device
CN109640066A (en) * 2018-12-12 2019-04-16 深圳先进技术研究院 The generation method and device of high-precision dense depth image


Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114391259A (en) * 2019-11-06 2022-04-22 Oppo广东移动通信有限公司 Information processing method, terminal device and storage medium
CN110874852A (en) * 2019-11-06 2020-03-10 Oppo广东移动通信有限公司 Method for determining depth image, image processor and storage medium
WO2021087812A1 (en) * 2019-11-06 2021-05-14 Oppo广东移动通信有限公司 Method for determining depth value of image, image processor and module
CN112866674A (en) * 2019-11-12 2021-05-28 Oppo广东移动通信有限公司 Depth map acquisition method and device, electronic equipment and computer readable storage medium
WO2021114061A1 (en) * 2019-12-09 2021-06-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Electric device and method of controlling an electric device
CN114514735B (en) * 2019-12-09 2023-10-03 Oppo广东移动通信有限公司 Electronic apparatus and method of controlling the same
CN114514735A (en) * 2019-12-09 2022-05-17 Oppo广东移动通信有限公司 Electronic apparatus and method of controlling the same
CN111239729A (en) * 2020-01-17 2020-06-05 西安交通大学 Speckle and floodlight projection fused ToF depth sensor and distance measuring method thereof
CN111325691B (en) * 2020-02-20 2023-11-10 Oppo广东移动通信有限公司 Image correction method, apparatus, electronic device, and computer-readable storage medium
CN111325691A (en) * 2020-02-20 2020-06-23 Oppo广东移动通信有限公司 Image correction method, image correction device, electronic equipment and computer-readable storage medium
CN111457886A (en) * 2020-04-01 2020-07-28 北京迈格威科技有限公司 Distance determination method, device and system
CN111457886B (en) * 2020-04-01 2022-06-21 北京迈格威科技有限公司 Distance determination method, device and system
CN111539899A (en) * 2020-05-29 2020-08-14 深圳市商汤科技有限公司 Image restoration method and related product
CN111861962A (en) * 2020-07-28 2020-10-30 湖北亿咖通科技有限公司 Data fusion method and electronic equipment
CN112085775A (en) * 2020-09-17 2020-12-15 北京字节跳动网络技术有限公司 Image processing method, device, terminal and storage medium
CN112911091B (en) * 2021-03-23 2023-02-24 维沃移动通信(杭州)有限公司 Parameter adjusting method and device of multipoint laser and electronic equipment
CN112911091A (en) * 2021-03-23 2021-06-04 维沃移动通信(杭州)有限公司 Parameter adjusting method and device of multipoint laser and electronic equipment
CN113301320A (en) * 2021-04-07 2021-08-24 维沃移动通信(杭州)有限公司 Image information processing method and device and electronic equipment
CN115994937A (en) * 2023-03-22 2023-04-21 科大讯飞股份有限公司 Depth estimation method and device and robot
CN116990830A (en) * 2023-09-27 2023-11-03 锐驰激光(深圳)有限公司 Distance positioning method and device based on binocular and TOF, electronic equipment and medium
CN116990830B (en) * 2023-09-27 2023-12-29 锐驰激光(深圳)有限公司 Distance positioning method and device based on binocular and TOF, electronic equipment and medium


Similar Documents

Publication Publication Date Title
CN110335211A (en) Bearing calibration, terminal device and the computer storage medium of depth image
CN107948519B (en) Image processing method, device and equipment
JP6946188B2 (en) Methods and equipment for multi-technology depth map acquisition and fusion
CN108257183B (en) Camera lens optical axis calibration method and device
CN108028887B (en) Photographing focusing method, device and equipment for terminal
US6915073B2 (en) Stereo camera and automatic convergence adjusting device
CN109118581B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN112150528A (en) Depth image acquisition method, terminal and computer readable storage medium
CN114998499B (en) Binocular three-dimensional reconstruction method and system based on line laser galvanometer scanning
WO2020038255A1 (en) Image processing method, electronic apparatus, and computer-readable storage medium
CN114830030A (en) System and method for capturing and generating panoramic three-dimensional images
KR20060063558A (en) A depth information-based stereo/multi-view stereo image matching apparatus and method
CN102368137B (en) Embedded calibrating stereoscopic vision system
CN110335307A (en) Scaling method, device, computer storage medium and terminal device
CN106611430A (en) An RGB-D image generation method, apparatus and a video camera
CN113160298A (en) Depth truth value acquisition method, device and system and depth camera
CN108322726A (en) A kind of Atomatic focusing method based on dual camera
CN107564051A (en) A kind of depth information acquisition method and system
CN114359406A (en) Calibration of auto-focusing binocular camera, 3D vision and depth point cloud calculation method
TWI625051B (en) Depth sensing apparatus
CN109842791B (en) Image processing method and device
JP2019179463A (en) Image processing device, control method thereof, program, and recording medium
Montgomery et al. Stereoscopic camera design
CN114119701A (en) Image processing method and device
CN109191396B (en) Portrait processing method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant