WO2018076529A1 - Scene depth calculation method, device and terminal - Google Patents

Scene depth calculation method, device and terminal

Info

Publication number
WO2018076529A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
image
offset
lens
ois
Prior art date
Application number
PCT/CN2016/112696
Other languages
English (en)
French (fr)
Inventor
唐忠伟
徐荣跃
王运
李邢
李远友
敖欢欢
Original Assignee
Huawei Technologies Co., Ltd.
Application filed by Huawei Technologies Co., Ltd.
Priority to CN201680054264.2A (CN108260360B)
Publication of WO2018076529A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis

Definitions

  • The embodiments of the present invention relate to the field of communications, and in particular to a method, a device, and a terminal for calculating the scene depth of a target scene with a terminal device having dual cameras.
  • Optical Image Stabilization (OIS) is an important means of improving photograph quality in low light and is used on more and more mobile phones. OIS compensates for hand shake by moving the lens, thereby achieving image stabilization.
  • In the prior art, the images are generally corrected using the calibration parameters of the dual camera so that the left and right images provided by the dual camera are aligned in one direction; the parallax is then calculated and converted into the scene depth. However, OIS causes a lens shift, which changes the dual-camera calibration parameters and leads to parallax problems (positive and negative parallax existing simultaneously, or the images failing to align in one direction), so the calculated scene depth value is not accurate.
  • The embodiment of the invention provides a scene depth calculation method that solves the problem of inaccurate scene depth values of the target scene caused by the parallax problem (positive and negative parallax existing simultaneously, or the images failing to align in one direction).
  • A scene depth calculation method comprises: acquiring the lens offset of a camera with an OIS system, wherein the first camera and/or the second camera has an OIS system and the first camera and the second camera are arranged side by side on the body of the same terminal device; converting the lens offset into an image offset according to a preset OIS motor sensitivity calibration parameter; and calculating the scene depth of the target scene according to the compensated first camera calibration parameter and/or the compensated second camera calibration parameter, together with the first image and the second image obtained by the first camera and the second camera acquiring the target scene at the same time; wherein the calibration parameter of the first camera is compensated according to the lens offset of the first camera, and the calibration parameter of the second camera is compensated according to the lens offset of the second camera.
  • The method may further include: acquiring angular velocity information of terminal device shake detected by the gyro sensor; converting the angular velocity information into the shake amplitude of the terminal device; driving the OIS motor to push the lens of the first camera and/or the lens of the second camera according to the shake amplitude; and acquiring the lens offset of the first camera and/or the second camera.
  • When the terminal device shake time and the exposure time are inconsistent and the shake time is greater than the exposure time, multiple lens offsets are acquired, and one lens offset is determined from the multiple lens offsets according to a preset rule; when the scene depth of the target scene is subsequently calculated, the determined lens offset is used for the calculation.
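The "preset rule" for picking one offset out of the many sampled during a shake is not specified in the text. As an illustrative sketch only, the hypothetical rule below picks the Hall-sensor sample whose timestamp is closest to the middle of the exposure window; the function name, the sample values, and the rule itself are assumptions, not the patent's method.

```python
def select_lens_offset(offsets, timestamps, exposure_mid):
    """Pick the lens offset whose timestamp is closest to the middle of
    the exposure window (one plausible 'preset rule'; the patent does
    not specify which rule is actually used)."""
    best = min(range(len(offsets)),
               key=lambda i: abs(timestamps[i] - exposure_mid))
    return offsets[best]

# 15 offsets (code units) sampled over a 30 ms shake, one every 2 ms;
# exposure midpoint assumed at 15 ms
offsets = [3, 5, 8, 12, 15, 14, 10, 7, 4, 2, 1, 0, -1, -2, -3]
timestamps = [2 * i for i in range(15)]  # ms
chosen = select_lens_offset(offsets, timestamps, exposure_mid=15.0)
```

Other plausible rules (mean, median, the sample taken at shutter release) would fit the claim language equally well.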
  • The OIS motor sensitivity calibration parameter is determined according to the following steps: pushing the OIS motor through the OIS controller to move the lens to a designated position; photographing after waiting for the OIS motor to stabilize; and, when the captured images reach a preset number, detecting the feature point coordinates of each image and determining the OIS motor sensitivity calibration parameter according to the designated position of the lens and the feature point coordinates of each image.
  • The OIS motor sensitivity calibration parameter is stored in the terminal device before it leaves the factory, so that after the terminal device is shipped, the stored OIS motor sensitivity calibration parameter is used to convert the lens offset into an image offset when calculating the scene depth of the target scene.
  • The lens offset is converted into an image offset according to the following formula: Δx = a · ΔC, where Δx is the image offset, a is the OIS motor sensitivity calibration parameter, and ΔC is the lens offset. Converting the lens offset into the image offset through the OIS motor sensitivity calibration parameter keeps the units of the camera calibration parameters consistent.
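A minimal sketch of the conversion formula Δx = a · ΔC above; the numeric values of the sensitivity parameter and the lens offset are hypothetical.

```python
def lens_to_image_offset(delta_c_code, a_pixels_per_code):
    """Convert a lens offset (Hall-sensor 'code' units) into an image
    offset in pixels using the OIS motor sensitivity parameter a."""
    return a_pixels_per_code * delta_c_code

# hypothetical values: a = 0.05 pixel/code, lens offset = 120 code
dx = lens_to_image_offset(120, 0.05)  # ≈ 6 pixels
```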
  • According to the compensated first camera calibration parameter and/or the compensated second camera calibration parameter, and the first image and the second image obtained by the first camera and the second camera acquiring the target scene at the same time, the scene depth of the target scene is calculated, where:
  • Z is the scene depth of the target scene
  • f is the focal length
  • a1 is the OIS motor sensitivity calibration parameter of the first camera
  • δ1 is the lens offset of the first camera
  • pixel pitch is the size of one pixel
  • a1 · δ1 = Δx1 is the first image offset
  • a1 · δ1 · pixel pitch converts the unit of the first image offset from pixel to mm
  • after compensation, the principal point of the first camera changes from u1' to u1 and the baseline B' becomes B1; the principal point of the second camera is u2
  • x1 is the imaging point of the first image and x2 is the imaging point of the second image.
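The text preserves only the legend of Equation 2, not the equation itself. The sketch below combines the legend's quantities under the standard rectified-stereo triangulation model, shifting the first camera's principal point by the image offset a1 · δ1 before applying Z = f · B / disparity; this is a reconstruction under that assumption, not the patent's verbatim formula, and all numeric values are hypothetical.

```python
def depth_ois_first_camera(f_mm, baseline_mm, x1_px, x2_px,
                           u1p_px, u2_px, a1, d1_code, pixel_pitch_mm):
    """Scene depth with OIS on the first camera only: the principal
    point u1' is shifted by the image offset a1*d1 (pixels), the
    disparity is converted to mm via the pixel pitch, then the usual
    triangulation Z = f*B/disparity is applied."""
    u1_px = u1p_px + a1 * d1_code                    # compensated principal point
    disparity_px = (x1_px - u1_px) - (x2_px - u2_px)
    disparity_mm = disparity_px * pixel_pitch_mm
    return f_mm * baseline_mm / disparity_mm

# hypothetical values: f = 4 mm, B1 = 10 mm, pixel pitch = 1 um
z = depth_ois_first_camera(4.0, 10.0, x1_px=2100.0, x2_px=1980.0,
                           u1p_px=2000.0, u2_px=2000.0,
                           a1=0.05, d1_code=100.0, pixel_pitch_mm=0.001)
```

Shifting the principal point in pixels and converting the disparity at the end is algebraically equivalent to converting each term to mm first.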
  • When the first camera and the second camera both have an OIS system, calculating the scene depth of the target scene according to the compensated first camera calibration parameter and the compensated second camera calibration parameter, and the first image and the second image obtained by the first camera and the second camera acquiring the target scene at the same time, includes:
  • a1 is the OIS motor sensitivity calibration parameter of the first camera, δ1 is the lens offset of the first camera, a1 · δ1 = Δx1 is the first image offset, and a1 · δ1 · pixel pitch converts the first image offset from pixel to mm
  • a2 is the OIS motor sensitivity calibration parameter of the second camera, δ2 is the lens offset of the second camera, a2 · δ2 = Δx2 is the second image offset, and a2 · δ2 · pixel pitch converts the second image offset from pixel to mm
  • after compensation, the principal point of the first camera changes from u1' to u1 and the principal point of the second camera changes from u2' to u2
  • x1 is the imaging point of the first image and x2 is the imaging point of the second image.
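Equation 4 is likewise not reproduced in this text. Assuming the same standard triangulation model with both principal points compensated, the legend's quantities would combine as follows (a reconstruction under that assumption, not the patent's verbatim formula):

```latex
u_1 = u_1' + a_1\,\delta_1 \cdot \text{pixel pitch}, \qquad
u_2 = u_2' + a_2\,\delta_2 \cdot \text{pixel pitch},
\qquad
Z = \frac{f \cdot B_2}{(x_1 - u_1) - (x_2 - u_2)}
```

with x1, x2, u1, and u2 expressed in mm on the image plane.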
  • An embodiment of the present invention provides a scene depth calculation device, where the device includes a first acquisition unit, a processing unit, and a calculation unit. The first acquisition unit is configured to acquire the lens offset of a camera with an OIS system, wherein the first camera and/or the second camera is provided with an OIS system and the first camera and the second camera are juxtaposed on the body of the same terminal device. The processing unit is configured to convert the lens offset into an image offset according to the preset OIS motor sensitivity calibration parameter. The calculation unit is configured to calculate the scene depth of the target scene according to the compensated first camera calibration parameter and/or the compensated second camera calibration parameter, and the first image and the second image obtained by the first camera and the second camera acquiring the target scene at the same time; wherein the calibration parameter of the first camera is compensated according to the lens offset of the first camera, and the calibration parameter of the second camera is compensated according to the lens offset of the second camera.
  • The device further includes a second acquisition unit, where the second acquisition unit is configured to: acquire angular velocity information of terminal device shake detected by the gyro sensor; convert the angular velocity information into the shake amplitude of the terminal device; drive the OIS motor to push the lens of the first camera and/or the second camera according to the shake amplitude; and acquire the lens offset of the first camera and/or the second camera.
  • When the terminal device shake time and the exposure time are inconsistent and the shake time is greater than the exposure time, multiple lens offsets are acquired, and one lens offset is determined from the multiple lens offsets according to a preset rule; when the scene depth of the target scene is subsequently calculated, the determined lens offset is used for the calculation.
  • The device further includes a determining unit, where the determining unit is specifically configured to: push the OIS motor through the OIS controller to move the lens to the designated position; photograph after waiting for the OIS motor to stabilize; and, when the captured images reach a preset number, detect the feature point coordinates of each image and determine the OIS motor sensitivity calibration parameter according to the designated position of the lens and the feature point coordinates of each image.
  • The OIS motor sensitivity calibration parameter is stored in the terminal device before it leaves the factory, so that after the terminal device is shipped, the stored OIS motor sensitivity calibration parameter is used to convert the lens offset into an image offset when calculating the scene depth of the target scene.
  • The processing unit is specifically configured to convert the lens offset into an image offset according to the formula Δx = a · ΔC, where Δx is the image offset, a is the OIS motor sensitivity calibration parameter, and ΔC is the lens offset. Converting the lens offset into the image offset through the OIS motor sensitivity calibration parameter keeps the units of the camera calibration parameters consistent.
  • The calculation unit is specifically configured to calculate the scene depth, where:
  • Z is the scene depth of the target scene
  • f is the focal length
  • a1 is the OIS motor sensitivity calibration parameter of the first camera
  • δ1 is the lens offset of the first camera
  • pixel pitch is the size of one pixel
  • a1 · δ1 = Δx1 is the first image offset
  • a1 · δ1 · pixel pitch converts the unit of the first image offset from pixel to mm
  • after compensation, the principal point of the first camera changes from u1' to u1 and the baseline B' becomes B1; the principal point of the second camera is u2
  • x1 is the imaging point of the first image and x2 is the imaging point of the second image.
  • The calculation unit is specifically configured to calculate the scene depth, where:
  • a1 is the OIS motor sensitivity calibration parameter of the first camera, δ1 is the lens offset of the first camera, a1 · δ1 = Δx1 is the first image offset, and a1 · δ1 · pixel pitch converts the first image offset from pixel to mm
  • a2 is the OIS motor sensitivity calibration parameter of the second camera, δ2 is the lens offset of the second camera, a2 · δ2 = Δx2 is the second image offset, and a2 · δ2 · pixel pitch converts the second image offset from pixel to mm
  • after compensation, the principal point of the first camera changes from u1' to u1 and the principal point of the second camera changes from u2' to u2
  • x1 is the imaging point of the first image and x2 is the imaging point of the second image.
  • An embodiment of the present invention provides a terminal, where the terminal includes a first camera and a second camera that are used to acquire at least one target scene at the same time, respectively obtaining a first image and a second image, wherein the first camera and/or the second camera is provided with an OIS system and the first camera and the second camera are juxtaposed on the body of the same terminal device; a memory configured to store the first image and the second image; and a processor configured to acquire the lens offset of the camera with the OIS motor, convert the lens offset into an image offset according to a preset OIS motor sensitivity calibration parameter, and calculate the scene depth of the target scene according to the compensated first camera calibration parameter and/or the compensated second camera calibration parameter and the first image and the second image acquired from the memory; wherein the calibration parameter of the first camera is compensated according to the lens offset of the first camera, and the calibration parameter of the second camera is compensated according to the lens offset of the second camera.
  • The lens offset is used to compensate for the change in the camera calibration parameters caused by the shake of the terminal device, which solves the parallax problem; the compensated camera calibration parameters are then used to calculate the scene depth of the target scene, so the calculated scene depth value is more accurate.
  • The OIS system is specifically configured to: acquire angular velocity information of terminal device shake detected by a gyro sensor; convert the angular velocity information into the shake amplitude of the terminal device; drive the OIS motor to push the lens of the first camera and/or the second camera according to the shake amplitude; and acquire the lens offset of the first camera and/or the second camera.
  • When the terminal device shake time and the exposure time are inconsistent and the shake time is greater than the exposure time, multiple lens offsets are acquired, and one lens offset is determined from the multiple lens offsets according to a preset rule; when the scene depth of the target scene is subsequently calculated, the determined lens offset is used for the calculation.
  • The processor is further configured to: push the OIS motor through the OIS controller to move the lens to the designated position; photograph after waiting for the OIS motor to stabilize; and, when the captured images reach a preset number, detect the feature point coordinates of each image and determine the OIS motor sensitivity calibration parameter according to the designated position of the lens and the feature point coordinates of each image;
  • the memory is further configured to store the OIS motor sensitivity calibration parameters.
  • The OIS motor sensitivity calibration parameter is stored in the terminal before it leaves the factory, so that after the terminal is shipped, the stored OIS motor sensitivity calibration parameter is used to convert the lens offset into an image offset when calculating the scene depth of the target scene.
  • The processor is specifically configured to convert the lens offset into an image offset according to the formula Δx = a · ΔC, where Δx is the image offset, a is the OIS motor sensitivity calibration parameter, and ΔC is the lens offset. Converting the lens offset into the image offset through the OIS motor sensitivity calibration parameter keeps the units of the camera calibration parameters consistent.
  • The processor is specifically configured to determine the scene depth of the target scene, where:
  • Z is the scene depth of the target scene
  • f is the focal length
  • a1 is the OIS motor sensitivity calibration parameter of the first camera
  • δ1 is the lens offset of the first camera
  • pixel pitch is the size of one pixel
  • a1 · δ1 = Δx1 is the first image offset
  • a1 · δ1 · pixel pitch converts the unit of the first image offset from pixel to mm
  • after compensation, the principal point of the first camera changes from u1' to u1 and the baseline B' becomes B1; the principal point of the second camera is u2
  • x1 is the imaging point of the first image.
  • The processor is specifically configured to determine the scene depth of the target scene, where:
  • a1 is the OIS motor sensitivity calibration parameter of the first camera, δ1 is the lens offset of the first camera, a1 · δ1 = Δx1 is the first image offset, and a1 · δ1 · pixel pitch converts the first image offset from pixel to mm
  • a2 is the OIS motor sensitivity calibration parameter of the second camera, δ2 is the lens offset of the second camera, a2 · δ2 = Δx2 is the second image offset, and a2 · δ2 · pixel pitch converts the second image offset from pixel to mm
  • after compensation, the principal point of the first camera changes from u1' to u1 and the principal point of the second camera changes from u2' to u2
  • x1 is the imaging point of the first image and x2 is the imaging point of the second image.
  • FIG. 1 is a block diagram showing the working principle of the OIS system
  • Figure 2 is a block diagram of a depth calculation system
  • FIG. 3 is a flowchart of a method for calculating a depth of a scene according to Embodiment 1 of the present invention
  • Figure 4a is a schematic diagram of a lens offset scene
  • Figure 4b is a schematic diagram of imaging changes before and after lens shift
  • Figure 4c is a flow chart for determining the OIS motor sensitivity calibration parameters
  • FIG. 5a is a schematic diagram of scene depth calculation according to an embodiment of the present invention.
  • FIG. 5b is still another schematic diagram of scene depth calculation according to an embodiment of the present invention.
  • Figure 6a is a schematic diagram of an image taken after compensating the calibration parameters of the dual camera
  • Figure 6b is a schematic diagram of an image taken when the calibration parameters of the dual camera are not compensated
  • Figure 6c is a partial enlarged view of Figure 6a
  • Figure 6d is a partial enlarged view of Figure 6b;
  • Figure 7a is a schematic diagram of the depth of the scene when the calibration parameters of the dual camera are not compensated
  • Figure 7b is a schematic diagram of the depth of the scene after compensating the calibration parameters of the dual camera
  • FIG. 8 is a schematic structural diagram of a scene depth calculation apparatus according to Embodiment 2 of the present invention.
  • FIG. 9 is a schematic structural diagram of still another scene depth computing apparatus according to Embodiment 2 of the present invention.
  • FIG. 10 is a schematic structural diagram of a terminal according to Embodiment 3 of the present invention.
  • The terminal device may be any device having a dual camera, including but not limited to a camera (such as a digital camera), a video camera, a mobile phone (such as a smart phone), a tablet (Pad), a personal digital assistant (PDA), a portable device (for example, a portable computer), a wearable device, and the like, which are not specifically limited in the embodiment of the present invention.
  • The terminal device may be a mobile phone; the following uses a mobile phone as an example to set forth the embodiments of the present invention.
  • The dual camera simulates the human binocular vision principle to perceive distance: an object is observed from two points, images are acquired at different viewing angles, and, according to the pixel matching relationship between the images, the offset between pixels is calculated by the principle of triangulation to obtain the scene depth of the object.
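For a rectified dual camera, the binocular triangulation principle described above reduces to the classic relation Z = f · B / d; a minimal sketch with hypothetical values:

```python
def stereo_depth(f_mm, baseline_mm, disparity_mm):
    """Classic binocular triangulation: Z = f * B / d. The deeper the
    scene point, the smaller the disparity d between the two images."""
    return f_mm * baseline_mm / disparity_mm

# hypothetical values: f = 4 mm, baseline = 10 mm, disparity = 0.08 mm
d_mm = stereo_depth(4.0, 10.0, 0.08)  # ≈ 500 mm
```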
  • OIS causes the lens to shift, which changes the dual-camera calibration parameters and results in parallax problems, which in turn makes the scene depth calculation inaccurate. It is therefore necessary to compensate the dual-camera calibration parameters so that the scene depth of the target scene is calculated accurately.
  • FIG. 1 is a block diagram of the working principle of the OIS system.
  • the terminal device includes an OIS system 100 and an Image Signal Processor (ISP) 110.
  • the OIS system 100 includes an OIS controller 120, a gyro sensor 130, a Hall sensor 140, a motor 150, and a camera 160.
  • the camera 160 includes a first camera and a second camera.
  • The first camera and the second camera may be juxtaposed on the front of the terminal device or on the back of the terminal device, and may be arranged horizontally or vertically.
  • the first camera and/or the second camera are provided with an OIS system, and the first camera and the second camera respectively have lenses (not shown in FIG. 1).
  • The Hall sensor 140 is a magnetic field sensor that performs displacement measurement based on the Hall effect; it is used to acquire the lens offset of the camera with the OIS system, that is, the lens offset of the first camera and/or the second camera.
  • The gyro sensor 130 is a positioning device based on the orientation of the terminal device as it moves in free space; it is used to acquire angular velocity information when the terminal device shakes.
  • the OIS controller 120 acquires angular velocity information from the gyro sensor 130, converts the angular velocity information into a jitter amplitude of the terminal device, and transmits the jitter amplitude as a reference signal to the motor 150.
  • The motor 150 may be an OIS motor used to drive the lens of the camera with the OIS system according to the shake amplitude to ensure the sharpness of the image; the movement refers to moving in the X and/or Y direction, where the X and Y directions are perpendicular to each other and to the direction of the light passing through the lens.
  • the OIS controller 120 also acquires the first image and the second image obtained by acquiring the target scene at the same time from the first camera and the second camera.
  • the ISP 110 stores the lens shift amount, the first image, and the second image acquired from the OIS controller 120.
  • The terminal device performs initialization; usually, when ready, the OIS controller 120 controls the shutter to acquire an image.
  • When photographing, the terminal device may shake. The OIS controller 120 reads the angular velocity information detected by the gyro sensor 130, converts the angular velocity information into the shake amplitude of the terminal device, and transmits it as a reference signal to the OIS motor; the OIS motor moves the lens of the camera with the OIS system according to the shake amplitude, avoiding blurring of the captured image caused by the shake of the terminal device and ensuring the sharpness of the image.
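A simplified sketch of the gyro-to-amplitude step: integrating the angular-velocity samples into a tilt angle and converting it to an image-plane shift under the small-angle approximation. The conversion model, function name, and all values are assumptions; the patent does not give the OIS controller's exact formula.

```python
def shake_amplitude_mm(angular_vel_rad_s, dt_s, focal_len_mm):
    """Integrate gyro angular-velocity samples into a tilt angle and
    convert it, via the small-angle approximation (shift ~= f * theta),
    into the image-plane shift the OIS motor must counteract. A
    simplified model of what the OIS controller 120 does."""
    theta = sum(w * dt_s for w in angular_vel_rad_s)  # rad
    return focal_len_mm * theta

# hypothetical: three gyro samples at 1 kHz, f = 4 mm
amp = shake_amplitude_mm([0.02, 0.03, 0.05], dt_s=0.001, focal_len_mm=4.0)
```

In a real controller this runs continuously with drift compensation and filtering; the sketch only shows the dimensional relationship.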
  • the movement may be that the lens of the first camera moves in the X and/or Y direction and/or the lens of the second camera moves in the X and/or Y direction.
  • The OIS controller 120 reads the lens offset of the camera with the OIS system detected by the Hall sensor 140, that is, the lens offset of the first camera and/or the second camera, acquires the captured images from the cameras, that is, the first image and the second image obtained by the first camera and the second camera respectively acquiring the target scene at the same time, and sends the lens offset and the captured images to the ISP 110.
  • the ISP 110 stores the lens shift amount and the first image and the second image captured by the camera.
  • The terminal device shake time is generally greater than its exposure time; for example, the shake duration is 30 ms and the exposure time is 2 ms. In this case the Hall sensor 140 acquires 15 lens offsets; the OIS controller 120 reads the 15 lens offsets from the Hall sensor 140, determines one lens offset from them according to a preset rule, and uses the determined lens offset as the lens offset described in the context in the subsequent scene depth calculation of the target scene.
  • FIG. 2 is a block diagram of the depth calculation system.
  • the depth calculation system includes an ISP 110 and a depth calculation module 210.
  • The depth calculation module 210 acquires the preset calibration information and the stored OIS information from the ISP 110, together with the first image and the second image obtained by the first camera and the second camera acquiring the target scene at the same time, calculates the scene depth of the target scene, and outputs a disparity map/depth map.
  • The calibration information comprises the camera calibration parameters at initialization, such as the focal length, baseline, optical center, and principal point;
  • the OIS information is the lens offset.
  • The depth calculation module acquires the lens offset. Since the unit of the lens offset is code while the unit of the scene depth is millimeter (mm), the two are inconsistent; the lens offset therefore needs to be converted into an image offset, in pixels, according to the OIS motor sensitivity. The camera calibration parameters are then compensated by the lens offset, and the scene depth value of the target scene is calculated according to the compensated camera calibration parameters, making the calculated scene depth value more accurate.
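The full unit chain the module applies is code (Hall sensor) → pixel (via the sensitivity parameter) → mm (via the pixel pitch), sketched below; all numeric values are hypothetical.

```python
def image_offset_mm(delta_c_code, a_pixels_per_code, pixel_pitch_mm):
    """Unit chain used before compensation: the Hall-sensor lens offset
    (code) becomes an image offset in pixels via the sensitivity
    parameter a, then in millimetres via the pixel pitch, so it can be
    combined with calibration parameters expressed in mm."""
    offset_px = a_pixels_per_code * delta_c_code
    return offset_px * pixel_pitch_mm

# hypothetical: 100 code * 0.05 pixel/code = 5 px; 5 px * 0.0011 mm/px
mm = image_offset_mm(100, 0.05, 0.0011)
```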
  • FIG. 3 is a flowchart of a method for calculating a scene depth according to Embodiment 1 of the present invention. As shown in FIG. 3, the method includes:
  • the lens shift amount can be obtained by Hall sensor detection.
  • If the lens offset is greater than a preset threshold, the lens offset is abnormal; if the lens offset is not greater than the preset threshold, the lens offset is not abnormal.
  • S340 converting the lens offset into an image offset (see ⁇ x of FIG. 4b).
  • Before S340, S330 needs to be executed, that is, inputting the OIS motor sensitivity calibration parameter, which is the image offset caused by a unit lens offset. Each camera with an OIS system has its corresponding OIS motor sensitivity calibration parameter, and the parameters are pre-stored before the terminal device leaves the factory. Using the OIS motor sensitivity calibration parameter, the lens offset can be converted into an image offset.
  • Some calibration parameters, such as the optical center, the principal point, and the baseline, change; the image offset is calculated according to the lens offset, and the lens offset is used to compensate the changed camera calibration parameters and determine their values after the change. Referring to Fig. 4, the optical center changes from C' to C and the principal point changes from u' to u. Referring to Fig. 5a, the optical center of the first camera lens changes from C1' to C1, the principal point changes from u1' to u1, and the baseline changes from B' to B1. Referring to Fig. 5b, the optical center of the first camera lens changes from C1' to C1, the principal point changes from u1' to u1, the optical center of the second camera lens changes from C2' to C2, the principal point changes from u2' to u2, and the baseline changes from B' to B2.
  • S380: Calculate the scene depth of the target scene; see Equation 2 and Equation 4 for the calculation formulas. Before executing S380, it is necessary to execute S360, that is, inputting the first image, and S370, that is, inputting the second image.
  • the scene depth of the target scene is determined according to the compensated camera calibration parameters, the first image, and the second image.
  • The first camera and the second camera acquire the target scene at the same time, obtaining the first image and the second image respectively.
  • S390 Output a scene depth of the target scene.
  • Figure 4a is a schematic diagram of a lens offset scene.
  • The OIS motor pushes the lens from the position of the elliptical dotted line to each designated position (xi, yi) in turn, an image of the fixed chart is taken at each position, and the image sensor converts the optical image acquired by the camera into an electronic signal. The lens offset can be determined from the lens positions before and after the movement, and the image offset is determined from the two images of the fixed chart.
  • Figure 4b is a schematic diagram of imaging changes before and after lens shift.
  • Take the OIS motor pushing the lens of a camera in the X direction as an example. Before the movement, the camera calibration parameters are: the focal length is f, the optical center is C', and the principal point is u'. After the movement, part of the camera's calibration parameters change: the optical center changes from C' to C, and the principal point changes from u' to u. The imaging points before and after the lens movement are x' and x, respectively; ΔC is the distance between the optical center C' and the optical center C, that is, the lens offset, in code units; Δx is the distance between the imaging point x' and the imaging point x, that is, the image offset, in pixels.
  • the image offset caused by the unit lens offset can be measured, that is, the OIS motor sensitivity calibration parameter.
  • the actual image offset can be calculated based on the lens offset during subsequent shooting to compensate for the camera calibration parameters when the terminal is shaken.
  • When determining the OIS motor sensitivity calibration parameter, it is assumed that the relationship between the lens offset ΔC and the image offset Δx is linear, so the OIS motor sensitivity calibration parameter can be obtained as α ≈ Δx / ΔC, where the unit of α is pixel/code.
  • In practice, ΔC and Δx are not strictly linear. Higher-order models, such as second-order, third-order, or higher, can be used, and more images can be captured, to obtain more accurate OIS motor sensitivity calibration parameters.
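A sketch of the calibration fit: with several (ΔC, Δx) pairs from multiple captured images, a least-squares fit of the linear model Δx = α · ΔC through the origin estimates α more robustly than a single pair. The data values are hypothetical; a higher-order model would fit a polynomial instead.

```python
def fit_sensitivity(lens_offsets_code, image_offsets_px):
    """Least-squares fit of dx = alpha * dC through the origin:
    alpha = sum(dC*dx) / sum(dC^2), in pixel/code. More (dC, dx)
    pairs (more captured images) give a more accurate estimate of
    the OIS motor sensitivity calibration parameter."""
    num = sum(c * x for c, x in zip(lens_offsets_code, image_offsets_px))
    den = sum(c * c for c in lens_offsets_code)
    return num / den

# hypothetical measurements from four chart images
alpha = fit_sensitivity([50, 100, 150, 200], [2.6, 4.9, 7.6, 10.1])
```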
  • Figure 4c is a flow chart for determining OIS motor sensitivity calibration parameters. As shown in Figure 4c, it includes:
  • The OIS motor sensitivity calibration parameter determined from a single image may have a relatively large error, so multiple images may be taken to improve the accuracy of the OIS motor sensitivity calibration parameter.
  • the feature point coordinates in the captured image before and after the lens movement are respectively detected, and the image offset amount is acquired.
  • the lens shift amount is determined according to the moving distance of the lens, and the OIS motor sensitivity calibration parameter is determined according to the image shift amount and the lens shift amount.
  • S450 Store the OIS motor sensitivity calibration parameter into the terminal device.
  • the OIS motor sensitivity calibration parameter is stored therein, so that after the terminal device is shipped from the factory, when the target device is used to collect the target scene, the lens is adjusted according to the OIS motor sensitivity calibration parameter stored in advance.
  • the offset is converted to the image offset, and the camera offset parameter is compensated by the lens offset to make the scene depth of the target scene more accurate.
FIG. 5a is a schematic diagram of scene depth calculation according to an embodiment of the present invention. Take as an example the case where the first camera has an OIS system, the second camera does not, and the lens of the first camera moves in the X direction. The lens of a camera is a convex lens; an incident ray and its corresponding parallel emergent ray form conjugate rays, and the intersection of the line connecting the incident point and the exit point with the main optical axis is called the focal point of the convex lens. The distance from the focal point to the imaging plane (such as film or a CCD) is called the focal length; the point at the center of the lens is called the optical center; the intersection of the principal line of sight with the imaging plane is called the principal point; and the distance between the first camera's lens and the second camera's lens is called the baseline. At initialization, the following camera calibration parameters are known: the focal length is f, the optical center of the first camera's lens is C1' and its principal point is u1', the optical center of the second camera's lens is C2 and its principal point is u2, and the baseline is B'. When the shutter is pressed, the imaging point of the first image acquired by the first camera is x1, and the imaging point of the second image acquired by the second camera is x2. In the prior art, the change of some camera calibration parameters caused by terminal shake when the shutter is pressed is not considered, so the scene depth calculation is completed with the uncompensated parameters, principal point u1' and baseline B'. The calculated scene depth Z' therefore has a large error.
When the shutter is pressed, terminal shake causes some camera calibration parameters to change. According to the shake amplitude, the OIS motor pushes the first camera's lens so that it shifts: the optical center of the first camera's lens changes from C1' to C1 (the distance between them being the lens offset of the first camera, in code units), the principal point changes from u1' to u1, and the baseline changes from B' to B1; u1 and B1 need to be calculated. The changed camera calibration parameters are compensated with the lens offset to determine the values of the compensated parameters, and, using the principle of similar triangles, the scene depth Z is calculated as:
where a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset of the first camera, pixel pitch is the size of one pixel, a1×Δ1 ≈ Δx1 is the first image offset in pixels, and a1×Δ1×pixel pitch converts the first image offset from pixels to mm.
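Under the similar-triangles model just described, the compensated depth can be sketched as below. The patent's formula image is not reproduced in this text, so the disparity expression, the sign with which the image offset a1·Δ1 shifts the principal point, and all numeric values are illustrative assumptions, not the patent's exact equation.

```python
def depth_one_ois(f_mm, baseline_mm, x1, x2, u1_prime, u2, a1, delta1, pixel_pitch_mm):
    """Scene depth in mm with the first camera's principal point compensated."""
    u1 = u1_prime + a1 * delta1           # assumed sign: principal point shifts by Δx1 = a1·Δ1 pixels
    disparity_px = (x1 - u1) - (x2 - u2)  # disparity relative to the compensated principal points
    return f_mm * baseline_mm / (disparity_px * pixel_pitch_mm)

# Illustrative numbers only: f = 4 mm, baseline B1 = 10 mm, pixel pitch = 1 µm.
z = depth_one_ois(4.0, 10.0, x1=150.0, x2=30.0, u1_prime=100.0, u2=20.0,
                  a1=0.5, delta1=10.0, pixel_pitch_mm=0.001)
```

The pixel-pitch factor plays the same role as in the text: it converts the pixel-valued disparity into millimeters so that Z comes out in millimeters.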
FIG. 5b is another schematic diagram of scene depth calculation according to an embodiment of the present invention. Take as an example the case where both the first camera and the second camera have an OIS system and both lenses move in the X direction. At initialization, the following camera calibration parameters are known: the focal length is f, the optical center of the first camera's lens is C1' and its principal point is u1', the optical center of the second camera's lens is C2' and its principal point is u2', and the baseline is B'. When the shutter is pressed, the imaging point of the first image acquired by the first camera is x1, and the imaging point of the second image acquired by the second camera is x2. In the prior art, the depth calculation is completed with the uncompensated calibration parameters — principal points u1', u2' and baseline B' — so the calculated scene depth Z' has a large error. When the shutter is pressed, terminal shake causes some camera calibration parameters to change: the optical center of the first camera's lens changes from C1' to C1 and its principal point from u1' to u1; the optical center of the second camera's lens changes from C2' to C2 (corresponding to the lens offset of the second camera, in code units) and its principal point from u2' to u2; and the baseline changes from B' to B2. Here u1, u2, and B2 need to be calculated.
where a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset of the first camera, a1×Δ1 ≈ Δx1 is the first image offset, a1×Δ1×pixel pitch converts the first image offset from pixels to mm, a2 is the OIS motor sensitivity calibration parameter of the second camera, Δ2 is the lens offset of the second camera, a2×Δ2 ≈ Δx2 is the second image offset, and a2×Δ2×pixel pitch converts the second image offset from pixels to mm.
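The dual-OIS case differs from the single-OIS case only in that both principal points are compensated. As before, this sketch assumes a standard disparity-based depth formula with assumed signs and invented values, since the patent's formula image is not reproduced here.

```python
def depth_two_ois(f_mm, baseline_mm, x1, x2, u1_prime, u2_prime,
                  a1, d1, a2, d2, pixel_pitch_mm):
    """Scene depth in mm with both cameras' principal points compensated."""
    u1 = u1_prime + a1 * d1   # first camera: shift by Δx1 = a1·Δ1 pixels (assumed sign)
    u2 = u2_prime + a2 * d2   # second camera: shift by Δx2 = a2·Δ2 pixels (assumed sign)
    disparity_px = (x1 - u1) - (x2 - u2)
    return f_mm * baseline_mm / (disparity_px * pixel_pitch_mm)

# With the second lens offset set to zero this reduces to the single-OIS case.
z = depth_two_ois(4.0, 10.0, 150.0, 30.0, 100.0, 20.0,
                  a1=0.5, d1=10.0, a2=0.5, d2=0.0, pixel_pitch_mm=0.001)
```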
Although FIG. 5a and FIG. 5b illustrate camera calibration parameter compensation by taking a camera moving in one direction as an example, it should be realized that the compensation of the calibration parameters of two cameras moving in two directions can be realized in the same way; the details are not repeated here.
Figure 6a is a schematic diagram of an image taken with the dual-camera calibration parameters compensated; Figure 6b is a schematic diagram of an image taken without compensating the dual-camera calibration parameters; Figure 6c is a partial enlargement of Figure 6a; and Figure 6d is a partial enlargement of Figure 6b. As Figures 6a-6d show, image alignment is poor when the dual-camera calibration parameters are not compensated, and good after they are compensated.
Figure 7a is a schematic diagram of the scene depth when the dual-camera calibration parameters are not compensated; Figure 7b is a schematic diagram of the scene depth after they are compensated. In Figures 7a and 7b, different depth values are represented by different colors, and black indicates that the scene depth cannot be calculated. In Figure 7a, the depth measured at 1000 mm is 1915.8 mm and the depth measured at 300 mm is 344.6 mm; in Figure 7b, the depth measured at 1000 mm is 909.6 mm and the depth measured at 300 mm is 287.4 mm. It follows that the calculated scene depth values are more accurate after the dual-camera calibration parameters are compensated. The scene depth calculation method for a terminal with dual cameras provided by the embodiments of the invention thus solves the problem of inaccurate scene depth values caused by parallax problems.
FIG. 8 is a schematic structural diagram of a scene depth calculation apparatus according to Embodiment 2 of the present invention. As shown in Figure 8, the scene depth calculation apparatus 800 includes a first acquisition unit 810, a processing unit 820, and a calculation unit 830. The first acquisition unit 810 is configured to acquire the lens offset of a camera with an OIS system, where the first camera and/or the second camera has an OIS system and the first and second cameras are arranged side by side on the body of the same terminal device. The processing unit 820 is configured to convert the lens offset into an image offset according to a preset OIS motor sensitivity calibration parameter. The calculation unit 830 is configured to calculate the scene depth from the compensated first camera calibration parameters and/or the compensated second camera calibration parameters, together with the first image and the second image obtained by the first camera and the second camera capturing the target scene at the same moment; the first camera's calibration parameters are compensated according to the first camera's lens offset, and the second camera's according to the second camera's lens offset.
The processing unit 820 is specifically configured to convert the lens offset into an image offset according to the formula Δx ≈ α×ΔC, where Δx is the image offset, α is the OIS motor sensitivity calibration parameter, and ΔC is the lens offset.
When the first camera has an OIS system and the second camera does not, the calculation unit 830 is specifically configured to determine the scene depth of the target scene using the corresponding depth formula, where Z is the scene depth of the target scene, f is the focal length, a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset, pixel pitch is the size of one pixel, a1×Δ1 ≈ Δx1 is the first image offset, and a1×Δ1×pixel pitch converts the first image offset from pixels to mm; after compensation the principal point of the first camera changes from u1' to u1 and the baseline from B' to B1; the principal point of the second camera is u2; x1 is the imaging point of the first image, and x2 is the imaging point of the second image.
When both cameras have an OIS system, the calculation unit 830 is specifically configured to determine the scene depth using the corresponding depth formula, where a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset of the first camera, a1×Δ1 ≈ Δx1 is the first image offset, a1×Δ1×pixel pitch converts the first image offset from pixels to mm, a2 is the OIS motor sensitivity calibration parameter of the second camera, Δ2 is the lens offset of the second camera, a2×Δ2 ≈ Δx2 is the second image offset, and a2×Δ2×pixel pitch converts the second image offset from pixels to mm; after compensation the principal point of the first camera changes from u1' to u1 and that of the second camera from u2' to u2; x1 is the imaging point of the first image, and x2 is the imaging point of the second image.
FIG. 9 is a schematic structural diagram of another scene depth calculation apparatus according to Embodiment 2 of the present invention. As shown in Figure 9, the apparatus may also be a scene depth calculation apparatus 900, which may further include a second acquisition unit 910 and a determination unit 920. The second acquisition unit 910 is configured to acquire the angular velocity information of the terminal device's shake detected by the gyroscope sensor, convert the angular velocity information into the shake amplitude of the terminal device, drive the OIS motor according to the shake amplitude to push the lens of the first camera and/or the second camera, and acquire the lens offset of the first camera and/or the second camera. The determination unit 920 is configured to push the OIS motor through the OIS controller to move the lens to a designated position; wait for the OIS motor to stabilize, then take a picture; and, when the captured images reach a preset number, detect the feature point coordinates of each image and determine the OIS motor sensitivity calibration parameter according to the designated position of the lens.
FIG. 10 is a schematic structural diagram of a terminal according to Embodiment 3 of the present invention. As shown in Figure 10, the terminal 1000 includes a camera 1010 (comprising a first camera and a second camera), a processor 1020, a memory 1030, and a system bus; the camera 1010, the processor 1020, and the memory 1030 are connected through the system bus. The first camera and the second camera are configured at least to capture the target scene at the same moment to obtain a first image and a second image, respectively; the first camera and/or the second camera has an OIS system, and the two cameras are arranged side by side on the body of the same terminal device. The memory 1030 is configured to store the first image and the second image. The processor 1020 is configured to acquire the lens offset of the camera with the OIS motor, convert the lens offset into an image offset according to a preset OIS motor sensitivity calibration parameter, and calculate the scene depth of the target scene from the compensated first camera calibration parameters and/or the compensated second camera calibration parameters together with the first image and the second image acquired from the memory; the first camera's calibration parameters are compensated according to the first camera's lens offset, and the second camera's according to the second camera's lens offset. The OIS system is specifically configured to acquire the angular velocity information of the terminal device's shake detected by the gyroscope sensor, convert it into the shake amplitude of the terminal device, drive the OIS motor according to the shake amplitude to push the lens of the first camera and/or the second camera, and acquire the lens offset of the first camera and/or the second camera. The processor 1020 is further configured to push the OIS motor through the OIS controller to move the lens to a designated position; wait for the OIS motor to stabilize, then take a picture; and, when the captured images reach a preset number, detect the feature point coordinates of each image and determine the OIS motor sensitivity calibration parameter according to the designated lens position and the feature point coordinates of each image. The memory 1030 is further configured to store the OIS motor sensitivity calibration parameter.
The processor 1020 is specifically configured to convert the lens offset into an image offset according to the formula Δx ≈ α×ΔC, where Δx is the image offset, α is the OIS motor sensitivity calibration parameter, and ΔC is the lens offset.
When the first camera has an OIS system, the processor is specifically configured to determine the scene depth of the target scene using the corresponding depth formula, where Z is the scene depth of the target scene, f is the focal length, a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset, pixel pitch is the size of one pixel, a1×Δ1 ≈ Δx1 is the first image offset, and a1×Δ1×pixel pitch converts the first image offset from pixels to mm; after compensation the principal point of the first camera changes from u1' to u1 and the baseline from B' to B1; the principal point of the second camera is u2; x1 is the imaging point of the first image, and x2 is the imaging point of the second image.
When both the first camera and the second camera have an OIS system, the processor is specifically configured to determine the scene depth of the target scene using the corresponding depth formula, where a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset of the first camera, a1×Δ1 ≈ Δx1 is the first image offset, a1×Δ1×pixel pitch converts the first image offset from pixels to mm, a2 is the OIS motor sensitivity calibration parameter of the second camera, Δ2 is the lens offset of the second camera, a2×Δ2 ≈ Δx2 is the second image offset, and a2×Δ2×pixel pitch converts the second image offset from pixels to mm; after compensation the principal point of the first camera changes from u1' to u1 and that of the second camera from u2' to u2; x1 is the imaging point of the first image, and x2 is the imaging point of the second image.


Abstract

Embodiments of the present invention relate to a scene depth calculation method, apparatus, and terminal. The method includes: acquiring the lens offset of a camera with an OIS system, where the first camera and/or the second camera has an OIS system; converting the lens offset into an image offset according to a preset OIS motor sensitivity calibration parameter; and calculating the scene depth of a target scene from the compensated first camera calibration parameters and/or the compensated second camera calibration parameters, together with a first image and a second image obtained by the first camera and the second camera respectively capturing the target scene at the same moment, where the first camera's calibration parameters are compensated according to the first camera's lens offset and the second camera's according to the second camera's lens offset. This solves the parallax problem; the scene depth of the target scene is calculated with the compensated camera calibration parameters, and the calculated scene depth values are more accurate.

Description

Scene depth calculation method, apparatus, and terminal
This application claims priority to Chinese Patent Application No. 201610941102.2, filed with the Patent Office of the State Intellectual Property Office of China on October 25, 2016 and entitled "Image depth calculation method for a terminal with dual cameras, and terminal", which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of the present invention relate to the field of communications, and in particular to a scene depth calculation method, apparatus, and terminal for a target scene captured by a terminal device with dual cameras.
Background Art
At present, depth calculation by means of dual cameras is being applied on more and more mobile phones, for example the Huawei P9. Optical Image Stabilization (OIS), an important means of improving photo quality in low light, is also used on more and more phones. OIS works by moving the lens to compensate for hand shake and thereby stabilize the image.
The prior art generally rectifies images with the dual-camera calibration parameters so that the left and right images provided by the two cameras are aligned in one direction, then calculates the disparity and converts the disparity into scene depth. However, as soon as one of the cameras has an OIS system, the OIS causes a lens shift, which changes the dual-camera calibration parameters and leads to parallax problems (positive and negative disparity existing simultaneously, or the images failing to align in one direction), which in turn makes the calculated scene depth values inaccurate.
Summary of the Invention
Embodiments of the present invention provide a scene depth calculation method that, when an OIS system is present in one or both cameras, solves the problem of inaccurate scene depth values for the target scene caused by parallax problems (positive and negative disparity existing simultaneously, or the images failing to align in one direction).
In a first aspect, a scene depth calculation method is provided. The method includes: acquiring the lens offset of a camera with an OIS system, where the first camera and/or the second camera has an OIS system and the first and second cameras are arranged side by side on the body of the same terminal device; converting the lens offset into an image offset according to a preset OIS motor sensitivity calibration parameter; and calculating the scene depth of a target scene from the compensated first camera calibration parameters and/or the compensated second camera calibration parameters, together with a first image and a second image obtained by the first camera and the second camera respectively capturing the target scene at the same moment, where the first camera's calibration parameters are compensated according to the first camera's lens offset and the second camera's according to the second camera's lens offset. By compensating for the changes in the camera calibration parameters caused by terminal shake, the parallax problem is solved, and the scene depth calculated with the compensated parameters is more accurate.
With reference to the first aspect, in a first possible implementation of the first aspect, the method further includes, beforehand: acquiring the angular velocity information of the terminal device's shake detected by a gyroscope sensor; converting the angular velocity information into the shake amplitude of the terminal device, driving the OIS motor according to the shake amplitude to push the lens of the first camera and/or the lens of the second camera, and acquiring the lens offset of the first camera and/or the second camera. Because the shake duration of the terminal device differs from its exposure time, the shake lasting longer than the exposure, multiple lens offsets are acquired; one lens offset is selected from them according to a preset rule, and this selected offset is used in the subsequent calculation of the scene depth of the target scene.
With reference to the first aspect, in a second possible implementation of the first aspect, the OIS motor sensitivity calibration parameter is determined by the following steps: pushing the OIS motor through the OIS controller to move the lens to a designated position; waiting for the OIS motor to stabilize, then taking a picture; and, when the captured images reach a preset number, detecting the feature point coordinates of each image and determining the OIS motor sensitivity calibration parameter from the designated lens position and the feature point coordinates of each image. The parameter is stored in the terminal device before it leaves the factory, so that afterwards, when calculating the scene depth of a target scene, the stored parameter can be used to convert the lens offset into an image offset.
With reference to the first aspect, in a third possible implementation of the first aspect, the lens offset is converted into an image offset according to the following formula,
Δx ≈ α×ΔC
where Δx is the image offset, α is the OIS motor sensitivity calibration parameter, and ΔC is the lens offset. Converting the lens offset into an image offset through the OIS motor sensitivity calibration parameter keeps the units consistent with the camera calibration parameters.
With reference to the first aspect, in a fourth possible implementation of the first aspect, when the first camera has an OIS system, calculating the scene depth of the target scene according to the compensated first camera calibration parameters and/or the compensated second camera calibration parameters, together with the first image and the second image obtained by the first camera and the second camera respectively capturing the target scene at the same moment, specifically includes:
determining the scene depth of the target scene using the following formula,
Figure PCTCN2016112696-appb-000001
where Z is the scene depth of the target scene, f is the focal length, a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset, pixel pitch is the size of one pixel, a1×Δ1 ≈ Δx1 is the first image offset, and a1×Δ1×pixel pitch converts the first image offset from pixels to mm; after compensation the principal point of the first camera changes from u1' to u1 and the baseline from B' to B1; the principal point of the second camera is u2; x1 is the imaging point of the first image, and x2 is the imaging point of the second image.
With reference to the first aspect, in a fifth possible implementation of the first aspect, when both the first camera and the second camera have an OIS system, calculating the scene depth of the target scene according to the compensated first camera calibration parameters and/or the compensated second camera calibration parameters, together with the first image and the second image obtained at the same moment, specifically includes:
determining the scene depth of the target scene using the following formula,
Figure PCTCN2016112696-appb-000002
where a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset of the first camera, a1×Δ1 ≈ Δx1 is the first image offset, a1×Δ1×pixel pitch converts the first image offset from pixels to mm, a2 is the OIS motor sensitivity calibration parameter of the second camera, Δ2 is the lens offset of the second camera, a2×Δ2 ≈ Δx2 is the second image offset, and a2×Δ2×pixel pitch converts the second image offset from pixels to mm; after compensation the principal point of the first camera changes from u1' to u1 and that of the second camera from u2' to u2; x1 is the imaging point of the first image, and x2 is the imaging point of the second image.
In a second aspect, an embodiment of the present invention provides a scene depth calculation apparatus. The apparatus includes a first acquisition unit, a processing unit, and a calculation unit. The first acquisition unit is configured to acquire the lens offset of a camera with an OIS system, where the first camera and/or the second camera has an OIS system and the first and second cameras are arranged side by side on the body of the same terminal device. The processing unit is configured to convert the lens offset into an image offset according to a preset OIS motor sensitivity calibration parameter. The calculation unit is configured to calculate the scene depth of the target scene according to the compensated first camera calibration parameters and/or the compensated second camera calibration parameters, together with the first image and the second image obtained by the first camera and the second camera respectively capturing the target scene at the same moment; the first camera's calibration parameters are compensated according to the first camera's lens offset, and the second camera's according to the second camera's lens offset. By compensating for the changes in the camera calibration parameters caused by terminal shake, the parallax problem is solved, and the scene depth calculated with the compensated parameters is more accurate.
With reference to the second aspect, in a first possible implementation of the second aspect, the apparatus further includes a second acquisition unit, specifically configured to: acquire the angular velocity information of the terminal device's shake detected by the gyroscope sensor; convert the angular velocity information into the shake amplitude of the terminal device; drive the OIS motor according to the shake amplitude to push the lens of the first camera and/or the lens of the second camera; and acquire the lens offset of the first camera and/or the second camera. Because the shake duration of the terminal device differs from its exposure time, the shake lasting longer than the exposure, multiple lens offsets are acquired; one is selected according to a preset rule and used in the subsequent calculation of the scene depth of the target scene.
With reference to the second aspect, in a second possible implementation of the second aspect, the apparatus further includes a determination unit, specifically configured to: push the OIS motor through the OIS controller to move the lens to a designated position; wait for the OIS motor to stabilize, then take a picture; and, when the captured images reach a preset number, detect the feature point coordinates of each image and determine the OIS motor sensitivity calibration parameter according to the designated position of the lens. The parameter is stored in the terminal device before it leaves the factory, so that afterwards, when calculating the scene depth of a target scene, the stored parameter can be used to convert the lens offset into an image offset.
With reference to the second aspect, in a third possible implementation of the second aspect, the processing unit is specifically configured to:
convert the lens offset into an image offset according to the following formula,
Δx ≈ α×ΔC
where Δx is the image offset, α is the OIS motor sensitivity calibration parameter, and ΔC is the lens offset. Converting the lens offset into an image offset through the OIS motor sensitivity calibration parameter keeps the units consistent with the camera calibration parameters.
With reference to the second aspect, in a fourth possible implementation of the second aspect, when the first camera has an OIS system, the calculation unit is specifically configured to:
determine the scene depth of the target scene using the following formula,
Figure PCTCN2016112696-appb-000003
where Z is the scene depth of the target scene, f is the focal length, a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset, pixel pitch is the size of one pixel, a1×Δ1 ≈ Δx1 is the first image offset, and a1×Δ1×pixel pitch converts the first image offset from pixels to mm; after compensation the principal point of the first camera changes from u1' to u1 and the baseline from B' to B1; the principal point of the second camera is u2; x1 is the imaging point of the first image, and x2 is the imaging point of the second image.
With reference to the second aspect, in a fifth possible implementation of the second aspect, when both the first camera and the second camera have an OIS system, the calculation unit is specifically configured to:
determine the scene depth of the target scene using the following formula,
Figure PCTCN2016112696-appb-000004
where a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset of the first camera, a1×Δ1 ≈ Δx1 is the first image offset, a1×Δ1×pixel pitch converts the first image offset from pixels to mm, a2 is the OIS motor sensitivity calibration parameter of the second camera, Δ2 is the lens offset of the second camera, a2×Δ2 ≈ Δx2 is the second image offset, and a2×Δ2×pixel pitch converts the second image offset from pixels to mm; after compensation the principal point of the first camera changes from u1' to u1 and that of the second camera from u2' to u2; x1 is the imaging point of the first image, and x2 is the imaging point of the second image.
In a third aspect, an embodiment of the present invention provides a terminal. The terminal includes a first camera and a second camera, configured at least to capture a target scene at the same moment to obtain a first image and a second image, respectively, where the first camera and/or the second camera has an OIS system and the two cameras are arranged side by side on the body of the same terminal device; a memory, configured to store the first image and the second image; and a processor, configured to acquire the lens offset of the camera with the OIS motor, convert the lens offset into an image offset according to a preset OIS motor sensitivity calibration parameter, and calculate the scene depth of the target scene according to the compensated first camera calibration parameters and/or the compensated second camera calibration parameters, together with the first image and the second image acquired from the memory; the first camera's calibration parameters are compensated according to the first camera's lens offset, and the second camera's according to the second camera's lens offset. Using the lens offset to compensate for the changes in the camera calibration parameters caused by terminal shake solves the parallax problem, and the scene depth of the target scene calculated with the compensated parameters is more accurate.
With reference to the third aspect, in a first possible implementation of the third aspect, the OIS system is specifically configured to: acquire the angular velocity information of the terminal device's shake detected by the gyroscope sensor; convert the angular velocity information into the shake amplitude of the terminal device; drive the OIS motor according to the shake amplitude to push the lens of the first camera and/or the lens of the second camera; and acquire the lens offset of the first camera and/or the second camera. Because the shake duration of the terminal device differs from its exposure time, the shake lasting longer than the exposure, multiple lens offsets are acquired; one is selected according to a preset rule and used in the subsequent calculation of the scene depth of the target scene.
With reference to the third aspect, in a second possible implementation of the third aspect, the processor is further configured to: push the OIS motor through the OIS controller to move the lens to a designated position; wait for the OIS motor to stabilize, then take a picture; and, when the captured images reach a preset number, detect the feature point coordinates of each image and determine the OIS motor sensitivity calibration parameter according to the designated lens position and the feature point coordinates of each image. The memory is further configured to store the OIS motor sensitivity calibration parameter. The parameter is stored in the terminal before it leaves the factory, so that afterwards, when calculating the scene depth of a target scene, the stored parameter can be used to convert the lens offset into an image offset.
With reference to the third aspect, in a third possible implementation of the third aspect, the processor is specifically configured to convert the lens offset into an image offset according to the following formula,
Δx ≈ α×ΔC
where Δx is the image offset, α is the OIS motor sensitivity calibration parameter, and ΔC is the lens offset. Converting the lens offset into an image offset through the OIS motor sensitivity calibration parameter keeps the units consistent with the camera calibration parameters.
With reference to the third aspect, in a fourth possible implementation of the third aspect, when the first camera has an OIS system, the processor is specifically configured to determine the scene depth of the target scene using the following formula,
Figure PCTCN2016112696-appb-000005
where Z is the scene depth of the target scene, f is the focal length, a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset, pixel pitch is the size of one pixel, a1×Δ1 ≈ Δx1 is the first image offset, and a1×Δ1×pixel pitch converts the first image offset from pixels to mm; after compensation the principal point of the first camera changes from u1' to u1 and the baseline from B' to B1; the principal point of the second camera is u2; x1 is the imaging point of the first image, and x2 is the imaging point of the second image.
With reference to the third aspect, in a fifth possible implementation of the third aspect, when both the first camera and the second camera have an OIS system, the processor is specifically configured to determine the scene depth of the target scene using the following formula,
Figure PCTCN2016112696-appb-000006
where a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset of the first camera, a1×Δ1 ≈ Δx1 is the first image offset, a1×Δ1×pixel pitch converts the first image offset from pixels to mm, a2 is the OIS motor sensitivity calibration parameter of the second camera, Δ2 is the lens offset of the second camera, a2×Δ2 ≈ Δx2 is the second image offset, and a2×Δ2×pixel pitch converts the second image offset from pixels to mm; after compensation the principal point of the first camera changes from u1' to u1 and that of the second camera from u2' to u2; x1 is the imaging point of the first image, and x2 is the imaging point of the second image.
Brief Description of the Drawings
The technical solutions of the embodiments of the present invention are described in further detail below with reference to the accompanying drawings and embodiments.
Figure 1 is a block diagram of the working principle of an OIS system;
Figure 2 is a block diagram of a depth calculation system;
Figure 3 is a flow chart of the scene depth calculation method provided by Embodiment 1 of the present invention;
Figure 4a is a schematic diagram of a lens shift scenario;
Figure 4b is a schematic diagram of the imaging change before and after a lens shift;
Figure 4c is a flow chart for determining the OIS motor sensitivity calibration parameter;
Figure 5a is a schematic diagram of scene depth calculation provided by an embodiment of the present invention;
Figure 5b is another schematic diagram of scene depth calculation provided by an embodiment of the present invention;
Figure 6a is a schematic diagram of an image taken with the dual-camera calibration parameters compensated;
Figure 6b is a schematic diagram of an image taken without compensating the dual-camera calibration parameters;
Figure 6c is a partial enlargement of Figure 6a;
Figure 6d is a partial enlargement of Figure 6b;
Figure 7a is a schematic diagram of the scene depth without compensating the dual-camera calibration parameters;
Figure 7b is a schematic diagram of the scene depth after compensating the dual-camera calibration parameters;
Figure 8 is a schematic structural diagram of a scene depth calculation apparatus provided by Embodiment 2 of the present invention;
Figure 9 is a schematic structural diagram of another scene depth calculation apparatus provided by Embodiment 2 of the present invention;
Figure 10 is a schematic structural diagram of a terminal provided by Embodiment 3 of the present invention.
Detailed Description
In the present invention, the terminal device may be a device with dual cameras, including but not limited to a camera (e.g., a digital camera), a video camera, a mobile phone (e.g., a smartphone), a tablet (Pad), a personal digital assistant (PDA), a portable device (e.g., a portable computer), a wearable device, and so on; the embodiments of the present invention place no specific limitation on this.
Referring to Figure 1, the terminal device may be a mobile phone; the embodiments of the present invention are described below taking a mobile phone as an example.
At present, more and more mobile phones use dual cameras to calculate scene depth. Dual cameras perceive distance by simulating the principle of human binocular vision: an object is observed from two points, images are acquired from different viewing angles, and, from the pixel correspondences between the images, the offset between pixels is calculated by triangulation to obtain the object's scene depth. When one or both of the dual cameras has an OIS system, the OIS causes a lens shift, which changes the dual-camera calibration parameters and leads to parallax problems, making the depth calculation inaccurate. The dual-camera calibration parameters therefore need to be compensated so that the scene depth of the target scene is calculated accurately.
Figure 1 is a block diagram of the working principle of an OIS system. As shown in Figure 1, the terminal device includes an OIS system 100 and an image signal processor (ISP) 110. The OIS system 100 includes an OIS controller 120, a gyroscope sensor 130, a Hall sensor 140, a motor 150, and a camera 160.
The camera 160 includes a first camera and a second camera, which may be placed side by side on either the front or the back of the terminal device, arranged horizontally or vertically. The first camera and/or the second camera has an OIS system, and each camera has its own lens (not shown in Figure 1).
The Hall sensor 140 is a magnetic field sensor that measures displacement based on the Hall effect; it is used to acquire the lens offset of the camera with the OIS system, i.e., the lens offset of the first camera and/or the second camera.
The gyroscope sensor 130 is a positioning system based on the terminal device's orientation movement in free space; it is used to acquire the angular velocity information when the terminal device shakes.
The OIS controller 120 acquires the angular velocity information from the gyroscope sensor 130, converts it into the shake amplitude of the terminal device, and sends the shake amplitude to the motor 150 as a reference signal.
The motor 150 may be an OIS motor, used to push the lens of the camera with the OIS system according to the shake amplitude so as to keep the image sharp. The movement is in the X and/or Y direction, where the Y direction is a direction in the plane of the line connecting the lens's optical center and focal point, and the X direction is the direction through the lens's optical center perpendicular to the Y direction.
The OIS controller 120 also acquires, from the first camera and the second camera, the first image and the second image obtained by capturing the target scene at the same moment.
The ISP 110 stores the lens offset, the first image, and the second image acquired from the OIS controller 120.
The working principle of the OIS system is described in detail below with reference to the components in Figure 1.
When preparing to shoot, the terminal device initializes; normally, once ready, the OIS controller 120 controls the shutter to acquire images.
During shooting, the terminal device shakes. The OIS controller 120 reads the angular velocity information detected by the gyroscope sensor 130, converts it into the shake amplitude of the terminal device, and sends it to the OIS motor as a reference signal. The OIS motor moves the lens of the camera with the OIS system according to the shake amplitude, avoiding the image blur caused by terminal shake and keeping the image sharp. The movement may be that of the first camera's lens in the X and/or Y direction and/or that of the second camera's lens in the X and/or Y direction. The OIS controller 120 reads the lens offset detected by the Hall sensor 140 — i.e., the lens offset of the first camera and/or the second camera — acquires the captured images, i.e., the first and second images obtained by the first and second cameras capturing the target scene at the same moment, and sends the lens offset and the captured images to the ISP 110. The ISP 110 stores the lens offset and the two images.
Because the shake of the terminal device generally lasts longer than its exposure time — for example, a shake lasting 30 ms against an exposure time of 2 ms — the Hall sensor 140 acquires 15 lens offsets during the shake. The OIS controller 120 reads these 15 lens offsets from the Hall sensor 140 and, according to a preset rule, selects one lens offset from them; in the subsequent process, this selected offset is used as the lens offset referred to throughout for the scene depth calculation of the target scene.
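The selection of one lens offset from the multiple Hall-sensor readings described above can be sketched as follows. The patent only says a "preset rule" is used, so the choice of the median here (robust to a few outlier readings) is purely an assumed example.

```python
def select_lens_offset(readings):
    """Pick one lens offset (code units) from the Hall-sensor readings of one shake."""
    ordered = sorted(readings)
    n = len(ordered)
    mid = n // 2
    # Median: robust against a few outlier readings captured during the shake.
    return ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2

# e.g. 15 readings collected over a 30 ms shake with a 2 ms sampling interval
offsets = [12, 14, 13, 15, 40, 13, 12, 14, 13, 12, 15, 14, 13, 12, 14]
chosen = select_lens_offset(offsets)  # 13
```

Other plausible preset rules include taking the reading closest in time to the exposure window or averaging the readings during exposure.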
Figure 2 is a block diagram of a depth calculation system. As shown in Figure 2, the depth calculation system includes the ISP 110 and a depth calculation module 210. The depth calculation module 210 obtains the preset calibration information from the ISP 110, reads the stored OIS information and the first and second images obtained by the two cameras capturing the target scene at the same moment, calculates the scene depth of the target scene, and outputs a disparity map/depth map. The calibration information consists of the camera calibration parameters at initialization, such as the focal length, baseline, optical center, and principal point; the OIS information is the lens offset.
Specifically, the depth calculation module acquires the lens offset. Since the unit of the lens offset is code while the unit of scene depth is millimeters (mm), the two are inconsistent; the lens offset therefore needs to be converted into an image offset, in pixels, using the OIS motor sensitivity calibration parameter, and the camera calibration parameters are then compensated with the lens offset. The scene depth of the target scene is calculated with the compensated calibration parameters, and the depth value so calculated is more precise.
Figure 3 is a flow chart of the scene depth calculation method provided by Embodiment 1 of the present invention. As shown in Figure 3, the method includes:
S310: Acquire the lens offset (see ΔC in Figure 4b, Δ1 in Figure 5a, and Δ1 and Δ2 in Figure 5b). Specifically, the lens offset can be detected by a Hall sensor.
S320: Check whether the lens offset is abnormal; if not, go to S340; if so, go to S380. Specifically, the lens offset is abnormal if it exceeds a preset threshold and normal otherwise.
S340: Convert the lens offset into an image offset (see Δx in Figure 4b). Before S340, S330 must be performed: input the OIS motor sensitivity calibration parameter, i.e., the image offset caused by a unit lens offset. Each camera with an OIS system has its own OIS motor sensitivity calibration parameter, stored in the terminal device before it leaves the factory. With this parameter, the lens offset can be converted into the image offset.
S350: Compensate the dual-camera calibration parameters. Specifically, the shake of the terminal device during shooting changes some camera calibration parameters (such as the optical center, principal point, and baseline). The image offset is calculated from the lens offset, the changed calibration parameters are compensated with the lens offset, and the values of the changed parameters are determined. See Figure 4b, where the optical center changes from C' to C and the principal point from u' to u; Figure 5a, where the optical center of the first camera's lens changes from C1' to C1, the principal point from u1' to u1, and the baseline from B' to B1; and Figure 5b, where the optical center of the first camera's lens changes from C1' to C1 and its principal point from u1' to u1, the optical center of the second camera's lens changes from C2' to C2 and its principal point from u2' to u2, and the baseline changes from B' to B2.
S380: Calculate the scene depth of the target scene (see Formulas 2 and 4 for the calculation). Before S380, S360 (input the first image) and S370 (input the second image) must be performed. Specifically, the scene depth of the target scene is determined from the compensated camera calibration parameters and the first and second images, which the first and second cameras obtain by capturing the target scene at the same moment.
S390: Output the scene depth of the target scene.
How the OIS motor sensitivity calibration parameter is determined is explained below with reference to Figures 4a-4c.
Figure 4a is a schematic diagram of a lens shift scenario. As shown in Figure 4a, taking the OIS motor pushing one camera's lens as an example, the OIS motor moves the lens from the position shown by the elliptical dashed line to a designated position (xi, yi), and an image of a fixed chart is captured at each position. The image sensor converts the optical image acquired by the camera into an electronic signal; the lens offset can be determined from the lens positions before and after the move, and the image offset can be determined from the two images of the fixed chart.
Figure 4b is a schematic diagram of the imaging change before and after a lens shift. As shown in Figure 4b, taking the OIS motor pushing one camera's lens in the X direction as an example: before the lens moves, the camera calibration parameters are focal length f, optical center C', and principal point u'. After the lens moves, some calibration parameters change: the optical center changes from C' to C and the principal point from u' to u. The imaging points before and after the move are x' and x, respectively; ΔC is the distance between optical centers C' and C, i.e., the lens offset, in code units, and Δx is the distance between imaging points x' and x, i.e., the image offset, in pixels. From the lens offset and the image offset, the image offset caused by a unit lens offset — the OIS motor sensitivity calibration parameter — can be measured. With this parameter, the actual image offset can be calculated from the lens offset during subsequent shooting, so that the camera calibration parameters can be compensated when the terminal shakes.
When determining the OIS motor sensitivity calibration parameter, the relationship between the lens offset ΔC and the image offset Δx is assumed to be linear, giving the parameter as α ≈ Δx/ΔC, where α is in pixels/code.
However, ΔC and Δx are not strictly linear; a higher-order model (second order, third order, or higher) can be used, with more captured images, to obtain a more accurate OIS motor sensitivity calibration parameter.
Figure 4c is a flow chart for determining the OIS motor sensitivity calibration parameter. As shown in Figure 4c, it includes:
S410: Push the OIS motor through the OIS controller to move the lens to a designated position (xi, yi). Specifically, this may be moving the lens coordinates from position (x1, y1) to position (xi, yi).
S420: Wait for the OIS motor to stabilize, then take a picture.
S430: Check whether the number of captured images has reached a preset number; if not, go to S410; if so, go to S440. Specifically, with too few images the determined OIS motor sensitivity calibration parameter may have a relatively large error; multiple images can be taken to improve its accuracy. Moreover, a strictly linear relationship between the lens offset and the image offset does not necessarily hold; to determine the image offset at a given lens offset more accurately, it is necessary to measure the image offset at different lens offsets.
S440: Detect the feature point coordinates in each image and, combined with the lens position (xi, yi), determine the OIS motor sensitivity calibration parameter. Specifically, the feature point coordinates in the images captured before and after the lens movement are detected to obtain the image offset; the lens offset is determined from the moving distance of the lens; and the OIS motor sensitivity calibration parameter is determined from the image offset and the lens offset.
S450: Store the OIS motor sensitivity calibration parameter in the terminal device. Specifically, the parameter is stored before the terminal device leaves the factory so that afterwards, when the terminal captures a target scene, the lens offset can be converted into an image offset using the pre-stored parameter and the camera calibration parameters can be compensated with the lens offset, making the scene depth of the target scene more accurate.
How the depth calculation is performed is further explained below with reference to Figures 5a, 5b, and 6a-6d.
In one example, as shown in Figure 5a, which is a schematic diagram of scene depth calculation provided by an embodiment of the present invention, take the case where the first camera has an OIS system, the second camera does not, and the first camera's lens moves in the X direction. The camera lens is a convex lens; an incident ray and its corresponding parallel emergent ray form conjugate rays, and the intersection of the line connecting the incident point and the exit point with the main optical axis is called the focal point of the convex lens. The distance from the focal point to the imaging plane (film, CCD, etc.) is called the focal length; the point at the center of the lens is called the optical center; the intersection of the principal line of sight with the imaging plane is called the principal point; and the distance between the two camera lenses is called the baseline. At initialization, the following camera calibration parameters are known: the focal length is f; the optical center of the first camera's lens is C1' and its principal point is u1'; the optical center of the second camera's lens is C2 and its principal point is u2; and the baseline is B'. When the shutter is pressed, the imaging point of the first image acquired by the first camera is x1, and that of the second image acquired by the second camera is x2.
At this point, using the principle of similar triangles, the calculated scene depth Z' is:
Figure PCTCN2016112696-appb-000007
In the prior art, the change of some camera calibration parameters caused by terminal shake when the shutter is pressed is not considered, so the scene depth calculation is completed with the uncompensated parameters, principal point u1' and baseline B'. The calculated scene depth Z' has a large error.
When the shutter is pressed, terminal shake changes some camera calibration parameters. According to the shake amplitude, the OIS motor pushes the first camera's lens so that it shifts: the optical center of the first camera's lens changes from C1' to C1 (the lens offset of the first camera, in code units), the principal point changes from u1' to u1, and the baseline from B' to B1; u1 and B1 need to be calculated. Compensating the changed calibration parameters with the lens offset and determining their compensated values, the scene depth Z calculated by similar triangles is:
Figure PCTCN2016112696-appb-000008
where a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset of the first camera, pixel pitch is the size of one pixel, a1×Δ1 ≈ Δx1 is the first image offset in pixels, and a1×Δ1×pixel pitch converts the first image offset from pixels to mm.
In another example, as shown in Figure 5b, which is another schematic diagram of scene depth calculation provided by an embodiment of the present invention, take the case where both the first camera and the second camera have an OIS system and both lenses move in the X direction. At initialization, the following camera calibration parameters are known: the focal length is f; the optical center of the first camera's lens is C1' and its principal point is u1'; the optical center of the second camera's lens is C2' and its principal point is u2'; and the baseline is B'. When the shutter is pressed, the imaging point of the first image acquired by the first camera is x1, and that of the second image acquired by the second camera is x2.
At this point, using the principle of similar triangles, the calculated scene depth Z' is:
Figure PCTCN2016112696-appb-000009
In the prior art, the change of some camera calibration parameters caused by terminal shake when the shutter is pressed is not considered, so the depth calculation is completed with the uncompensated parameters, principal points u1', u2' and baseline B'. The calculated scene depth Z' has a large error.
When the shutter is pressed, terminal shake changes some camera calibration parameters: the optical center of the first camera's lens changes from C1' to C1 and its principal point from u1' to u1; the optical center of the second camera's lens changes from C2' to C2 (corresponding to the second camera's lens offset, in code units) and its principal point from u2' to u2; and the baseline changes from B' to B2. Here u1, u2, and B2 need to be calculated. Compensating the changed calibration parameters with the lens offsets (the lens offset of the first camera and that of the second camera) and determining their changed values, the scene depth Z calculated by similar triangles is:
Figure PCTCN2016112696-appb-000010
where a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset of the first camera, a1×Δ1 ≈ Δx1 is the first image offset, a1×Δ1×pixel pitch converts the first image offset from pixels to mm, a2 is the OIS motor sensitivity calibration parameter of the second camera, Δ2 is the lens offset of the second camera, a2×Δ2 ≈ Δx2 is the second image offset, and a2×Δ2×pixel pitch converts the second image offset from pixels to mm.
Although Figures 5a and 5b illustrate camera calibration parameter compensation by taking a camera moving in one direction as an example, it should be realized that the compensation of the calibration parameters of two cameras moving in two directions can be realized in the same way; the details are not repeated here.
Figure 6a is a schematic diagram of an image taken with the dual-camera calibration parameters compensated; Figure 6b, without compensation; Figure 6c is a partial enlargement of Figure 6a; Figure 6d is a partial enlargement of Figure 6b. As Figures 6a-6d show, image alignment is poor when the dual-camera calibration parameters are not compensated and good after they are compensated.
Figure 7a is a schematic diagram of the scene depth when the dual-camera calibration parameters are not compensated; Figure 7b, after they are compensated. In Figures 7a and 7b, different depth values are represented by different colors, and black indicates that the scene depth cannot be calculated. In Figure 7a, the depth measured at 1000 mm is 1915.8 mm and the depth measured at 300 mm is 344.6 mm; in Figure 7b, the depth measured at 1000 mm is 909.6 mm and the depth measured at 300 mm is 287.4 mm. It follows that the calculated scene depth values are more accurate after the dual-camera calibration parameters are compensated.
Applying the scene depth calculation method for a terminal with dual cameras provided by the embodiments of the present invention solves the problem of inaccurate scene depth values caused by the parallax problem.
Figure 8 is a schematic structural diagram of a scene depth calculation apparatus provided by Embodiment 2 of the present invention. As shown in Figure 8, the scene depth calculation apparatus 800 includes a first acquisition unit 810, a processing unit 820, and a calculation unit 830.
The first acquisition unit 810 is configured to acquire the lens offset of a camera with an OIS system, where the first camera and/or the second camera has an OIS system and the first and second cameras are arranged side by side on the body of the same terminal device.
The processing unit 820 is configured to convert the lens offset into an image offset according to a preset OIS motor sensitivity calibration parameter.
The calculation unit 830 is configured to calculate the scene depth from the compensated first camera calibration parameters and/or the compensated second camera calibration parameters, together with the first image and the second image obtained by the first camera and the second camera capturing the target scene at the same moment; the first camera's calibration parameters are compensated according to the first camera's lens offset, and the second camera's according to the second camera's lens offset.
Specifically, the processing unit 820 is configured to convert the lens offset into an image offset according to the following formula,
Δx ≈ α×ΔC
where Δx is the image offset, α is the OIS motor sensitivity calibration parameter, and ΔC is the lens offset.
When the first camera has an OIS system and the second camera does not, the calculation unit 830 is specifically configured to:
determine the scene depth of the target scene using the following formula,
Figure PCTCN2016112696-appb-000011
where Z is the scene depth of the target scene, f is the focal length, a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset, pixel pitch is the size of one pixel, a1×Δ1 ≈ Δx1 is the first image offset, and a1×Δ1×pixel pitch converts the first image offset from pixels to mm; after compensation the principal point of the first camera changes from u1' to u1 and the baseline from B' to B1; the principal point of the second camera is u2; x1 is the imaging point of the first image, and x2 is the imaging point of the second image.
When both the first camera and the second camera have an OIS system, the calculation unit 830 is specifically configured to:
determine the scene depth of the target scene using the following formula,
Figure PCTCN2016112696-appb-000012
where a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset of the first camera, a1×Δ1 ≈ Δx1 is the first image offset, a1×Δ1×pixel pitch converts the first image offset from pixels to mm, a2 is the OIS motor sensitivity calibration parameter of the second camera, Δ2 is the lens offset of the second camera, a2×Δ2 ≈ Δx2 is the second image offset, and a2×Δ2×pixel pitch converts the second image offset from pixels to mm; after compensation the principal point of the first camera changes from u1' to u1 and that of the second camera from u2' to u2; x1 is the imaging point of the first image, and x2 is the imaging point of the second image.
Figure 9 is a schematic structural diagram of another scene depth calculation apparatus provided by Embodiment 2 of the present invention. As shown in Figure 9, the apparatus may also be a scene depth calculation apparatus 900, which may further include a second acquisition unit 910 and a determination unit 920.
The second acquisition unit 910 is configured to acquire the angular velocity information of the terminal device's shake detected by the gyroscope sensor, convert the angular velocity information into the shake amplitude of the terminal device, drive the OIS motor according to the shake amplitude to push the lens of the first camera and/or the second camera, and acquire the lens offset of the first camera and/or the second camera.
The determination unit 920 is configured to push the OIS motor through the OIS controller to move the lens to a designated position; wait for the OIS motor to stabilize, then take a picture; and, when the captured images reach a preset number, detect the feature point coordinates of each image and determine the OIS motor sensitivity calibration parameter according to the designated position of the lens.
Figure 10 is a schematic structural diagram of a terminal provided by Embodiment 3 of the present invention. As shown in Figure 10, the terminal 1000 includes a camera 1010 (comprising a first camera and a second camera), a processor 1020, a memory 1030, and a system bus; the camera 1010, the processor 1020, and the memory 1030 are connected through the system bus.
The first camera and the second camera are configured at least to capture the target scene at the same moment to obtain a first image and a second image, respectively; the first camera and/or the second camera has an OIS system, and the two cameras are arranged side by side on the body of the same terminal device.
The memory 1030 is configured to store the first image and the second image.
The processor 1020 is configured to acquire the lens offset of the camera with the OIS motor, convert the lens offset into an image offset according to a preset OIS motor sensitivity calibration parameter, and calculate the scene depth of the target scene from the compensated first camera calibration parameters and/or the compensated second camera calibration parameters together with the first image and the second image acquired from the memory; the first camera's calibration parameters are compensated according to the first camera's lens offset, and the second camera's according to the second camera's lens offset.
Specifically, the OIS system is configured to acquire the angular velocity information of the terminal device's shake detected by the gyroscope sensor, convert it into the shake amplitude of the terminal device, drive the OIS motor according to the shake amplitude to push the lens of the first camera and/or the second camera, and acquire the lens offset of the first camera and/or the second camera.
The processor 1020 is further configured to push the OIS motor through the OIS controller to move the lens to a designated position; wait for the OIS motor to stabilize, then take a picture; and, when the captured images reach a preset number, detect the feature point coordinates of each image and determine the OIS motor sensitivity calibration parameter according to the designated lens position and the feature point coordinates of each image.
The memory 1030 is further configured to store the OIS motor sensitivity calibration parameter.
Further, the processor 1020 is specifically configured to convert the lens offset into an image offset according to the following formula,
Δx ≈ α×ΔC
where Δx is the image offset, α is the OIS motor sensitivity calibration parameter, and ΔC is the lens offset.
Further, when the first camera has an OIS system, the processor is specifically configured to determine the scene depth of the target scene using the following formula,
Figure PCTCN2016112696-appb-000013
where Z is the scene depth of the target scene, f is the focal length, a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset, pixel pitch is the size of one pixel, a1×Δ1 ≈ Δx1 is the first image offset, and a1×Δ1×pixel pitch converts the first image offset from pixels to mm; after compensation the principal point of the first camera changes from u1' to u1 and the baseline from B' to B1; the principal point of the second camera is u2; x1 is the imaging point of the first image, and x2 is the imaging point of the second image.
Further, when both the first camera and the second camera have an OIS system, the processor is specifically configured to determine the scene depth of the target scene using the following formula,
Figure PCTCN2016112696-appb-000014
where a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset of the first camera, a1×Δ1 ≈ Δx1 is the first image offset, a1×Δ1×pixel pitch converts the first image offset from pixels to mm, a2 is the OIS motor sensitivity calibration parameter of the second camera, Δ2 is the lens offset of the second camera, a2×Δ2 ≈ Δx2 is the second image offset, and a2×Δ2×pixel pitch converts the second image offset from pixels to mm; after compensation the principal point of the first camera changes from u1' to u1 and that of the second camera from u2' to u2; x1 is the imaging point of the first image, and x2 is the imaging point of the second image.
专业人员应该还可以进一步意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、计算机软件或者二者的结合来实现,为了清楚地说明硬件和软件的可互换性,在上述说明中已经按照功能一般性地描述了各示例的组成及步骤。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明的范围。
A person of ordinary skill in the art will understand that all or part of the steps of the methods in the above embodiments can be completed by a program instructing a processor. The program may be stored in a computer-readable storage medium, the storage medium being a non-transitory medium such as random access memory, read-only memory, flash memory, a hard disk, a solid state disk, magnetic tape, a floppy disk, an optical disc, or any combination thereof.

Claims (18)

  1. A scene depth calculation method, characterized in that the method comprises:
    acquiring a lens offset of a camera equipped with an OIS system, wherein a first camera and/or a second camera is equipped with an OIS system, and the first camera and the second camera are arranged side by side on the body of the same terminal device;
    converting the lens offset into an image offset according to a preset OIS motor sensitivity calibration parameter; and
    calculating a scene depth of a target scene according to compensated calibration parameters of the first camera and/or compensated calibration parameters of the second camera, together with a first image and a second image respectively obtained by the first camera and the second camera capturing the target scene at the same moment; wherein the calibration parameters of the first camera are compensated according to the lens offset of the first camera, and the calibration parameters of the second camera are compensated according to the lens offset of the second camera.
  2. The method according to claim 1, characterized in that, before the method, it further comprises:
    acquiring angular velocity information of terminal device shake detected by a gyroscope sensor; and
    converting the angular velocity information into a shake amplitude of the terminal device, driving an OIS motor according to the shake amplitude to move the lens of the first camera and/or the lens of the second camera, and acquiring the lens offset of the first camera and/or the second camera.
  3. The method according to claim 1, characterized in that the OIS motor sensitivity calibration parameter is determined according to the following steps:
    pushing the OIS motor via an OIS controller to move the lens to a specified position;
    taking a photograph after the OIS motor has stabilized; and
    when a preset number of images have been captured, detecting the feature point coordinates of each image, and determining the OIS motor sensitivity calibration parameter according to the specified lens positions and the feature point coordinates of the images.
  4. The method according to claim 1, characterized in that converting the lens offset into the image offset according to the preset OIS motor sensitivity calibration parameter specifically comprises:
    converting the lens offset into the image offset according to the following formula:
    Δx≈α×ΔC
    where Δx is the image offset, α is the OIS motor sensitivity calibration parameter, and ΔC is the lens offset.
  5. The method according to claim 1, characterized in that, when the first camera is equipped with the OIS system, calculating the scene depth of the target scene according to the compensated calibration parameters of the first camera and/or the compensated calibration parameters of the second camera, together with the first image and the second image respectively obtained by the first camera and the second camera capturing the target scene at the same moment, specifically comprises:
    determining the scene depth of the target scene using the following formula:
    Figure PCTCN2016112696-appb-100001
    where Z is the scene depth of the target scene, f is the focal length, a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset, pixel pitch is the size of one pixel, a1×Δ1≈Δx1 is the first image offset, a1×Δ1×pixel pitch converts the unit of the first image offset from pixels to mm; after compensation the principal point of the first camera changes from u1' to u1 and the baseline changes from B' to B1; the principal point of the second camera is u2; x1 is the imaging point of the first image, and x2 is the imaging point of the second image.
  6. The method according to claim 1, characterized in that, when both the first camera and the second camera are equipped with OIS systems, calculating the scene depth of the target scene according to the compensated calibration parameters of the first camera and/or the compensated calibration parameters of the second camera, together with the first image and the second image respectively obtained by the first camera and the second camera capturing the target scene at the same moment, specifically comprises:
    determining the scene depth of the target scene using the following formula:
    Figure PCTCN2016112696-appb-100002
    where a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset of the first camera, a1×Δ1≈Δx1 is the first image offset, a1×Δ1×pixel pitch converts the unit of the first image offset from pixels to mm, a2 is the OIS motor sensitivity calibration parameter of the second camera, Δ2 is the lens offset of the second camera, a2×Δ2≈Δx2 is the second image offset, a2×Δ2×pixel pitch converts the unit of the second image offset from pixels to mm; after compensation the principal point of the first camera changes from u1' to u1 and the principal point of the second camera changes from u2' to u2; x1 is the imaging point of the first image, and x2 is the imaging point of the second image.
  7. A scene depth calculation apparatus, characterized in that the apparatus comprises a first acquiring unit, a processing unit, and a calculating unit;
    the first acquiring unit is configured to acquire a lens offset of a camera equipped with an OIS system, wherein a first camera and/or a second camera is equipped with an OIS system, and the first camera and the second camera are arranged side by side on the body of the same terminal device;
    the processing unit is configured to convert the lens offset into an image offset according to a preset OIS motor sensitivity calibration parameter; and
    the calculating unit is configured to calculate a scene depth of a target scene according to compensated calibration parameters of the first camera and/or compensated calibration parameters of the second camera, together with a first image and a second image respectively obtained by the first camera and the second camera capturing the target scene at the same moment; wherein the calibration parameters of the first camera are compensated according to the lens offset of the first camera, and the calibration parameters of the second camera are compensated according to the lens offset of the second camera.
  8. The apparatus according to claim 7, characterized in that the apparatus further comprises a second acquiring unit;
    the second acquiring unit is specifically configured to:
    acquire angular velocity information of terminal device shake detected by a gyroscope sensor; and
    convert the angular velocity information into a shake amplitude of the terminal device, drive the OIS motor according to the shake amplitude to move the lens of the first camera and/or the lens of the second camera, and acquire the lens offset of the first camera and/or the second camera.
  9. The apparatus according to claim 7, characterized in that the apparatus further comprises a determining unit;
    the determining unit is specifically configured to push the OIS motor via an OIS controller to move the lens to a specified position;
    take a photograph after the OIS motor has stabilized; and
    when a preset number of images have been captured, detect the feature point coordinates of each image, and determine the OIS motor sensitivity calibration parameter according to the specified lens positions and the feature point coordinates of the images.
  10. The apparatus according to claim 7, characterized in that the processing unit is specifically configured to:
    convert the lens offset into the image offset according to the following formula:
    Δx≈α×ΔC
    where Δx is the image offset, α is the OIS motor sensitivity calibration parameter, and ΔC is the lens offset.
  11. The apparatus according to claim 7, characterized in that, when the first camera is equipped with the OIS system, the calculating unit is specifically configured to:
    determine the scene depth of the target scene using the following formula:
    Figure PCTCN2016112696-appb-100003
    where Z is the scene depth of the target scene, f is the focal length, a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset, pixel pitch is the size of one pixel, a1×Δ1≈Δx1 is the first image offset, a1×Δ1×pixel pitch converts the unit of the first image offset from pixels to mm; after compensation the principal point of the first camera changes from u1' to u1 and the baseline changes from B' to B1; the principal point of the second camera is u2; x1 is the imaging point of the first image, and x2 is the imaging point of the second image.
  12. The apparatus according to claim 7, characterized in that, when both the first camera and the second camera are equipped with OIS systems, the calculating unit is specifically configured to:
    determine the scene depth of the target scene using the following formula:
    Figure PCTCN2016112696-appb-100004
    where a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset of the first camera, a1×Δ1≈Δx1 is the first image offset, a1×Δ1×pixel pitch converts the unit of the first image offset from pixels to mm, a2 is the OIS motor sensitivity calibration parameter of the second camera, Δ2 is the lens offset of the second camera, a2×Δ2≈Δx2 is the second image offset, a2×Δ2×pixel pitch converts the unit of the second image offset from pixels to mm; after compensation the principal point of the first camera changes from u1' to u1 and the principal point of the second camera changes from u2' to u2; x1 is the imaging point of the first image, and x2 is the imaging point of the second image.
  13. A terminal, characterized in that the terminal comprises a first camera and a second camera, the first camera and the second camera being configured at least to capture a target scene at the same moment to obtain a first image and a second image respectively; wherein the first camera and/or the second camera is equipped with an OIS system, and the first camera and the second camera are arranged side by side on the body of the same terminal device;
    a memory configured to store the first image and the second image; and
    a processor configured to acquire a lens offset of the camera equipped with an OIS motor, convert the lens offset into an image offset according to a preset OIS motor sensitivity calibration parameter, and calculate a scene depth of the target scene according to compensated calibration parameters of the first camera and/or compensated calibration parameters of the second camera, together with the first image and the second image obtained from the memory; wherein the calibration parameters of the first camera are compensated according to the lens offset of the first camera, and the calibration parameters of the second camera are compensated according to the lens offset of the second camera.
  14. The terminal according to claim 13, characterized in that the OIS system is specifically configured to acquire angular velocity information of terminal device shake detected by a gyroscope sensor; and
    convert the angular velocity information into a shake amplitude of the terminal device, drive the OIS motor according to the shake amplitude to move the lens of the first camera and/or the lens of the second camera, and acquire the lens offset of the first camera and/or the second camera.
  15. The terminal according to claim 13, characterized in that the processor is further configured to push the OIS motor via an OIS controller to move the lens to a specified position;
    take a photograph after the OIS motor has stabilized; and
    when a preset number of images have been captured, detect the feature point coordinates of each image, and determine the OIS motor sensitivity calibration parameter according to the specified lens positions and the feature point coordinates of the images;
    the memory is further configured to store the OIS motor sensitivity calibration parameter.
  16. The terminal according to claim 13, characterized in that the processor is specifically configured to:
    convert the lens offset into the image offset according to the following formula:
    Δx≈α×ΔC
    where Δx is the image offset, α is the OIS motor sensitivity calibration parameter, and ΔC is the lens offset.
  17. The terminal according to claim 13, characterized in that, when the first camera is equipped with the OIS system, the processor is specifically configured to determine the scene depth of the target scene using the following formula:
    Figure PCTCN2016112696-appb-100005
    where Z is the scene depth of the target scene, f is the focal length, a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset, pixel pitch is the size of one pixel, a1×Δ1≈Δx1 is the first image offset, a1×Δ1×pixel pitch converts the unit of the first image offset from pixels to mm; after compensation the principal point of the first camera changes from u1' to u1 and the baseline changes from B' to B1; the principal point of the second camera is u2; x1 is the imaging point of the first image, and x2 is the imaging point of the second image.
  18. The terminal according to claim 13, characterized in that, when both the first camera and the second camera are equipped with OIS systems, the processor is specifically configured to determine the scene depth of the target scene using the following formula:
    Figure PCTCN2016112696-appb-100006
    where a1 is the OIS motor sensitivity calibration parameter of the first camera, Δ1 is the lens offset of the first camera, a1×Δ1≈Δx1 is the first image offset, a1×Δ1×pixel pitch converts the unit of the first image offset from pixels to mm, a2 is the OIS motor sensitivity calibration parameter of the second camera, Δ2 is the lens offset of the second camera, a2×Δ2≈Δx2 is the second image offset, a2×Δ2×pixel pitch converts the unit of the second image offset from pixels to mm; after compensation the principal point of the first camera changes from u1' to u1 and the principal point of the second camera changes from u2' to u2; x1 is the imaging point of the first image, and x2 is the imaging point of the second image.
PCT/CN2016/112696 2016-10-25 2016-12-28 Scene depth calculation method, apparatus and terminal WO2018076529A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201680054264.2A CN108260360B (zh) 2016-10-25 2016-12-28 Scene depth calculation method, apparatus and terminal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610941102 2016-10-25
CN201610941102.2 2016-10-25

Publications (1)

Publication Number Publication Date
WO2018076529A1 true WO2018076529A1 (zh) 2018-05-03

Family

ID=62024299

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/112696 WO2018076529A1 (zh) 2016-10-25 2016-12-28 Scene depth calculation method, apparatus and terminal

Country Status (2)

Country Link
CN (1) CN108260360B (zh)
WO (1) WO2018076529A1 (zh)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111833394A (zh) * 2020-07-27 2020-10-27 深圳惠牛科技有限公司 Camera calibration method and measurement method based on a binocular measurement apparatus

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102867304A (zh) * 2012-09-04 2013-01-09 南京航空航天大学 Method for establishing the relationship between scene depth and disparity in a binocular stereo vision system
CN104954689A (zh) * 2015-06-30 2015-09-30 努比亚技术有限公司 Method for obtaining photographs using dual cameras, and photographing apparatus
CN105629427A (zh) * 2016-04-08 2016-06-01 东莞佩斯讯光电技术有限公司 Stereoscopic digital camera apparatus based on two controllable tilt-type voice coil motors

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5493942B2 (ja) * 2009-12-15 2014-05-14 ソニー株式会社 Imaging apparatus and imaging method
CN105637413B (zh) * 2013-08-21 2018-07-06 奥林巴斯株式会社 Imaging apparatus and imaging method
CN103685950A (zh) * 2013-12-06 2014-03-26 华为技术有限公司 Video image stabilization method and apparatus


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3582487A1 (en) * 2018-06-15 2019-12-18 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image stabilisation
US10567659B2 (en) 2018-06-15 2020-02-18 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image compensation method, electronic device and computer-readable storage medium
CN112581538A (zh) * 2020-12-11 2021-03-30 昆山丘钛光电科技有限公司 Method and apparatus for obtaining motor sensitivity
CN113873157A (zh) * 2021-09-28 2021-12-31 维沃移动通信有限公司 Photographing method and apparatus, electronic device, and readable storage medium
CN113873157B (zh) 2021-09-28 2024-04-16 维沃移动通信有限公司 Photographing method and apparatus, electronic device, and readable storage medium

Also Published As

Publication number Publication date
CN108260360A (zh) 2018-07-06
CN108260360B (zh) 2021-01-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16920386

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16920386

Country of ref document: EP

Kind code of ref document: A1