WO2017120771A1 - Depth information acquisition method and apparatus, and image acquisition device - Google Patents

Depth information acquisition method and apparatus, and image acquisition device (一种深度信息获取方法、装置及图像采集设备)

Info

Publication number
WO2017120771A1
WO2017120771A1 (PCT/CN2016/070707)
Authority
WO
WIPO (PCT)
Prior art keywords
image
offset
camera
lens
distance
Prior art date
Application number
PCT/CN2016/070707
Other languages
English (en)
French (fr)
Chinese (zh)
Inventor
王君
徐荣跃
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to US16/069,523 (US10506164B2)
Priority to CN201680002783.4A (CN107223330B)
Priority to PCT/CN2016/070707 (WO2017120771A1)
Priority to JP2018554610A (JP6663040B2)
Priority to KR1020187022702A (KR102143456B1)
Priority to EP16884334.0A (EP3389268B1)
Publication of WO2017120771A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/246 Calibration of cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/243 Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00 Diagnosis, testing or measuring for television systems or their details
    • H04N17/002 Diagnosis, testing or measuring for television systems or their details for television cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H04N23/55 Optical parts specially adapted for electronic image sensors; Mounting thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682 Vibration or motion blur correction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682 Vibration or motion blur correction
    • H04N23/685 Vibration or motion blur correction performed by mechanical compensation
    • H04N23/687 Vibration or motion blur correction performed by mechanical compensation by shifting the lens or sensor position

Definitions

  • Embodiments of the present invention relate to the field of electronic technologies, and in particular, to a depth information acquisition method and apparatus, and an image acquisition device.
  • Depth information refers to the vertical distance between the lens of a camera module in an image acquisition device and the object being photographed.
  • The camera module can control the lens to move according to the depth information, thereby achieving focusing on the photographed object. Therefore, the accuracy of focusing depends on the accuracy of the depth information acquisition.
  • Optical image stabilization (OIS) is a technology in which a dedicated lens cooperates with the photosensitive element to minimize the instability of the image caused by shaking while the operator uses the device.
  • In a camera module, OIS is mainly realized by moving the lens.
  • Embodiments of the present invention disclose a depth information acquisition method and apparatus, and an image acquisition device, which can accurately acquire depth information while a multi-camera module performs OIS, thereby enabling accurate and rapid focusing.
  • A first aspect of the embodiments of the present invention discloses a depth information acquisition method applied to an image capturing device, where the image capturing device includes a first camera and a second camera. The method may include:
  • when detecting that the first camera and the second camera shake, the depth information acquiring device can acquire a first image of the target subject captured by the first camera and a second image of the target subject captured simultaneously by the second camera, then detect an initial distance of the target subject between the first image and the second image, determine an offset difference between the first image and the second image, correct the initial distance using the offset difference, and finally determine the depth of the target subject according to the corrected initial distance.
  • In this way, the depth information acquiring device corrects the distance of the same subject between the two images respectively captured by the two cameras, so that the image acquisition device finally obtains the depth of the subject more accurately, thereby improving the accuracy of camera focusing.
  • The first camera includes a first lens and the second camera includes a second lens. The specific manner in which the depth information acquiring device determines the offset difference between the first image and the second image may be as follows: the first offset of the first lens and the second offset of the second lens may be acquired by the Hall sensor in each camera, the offset of the first image and the offset of the second image are determined according to the first offset and the second offset respectively, and the difference between the offset of the first image and the offset of the second image is finally obtained as the offset difference.
  • When shaking occurs, the lens offset is not the actual offset of the image. Determining the actual offset of each image at the time of shaking through the relationship between the lens offset and the image offset makes the finally obtained depth of the target subject more accurate.
  • When the image acquisition device is a dual-camera image acquisition device and both cameras have the OIS function, neither the offset of the first image nor the offset of the second image is 0 when shaking occurs; if only one of the two cameras has the OIS function, the offset of the image captured by the camera without the OIS function can be regarded as 0 when shaking occurs.
  • For an image acquisition device with more than two cameras, the depth information acquiring device can also obtain a depth of the target subject for each pair of cameras in the above manner, and finally take the average of these depths as the actual depth of the target subject.
  • Because the depths obtained from the individual cameras differ more from one another when OIS is performed, the depth of the target subject acquired in this way is more accurate, so that focusing can be achieved accurately and quickly during the focusing process.
  • The initial distance may be the distance of the target subject in the first image relative to the target subject in the second image; in that case the offset difference should be the difference of the offset of the first image relative to the offset of the second image. In other words, the initial distance and the offset difference are both vectors, and the calculation of the offset difference must correspond to the calculation of the initial distance.
  • When receiving a focus instruction for the target subject, the depth information acquiring device may, in response to the focus instruction, acquire a first moving distance of the first lens corresponding to the depth and a second moving distance of the second lens corresponding to the depth, then determine the in-focus position of the first lens according to the first moving distance and the in-focus position of the second lens according to the second moving distance, and finally control the first lens and the second lens to move to their respective in-focus positions.
  • In this way, the depth information acquiring device can acquire the distance each lens needs to move for the acquired depth, determine the in-focus position of each lens according to the corresponding moving distance, and then move each lens to its in-focus position, so that focusing on the target subject can be achieved accurately and quickly.
  • A second aspect of the embodiments of the present invention discloses a depth information acquiring apparatus applied to an image capturing device, where the image capturing device includes a first camera and a second camera. The depth information acquiring apparatus may include an acquiring unit, a detecting unit, a first determining unit, a correcting unit, and a second determining unit, wherein: the acquiring unit acquires, when shaking of the first camera and the second camera is detected, a first image of the target subject captured by the first camera and a second image of the target subject captured simultaneously by the second camera; the detecting unit detects an initial distance of the target subject between the first image and the second image; the first determining unit determines an offset difference between the first image and the second image; the correcting unit then corrects the initial distance using the offset difference; and the second determining unit determines the depth of the target subject according to the corrected initial distance.
  • In this way, the depth information acquiring apparatus corrects the distance of the same subject between the two images respectively captured by the two cameras, so that the image acquisition device finally obtains the depth of the subject more accurately, thereby improving the accuracy of camera focusing.
  • The first camera includes a first lens and the second camera includes a second lens, and the first determining unit may include an acquiring subunit and a determining subunit, where: the acquiring subunit may acquire the first offset of the first lens and the second offset of the second lens through the Hall sensor in each camera when the depth information acquiring apparatus detects that the first camera and the second camera shake; the determining subunit can then determine the offset of the first image according to the first offset and the offset of the second image according to the second offset, and finally obtain the difference between the offset of the first image and the offset of the second image as the offset difference.
  • When shaking occurs, the lens offset is not the actual offset of the image. Determining the actual offset of each image at the time of shaking through the relationship between the lens offset and the image offset makes the finally obtained depth of the target subject more accurate.
  • The depth information acquiring apparatus may further include a receiving unit and a control unit, wherein: when the receiving unit receives a focus instruction for the target subject, it may trigger the acquiring unit to acquire, in response to the focus instruction, a first moving distance of the first lens corresponding to the depth and a second moving distance of the second lens corresponding to the depth; the second determining unit determines the in-focus position of the first lens according to the first moving distance and the in-focus position of the second lens according to the second moving distance; and finally the control unit controls the first lens and the second lens to move to their respective in-focus positions.
  • In this way, the depth information acquiring apparatus can acquire the distance each lens needs to move for the acquired depth, determine the in-focus position of each lens according to the corresponding moving distance, and then move each lens to its in-focus position, so that focusing on the target subject can be achieved accurately and quickly.
  • A third aspect of the embodiments of the present invention discloses an image capturing device including a first camera, a second camera, a processor, and a receiver, wherein: the first camera is configured to capture a first image of the target subject when the image capturing device detects that the first camera and the second camera shake; the second camera is configured to capture a second image of the target subject simultaneously with the first camera when the image capturing device detects that the first camera and the second camera shake; the processor is mainly configured to perform the operations performed by the acquiring unit, the detecting unit, the correcting unit, each determining unit, and the control unit in the depth information acquiring apparatus; and the receiver is mainly configured to receive a focus instruction for the target subject, so that the processor can respond to the focus instruction and focus on the target subject.
  • It can be seen that, when detecting camera shake, the depth information acquiring device may acquire the images respectively captured by the first camera and the second camera, detect the initial distance of the target subject between the two images, correct the initial distance using the offset difference between the first image and the second image, and finally substitute the corrected initial distance into the depth calculation formula, thereby determining the depth of the target subject.
  • Whether both cameras have the OIS function or only one of them does, the depth information acquiring device corrects the distance of the same subject between the images captured by the two cameras, so that the depth information of the subject obtained from the corrected distance is accurate and focusing on the subject can be achieved accurately and quickly.
  • FIG. 1 is a schematic structural diagram of a camera module disclosed in an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of a method for acquiring depth information according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of a scene captured by a dual camera disclosed in an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart diagram of another method for acquiring depth information according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a scene in which a camera moves while performing OIS according to an embodiment of the present invention
  • FIG. 6 is a schematic diagram of a focal length calibration method of a lens disclosed in an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a depth information acquiring apparatus according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of another depth information acquiring apparatus according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic structural diagram of an image collection device according to an embodiment of the present invention.
  • Embodiments of the present invention disclose a depth information acquisition method and apparatus, and an image acquisition device, which can accurately acquire depth information while a multi-camera module performs OIS, thereby enabling accurate and rapid focusing. The details are described below separately.
  • FIG. 1 is a schematic structural diagram of a camera module disclosed in an embodiment of the present invention.
  • As shown in FIG. 1, the camera module includes a protective film 1, a lens group 2, a focus motor 3, a filter 4, and a photosensitive element 5.
  • The protective film 1 is used to protect the lens group; the lens group 2 is usually composed of multiple lenses and performs the imaging function.
  • If the lens group has the OIS function, the lenses of the lens group (collectively referred to as the lens in the embodiments of the present invention) can be moved left and right to obtain a clearer image.
  • The focus motor 3 is mainly used to drive the lens to move so as to assist focusing; the filter 4 is mainly used to filter out infrared light, so that the color deviation of the finally presented image is small; the photosensitive element 5 is mainly used to convert the image formed by the lens group into an electronic image, and the position of the photosensitive element is fixed.
  • In an image capturing device with multiple cameras, the photosensitive elements corresponding to the cameras can be considered to be on the same plane.
  • The camera module shown in FIG. 1 can be applied to image acquisition devices with photographing and video-recording functions, such as smart phones, tablet computers, personal digital assistants (PDAs), mobile Internet devices (MIDs), and digital cameras.
  • When the image capturing device includes a plurality of camera modules, the camera modules are arranged side by side in the image capturing device; all of them may have the OIS function, or only one camera may have the OIS function, which is not limited in the embodiments of the present invention.
  • After the image acquisition device captures an image, the depth information of each subject in the image can be calculated; when a certain subject needs to be focused on, the image acquisition device can focus on that subject according to its depth.
  • When the image capturing device includes a plurality of camera modules, the images collected by the camera modules are eventually combined into one image, so that the captured image has higher definition and better satisfies the user's shooting requirements. It should be noted that this solution is proposed for multi-camera modules that perform OIS.
  • FIG. 2 is a schematic flowchart diagram of a method for acquiring depth information according to an embodiment of the present invention.
  • the method described in FIG. 2 can be applied to an image acquisition device, which includes a first camera and a second camera.
  • the depth information obtaining method may include the following steps:
  • the depth information acquiring device acquires a first image of the target subject acquired by the first camera and a second image of the target subject simultaneously acquired by the second camera when detecting the shaking of the first camera and the second camera.
  • In specific implementation, the first camera and the second camera may capture images within their respective viewing angle ranges in real time.
  • Because the first camera and the second camera are disposed in the same image acquisition device, they shake at the same time. Therefore, when the depth information acquiring device detects that the first camera and the second camera shake (specifically, a gyroscope may be used to detect whether there is shaking), the first camera and the second camera simultaneously capture images of the current environment, so that a first image from the first camera and a second image from the second camera are obtained, and the depth information acquiring device thereby acquires the first image and the second image.
  • Both the first image and the second image contain the target subject.
  • The first image and the second image are the electronic images into which the photosensitive elements of the first camera and the second camera convert the images formed by the respective lenses.
  • The target subject is any subject contained in the first image and the second image, such as a human face, a building, or an animal, which is not limited in the embodiments of the present invention.
  • the first camera includes a first lens
  • the second camera includes a second lens.
  • In specific implementation, when detecting that the first camera and the second camera shake, the depth information acquiring device acquires the first image and the second image, and also obtains the movement distance af_offset1 of the first lens and the movement distance af_offset2 of the second lens.
  • af_offset1 is the scalar distance, along the Z axis of the three-dimensional coordinate system, between the current position of the first lens and the first starting position of the first lens, that is, the distance by which the current position of the first lens when acquiring the first image has moved relative to the first starting position; af_offset2 is the scalar distance, along the Z axis of the three-dimensional coordinate system, between the current position of the second lens and the second starting position of the second lens, that is, the distance by which the current position of the second lens when acquiring the second image has moved relative to the second starting position.
  • The first starting position mainly refers to the position of the first lens when the vertical distance between the first lens and the photosensitive element of the first camera is one focal length of the first lens; likewise, the second starting position mainly refers to the position of the second lens when the vertical distance between the second lens and the photosensitive element of the second camera is one focal length of the second lens.
  • In other words, the minimum distance between the first lens and the photosensitive element of the first camera is generally one focal length of the first lens, and the minimum distance between the second lens and the photosensitive element of the second camera is also one focal length of the second lens.
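  • As an illustration of this step, the following minimal Python sketch shows one way the shake check and the per-lens bookkeeping described above could look. The gyroscope threshold, the data layout, and all names are assumptions made for illustration only; the embodiment merely states that a gyroscope may be used to detect shaking and that af_offset1 and af_offset2 are obtained together with the two images.

      import math
      from dataclasses import dataclass

      # Angular-velocity threshold (rad/s) above which the module is treated as
      # shaking. The value is illustrative; the embodiment only says a gyroscope
      # may be used to detect shaking.
      GYRO_SHAKE_THRESHOLD = 0.05

      def is_shaking(gyro_sample):
          """gyro_sample: (wx, wy, wz) angular velocities from the gyroscope."""
          wx, wy, wz = gyro_sample
          return math.sqrt(wx * wx + wy * wy + wz * wz) > GYRO_SHAKE_THRESHOLD

      @dataclass
      class LensState:
          """Per-lens state captured at the moment the image is taken."""
          focal_length: float   # f1 or f2, from calibration (mm)
          af_offset: float      # movement along Z from the starting position (mm);
                                # the starting position is one focal length from the sensor

          @property
          def image_distance(self) -> float:
              # Distance between lens and photosensitive element: f + af_offset.
              return self.focal_length + self.af_offset

      if __name__ == "__main__":
          lens1 = LensState(focal_length=4.0, af_offset=0.10)
          lens2 = LensState(focal_length=4.0, af_offset=0.10)
          print(is_shaking((0.02, 0.08, 0.01)))   # True -> capture first and second images
          print(lens1.image_distance, lens2.image_distance)
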
  • the depth information acquiring device detects an initial distance of the target subject in the first image and the second image.
  • FIG. 3 is a schematic diagram of a scene captured by a dual camera disclosed in the embodiment of the present invention.
  • In specific implementation, the depth information acquiring device may establish a three-dimensional coordinate system by taking the plane where the photosensitive elements of the cameras are located as the XY plane and the direction perpendicular to the photosensitive elements, from the photosensitive elements toward the lenses, as the Z axis.
  • The position of the origin of the three-dimensional coordinate system is not limited in the embodiments of the present invention. When the depth information acquiring device acquires the first image captured by the first lens and the second image captured by the second lens (as shown in FIG. 3(a)), the first image and the second image may be overlapped and then mapped onto the XY plane of the three-dimensional coordinate system (as shown in FIG. 3(b)).
  • The depth information acquiring device may then detect the initial distance between the target subject in the first image and the target subject in the second image. The initial distance is usually a vector distance, denoted d0. Specifically, after the first image is overlapped with the second image and mapped onto the XY plane, the depth information acquiring device acquires the coordinate distance between the target subject in the two images (i.e., the distance d0 between the two black points in the first image and the second image shown in FIG. 3(b)).
  • The initial distance d0 may be the vector distance between the coordinates of the same feature pixel point of the target subject in the two images after the two images are overlapped and mapped onto the XY plane of the three-dimensional coordinate system; alternatively, on the XY plane, the depth information acquiring device may take a plurality of feature pixel points in the first image, select for each of them the pixel point with the same feature in the second image, calculate the vector distance between the coordinates of each pair of pixel points, and finally use the average of these vector distances as the initial distance d0 of the target subject between the first image and the second image, which is not limited in the embodiments of the present invention.
  • the specific manner in which the depth information acquiring device detects the initial distance of the target object in the first image and the second image may be:
  • Specifically, the depth information acquiring device may first overlap the first image and the second image, select a feature pixel point P1 in the target subject of the first image, assuming its coordinates are (P1x, P1y), and select the pixel point P2 having the same feature in the target subject of the second image, assuming its coordinates are (P2x, P2y).
  • The depth information acquiring device can then calculate the initial vector distance d0 of the two pixel points, assumed to be (d0x, d0y), from the coordinates of P1 and P2; d0x can be P1x − P2x or P2x − P1x.
  • For example, d0x = P1x − P2x and, correspondingly, d0y = P1y − P2y (the opposite convention d0x = P2x − P1x, d0y = P2y − P1y may equally be used, as long as the offset difference follows the same convention).
  • Alternatively, the depth information acquiring device may establish one three-dimensional coordinate system with its origin at the center point of the first image on the photosensitive element of the first camera and another three-dimensional coordinate system with its origin at the center point of the second image on the photosensitive element of the second camera, obtain the coordinates of the same feature pixel point of the target subject in the first image and the second image in the two coordinate systems, and finally calculate the vector distance of the target subject between the first image and the second image.
  • The unit distances of the two coordinate systems are the same and the directions of their XYZ axes are the same; only the coordinate origins differ.
  • In other words, the depth information acquiring device may establish one three-dimensional coordinate system for each of the two cameras, or establish only one three-dimensional coordinate system, which is not limited in the embodiments of the present invention.
  • For ease of description, a single three-dimensional coordinate system is used herein, and details are not described again in the embodiments of the present invention.
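  • The computation of the initial distance described above can be sketched as follows: matched feature pixel coordinates from the two overlapped images are subtracted, and when several feature pairs are available their vector distances are averaged. How the matched pairs are found is outside the scope of this passage, so they are simply passed in; all names are illustrative.

      def initial_distance(points_in_first, points_in_second):
          """Compute d0 as the average of (P1 - P2) over matched pairs on the XY plane.

          points_in_first / points_in_second: lists of (x, y) coordinates of the same
          feature pixel points of the target subject in the first and second image,
          after the two images have been overlapped on the XY plane.
          """
          if len(points_in_first) != len(points_in_second) or not points_in_first:
              raise ValueError("need the same, non-zero number of matched points")
          sum_x = sum_y = 0.0
          for (p1x, p1y), (p2x, p2y) in zip(points_in_first, points_in_second):
              # One consistent convention: first image minus second image.
              sum_x += p1x - p2x
              sum_y += p1y - p2y
          n = len(points_in_first)
          return (sum_x / n, sum_y / n)   # d0 = (d0x, d0y)

      if __name__ == "__main__":
          # Single feature pair, as in the P1/P2 example above.
          print(initial_distance([(120.0, 45.0)], [(95.5, 44.0)]))   # (24.5, 1.0)
          # Several feature pairs are averaged.
          print(initial_distance([(120.0, 45.0), (130.0, 50.0)],
                                 [(95.5, 44.0), (105.5, 49.5)]))     # (24.5, 0.75)
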
  • the depth information acquiring device determines an offset difference between the first image and the second image.
  • the offset difference between the first image and the second image may be understood as the difference between the offset of the first image and the offset of the second image.
  • Specifically, the depth information acquiring device may determine the coordinate positions of the first image and the second image on the XY plane of the three-dimensional coordinate system, and acquire the pre-recorded coordinate position of a third image captured by the first camera and the coordinate position of a fourth image captured by the second camera when the cameras are not shaking. It then calculates the coordinate offset of the coordinate position of the first image relative to the coordinate position of the third image, denoted d1, and the coordinate offset of the coordinate position of the second image relative to the coordinate position of the fourth image, denoted d2.
  • The difference between d1 and d2 is the offset difference between the first image and the second image.
  • d1, d2, and the offset difference are all vectors.
  • The value of the offset d1 is generally the vector offset of the first image relative to the image acquired when the first lens is at the first starting position, and the value of the offset d2 is generally the vector offset of the second image relative to the image acquired when the second lens is at the second starting position.
  • For example, the depth information acquiring device records in advance the coordinates, on the XY plane of the three-dimensional coordinate system, of the image acquired when the first lens is at one focal length from the photosensitive element (i.e., at the first starting position); specifically, the coordinate position of each pixel point in that image may be recorded. Then, when the first camera performs OIS, the first lens moves in the XY plane, so the first image acquired by the first camera has some offset on the XY plane relative to the image acquired at the first starting position.
  • Specifically, the depth information acquiring device selects a first pixel point from the first image and the pixel point at the same position in the pre-recorded image acquired with the first lens at the first starting position, assuming the coordinates of the latter are (q'1x, q'1y). By comparing the coordinates of the two pixel points in the two images, the depth information acquiring device can obtain the offset d1 of the first image; the offset d2 of the second image is obtained in the same way.
  • If the initial distance d0 is the coordinates of the feature pixel point of the target subject in the first image minus those in the second image, the offset difference is d1 − d2; conversely, if the initial distance d0 is the coordinates of the feature pixel point of the target subject in the second image minus those in the first image, the offset difference is d2 − d1. This is not limited in the embodiments of the present invention.
  • In other words, if the initial distance is the vector distance of the target subject in the first image relative to the target subject in the second image, the offset difference is the difference of the offset of the first image relative to the offset of the second image; if the initial distance is the vector distance of the target subject in the second image relative to the target subject in the first image, the offset difference is the difference of the offset of the second image relative to the offset of the first image. This is not limited in the embodiments of the present invention.
  • the depth information acquiring device corrects the initial distance by using the offset difference.
  • Specifically, the depth information acquiring device may use the offset difference to correct d0 and obtain the corrected distance d0', that is, (d0x', d0y').
  • the depth information acquiring device determines the depth of the target subject according to the corrected initial distance.
  • When a camera performs optical image stabilization, its lens is offset in the XY plane, and the distance of the same subject between the captured images also changes accordingly.
  • When optical image stabilization is not performed, the same subject also has a certain distance between the images respectively captured by the first camera and the second camera, but the lenses are not offset in the XY plane at that time, so the depth of each subject calculated with the depth calculation formula is accurate.
  • Because the focal lengths of the first lens and the second lens are slightly different, when shaking occurs and either of the first camera and the second camera performs optical image stabilization, there is also a difference between the offsets of the first lens and the second lens relative to their corresponding starting positions in the XY plane.
  • At this time the actual distance between the first lens and the second lens is D0' (FIG. 3). During optical image stabilization, the difference between the respective offsets of the first lens and the second lens in the XY plane is small, and the depth information acquiring device generally cannot directly acquire the actual distance between the two lenses. Therefore, in the prior art, when calculating the depth of a subject, the vector distance between the first starting position and the second starting position is generally used directly as the actual vector distance between the first lens and the second lens, so the resulting depth information is not accurate, and the error may even be large.
  • However, the depth information acquiring device can directly obtain the vector distance D0 between the first starting position and the second starting position, which can be expressed as (D0x, D0y). Therefore, the depth information acquiring device can calculate the depth of the target subject using the corrected distance d0' and D0, so that the finally acquired depth information is relatively accurate.
  • D0 can be the coordinates of the first starting position on the XY plane minus the coordinates of the second starting position on the XY plane, or the coordinates of the second starting position minus the coordinates of the first starting position.
  • In specific implementation, the depth information acquiring device may calculate the depth of the target subject according to the depth calculation formula.
  • The depth information acquiring device may calculate the actual depth of the target subject according to the parameters of the first camera, or according to the parameters of the second camera, which is not limited in the embodiments of the present invention.
  • The image acquisition device maintains (f1 + af_offset1) equal to (f2 + af_offset2), where f1 is the focal length of the first lens and f2 is the focal length of the second lens. After the depth information acquiring device corrects d0, the finally obtained depth of the target subject, denoted Depth, may be Depth = (f1 + af_offset1) · |D0| / |d0'|, and may also be Depth = (f2 + af_offset2) · |D0| / |d0'|.
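  • A minimal numeric sketch of the correction and depth steps follows. The correction itself is expressed in the original disclosure through equation images that are not reproduced here; the sketch therefore uses the natural reading d0' = d0 − (d1 − d2) together with the triangulation form of the depth formula given above, so treat both as a plausible reading rather than the verbatim disclosure. All numeric values and names are illustrative.

      import math

      def correct_initial_distance(d0, d1, d2):
          """Remove the OIS-induced offset difference (d1 - d2) from d0.

          d0, d1, d2 are 2-D vectors (x, y) on the XY plane: d0 is the initial
          distance of the target subject between the two images, d1 and d2 are
          the offsets of the first and second image. Sign conventions must match
          (see the discussion of d1 - d2 versus d2 - d1 above).
          """
          return (d0[0] - (d1[0] - d2[0]), d0[1] - (d1[1] - d2[1]))

      def depth_from_disparity(d0_corr, D0, f, af_offset):
          """Depth = (f + af_offset) * |D0| / |d0'| (assumed triangulation form)."""
          disparity = math.hypot(*d0_corr)
          baseline = math.hypot(*D0)
          if disparity == 0:
              return float("inf")   # subject effectively at infinity
          return (f + af_offset) * baseline / disparity

      if __name__ == "__main__":
          d0 = (0.262, 0.004)        # measured initial distance (mm on the sensor plane)
          d1, d2 = (0.015, 0.002), (-0.005, 0.001)   # image offsets caused by OIS
          D0 = (10.0, 0.0)           # baseline between the two starting positions (mm)
          f, af_offset = 4.0, 0.10   # focal length and Z movement of the first lens (mm)
          d0_corr = correct_initial_distance(d0, d1, d2)   # -> (0.242, 0.003)
          print(round(depth_from_disparity(d0_corr, D0, f, af_offset), 1))  # ~169.4 mm
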
  • The depth calculation formula may be used to calculate the depth of the target subject based on the first image or based on the second image.
  • When the first camera and the second camera both have the OIS function, the first lens and the second lens are both offset in the XY plane, so neither the offset d1 of the first image nor the offset d2 of the second image is zero. When only the first camera has the OIS function, the offset d1 is not zero, while the offset d2 of the second image acquired by the camera without the OIS function (the second camera) is zero. That is to say, the solution is applicable not only to an image acquisition device in which both cameras have the OIS function, but also to an image acquisition device in which only one camera has the OIS function, which is not limited in the embodiments of the present invention.
  • In addition, the solution can be applied not only to an image acquisition device that includes two cameras, at least one of which has the OIS function, but also to an image acquisition device that includes three or more cameras, at least one of which has the OIS function.
  • Taking an image capturing device with three cameras as an example, the depth information acquiring device can pair the three cameras two by two, use each pair of cameras to acquire a depth of the target subject so that three depths are obtained, and finally use the average of the three depths as the actual depth of the target subject, which is not described again in the embodiments of the present invention.
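  • The pairwise combination and averaging for the three-or-more-camera case can be sketched as follows; `depth_from_pair` stands for the two-camera procedure described in FIG. 2 and is passed in rather than re-implemented here. The data layout is illustrative.

      from itertools import combinations

      def depth_multi_camera(cameras, depth_from_pair):
          """Pair the cameras two by two, estimate a depth for each pair with the
          two-camera procedure, and average the results.

          cameras: list of per-camera data (images, lens offsets, ...), one entry per camera.
          depth_from_pair: callable taking two camera entries and returning a depth.
          """
          depths = [depth_from_pair(a, b) for a, b in combinations(cameras, 2)]
          return sum(depths) / len(depths)

      if __name__ == "__main__":
          # With three cameras there are three pairs; a dummy pair function is used here.
          cams = ["cam1", "cam2", "cam3"]
          fake_depth = {("cam1", "cam2"): 170.0, ("cam1", "cam3"): 168.0, ("cam2", "cam3"): 172.0}
          print(depth_multi_camera(cams, lambda a, b: fake_depth[(a, b)]))  # 170.0
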
  • the depth information acquiring device when detecting the camera shake, can acquire images respectively collected by the first camera and the second camera, and detect an initial distance of the target object in the two images, and then The initial distance is corrected using the offset difference between the first image and the second image, and finally the depth calculation formula is substituted according to the corrected initial distance, thereby determining the depth of the target subject.
  • the depth information acquiring device may correct the distance of the same subject in the images acquired by the two cameras when the two cameras can both have the OIS function or one of the cameras has the OIS function. Get the shot based on the corrected distance The depth information of the object is more precise, so that the focus of the subject can be accurately and quickly achieved.
  • FIG. 4 is a schematic flowchart diagram of another method for acquiring depth information according to an embodiment of the present invention.
  • the method described in FIG. 4 can be applied to an image acquisition device, which includes a first camera and a second camera.
  • the depth information acquisition method may include the following steps:
  • the depth information acquiring device acquires a first image of the target subject acquired by the first camera and a second image of the target subject simultaneously acquired by the second camera when detecting the shaking of the first camera and the second camera.
  • the depth information acquiring device detects an initial distance of the target subject in the first image and the second image.
  • the depth information acquiring device determines an offset difference between the first image and the second image.
  • the specific manner in which the depth information acquiring device determines the offset difference between the first image and the second image may include the following steps:
  • the depth information acquiring device acquires the first offset of the first lens and the second offset of the second lens
  • The depth information acquiring device determines the offset of the first image according to the first offset and the offset of the second image according to the second offset, to obtain the offset difference between the offset of the first image and the offset of the second image.
  • The first offset of the first lens can be understood as the vector offset, on the XY plane, of the current position of the first lens relative to the first starting position, denoted L1; the second offset of the second lens can be understood as the vector offset, on the XY plane, of the current position of the second lens relative to the second starting position, denoted L2.
  • FIG. 5 is a schematic diagram of a scene in which the camera moves when the OIS is performed according to the embodiment of the present invention.
  • In FIG. 5, the position of the lens indicated by the broken line is the starting position of the lens, and the position indicated by the solid line is the position of the lens when the image of the target subject is captured while OIS is being performed, i.e., the current position of the lens.
  • In the three-dimensional coordinate system established in FIG. 3, the coordinates of the current position of the lens on the XY plane minus the coordinates of the starting position on the XY plane give the offset L of the lens relative to the starting position, which is a vector; the Z-axis coordinate of the current position of the lens minus the Z-axis coordinate of the starting position gives the distance af_offset by which the lens has moved from the starting position.
  • Assuming that the starting position shown in FIG. 5 is the first starting position of the first lens, with coordinates (L1x, L1y, L1z), and that the current position of the first lens has coordinates (L'1x, L'1y, L'1z), then the current position of the first lens has moved the distance af_offset1 = L'1z − L1z relative to the first starting position along the Z axis, and the first offset L1 of the current position of the first lens relative to the first starting position on the XY plane is (L'1x − L1x, L'1y − L1y), which is a vector.
  • Similarly, the moving distance af_offset2 of the second lens is obtained, and the second offset L2 of the second lens is (L'2x − L2x, L'2y − L2y).
  • It should be noted that the moving distance and the offset of a lens mentioned herein refer to distances between positions of the optical center of the lens (a convex lens), which will not be described in detail in the embodiments of the present invention.
  • In specific implementation, the depth information acquiring device mainly uses a Hall sensor or a laser to record the offsets of the first lens and the second lens on the XY plane: it records how many scale units the lens has been offset by and the direction of the offset, and obtains the offset of the lens from the distance corresponding to each scale unit and the direction of the lens offset.
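  • The conversion from what the Hall sensor records (a number of scale units and a direction per axis) to the lens offset vector can be sketched as below. The distance represented by one scale unit is assumed to come from the Hall-scale calibration described next; the exact data format of a real Hall sensor readout is an assumption made for illustration.

      def lens_offset_from_hall(scales_x, scales_y, mm_per_scale):
          """Convert a Hall-sensor readout into the lens offset vector L on the XY plane.

          scales_x, scales_y: signed number of Hall scale units the lens has been
              shifted along X and Y (the sign encodes the direction of the offset).
          mm_per_scale: distance corresponding to one scale unit, obtained from the
              Hall-scale calibration.
          """
          return (scales_x * mm_per_scale, scales_y * mm_per_scale)

      if __name__ == "__main__":
          # First lens shifted by +6 scale units in X and -2 in Y, 2.5 um per scale unit.
          L1 = lens_offset_from_hall(6, -2, 0.0025)   # -> (0.015, -0.005) in mm
          L2 = lens_offset_from_hall(-2, 1, 0.0025)   # -> (-0.005, 0.0025) in mm
          print(L1, L2)
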
  • Optionally, the depth information acquiring device may also calibrate the Hall scale before acquiring the offsets of the first lens and the second lens.
  • A specific way can be as follows:
  • the depth information acquiring device photographs a calibration target whose shooting depth from the camera lens is known (denoted as S) and controls the lens to move along its main optical axis; each time the lens is moved by one Hall scale unit, the target is photographed, so that the width of the target on the photosensitive element (denoted as d) at each scale position can be obtained.
  • The depth information acquiring device may then calculate the offset d1 of the first image and the offset d2 of the second image separately by using the relationship between the offset of a lens and the offset of the corresponding image.
  • That is, the offsets of the first lens and the second lens are first acquired, the offsets of the first image and the second image are calculated from them respectively, and the offset difference between the two images is then obtained.
  • In this way, the correction of the distance of the target subject between the two images is more accurate, and the accuracy of the depth information is improved.
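  • The formula that maps a lens offset to the corresponding image offset is given in the original disclosure as an equation image and is not reproduced here; the sketch below therefore keeps the mapping as a pluggable function and, for the example, uses a simple proportional model d ≈ k·L with a calibrated gain k, which is only an assumption.

      def image_offset(lens_offset, lens_to_image_gain=1.0):
          """Map a lens offset L = (Lx, Ly) to the image offset d = (dx, dy).

          The disclosure relates the two through a specific formula (an equation
          image not reproduced here); a proportional model d = k * L is used as a
          stand-in, with k determined by calibration.
          """
          k = lens_to_image_gain
          return (k * lens_offset[0], k * lens_offset[1])

      def offset_difference(L1, L2, gain1=1.0, gain2=1.0):
          """Compute d1 - d2 from the first and second lens offsets."""
          d1 = image_offset(L1, gain1)
          d2 = image_offset(L2, gain2)
          return (d1[0] - d2[0], d1[1] - d2[1])

      if __name__ == "__main__":
          L1 = (0.015, -0.005)    # first lens offset on the XY plane (mm)
          L2 = (-0.005, 0.0025)   # second lens offset on the XY plane (mm)
          print(offset_difference(L1, L2))   # (0.02, -0.0075)
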
  • the depth information acquiring device corrects the initial distance by using the offset difference.
  • the depth information acquiring device determines the depth of the target subject according to the corrected initial distance.
  • the depth information acquiring device may further calibrate the focal lengths of the first lens and the second lens.
  • the specific way can be:
  • FIG. 6 is a schematic diagram of a focal length calibration method for a lens disclosed in an embodiment of the present invention.
  • The depth information acquiring device photographs a calibration target whose shooting depth from the lens is known (denoted as S) and whose width is known (denoted as D), and controls the lens to move along its main optical axis.
  • The target is photographed when the lens has moved to the position where the image contrast is highest, so that the width of the target on the photosensitive element (denoted as d) can be obtained.
  • From S, D, and d, the focal length f of the lens can then be calculated.
  • The depth information acquiring device can calibrate the focal length of each camera in this way.
  • In specific implementation, the focal length difference between the two cameras can also be controlled: the difference between the focal lengths of the two cameras calibrated in the above manner should be no greater than a preset focal length threshold, which may for example be 0.01. If the difference is greater than the preset focal length threshold, the focal length calibration has failed, and the camera or the lens is replaced on the production line. If the difference between the focal lengths of the two cameras determined in the above manner is smaller than the preset focal length threshold, the focal length calibration is successful, and the depth information acquiring device can calculate the depth of the target subject in the image using the calibrated focal lengths.
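  • The disclosure's own expression for f is an equation image that is not reproduced here. The sketch below shows one plausible reconstruction under the thin-lens assumption: by similar triangles the image distance at best focus is s = S·d/D, and 1/f = 1/S + 1/s then gives f = S·d/(D + d). Treat this derivation, the threshold check, and all numbers as illustrative assumptions.

      def calibrate_focal_length(S, D, d):
          """Estimate the focal length f from the calibration shot described above.

          S: known shooting depth of the calibration target (object distance).
          D: known physical width of the target.
          d: measured width of the target on the photosensitive element at best focus.

          Assumed derivation: by similar triangles the image distance is s = S * d / D,
          and the thin-lens equation 1/f = 1/S + 1/s then gives f = S * d / (D + d).
          """
          return S * d / (D + d)

      def focal_calibration_ok(f1, f2, threshold=0.01):
          """Production-line check: the two calibrated focal lengths must not differ
          by more than the preset focal length threshold."""
          return abs(f1 - f2) <= threshold

      if __name__ == "__main__":
          f1 = calibrate_focal_length(S=500.0, D=200.0, d=1.620)   # ~4.017 mm
          f2 = calibrate_focal_length(S=500.0, D=200.0, d=1.618)   # ~4.013 mm
          print(round(f1, 3), round(f2, 3), focal_calibration_ok(f1, f2))  # True -> calibration OK
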
  • the depth information acquiring device receives a focus command for the target subject.
  • In specific implementation, the depth information acquiring device may acquire the depth of every subject in the images in the same manner as the depth of the target subject.
  • For example, when the user taps the target subject on the image preview interface of a mobile phone, a focus instruction for that subject is received. That is to say, the focus instruction received by the depth information acquiring device may be triggered by the user or may be generated through image analysis, which is not limited in the embodiments of the present invention.
  • For example, when a mobile phone starts portrait-mode photographing, after the depth of each subject in the current scene is acquired, the phone can automatically recognize the target subject as a person and then generate a focus instruction for the person in the current scene; if the user instead wants to focus on a certain plant in the background, the user can tap the plant on the image preview interface, and the depth information acquiring device receives a focus instruction for the plant.
  • the depth information acquiring device acquires a first moving distance of the first lens corresponding to the depth, and acquires a second moving distance of the second lens corresponding to the depth.
  • In specific implementation, according to the acquired depth Depth of the target subject, the depth information acquiring device may calculate the first moving distance af_offset1' of the first lens relative to the first starting position along the Z axis, and similarly obtain the second moving distance af_offset2' of the second lens relative to the second starting position along the Z axis.
  • The depth Depth of the target subject corresponds to S in FIG. 6, and f1 + af_offset1' corresponds to s in FIG. 6; therefore the first moving distance af_offset1' can be calculated from Depth and f1, and the second moving distance af_offset2' can be calculated in the same way.
  • The depth information acquiring device determines the in-focus position of the first lens according to the first moving distance, determines the in-focus position of the second lens according to the second moving distance, and controls the first lens and the second lens to respectively move to their corresponding in-focus positions.
  • In specific implementation, after determining the first moving distance af_offset1' of the first lens and the second moving distance af_offset2' of the second lens, the depth information acquiring device determines, according to af_offset1', the in-focus position at which the first lens focuses on the target subject and, according to af_offset2', the in-focus position at which the second lens focuses on the target subject, and then controls the first lens and the second lens to move to their respective in-focus positions, thereby achieving focusing on the target subject.
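  • One way to realize this step numerically is sketched below, assuming the thin-lens relation 1/f = 1/Depth + 1/(f + af_offset') for the correspondence Depth ↔ S and f + af_offset' ↔ s noted above. The disclosure's own expression for af_offset' is an equation image that is not reproduced here, so the formula is an assumption; driving the focus motor is left as a hardware call.

      def moving_distance_for_depth(depth, focal_length):
          """Distance af_offset' the lens must move from its starting position along Z
          so that a subject at the given depth is in focus.

          Assumes the thin-lens relation 1/f = 1/Depth + 1/(f + af_offset'), i.e.
          af_offset' = f**2 / (Depth - f). The starting position is one focal length
          from the photosensitive element.
          """
          if depth <= focal_length:
              raise ValueError("subject must be farther away than one focal length")
          return focal_length ** 2 / (depth - focal_length)

      def in_focus_positions(depth, f1, f2, start_z1=0.0, start_z2=0.0):
          """Return the Z coordinates of the in-focus positions of the two lenses,
          measured from each lens's own starting position (start_z1/start_z2)."""
          return (start_z1 + moving_distance_for_depth(depth, f1),
                  start_z2 + moving_distance_for_depth(depth, f2))

      if __name__ == "__main__":
          depth = 169.4          # depth of the target subject (mm)
          f1, f2 = 4.017, 4.013  # calibrated focal lengths (mm)
          z1, z2 = in_focus_positions(depth, f1, f2)
          print(round(z1, 4), round(z2, 4))   # ~0.0976 and ~0.0974: drive each focus
                                              # motor so its lens moves this far along Z
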
  • Optionally, the depth information acquiring device may further blur the other subjects in the image according to the difference between their depths and the depth of the target subject. Specifically, the depth information acquiring device blurs the subjects other than the target subject (i.e., the focus point) using a blurring algorithm: the farther a subject is from the target subject, the more strongly it is blurred, and the nearer a subject is, the less it is blurred.
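  • The depth-dependent blurring described above can be sketched as follows; the linear mapping from depth difference to blur radius and the clamp are illustrative choices, since the passage only states that subjects farther from the focused depth are blurred more strongly.

      def blur_radius(subject_depth, focus_depth, radius_per_mm=0.05, max_radius=15.0):
          """Blur strength for one subject: proportional to how far its depth is from
          the depth of the focused target subject, clamped to a maximum radius.
          The proportionality constant and the clamp are illustrative."""
          return min(max_radius, radius_per_mm * abs(subject_depth - focus_depth))

      if __name__ == "__main__":
          focus_depth = 169.4   # depth of the focused target subject (mm)
          for name, depth in [("target", 169.4), ("plant", 420.0), ("wall", 2500.0)]:
              # Farther from the focused depth -> larger blur radius (stronger blurring).
              print(name, round(blur_radius(depth, focus_depth), 2))
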
  • By the method described in FIG. 4, the depth information acquiring device can detect the first offset of the first lens relative to the first starting position on the XY plane and the second offset of the second lens relative to the second starting position on the XY plane, and determine the offset of the first image and the offset of the second image from them, so that the initial distance between the target subject in the first image and the target subject in the second image can be corrected more accurately and the finally calculated depth of the target subject is more accurate.
  • Focusing on the target subject with the depth obtained in this embodiment of the present invention improves the accuracy and speed of focusing, thereby improving the imaging quality of the image capturing device.
  • FIG. 7 is a schematic structural diagram of a depth information acquiring apparatus according to an embodiment of the present invention.
  • the depth information acquiring device 700 described in FIG. 7 can be applied to an image capturing device, which includes a first camera and a second camera.
  • the depth information acquiring device 700 can include the following units:
  • the acquiring unit 701 is configured to acquire, when the first camera and the second camera are shaken, a first image of the target subject acquired by the first camera and a second image of the target subject simultaneously acquired by the second camera.
  • the detecting unit 702 is configured to detect an initial distance of the target subject in the first image and the second image.
  • the first determining unit 703 is configured to determine an offset difference between the first image and the second image.
  • the correcting unit 704 is configured to correct the initial distance by using the offset difference.
  • the second determining unit 705 is configured to determine the depth of the target subject according to the corrected initial distance.
  • The initial distance may be the vector distance of the target subject in the first image relative to the target subject in the second image, in which case the offset difference is the difference of the offset of the first image relative to the offset of the second image; the initial distance may also be the vector distance of the target subject in the second image relative to the target subject in the first image, in which case the offset difference is the difference of the offset of the second image relative to the offset of the first image, which is not limited in the embodiments of the present invention.
  • In this way, the depth information acquiring device 700 corrects the distance of the same subject between the two images respectively captured by the two cameras, so that the image acquisition device finally obtains the depth of the subject more accurately, thereby improving the accuracy of camera focusing.
  • FIG. 8 is a schematic structural diagram of another depth information acquiring apparatus according to an embodiment of the present invention.
  • the depth information acquiring apparatus 700 described in FIG. 8 is optimized based on the depth information acquiring apparatus 700 shown in FIG. 7.
  • the depth information acquiring apparatus 700 may further include the following units:
  • the receiving unit 706 is configured to receive a focus instruction for the target subject.
  • the acquiring unit 701 is further configured to: obtain a first moving distance of the first lens corresponding to the depth, and acquire a second moving distance of the second lens corresponding to the depth, in response to the focusing instruction.
  • the second determining unit 705 is further configured to determine an in-focus position of the first lens according to the first moving distance, and determine an in-focus position of the second lens according to the second moving distance.
  • The control unit 707 is configured to control the first lens and the second lens to respectively move to their corresponding in-focus positions.
  • In this way, the depth information acquiring device 700 can acquire the distance each lens needs to move for the acquired depth, determine the in-focus position of each lens according to the corresponding moving distance, and then move each lens to its in-focus position, so that focusing on the target subject can be achieved accurately and quickly.
  • the first determining unit 703 may include an obtaining subunit 7031 and a determining subunit 7032, where:
  • the obtaining sub-unit 7031 is configured to acquire a first offset of the first lens and a second offset of the second lens when the depth information acquiring device 700 detects the first camera and the second camera shake.
  • The determining subunit 7032 is configured to determine the offset of the first image according to the first offset and the offset of the second image according to the second offset, to obtain the offset difference between the offset of the first image and the offset of the second image.
  • It can be seen that the depth information acquiring device can acquire the images respectively captured by the first camera and the second camera when camera shake is detected, detect the initial distance of the target subject between the two images, then correct the initial distance using the offset difference between the first image and the second image, and finally substitute the corrected initial distance into the depth calculation formula, thereby determining the depth of the target subject.
  • Whether both cameras have the OIS function or only one of them does, the depth information acquiring device corrects the distance of the same subject between the images acquired by the two cameras, so that the depth information of the subject obtained from the corrected distance is relatively accurate and focusing on the subject can be achieved accurately and quickly.
  • FIG. 9 is a schematic structural diagram of an image collection device according to an embodiment of the present invention.
  • the image capturing device 900 described in FIG. 9 may include: a first camera 901, a second camera 902, at least one processor 903 such as a CPU, a receiver 904, a transmitter 905, a display screen 906, and a communication bus 907, wherein :
  • the transmitter 905 is configured to send various data signals such as images to an external device.
  • the display screen 906 is configured to display images captured by the first camera 901 and the second camera 902, and the display screen may be a touch display screen.
  • The communication bus 907 is configured to implement communication connections between the first camera 901, the second camera 902, the processor 903, the receiver 904, the transmitter 905, and the display screen 906, wherein:
  • The first camera 901 is configured to capture a first image of the target subject when the image capturing device 900 detects that the first camera and the second camera shake.
  • The second camera 902 is configured to capture a second image of the target subject simultaneously with the first camera when the image capturing device 900 detects that the first camera and the second camera shake.
  • the processor 903 is configured to acquire the first image and the second image, and detect an initial distance of the target object in the first image and the second image.
  • the processor 903 is further configured to determine an offset difference between the first image and the second image, and use the offset difference to correct the initial distance.
  • the processor 903 is further configured to determine a depth of the target subject according to the corrected initial distance.
  • the initial distance may be the vector distance of the target subject in the first image relative to the target subject in the second image, in which case the offset difference is the difference between the offset of the first image and the offset of the second image; alternatively, the initial distance may be the vector distance of the target subject in the second image relative to the target subject in the first image, in which case the offset difference is the difference between the offset of the second image and the offset of the first image. This is not limited in this embodiment of the present invention.
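  • the following short sketch, under the same illustrative assumptions as above, shows that the two conventions are consistent: either choice of reference image yields a corrected distance of the same magnitude.

```python
# Illustrative only: the correction gives the same magnitude under either
# convention, provided the offset difference follows the same ordering as the
# initial distance.

def corrected(initial_distance, offset_ref, offset_other):
    # initial_distance: subject position in the reference image minus its
    # position in the other image; offsets are ordered the same way.
    return initial_distance - (offset_ref - offset_other)

# First image as reference (its OIS shifted the image by +2 px) ...
a = corrected(+40.0, 2.0, 0.0)   # -> 38.0
# ... or second image as reference: same magnitude, opposite sign.
b = corrected(-40.0, 0.0, 2.0)   # -> -38.0
assert abs(a) == abs(b)
```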
  • by correcting, in this way, the distance of the same subject in the two images respectively captured by the two cameras, the image capturing device 900 finally obtains a more accurate depth of the subject, thereby improving the accuracy of camera focusing.
  • the specific manner in which the processor 903 determines the offset difference between the first image and the second image may be: acquiring a first offset of the first lens and a second offset of the second lens when the image capturing device 900 detects that the first camera and the second camera shake; and determining an offset of the first image according to the first offset and an offset of the second image according to the second offset, to obtain the offset difference between the two offsets.
  • the receiver 904 is configured to receive a focus instruction for the target subject.
  • the processor 903 is further configured to: in response to the focus instruction, acquire a first moving distance of the first lens corresponding to the depth, and acquire a second moving distance of the second lens corresponding to the depth.
  • the processor 903 is further configured to determine an in-focus position of the first lens according to the first moving distance, and determine an in-focus position of the second lens according to the second moving distance.
  • the processor 903 is further configured to control the first lens and the second lens to move to their respective in-focus positions.
  • in this way, the image capturing device 900 can acquire, for the depth, the distance that each lens needs to move, determine the in-focus position of each lens according to the corresponding moving distance, and then move each lens to its in-focus position, so that focusing on the target subject can be achieved accurately and quickly.
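  • the focusing flow above can be sketched as follows. The passage does not specify how a moving distance is derived from the depth, so the sketch assumes a simple thin-lens model (in a real module the mapping would more likely come from a per-lens calibration table); all names and numbers are illustrative.

```python
# Hedged sketch: how far a lens might move for a given subject depth, under a
# thin-lens assumption where the in-focus image distance is v = f*Z / (Z - f).

def in_focus_image_distance(depth_mm, focal_length_mm):
    """Image distance that brings a subject at depth_mm into focus (thin lens)."""
    if depth_mm <= focal_length_mm:
        raise ValueError("subject must be farther away than the focal length")
    return focal_length_mm * depth_mm / (depth_mm - focal_length_mm)

def moving_distance(depth_mm, focal_length_mm, current_image_distance_mm):
    """Signed distance the lens needs to move to reach its in-focus position."""
    return in_focus_image_distance(depth_mm, focal_length_mm) - current_image_distance_mm

# Example: 4 mm focal length, subject depth 350 mm, lens currently at 4.0 mm.
move = moving_distance(350.0, 4.0, 4.0)          # first/second moving distance
in_focus_position = 4.0 + move                   # corresponding in-focus position
```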
  • the image capturing device can include a first camera, a second camera, a processor, and a receiver.
  • the first camera and the second camera may separately collect the first image and the second image of the target subject; the processor may acquire the two images, detect the initial distance of the target subject in them, correct the initial distance using the offset difference between the first image and the second image, and finally substitute the corrected initial distance into the depth calculation formula, thereby determining the depth of the target subject.
  • whether both cameras have the OIS function or only one of them does, the image capturing device may correct the distance of the same subject in the images acquired by the two cameras, so that the depth information of the subject finally obtained from the corrected distance is more accurate and focusing on the subject can be achieved accurately and quickly.
  • the units in the depth information acquiring apparatus of the embodiment of the present invention may be combined, divided, and deleted according to actual needs.
  • the units in the embodiments of the present invention may be implemented by a general-purpose integrated circuit such as a CPU (Central Processing Unit), or by an ASIC (Application Specific Integrated Circuit).
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)
  • Measurement Of Optical Distance (AREA)
  • Focusing (AREA)
  • Automatic Focus Adjustment (AREA)
  • Adjustment Of Camera Lenses (AREA)
  • Length Measuring Devices By Optical Means (AREA)
PCT/CN2016/070707 2016-01-12 2016-01-12 一种深度信息获取方法、装置及图像采集设备 WO2017120771A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US16/069,523 US10506164B2 (en) 2016-01-12 2016-01-12 Depth information obtaining method and apparatus, and image acquisition device
CN201680002783.4A CN107223330B (zh) 2016-01-12 2016-01-12 一种深度信息获取方法、装置及图像采集设备
PCT/CN2016/070707 WO2017120771A1 (zh) 2016-01-12 2016-01-12 一种深度信息获取方法、装置及图像采集设备
JP2018554610A JP6663040B2 (ja) 2016-01-12 2016-01-12 奥行き情報取得方法および装置、ならびに画像取得デバイス
KR1020187022702A KR102143456B1 (ko) 2016-01-12 2016-01-12 심도 정보 취득 방법 및 장치, 그리고 이미지 수집 디바이스
EP16884334.0A EP3389268B1 (en) 2016-01-12 2016-01-12 Depth information acquisition method and apparatus, and image collection device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/070707 WO2017120771A1 (zh) 2016-01-12 2016-01-12 一种深度信息获取方法、装置及图像采集设备

Publications (1)

Publication Number Publication Date
WO2017120771A1 true WO2017120771A1 (zh) 2017-07-20

Family

ID=59310688

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/070707 WO2017120771A1 (zh) 2016-01-12 2016-01-12 一种深度信息获取方法、装置及图像采集设备

Country Status (6)

Country Link
US (1) US10506164B2 (ko)
EP (1) EP3389268B1 (ko)
JP (1) JP6663040B2 (ko)
KR (1) KR102143456B1 (ko)
CN (1) CN107223330B (ko)
WO (1) WO2017120771A1 (ko)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3582487A1 (en) * 2018-06-15 2019-12-18 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image stabilisation
US11368622B2 (en) * 2017-08-29 2022-06-21 Zte Corporation Photographing method, photographing device and mobile terminal for shifting a lens

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170358101A1 (en) * 2016-06-10 2017-12-14 Apple Inc. Optical Image Stabilization for Depth Sensing
CN108154466B (zh) * 2017-12-19 2021-12-07 北京小米移动软件有限公司 图像处理方法及装置
US10313654B1 (en) * 2018-03-19 2019-06-04 Htc Corporation Image processing method, electronic device, and non-transitory computer readable storage medium
CN108769545A (zh) * 2018-06-12 2018-11-06 Oppo(重庆)智能科技有限公司 一种图像处理方法、图像处理装置及移动终端
CN108769528B (zh) * 2018-06-15 2020-01-10 Oppo广东移动通信有限公司 图像补偿方法和装置、计算机可读存储介质和电子设备
CN108769529B (zh) * 2018-06-15 2021-01-15 Oppo广东移动通信有限公司 一种图像校正方法、电子设备及计算机可读存储介质
CN108737735B (zh) * 2018-06-15 2019-09-17 Oppo广东移动通信有限公司 图像校正方法、电子设备及计算机可读存储介质
CN108737734B (zh) * 2018-06-15 2020-12-01 Oppo广东移动通信有限公司 图像补偿方法和装置、计算机可读存储介质和电子设备
CN109194945A (zh) * 2018-08-02 2019-01-11 维沃移动通信有限公司 一种图像处理方法及终端
CN110830707B (zh) * 2018-08-10 2022-01-14 华为技术有限公司 镜头控制方法、装置及终端
CN109714536B (zh) * 2019-01-23 2021-02-23 Oppo广东移动通信有限公司 图像校正方法、装置、电子设备及计算机可读存储介质
US11122248B1 (en) * 2020-07-20 2021-09-14 Black Sesame International Holding Limited Stereo vision with weakly aligned heterogeneous cameras
WO2022040940A1 (zh) * 2020-08-25 2022-03-03 深圳市大疆创新科技有限公司 标定方法、装置、可移动平台及存储介质
CN112489116A (zh) * 2020-12-07 2021-03-12 青岛科美创视智能科技有限公司 一种使用单相机估算目标距离的方法及系统
CN112950691A (zh) * 2021-02-10 2021-06-11 Oppo广东移动通信有限公司 测量深度信息的控制方法、装置、电子设备及存储介质
CN114147727B (zh) * 2022-02-07 2022-05-20 杭州灵西机器人智能科技有限公司 一种机器人位姿校正的方法、装置和系统
CN114554086A (zh) * 2022-02-10 2022-05-27 支付宝(杭州)信息技术有限公司 一种辅助拍摄方法、装置及电子设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110261166A1 (en) * 2010-04-21 2011-10-27 Eduardo Olazaran Real vision 3D, video and photo graphic system
CN103246130A (zh) * 2013-04-16 2013-08-14 广东欧珀移动通信有限公司 一种对焦方法及装置
CN104093014A (zh) * 2014-07-21 2014-10-08 宇龙计算机通信科技(深圳)有限公司 图像处理方法和图像处理装置
CN104811688A (zh) * 2014-01-28 2015-07-29 聚晶半导体股份有限公司 图像获取装置及其图像形变检测方法

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7561789B2 (en) * 2006-06-29 2009-07-14 Eastman Kodak Company Autofocusing still and video images
JP4843750B2 (ja) 2010-03-19 2011-12-21 富士フイルム株式会社 撮像装置、方法およびプログラム
US8274552B2 (en) 2010-12-27 2012-09-25 3Dmedia Corporation Primary and auxiliary image capture devices for image processing and related methods
US9041791B2 (en) * 2011-02-01 2015-05-26 Roche Diagnostics Hematology, Inc. Fast auto-focus in imaging
JP5768684B2 (ja) 2011-11-29 2015-08-26 富士通株式会社 ステレオ画像生成装置、ステレオ画像生成方法及びステレオ画像生成用コンピュータプログラム
JP5948856B2 (ja) * 2011-12-21 2016-07-06 ソニー株式会社 撮像装置とオートフォーカス方法並びにプログラム
TWI551113B (zh) * 2011-12-27 2016-09-21 鴻海精密工業股份有限公司 3d成像模組及3d成像方法
US20130258044A1 (en) 2012-03-30 2013-10-03 Zetta Research And Development Llc - Forc Series Multi-lens camera
US9210417B2 (en) * 2013-07-17 2015-12-08 Microsoft Technology Licensing, Llc Real-time registration of a stereo depth camera array
US9524580B2 (en) * 2014-01-06 2016-12-20 Oculus Vr, Llc Calibration of virtual reality systems
US9247117B2 (en) 2014-04-07 2016-01-26 Pelican Imaging Corporation Systems and methods for correcting for warpage of a sensor array in an array camera module by introducing warpage into a focal plane of a lens stack array
CN104469086B (zh) * 2014-12-19 2017-06-20 北京奇艺世纪科技有限公司 一种视频去抖动方法及装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110261166A1 (en) * 2010-04-21 2011-10-27 Eduardo Olazaran Real vision 3D, video and photo graphic system
CN103246130A (zh) * 2013-04-16 2013-08-14 广东欧珀移动通信有限公司 一种对焦方法及装置
CN104811688A (zh) * 2014-01-28 2015-07-29 聚晶半导体股份有限公司 图像获取装置及其图像形变检测方法
CN104093014A (zh) * 2014-07-21 2014-10-08 宇龙计算机通信科技(深圳)有限公司 图像处理方法和图像处理装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3389268A4 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11368622B2 (en) * 2017-08-29 2022-06-21 Zte Corporation Photographing method, photographing device and mobile terminal for shifting a lens
EP3582487A1 (en) * 2018-06-15 2019-12-18 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image stabilisation
US10567659B2 (en) 2018-06-15 2020-02-18 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image compensation method, electronic device and computer-readable storage medium
JP2021518087A (ja) * 2018-06-15 2021-07-29 オッポ広東移動通信有限公司Guangdong Oppo Mobile Telecommunications Corp., Ltd. 画像補正方法、画像補正装置、および電子機器
JP7127149B2 (ja) 2018-06-15 2022-08-29 オッポ広東移動通信有限公司 画像補正方法、画像補正装置、および電子機器
JP7127149B6 (ja) 2018-06-15 2022-10-03 オッポ広東移動通信有限公司 画像補正方法、画像補正装置、および電子機器

Also Published As

Publication number Publication date
KR20180101466A (ko) 2018-09-12
JP6663040B2 (ja) 2020-03-11
US10506164B2 (en) 2019-12-10
US20190028646A1 (en) 2019-01-24
JP2019510234A (ja) 2019-04-11
EP3389268B1 (en) 2021-05-12
CN107223330A (zh) 2017-09-29
KR102143456B1 (ko) 2020-08-12
CN107223330B (zh) 2020-06-26
EP3389268A1 (en) 2018-10-17
EP3389268A4 (en) 2018-12-19

Similar Documents

Publication Publication Date Title
WO2017120771A1 (zh) 一种深度信息获取方法、装置及图像采集设备
CN111147741B (zh) 基于对焦处理的防抖方法和装置、电子设备、存储介质
US9998650B2 (en) Image processing apparatus and image pickup apparatus for adding blur in an image according to depth map
US9762803B2 (en) Focal point adjustment device, camera system, and focal point adjustment method for imaging device
WO2020088133A1 (zh) 图像处理方法、装置、电子设备和计算机可读存储介质
US10306165B2 (en) Image generating method and dual-lens device
JP4858263B2 (ja) 3次元計測装置
US9619886B2 (en) Image processing apparatus, imaging apparatus, image processing method and program
CN106998413A (zh) 图像处理设备、摄像设备和图像处理方法
JP2020095069A (ja) 撮像装置
CN113875219A (zh) 图像处理方法和装置、电子设备、计算机可读存储介质
CN108260360B (zh) 场景深度计算方法、装置及终端
US20220174217A1 (en) Image processing method and device, electronic device, and computer-readable storage medium
JP6645711B2 (ja) 画像処理装置、画像処理方法、プログラム
TW201642008A (zh) 影像擷取裝置及其動態對焦方法
JP2020095071A (ja) 交換レンズおよび撮像装置
JP2020095070A (ja) 撮像装置
JP7292145B2 (ja) 回転半径演算装置および回転半径演算方法
JP2017130890A (ja) 画像処理装置およびその制御方法ならびにプログラム
JP2018194694A (ja) 制御装置および撮像装置
WO2019041905A1 (zh) 一种拍照方法、拍照装置及移动终端
JP2017076051A (ja) 画像処理装置、撮像装置、画像処理方法、プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16884334

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018554610

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2016884334

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2016884334

Country of ref document: EP

Effective date: 20180713

ENP Entry into the national phase

Ref document number: 20187022702

Country of ref document: KR

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 1020187022702

Country of ref document: KR