WO2020124517A1 - Control method for shooting device, control device for shooting device, and shooting device - Google Patents

Control method for shooting device, control device for shooting device, and shooting device Download PDF

Info

Publication number
WO2020124517A1
WO2020124517A1 · PCT/CN2018/122523 · CN2018122523W
Authority
WO
WIPO (PCT)
Prior art keywords
image
shooting device
information
target object
determining
Prior art date
Application number
PCT/CN2018/122523
Other languages
English (en)
French (fr)
Inventor
胡攀
郑洪涌
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to PCT/CN2018/122523 priority Critical patent/WO2020124517A1/zh
Priority to CN201880065930.1A priority patent/CN111213364A/zh
Publication of WO2020124517A1 publication Critical patent/WO2020124517A1/zh

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/958Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging
    • H04N23/959Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/02Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules

Definitions

  • the present application relates to the field of shooting technology, and in particular, to a control method of a shooting device, a control device of the shooting device, and a shooting device.
  • Embodiments of the present application provide a control method of a shooting device, a control device of the shooting device, and a shooting device.
  • a first acquisition module, configured to acquire a first image of a target object when the shooting device is located at a first position;
  • a second acquisition module, configured to acquire a second image of the target object when the shooting device is located at a second position;
  • a determining module configured to determine depth information of the target object according to the first image and the second image;
  • a focusing module configured to control the shooting device to focus on the target object at the second position according to the depth information.
  • the shooting device includes a processor and a memory, and the memory stores one or more programs.
  • the processor is used to execute the one or more programs to implement the control method of the shooting device according to the above embodiment.
  • the control method of the shooting device, the control device of the shooting device, and the shooting device of the embodiments of the present application determine the depth information of the target object from images taken by the shooting device at different positions and focus on the target object according to that depth information, saving hardware cost while achieving multi-point focusing simply and conveniently.
  • FIG. 1 is a schematic flowchart of a control method of a shooting device according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of modules of a photographing device according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of another module of a photographing device according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of the focusing principle of the shooting device according to the embodiment of the present application.
  • FIG. 5 is a schematic diagram of a scene of a control method of a shooting device according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of another scene of a control method of a shooting device according to an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of a control method of a shooting device according to another embodiment of the present application.
  • FIG. 8 is a schematic block diagram of a method for controlling a shooting device according to another embodiment of the present application.
  • FIG. 9 is a schematic flowchart of a control method of a shooting device according to another embodiment of the present application.
  • FIG. 10 is a schematic block diagram of a control method of a shooting device according to another embodiment of the present application.
  • FIG. 11 is a schematic flowchart of a control method of a shooting device according to still another embodiment of the present application.
  • FIG. 12 is a schematic block diagram of a control method of a shooting device according to still another embodiment of the present application.
  • Shooting device 100, optical axis 101, control device 10, first acquisition module 12, second acquisition module 14, determination module 16, first determination unit 162, first determination subunit 1622, second determination subunit 1624, second determination unit 164, focusing module 18, third determination unit 182, fourth determination unit 184, adjustment unit 186, inertial measurement unit 30, lens 40, image sensor 50, processor 60, memory 70.
  • the terms “first” and “second” are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features.
  • the features defined as “first” and “second” may explicitly or implicitly include one or more of the features.
  • the meaning of “plurality” is two or more, unless otherwise specifically limited.
  • terms such as “installation”, “connection”, and “connected” should be understood in a broad sense: the connection may be fixed, detachable, or integral; mechanical or electrical, or a communication link; direct, or indirect through an intermediary; and it may be internal communication between two elements or an interaction relationship between two elements.
  • an embodiment of the present application provides a control method of a photographing apparatus 100, a control apparatus 10 of the photographing apparatus 100, and a photographing apparatus.
  • the control method of the shooting device 100 according to the embodiment of the present application includes:
  • Step S12: when the shooting device 100 is located at the first position, acquire a first image of the target object P;
  • Step S14: when the shooting device 100 is located at the second position, acquire a second image of the target object P;
  • Step S16: determine the depth information of the target object P according to the first image and the second image;
  • Step S18: control the shooting device 100 to focus on the target object P at the second position according to the depth information.
  • the control device 10 of the photographing apparatus 100 includes a first acquisition module 12, a second acquisition module 14, a determination module 16, and a focusing module 18.
  • the first acquisition module 12 is used to acquire the first image of the target object P when the shooting device 100 is located at the first position.
  • the second acquisition module 14 is used to acquire the second image of the target object P when the shooting device 100 is located at the second position.
  • the determination module 16 is used to determine the depth information of the target object P according to the first image and the second image.
  • the focusing module 18 is used to control the shooting device 100 to focus on the target object P at the second position according to the depth information.
  • the control method of the photographing apparatus 100, the control apparatus of the photographing apparatus 100, and the photographing apparatus 100 of the embodiments of the present application determine the depth information of the target object P from images taken by the photographing apparatus 100 at different positions and focus on the target object P according to that depth information, saving hardware cost while achieving multi-point focusing simply and conveniently.
  • a shooting device 100 includes a processor 60 and a memory 70.
  • the memory 70 stores one or more programs.
  • the processor 60 is used to execute the one or more programs to implement the control method of the photographing apparatus 100 of any embodiment of the present application.
  • the photographing apparatus 100 further includes an inertial measurement unit 30, a lens 40, and an image sensor 50.
  • the inertial measurement unit 30, the lens 40, the image sensor 50, the processor 60, and the memory 70 are connected through the bus 11.
  • the light from the subject passes through the lens 40 and is imaged on the image sensor 50.
  • the processor 60 of the photographing apparatus 100 controls the photographing apparatus 100 and processes the image captured by the image sensor 50.
  • the working principle of the shooting device 100 in FIG. 2 is similar to that of the shooting device 100 in FIG. 3, except that the control device 10 of the shooting device 100 performs the control. To avoid redundancy, it is not repeated here.
  • step S12 is executed before step S14.
  • step S14 may be performed before step S12.
  • the shooting device 100 includes but is not limited to a camera and other electronic devices with shooting functions, such as mobile phones, tablet computers, smart wearable devices, personal computers, drones, handheld gimbal devices, notebook computers, and the like.
  • the following description takes a camera as an example.
  • the photographing device 100 may be used to take more photos at multiple positions or in multiple postures, balancing matching accuracy against calculation error to obtain more accurate depth information. That is to say, the first position and the second position are only used to distinguish two different positions and are not exhaustive.
  • the shooting device 100 may be provided with a depth camera, which directly obtains the depth information of the target object in the picture taken by the shooting device 100 through the depth camera, and then performs subsequent focus plane adjustment based on the depth information.
  • the depth camera may be a time-of-flight (TOF) camera.
  • as long as the relative pose between the TOF camera and the shooting device 100 is well calibrated, the TOF camera can obtain a depth map with a single shot.
  • there are generally some differences in translation and in field of view (FOV) between the TOF camera and the shooting device 100; after matching, the corresponding point in the TOF-captured depth map can be found from an image point and the matching relationship, giving the depth of that image point.
  • the relative posture of the TOF camera and the shooting device 100 can be calibrated by a special calibration tool.
  • the camera's focusing principle is as follows: when the camera takes a picture, points off the plane of focus form a blur spot (circle of confusion) on the image plane; if the angle this spot subtends at the human eye is smaller than the eye's limiting resolution (about 1′), the image will not appear blurred to the eye. The range of distances in front of and behind the plane of focus permitted under this blur-spot size limit is the depth of field.
  • L is the distance to the target object (the plane of focus);
  • F is the aperture value (f-number), equal to the ratio of the focal length to the aperture diameter;
  • f is the camera focal length;
  • σ is the minimum allowable diameter of the circle of confusion.
  • Active autofocus emits infrared light, ultrasound, or laser toward the subject from the camera body, measures distance by receiving the reflected echo, and then adjusts the lens focus according to the depth information and the focus curve.
  • Passive autofocus includes phase focus detection and contrast focus detection.
  • for SLR cameras, phase-detection autofocus requires a specially designed focusing optical path and a dedicated focus sensor to obtain phase information.
  • most mirrorless digital cameras have instead begun to use phase-detection autofocus (PDAF) image sensors to obtain phase information directly in the imaging optical path, but such image sensors degrade image quality and focus less accurately in dim light.
  • the shooting device 100 shoots the target object P at the first position to obtain the first image of the target object P; at this time the focus plane S1 is at the human eye and perpendicular to the optical axis 101 of the shooting device 100. The shooting device 100 then shoots the target object P at the second position to obtain the second image of the target object P; because the focus plane has not been adjusted while the position of the shooting device 100 has changed, the focus plane S1 is now at the human ear, still perpendicular to the optical axis 101.
  • in other words, after a shooting device 100 that has only center focusing moves from the first position to the second position, the focus plane S1 changes, unless adjusted, from the plane at the human eye perpendicular to the optical axis 101 of the shooting device 100 into the plane at the human ear perpendicular to the optical axis 101, so the plane S2 at the human eye and perpendicular to the optical axis 101 of the photographing apparatus 100 may go out of focus.
  • in the related art, the shooting device is equipped with a high-precision processor that records the rotation of the shooting device; the adjustment to the depth of the focus plane is then calculated from the rotation angle, and the lens or the image sensor is moved according to the lens's focus table, so that after the shooting device moves from the first position to the second position the focus plane still falls on the plane that was center-focused at the first position (that is, the plane at the human eye and perpendicular to the optical axis 101 of the shooting device 100), realizing the focus compensation function.
  • the control method of the photographing apparatus 100, the control apparatus of the photographing apparatus 100, and the photographing apparatus 100 of the embodiments of the present application photograph the same target object P from two different perspectives at two different positions of the photographing apparatus 100,
  • obtaining the first image and the second image, and from them the depth information of the target object P.
  • based on this depth information, the camera's focus plane is adjusted to realize the focus compensation function.
  • step S16 includes:
  • Step S162: determine the spatial coordinates of the target object P according to the first image and the second image;
  • Step S164: determine the depth information according to the spatial coordinates.
  • the determination module 16 includes a first determination unit 162 and a second determination unit 164.
  • the first determining unit 162 is used to determine the spatial coordinates of the target object P according to the first image and the second image; the second determining unit 164 is used to determine the depth information according to the spatial coordinates.
  • in this way, the depth information of the target object P is determined.
  • the “spatial coordinates” here can be the spatial coordinates X of all points in the same field of view, in the camera coordinate system at the time the first image is taken;
  • the “depth information” here can be the depth information of the target object P at the time the second image is taken. The spatial coordinate X′ of the corresponding point in the camera coordinate system at the time the second image is taken can be calculated by the formula X′ = R⁻¹(X − T).
  • R is the rotation matrix
  • T is the translation matrix. The specific calculation method of the rotation matrix R and the translation matrix T will be described in detail later.
  • the Z-axis value of the spatial coordinate X in the camera coordinate system when the first image is taken at the first position, and the Z-axis value of the corresponding point's spatial coordinate X′ in the camera coordinate system when the second image is taken at the second position, are the depths, so the depth information can be determined.
  • step S162 includes:
  • Step S1622 Determine the relative pose information of the shooting device 100 at the first position and the second position according to the first image and the second image;
  • Step S1624: determine the spatial coordinates of the target object P according to the relative pose information.
  • the first determining unit 162 includes a first determining subunit 1622 and a second determining subunit 1624.
  • the first determination subunit 1622 is used to determine the relative pose information of the shooting device 100 at the first position and the second position according to the first image and the second image.
  • the second determining subunit 1624 is used to determine the spatial coordinates of the target object P according to the relative pose information.
  • the spatial coordinates of the target object P are determined based on the first image and the second image.
  • step S1622 includes: processing the first image and the second image to obtain a first matching set M of the first image and the second image;
  • and determining the relative pose information according to the first matching set M and the parameter information of the photographing apparatus 100.
  • the first determining sub-unit 1622 is used to process the first image and the second image to obtain the first matching set M of the first image and the second image, and to determine the relative pose information according to the first matching set M and the parameter information of the shooting device 100.
  • in this way, the relative pose information of the shooting device 100 at the first position and the second position is determined from the first image and the second image.
  • processing the first image and the second image to obtain a first matching set M of the first image and the second image includes:
  • the first feature point set I₁ and the second feature point set I₂ are matched to obtain a first matching set M.
  • the first determining subunit 1622 is used to determine the first feature point set I₁ of the first image and the second feature point set I₂ of the second image, and to match the first feature point set I₁ and the second feature point set I₂ to obtain the first matching set M.
  • determining the first feature point set I₁ of the first image and the second feature point set I₂ of the second image includes: determining the first feature point set I₁ and the second feature point set I₂ by at least one of feature extraction and block matching.
  • the first determining subunit 1622 is configured to determine the first feature point set I₁ and the second feature point set I₂ by at least one of feature extraction and block matching.
  • the first image and the second image may be processed through image sparse matching to obtain the first matching set M of the first image and the second image.
  • algorithms for feature point extraction include, but are not limited to, the Oriented FAST and Rotated BRIEF (ORB) algorithm, the HARRIS corner extraction algorithm, the Scale-Invariant Feature Transform (SIFT) algorithm, and the Speeded-Up Robust Features (SURF) algorithm.
  • the first feature point set I₁ and the second feature point set I₂ are matched to calculate the first matching set M:
  • M = {(x₁, x₂) | (K⁻¹x₂)ᵀ·E·K⁻¹x₁ ≈ 0, x₁ ∈ I₁, x₂ ∈ I₂}
  • x₁ is an element of the first feature point set I₁, and x₂ is an element of the second feature point set I₂.
  • the content of an element includes its two-dimensional pixel coordinates, a feature descriptor, and the size of its neighborhood. The two-dimensional pixel coordinates are the position of the feature point.
  • the feature descriptor describes an image neighborhood centered on the feature point. In general it is a vector of one or several dimensions, such as a SIFT or SURF feature; in the most simplified case it may be just the mean pixel value of the block: if the image is in RGB format the descriptor is the RGB values, and if it is YUV, the YUV values. Under normal circumstances, however, the descriptor is not such a simple feature and usually combines statistics such as gradient and orientation.
  • by matching the feature vectors, the elements with the highest similarity, or with similarity exceeding a certain threshold, can be combined into a matching pair.
  • the “approximately equal” sign is used in the above formula for the first matching set because exact equality holds only when the two image points represent the same object point, i.e. for a perfectly matched point;
  • finding matching points by extracting feature points and then performing similarity matching may, owing to precision errors and other causes, not correspond to exactly the same point and may be off by several pixels.
  • the photographing apparatus 100 includes an inertial measurement unit (IMU) 30, and matching the first feature point set I₁ and the second feature point set I₂ to obtain the first matching set M includes:
  • the first feature point set I₁ and the second feature point set I₂ are matched according to the motion information to obtain the first matching set M.
  • the photographing device 100 includes the inertial measurement unit 30, and the first determining subunit 1622 is configured to detect motion information of the photographing device 100 using the inertial measurement unit 30, and to match the first feature point set I₁ and the second feature point set I₂ according to the motion information to obtain the first matching set M.
  • the motion information may be the camera rotation and translation information provided by the IMU, and can guide the search area when matching image feature points.
  • the IMU has 3-axis acceleration and 3-axis angular velocity and can output the rotation angle and translation along the yaw (YAW), roll (ROLL), and pitch (PITCH) axes, so it can guide the search area when matching image feature points and improve matching efficiency.
  • the rotation matrix R and the translation matrix T can be determined according to the motion information.
  • the relative pose information includes an essential matrix E, a rotation matrix R, and a translation matrix T, and determining the relative pose information according to the first matching set M and the parameter information of the shooting device 100 includes:
  • determining the essential matrix E under preset constraint conditions according to the first matching set M and the parameter information, and decomposing the essential matrix E to obtain the rotation matrix R and the translation matrix T.
  • the relative pose information includes an essential matrix E, a rotation matrix R, and a translation matrix T.
  • the first determining subunit 1622 is used to determine the essential matrix E under preset constraint conditions according to the first matching set M and the parameter information, and to decompose the essential matrix E to obtain the rotation matrix R and the translation matrix T.
  • in this way, the relative pose information is determined according to the first matching set M and the parameter information of the photographing device 100.
  • the relative pose information may be determined from the first matching set M and the parameter information of the photographing device 100 by computing the camera rotation and translation from the sparse matches.
  • the parameter information of the shooting device 100 may be the internal parameter matrix K of the shooting device 100.
  • using the internal parameter matrix K and the first matching set M, the optimized essential matrix E can be calculated through an optimization method under the constraint (K⁻¹x₂)ᵀ·E·K⁻¹x₁ = 0 for every pair (x₁, x₂) ∈ M.
  • the optimal rotation matrix R and translation matrix T can be obtained by decomposing the essential matrix E, which satisfies E = [T]×·R, where [T]× is the skew-symmetric matrix of T.
  • the rotation matrix R and the translation matrix T represent the relative pose change of the shooting device 100 between taking the first image and the second image, that is, the relative pose information.
  • decomposing the essential matrix E to obtain the rotation matrix R and the translation matrix T can be performed by singular value decomposition (SVD).
  • in the optimization method, the point set is required to satisfy the above constraint formula; the resulting system of equations is solved and then re-verified with RANSAC (or least-median) to obtain the optimal result.
  • fx and fy represent the camera focal length in pixels in the x and y directions
  • cx and cy represent the center offset in pixels in the x and y directions.
  • if camera distortion is considered, radial distortion parameters such as k1 and k2 and tangential distortion parameters such as p1 and p2 are also included.
  • x″ = x′·(1 + k1·r² + k2·r⁴) + 2·p1·x′·y′ + p2·(r² + 2·x′²)
  • y″ = y′·(1 + k1·r² + k2·r⁴) + p1·(r² + 2·y′²) + 2·p2·x′·y′
  • u, v are the coordinates of a pixel in pixels.
  • the relative pose information includes an essential matrix E, a rotation matrix R, and a translation matrix T, and determining the spatial coordinates of the target object P according to the relative pose information includes: processing the first image and the second image according to the essential matrix E to obtain a second matching set N of the first image and the second image;
  • determining a third image according to the second matching set N and the first image, the third image being the image corresponding to the second matching set N within the first image;
  • processing the third image according to the rotation matrix R and the translation matrix T to obtain the spatial coordinates of the target object P.
  • the relative pose information includes an essential matrix E, a rotation matrix R, and a translation matrix T.
  • the second determining subunit 1624 is configured to process the first image and the second image according to the essential matrix E to obtain the second matching set N of the two images; to determine the third image based on the second matching set N and the first image, the third image being the image corresponding to the second matching set N within the first image; and to process the third image according to the rotation matrix R and the translation matrix T to obtain the spatial coordinates of the target object P.
  • in this way, the spatial coordinates of the target object P are determined from the relative pose information.
  • the first image and the second image may be processed according to the essential matrix E in a dense-matching manner to obtain the second matching set N of the first image and the second image.
  • with the essential matrix E obtained by sparse matching as a reference, the second matching set N covering many more corresponding pixels of the first image and the second image can be calculated:
  • N = {(u₁, u₂) | (K⁻¹u₂)ᵀ·E·K⁻¹u₁ = 0, u₁ ∈ P₁, u₂ ∈ P₂};
  • P₁ and P₂ are the densely matched pixels of the same field of view in the first image and the second image.
  • the image formed by the pixels of the first image corresponding to the second matching set N is taken as the “common image”, that is, the third image.
  • the final rotation matrix R and translation matrix T can be used to recover the coordinates X in three-dimensional space of the pixel points of the third image (pairs corresponding to the same object point), giving the spatial coordinates of the target object P:
  • the three-dimensional coordinates are referenced to the camera coordinate system at the time the first image is taken at the first position.
  • the Z-axis value of the spatial coordinate X in the camera coordinate system when the first image is taken at the first position, and the Z-axis value of the corresponding point's spatial coordinate X′ in the camera coordinate system when the second image is taken at the second position, are the depths, so the depth information can be determined.
  • step S18 includes:
  • Step S182: when focusing on the target object P at the second position, determine the depth of the adjustment point of the second image according to the depth information, the adjustment point of the second image being related to the focal point of the first image;
  • Step S184: determine the adjustment information of the shooting device 100 according to the depth of the adjustment point;
  • Step S186: adjust the shooting device 100 according to the adjustment information so that the shooting device 100 focuses on the target object P at the second position.
  • the focusing module 18 includes a third determination unit 182, a fourth determination unit 184, and an adjustment unit 186.
  • the third determining unit 182 is used to determine, when focusing on the target object P at the second position, the depth of the adjustment point of the second image according to the depth information, the adjustment point of the second image being related to the focal point of the first image.
  • the fourth determination unit 184 is used to determine the adjustment information of the photographing apparatus 100 according to the depth of the adjustment point.
  • the adjustment unit 186 is used to adjust the shooting device 100 according to the adjustment information so that the shooting device 100 focuses on the target object P at the second position.
  • in this way, the shooting device 100 is controlled to focus on the target object P at the second position according to the depth information.
  • referring again to FIG. 5, it can be understood that in this example the focus plane passes through the human eye when the first image is taken at the first position; when the second image is taken at the second position, the focus plane passes through the human ear because the focus has not been adjusted. The shooting device 100 therefore needs to be adjusted so that the adjusted focus plane passes through the human eye.
  • the focus plane is S1
  • the depth of the focus Q1 is L1
  • the focus plane S1 passes through the human eye.
  • when the second image is taken at the second position, the focus plane is still plane S1 because the focus has not been adjusted; owing to the change of position, the focus plane S1 passes through the human ear instead of the human eye, so plane S2 may be out of focus. That is, the focus plane needs to be adjusted from plane S1 to plane S2.
  • in the second image, the adjustment point corresponding to the focal point Q1 of the first image is the intersection Q2 of the optical axis 101 with the plane passing through the human eye and perpendicular to the optical axis 101.
  • the depth L2 of the adjustment point Q2 can be determined according to the depth information. In this way, the shooting device 100 can be adjusted according to L1 and L2 so that the focus plane moves from plane S1 to plane S2, making the shooting device 100 focus on the human eye of the target object P at the second position.
  • step S184 includes:
  • the adjustment information is determined according to the depth of the adjustment point and the preset adjustment relationship of the shooting device 100.
  • the fourth determination unit 184 is used to determine the adjustment information according to the depth of the adjustment point and the preset adjustment relationship of the photographing apparatus 100.
  • the shooting device 100 is adjusted according to the adjustment information so that the shooting device 100 focuses on the target object P at the second position.
  • the preset adjustment relationship may be a focus table. After the depth L1 of the focal point Q1 and the depth L2 of the adjustment point Q2 are determined, the focus table can be queried accordingly to determine the adjustment information. Further, the adjustment information includes at least one of lens adjustment information and image sensor adjustment information.
  • in one example, the focus table can be queried for the lens-movement distance required to refocus from L1 to L2, so that at the second position the shooting device 100 focuses on the plane of the human eye of the target object P, that is, plane S2, achieving focus compensation.
  • in another example, the focus table can be queried for the image-sensor movement distance required to refocus from L1 to L2, so that at the second position the shooting device 100 focuses on the plane of the human eye of the target object P, that is, plane S2, achieving focus compensation.
  • in yet another example, the focus table can be queried for both the lens-movement distance and the image-sensor movement distance required to refocus from L1 to L2, so that at the second position the shooting device 100 focuses on the plane of the human eye of the target object P, that is, plane S2, achieving focus compensation.
  • Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
  • a "computer-readable medium” may be any device that can contain, store, communicate, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • more specific examples (a non-exhaustive list) of computer-readable media include the following: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM).
  • the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
  • each part of the present application may be implemented by hardware, software, firmware, or a combination thereof.
  • multiple steps or methods may be performed using software or firmware stored in memory and executed by a suitable instruction execution system.
  • for example, if implemented in hardware, any one of or a combination of the following techniques known in the art may be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
  • each functional unit in each embodiment of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module.
  • the above-mentioned integrated modules may be implemented in the form of hardware or in the form of software function modules. If an integrated module is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)

Abstract

A control method for a shooting device includes: (S12) acquiring a first image of a target object (P) when the shooting device (100) is located at a first position; (S14) acquiring a second image of the target object (P) when the shooting device (100) is located at a second position; (S16) determining depth information of the target object (P) according to the first image and the second image; and (S18) controlling the shooting device (100) to focus on the target object (P) at the second position according to the depth information. The present application also discloses a control device (10) for the shooting device (100) and a shooting device (100).

Description

Control method for shooting device, control device for shooting device, and shooting device — TECHNICAL FIELD
The present application relates to the field of shooting technology, and in particular to a control method for a shooting device, a control device for a shooting device, and a shooting device.
BACKGROUND
In the related art, for a camera with only a center focus point, focusing on another region of the frame requires moving the camera so that the target object lies at the center of the frame and then refocusing. However, this changes the composition of the picture: when the target object needs to be in focus, only a centered composition can be used, and asymmetric compositions such as the rule of thirds or an S-shaped composition become impossible. How a camera with only a center focus point can achieve multi-point focusing has therefore become an urgent problem to be solved.
SUMMARY
Embodiments of the present application provide a control method for a shooting device, a control device for a shooting device, and a shooting device.
The control method for a shooting device of embodiments of the present application includes:
acquiring a first image of a target object when the shooting device is located at a first position;
acquiring a second image of the target object when the shooting device is located at a second position;
determining depth information of the target object according to the first image and the second image;
controlling the shooting device to focus on the target object at the second position according to the depth information.
The control device for a shooting device of embodiments of the present application includes:
a first acquisition module, configured to acquire a first image of a target object when the shooting device is located at a first position;
a second acquisition module, configured to acquire a second image of the target object when the shooting device is located at a second position;
a determination module, configured to determine depth information of the target object according to the first image and the second image;
a focusing module, configured to control the shooting device to focus on the target object at the second position according to the depth information.
The shooting device of embodiments of the present application includes a processor and a memory storing one or more programs, the processor being configured to execute the one or more programs to implement the control method for a shooting device of the above embodiments.
The control method for a shooting device, the control device for a shooting device, and the shooting device of embodiments of the present application determine the depth information of a target object from images taken by the shooting device at different positions and focus on the target object according to that depth information, saving hardware cost while achieving multi-point focusing simply and conveniently.
Additional aspects and advantages of embodiments of the present application will be given in part in the following description, will in part become apparent from the following description, or will be learned through practice of the embodiments of the present application.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of a control method of a shooting device according to an embodiment of the present application;
FIG. 2 is a schematic module diagram of a shooting device according to an embodiment of the present application;
FIG. 3 is another schematic module diagram of a shooting device according to an embodiment of the present application;
FIG. 4 is a schematic diagram of the focusing principle of a shooting device according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a scene of a control method of a shooting device according to an embodiment of the present application;
FIG. 6 is a schematic diagram of another scene of a control method of a shooting device according to an embodiment of the present application;
FIG. 7 is a schematic flowchart of a control method of a shooting device according to another embodiment of the present application;
FIG. 8 is a schematic module diagram of a control method of a shooting device according to another embodiment of the present application;
FIG. 9 is a schematic flowchart of a control method of a shooting device according to yet another embodiment of the present application;
FIG. 10 is a schematic module diagram of a control method of a shooting device according to yet another embodiment of the present application;
FIG. 11 is a schematic flowchart of a control method of a shooting device according to still another embodiment of the present application;
FIG. 12 is a schematic module diagram of a control method of a shooting device according to still another embodiment of the present application.
Description of reference numerals of main elements:
Shooting device 100, optical axis 101, control device 10, first acquisition module 12, second acquisition module 14, determination module 16, first determination unit 162, first determination subunit 1622, second determination subunit 1624, second determination unit 164, focusing module 18, third determination unit 182, fourth determination unit 184, adjustment unit 186, inertial measurement unit 30, lens 40, image sensor 50, processor 60, memory 70.
DETAILED DESCRIPTION
Embodiments of the present application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present application, and cannot be construed as limiting the present application.
In the description of the present application, it should be understood that the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Accordingly, features defined with "first" or "second" may explicitly or implicitly include one or more of the features. In the description of the present application, "plurality" means two or more, unless specifically defined otherwise.
In the description of the present application, it should be noted that, unless otherwise explicitly specified and defined, the terms "installation", "connection", and "connected" should be understood in a broad sense: the connection may be fixed, detachable, or integral; mechanical or electrical, or a communication link; direct, or indirect through an intermediary; and it may be internal communication between two elements or an interaction relationship between two elements. For a person of ordinary skill in the art, the specific meaning of the above terms in the present application can be understood according to the specific situation.
The disclosure below provides many different embodiments or examples to implement different structures of the present application. To simplify the disclosure of the present application, the components and settings of specific examples are described below. They are, of course, only examples and are not intended to limit the present application. Furthermore, the present application may repeat reference numerals and/or letters in different examples; this repetition is for the purposes of simplicity and clarity and does not in itself indicate a relationship between the various embodiments and/or settings discussed. In addition, the present application provides examples of various specific processes and materials, but a person of ordinary skill in the art will be aware of the application of other processes and/or the use of other materials.
Referring to FIG. 1 and FIG. 2, embodiments of the present application provide a control method for a shooting device 100, a control device 10 for the shooting device 100, and a shooting device.
The control method of the shooting device 100 of embodiments of the present application includes:
Step S12: when the shooting device 100 is located at the first position, acquiring a first image of the target object P;
Step S14: when the shooting device 100 is located at the second position, acquiring a second image of the target object P;
Step S16: determining the depth information of the target object P according to the first image and the second image;
Step S18: controlling the shooting device 100 to focus on the target object P at the second position according to the depth information.
The control device 10 of the shooting device 100 of embodiments of the present application includes a first acquisition module 12, a second acquisition module 14, a determination module 16, and a focusing module 18. The first acquisition module 12 is configured to acquire the first image of the target object P when the shooting device 100 is located at the first position. The second acquisition module 14 is configured to acquire the second image of the target object P when the shooting device 100 is located at the second position. The determination module 16 is configured to determine the depth information of the target object P according to the first image and the second image. The focusing module 18 is configured to control the shooting device 100 to focus on the target object P at the second position according to the depth information.
The control method of the shooting device 100, the control device of the shooting device 100, and the shooting device 100 of embodiments of the present application determine the depth information of the target object P from images taken by the shooting device 100 at different positions and focus on the target object P according to that depth information, saving hardware cost while achieving multi-point focusing simply and conveniently.
In addition, as shown in FIG. 3, a shooting device 100 of another embodiment of the present application includes a processor 60 and a memory 70 storing one or more programs; the processor 60 is configured to execute the one or more programs to implement the control method of the shooting device 100 of any embodiment of the present application. The shooting device 100 further includes an inertial measurement unit 30, a lens 40, and an image sensor 50. The inertial measurement unit 30, the lens 40, the image sensor 50, the processor 60, and the memory 70 are connected through a bus 11. Light from the subject passes through the lens 40 and forms an image on the image sensor 50. The processor 60 of the shooting device 100 controls the shooting device 100 and processes the images captured by the image sensor 50. The working principle of the shooting device 100 in FIG. 2 is similar to that of the shooting device 100 in FIG. 3, except that the control device 10 of the shooting device 100 performs the control; to avoid redundancy, it is not repeated here.
It should be noted that in the embodiment of FIG. 1, step S12 is executed before step S14. In other embodiments, step S14 may be executed before step S12.
The shooting device 100 includes, but is not limited to, a camera and other electronic devices with a shooting function, such as mobile phones, tablet computers, smart wearable devices, personal computers, drones, handheld gimbal devices, and notebook computers. The following description takes a camera as an example.
In addition, the shooting device 100 may take more photos at multiple positions or in multiple postures, balancing matching accuracy against calculation error to obtain more accurate depth information. That is to say, the first position and the second position are only used to distinguish two different positions and are not exhaustive.
Of course, the shooting device 100 may be provided with a depth camera, the depth information of the target object in the captured picture being obtained directly through the depth camera, with the subsequent focus-plane adjustment then performed on the basis of this depth information. Further, the depth camera may be a time-of-flight (TOF) camera. It can be understood that as long as the relative pose between the TOF camera and the shooting device 100 is well calibrated, the TOF camera can obtain a depth map with a single shot. There are generally some differences in translation and in field of view (FOV) between the TOF camera and the shooting device 100; after matching, the corresponding point in the TOF-captured depth map can be found from an image point and the matching relationship, giving the depth of that image point. Further, the relative pose of the TOF camera and the shooting device 100 can be calibrated with a dedicated calibration tool.
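The patent describes this registration only at the level of a calibrated relative pose and point matching; purely as an illustration, the following Python sketch shows one common way a calibrated TOF depth map could be re-projected into the main camera's view. The function name and the pose convention X_cam = R·X_tof + t are assumptions of this sketch, not the patent's notation.

```python
import numpy as np

def register_tof_depth(depth_tof, K_tof, K_cam, R, t, out_shape):
    """Re-project a TOF depth map into the main camera's image plane.

    depth_tof: (H, W) depth map in meters from the TOF camera.
    K_tof, K_cam: 3x3 intrinsic matrices (assumed already calibrated).
    R, t: calibrated relative pose mapping TOF-frame points into the
          main camera frame (X_cam = R @ X_tof + t).
    """
    h, w = depth_tof.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([us, vs, np.ones_like(us)], axis=-1).reshape(-1, 3).T
    d = depth_tof.reshape(1, -1)
    pts_tof = np.linalg.inv(K_tof) @ pix * d        # back-project TOF pixels
    pts_cam = R @ pts_tof + t.reshape(3, 1)         # move into camera frame
    z = pts_cam[2]
    keep = (d.ravel() > 0) & (z > 0)
    proj = (K_cam @ pts_cam)[:, keep] / z[keep]     # perspective projection
    u = np.round(proj[0]).astype(int)
    v = np.round(proj[1]).astype(int)
    depth_cam = np.full(out_shape, np.inf)
    inside = (u >= 0) & (u < out_shape[1]) & (v >= 0) & (v < out_shape[0])
    # Where several TOF points land on one pixel, keep the nearest surface.
    np.minimum.at(depth_cam, (v[inside], u[inside]), z[keep][inside])
    depth_cam[np.isinf(depth_cam)] = 0.0            # 0 marks "no depth known"
    return depth_cam
```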
Referring to FIG. 4, the focusing principle of a camera is as follows: when the camera takes a picture, points off the plane of focus form a blur spot (circle of confusion) on the image plane; if the angle this spot subtends at the human eye is smaller than the eye's limiting resolution (about 1′), the image will not appear blurred to the eye. The range of distances in front of and behind the plane of focus permitted under this blur-spot size limit is the depth of field.
The specific depth-of-field calculation is given by the following formula:
ΔL = 2·f²·F·σ·L² / (f⁴ − F²·σ²·L²)
where L is the distance to the target object (the plane of focus), F is the aperture value, equal to the ratio of the focal length to the aperture diameter, f is the camera focal length, and σ is the minimum allowable diameter of the circle of confusion.
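As a quick way to experiment with this relationship, a small Python helper evaluating the depth-of-field formula above might look as follows; the function name and the sample numbers are illustrative only.

```python
def depth_of_field(L, F, f, sigma):
    """Total depth of field for subject distance L, f-number F, focal
    length f, and allowable circle-of-confusion diameter sigma.
    All lengths must be in the same unit (meters here)."""
    return (2 * f**2 * F * sigma * L**2) / (f**4 - F**2 * sigma**2 * L**2)

# A 50 mm lens at f/2.8 focused at 3 m, with sigma = 0.03 mm:
# prints roughly 0.61 (meters of total depth of field).
print(depth_of_field(3.0, 2.8, 0.050, 0.00003))
```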
In addition, there are two common types of autofocus: active and passive. Active autofocus emits infrared light, ultrasound, or laser toward the subject from the camera body, measures distance by receiving the reflected echo, and then adjusts the lens focus according to the depth information and the focus curve. Passive autofocus includes phase-detection focusing and contrast-detection focusing. For SLR cameras, phase-detection autofocus requires a specially designed focusing optical path and a dedicated focus sensor to obtain phase information; most mirrorless digital cameras have instead begun to use phase-detection autofocus (PDAF) image sensors to obtain phase information directly in the imaging optical path, but such image sensors degrade image quality and focus less accurately in dim light. Contrast-detection focusing is mainly used on ordinary digital cameras; it is slower and is sensitive to the design of the contrast-information filter.
At present, to meet the requirements of both image quality and focusing speed, most high-end SLR cameras need a multi-point focusing system assembled in the camera hardware, such as multi-point focus sensors and specially designed optical paths, which sharply increases hardware cost.
For a camera with only a center focus point, focusing on another region of the frame requires moving the camera so that the target object lies at the center of the frame and refocusing. This, however, changes the composition: when the target object needs to be in focus, only a centered composition can be used, and asymmetric compositions such as the rule of thirds or an S-shaped composition cannot. Hence, a camera with only a center focus point usually needs to consider other ways of achieving multi-point focusing.
Specifically, in one example, referring to FIG. 5, the shooting device 100 shoots the target object P at the first position to obtain the first image of the target object P; at this time the focus plane S1 is at the human eye and perpendicular to the optical axis 101 of the shooting device 100. The shooting device 100 then shoots the target object P at the second position to obtain the second image of the target object P; because the focus plane has not been adjusted while the position of the shooting device 100 has changed, the focus plane S1 is now at the human ear, perpendicular to the optical axis 101 of the shooting device 100. That is to say, after a shooting device 100 with only center focusing moves from the first position to the second position, without adjustment the focus plane S1 changes from the plane at the human eye perpendicular to the optical axis 101 into the plane at the human ear perpendicular to the optical axis 101, so the plane S2 at the human eye and perpendicular to the optical axis 101 of the shooting device 100 may go out of focus.
To solve the above problem, the related art equips the shooting device with a high-precision processor that records the rotation of the shooting device, calculates the adjustment of the focus-plane depth from the rotation angle, and moves the lens or the image sensor according to the lens's focus table, so that after the shooting device moves from the first position to the second position the focus plane still falls on the plane that was center-focused at the first position (that is, the plane at the human eye and perpendicular to the optical axis 101 of the shooting device 100), realizing the focus compensation function. However, because of hardware cost constraints, the technical difficulty of high-precision instruments and of computing the focus adjustment, and the balance between battery life and device performance, it is difficult to achieve focus compensation with a shooting device that has only center focusing.
Based on the above discussion, and referring also to FIG. 6, the control method of the shooting device 100, the control device of the shooting device 100, and the shooting device 100 of embodiments of the present application photograph the same target object P from two different perspectives at two different positions of the shooting device 100, obtain the first image and the second image, and thereby the depth information of the target object P; finally, based on this depth information, the camera focus plane is adjusted to realize the focus compensation function. Compared with current multi-point-focus camera solutions and single-point-focus single-camera solutions requiring a high-precision processor, the advantages are: (1) hardware cost is saved; (2) image information is used as a reference, so compared with computing the change of object distance purely from high-precision sensors, the depths of multiple points can be computed at once and multi-point focusing achieved in one pass, without adjusting the camera pose repeatedly; (3) the density of focus points is higher.
Referring to FIG. 7, in some embodiments step S16 includes:
Step S162: determining the spatial coordinates of the target object P according to the first image and the second image;
Step S164: determining the depth information according to the spatial coordinates.
Referring to FIG. 8, in some embodiments the determination module 16 includes a first determination unit 162 and a second determination unit 164. The first determination unit 162 is configured to determine the spatial coordinates of the target object P according to the first image and the second image; the second determination unit 164 is configured to determine the depth information according to the spatial coordinates.
In this way, the depth information of the target object P is determined. Note that the "spatial coordinates" here can be the spatial coordinates X of all points in the same field of view, in the camera coordinate system at the time the first image is taken, and the "depth information" here can be the depth information of the target object P at the time the second image is taken. Further, the spatial coordinate X′ of the corresponding point in the camera coordinate system at the time the second image is taken can be calculated by the formula X′ = R⁻¹(X − T), where R is the rotation matrix and T is the translation matrix; the specific calculation of the rotation matrix R and the translation matrix T is detailed later. It can be understood that the Z-axis value of the spatial coordinate X in the camera coordinate system when the first image is taken at the first position, and the Z-axis value of the corresponding point's spatial coordinate X′ in the camera coordinate system when the second image is taken at the second position, are the depths, so the depth information can be determined.
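A minimal NumPy sketch of this transform, with a function name of our choosing and assuming R and T have already been computed as described later in this document:

```python
import numpy as np

def depth_at_second_position(X, R, T):
    """Apply the patent's transform X' = R^-1 (X - T) and read off depth.

    X: (N, 3) spatial coordinates in the camera frame of the first image.
    R: (3, 3) rotation matrix; T: (3,) translation matrix.
    Returns the (N,) depths (Z values) in the second image's camera frame."""
    X2 = (np.linalg.inv(R) @ (X - T).T).T
    return X2[:, 2]   # the Z-axis value is the depth
```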
Referring to FIG. 9, in some embodiments step S162 includes:
Step S1622: determining the relative pose information of the shooting device 100 at the first position and the second position according to the first image and the second image;
Step S1624: determining the spatial coordinates of the target object P according to the relative pose information.
In some embodiments, the first determination unit 162 includes a first determination subunit 1622 and a second determination subunit 1624. The first determination subunit 1622 is configured to determine the relative pose information of the shooting device 100 at the first position and the second position according to the first image and the second image. The second determination subunit 1624 is configured to determine the spatial coordinates of the target object P according to the relative pose information.
In this way, the spatial coordinates of the target object P are determined from the first image and the second image.
Specifically, step S1622 includes:
processing the first image and the second image to obtain a first matching set M of the first image and the second image;
determining the relative pose information according to the first matching set M and the parameter information of the shooting device 100.
With reference to FIG. 10, the first determination subunit 1622 is configured to process the first image and the second image to obtain the first matching set M of the first image and the second image, and to determine the relative pose information according to the first matching set M and the parameter information of the shooting device 100.
In this way, the relative pose information of the shooting device 100 at the first position and the second position is determined from the first image and the second image.
In some embodiments, processing the first image and the second image to obtain the first matching set M of the first image and the second image includes:
determining a first feature point set I₁ of the first image and a second feature point set I₂ of the second image;
matching the first feature point set I₁ and the second feature point set I₂ to obtain the first matching set M.
In some embodiments, the first determination subunit 1622 is configured to determine the first feature point set I₁ of the first image and the second feature point set I₂ of the second image, and to match the first feature point set I₁ and the second feature point set I₂ to obtain the first matching set M.
In this way, the first image and the second image are processed to obtain the first matching set M. Specifically, determining the first feature point set I₁ of the first image and the second feature point set I₂ of the second image includes determining the first feature point set I₁ and the second feature point set I₂ by at least one of feature extraction and block matching. Similarly, the first determination subunit 1622 is configured to determine the first feature point set I₁ and the second feature point set I₂ by at least one of feature extraction and block matching.
Note that in embodiments of the present application, the first image and the second image may be processed by sparse image matching to obtain the first matching set M of the first image and the second image.
Further, when determining the first feature point set I₁ of the first image and the second feature point set I₂ of the second image, algorithms for feature point extraction include, but are not limited to, the Oriented FAST and Rotated BRIEF (ORB) algorithm, the HARRIS corner extraction algorithm, the Scale-Invariant Feature Transform (SIFT) algorithm, and the Speeded-Up Robust Features (SURF) algorithm.
After the first feature point set I₁ of the first image and the second feature point set I₂ of the second image are obtained, the first feature point set I₁ and the second feature point set I₂ are matched, and the first matching set M is calculated as:
M = {(x₁, x₂) | (K⁻¹x₂)ᵀ·E·K⁻¹x₁ ≈ 0, x₁ ∈ I₁, x₂ ∈ I₂}.
Here x₁ is an element of the first feature point set I₁ and x₂ is an element of the second feature point set I₂. Further, the content of an element includes its two-dimensional pixel coordinates, a feature descriptor, and the size of its neighborhood. The two-dimensional pixel coordinates are the position of the feature point. The feature descriptor describes an image neighborhood centered on the feature point; in general it is a vector of one or several dimensions, such as a SIFT feature or a SURF feature, and in the most simplified case it may be just the mean pixel value of the block: if the image is in RGB format the descriptor is the RGB values, and if it is YUV, the YUV values. Of course, under normal circumstances the descriptor is not such a simple feature and usually combines statistics such as gradient and orientation.
In addition, by matching the feature vectors of x₁ and x₂, the elements with the highest similarity, or with similarity exceeding a certain threshold, can be combined into a matching pair. It can be understood that the "approximately equal" sign is used in the formula for the first matching set above because exact equality holds only when the two image points represent the same object point, i.e. for a perfectly matched point; finding matching points by extracting feature points and then performing similarity matching may, owing to precision errors and other causes, not correspond to exactly the same point and may be off by several pixels.
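By way of illustration only, a sparse matching step of this kind could be sketched with OpenCV's ORB implementation as follows. The function name and parameter choices are ours, and cross-checking is used as a simple stand-in for the similarity threshold mentioned above:

```python
import cv2

def sparse_match(img1, img2, max_matches=500):
    """Detect ORB features in both images and build the matching set M
    as pairs of pixel coordinates (x1, x2)."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    # Hamming distance suits ORB's binary descriptors; crossCheck keeps
    # only mutual best matches.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
    M = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt)
         for m in matches[:max_matches]]
    return M
```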
In some embodiments, the shooting device 100 includes an inertial measurement unit (IMU) 30, and matching the first feature point set I₁ and the second feature point set I₂ to obtain the first matching set M includes:
detecting motion information of the shooting device 100 with the inertial measurement unit 30;
matching the first feature point set I₁ and the second feature point set I₂ according to the motion information to obtain the first matching set M.
In some embodiments, the shooting device 100 includes the inertial measurement unit 30, and the first determination subunit 1622 is configured to detect motion information of the shooting device 100 with the inertial measurement unit 30, and to match the first feature point set I₁ and the second feature point set I₂ according to the motion information to obtain the first matching set M.
In this way, the first feature point set I₁ and the second feature point set I₂ are matched to obtain the first matching set M. Specifically, the motion information may be the camera rotation and translation information provided by the IMU, which can guide the search area when matching image feature points. In this embodiment, the IMU has 3-axis acceleration and 3-axis angular velocity and can output the rotation angle and translation along the yaw (YAW), roll (ROLL), and pitch (PITCH) axes, so it can guide the search area in feature matching and improve matching efficiency. Moreover, when the IMU is sufficiently accurate, the rotation matrix R and the translation matrix T can be determined directly from the motion information.
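One common way to use such an IMU prior — sketched here as an assumption rather than as the patent's own procedure — is to warp each feature location by the rotation-only ("infinite") homography K·R·K⁻¹ and restrict the match search to a window around the predicted location:

```python
import numpy as np

def predict_search_center(x1, K, R_imu):
    """Predict where a feature at pixel x1 in the first image should appear
    in the second image, using only the IMU rotation prior (translation
    ignored, i.e. the infinite homography H = K R K^-1)."""
    H = K @ R_imu @ np.linalg.inv(K)
    p = H @ np.array([x1[0], x1[1], 1.0])
    return p[:2] / p[2]   # search for match candidates near this point
```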
In some embodiments, the relative pose information includes an essential matrix E, a rotation matrix R, and a translation matrix T, and determining the relative pose information according to the first matching set M and the parameter information of the shooting device 100 includes:
determining the essential matrix E under preset constraint conditions according to the first matching set M and the parameter information;
decomposing the essential matrix E to obtain the rotation matrix R and the translation matrix T.
In some embodiments, the relative pose information includes an essential matrix E, a rotation matrix R, and a translation matrix T, and the first determination subunit 1622 is configured to determine the essential matrix E under preset constraint conditions according to the first matching set M and the parameter information, and to decompose the essential matrix E to obtain the rotation matrix R and the translation matrix T.
In this way, the relative pose information is determined according to the first matching set M and the parameter information of the shooting device 100. Note that in embodiments of the present application, the relative pose information may be determined by computing the camera rotation and translation from the sparse matches.
Specifically, the parameter information of the shooting device 100 may be the internal parameter matrix K of the shooting device 100. Using the internal parameter matrix K of the shooting device 100 and the first matching set M, the optimized essential matrix E can be calculated by an optimization method under the following constraint:
(K⁻¹x₂)ᵀ·E·K⁻¹x₁ = 0 for every pair (x₁, x₂) ∈ M.
The optimal rotation matrix R and translation matrix T can be obtained by decomposing the essential matrix E:
E = [T]×·R, where [T]× is the skew-symmetric matrix of T.
The rotation matrix R and the translation matrix T are the relative pose change of the shooting device 100 between taking the first image and the second image, that is, the relative pose information.
Note that whether the reference coordinate system of the rotation matrix R and the translation matrix T is the camera coordinate system at the time the first image is taken or at the time the second image is taken depends on the direction of the "relative pose change": if it is the change of the first image relative to the second image, then it is the camera coordinate system at the time the second image is taken. In addition, decomposing the essential matrix E into the rotation matrix R and the translation matrix T can be done by singular value decomposition (SVD).
Further, in the optimization method the point set is required to satisfy the above constraint formula; the resulting system of equations is solved and then re-verified with RANSAC (or least-median) to obtain the optimal result. See OpenCV's findEssentialMat function (referred to in the original text as findEssential), whose usage is essentially the same as that of findHomography.
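The text's reference to OpenCV suggests a sketch along the following lines; it is our illustration, not the patent's implementation, and it relies on the real OpenCV functions findEssentialMat and recoverPose:

```python
import cv2
import numpy as np

def relative_pose(pts1, pts2, K):
    """Estimate E under the epipolar constraint with RANSAC re-verification,
    then decompose it (via SVD internally) into R and T.
    pts1, pts2: (N, 2) arrays of matched pixel coordinates from M."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # recoverPose resolves the four-fold decomposition ambiguity by keeping
    # the (R, T) that places the points in front of both cameras.
    _, R, T, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return E, R, T
```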
In addition, the camera internal parameter matrix can be briefly described as:
K = [fx 0 cx; 0 fy cy; 0 0 1]
where fx and fy denote the camera focal length in pixels along the x and y directions, and cx and cy denote the center offset in pixels along the x and y directions.
If camera distortion is considered, radial distortion parameters such as k1 and k2 and tangential distortion parameters such as p1 and p2 are also included. The specific description is as follows:
u = fx·x′ + cx
v = fy·y′ + cy
x″ = x′·(1 + k1·r² + k2·r⁴) + 2·p1·x′·y′ + p2·(r² + 2·x′²)
y″ = y′·(1 + k1·r² + k2·r⁴) + p1·(r² + 2·y′²) + 2·p2·x′·y′
where r² = x′² + y′²
u = fx·x″ + cx
v = fy·y″ + cy
where u and v are the coordinates of a pixel point in pixels.
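Purely for illustration, applying this distortion model to project a 3D point could be coded as follows (the function name is ours):

```python
def distort_project(X, fx, fy, cx, cy, k1, k2, p1, p2):
    """Project a 3D point X = (X, Y, Z) in the camera frame to pixel
    coordinates using the radial (k1, k2) and tangential (p1, p2)
    distortion model given above."""
    xp, yp = X[0] / X[2], X[1] / X[2]       # normalized coordinates x', y'
    r2 = xp ** 2 + yp ** 2                  # r^2 = x'^2 + y'^2
    radial = 1 + k1 * r2 + k2 * r2 ** 2
    xpp = xp * radial + 2 * p1 * xp * yp + p2 * (r2 + 2 * xp ** 2)
    ypp = yp * radial + p1 * (r2 + 2 * yp ** 2) + 2 * p2 * xp * yp
    return fx * xpp + cx, fy * ypp + cy     # u, v in pixels
```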
In some embodiments, the relative pose information includes the essential matrix E, the rotation matrix R, and the translation matrix T, and determining the spatial coordinates of the target object P according to the relative pose information includes:
processing the first image and the second image according to the essential matrix E to obtain a second matching set N of the first image and the second image;
determining a third image according to the second matching set N and the first image, the third image being the image corresponding to the second matching set N within the first image;
processing the third image according to the rotation matrix R and the translation matrix T to obtain the spatial coordinates of the target object P.
In some embodiments, the relative pose information includes the essential matrix E, the rotation matrix R, and the translation matrix T, and the second determination subunit 1624 is configured to process the first image and the second image according to the essential matrix E to obtain the second matching set N of the first image and the second image; to determine the third image according to the second matching set N and the first image, the third image being the image corresponding to the second matching set N within the first image; and to process the third image according to the rotation matrix R and the translation matrix T to obtain the spatial coordinates of the target object P.
In this way, the spatial coordinates of the target object P are determined from the relative pose information. Note that in embodiments of the present application, the first image and the second image may be processed according to the essential matrix E by dense matching to obtain the second matching set N of the first image and the second image.
Specifically, with the essential matrix E obtained from sparse matching as a reference, the second matching set N of many more corresponding pixels in the first image and the second image can be calculated:
N = {(u₁, u₂) | (K⁻¹u₂)ᵀ·E·K⁻¹u₁ = 0, u₁ ∈ P₁, u₂ ∈ P₂};
where P₁ and P₂ are the densely matched pixels of the same field of view in the first image and the second image.
Then, the image formed by the pixels of the first image corresponding to the second matching set N is taken as the "common image", that is, the third image.
Finally, the final rotation matrix R and translation matrix T can be used to recover the coordinates X in three-dimensional space of the pixel points of the third image (pairs corresponding to the same object point), giving the spatial coordinates of the target object P: for each pair (u₁, u₂) ∈ N, X satisfies the triangulation relations λ₁·K⁻¹u₁ = X and λ₂·K⁻¹u₂ = R⁻¹(X − T) for some scalar depths λ₁ and λ₂.
As stated above, these three-dimensional coordinates are referenced to the camera coordinate system at the time the first image is taken at the first position. The spatial coordinate X′ of the corresponding point in the camera coordinate system at the time the second image is taken at the second position can be calculated by the formula X′ = R⁻¹(X − T). The Z-axis value of the spatial coordinate X in the first-position camera coordinate system and the Z-axis value of the corresponding point's spatial coordinate X′ in the second-position camera coordinate system are the depths, so the depth information can be determined.
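A hedged sketch of this recovery step using OpenCV's triangulation is given below. Note one assumption: OpenCV's recoverPose convention relates the two frames by x₂ ≃ K(R·X + T), which differs from the patent's X′ = R⁻¹(X − T) in the direction of the transform, so the (R, T) used here follow OpenCV's convention rather than necessarily the patent's:

```python
import cv2
import numpy as np

def triangulate(pts1, pts2, K, R, T):
    """Recover 3D coordinates X (in the first camera's frame) for the
    densely matched pixel pairs in N, using the recovered R and T.
    pts1, pts2: (N, 2) matched pixel coordinates."""
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first position
    P2 = K @ np.hstack([R, T.reshape(3, 1)])            # second position
    Xh = cv2.triangulatePoints(P1, P2,
                               pts1.T.astype(float), pts2.T.astype(float))
    X = (Xh[:3] / Xh[3]).T    # homogeneous -> Euclidean, shape (N, 3)
    return X                  # X[:, 2] is the depth at the first position
```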
Referring to FIG. 11, in some embodiments step S18 includes:
Step S182: when focusing on the target object P at the second position, determining the depth of the adjustment point of the second image according to the depth information, the adjustment point of the second image being related to the focal point of the first image;
Step S184: determining the adjustment information of the shooting device 100 according to the depth of the adjustment point;
Step S186: adjusting the shooting device 100 according to the adjustment information so that the shooting device 100 focuses on the target object P at the second position.
Referring to FIG. 12, in some embodiments the focusing module 18 includes a third determination unit 182, a fourth determination unit 184, and an adjustment unit 186. The third determination unit 182 is configured to determine, when focusing on the target object P at the second position, the depth of the adjustment point of the second image according to the depth information, the adjustment point of the second image being related to the focal point of the first image. The fourth determination unit 184 is configured to determine the adjustment information of the shooting device 100 according to the depth of the adjustment point. The adjustment unit 186 is configured to adjust the shooting device 100 according to the adjustment information so that the shooting device 100 focuses on the target object P at the second position.
In this way, the shooting device 100 is controlled to focus on the target object P at the second position according to the depth information. Referring again to FIG. 5, it can be understood that in this example the focus plane passes through the human eye when the first image is taken at the first position; when the second image is taken at the second position, the focus plane passes through the human ear because the focus has not been adjusted. The shooting device 100 therefore needs to be adjusted so that the adjusted focus plane passes through the human eye.
Specifically, when the first image is taken at the first position, the focus plane is S1, the depth of the focal point Q1 is L1, and the focus plane S1 passes through the human eye.
When the second image is taken at the second position, the focus plane is still plane S1 because the focus has not been adjusted; owing to the change of position, the focus plane S1 passes through the human ear rather than the human eye, so the plane S2, perpendicular to the optical axis 101 and passing through the human eye, may be out of focus. That is, the focus plane needs to be adjusted from plane S1 to plane S2.
In the second image, the adjustment point corresponding to the focal point Q1 of the first image is the intersection Q2 of the optical axis 101 with the plane that passes through the human eye and is perpendicular to the optical axis 101. The depth L2 of the adjustment point Q2 can be determined according to the depth information. In this way, the shooting device 100 can be adjusted according to L1 and L2 so that the focus plane moves from plane S1 to plane S2, making the shooting device 100 focus on the human eye of the target object P at the second position.
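To make the geometry concrete: since Q2 is the foot of the originally focused point (the human eye) on the new optical axis, its depth L2 is simply the Z value of that point expressed in the second camera frame. A minimal sketch using the patent's own transform (the function name is ours):

```python
import numpy as np

def adjustment_point_depth(X_eye, R, T):
    """Depth L2 of the adjustment point Q2 at the second position.

    X_eye: the 3D point focused at the first position (e.g. the human eye),
    in the first camera's coordinate frame. Q2 is that point's foot on the
    new optical axis, so L2 is just the point's Z value in the second frame."""
    X2 = np.linalg.inv(R) @ (X_eye - T)   # the patent's X' = R^-1 (X - T)
    return X2[2]
```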
In some embodiments, step S184 includes:
determining the adjustment information according to the depth of the adjustment point and a preset adjustment relationship of the shooting device 100.
In some embodiments, the fourth determination unit 184 is configured to determine the adjustment information according to the depth of the adjustment point and the preset adjustment relationship of the shooting device 100.
In this way, the shooting device 100 is adjusted according to the adjustment information so that the shooting device 100 focuses on the target object P at the second position. Specifically, the preset adjustment relationship may be a focus table; after the depth L1 of the focal point Q1 and the depth L2 of the adjustment point Q2 are determined, the focus table can be queried accordingly to determine the adjustment information. Further, the adjustment information includes at least one of lens adjustment information and image sensor adjustment information.
In one example, the focus table can be queried for the lens-movement distance required to refocus from L1 to L2, so that at the second position the shooting device 100 focuses on the plane of the human eye of the target object P, that is, plane S2, achieving focus compensation.
In another example, the focus table can be queried for the image-sensor movement distance required to refocus from L1 to L2, so that at the second position the shooting device 100 focuses on the plane of the human eye of the target object P, that is, plane S2, achieving focus compensation.
In yet another example, the focus table can be queried for both the lens-movement distance and the image-sensor movement distance required to refocus from L1 to L2, so that at the second position the shooting device 100 focuses on the plane of the human eye of the target object P, that is, plane S2, achieving focus compensation.
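Focus tables are discrete calibration data, so a practical lookup typically interpolates between entries. The table values below are entirely hypothetical; only the interpolation pattern is the point of the sketch:

```python
import numpy as np

# Hypothetical focus table: subject distance (m) -> lens position (motor steps).
FOCUS_TABLE_DIST = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 10.0])   # meters
FOCUS_TABLE_STEP = np.array([820, 560, 400, 330, 270, 210])    # motor steps

def lens_move_for_refocus(L1, L2):
    """Motor steps to move the lens when refocusing from depth L1 to L2,
    by interpolating the focus table (np.interp requires ascending x)."""
    s1 = np.interp(L1, FOCUS_TABLE_DIST, FOCUS_TABLE_STEP)
    s2 = np.interp(L2, FOCUS_TABLE_DIST, FOCUS_TABLE_STEP)
    return s2 - s1
```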
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "schematic embodiment", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic references to the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that parts of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one of or a combination of the following techniques known in the art may be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
A person of ordinary skill in the art can understand that all or part of the steps carried by the above implementation methods can be completed by instructing the relevant hardware through a program; the program may be stored in a computer-readable storage medium and, when executed, includes one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software function module. If the integrated module is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like. Although embodiments of the present application have been shown and described above, it can be understood that the above embodiments are exemplary and cannot be construed as limiting the present application; a person of ordinary skill in the art can change, modify, replace, and vary the above embodiments within the scope of the present application.

Claims (25)

  1. A control method for a shooting device, comprising:
    acquiring a first image of a target object when the shooting device is located at a first position;
    acquiring a second image of the target object when the shooting device is located at a second position;
    determining depth information of the target object according to the first image and the second image;
    controlling the shooting device to focus on the target object at the second position according to the depth information.
  2. The control method for a shooting device of claim 1, wherein determining the depth information of the target object according to the first image and the second image comprises:
    determining spatial coordinates of the target object according to the first image and the second image;
    determining the depth information according to the spatial coordinates.
  3. The control method for a shooting device of claim 2, wherein determining the spatial coordinates of the target object according to the first image and the second image comprises:
    determining relative pose information of the shooting device at the first position and the second position according to the first image and the second image;
    determining the spatial coordinates of the target object according to the relative pose information.
  4. The control method for a shooting device of claim 3, wherein determining the relative pose information of the shooting device at the first position and the second position according to the first image and the second image comprises:
    processing the first image and the second image to obtain a first matching set of the first image and the second image;
    determining the relative pose information according to the first matching set and parameter information of the shooting device.
  5. The control method for a shooting device of claim 4, wherein processing the first image and the second image to obtain the first matching set of the first image and the second image comprises:
    determining a first feature point set of the first image and a second feature point set of the second image;
    matching the first feature point set and the second feature point set to obtain the first matching set.
  6. The control method for a shooting device of claim 5, wherein the shooting device comprises an inertial measurement unit, and matching the first feature point set and the second feature point set to obtain the first matching set comprises:
    detecting motion information of the shooting device with the inertial measurement unit;
    matching the first feature point set and the second feature point set according to the motion information to obtain the first matching set.
  7. The control method for a shooting device of claim 5, wherein determining the first feature point set of the first image and the second feature point set of the second image comprises:
    determining the first feature point set and the second feature point set by at least one of feature extraction and block matching.
  8. The control method for a shooting device of claim 4, wherein the relative pose information comprises an essential matrix, a rotation matrix, and a translation matrix, and determining the relative pose information according to the first matching set and the parameter information of the shooting device comprises:
    determining the essential matrix under preset constraint conditions according to the first matching set and the parameter information;
    decomposing the essential matrix to obtain the rotation matrix and the translation matrix.
  9. The control method for a shooting device of claim 3, wherein the relative pose information comprises an essential matrix, a rotation matrix, and a translation matrix, and determining the spatial coordinates of the target object according to the relative pose information comprises:
    processing the first image and the second image according to the essential matrix to obtain a second matching set of the first image and the second image;
    determining a third image according to the second matching set and the first image, the third image being the image corresponding to the second matching set within the first image;
    processing the third image according to the rotation matrix and the translation matrix to obtain the spatial coordinates of the target object.
  10. The control method for a shooting device of claim 1, wherein controlling the shooting device to focus on the target object at the second position according to the depth information comprises:
    when focusing on the target object at the second position, determining a depth of an adjustment point of the second image according to the depth information, the adjustment point of the second image being related to a focal point of the first image;
    determining adjustment information of the shooting device according to the depth of the adjustment point;
    adjusting the shooting device according to the adjustment information so that the shooting device focuses on the target object at the second position.
  11. The control method for a shooting device of claim 10, wherein determining the adjustment information of the shooting device according to the depth of the adjustment point comprises:
    determining the adjustment information according to the depth of the adjustment point and a preset adjustment relationship of the shooting device.
  12. The control method for a shooting device of claim 10, wherein the adjustment information comprises at least one of lens adjustment information and image sensor adjustment information.
  13. A control device for a shooting device, comprising:
    a first acquisition module, configured to acquire a first image of a target object when the shooting device is located at a first position;
    a second acquisition module, configured to acquire a second image of the target object when the shooting device is located at a second position;
    a determination module, configured to determine depth information of the target object according to the first image and the second image;
    a focusing module, configured to control the shooting device to focus on the target object at the second position according to the depth information.
  14. The control device of claim 13, wherein the determination module comprises:
    a first determination unit, configured to determine spatial coordinates of the target object according to the first image and the second image;
    a second determination unit, configured to determine the depth information according to the spatial coordinates.
  15. The control device of claim 14, wherein the first determination unit comprises:
    a first determination subunit, configured to determine relative pose information of the shooting device at the first position and the second position according to the first image and the second image;
    a second determination subunit, configured to determine the spatial coordinates of the target object according to the relative pose information.
  16. The control device of claim 15, wherein the first determination subunit is configured to:
    process the first image and the second image to obtain a first matching set of the first image and the second image;
    determine the relative pose information according to the first matching set and parameter information of the shooting device.
  17. The control device of claim 16, wherein the first determination subunit is configured to determine a first feature point set of the first image and a second feature point set of the second image, and to match the first feature point set and the second feature point set to obtain the first matching set.
  18. The control device of claim 17, wherein the shooting device comprises an inertial measurement unit, and the first determination subunit is configured to detect motion information of the shooting device with the inertial measurement unit, and to match the first feature point set and the second feature point set according to the motion information to obtain the first matching set.
  19. The control device of claim 17, wherein the first determination subunit is configured to:
    determine the first feature point set and the second feature point set by at least one of feature extraction and block matching.
  20. The control device of claim 16, wherein the relative pose information comprises an essential matrix, a rotation matrix, and a translation matrix, and the first determination subunit is configured to determine the essential matrix under preset constraint conditions according to the first matching set and the parameter information, and to decompose the essential matrix to obtain the rotation matrix and the translation matrix.
  21. The control device of claim 15, wherein the relative pose information comprises an essential matrix, a rotation matrix, and a translation matrix, and the second determination subunit is configured to process the first image and the second image according to the essential matrix to obtain a second matching set of the first image and the second image; to determine a third image according to the second matching set and the first image, the third image being the image corresponding to the second matching set within the first image; and to process the third image according to the rotation matrix and the translation matrix to obtain the spatial coordinates of the target object.
  22. The control device of claim 13, wherein the focusing module comprises:
    a third determination unit, configured to determine, when focusing on the target object at the second position, a depth of an adjustment point of the second image according to the depth information, the adjustment point of the second image being related to a focal point of the first image;
    a fourth determination unit, configured to determine adjustment information of the shooting device according to the depth of the adjustment point;
    an adjustment unit, configured to adjust the shooting device according to the adjustment information so that the shooting device focuses on the target object at the second position.
  23. The control device of claim 22, wherein the fourth determination unit is configured to:
    determine the adjustment information according to the depth of the adjustment point and a preset adjustment relationship of the shooting device.
  24. The control device of claim 22, wherein the adjustment information comprises at least one of lens adjustment information and image sensor adjustment information.
  25. A shooting device, comprising a processor and a memory storing one or more programs, the processor being configured to execute the one or more programs to implement the control method for a shooting device of any one of claims 1 to 12.
PCT/CN2018/122523 2018-12-21 2018-12-21 Control method for shooting device, control device for shooting device, and shooting device WO2020124517A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/122523 WO2020124517A1 (zh) 2018-12-21 2018-12-21 Control method for shooting device, control device for shooting device, and shooting device
CN201880065930.1A CN111213364A (zh) 2018-12-21 2018-12-21 Control method for shooting device, control device for shooting device, and shooting device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/122523 WO2020124517A1 (zh) 2018-12-21 2018-12-21 Control method for shooting device, control device for shooting device, and shooting device

Publications (1)

Publication Number Publication Date
WO2020124517A1 true WO2020124517A1 (zh) 2020-06-25

Family

ID=70790041

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/122523 WO2020124517A1 (zh) 2018-12-21 2018-12-21 Control method for shooting device, control device for shooting device, and shooting device

Country Status (2)

Country Link
CN (1) CN111213364A (zh)
WO (1) WO2020124517A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114500842A (zh) * 2022-01-25 2022-05-13 维沃移动通信有限公司 Visual-inertial calibration method and apparatus therefor

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022141271A1 (zh) * 2020-12-30 2022-07-07 深圳市大疆创新科技有限公司 Control method for gimbal system, control device, gimbal system, and storage medium
CN113301248B (zh) * 2021-04-13 2022-09-06 中科创达软件股份有限公司 Photographing method and apparatus, electronic device, and computer storage medium
CN116095473A (zh) * 2021-11-01 2023-05-09 中兴终端有限公司 Lens autofocus method and apparatus, electronic device, and computer storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102168954A (zh) * 2011-01-14 2011-08-31 浙江大学 Method for measuring depth, depth field, and object size based on a monocular camera
CN102984530A (zh) * 2011-09-02 2013-03-20 宏达国际电子股份有限公司 Image processing system and autofocus method
WO2013069279A1 (ja) * 2011-11-09 2013-05-16 パナソニック株式会社 Imaging device
CN103292695A (zh) * 2013-05-10 2013-09-11 河北科技大学 Monocular stereoscopic vision measurement method
CN107509027A (zh) * 2017-08-08 2017-12-22 深圳市明日实业股份有限公司 Monocular fast focusing method and system
CN108711166A (zh) * 2018-04-12 2018-10-26 浙江工业大学 Monocular camera scale estimation method based on a quadrotor unmanned aerial vehicle
CN108717712A (zh) * 2018-05-29 2018-10-30 东北大学 Visual-inertial SLAM method based on a ground-plane assumption

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156859B (zh) * 2011-04-21 2012-10-03 刘津甦 Method for sensing hand posture and spatial position
CN104081755B (zh) * 2012-11-30 2018-02-02 松下知识产权经营株式会社 Image processing apparatus and image processing method
CN104102068B (zh) * 2013-04-11 2017-06-30 聚晶半导体股份有限公司 Autofocus method and autofocus apparatus
CN103246130B (zh) * 2013-04-16 2016-01-20 广东欧珀移动通信有限公司 Focusing method and apparatus
CN105744138B (zh) * 2014-12-09 2020-02-21 联想(北京)有限公司 Fast focusing method and electronic device
CN106412433B (zh) * 2016-10-09 2019-01-29 深圳奥比中光科技有限公司 Autofocus method and system based on an RGB-IR depth camera
CN106846403B (zh) * 2017-01-04 2020-03-27 北京未动科技有限公司 Method, apparatus, and smart device for hand positioning in three-dimensional space
CN107025666A (zh) * 2017-03-09 2017-08-08 广东欧珀移动通信有限公司 Depth detection method and apparatus based on a single camera, and electronic apparatus

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102168954A (zh) * 2011-01-14 2011-08-31 浙江大学 Method for measuring depth, depth field, and object size based on a monocular camera
CN102984530A (zh) * 2011-09-02 2013-03-20 宏达国际电子股份有限公司 Image processing system and autofocus method
WO2013069279A1 (ja) * 2011-11-09 2013-05-16 パナソニック株式会社 Imaging device
CN103292695A (zh) * 2013-05-10 2013-09-11 河北科技大学 Monocular stereoscopic vision measurement method
CN107509027A (zh) * 2017-08-08 2017-12-22 深圳市明日实业股份有限公司 Monocular fast focusing method and system
CN108711166A (zh) * 2018-04-12 2018-10-26 浙江工业大学 Monocular camera scale estimation method based on a quadrotor unmanned aerial vehicle
CN108717712A (zh) * 2018-05-29 2018-10-30 东北大学 Visual-inertial SLAM method based on a ground-plane assumption

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HU, PAN: "Research of Local Invariant Features Extraction Algorithm Based on Visual Servo", CHINESE MASTER’S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY, no. 06, 10 June 2017 (2017-06-10) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114500842A (zh) * 2022-01-25 2022-05-13 维沃移动通信有限公司 Visual-inertial calibration method and apparatus therefor

Also Published As

Publication number Publication date
CN111213364A (zh) 2020-05-29

Similar Documents

Publication Publication Date Title
WO2020124517A1 (zh) Control method for shooting device, control device for shooting device, and shooting device
CN107636682B (zh) Image acquisition apparatus and operation method therefor
JP6663040B2 (ja) Depth information acquisition method and apparatus, and image acquisition device
US11102413B2 (en) Camera area locking
US10915998B2 (en) Image processing method and device
EP3627821B1 (en) Focusing method and apparatus for realizing clear human face, and computer device
WO2017020150A1 (zh) Image processing method and apparatus, and camera
US11671701B2 (en) Electronic device for recommending composition and operating method thereof
WO2018053825A1 (zh) Focusing method and apparatus, image photographing method and apparatus, and photographing system
US20210127059A1 (en) Camera having vertically biased field of view
WO2021218568A1 (zh) Image depth determination method, living-body recognition method, circuit, device, and medium
KR102382871B1 (ko) Electronic device for controlling a focus of a lens, and method for controlling the electronic device
EP3718296B1 (en) Electronic device and method for controlling autofocus of camera
CN110213491B (zh) Focusing method, apparatus, and storage medium
US20210051262A1 (en) Camera device and focus method
WO2023236508A1 (zh) Image stitching method and system based on a gigapixel array camera
JP5857712B2 (ja) Stereo image generation apparatus, stereo image generation method, and computer program for stereo image generation
TW202242716A (zh) Method, apparatus, and device for target matching, and storage medium
WO2022141271A1 (zh) Control method for gimbal system, control device, gimbal system, and storage medium
WO2022021093A1 (zh) Photographing method, photographing apparatus, and storage medium
US11956530B2 (en) Electronic device comprising multi-camera, and photographing method
WO2018161322A1 (zh) Depth-based image processing method, processing apparatus, and electronic apparatus
WO2020216037A1 (zh) Control device, photographing device, mobile body, control method, and program
CN115278071B (zh) Image processing method and apparatus, electronic device, and readable storage medium
US11949984B2 (en) Electronic device that performs a driving operation of a second camera based on a determination that a tracked object is leaving the field of view of a moveable first camera having a lesser angle of view than the second camera, method for controlling the same, and recording medium of recording program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18943473

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18943473

Country of ref document: EP

Kind code of ref document: A1