WO2016101481A1 - Autofocus method and apparatus - Google Patents

Autofocus method and apparatus

Info

Publication number
WO2016101481A1
Authority
WO
WIPO (PCT)
Prior art keywords
coordinate
spatial
target object
focus
distance
Prior art date
Application number
PCT/CN2015/077963
Other languages
English (en)
French (fr)
Inventor
鲍协浩
姜东亚
杨万坤
Original Assignee
小米科技有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 小米科技有限责任公司 filed Critical 小米科技有限责任公司
Priority to MX2015009132A priority Critical patent/MX358881B/es
Priority to JP2016565542A priority patent/JP6348611B2/ja
Priority to KR1020157016842A priority patent/KR101678483B1/ko
Priority to RU2015129487A priority patent/RU2612892C2/ru
Priority to BR112015019722A priority patent/BR112015019722A2/pt
Priority to US14/809,591 priority patent/US9729775B2/en
Publication of WO2016101481A1 publication Critical patent/WO2016101481A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals

Definitions

  • the present disclosure relates to the field of communication terminal technologies, and in particular, to an autofocus method and apparatus.
  • one of the most common application functions is the camera function integrated on a smart terminal, through which the user can photograph scenes or people of interest at any time and place. When the camera function is turned on and the user frames a scene through the viewfinder, manual focusing can be used: clicking on a framing target in the viewfinder, such as a person's face, focuses on the framing content.
  • the present disclosure provides an autofocus method and apparatus to solve the problem in the related art that a cumbersome manual focusing operation results in a poor shooting experience for the user.
  • an autofocus method comprising:
  • the target object is automatically focused according to the second spatial data.
  • the acquiring the first spatial data of the target object includes:
  • a first space vector angle of the first vector between the focus and the first position is calculated.
  • the calculating of the first vertical distance from the focus to the image sensor comprises:
  • a difference between the image distance and a fixed focal length is calculated, the difference being used as the first vertical distance from the focus to the image sensor.
  • the obtaining, according to the first vertical distance, of the first spatial coordinate of the first position at which the target object is imaged on the image sensor comprises:
  • determining, according to the second two-dimensional coordinate and the first vertical distance, the first spatial coordinate at which the target object is imaged on the image sensor, wherein the X-axis coordinate value of the first spatial coordinate is the X-axis coordinate value of the second two-dimensional coordinate, the Y-axis coordinate value of the first spatial coordinate is the Y-axis coordinate value of the second two-dimensional coordinate, and the Z-axis coordinate value of the first spatial coordinate is the first vertical distance.
  • acquiring location change data includes:
  • the space change vector angle detected by the direction sensor is acquired as the position change data.
  • the calculating the second spatial data of the target object according to the first spatial data and the position change data includes:
  • the second space vector angle is a space vector angle of the second vector between the focus and the second position
  • the second position is a position at which the target object is imaged on the image sensor after the autofocus is completed
  • the performing autofocus according to the second spatial data includes:
  • the lens group is moved until the distance from the lens group to the image sensor is the adjusted image distance.
  • before the performing of autofocus according to the second spatial data, the method includes:
  • the second spatial coordinate is corrected according to the third spatial coordinate to obtain the corrected second spatial coordinate, including:
  • an autofocus apparatus comprising:
  • An acquiring module configured to acquire first spatial data of the target object after the user clicks on the target object in the viewfinder to complete manual focusing;
  • a detecting module configured to acquire position change data when detecting that the framing content in the viewfinder changes
  • a first calculating module configured to calculate second spatial data of the target object according to the first spatial data and the position change data
  • a focusing module configured to perform auto focusing on the target object according to the second spatial data.
  • the acquiring module includes:
  • a first vertical distance calculation sub-module configured to calculate a first vertical distance of the focus to the image sensor, wherein the target object is imaged on the image sensor when the manual focus is completed;
  • a first spatial coordinate obtaining submodule configured to take the focus as the origin of a three-dimensional rectangular coordinate system and obtain, according to the first vertical distance, the first spatial coordinate of the first position at which the target object is imaged on the image sensor;
  • a first space vector angle calculation submodule configured to calculate a first space vector angle of the first vector between the focus and the first position.
  • the first vertical distance calculation submodule includes:
  • an image distance obtaining sub-module configured to obtain the image distance when the manual focus is completed;
  • the difference calculation sub-module is configured to calculate the difference between the image distance and the fixed focal length, and to use the difference as the first vertical distance from the focus to the image sensor.
  • the first spatial coordinate obtaining submodule includes:
  • a first two-dimensional coordinate acquisition sub-module configured to acquire, by using the center of the viewfinder as the origin of a plane rectangular coordinate system, a first two-dimensional coordinate of the target object in the plane rectangular coordinate system, where the center of the viewfinder and the focus lie on the same normal line;
  • a second two-dimensional coordinate obtaining sub-module configured to convert the first two-dimensional coordinates according to a preset ratio, to obtain a second two-dimensional coordinate that the target object is imaged on the image sensor;
  • a first spatial coordinate determining submodule configured to determine, according to the second two-dimensional coordinate and the first vertical distance, the first spatial coordinate at which the target object is imaged on the image sensor, wherein the X-axis coordinate value of the first spatial coordinate is the X-axis coordinate value of the second two-dimensional coordinate, the Y-axis coordinate value of the first spatial coordinate is the Y-axis coordinate value of the second two-dimensional coordinate, and the Z-axis coordinate value of the first spatial coordinate is the first vertical distance.
  • the detecting module includes:
  • An acceleration detecting submodule configured to determine, by the acceleration data detected by the acceleration sensor, whether the viewfinder moves;
  • a change vector angle acquisition submodule configured to acquire, when the viewfinder moves, the space change vector angle detected by the direction sensor as the position change data.
  • the first calculating module includes:
  • a first straight-line distance calculation submodule configured to calculate the first straight-line distance from the focus to the first position according to the first spatial coordinate;
  • a second space vector angle calculation submodule configured to calculate a second space vector angle according to the first space vector angle and the space change vector angle, where the second space vector angle is the space vector angle of the second vector between the focus and the second position, the second position being the position at which the target object is imaged on the image sensor after the autofocus is completed;
  • a second spatial coordinate calculation submodule configured to calculate a second spatial coordinate of the second position according to the first straight line distance and the second space vector angle.
  • the focusing module includes:
  • a second vertical distance obtaining submodule configured to obtain a second vertical distance from the focus to the second position according to the second spatial coordinate, wherein the second vertical distance is the Z-axis coordinate value of the second spatial coordinate;
  • an adjusted image distance calculation sub-module configured to calculate the sum of the second vertical distance and the fixed focal length, and to take the sum as the adjusted image distance;
  • the lens group moving sub-module is configured to move the lens group until the distance from the lens group to the image sensor is the adjusted image distance.
  • the device further includes:
  • a second calculating module configured to calculate a third spatial coordinate of the second location by using an image recognition algorithm
  • a correction module configured to correct the second spatial coordinate according to the third spatial coordinate to obtain the corrected second spatial coordinate.
  • the calibration module includes:
  • a correction threshold judging module configured to determine whether a distance between the third spatial coordinate and the second spatial coordinate is less than a preset correction threshold
  • a corrected coordinate value calculation submodule configured to, when the distance is less than the correction threshold, calculate the average of the X-axis coordinate values of the third spatial coordinate and the second spatial coordinate as the X-axis coordinate value of the corrected second spatial coordinate, calculate the average of the Y-axis coordinate values of the third spatial coordinate and the second spatial coordinate as the Y-axis coordinate value of the corrected second spatial coordinate, and calculate the Z-axis coordinate value of the corrected second spatial coordinate according to the first straight-line distance, the X-axis coordinate value of the corrected second spatial coordinate, and the Y-axis coordinate value of the corrected second spatial coordinate.
  • another autofocus device comprising:
  • a memory for storing processor executable instructions
  • the processor is configured to:
  • the target object is automatically focused according to the second spatial data.
  • the first spatial data of the target object is acquired, and when a change in the framing content in the viewfinder is detected, the position change data is acquired.
  • after the second spatial data of the target object is calculated according to the first spatial data and the position change data, the autofocus may be completed according to the second spatial data. Therefore, when the user takes a picture, if the viewfinder moves but the target object does not move out of the viewfinder, the target object can be automatically focused, which avoids a manual focus operation when the framing content changes, simplifies the focusing workflow, increases the focusing speed, and accordingly improves the user's shooting experience.
  • when acquiring the first spatial data of the target object, the present disclosure obtains the first spatial coordinate at which the target object is imaged on the image sensor, together with the first space vector angle, by using the image distance obtained after the manual focus is completed and by taking the focus as the origin of a three-dimensional rectangular coordinate system, so that the first spatial coordinate and the first space vector angle can be used to calculate the spatial data after the position of the target object changes, thereby facilitating automatic focusing.
  • the disclosure can also use the acceleration sensor integrated in the terminal to determine whether the viewfinder moves, and when the viewfinder moves, the direction sensor can detect the space change vector angle produced by the movement, so that the spatial data after the position of the target object changes can be calculated according to the space change vector angle, the first spatial coordinate, and the first space vector angle, so as to achieve autofocus.
  • the present disclosure can also correct the second spatial coordinate by the third spatial coordinate calculated by the image recognition algorithm before performing autofocus according to the second spatial coordinate, thereby further improving the accuracy of the auto focus.
  • FIG. 1 is a flow chart of an autofocus method according to an exemplary embodiment of the present disclosure.
  • FIG. 2 is a flow chart of another autofocus method according to an exemplary embodiment of the present disclosure.
  • FIG. 3 is a schematic diagram of imaging after completion of focusing of a terminal according to an exemplary embodiment of the present disclosure.
  • FIG. 4 is a block diagram of an autofocus apparatus according to an exemplary embodiment of the present disclosure.
  • FIG. 5 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
  • FIG. 6 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
  • FIG. 7 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
  • FIG. 8 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
  • FIG. 9 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
  • FIG. 10 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
  • FIG. 11 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
  • FIG. 12 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
  • FIG. 13 is a schematic structural diagram of an autofocus device according to an exemplary embodiment of the present disclosure.
  • first, second, third, etc. may be used in the present disclosure to describe various information, such information should not be limited to these terms. These terms are only used to distinguish the same type of information from each other.
  • first information may also be referred to as second information without departing from the scope of the present disclosure.
  • second information may also be referred to as first information.
  • the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
  • FIG. 1 is a flowchart of an auto focus method according to an exemplary embodiment. The method may be used in a terminal, including the following steps:
  • In step 101, when the user clicks on the target object in the viewfinder and manual focusing is completed, the first spatial data of the target object is acquired.
  • the terminal in the embodiment of the present disclosure mainly refers to various smart terminals integrated with camera functions, such as a smart phone, a tablet computer, a PDA (Personal Digital Assistant), and the like.
  • the lens group for implementing the camera function on a smart terminal usually adopts a fixed focal length (f), that is, optical zoom cannot be performed. During focusing, the terminal moves the lens group to change the distance between the lens group and the image sensor used for imaging, so that this distance equals the image distance (v), that is, the focal plane of the focused image coincides with the plane of the image sensor. The image is then sharp and focusing is complete.
  • when the user turns on the camera function of the terminal, the user can adjust the picture to be shot by viewing the framing content in the viewfinder, and can perform manual focusing by clicking on a target object in the viewfinder.
  • after manual focusing is completed, the target object is imaged on the image sensor and the image is sharp.
  • if the user then moves the viewfinder to reframe, the framing content changes and the position of the target object in the viewfinder changes; as long as the target object has not moved out of the viewfinder, the terminal can automatically focus on the target object.
  • to this end, after manual focusing is completed, the first spatial data of the target object is acquired; the first spatial data may include the first spatial coordinate and the first space vector angle, so that the subsequent autofocus process can be completed using the first spatial data.
  • when acquiring the first spatial data, the terminal may first calculate the first vertical distance from the focus to the image sensor, and establish a three-dimensional rectangular coordinate system with the focus as the origin. Since manual focusing has been completed, the target object is imaged on the image sensor; the first position at which the target object is imaged on the image sensor is taken, and the first spatial coordinate of this first position in the three-dimensional rectangular coordinate system is obtained. The first spatial coordinate consists of an X-axis coordinate, a Y-axis coordinate, and a Z-axis coordinate, where the Z-axis coordinate value is the aforementioned first vertical distance. Then, based on this coordinate system, the first space vector angle of the first vector between the focus and the first position can be calculated using the vector angle formula of the related art; the first space vector angle includes the angle between the first vector and the X-axis, the angle between the first vector and the Y-axis, and the angle between the first vector and the Z-axis.
  • In step 102, when a change in the framing content in the viewfinder is detected, the position change data is acquired.
  • a smart terminal usually integrates multiple sensors with different functions, which may include an acceleration sensor and a direction sensor.
  • the acceleration sensor is used to detect the magnitude and direction of the acceleration received by the smart terminal, so that it can be judged whether the terminal has rotated; the direction sensor is used to detect the movement angle of the smart terminal about each coordinate axis in three-dimensional space.
  • the direction sensor may be specifically a gyro sensor.
  • the terminal may determine from the acceleration data whether the terminal has rotated, and thereby determine whether the viewfinder has moved. When it is determined that the viewfinder has moved and then stopped moving, the space change vector angle detected by the direction sensor is acquired; the space change vector angle is the change of the current space vector angle relative to the space vector angle at the completion of manual focusing, namely the X-axis change angle, the Y-axis change angle, and the Z-axis change angle.
  • In step 103, the second spatial data of the target object is calculated based on the first spatial data and the position change data.
  • the first space vector angle of the first vector between the focus and the first position was obtained in step 101, and the space change vector angle was obtained in step 102, so in this step the second space vector angle can be calculated from the first space vector angle and the space change vector angle. The second space vector angle is the space vector angle of the second vector between the focus and the second position, the second position being the position at which the target object is imaged after the autofocus is completed. The X-axis angle of the second space vector angle is the sum of the X-axis angle of the first space vector angle and the X-axis change angle of the space change vector angle; the Y-axis angle of the second space vector angle is the sum of the Y-axis angle of the first space vector angle and the Y-axis change angle of the space change vector angle; and the Z-axis angle of the second space vector angle is the sum of the Z-axis angle of the first space vector angle and the Z-axis change angle of the space change vector angle. The first straight-line distance from the focus to the first position is calculated from the first spatial coordinate, and the second spatial coordinate of the second position is then calculated from the first straight-line distance and the second space vector angle.
  • In step 104, the target object is automatically focused in accordance with the second spatial data.
  • the second vertical distance from the focus to the second position may be obtained from the second spatial coordinate, where the second vertical distance is the Z-axis coordinate value of the second spatial coordinate. The sum of the second vertical distance and the fixed focal length is calculated and taken as the adjusted image distance; the terminal then moves the lens group until the distance from the lens group to the image sensor equals the adjusted image distance. The image of the target object then falls on the image sensor, the target object is imaged sharply, and autofocus is complete.
  • before autofocus is performed according to the second spatial data, the second spatial data may be corrected: when the second spatial data is the second spatial coordinate of the second position, an image recognition algorithm may be used to calculate a third spatial coordinate of the second position, and the second spatial coordinate is corrected according to the third spatial coordinate to obtain the corrected second spatial coordinate.
  • specifically, the terminal may determine whether the distance between the third spatial coordinate and the second spatial coordinate is less than a preset correction threshold. When it is less than the correction threshold, the average of the X-axis coordinate values of the third spatial coordinate and the second spatial coordinate is taken as the X-axis coordinate value of the corrected second spatial coordinate, and the average of the Y-axis coordinate values of the third spatial coordinate and the second spatial coordinate is taken as the Y-axis coordinate value of the corrected second spatial coordinate; the Z-axis coordinate value of the corrected second spatial coordinate is then calculated from the first straight-line distance, the corrected X-axis coordinate value, and the corrected Y-axis coordinate value (see the sketch below).
  • in summary, when the user clicks on the target object in the viewfinder to complete manual focusing, the first spatial data of the target object is acquired; when a change in the framing content in the viewfinder is detected, the position change data is acquired; and after the second spatial data of the target object is calculated according to the first spatial data and the position change data, the autofocus can be completed according to the second spatial data.
  • this embodiment may further correct the second spatial coordinate, before performing autofocus according to it, by using the third spatial coordinate calculated by an image recognition algorithm, thereby further improving the accuracy of autofocus.
  • FIG. 2 is a flowchart of another autofocus method according to an exemplary embodiment.
  • the method can be used in a terminal, including the following steps:
  • In step 201, when the user clicks on the target object in the viewfinder and manual focusing is completed, the first vertical distance from the focus to the image sensor is calculated.
  • the terminal in the embodiment of the present disclosure mainly refers to various intelligent terminals integrated with camera functions.
  • for the lens group, the Gaussian imaging formula 1/f = 1/u + 1/v holds between the focal length (f), the object distance (u), and the image distance (v), where the focal length is the distance from the lens group to the focus, the object distance is the distance from the vertical plane of the photographed object to the lens group, and the image distance is the distance from the image to the lens group.
  • FIG. 3 is a schematic diagram of imaging after completion of focusing of a terminal according to an exemplary embodiment.
  • after the camera function is turned on, the picture to be photographed can be adjusted by viewing the framing content in the viewfinder, and manual focusing can be performed by clicking on a target object in the viewfinder, as shown in FIG. 3. After manual focusing is completed, the target object is imaged on the image sensor and the image is sharp. Assuming the image distance at this point is v1 and the fixed focal length is f, the first vertical distance from the focus to the image sensor is d1 = v1 − f.
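  • As a small hedged sketch (the function names are illustrative, not from the patent), these two relations can be written as:

```python
def image_distance(u, f):
    # Gaussian imaging formula 1/f = 1/u + 1/v, solved for the image distance v.
    return 1.0 / (1.0 / f - 1.0 / u)

def first_vertical_distance(v1, f):
    # d1 = v1 - f: the focus lies one focal length behind the lens group,
    # while the sensor lies at the image distance v1.
    return v1 - f
```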
  • In step 202, the first spatial coordinate of the first position at which the target object is imaged on the image sensor is obtained according to the first vertical distance, with the focus as the origin of a three-dimensional rectangular coordinate system.
  • the center of the viewfinder is first taken as the origin of a plane rectangular coordinate system; the center of the viewfinder and the focus lie on the same normal line. The first two-dimensional coordinate P(x, y) of the target object in this plane rectangular coordinate system is acquired. Since manual focusing is complete, the target object is imaged at a first position P1 on the image sensor, and the first spatial coordinate of P1 in the three-dimensional rectangular coordinate system is to be obtained. According to the ratio between the size of the viewfinder and the size of the image sensor, the first two-dimensional coordinate P(x, y) is converted at a preset ratio into the second two-dimensional coordinate at which the target object is imaged on the image sensor, assumed here to be (x1, y1).
  • for example, if the viewfinder has a pixel size of 1440 × 1080 and the image sensor has a length and width of 0.261 inch and 0.196 inch respectively, then a first two-dimensional coordinate of P(500 px, 500 px) on the viewfinder corresponds to a second two-dimensional coordinate of approximately (0.090 inch, 0.090 inch) on the image sensor.
  • the first spatial coordinate P1(x1, y1, z1) at which the target object is imaged on the image sensor can then be determined from the second two-dimensional coordinate (x1, y1) and the first vertical distance d1: the X-axis coordinate value of the first spatial coordinate is the X-axis coordinate value x1 of the second two-dimensional coordinate, the Y-axis coordinate value of the first spatial coordinate is the Y-axis coordinate value y1 of the second two-dimensional coordinate, and the Z-axis coordinate value z1 of the first spatial coordinate is the first vertical distance d1.
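  • A minimal sketch of this conversion, using the example sizes above (the function name and the sample value of d1 are assumptions for illustration):

```python
def viewfinder_to_sensor(px, py, vf=(1440, 1080), sensor=(0.261, 0.196)):
    # Scale viewfinder coordinates (pixels, origin at the viewfinder center)
    # onto the image sensor (inches) at the preset per-axis ratio.
    return px * sensor[0] / vf[0], py * sensor[1] / vf[1]

d1 = 0.02                                  # assumed first vertical distance (inches)
x1, y1 = viewfinder_to_sensor(500, 500)    # ~(0.0906, 0.0907), the (0.090, 0.090) of the example
p1 = (x1, y1, d1)                          # first spatial coordinate P1
```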
  • In step 203, the first space vector angle of the first vector between the focus and the first position is calculated.
  • the terminal can calculate the first space vector angle (αx1, αy1, αz1) of the vector from the focus to the first position P1 by using the vector angle formula of the three-dimensional rectangular coordinate system, where the angle between the first vector and the X-axis is αx1, the angle between the first vector and the Y-axis is αy1, and the angle between the first vector and the Z-axis is αz1.
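  • In code, each direction angle follows from the arccosine of the corresponding coordinate over the vector length (a sketch under the example values above; the helper name is illustrative):

```python
import math

def space_vector_angles(p):
    # Direction angles of the vector from the focus (origin) to point p:
    # alpha = arccos(coordinate / |p|) for each axis.
    rho = math.sqrt(sum(c * c for c in p))
    return tuple(math.acos(c / rho) for c in p)

p1 = (0.0906, 0.0907, 0.02)               # P1 from the example above (assumed d1)
ax1, ay1, az1 = space_vector_angles(p1)   # first space vector angle
```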
  • In step 204, it is determined from the acceleration data detected by the acceleration sensor whether the viewfinder has moved.
  • a plurality of sensors with different functions are generally integrated on the terminal; among them, the acceleration sensor can be used to detect the magnitude and direction of the acceleration received by the terminal. After the terminal acquires the acceleration data detected by the acceleration sensor, it can determine from that data whether the terminal has rotated, and thereby whether the viewfinder has moved.
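  • The patent does not spell out the rotation test; one simple heuristic (purely an assumption for illustration) is to flag movement while the measured acceleration deviates from gravity:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def viewfinder_moved(ax, ay, az, tolerance=0.5):
    # Treat the viewfinder as moving while the acceleration magnitude
    # deviates from gravity by more than the tolerance.
    return abs(math.sqrt(ax * ax + ay * ay + az * az) - G) > tolerance
```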
  • In step 205, when the viewfinder moves, the space change vector angle detected by the direction sensor is acquired as the position change data.
  • the terminal can also integrate a direction sensor for detecting the movement angle of the terminal about each coordinate axis in three-dimensional space.
  • the direction sensor can be specifically a gyro sensor.
  • the space change vector angle detected by the direction sensor may be acquired when the movement stops. The space change vector angle is the change of the current space vector angle relative to the space vector angle at the completion of manual focusing: the X-axis change angle Δαx on the X-axis, the Y-axis change angle Δαy on the Y-axis, and the Z-axis change angle Δαz on the Z-axis.
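  • Reading the orientation itself is platform specific; as a hedged sketch, the change angles are simply per-axis differences between two direction-sensor readings (names are illustrative):

```python
def space_change_vector_angle(angles_at_focus, angles_after_move):
    # (dax, day, daz): per-axis difference between the direction-sensor
    # reading when manual focus completed and the reading after movement stops.
    return tuple(b - a for a, b in zip(angles_at_focus, angles_after_move))
```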
  • In step 206, the first straight-line distance from the focus to the first position is calculated based on the first spatial coordinate.
  • the first spatial coordinate P1(x1, y1, z1) was obtained in the foregoing step 202, so the first straight-line distance ρ from the focus to P1 can be calculated from P1(x1, y1, z1) as follows:
    ρ = √(x1² + y1² + z1²)
  • In step 207, the second space vector angle is calculated based on the first space vector angle and the space change vector angle.
  • the second space vector angle may be calculated according to the first space vector angle obtained in step 203 and the space change vector angle obtained in step 205. The second space vector angle is the space vector angle of the second vector between the focus and the second position P2; the second position P2 is the position at which the target object is imaged on the image sensor after the terminal completes autofocus when, after manual focusing, the framing content in the viewfinder has changed but the target object has not moved out of the viewfinder.
  • the second space vector angle is calculated as follows:
    αx2 = αx1 + Δαx
    αy2 = αy1 + Δαy
    αz2 = αz1 + Δαz
  • In step 208, the second spatial coordinate of the second position is calculated according to the first straight-line distance and the second space vector angle.
  • the second spatial coordinate P2(x2, y2, z2) of the second position P2 can be calculated according to the first straight-line distance ρ calculated in step 206 and the second space vector angle (αx2, αy2, αz2) calculated in step 207: ρ is multiplied by the cosine of αx2 to obtain the X-axis coordinate value x2 of P2, ρ is multiplied by the cosine of αy2 to obtain the Y-axis coordinate value y2, and ρ is multiplied by the cosine of αz2 to obtain the Z-axis coordinate value z2. That is, the second spatial coordinate can be calculated according to the following formulas:
    x2 = ρ·cos(αx2)
    y2 = ρ·cos(αy2)
    z2 = ρ·cos(αz2)
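  • Steps 206-208 condense into one sketch (the helper name is an assumption; with a zero change angle it returns P1 unchanged, a handy sanity check):

```python
import math

def second_spatial_coordinate(p1, delta_angles):
    # rho: first straight-line distance from the focus (origin) to P1.
    rho = math.sqrt(sum(c * c for c in p1))
    # First space vector angle, then the per-axis sum with the change angles.
    angles1 = tuple(math.acos(c / rho) for c in p1)
    angles2 = tuple(a + d for a, d in zip(angles1, delta_angles))
    # P2: rho times the direction cosine on each axis.
    return tuple(rho * math.cos(a) for a in angles2)
```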
  • In step 209, the second vertical distance from the focus to the second position is obtained according to the second spatial coordinate, wherein the second vertical distance is the Z-axis coordinate value of the second spatial coordinate.
  • the second spatial coordinate P2(x2, y2, z2) of the second position P2 was obtained in step 208, so the second vertical distance d2 from the focus to the second position P2 can be obtained from the second spatial coordinate; the second vertical distance d2 is the Z-axis coordinate value z2 of the second spatial coordinate.
  • In step 210, the sum of the second vertical distance and the fixed focal length is calculated, and the sum is taken as the adjusted image distance:
    v2 = d2 + f    formula (5)
  • In step 211, the lens group is moved until the distance from the lens group to the image sensor is the adjusted image distance.
  • since the adjusted image distance is calculated from the position at which the target object is imaged on the image sensor after autofocus in the foregoing steps, the terminal can perform autofocus by controlling the movement of the lens group; autofocus is complete once the lens group sits at the adjusted image distance v2 from the image sensor.
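  • The final step reduces to one line of arithmetic plus a hardware move; a hedged sketch (the actuator call is out of scope and the function name is illustrative):

```python
def adjusted_image_distance(p2, f):
    # d2 is the Z-axis value of P2; v2 = d2 + f is formula (5). The lens
    # group is then driven until it sits at v2 from the image sensor.
    d2 = p2[2]
    return d2 + f
```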
  • the present disclosure also provides an embodiment of an autofocus device and a terminal to which it is applied.
  • FIG. 4 is a block diagram of an auto-focusing device according to an exemplary embodiment of the present disclosure.
  • the device includes an acquisition module 410, a detection module 420, a first calculation module 430, and a focus module 440.
  • the acquiring module 410 is configured to acquire first spatial data of the target object after the user clicks on the target object in the viewfinder to complete manual focusing.
  • the detecting module 420 is configured to acquire position change data when detecting that the framing content in the viewfinder changes;
  • the first calculating module 430 is configured to calculate second spatial data of the target object according to the first spatial data and the position change data;
  • the focusing module 440 is configured to perform auto focusing on the target object according to the second spatial data.
  • when the user takes a picture with the terminal and clicks the target object in the viewfinder to complete manual focusing, the first spatial data of the target object is acquired; when a change in the framing content in the viewfinder is detected, the position change data is acquired; and after the second spatial data of the target object is calculated according to the first spatial data and the position change data, the autofocus can be completed according to the second spatial data. Therefore, when the user takes a picture, if the viewfinder moves but the target object does not move out of the viewfinder, the target object can be automatically focused, which avoids a manual focus operation when the framing content changes, simplifies the focusing workflow, increases the focusing speed, and accordingly improves the user's shooting experience.
  • FIG. 5 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
  • based on the embodiment shown in FIG. 4, the acquiring module 410 may include: a first vertical distance calculation sub-module 411, a first spatial coordinate acquisition sub-module 412, and a first space vector angle calculation sub-module 413.
  • the first vertical distance calculation sub-module 411 is configured to calculate a first vertical distance of the focus to the image sensor, wherein the target object is imaged on the image sensor when the manual focus is completed;
  • the first spatial coordinate obtaining sub-module 412 is configured to take the focus as the origin of a three-dimensional rectangular coordinate system and obtain, according to the first vertical distance, the first spatial coordinate of the first position at which the target object is imaged on the image sensor;
  • the first space vector angle calculation sub-module 413 is configured to calculate a first space vector angle of the first vector between the focus and the first position.
  • FIG. 6 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
  • this embodiment is based on the foregoing embodiment shown in FIG. 5; the first vertical distance calculation sub-module 411 can include an image distance obtaining sub-module 4111 and a difference calculating sub-module 4112.
  • the image distance obtaining sub-module 4111 is configured to obtain the image distance when the manual focus is completed;
  • the difference calculation sub-module 4112 is configured to calculate the difference between the image distance and the fixed focal length, using the difference as the first vertical distance from the focus to the image sensor.
  • FIG. 7 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
  • this embodiment is based on the foregoing embodiment shown in FIG. 5; the first spatial coordinate obtaining sub-module 412 may include: a first two-dimensional coordinate acquisition sub-module 4121, a second two-dimensional coordinate obtaining sub-module 4122, and a first spatial coordinate determining sub-module 4123.
  • the first two-dimensional coordinate acquisition sub-module 4121 is configured to acquire the first two-dimensionality of the target object in the plane rectangular coordinate system with the center of the viewfinder as an origin of the plane rectangular coordinate system. a coordinate, wherein a center of the viewfinder is in the same normal direction as the focus;
  • the second two-dimensional coordinate obtaining sub-module 4122 is configured to convert the first two-dimensional coordinates according to a preset ratio to obtain a second two-dimensional coordinate that the target object is imaged on the image sensor;
  • the first spatial coordinate determining sub-module 4123 is configured to determine, according to the second two-dimensional coordinate and the first vertical distance, the first spatial coordinate at which the target object is imaged on the image sensor, where the X-axis coordinate value of the first spatial coordinate is the X-axis coordinate value of the second two-dimensional coordinate, the Y-axis coordinate value of the first spatial coordinate is the Y-axis coordinate value of the second two-dimensional coordinate, and the Z-axis coordinate value of the first spatial coordinate is the first vertical distance.
  • in the foregoing embodiments, by using the image distance obtained when manual focusing is completed and taking the focus as the origin of a three-dimensional rectangular coordinate system, the first spatial coordinate at which the target object is imaged on the image sensor and the first space vector angle are obtained, so that the spatial data after the position of the target object changes can be calculated from the first spatial coordinate and the first space vector angle, thereby facilitating automatic focusing.
  • FIG. 8 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
  • based on the embodiment shown in FIG. 4, the detecting module 420 may include: an acceleration detection sub-module 421 and a change vector angle acquisition sub-module 422.
  • the acceleration detecting sub-module 421 is configured to determine, by the acceleration data detected by the acceleration sensor, whether the viewfinder moves;
  • the change vector angle acquisition sub-module 422 is configured to acquire a spatial change vector angle as the position change data detected by the direction sensor when the viewfinder moves.
  • FIG. 9 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
  • based on the foregoing embodiment shown in FIG. 8, the first calculating module 430 may include: a first straight-line distance calculation sub-module 431, a second space vector angle calculation sub-module 432, and a second spatial coordinate calculation sub-module 433.
  • the first straight-line distance calculation sub-module 431 is configured to calculate the first straight-line distance from the focus to the first position according to the first spatial coordinate;
  • the second space vector angle calculation sub-module 432 is configured to calculate a second space vector angle according to the first space vector angle and the space change vector angle, where the second space vector angle is the space vector angle of the second vector between the focus and the second position, the second position being the position at which the target object is imaged on the image sensor after the autofocus is completed;
  • the second spatial coordinate calculation sub-module 433 is configured to calculate a second spatial coordinate of the second position according to the first linear distance and the second spatial vector angle.
  • FIG. 10 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
  • based on the foregoing embodiment, the focusing module 440 may include: a second vertical distance obtaining sub-module 441, an adjusted image distance calculation sub-module 442, and a lens group moving sub-module 443.
  • the second vertical distance obtaining submodule 441 is configured to obtain the second vertical distance from the focus to the second position according to the second spatial coordinate, wherein the second vertical distance is the Z-axis coordinate value of the second spatial coordinate;
  • the adjusted image distance calculation sub-module 442 is configured to calculate the sum of the second vertical distance and the fixed focal length, and to take the sum as the adjusted image distance;
  • the lens group moving sub-module 443 is configured to move the lens group until the distance of the lens group to the image sensor is the adjusted image distance.
  • in the foregoing embodiments, the acceleration sensor integrated in the terminal determines whether the viewfinder moves; when the viewfinder moves, the direction sensor can detect the space change vector angle produced by the movement, so that the spatial data after the position of the target object changes can be calculated from the space change vector angle, the first spatial coordinate, and the first space vector angle, so as to achieve autofocus.
  • FIG. 11 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
  • this embodiment may be based on the foregoing embodiment shown in FIG. 9 or FIG. 10; the apparatus may further include: a second calculation module 450 and a correction module 460.
  • the second calculation module 450 is configured to calculate a third spatial coordinate of the second location by using an image recognition algorithm
  • the correction module 460 is configured to correct the second spatial coordinate according to the third spatial coordinate to obtain the corrected second spatial coordinate.
  • FIG. 12 is a block diagram of another auto-focusing apparatus according to an exemplary embodiment of the present disclosure.
  • the correction module 460 may include: a correction threshold determination sub-module 461 and a corrected coordinate value calculation sub-module 462.
  • the correction threshold determining sub-module 461 is configured to determine whether a distance between the third spatial coordinate and the second spatial coordinate is less than a preset correction threshold
  • the corrected coordinate value calculation sub-module 462 is configured to, when the distance is less than the correction threshold, calculate the average of the X-axis coordinate values of the third spatial coordinate and the second spatial coordinate as the X-axis coordinate value of the corrected second spatial coordinate, calculate the average of the Y-axis coordinate values of the third spatial coordinate and the second spatial coordinate as the Y-axis coordinate value of the corrected second spatial coordinate, and calculate the Z-axis coordinate value of the corrected second spatial coordinate according to the first straight-line distance, the X-axis coordinate value of the corrected second spatial coordinate, and the Y-axis coordinate value of the corrected second spatial coordinate.
  • the third spatial coordinate calculated by the image recognition algorithm corrects the second spatial coordinate, thereby further improving the accuracy of the autofocus.
  • the present disclosure also provides another auto-focusing device, the device including a processor and a memory for storing processor-executable instructions, wherein the processor is configured to: acquire first spatial data of a target object after the user clicks on the target object in the viewfinder to complete manual focusing; acquire position change data when detecting that the framing content in the viewfinder changes; calculate second spatial data of the target object according to the first spatial data and the position change data; and perform autofocus on the target object according to the second spatial data.
  • for the device embodiments, since they basically correspond to the method embodiments, reference may be made to the corresponding descriptions of the method embodiments.
  • the device embodiments described above are merely illustrative. The modules described as separate components may or may not be physically separate, and the components displayed as modules may or may not be physical modules; that is, they may be located in one place or distributed across multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the objectives of the present disclosure. Those of ordinary skill in the art can understand and implement them without creative effort.
  • FIG. 13 is a schematic structural diagram of an apparatus 1300 for autofocus according to an exemplary embodiment of the present disclosure.
  • device 1300 can be a mobile phone with routing functionality, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
  • device 1300 can include one or more of the following components: processing component 1302, memory 1304, power component 1306, multimedia component 1308, audio component 1310, input/output (I/O) interface 1313, sensor component 1314, And a communication component 1316.
  • Processing component 1302 typically controls the overall operation of device 1300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • Processing component 1302 can include one or more processors 1320 to execute instructions to perform all or part of the steps described above.
  • processing component 1302 can include one or more modules to facilitate interaction between the processing component 1302 and other components.
  • processing component 1302 can include a multimedia module to facilitate interaction between multimedia component 1308 and processing component 1302.
  • Memory 1304 is configured to store various types of data to support operation at device 1300. Examples of such data include instructions for any application or method operating on device 1300, contact data, phone book data, messages, pictures, videos, and the like.
  • Memory 1304 can be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read only memory (EEPROM), erasable Programmable Read Only Memory (EPROM), Programmable Read Only Memory (PROM), Read Only Memory (ROM), Magnetic Memory, Flash Memory, Disk or Optical Disk.
  • Power component 1306 provides power to various components of device 1300.
  • Power component 1306 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 1300.
  • the multimedia component 1308 includes a screen that provides an output interface between the device 1300 and the user.
  • the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may sense not only the boundary of the touch or sliding action, but also the duration and pressure associated with the touch or slide operation.
  • the multimedia component 1308 includes a front camera and/or a rear camera. When the device 1300 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 1310 is configured to output and/or input an audio signal.
  • the audio component 1310 includes a microphone (MIC) that is configured to receive an external audio signal when the device 1300 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode.
  • the received audio signal may be further stored in memory 1304 or transmitted via communication component 1316.
  • the audio component 1310 also includes a speaker for outputting an audio signal.
  • the I/O interface 1313 provides an interface between the processing component 1302 and the peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
  • Sensor assembly 1314 includes one or more sensors for providing device 1300 with a status assessment of various aspects.
  • sensor assembly 1314 can detect an open/closed state of device 1300 and the relative positioning of components (for example, the display and keypad of device 1300), and sensor assembly 1314 can also detect a change in the position of device 1300 or of a component of device 1300, the presence or absence of user contact with device 1300, the orientation or acceleration/deceleration of device 1300, and a change in the temperature of device 1300.
  • Sensor assembly 1314 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 1314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 1314 can also include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, a microwave sensor, or a temperature sensor.
  • Communication component 1316 is configured to facilitate wired or wireless communication between device 1300 and other devices.
  • the device 1300 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
  • communication component 1316 receives broadcast signals or broadcast associated information from an external broadcast management system via a broadcast channel.
  • the communication component 1316 also includes a near field communication (NFC) module to facilitate short range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • in an exemplary embodiment, apparatus 1300 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
  • in an exemplary embodiment, there is also provided a non-transitory computer readable storage medium comprising instructions, such as the memory 1304 comprising instructions executable by the processor 1320 of the apparatus 1300 to perform the above method.
  • for example, the non-transitory computer readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • there is provided a non-transitory computer readable storage medium such that, when instructions in the storage medium are executed by a processor of a terminal, the terminal is enabled to perform an autofocus method, the method comprising: acquiring first spatial data of a target object after the user clicks on the target object in the viewfinder to complete manual focusing; acquiring position change data when detecting that the framing content in the viewfinder changes; calculating second spatial data of the target object according to the first spatial data and the position change data; and performing autofocus according to the second spatial data.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Studio Devices (AREA)
  • Focusing (AREA)
  • Automatic Focus Adjustment (AREA)

Abstract

The present disclosure relates to an autofocus method and apparatus. The method includes: after a user clicks on a target object in a viewfinder to complete manual focusing, acquiring first spatial data of the target object; when a change in the framing content in the viewfinder is detected, acquiring position change data; calculating second spatial data of the target object according to the first spatial data and the position change data; and performing autofocus on the target object according to the second spatial data. With the embodiments of the present disclosure, if the viewfinder moves while the user is taking a picture but the target object does not move out of the viewfinder, the camera can automatically focus on the target object, thereby avoiding a manual focus operation when the framing content changes, simplifying the focusing workflow, increasing the focusing speed, and accordingly improving the user's shooting experience.

Description

Autofocus method and apparatus
This application is based on and claims priority to Chinese patent application No. 2014108321087, filed on December 26, 2014, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the field of communication terminal technologies, and in particular to an autofocus method and apparatus.
BACKGROUND
With the development of smart terminals, users can realize various application functions through smart terminals. One of the most common application functions is the camera function integrated on a smart terminal, through which the user can photograph scenes or people of interest at any time and place. When the camera function is turned on and the user has framed a scene through the viewfinder, manual focusing can be used: by clicking on a framing target in the viewfinder, such as a person's face, focus is achieved on the framing content.
In the related art, if the user moves the viewfinder after manual focusing, the framing content in the viewfinder changes, and the camera automatically refocuses on the center of the viewfinder. However, because the refocused focus deviates from the framing target the user had focused on, the user needs to perform manual focusing again to reset the focus onto the framing target. The focusing operation is therefore cumbersome, resulting in a poor shooting experience for the user.
SUMMARY
The present disclosure provides an autofocus method and apparatus to solve the problem in the related art that a cumbersome manual focusing operation results in a poor shooting experience for the user.
According to a first aspect of the embodiments of the present disclosure, an autofocus method is provided, the method including:
after a user clicks on a target object in a viewfinder to complete manual focusing, acquiring first spatial data of the target object;
when a change in the framing content in the viewfinder is detected, acquiring position change data;
calculating second spatial data of the target object according to the first spatial data and the position change data;
performing autofocus on the target object according to the second spatial data.
Optionally, the acquiring first spatial data of the target object includes:
calculating a first vertical distance from a focus to an image sensor, where the image of the target object is located on the image sensor when the manual focusing is completed;
taking the focus as the origin of a three-dimensional rectangular coordinate system, and obtaining, according to the first vertical distance, a first spatial coordinate of a first position at which the target object is imaged on the image sensor;
calculating a first space vector angle of a first vector between the focus and the first position.
Optionally, the calculating a first vertical distance from a focus to an image sensor includes:
obtaining an image distance at the time the manual focusing is completed;
calculating a difference between the image distance and a fixed focal length, and taking the difference as the first vertical distance from the focus to the image sensor.
Optionally, the obtaining, according to the first vertical distance, a first spatial coordinate of a first position at which the target object is imaged on the image sensor includes:
taking the center of the viewfinder as the origin of a plane rectangular coordinate system, and acquiring a first two-dimensional coordinate of the target object in the plane rectangular coordinate system, where the center of the viewfinder and the focus lie on the same normal line;
converting the first two-dimensional coordinate according to a preset ratio to obtain a second two-dimensional coordinate at which the target object is imaged on the image sensor;
determining, according to the second two-dimensional coordinate and the first vertical distance, the first spatial coordinate at which the target object is imaged on the image sensor, where the X-axis coordinate value of the first spatial coordinate is the X-axis coordinate value of the second two-dimensional coordinate, the Y-axis coordinate value of the first spatial coordinate is the Y-axis coordinate value of the second two-dimensional coordinate, and the Z-axis coordinate value of the first spatial coordinate is the first vertical distance.
Optionally, the acquiring position change data when a change in the framing content in the viewfinder is detected includes:
judging, from acceleration data detected by an acceleration sensor, whether the viewfinder has moved;
when the viewfinder has moved, acquiring a space change vector angle detected by a direction sensor as the position change data.
Optionally, the calculating second spatial data of the target object according to the first spatial data and the position change data includes:
calculating a first straight-line distance from the focus to the first position according to the first spatial coordinate;
calculating a second space vector angle according to the first space vector angle and the space change vector angle, the second space vector angle being the space vector angle of a second vector between the focus and a second position, the second position being the position at which the target object is imaged on the image sensor after the autofocus is completed;
calculating a second spatial coordinate of the second position according to the first straight-line distance and the second space vector angle.
Optionally, the performing autofocus according to the second spatial data includes:
obtaining a second vertical distance from the focus to the second position according to the second spatial coordinate, where the second vertical distance is the Z-axis coordinate value of the second spatial coordinate;
calculating the sum of the second vertical distance and the fixed focal length, and taking the sum as an adjusted image distance;
moving a lens group until the distance from the lens group to the image sensor equals the adjusted image distance.
Optionally, before the performing autofocus according to the second spatial data, the method includes:
calculating a third spatial coordinate of the second position through an image recognition algorithm;
correcting the second spatial coordinate according to the third spatial coordinate to obtain a corrected second spatial coordinate.
Optionally, the correcting the second spatial coordinate according to the third spatial coordinate to obtain a corrected second spatial coordinate includes:
judging whether the distance between the third spatial coordinate and the second spatial coordinate is less than a preset correction threshold;
when it is less than the correction threshold, calculating the average of the X-axis coordinate values of the third spatial coordinate and the second spatial coordinate as the X-axis coordinate value of the corrected second spatial coordinate, and calculating the average of the Y-axis coordinate values of the third spatial coordinate and the second spatial coordinate as the Y-axis coordinate value of the corrected second spatial coordinate;
calculating the Z-axis coordinate value of the corrected second spatial coordinate according to the first straight-line distance, the X-axis coordinate value of the corrected second spatial coordinate, and the Y-axis coordinate value of the corrected second spatial coordinate.
According to a second aspect of the embodiments of the present disclosure, an autofocus apparatus is provided, the apparatus including:
an acquiring module configured to acquire first spatial data of a target object after a user clicks on the target object in a viewfinder to complete manual focusing;
a detecting module configured to acquire position change data when a change in the framing content in the viewfinder is detected;
a first calculating module configured to calculate second spatial data of the target object according to the first spatial data and the position change data;
a focusing module configured to perform autofocus on the target object according to the second spatial data.
Optionally, the acquiring module includes:
a first vertical distance calculation submodule configured to calculate a first vertical distance from a focus to an image sensor, where the image of the target object is located on the image sensor when the manual focusing is completed;
a first spatial coordinate obtaining submodule configured to take the focus as the origin of a three-dimensional rectangular coordinate system and obtain, according to the first vertical distance, a first spatial coordinate of a first position at which the target object is imaged on the image sensor;
a first space vector angle calculation submodule configured to calculate a first space vector angle of a first vector between the focus and the first position.
Optionally, the first vertical distance calculation submodule includes:
an image distance obtaining submodule configured to obtain an image distance at the time the manual focusing is completed;
a difference calculation submodule configured to calculate a difference between the image distance and a fixed focal length, and take the difference as the first vertical distance from the focus to the image sensor.
Optionally, the first spatial coordinate obtaining submodule includes:
a first two-dimensional coordinate acquisition submodule configured to take the center of the viewfinder as the origin of a plane rectangular coordinate system and acquire a first two-dimensional coordinate of the target object in the plane rectangular coordinate system, where the center of the viewfinder and the focus lie on the same normal line;
a second two-dimensional coordinate obtaining submodule configured to convert the first two-dimensional coordinate according to a preset ratio to obtain a second two-dimensional coordinate at which the target object is imaged on the image sensor;
a first spatial coordinate determining submodule configured to determine, according to the second two-dimensional coordinate and the first vertical distance, the first spatial coordinate at which the target object is imaged on the image sensor, where the X-axis coordinate value of the first spatial coordinate is the X-axis coordinate value of the second two-dimensional coordinate, the Y-axis coordinate value of the first spatial coordinate is the Y-axis coordinate value of the second two-dimensional coordinate, and the Z-axis coordinate value of the first spatial coordinate is the first vertical distance.
Optionally, the detecting module includes:
an acceleration detection submodule configured to judge, from acceleration data detected by an acceleration sensor, whether the viewfinder has moved;
a change vector angle acquisition submodule configured to acquire, when the viewfinder has moved, a space change vector angle detected by a direction sensor as the position change data.
Optionally, the first calculating module includes:
a first straight-line distance calculation submodule configured to calculate a first straight-line distance from the focus to the first position according to the first spatial coordinate;
a second space vector angle calculation submodule configured to calculate a second space vector angle according to the first space vector angle and the space change vector angle, the second space vector angle being the space vector angle of a second vector between the focus and a second position, the second position being the position at which the target object is imaged on the image sensor after the autofocus is completed;
a second spatial coordinate calculation submodule configured to calculate a second spatial coordinate of the second position according to the first straight-line distance and the second space vector angle.
Optionally, the focusing module includes:
a second vertical distance obtaining submodule configured to obtain a second vertical distance from the focus to the second position according to the second spatial coordinate, where the second vertical distance is the Z-axis coordinate value of the second spatial coordinate;
an adjusted image distance calculation submodule configured to calculate the sum of the second vertical distance and the fixed focal length, and take the sum as an adjusted image distance;
a lens group moving submodule configured to move a lens group until the distance from the lens group to the image sensor equals the adjusted image distance.
Optionally, the apparatus further includes:
a second calculating module configured to calculate a third spatial coordinate of the second position through an image recognition algorithm;
a correction module configured to correct the second spatial coordinate according to the third spatial coordinate to obtain a corrected second spatial coordinate.
Optionally, the correction module includes:
a correction threshold judging submodule configured to judge whether the distance between the third spatial coordinate and the second spatial coordinate is less than a preset correction threshold;
a corrected coordinate value calculation submodule configured to, when the distance is less than the correction threshold, calculate the average of the X-axis coordinate values of the third spatial coordinate and the second spatial coordinate as the X-axis coordinate value of the corrected second spatial coordinate, calculate the average of the Y-axis coordinate values of the third spatial coordinate and the second spatial coordinate as the Y-axis coordinate value of the corrected second spatial coordinate, and calculate the Z-axis coordinate value of the corrected second spatial coordinate according to the first straight-line distance, the X-axis coordinate value of the corrected second spatial coordinate, and the Y-axis coordinate value of the corrected second spatial coordinate.
According to a third aspect of the embodiments of the present disclosure, another autofocus apparatus is provided, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
after a user clicks a target object in a viewfinder to complete manual focusing, acquire first spatial data of the target object;
when a change in the framing content in the viewfinder is detected, acquire position change data;
calculate second spatial data of the target object according to the first spatial data and the position change data;
automatically focus on the target object according to the second spatial data.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
In the present disclosure, when photographing with a terminal, after a user clicks a target object in the viewfinder to complete manual focusing, first spatial data of the target object is acquired; when a change in the framing content in the viewfinder is detected, position change data is acquired; and once second spatial data of the target object has been calculated from the first spatial data and the position change data, automatic focusing can be completed according to the second spatial data. Therefore, during photographing, if the viewfinder moves but the target object has not moved out of the viewfinder, the camera can automatically focus on the target object, which avoids a manual focusing operation whenever the framing content changes, simplifies the focusing procedure, increases the focusing speed, and accordingly improves the user's shooting experience.
In the present disclosure, when acquiring the first spatial data of the target object, the first spatial coordinate and the first space vector angle of the target object as imaged on the image sensor are obtained from the image distance after the manual focusing is completed and by taking the focal point as the origin of a three-dimensional rectangular coordinate system, so that this first spatial coordinate and first space vector angle can be used to calculate the spatial data of the target object after its position change, thereby facilitating the automatic focusing.
The present disclosure can also use the acceleration sensor integrated in the terminal to judge whether the viewfinder has moved and, when it has, detect through the direction sensor the spatial change vector angle produced by the movement, so that the spatial data of the target object after its position change can be calculated from the spatial change vector angle, the first spatial coordinate, and the first space vector angle, in order to realize the automatic focusing.
The present disclosure can also correct the second spatial coordinate, before the automatic focusing based on it, with a third spatial coordinate calculated through an image recognition algorithm, which can further improve the accuracy of the automatic focusing.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
FIG. 1 is a flowchart of an autofocus method according to an exemplary embodiment of the present disclosure.
FIG. 2 is a flowchart of another autofocus method according to an exemplary embodiment of the present disclosure.
FIG. 3 is a schematic diagram of imaging after a terminal completes focusing, according to an exemplary embodiment of the present disclosure.
FIG. 4 is a block diagram of an autofocus apparatus according to an exemplary embodiment of the present disclosure.
FIG. 5 is a block diagram of another autofocus apparatus according to an exemplary embodiment of the present disclosure.
FIG. 6 is a block diagram of another autofocus apparatus according to an exemplary embodiment of the present disclosure.
FIG. 7 is a block diagram of another autofocus apparatus according to an exemplary embodiment of the present disclosure.
FIG. 8 is a block diagram of another autofocus apparatus according to an exemplary embodiment of the present disclosure.
FIG. 9 is a block diagram of another autofocus apparatus according to an exemplary embodiment of the present disclosure.
FIG. 10 is a block diagram of another autofocus apparatus according to an exemplary embodiment of the present disclosure.
FIG. 11 is a block diagram of another autofocus apparatus according to an exemplary embodiment of the present disclosure.
FIG. 12 is a block diagram of another autofocus apparatus according to an exemplary embodiment of the present disclosure.
FIG. 13 is a schematic structural diagram of an apparatus for autofocus according to an exemplary embodiment of the present disclosure.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The terms used in the present disclosure are for the purpose of describing particular embodiments only and are not intended to limit the present disclosure. The singular forms "a/an", "said", and "the" used in the present disclosure and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, and so on may be used in the present disclosure to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present disclosure, first information may also be called second information, and similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
As shown in FIG. 1, which is a flowchart of an autofocus method according to an exemplary embodiment, the method may be used in a terminal and includes the following steps:
In step 101, after a user clicks a target object in the viewfinder to complete manual focusing, first spatial data of the target object is acquired.
The terminal in the embodiments of the present disclosure mainly refers to various smart terminals integrating a camera function, for example, smartphones, tablet computers, PDAs (Personal Digital Assistants), and the like. The lens assembly used to implement the camera function on a smart terminal usually has a fixed focal length (f), i.e., optical zoom is not possible. During focusing, the terminal moves the lens assembly to change the distance between the lens assembly and the image sensor used for imaging, so that this distance equals the image distance (v); that is, the focal plane on which the focused image lies coincides with the vertical plane of the image sensor. The image is then sharp and focusing is completed.
In the embodiments of the present disclosure, after the user turns on the camera function of the terminal, the user can adjust the picture to be taken by viewing the framing content in the viewfinder, and can complete manual focusing by clicking a target object in the viewfinder. After the manual focusing is completed, the image of the target object is located on the image sensor and is sharp. So that the terminal can automatically focus on the target object when the user subsequently moves the viewfinder to reframe — that is, when the framing content changes and the position of the target object in the viewfinder changes but the object has not moved out of the viewfinder — first spatial data of the target object is acquired after the manual focusing is completed. This first spatial data may include a first spatial coordinate and a first space vector angle, so that the subsequent automatic focusing process can be completed using it.
When acquiring the first spatial data of the target object, the terminal may first calculate the first vertical distance from the focal point to the image sensor and establish a three-dimensional rectangular coordinate system with the focal point as the origin. Since manual focusing has been completed, the image of the target object is located on the image sensor; let the image lie at a first position on the image sensor, and acquire the first spatial coordinate of this first position in the three-dimensional rectangular coordinate system. The first spatial coordinate consists of an X-axis value, a Y-axis value, and a Z-axis value, where the Z-axis value is the aforementioned first vertical distance. Then, based on this coordinate system, the first space vector angle of the first vector between the focal point and the first position can be calculated using the vector angle formulas of the related art. The first space vector angle includes the X-axis angle between the first vector and the X axis, the Y-axis angle between the first vector and the Y axis, and the Z-axis angle between the first vector and the Z axis.
In step 102, when a change in the framing content in the viewfinder is detected, position change data is acquired.
A smart terminal usually integrates a variety of sensors with different functions, which may include an acceleration sensor and a direction sensor. The acceleration sensor detects the magnitude and direction of the acceleration experienced by the terminal, from which it can be judged whether the terminal has rotated; the direction sensor detects the movement angle of the terminal about each coordinate axis in three-dimensional space. For example, the direction sensor may specifically be a gyroscope sensor.
In this embodiment, after the terminal acquires the acceleration data detected by the acceleration sensor, it can determine from this data whether the terminal has rotated, and thus judge whether the viewfinder has moved. When the viewfinder is judged to have moved, and once the movement stops, the spatial change vector angle detected by the direction sensor can be acquired. The spatial change vector angle consists of the X-axis change angle, the Y-axis change angle, and the Z-axis change angle of the current space vector angle relative to the space vector angle at the time the manual focusing was completed.
In step 103, second spatial data of the target object is calculated according to the first spatial data and the position change data.
Step 101 yielded the first space vector angle of the first vector between the focal point and the first position, and step 102 yielded the spatial change vector angle; in this step, a second space vector angle can therefore be calculated from the two. The second space vector angle is the space vector angle of the second vector between the focal point and a second position, the second position being the position at which the target object is imaged on the image sensor after the automatic focusing is completed. The X-axis angle of the second space vector angle is the sum of the X-axis angle of the first space vector angle and the X-axis change angle of the spatial change vector angle; the Y-axis angle of the second space vector angle is the sum of the Y-axis angle of the first space vector angle and the Y-axis change angle; and the Z-axis angle of the second space vector angle is the sum of the Z-axis angle of the first space vector angle and the Z-axis change angle. Step 101 also yielded the first spatial coordinate of the first position in the three-dimensional rectangular coordinate system, so in this step the first straight-line distance from the focal point to the first position can be calculated from the first spatial coordinate, and the second spatial coordinate of the second position can then be calculated from this first straight-line distance and the second space vector angle obtained above: the first straight-line distance multiplied by the cosine of the X-axis angle of the second space vector angle gives the X-axis value of the second spatial coordinate, the first straight-line distance multiplied by the cosine of the Y-axis angle gives the Y-axis value, and the first straight-line distance multiplied by the cosine of the Z-axis angle gives the Z-axis value.
In step 104, the target object is automatically focused according to the second spatial data.
After the second spatial coordinate is obtained in step 103, the second vertical distance from the focal point to the second position can be obtained from it; this second vertical distance is the Z-axis value of the second spatial coordinate. The sum of the second vertical distance and the fixed focal length is calculated and taken as the adjusted image distance. The terminal then moves the lens assembly until the distance from the lens assembly to the image sensor equals the adjusted image distance, at which point the image of the target object falls on the image sensor, the image is sharp, and the automatic focusing is completed.
In the embodiments of the present disclosure, before the automatic focusing according to the second spatial data, the second spatial data may also be corrected: when the second spatial data is the second spatial coordinate of the second position, a third spatial coordinate of the second position may be calculated through an image recognition algorithm, and the second spatial coordinate may be corrected according to the third spatial coordinate to obtain a corrected second spatial coordinate.
In the specific correction process, the terminal may judge whether the distance between the third spatial coordinate and the second spatial coordinate is smaller than a preset correction threshold. When it is smaller than the correction threshold, the average of the X-axis values of the third and second spatial coordinates is calculated as the X-axis value of the corrected second spatial coordinate, and the average of the Y-axis values is calculated as the Y-axis value of the corrected second spatial coordinate. The Z-axis value of the corrected second spatial coordinate can then be calculated from the first straight-line distance, the corrected X-axis value, and the corrected Y-axis value; this corrected Z-axis value is the aforementioned second vertical distance. For example, if the first straight-line distance is L, the corrected X-axis value is a, the corrected Y-axis value is b, and the corrected Z-axis value is c, these values satisfy the formula L² = a² + b² + c².
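For illustration only, the following Python sketch implements this correction under stated assumptions: the correction threshold and all coordinate values are hypothetical, and a guard is added for the case where noisy estimates would make the radicand negative.

```python
import math

# Illustrative sketch of the correction step. The threshold and coordinates are
# hypothetical; the last lines recover the corrected Z-axis value c from the
# invariant straight-line distance L via L² = a² + b² + c².

CORRECTION_THRESHOLD = 0.02   # assumed, in the same length unit as the coordinates

def corrected_p2(p2, p3, straight_line_distance):
    if math.dist(p2, p3) >= CORRECTION_THRESHOLD:
        return p2                      # estimates disagree too much: keep P2 as is
    a = (p2[0] + p3[0]) / 2            # corrected X: average of the two X values
    b = (p2[1] + p3[1]) / 2            # corrected Y: average of the two Y values
    c2 = straight_line_distance ** 2 - a * a - b * b
    c = math.sqrt(max(c2, 0.0))        # guard against a negative radicand from noise
    return (a, b, c)

print(corrected_p2((0.085, 0.093, 0.091), (0.083, 0.095, 0.090), 0.1559))
```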
It can be seen from the above embodiment that, when photographing with a terminal, after a user clicks a target object in the viewfinder to complete manual focusing, first spatial data of the target object is acquired; when a change in the framing content in the viewfinder is detected, position change data is acquired; and once second spatial data of the target object has been calculated from the first spatial data and the position change data, the automatic focusing can be completed according to this second spatial data. Therefore, during photographing, if the viewfinder moves but the target object has not moved out of the viewfinder, the camera can automatically focus on the target object, avoiding a manual focusing operation whenever the framing content changes, simplifying the focusing procedure, increasing the focusing speed, and accordingly improving the user's shooting experience. Further, this embodiment can also correct the second spatial coordinate, before the automatic focusing based on it, with a third spatial coordinate calculated through an image recognition algorithm, which can further improve the accuracy of the automatic focusing.
As shown in FIG. 2, which is a flowchart of another autofocus method according to an exemplary embodiment, the method may be used in a terminal and includes the following steps:
In step 201, after a user clicks a target object in the viewfinder to complete manual focusing, the first vertical distance from the focal point to the image sensor is calculated.
The terminal in the embodiments of the present disclosure mainly refers to various smart terminals integrating a camera function. In the imaging process of a camera, the focal length (f), the object distance (u), and the image distance (v) satisfy the Gaussian imaging formula, where the focal length is the distance from the lens assembly to the focal point, the object distance is the distance from the vertical plane of the photographed object to the lens assembly, and the image distance is the distance from the image of the photographed object to the lens assembly. A terminal usually cannot perform optical zoom and its lens assembly has a fixed focal length f, so during focusing the terminal can move the lens assembly to change the distance between the lens assembly and the image sensor used for imaging. When focusing is completed, this distance equals v, and the vertical distance from the focal point to the image sensor is d. As shown in FIG. 3, FIG. 3 is a schematic diagram of imaging after a terminal completes focusing, according to an exemplary embodiment.
In this embodiment, after the user turns on the camera function of the terminal, the user can adjust the picture to be taken by viewing the framing content in the viewfinder, and can perform manual focusing by clicking a target object in the viewfinder, as shown in FIG. 3. After the manual focusing is completed, the image of the target object is located on the image sensor and is sharp. Let the image distance at this time be v1 and the fixed focal length be f; the first vertical distance from the focal point to the image sensor is then d1, calculated as follows:
d1 = v1 - f    Formula (1)
In step 202, taking the focal point as the origin of a three-dimensional rectangular coordinate system, the first spatial coordinate of the first position at which the target object is imaged on the image sensor is obtained according to the first vertical distance.
In this step, the center of the viewfinder may first be taken as the origin of a plane rectangular coordinate system; the center of the viewfinder and the focal point lie along the same normal direction. The first two-dimensional coordinate of the target object in this plane rectangular coordinate system is acquired, say P(x, y), in units of pixels (px). Then a three-dimensional rectangular coordinate system is established with the focal point as the origin. Since manual focusing has been completed, the image of the target object is located on the image sensor; let the image lie at a first position P1 on the image sensor, whose first spatial coordinate in the three-dimensional rectangular coordinate system is to be acquired. According to the size of the viewfinder and the size of the image sensor, the first two-dimensional coordinate P(x, y) can be converted at a preset scale to obtain the second two-dimensional coordinate of the target object as imaged on the image sensor, say (x1, y1). For example, when shooting a photograph with a 4:3 aspect ratio, if the pixel size of the viewfinder is 1440×1080 and the image sensor measures 0.261 inch by 0.196 inch, then a first two-dimensional coordinate P(500 px, 500 px) of the target object in the viewfinder corresponds to a second two-dimensional coordinate of (0.090 inch, 0.090 inch) in the three-dimensional rectangular coordinate system.
Subsequently, from the second two-dimensional coordinate (x1, y1) and the first vertical distance d1, the first spatial coordinate P1(x1, y1, z1) of the target object as imaged on the image sensor can be determined, where the X-axis value of the first spatial coordinate is the X-axis value x1 of the second two-dimensional coordinate, the Y-axis value is the Y-axis value y1 of the second two-dimensional coordinate, and the Z-axis value is the first vertical distance, z1 = d1.
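For illustration only, a minimal Python sketch of steps 201 and 202, using the numbers of the worked example above; the focal length and image distance are hypothetical values, not parameters fixed by the disclosure.

```python
# Illustrative sketch of steps 201-202: derive the first spatial coordinate
# P1(x1, y1, z1) of the target object's image on the sensor. Numeric values are
# assumptions modeled on the worked example in the text.

FIXED_FOCAL_LENGTH = 0.17                        # f, in inches (assumed)
VIEWFINDER_W_PX, VIEWFINDER_H_PX = 1440, 1080    # viewfinder pixel size (example above)
SENSOR_W_IN, SENSOR_H_IN = 0.261, 0.196          # image sensor size (example above)

def first_spatial_coordinate(tap_x_px, tap_y_px, image_distance_v1):
    """Map a tap position, measured in px from the viewfinder centre, to P1."""
    d1 = image_distance_v1 - FIXED_FOCAL_LENGTH          # Formula (1): d1 = v1 - f
    x1 = tap_x_px * (SENSOR_W_IN / VIEWFINDER_W_PX)      # preset-scale conversion, X
    y1 = tap_y_px * (SENSOR_H_IN / VIEWFINDER_H_PX)      # preset-scale conversion, Y
    return (x1, y1, d1)

# Reproduces the worked example: P(500 px, 500 px) -> about (0.090 in, 0.090 in).
print(first_spatial_coordinate(500, 500, image_distance_v1=0.26))
```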
In step 203, the first space vector angle of the first vector between the focal point and the first position is calculated.
After manual focusing is completed and the first spatial coordinate of the first position has been obtained, the terminal can calculate, through the vector angle formulas of the three-dimensional rectangular coordinate system, the first space vector angle (∠x1, ∠y1, ∠z1) from the focal point to the first position P1, where the X-axis angle between the first vector and the X axis is ∠x1, the Y-axis angle between the first vector and the Y axis is ∠y1, and the Z-axis angle between the first vector and the Z axis is ∠z1.
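For illustration only, a minimal Python sketch of this step, assuming the standard direction-cosine relation (cosine of the angle equals component divided by length) as the "vector angle formula" referred to here.

```python
import math

# Illustrative sketch of step 203: the angles between the vector from the focal
# point (origin) to P1 and the three coordinate axes, via direction cosines.
# The input coordinate is the hypothetical P1 from the previous sketch.

def space_vector_angle(p):
    """Angles (degrees) between the vector from the origin to p and the X, Y, Z axes."""
    rho = math.sqrt(sum(c * c for c in p))   # straight-line distance from the focal point
    return tuple(math.degrees(math.acos(c / rho)) for c in p)

print(space_vector_angle((0.090, 0.090, 0.090)))   # each angle ≈ 54.7° in this symmetric case
```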
In step 204, whether the viewfinder has moved is judged from the acceleration data detected by the acceleration sensor.
A terminal usually integrates a variety of sensors with different functions, among which the acceleration sensor can be used to detect the magnitude and direction of the acceleration experienced by the terminal. In this embodiment, after manual focusing is completed, when the terminal acquires the acceleration data detected by the acceleration sensor, it can determine from this data whether the terminal has rotated, and thus judge whether the viewfinder has moved.
In step 205, when the viewfinder has moved, the spatial change vector angle detected by the direction sensor is acquired as the position change data.
Besides the acceleration sensor, the terminal may also integrate a direction sensor for detecting the movement angle of the terminal about each coordinate axis in three-dimensional space; for example, the direction sensor may specifically be a gyroscope sensor.
When the terminal judges in step 204, from the acceleration data, that the viewfinder has moved, then once the movement stops, the spatial change vector angle detected by the direction sensor can be acquired. The spatial change vector angle consists of the X-axis change angle ∠Δx, the Y-axis change angle ∠Δy, and the Z-axis change angle ∠Δz of the current space vector angle relative to the space vector angle at the time the manual focusing was completed.
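For illustration only, a minimal Python sketch of steps 204 and 205 that operates on plain numeric readings; the disclosure does not specify a sensor API, so the threshold, the sample window, and both orientation tuples below are hypothetical.

```python
# Illustrative sketch of steps 204-205 on simulated sensor readings
# (a real device would obtain these from its acceleration and direction
# sensors; no platform API is implied here).

ACCEL_THRESHOLD = 0.5   # assumed threshold (m/s^2) above which "the viewfinder moved"

def viewfinder_moved(accel_magnitudes):
    """Judge movement from a short window of acceleration magnitudes."""
    return max(accel_magnitudes) - min(accel_magnitudes) > ACCEL_THRESHOLD

def change_vector_angle(angles_at_focus, angles_now):
    """Per-axis change angles (∠Δx, ∠Δy, ∠Δz) between two orientation readings, in degrees."""
    return tuple(now - ref for ref, now in zip(angles_at_focus, angles_now))

if viewfinder_moved([9.8, 9.9, 10.6, 9.8]):
    print(change_vector_angle((54.7, 54.7, 54.7), (50.2, 57.1, 53.0)))   # ≈ (-4.5, 2.4, -1.7)
```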
In step 206, the first straight-line distance from the focal point to the first position is calculated according to the first spatial coordinate.
Step 202 above yielded the first spatial coordinate P1(x1, y1, z1). In this step, the first straight-line distance ρ from the focal point to P1 can be calculated from P1(x1, y1, z1) as follows:
ρ² = x1² + y1² + z1²    Formula (2)
In step 207, the second space vector angle is calculated according to the first space vector angle and the spatial change vector angle.
In this step, the second space vector angle can be calculated from the first space vector angle obtained in step 203 and the spatial change vector angle obtained in step 205. The second space vector angle is the space vector angle of the second vector between the focal point and a second position P2, where P2 is the position at which the image of the target object should lie on the image sensor once the terminal completes automatic focusing after the manual focusing has been completed, the framing content in the viewfinder has changed, and the target object has not moved out of the viewfinder.
Letting the X-axis angle between the second vector and the X axis be ∠x2, the Y-axis angle be ∠y2, and the Z-axis angle be ∠z2, the second space vector angle is calculated as follows:
∠x2 = ∠x1 + ∠Δx
∠y2 = ∠y1 + ∠Δy
∠z2 = ∠z1 + ∠Δz    Formula (3)
In step 208, the second spatial coordinate of the second position is calculated according to the first straight-line distance and the second space vector angle.
In this step, the second spatial coordinate P2(x2, y2, z2) of the second position P2 can be calculated from the first straight-line distance ρ obtained in step 206 and the second space vector angle (∠x2, ∠y2, ∠z2) obtained in step 207: ρ multiplied by the cosine of ∠x2 gives the X-axis value x2 of P2, ρ multiplied by the cosine of ∠y2 gives the Y-axis value y2, and ρ multiplied by the cosine of ∠z2 gives the Z-axis value z2. That is, the second spatial coordinate is calculated as follows:
x2 = ρ × cos∠x2
y2 = ρ × cos∠y2
z2 = ρ × cos∠z2    Formula (4)
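For illustration only, a minimal Python sketch combining steps 206 through 208 (Formulas (2) to (4)), under the same assumptions and hypothetical values as the sketches above.

```python
import math

# Illustrative sketch of steps 206-208: from P1 and the change angles, compute
# the second spatial coordinate P2. Angles are handled in degrees.

def second_spatial_coordinate(p1, change_angles_deg):
    rho = math.sqrt(sum(c * c for c in p1))                         # Formula (2)
    first_angles = [math.degrees(math.acos(c / rho)) for c in p1]   # step 203
    # Formula (3): per axis, second angle = first angle + change angle.
    second_angles = [a + d for a, d in zip(first_angles, change_angles_deg)]
    # Formula (4): the straight-line distance is unchanged; project it onto each axis.
    return tuple(rho * math.cos(math.radians(a)) for a in second_angles)

p2 = second_spatial_coordinate((0.090, 0.090, 0.090), (-4.5, 2.4, -1.7))
print(p2)   # the third component z2 is the second vertical distance d2
```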
In step 209, the second vertical distance from the focal point to the second position is obtained according to the second spatial coordinate, the second vertical distance being the Z-axis value of the second spatial coordinate.
Step 208 yielded the second spatial coordinate P2(x2, y2, z2) of the second position P2, so the second vertical distance d2 from the focal point to P2 can be obtained from it; this second vertical distance d2 is the Z-axis value z2 of the second spatial coordinate.
It should be noted that, as this step shows, only the second vertical distance d2 (i.e., z2) is needed to achieve the automatic focusing after the manual focusing. Therefore, step 208 may compute only z2 = ρ × cos∠z2 of Formula (4); correspondingly, the direction sensor in step 205 may obtain only ∠Δz, and step 207 may compute only ∠z2 = ∠z1 + ∠Δz, which can further save the terminal's computing resources.
In step 210, the sum of the second vertical distance and the fixed focal length is calculated and taken as the adjusted image distance.
In this step, letting the adjusted image distance be v2, v2 is calculated as follows:
v2 = d2 + f    Formula (5)
In step 211, the lens assembly is moved until the distance from the lens assembly to the image sensor equals the adjusted image distance.
Since the distance between the lens assembly and the image sensor used for imaging equals the image distance when focusing is completed, once the adjusted image distance has been calculated in the preceding steps from the position at which the target object is imaged on the image sensor after the automatic focusing, the terminal can perform the automatic focusing by controlling the movement of the lens assembly; when the lens assembly has moved to a distance from the image sensor equal to the adjusted image distance v2, the automatic focusing is completed.
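For illustration only, a minimal Python sketch of steps 209 through 211; driving real focus hardware is platform-specific, so the lens-group actuator is stubbed out, and the focal length and P2 values remain the hypothetical ones used above.

```python
# Illustrative sketch of steps 209-211 (Formula (5)): turn the second vertical
# distance into the adjusted image distance and drive a stubbed lens actuator.

FIXED_FOCAL_LENGTH = 0.17   # f (assumed)

def autofocus(p2, move_lens_group_to):
    d2 = p2[2]                       # second vertical distance = Z-axis value of P2
    v2 = d2 + FIXED_FOCAL_LENGTH     # Formula (5): adjusted image distance
    move_lens_group_to(v2)           # move until lens-to-sensor distance equals v2

autofocus((0.085, 0.093, 0.091),
          lambda v2: print(f"move lens group: lens-to-sensor distance -> {v2:.3f}"))
```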
It can be seen from the above embodiment that, when photographing with a terminal, after a user clicks a target object in the viewfinder to complete manual focusing, and when a change in the framing content in the viewfinder is detected, the image distance after automatic focusing can be obtained by calculating the spatial data of the target object after its position change, and the lens assembly can then be moved under control to satisfy this image distance, thereby completing the automatic focusing. Therefore, during photographing, if the viewfinder moves but the target object has not moved out of the viewfinder, the camera can automatically focus on the target object, avoiding a manual focusing operation whenever the framing content changes, simplifying the focusing procedure, increasing the focusing speed, and accordingly improving the user's shooting experience.
Corresponding to the foregoing embodiments of the autofocus method, the present disclosure also provides embodiments of an autofocus apparatus and of the terminal to which it is applied.
As shown in FIG. 4, which is a block diagram of an autofocus apparatus according to an exemplary embodiment of the present disclosure, the apparatus includes: an acquiring module 410, a detecting module 420, a first calculating module 430, and a focusing module 440.
The acquiring module 410 is configured to, after a user clicks a target object in the viewfinder to complete manual focusing, acquire first spatial data of the target object;
the detecting module 420 is configured to, when a change in the framing content in the viewfinder is detected, acquire position change data;
the first calculating module 430 is configured to calculate second spatial data of the target object according to the first spatial data and the position change data;
the focusing module 440 is configured to automatically focus on the target object according to the second spatial data.
It can be seen from the above embodiment that, when photographing with a terminal, after a user clicks a target object in the viewfinder to complete manual focusing, first spatial data of the target object is acquired; when a change in the framing content in the viewfinder is detected, position change data is acquired; and once second spatial data of the target object has been calculated from the first spatial data and the position change data, the automatic focusing can be completed according to this second spatial data. Therefore, during photographing, if the viewfinder moves but the target object has not moved out of the viewfinder, the camera can automatically focus on the target object, avoiding a manual focusing operation whenever the framing content changes, simplifying the focusing procedure, increasing the focusing speed, and accordingly improving the user's shooting experience.
As shown in FIG. 5, which is a block diagram of another autofocus apparatus according to an exemplary embodiment of the present disclosure, on the basis of the embodiment shown in FIG. 4, the acquiring module 410 may include: a first vertical distance calculating submodule 411, a first spatial coordinate obtaining submodule 412, and a first space vector angle calculating submodule 413.
The first vertical distance calculating submodule 411 is configured to calculate a first vertical distance from the focal point to the image sensor, where the image of the target object is located on the image sensor when the manual focusing is completed;
the first spatial coordinate obtaining submodule 412 is configured to take the focal point as the origin of a three-dimensional rectangular coordinate system and obtain, according to the first vertical distance, a first spatial coordinate of a first position at which the target object is imaged on the image sensor;
the first space vector angle calculating submodule 413 is configured to calculate a first space vector angle of a first vector between the focal point and the first position.
As shown in FIG. 6, which is a block diagram of another autofocus apparatus according to an exemplary embodiment of the present disclosure, on the basis of the embodiment shown in FIG. 5, the first vertical distance calculating submodule 411 may include: an image distance obtaining submodule 4111 and a difference calculating submodule 4112.
The image distance obtaining submodule 4111 is configured to obtain the image distance at the time the manual focusing is completed;
the difference calculating submodule 4112 is configured to calculate the difference between the image distance and the fixed focal length and take the difference as the first vertical distance from the focal point to the image sensor.
As shown in FIG. 7, which is a block diagram of another autofocus apparatus according to an exemplary embodiment of the present disclosure, on the basis of the embodiment shown in FIG. 5, the first spatial coordinate obtaining submodule 412 may include: a first two-dimensional coordinate acquiring submodule 4121, a second two-dimensional coordinate obtaining submodule 4122, and a first spatial coordinate determining submodule 4123.
The first two-dimensional coordinate acquiring submodule 4121 is configured to take the center of the viewfinder as the origin of a plane rectangular coordinate system and acquire a first two-dimensional coordinate of the target object in the plane rectangular coordinate system, where the center of the viewfinder and the focal point lie along the same normal direction;
the second two-dimensional coordinate obtaining submodule 4122 is configured to convert the first two-dimensional coordinate at a preset scale to obtain a second two-dimensional coordinate of the target object as imaged on the image sensor;
the first spatial coordinate determining submodule 4123 is configured to determine, according to the second two-dimensional coordinate and the first vertical distance, the first spatial coordinate of the target object as imaged on the image sensor, where the X-axis value of the first spatial coordinate is the X-axis value of the second two-dimensional coordinate, the Y-axis value of the first spatial coordinate is the Y-axis value of the second two-dimensional coordinate, and the Z-axis value of the first spatial coordinate is the first vertical distance.
It can be seen from the above embodiments that, when acquiring the first spatial data of the target object, the first spatial coordinate and the first space vector angle of the target object as imaged on the image sensor are obtained from the image distance after the manual focusing is completed and by taking the focal point as the origin of a three-dimensional rectangular coordinate system, so that the spatial data of the target object after its position change can be calculated from them, thereby facilitating the automatic focusing.
As shown in FIG. 8, which is a block diagram of another autofocus apparatus according to an exemplary embodiment of the present disclosure, on the basis of the embodiment shown in FIG. 5, the detecting module 420 may include: an acceleration detecting submodule 421 and a change vector angle acquiring submodule 422.
The acceleration detecting submodule 421 is configured to judge, from acceleration data detected by an acceleration sensor, whether the viewfinder has moved;
the change vector angle acquiring submodule 422 is configured to, when the viewfinder has moved, acquire a spatial change vector angle, detected by a direction sensor, as the position change data.
As shown in FIG. 9, which is a block diagram of another autofocus apparatus according to an exemplary embodiment of the present disclosure, on the basis of the embodiment shown in FIG. 8, the first calculating module 430 may include: a first straight-line distance calculating submodule 431, a second space vector angle calculating submodule 432, and a second spatial coordinate calculating submodule 433.
The first straight-line distance calculating submodule 431 is configured to calculate a first straight-line distance from the focal point to the first position according to the first spatial coordinate;
the second space vector angle calculating submodule 432 is configured to calculate a second space vector angle according to the first space vector angle and the spatial change vector angle, the second space vector angle being the space vector angle of a second vector between the focal point and a second position, and the second position being the position at which the target object is imaged on the image sensor after the automatic focusing is completed;
the second spatial coordinate calculating submodule 433 is configured to calculate a second spatial coordinate of the second position according to the first straight-line distance and the second space vector angle.
As shown in FIG. 10, which is a block diagram of another autofocus apparatus according to an exemplary embodiment of the present disclosure, on the basis of the embodiment shown in FIG. 9, the focusing module 440 may include: a second vertical distance obtaining submodule 441, an adjusted image distance calculating submodule 442, and a lens assembly moving submodule 443.
The second vertical distance obtaining submodule 441 is configured to obtain, according to the second spatial coordinate, a second vertical distance from the focal point to the second position, where the second vertical distance is the Z-axis value of the second spatial coordinate;
the adjusted image distance calculating submodule 442 is configured to calculate the sum of the second vertical distance and the fixed focal length and take the sum as the adjusted image distance;
the lens assembly moving submodule 443 is configured to move the lens assembly until the distance from the lens assembly to the image sensor equals the adjusted image distance.
It can be seen from the above embodiments that the acceleration sensor integrated in the terminal is used to judge whether the viewfinder has moved and, when it has, the spatial change vector angle produced by the movement can be detected by the direction sensor, so that the spatial data of the target object after its position change can be calculated from the spatial change vector angle, the first spatial coordinate, and the first space vector angle, in order to realize the automatic focusing.
As shown in FIG. 11, which is a block diagram of another autofocus apparatus according to an exemplary embodiment of the present disclosure, on the basis of the embodiments shown in FIG. 9 or FIG. 10, the apparatus may further include: a second calculating module 450 and a correction module 460.
The second calculating module 450 is configured to calculate a third spatial coordinate of the second position through an image recognition algorithm;
the correction module 460 is configured to correct the second spatial coordinate according to the third spatial coordinate to obtain a corrected second spatial coordinate.
As shown in FIG. 12, which is a block diagram of another autofocus apparatus according to an exemplary embodiment of the present disclosure, on the basis of the embodiment shown in FIG. 11, the correction module 460 may include: a correction threshold judging submodule 461 and a correction coordinate value calculating submodule 462.
The correction threshold judging submodule 461 is configured to judge whether the distance between the third spatial coordinate and the second spatial coordinate is smaller than a preset correction threshold;
the correction coordinate value calculating submodule 462 is configured to, when it is smaller than the correction threshold, calculate the average of the X-axis values of the third spatial coordinate and the second spatial coordinate as the X-axis value of the corrected second spatial coordinate, calculate the average of the Y-axis values of the third spatial coordinate and the second spatial coordinate as the Y-axis value of the corrected second spatial coordinate, and calculate the Z-axis value of the corrected second spatial coordinate according to the first straight-line distance, the X-axis value of the corrected second spatial coordinate, and the Y-axis value of the corrected second spatial coordinate.
It can be seen from the above embodiments that, before the automatic focusing based on the second spatial coordinate, the second spatial coordinate is corrected with a third spatial coordinate calculated through an image recognition algorithm, which can further improve the accuracy of the automatic focusing.
Correspondingly, the present disclosure also provides another autofocus apparatus, the apparatus including a processor and a memory for storing processor-executable instructions, wherein the processor is configured to:
after a user clicks a target object in a viewfinder to complete manual focusing, acquire first spatial data of the target object;
when a change in the framing content in the viewfinder is detected, acquire position change data;
calculate second spatial data of the target object according to the first spatial data and the position change data;
automatically focus according to the second spatial data.
For the implementation process of the functions and roles of each module in the above apparatus, refer to the implementation process of the corresponding steps in the above method, which is not repeated here.
Since the apparatus embodiments basically correspond to the method embodiments, for relevant parts reference may be made to the description of the method embodiments. The apparatus embodiments described above are merely illustrative; the modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules, i.e., they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present disclosure, which can be understood and implemented by those of ordinary skill in the art without creative effort.
As shown in FIG. 13, FIG. 13 is a schematic structural diagram of an apparatus 1300 for autofocus according to an exemplary embodiment of the present disclosure. For example, the apparatus 1300 may be a mobile phone with a routing function, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, exercise equipment, a personal digital assistant, and the like.
Referring to FIG. 13, the apparatus 1300 may include one or more of the following components: a processing component 1302, a memory 1304, a power component 1306, a multimedia component 1308, an audio component 1310, an input/output (I/O) interface 1313, a sensor component 1314, and a communication component 1316.
The processing component 1302 typically controls overall operations of the apparatus 1300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1302 may include one or more processors 1320 to execute instructions to perform all or part of the steps in the above described methods. Moreover, the processing component 1302 may include one or more modules which facilitate the interaction between the processing component 1302 and other components. For instance, the processing component 1302 may include a multimedia module to facilitate the interaction between the multimedia component 1308 and the processing component 1302.
The memory 1304 is configured to store various types of data to support the operation of the apparatus 1300. Examples of such data include instructions for any applications or methods operated on the apparatus 1300, contact data, phonebook data, messages, pictures, video, and the like. The memory 1304 may be implemented using any type of volatile or non-volatile memory device, or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, or a magnetic or optical disk.
The power component 1306 provides power to the various components of the apparatus 1300. The power component 1306 may include a power management system, one or more power sources, and any other components associated with the generation, management, and distribution of power in the apparatus 1300.
The multimedia component 1308 includes a screen providing an output interface between the apparatus 1300 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes the touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense a boundary of a touch or swipe action, but also detect a period of time and a pressure associated with the touch or swipe action. In some embodiments, the multimedia component 1308 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data while the apparatus 1300 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 1310 is configured to output and/or input audio signals. For example, the audio component 1310 includes a microphone (MIC) configured to receive an external audio signal when the apparatus 1300 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may be further stored in the memory 1304 or transmitted via the communication component 1316. In some embodiments, the audio component 1310 further includes a speaker to output audio signals.
The I/O interface 1313 provides an interface between the processing component 1302 and peripheral interface modules, such as a keyboard, a click wheel, or buttons. The buttons may include, but are not limited to, a home button, a volume button, a starting button, and a locking button.
The sensor component 1314 includes one or more sensors to provide status assessments of various aspects of the apparatus 1300. For instance, the sensor component 1314 may detect an open/closed status of the apparatus 1300 and the relative positioning of components, e.g., the display and the keypad, of the apparatus 1300; the sensor component 1314 may also detect a change in position of the apparatus 1300 or of a component of the apparatus 1300, a presence or absence of user contact with the apparatus 1300, an orientation or an acceleration/deceleration of the apparatus 1300, and a change in temperature of the apparatus 1300. The sensor component 1314 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 1314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, a microwave sensor, or a temperature sensor.
The communication component 1316 is configured to facilitate communication, wired or wireless, between the apparatus 1300 and other devices. The apparatus 1300 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 1316 receives a broadcast signal or broadcast-associated information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 1316 further includes a near field communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on a radio frequency identification (RFID) technology, an infrared data association (IrDA) technology, an ultra-wideband (UWB) technology, a Bluetooth (BT) technology, and other technologies.
In exemplary embodiments, the apparatus 1300 may be implemented with one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components, for performing the above described methods.
In exemplary embodiments, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 1304 including instructions, executable by the processor 1320 of the apparatus 1300, for performing the above described methods. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disc, an optical data storage device, and the like.
There is provided a non-transitory computer-readable storage medium: when instructions in the storage medium are executed by a processor of a terminal, the terminal is enabled to perform an autofocus method, the method including: after a user clicks a target object in a viewfinder to complete manual focusing, acquiring first spatial data of the target object; when a change in the framing content in the viewfinder is detected, acquiring position change data; calculating second spatial data of the target object according to the first spatial data and the position change data; and automatically focusing according to the second spatial data.
Other embodiments of the present disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the present disclosure being indicated by the following claims.
It will be appreciated that the present disclosure is not limited to the exact construction that has been described above and illustrated in the accompanying drawings, and that various modifications and changes can be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (19)

  1. An autofocus method, characterized in that the method comprises:
    after a user clicks a target object in a viewfinder to complete manual focusing, acquiring first spatial data of the target object;
    when a change in the framing content in the viewfinder is detected, acquiring position change data;
    calculating second spatial data of the target object according to the first spatial data and the position change data;
    automatically focusing on the target object according to the second spatial data.
  2. The method according to claim 1, characterized in that the acquiring first spatial data of the target object comprises:
    calculating a first vertical distance from a focal point to an image sensor, wherein the image of the target object is located on the image sensor when the manual focusing is completed;
    taking the focal point as the origin of a three-dimensional rectangular coordinate system, obtaining, according to the first vertical distance, a first spatial coordinate of a first position at which the target object is imaged on the image sensor;
    calculating a first space vector angle of a first vector between the focal point and the first position.
  3. The method according to claim 2, characterized in that the calculating a first vertical distance from the focal point to the image sensor comprises:
    obtaining the image distance at the time the manual focusing is completed;
    calculating the difference between the image distance and the fixed focal length, and taking the difference as the first vertical distance from the focal point to the image sensor.
  4. The method according to claim 2, characterized in that the obtaining, according to the first vertical distance, a first spatial coordinate of the first position at which the target object is imaged on the image sensor comprises:
    taking the center of the viewfinder as the origin of a plane rectangular coordinate system, acquiring a first two-dimensional coordinate of the target object in the plane rectangular coordinate system, wherein the center of the viewfinder and the focal point lie along the same normal direction;
    converting the first two-dimensional coordinate at a preset scale to obtain a second two-dimensional coordinate of the target object as imaged on the image sensor;
    determining, according to the second two-dimensional coordinate and the first vertical distance, the first spatial coordinate of the target object as imaged on the image sensor, wherein the X-axis value of the first spatial coordinate is the X-axis value of the second two-dimensional coordinate, the Y-axis value of the first spatial coordinate is the Y-axis value of the second two-dimensional coordinate, and the Z-axis value of the first spatial coordinate is the first vertical distance.
  5. The method according to claim 2, characterized in that the acquiring position change data when a change in the framing content in the viewfinder is detected comprises:
    judging, from acceleration data detected by an acceleration sensor, whether the viewfinder has moved;
    when the viewfinder has moved, acquiring a spatial change vector angle, detected by a direction sensor, as the position change data.
  6. The method according to claim 5, characterized in that the calculating second spatial data of the target object according to the first spatial data and the position change data comprises:
    calculating a first straight-line distance from the focal point to the first position according to the first spatial coordinate;
    calculating a second space vector angle according to the first space vector angle and the spatial change vector angle, the second space vector angle being the space vector angle of a second vector between the focal point and a second position, and the second position being the position at which the target object is imaged on the image sensor after the automatic focusing is completed;
    calculating a second spatial coordinate of the second position according to the first straight-line distance and the second space vector angle.
  7. The method according to claim 6, characterized in that the automatically focusing according to the second spatial data comprises:
    obtaining, according to the second spatial coordinate, a second vertical distance from the focal point to the second position, wherein the second vertical distance is the Z-axis value of the second spatial coordinate;
    calculating the sum of the second vertical distance and the fixed focal length, and taking the sum as the adjusted image distance;
    moving the lens assembly until the distance from the lens assembly to the image sensor equals the adjusted image distance.
  8. The method according to claim 6 or 7, characterized in that before the automatically focusing according to the second spatial data, the method comprises:
    calculating a third spatial coordinate of the second position through an image recognition algorithm;
    correcting the second spatial coordinate according to the third spatial coordinate to obtain a corrected second spatial coordinate.
  9. The method according to claim 8, characterized in that the correcting the second spatial coordinate according to the third spatial coordinate to obtain a corrected second spatial coordinate comprises:
    judging whether the distance between the third spatial coordinate and the second spatial coordinate is smaller than a preset correction threshold;
    when it is smaller than the correction threshold, calculating the average of the X-axis values of the third spatial coordinate and the second spatial coordinate as the X-axis value of the corrected second spatial coordinate, and calculating the average of the Y-axis values of the third spatial coordinate and the second spatial coordinate as the Y-axis value of the corrected second spatial coordinate;
    calculating the Z-axis value of the corrected second spatial coordinate according to the first straight-line distance, the X-axis value of the corrected second spatial coordinate, and the Y-axis value of the corrected second spatial coordinate.
  10. An autofocus apparatus, characterized in that the apparatus comprises:
    an acquiring module configured to, after a user clicks a target object in a viewfinder to complete manual focusing, acquire first spatial data of the target object;
    a detecting module configured to, when a change in the framing content in the viewfinder is detected, acquire position change data;
    a first calculating module configured to calculate second spatial data of the target object according to the first spatial data and the position change data;
    a focusing module configured to automatically focus on the target object according to the second spatial data.
  11. The apparatus according to claim 10, characterized in that the acquiring module comprises:
    a first vertical distance calculating submodule configured to calculate a first vertical distance from a focal point to an image sensor, wherein the image of the target object is located on the image sensor when the manual focusing is completed;
    a first spatial coordinate obtaining submodule configured to take the focal point as the origin of a three-dimensional rectangular coordinate system and obtain, according to the first vertical distance, a first spatial coordinate of a first position at which the target object is imaged on the image sensor;
    a first space vector angle calculating submodule configured to calculate a first space vector angle of a first vector between the focal point and the first position.
  12. The apparatus according to claim 11, characterized in that the first vertical distance calculating submodule comprises:
    an image distance obtaining submodule configured to obtain the image distance at the time the manual focusing is completed;
    a difference calculating submodule configured to calculate the difference between the image distance and the fixed focal length and take the difference as the first vertical distance from the focal point to the image sensor.
  13. The apparatus according to claim 11, characterized in that the first spatial coordinate obtaining submodule comprises:
    a first two-dimensional coordinate acquiring submodule configured to take the center of the viewfinder as the origin of a plane rectangular coordinate system and acquire a first two-dimensional coordinate of the target object in the plane rectangular coordinate system, wherein the center of the viewfinder and the focal point lie along the same normal direction;
    a second two-dimensional coordinate obtaining submodule configured to convert the first two-dimensional coordinate at a preset scale to obtain a second two-dimensional coordinate of the target object as imaged on the image sensor;
    a first spatial coordinate determining submodule configured to determine, according to the second two-dimensional coordinate and the first vertical distance, the first spatial coordinate of the target object as imaged on the image sensor, wherein the X-axis value of the first spatial coordinate is the X-axis value of the second two-dimensional coordinate, the Y-axis value of the first spatial coordinate is the Y-axis value of the second two-dimensional coordinate, and the Z-axis value of the first spatial coordinate is the first vertical distance.
  14. The apparatus according to claim 11, characterized in that the detecting module comprises:
    an acceleration detecting submodule configured to judge, from acceleration data detected by an acceleration sensor, whether the viewfinder has moved;
    a change vector angle acquiring submodule configured to, when the viewfinder has moved, acquire a spatial change vector angle, detected by a direction sensor, as the position change data.
  15. The apparatus according to claim 14, characterized in that the first calculating module comprises:
    a first straight-line distance calculating submodule configured to calculate a first straight-line distance from the focal point to the first position according to the first spatial coordinate;
    a second space vector angle calculating submodule configured to calculate a second space vector angle according to the first space vector angle and the spatial change vector angle, the second space vector angle being the space vector angle of a second vector between the focal point and a second position, and the second position being the position at which the target object is imaged on the image sensor after the automatic focusing is completed;
    a second spatial coordinate calculating submodule configured to calculate a second spatial coordinate of the second position according to the first straight-line distance and the second space vector angle.
  16. The apparatus according to claim 15, characterized in that the focusing module comprises:
    a second vertical distance obtaining submodule configured to obtain, according to the second spatial coordinate, a second vertical distance from the focal point to the second position, wherein the second vertical distance is the Z-axis value of the second spatial coordinate;
    an adjusted image distance calculating submodule configured to calculate the sum of the second vertical distance and the fixed focal length and take the sum as the adjusted image distance;
    a lens assembly moving submodule configured to move the lens assembly until the distance from the lens assembly to the image sensor equals the adjusted image distance.
  17. The apparatus according to claim 15 or 16, characterized in that the apparatus further comprises:
    a second calculating module configured to calculate a third spatial coordinate of the second position through an image recognition algorithm;
    a correction module configured to correct the second spatial coordinate according to the third spatial coordinate to obtain a corrected second spatial coordinate.
  18. The apparatus according to claim 17, characterized in that the correction module comprises:
    a correction threshold judging submodule configured to judge whether the distance between the third spatial coordinate and the second spatial coordinate is smaller than a preset correction threshold;
    a correction coordinate value calculating submodule configured to, when it is smaller than the correction threshold, calculate the average of the X-axis values of the third spatial coordinate and the second spatial coordinate as the X-axis value of the corrected second spatial coordinate, calculate the average of the Y-axis values of the third spatial coordinate and the second spatial coordinate as the Y-axis value of the corrected second spatial coordinate, and calculate the Z-axis value of the corrected second spatial coordinate according to the first straight-line distance, the X-axis value of the corrected second spatial coordinate, and the Y-axis value of the corrected second spatial coordinate.
  19. An autofocus apparatus, characterized by comprising:
    a processor;
    a memory for storing processor-executable instructions;
    wherein the processor is configured to:
    after a user clicks a target object in a viewfinder to complete manual focusing, acquire first spatial data of the target object;
    when a change in the framing content in the viewfinder is detected, acquire position change data;
    calculate second spatial data of the target object according to the first spatial data and the position change data;
    automatically focus on the target object according to the second spatial data.
PCT/CN2015/077963 2014-12-26 2015-04-30 自动对焦方法及装置 WO2016101481A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
MX2015009132A MX358881B (es) 2014-12-26 2015-04-30 Método de auto-enfoque y dispositivo de auto-enfoque.
JP2016565542A JP6348611B2 (ja) 2014-12-26 2015-04-30 自動ピント合わせ方法、装置、プログラム及び記録媒体
KR1020157016842A KR101678483B1 (ko) 2014-12-26 2015-04-30 자동 핀트 맞춤 방법, 장치, 프로그램 및 기록매체
RU2015129487A RU2612892C2 (ru) 2014-12-26 2015-04-30 Способ автоматической фокусировки и устройство автоматической фокусировки
BR112015019722A BR112015019722A2 (pt) 2014-12-26 2015-04-30 método de autofocagem e dispositivo de autofocagem
US14/809,591 US9729775B2 (en) 2014-12-26 2015-07-27 Auto-focusing method and auto-focusing device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410832108.7 2014-12-26
CN201410832108.7A CN104469167B (zh) 2014-12-26 2014-12-26 自动对焦方法及装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/809,591 Continuation US9729775B2 (en) 2014-12-26 2015-07-27 Auto-focusing method and auto-focusing device

Publications (1)

Publication Number Publication Date
WO2016101481A1 true WO2016101481A1 (zh) 2016-06-30

Family

ID=52914463

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/077963 WO2016101481A1 (zh) 2014-12-26 2015-04-30 自动对焦方法及装置

Country Status (9)

Country Link
US (1) US9729775B2 (zh)
EP (1) EP3038345B1 (zh)
JP (1) JP6348611B2 (zh)
KR (1) KR101678483B1 (zh)
CN (1) CN104469167B (zh)
BR (1) BR112015019722A2 (zh)
MX (1) MX358881B (zh)
RU (1) RU2612892C2 (zh)
WO (1) WO2016101481A1 (zh)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104469167B (zh) 2014-12-26 2017-10-13 小米科技有限责任公司 自动对焦方法及装置
CN105100624B (zh) * 2015-08-28 2019-03-01 Oppo广东移动通信有限公司 一种拍摄方法及终端
JP6335394B2 (ja) 2015-09-25 2018-05-30 富士フイルム株式会社 撮像システム及び撮像制御方法
CN105262954B (zh) * 2015-11-17 2019-07-19 腾讯科技(深圳)有限公司 触发摄像头自动聚焦的方法和装置
CN106534702A (zh) * 2016-12-22 2017-03-22 珠海市魅族科技有限公司 一种对焦的方法以及对焦装置
CN110267041B (zh) 2019-06-28 2021-11-09 Oppo广东移动通信有限公司 图像编码方法、装置、电子设备和计算机可读存储介质
CN110248096B (zh) 2019-06-28 2021-03-12 Oppo广东移动通信有限公司 对焦方法和装置、电子设备、计算机可读存储介质
CN110276767B (zh) 2019-06-28 2021-08-31 Oppo广东移动通信有限公司 图像处理方法和装置、电子设备、计算机可读存储介质
CN110660090B (zh) 2019-09-29 2022-10-25 Oppo广东移动通信有限公司 主体检测方法和装置、电子设备、计算机可读存储介质
CN110796041B (zh) 2019-10-16 2023-08-18 Oppo广东移动通信有限公司 主体识别方法和装置、电子设备、计算机可读存储介质
WO2021077270A1 (zh) * 2019-10-21 2021-04-29 深圳市大疆创新科技有限公司 一种获取目标距离的方法、控制装置及移动平台
CN110996003B (zh) * 2019-12-16 2022-03-25 Tcl移动通信科技(宁波)有限公司 一种拍照定位方法、装置及移动终端
WO2022151473A1 (zh) * 2021-01-18 2022-07-21 深圳市大疆创新科技有限公司 拍摄控制方法、拍摄控制装置及云台组件
WO2022266915A1 (zh) * 2021-06-24 2022-12-29 深圳市大疆创新科技有限公司 镜头的对焦控制方法和装置、拍摄装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000066086A (ja) * 1998-08-20 2000-03-03 Nikon Corp 自動焦点調節装置
CN1973231A (zh) * 2004-05-07 2007-05-30 株式会社理光 图像获取相机
CN103747183A (zh) * 2014-01-15 2014-04-23 北京百纳威尔科技有限公司 一种手机拍摄对焦方法
CN104243806A (zh) * 2013-06-20 2014-12-24 索尼公司 成像装置、信息显示方法和信息处理单元
CN104469167A (zh) * 2014-12-26 2015-03-25 小米科技有限责任公司 自动对焦方法及装置

Family Cites Families (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3513950B2 (ja) * 1993-12-14 2004-03-31 株式会社ニコン 像振れ補正カメラ
JP2001235675A (ja) * 2000-02-22 2001-08-31 Eiji Kawamura 焦点制御システム
US6968094B1 (en) * 2000-03-27 2005-11-22 Eastman Kodak Company Method of estimating and correcting camera rotation with vanishing point location
US20020080257A1 (en) * 2000-09-27 2002-06-27 Benjamin Blank Focus control system and process
JP2002131624A (ja) * 2000-10-25 2002-05-09 Olympus Optical Co Ltd 多点自動焦点カメラ
WO2002097730A1 (fr) * 2001-05-25 2002-12-05 Matsushita Electric Industrial Co., Ltd. Dispositif de generation d'image grand-angle
US20030103067A1 (en) * 2001-12-05 2003-06-05 Trell Erik Y. Method and device for material, graphical and computer/holography-animated structural reproduction, rendition and exploration of real space elementary particle states, transitions, properties and processes
WO2003098922A1 (en) * 2002-05-15 2003-11-27 The Board Of Governors For Higher Education, State Of Rhode Island And Providence Plantations An imaging system and method for tracking the motion of an object
JP4211292B2 (ja) * 2002-06-03 2009-01-21 ソニー株式会社 画像処理装置および画像処理方法、プログラム並びにプログラム記録媒体
JP3922543B2 (ja) * 2002-06-05 2007-05-30 ソニー株式会社 撮像装置、および画像表示装置
JP2005338352A (ja) * 2004-05-26 2005-12-08 Fujinon Corp オートフォーカスシステム
JP3829144B2 (ja) * 2004-11-25 2006-10-04 シャープ株式会社 合焦エリア調節カメラ付携帯端末
DE102004060609A1 (de) * 2004-12-16 2006-06-29 Yxlon International Security Gmbh Verfahren zum Messen des Impulsübertragungsspektrums von elastisch gestreuten Röntgenquanten
JP3886524B2 (ja) * 2004-12-21 2007-02-28 松下電器産業株式会社 カメラ端末および監視システム
US7409149B2 (en) * 2005-11-03 2008-08-05 International Business Machines Corporation Methods for improved autofocus in digital imaging systems
US7627240B2 (en) * 2006-03-27 2009-12-01 Nokia Corporation Optical device with improved autofocus performance and method related thereto
EP2059026A4 (en) * 2006-08-30 2010-12-29 Nikon Corp APPARATUS AND METHOD FOR ALIGNING IMAGE AND CAMERA
US8462988B2 (en) * 2007-01-23 2013-06-11 Valeo Schalter Und Sensoren Gmbh Method and system for universal lane boundary detection
JP2009049810A (ja) * 2007-08-21 2009-03-05 Canon Inc 撮像装置及びその制御方法及びプログラム
JP5268433B2 (ja) * 2008-06-02 2013-08-21 キヤノン株式会社 撮像装置、及び撮像装置の制御方法
JP2009294509A (ja) * 2008-06-06 2009-12-17 Sony Corp 3次元像表示装置
JP5366454B2 (ja) * 2008-06-30 2013-12-11 キヤノン株式会社 光学機器
US8237807B2 (en) * 2008-07-24 2012-08-07 Apple Inc. Image capturing device with touch screen for adjusting camera settings
JP2010050603A (ja) 2008-08-20 2010-03-04 Casio Comput Co Ltd 撮影装置およびプログラム
US8134597B2 (en) * 2008-12-05 2012-03-13 Sony Ericsson Mobile Communications Ab Camera system with touch focus and method
JP2011030008A (ja) * 2009-07-27 2011-02-10 Canon Inc 撮像装置
JP5574650B2 (ja) * 2009-09-11 2014-08-20 古野電気株式会社 画像処理装置、これを搭載したレーダ装置、画像処理方法及び画像処理プログラム
JP5654223B2 (ja) * 2009-09-11 2015-01-14 古野電気株式会社 画像処理装置、これを搭載したレーダ装置、画像処理方法及び画像処理プログラム
TWI413854B (zh) * 2009-10-07 2013-11-01 Altek Corp A digital camera capable of detecting the name of the subject being used and a method thereof
JP2011139379A (ja) 2009-12-28 2011-07-14 Canon Inc 画像処理装置、画像処理方法及びプログラム
CN101762871B (zh) * 2009-12-30 2011-04-27 北京控制工程研究所 一种姿态敏感器光学系统
JP5589527B2 (ja) * 2010-04-23 2014-09-17 株式会社リコー 撮像装置および追尾被写体検出方法
WO2011161973A1 (ja) * 2010-06-24 2011-12-29 パナソニック株式会社 全方位撮影システム
JP5594157B2 (ja) * 2011-01-14 2014-09-24 株式会社Jvcケンウッド 撮像装置および撮像方法
EP2716030A1 (en) * 2011-05-30 2014-04-09 Sony Ericsson Mobile Communications AB Improved camera unit
KR101784523B1 (ko) 2011-07-28 2017-10-11 엘지이노텍 주식회사 터치형 휴대용 단말기
US10099614B2 (en) 2011-11-28 2018-10-16 Magna Electronics Inc. Vision system for vehicle
JP5370542B1 (ja) * 2012-06-28 2013-12-18 カシオ計算機株式会社 画像処理装置、撮像装置、画像処理方法及びプログラム
JP5409873B2 (ja) * 2012-10-22 2014-02-05 キヤノン株式会社 情報処理装置、その制御方法、プログラム及び記憶媒体
JP6271990B2 (ja) * 2013-01-31 2018-01-31 キヤノン株式会社 画像処理装置、画像処理方法
KR101431373B1 (ko) * 2013-02-26 2014-08-18 경북대학교 산학협력단 스테레오 정합을 이용한 차량의 움직임 측정 장치
JP6103526B2 (ja) * 2013-03-15 2017-03-29 オリンパス株式会社 撮影機器,画像表示機器,及び画像表示機器の表示制御方法
JP5865547B2 (ja) * 2013-03-19 2016-02-17 株式会社日立国際電気 画像表示装置および画像表示方法
CA2819956C (en) * 2013-07-02 2022-07-12 Guy Martin High accuracy camera modelling and calibration method
CN103699592B (zh) * 2013-12-10 2018-04-27 天津三星通信技术研究有限公司 应用于便携式终端的视频拍摄定位方法及便携式终端
JP2015167603A (ja) * 2014-03-05 2015-09-28 コニカミノルタ株式会社 撮影台
JP6415196B2 (ja) * 2014-09-08 2018-10-31 キヤノン株式会社 撮像装置および撮像装置の制御方法
US10419779B2 (en) * 2014-10-08 2019-09-17 Lg Electronics Inc. Method and device for processing camera parameter in 3D video coding
US9684830B2 (en) * 2014-11-14 2017-06-20 Intel Corporation Automatic target selection for multi-target object tracking

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000066086A (ja) * 1998-08-20 2000-03-03 Nikon Corp 自動焦点調節装置
CN1973231A (zh) * 2004-05-07 2007-05-30 株式会社理光 图像获取相机
CN104243806A (zh) * 2013-06-20 2014-12-24 索尼公司 成像装置、信息显示方法和信息处理单元
CN103747183A (zh) * 2014-01-15 2014-04-23 北京百纳威尔科技有限公司 一种手机拍摄对焦方法
CN104469167A (zh) * 2014-12-26 2015-03-25 小米科技有限责任公司 自动对焦方法及装置

Also Published As

Publication number Publication date
CN104469167A (zh) 2015-03-25
US9729775B2 (en) 2017-08-08
BR112015019722A2 (pt) 2017-07-18
KR20160091246A (ko) 2016-08-02
EP3038345B1 (en) 2022-09-14
EP3038345A1 (en) 2016-06-29
RU2015129487A (ru) 2017-01-23
MX358881B (es) 2018-08-31
MX2015009132A (es) 2016-08-17
RU2612892C2 (ru) 2017-03-13
CN104469167B (zh) 2017-10-13
JP6348611B2 (ja) 2018-06-27
JP2017505466A (ja) 2017-02-16
KR101678483B1 (ko) 2016-11-23
US20160191783A1 (en) 2016-06-30

Similar Documents

Publication Publication Date Title
WO2016101481A1 (zh) 自动对焦方法及装置
JP6267363B2 (ja) 画像を撮影する方法および装置
CN108419016B (zh) 拍摄方法、装置及终端
WO2016008246A1 (zh) 拍摄参数调节方法及装置
EP2991336B1 (en) Image capturing method and apparatus
CN110493526B (zh) 基于多摄像模块的图像处理方法、装置、设备及介质
CN110557547B (zh) 镜头位置调整方法及装置
CN106210496B (zh) 照片拍摄方法及装置
WO2016029641A1 (zh) 照片获取方法及装置
JP6335289B2 (ja) 画像フィルタを生成する方法及び装置
CN105282441B (zh) 拍照方法及装置
EP3544286B1 (en) Focusing method, device and storage medium
WO2018205902A1 (zh) 防抖控制方法和装置
WO2017124899A1 (zh) 一种信息处理方法及装置、电子设备
WO2018133388A1 (zh) 智能飞行设备的拍摄方法及智能飞行设备
WO2018053722A1 (zh) 全景照片拍摄方法及装置
CN110769147A (zh) 拍摄方法及电子设备
CN113364965A (zh) 基于多摄像头的拍摄方法、装置及电子设备
US11555696B2 (en) Electronic terminal, photographing method and device, and storage medium
CN112866555B (zh) 拍摄方法、装置、设备及存储介质
CN114244999A (zh) 自动对焦的方法、装置、摄像设备及存储介质
CN114666490A (zh) 对焦方法、装置、电子设备和存储介质
WO2019134513A1 (zh) 拍照对焦方法、装置、存储介质及电子设备
CN106131403B (zh) 触摸对焦方法及装置
WO2023225910A1 (zh) 视频显示方法及装置、终端设备及计算机存储介质

Legal Events

ENP (Entry into the national phase): Ref document number: 20157016842; Country of ref document: KR; Kind code of ref document: A
ENP (Entry into the national phase): Ref document number: 2016565542; Country of ref document: JP; Kind code of ref document: A
WWE (WIPO information: entry into national phase): Ref document number: MX/A/2015/009132; Country of ref document: MX
ENP (Entry into the national phase): Ref document number: 2015129487; Country of ref document: RU; Kind code of ref document: A
121 (EP: the EPO has been informed by WIPO that EP was designated in this application): Ref document number: 15871557; Country of ref document: EP; Kind code of ref document: A1
REG (Reference to national code): Ref country code: BR; Ref legal event code: B01A; Ref document number: 112015019722; Country of ref document: BR
NENP (Non-entry into the national phase): Ref country code: DE
ENP (Entry into the national phase): Ref document number: 112015019722; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20150817
122 (EP: PCT application non-entry in European phase): Ref document number: 15871557; Country of ref document: EP; Kind code of ref document: A1