WO2020019295A1 - Image acquisition method, imaging device, and photographing system (图像获取方法、成像装置及拍摄系统)


Info

Publication number
WO2020019295A1
WO2020019295A1 · PCT/CN2018/097421 · CN2018097421W
Authority
WO
WIPO (PCT)
Prior art keywords
imaging device
focus
sharpness
motion data
current
Prior art date
Application number
PCT/CN2018/097421
Other languages
English (en)
French (fr)
Inventor
苏冠樑
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201880038821.0A (CN110754080B)
Priority to PCT/CN2018/097421
Publication of WO2020019295A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00: Diagnosis, testing or measuring for television systems or their details
    • H04N 17/002: Diagnosis, testing or measuring for television systems or their details, for television cameras
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/667: Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Definitions

  • the present invention relates to the field of imaging technology, and in particular, to an image acquisition method, an imaging device, and a photographing system.
  • cameras usually switch between the point focus function and the global focus function based on a change in the distance between the subject and the camera in the shooting scene.
  • This type of camera is usually a binocular camera.
  • current motion cameras are usually monocular cameras and cannot perform distance detection. Therefore, it is also impossible to switch between the point focus function and the global focus function through distance detection, which will further affect the imaging quality of the images taken by the motion camera.
  • Embodiments of the present invention provide an image acquisition method, an imaging device, and a photographing system.
  • An image acquisition method includes: detecting a current sharpness of a captured image of an imaging device; and controlling the imaging device to switch between two shooting modes according to the current sharpness, the two shooting modes including a spot focus-spot metering mode and a global focus-global metering mode.
  • the imaging device includes a processor for detecting a current sharpness of a captured image of the imaging device, and controlling the imaging device to switch between two shooting modes according to the current sharpness.
  • the shooting modes include a spot focus-spot metering mode and a global focus-global metering mode.
  • a photographing system includes the above-mentioned imaging device and a carrier.
  • the imaging device is mounted on the carrier.
  • The image acquisition method, imaging device, and shooting system control the imaging device to switch between the spot focus-spot metering mode and the global focus-global metering mode based on the current sharpness of the captured image, so that the shooting mode of the imaging device can be adaptively adjusted according to the current sharpness, which is conducive to obtaining higher imaging quality under different imaging conditions.
  • FIG. 1 is a schematic structural diagram of a photographing system according to some embodiments of the present invention.
  • FIGS. 2 to 17 are schematic flowcharts of an image acquisition method according to some embodiments of the present invention.
  • Action cameras are generally monocular cameras, and since they are mainly used for shooting sports scenes, they usually only have a global focus-global metering function.
  • the present invention provides a photographing system 100.
  • the photographing system 100 includes an imaging device 10 and a carrier 20.
  • the imaging device 10 is mounted on a carrier 20.
  • The imaging device 10 may be a monocular imaging device 10, that is, a device having only one camera; or the imaging device 10 may be a binocular imaging device 10, that is, a device having two cameras; or the imaging device 10 may be a multi-camera imaging device 10, that is, a device having multiple cameras; or the imaging device 10 may be a device that integrates one or more cameras, such as a mobile phone. In the embodiments of the present invention, a monocular imaging device 10 is taken as an example for description.
  • the carrier 20 may be a gimbal or a movable platform.
  • The imaging device 10 can be directly mounted on the gimbal; or the imaging device 10 can be directly mounted on a movable platform; or the imaging device 10 can be mounted indirectly on a movable platform, that is, the imaging device 10 is first mounted on the gimbal, and the gimbal carrying the imaging device 10 is then mounted on the movable platform.
  • the gimbal can be a handheld gimbal, and the movable platform can be a drone, a vehicle, a ship, etc.
  • The imaging device 10 may be directly mounted on the gimbal, and the imaging device 10 and the gimbal may form an inseparable integrated structure.
  • the imaging device 10 is a motion camera.
  • the shooting mode of the action camera of the embodiment of the present invention includes a spot focus-spot metering mode and a global focus-global metering mode.
  • the action camera of the embodiment of the present invention can work in the spot focus-spot metering mode.
  • The action camera of the embodiment of the present invention can also work in the global focus-global metering mode. In this way, the shooting mode of the imaging device 10 can be adjusted under different shooting needs, which is beneficial to obtaining higher imaging quality under different imaging conditions.
  • the present invention further provides an image acquisition method for the imaging device 10.
  • The image acquisition method includes:
  • S1: Detect the current sharpness of a captured image of the imaging device 10;
  • S2: Control the imaging device 10 to switch between two shooting modes according to the current sharpness.
  • the two shooting modes include a spot focus-spot metering mode and a global focus-global metering mode.
  • the image acquisition method according to the embodiment of the present invention may be implemented by the imaging device 10 according to the embodiment of the present invention.
  • the image acquisition method according to any embodiment of the present invention may also be implemented by other devices different from the imaging device. This is not limited.
  • the imaging device 10 is used as an example for description.
  • the imaging device 10 includes a processor 11. Both step S1 and step S2 may be implemented by the processor 11. That is to say, the processor 11 can be used to detect the current sharpness of the captured image of the imaging device 10, and control the imaging device 10 to switch between two shooting modes according to the current sharpness.
  • The two shooting modes include the spot focus-spot metering mode and the global focus-global metering mode.
  • the imaging device 10 when the imaging device 10 works in the spot focus-spot metering mode, the imaging device 10 focuses on a partial area in the scene of the imaging device 10 (generally, the partial area is the main area of the scene).
  • The imaging device 10 performs the focusing operation based on the pixel values of the pixels corresponding to this partial area, and performs the metering operation based on the pixel values of the same pixels.
  • When the imaging device 10 works in the global focus-global metering mode, the imaging device 10 focuses on all areas in its field of view; the imaging device 10 performs the focusing operation based on the pixel values of all pixels corresponding to all areas, and performs the metering operation based on the pixel values of all pixels corresponding to all areas.
  • An imaging device 10 able to switch between the spot focus-spot metering mode and the global focus-global metering mode is usually a binocular imaging device 10. The binocular imaging device 10 measures the distance of objects in the scene based on the binocular stereo vision method and uses the change in distance as the basis for switching between the two modes: after the distance of the subject part of the scene changes by more than a certain set value, the spot focus-spot metering mode is switched to the global focus-global metering mode.
  • the imaging device 10 is a monocular imaging device 10
  • the monocular imaging device 10 cannot obtain the distance between the object in the scene and itself, and cannot switch the shooting mode based on the distance change information.
  • the image acquisition method uses the detected current sharpness of the captured image of the imaging device 10 as a judgment criterion for switching the shooting mode to switch the shooting mode of the imaging device 10.
  • The step S2 of controlling the imaging device 10 to switch between the two shooting modes according to the current sharpness includes:
  • S21: When the current sharpness is less than a preset sharpness, control the imaging device 10 to switch from the spot focus-spot metering mode to the global focus-global metering mode.
  • step S21 may be implemented by the processor 11. That is to say, the processor 11 may be configured to control the imaging device 10 to switch from the spot focus-spot metering mode to the global focus-global metering mode when the current sharpness is less than a preset sharpness.
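  • As a minimal sketch of steps S1-S2 and the switching rule of step S21, the mode switch can be expressed as follows. The names ShootingMode and update_shooting_mode are illustrative assumptions, not identifiers from the patent:

```python
from enum import Enum

class ShootingMode(Enum):
    SPOT_FOCUS_SPOT_METERING = 1
    GLOBAL_FOCUS_GLOBAL_METERING = 2

def update_shooting_mode(current_sharpness: float,
                         preset_sharpness: float,
                         mode: ShootingMode) -> ShootingMode:
    # Step S21: when the current sharpness falls below the preset
    # sharpness, switch from spot focus-spot metering to global
    # focus-global metering; otherwise keep the current mode.
    if (mode is ShootingMode.SPOT_FOCUS_SPOT_METERING
            and current_sharpness < preset_sharpness):
        return ShootingMode.GLOBAL_FOCUS_GLOBAL_METERING
    return mode
```

  • For example, with a preset sharpness of 0.5, a current sharpness of 0.3 in spot mode triggers the switch, while 0.8 does not.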
  • the captured image may be a still image or a dynamic image, such as a moving image or a video.
  • the captured image can be a preview image or a target image that is finally output to the user.
  • the current sharpness of the captured image can be used as the basis for switching the shooting mode.
  • When the imaging device 10 is operating in the spot focus-spot metering mode, there is usually a prominent subject in the shooting scene. If the shooting scene then changes greatly (for example, the imaging device 10 moves), there may no longer be a prominent subject in the current shooting scene. If the imaging device 10 does not switch to the global focus-global metering mode at this time and keeps focusing on only a part of the current shooting scene, only a small part of the objects in the captured image will be clear, most objects will be blurred, and the final captured image will not be sharp.
  • Therefore, the processor 11 needs to control the imaging device 10 to switch from the spot focus-spot metering mode to the global focus-global metering mode, so that the imaging quality of the captured image is guaranteed.
  • The image acquisition method of the embodiment of the present invention controls the imaging device 10 to switch between the spot focus-spot metering mode and the global focus-global metering mode based on the current sharpness of the captured image, so that the shooting mode of the imaging device 10 can be adaptively adjusted according to the current sharpness, which is conducive to obtaining higher imaging quality under different imaging conditions.
  • In some embodiments, detecting the current sharpness of the captured image of the imaging device 10 in step S1 includes: S11: Detect the motion data of the imaging device 10; S12: Determine the current sharpness according to the motion data.
  • Both step S11 and step S12 can be implemented by the processor 11. That is to say, the processor 11 can be used to detect the motion data of the imaging device 10 and determine the current sharpness according to the motion data.
  • The motion data of the imaging device 10 includes at least one of: the shake amplitude of the imaging device 10 in one or more directions, the posture change value in one or more directions, the movement speed in one or more directions, the acceleration in one or more directions, the angular velocity in one or more directions, and the angular acceleration in one or more directions.
  • That is to say, the motion data of the imaging device 10 may include only one of these quantities, for example only the shake amplitude in one or more directions; or it may include any two of them, for example the shake amplitude and the posture change value; or any three, four, or five of them; or all six simultaneously: the shake amplitude, the posture change value, the movement speed, the acceleration, the angular velocity, and the angular acceleration, each in one or more directions.
  • The shake amplitude of the imaging device 10 in one or more directions refers to the offset of the current posture of the imaging device 10 from the current reference posture; the posture change value in one or more directions refers to the amount of change in the posture of the imaging device 10 over a period of time; the movement speed in one or more directions refers to the movement speed of the imaging device 10 at various times during shooting; the acceleration in one or more directions refers to the change in the movement speed of the imaging device 10 over a period of time; the angular velocity in one or more directions refers to the angular velocity of the imaging device 10 during shooting; and the angular acceleration in one or more directions refers to the change in the angular velocity of the imaging device 10 over a period of time.
  • The moving direction of the imaging device 10 may be any one of the roll direction, the pitch direction, and the yaw direction; or any two of them, such as the roll and pitch directions, the roll and yaw directions, or the pitch and yaw directions; or all three.
  • the shake amplitude refers to the shake amplitude of the imaging device 10 in the roll direction
  • The posture change value correspondingly refers to the posture change of the imaging device 10 in the roll direction, and so on.
  • the shake amplitude includes the shake amplitude of the imaging device 10 in the roll direction and the shake amplitude in the yaw direction
  • The posture change value correspondingly includes the posture change value of the imaging device 10 in the roll direction and the posture change value in the yaw direction.
  • When the directions are the roll, yaw, and pitch directions, the shake amplitude includes the shake amplitude of the imaging device 10 in the roll direction, the yaw direction, and the pitch direction, and the posture change value includes the posture change value of the imaging device 10 in the roll direction, the yaw direction, and the pitch direction.
  • the motion data of the imaging device 10 may be obtained through a motion sensor, and the motion sensor may be mounted on the imaging device 10 or on a carrier 20 mounted on the imaging device 10.
  • When the motion sensor is mounted on the imaging device 10, the processor 11 directly reads the motion data of the imaging device 10 from the motion sensor; when the motion sensor is mounted on the carrier 20, the communication module of the imaging device 10 first receives the motion data transmitted by the carrier 20, and the communication module then transmits the motion data to the processor 11.
  • the motion data of the imaging device 10 may be determined based on the motion data of the carrier 20 and the conversion relationship between the imaging device 10 and the carrier 20.
  • the motion sensor may be a gyroscope, an acceleration sensor, an inertial measurement unit, and the like.
  • the motion data of the imaging device 10 characterizes the motion state of the imaging device 10.
  • The processor 11 may determine, based on the motion state of the imaging device 10, whether the current shooting scene has undergone a large change, for example a change exceeding a preset range, and may control the imaging device 10 to switch from the spot focus-spot metering mode to the global focus-global metering mode when the shooting scene changes greatly.
  • The processor 11 may determine whether the current shooting scene has changed greatly in the following manner; that is, determining the current sharpness according to the motion data in step S12 includes: S121: Determine whether the motion data is greater than a preset motion threshold; S122: When the motion data is greater than the preset motion threshold, determine that the current sharpness is less than the preset sharpness.
  • step S121 and step S122 may both be implemented by the processor 11. That is, the processor 11 may be further configured to determine whether the motion data is greater than a preset motion threshold, and determine that the current sharpness is less than the preset sharpness when the motion data is greater than the preset motion threshold.
  • The preset sharpness corresponds to the preset motion threshold, and both are determined based on a large amount of prior experimental data.
  • the motion data is greater than a preset motion threshold, it indicates that the imaging device 10 is in a motion state and the motion is relatively fast at this time.
  • When the motion data includes only one quantity, determining whether the motion data is greater than the preset motion threshold reduces to a single comparison. For example, when the motion data is the shake amplitude, the preset motion threshold is a preset shake amplitude threshold; when the shake amplitude is greater than the preset shake amplitude threshold, the current sharpness of the captured image captured by the imaging device 10 in the spot focus-spot metering mode will be lower than the preset sharpness.
  • At this time, the processor 11 needs to control the imaging device 10 to switch to the global focus-global metering mode to ensure that the captured image has a high sharpness.
  • the magnitude relationship between the current sharpness and the preset sharpness may be determined jointly by multiple pieces of motion data.
  • Take motion data including the shake amplitude and the posture change value as an example. One approach is to determine that the motion data is greater than the preset motion threshold when the shake amplitude is greater than a preset shake amplitude threshold and the posture change value is greater than a preset posture change threshold. Alternatively, a shake weight and a posture change weight are set for the shake amplitude and the posture change value respectively; a shake score is determined based on the shake amplitude, a posture change score is determined based on the posture change value, and a motion score is calculated from the shake score, the shake weight, the posture change score, and the posture change weight. When the motion score is greater than a preset motion score, it is determined that the motion data is greater than the preset motion threshold, and further that the current sharpness is less than the preset sharpness.
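  • The weighted motion-score variant can be sketched as below. The linear scoring (using each measured quantity directly as its score) and the example weights are assumptions for illustration; a real implementation would derive the scores from calibrated tables:

```python
def motion_score(jitter_amplitude: float, pose_change: float,
                 jitter_weight: float, pose_weight: float) -> float:
    # Illustrative scoring: each measured quantity is used directly as
    # its score before weighting.
    jitter_score = jitter_amplitude
    pose_score = pose_change
    return jitter_score * jitter_weight + pose_score * pose_weight

def sharpness_below_preset(jitter_amplitude: float, pose_change: float,
                           preset_motion_score: float,
                           jitter_weight: float = 0.6,
                           pose_weight: float = 0.4) -> bool:
    # When the weighted motion score exceeds the preset motion score,
    # the current sharpness is judged to be below the preset sharpness.
    score = motion_score(jitter_amplitude, pose_change,
                         jitter_weight, pose_weight)
    return score > preset_motion_score
```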
  • The processor 11 may also determine whether a large change has occurred in the current shooting scene in the following manner; that is, determining the current sharpness according to the motion data in step S12 includes: S123: Obtain a mapping relationship between preset motion data and sharpness; S124: Determine the current sharpness corresponding to the motion data according to the mapping relationship.
  • both steps S123 and S124 may be implemented by the processor 11. That is, the processor 11 may be further configured to obtain a mapping relationship between preset motion data and sharpness, and determine a current sharpness corresponding to the motion data according to the mapping relationship.
  • The mapping relationship between preset motion data and sharpness may take the following forms: (1) both the preset motion data and the sharpness are specific values, and one preset motion data value corresponds to one sharpness value; (2) the preset motion data is a range and the sharpness is a specific value, and one preset motion data range corresponds to one sharpness value; (3) the preset motion data is a specific value and the sharpness is a range, and one preset motion data value corresponds to one sharpness range; (4) both the preset motion data and the sharpness are ranges, and one preset motion data range corresponds to one sharpness range.
  • The preset mapping relationship between motion data and sharpness is determined based on a large amount of prior experimental data.
  • The mapping relationship between the preset motion data and the sharpness may be stored in the memory of the imaging device 10 in the form of a mapping table.
  • The processor 11 continuously acquires the motion data of the imaging device 10 while the imaging device 10 is operating, and looks up the current sharpness corresponding to the acquired motion data in the mapping table. The determined current sharpness is then compared with the preset sharpness; if the current sharpness is less than the preset sharpness, the imaging device 10 is controlled to switch to the global focus-global metering mode to ensure that the captured image has a higher sharpness.
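  • The range-to-value form of the mapping (variant (2) above) can be sketched as a lookup table. The table contents, thresholds, and function names here are hypothetical; real values would come from the prior experimental data:

```python
# Hypothetical mapping table: each (low, high) motion-data range maps
# to one sharpness value, as in variant (2).
SHARPNESS_MAP = [
    ((0.0, 1.0), 0.9),            # near-still: high sharpness expected
    ((1.0, 3.0), 0.6),            # moderate motion
    ((3.0, float("inf")), 0.2),   # fast motion: low sharpness expected
]

def lookup_current_sharpness(motion_data: float) -> float:
    # Step S124: find the sharpness whose motion-data range contains
    # the measured value.
    for (low, high), sharpness in SHARPNESS_MAP:
        if low <= motion_data < high:
            return sharpness
    raise ValueError("motion data outside mapping table")

def should_switch_to_global(motion_data: float,
                            preset_sharpness: float) -> bool:
    # Switch when the looked-up current sharpness is below the preset.
    return lookup_current_sharpness(motion_data) < preset_sharpness
```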
  • Other forms of the mapping relationship between the preset motion data and sharpness, such as a mapping diagram, can also be used in actual applications, which is not specifically limited here.
  • the imaging device 10 is mounted on the carrier 20, and the motion data includes the amplitude of the jitter of the imaging device 10 in one or more directions.
  • Step S11 of detecting motion data of the imaging device 10 includes:
  • S111: Obtain the expected compensation amplitude when the carrier 20 performs shake compensation on the imaging device 10, and the actual compensation amplitude of the carrier 20;
  • S112: Determine the jitter amplitude according to the expected compensation amplitude and the actual compensation amplitude.
  • both step S111 and step S112 may be implemented by the processor 11. That is to say, the processor 11 may be further configured to obtain a desired compensation range when the carrier 20 performs shake compensation on the imaging device 10 and an actual compensation range of the carrier 20, and determine the shake amplitude according to the expected compensation range and the actual compensation range.
  • When the carrier 20 is a gimbal, if the imaging device 10 shakes during movement, the carrier 20 will perform shake compensation on the imaging device 10 based on the detected shake amplitude.
  • the carrier 20 can determine a desired compensation range based on the shake amplitude of the imaging device 10, and then perform shake compensation based on the desired compensation range.
  • In some embodiments, the processor 11 takes the difference between the expected compensation amplitude and the actual compensation amplitude (specifically, the value of the actual compensation amplitude minus the expected compensation amplitude) as the jitter amplitude.
  • For example, the expected compensation amplitude is an offset of -5° relative to a yaw angle of 0° in the yaw direction.
  • the carrier 20 performs jitter compensation based on the determined desired compensation amplitude, and shifts to -5 ° from a yaw angle of 0 °.
  • However, after compensation, the attitude angle of the imaging device 10 in the yaw direction is still offset by 0.5° from the position of 0°; that is, the actual compensation amplitude is -4.5°. The processor 11 can therefore calculate a final jitter amplitude of 0.5° from the expected compensation amplitude of -5° and the actual compensation amplitude of -4.5°.
  • After obtaining the jitter amplitude, the processor 11 may further determine the current sharpness of the captured image based on the jitter amplitude. Specifically, the current sharpness may be determined by the method described in steps S121 and S122, or by the method described in steps S123 and S124. After determining the current sharpness, the processor 11 may control the shooting mode of the imaging device 10 based on the current sharpness.
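  • The computation in steps S111-S112 can be sketched as follows, reproducing the worked example above (expected -5°, actual -4.5°). The function name is an illustrative assumption:

```python
def jitter_amplitude(expected_compensation: float,
                     actual_compensation: float) -> float:
    # Step S112: the jitter amplitude is the actual compensation
    # amplitude minus the expected compensation amplitude.
    return actual_compensation - expected_compensation

# Worked example from the text: expected -5 deg, actual -4.5 deg,
# leaving 0.5 deg of uncompensated shake.
residual = jitter_amplitude(-5.0, -4.5)
```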
  • the motion data includes the amplitude of the jitter of the imaging device 10 in one or more directions, and the posture change value of the imaging device 10 in one or more directions.
  • Step S11 of detecting the motion data of the imaging device 10 includes: S113: Obtain the posture change value of the imaging device 10; S114: Determine whether the posture change value is greater than a preset posture change value; S115: When the posture change value is greater than the preset posture change value, obtain the jitter amplitude of the imaging device 10.
  • Step S12 of determining the current sharpness according to the motion data includes: S125: Determine the current sharpness according to the jitter amplitude.
  • Steps S113, S114, S115, and S125 can all be implemented by the processor 11. That is to say, the processor 11 can be used to obtain the posture change value of the imaging device 10 and determine whether the posture change value is greater than the preset posture change value; when the posture change value is greater than the preset posture change value, the processor 11 obtains the jitter amplitude of the imaging device 10 and determines the current sharpness based on the jitter amplitude.
  • That is, the processor 11 first obtains the posture change value of the imaging device 10 and compares it with the preset posture change value; only when the posture change value is greater than the preset posture change value does the processor 11 further acquire the shake amplitude of the imaging device 10. When the posture change value is less than the preset posture change value, the shake amplitude need not be acquired.
  • When the posture change value includes only the posture change value in one direction, for example only the posture change value in the roll direction, the corresponding preset posture change value is also in the roll direction, that is, it is a preset roll posture change value. At this time, it is only necessary to compare the posture change value in the roll direction with the preset roll posture change value.
  • When the posture change value includes posture change values in multiple directions, for example the roll, pitch, and yaw directions simultaneously, one approach requires that the posture change value in the roll direction is greater than the preset roll posture change value, the posture change value in the pitch direction is greater than the preset pitch posture change value, and the posture change value in the yaw direction is greater than the preset yaw posture change value. Alternatively, the posture change values in the roll direction, the pitch direction, and the yaw direction are assigned corresponding roll, pitch, and yaw weights.
  • In the latter case, a combined posture change value is calculated based on the posture change value in the roll direction and the roll weight, the posture change value in the pitch direction and the pitch weight, and the posture change value in the yaw direction and the yaw weight, and this combined value is then compared with the preset posture change value. It can be understood that when the posture change value of the imaging device 10 is small, the motion state of the imaging device 10 changes slowly. Since the movement of the imaging device 10 is relatively gentle, the imaging device 10 may not shake, or the shake amplitude may be small enough to ignore, or the change in the shooting scene of the imaging device 10 may be small. Therefore, at this time, the imaging device 10 can still capture images in the spot focus-spot metering mode.
  • When the posture change value of the imaging device 10 is large, the motion state of the imaging device 10 changes rapidly. Since the imaging device 10 moves violently, it is likely to be accompanied by a large shake amplitude, or the shooting scene of the imaging device 10 also changes greatly. Therefore, at this time, the imaging device 10 needs to switch from the spot focus-spot metering mode to the global focus-global metering mode to obtain a clearer captured image and ensure the sharpness of the captured image.
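  • Steps S113-S115 amount to a gated acquisition: the jitter amplitude is only read when the posture change exceeds its preset value. The callback-style getters below are an illustrative assumption about how the sensor reads would be wired in:

```python
def detect_motion_data(get_pose_change, get_jitter_amplitude,
                       preset_pose_change: float):
    """Steps S113-S115: acquire the jitter amplitude only when the
    posture change value exceeds the preset posture change value.
    Returns None when the camera is moving gently (no switch needed)."""
    pose_change = get_pose_change()           # S113: read posture change
    if pose_change <= preset_pose_change:     # S114: compare with preset
        return None                           # keep spot focus-spot metering
    return get_jitter_amplitude()             # S115: read jitter amplitude
```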
  • In other embodiments, detecting the current sharpness of the captured image of the imaging device 10 in step S1 includes: S13: Obtain the focus evaluation function value of the captured image and use it as the current sharpness.
  • step S13 may be implemented by the processor 11. That is to say, the processor 11 can also be used to obtain the focus evaluation function value of the captured image and use it as the current sharpness.
  • the focus evaluation function value can be used to evaluate the sharpness of a captured image.
  • the focus evaluation function value can be calculated based on the focus evaluation function.
  • The focus evaluation function may be a Brenner gradient function, a Tenengrad gradient function, a Laplacian gradient function, a gray-scale variance function, and the like. Taking the Brenner gradient function as an example, the Brenner gradient function calculates the focus evaluation function value by computing the square of the gray difference between two pixels that are two positions apart:
  • D(f) = Σ_y Σ_x |f(x+2, y) - f(x, y)|²
  • where f(x, y) is the gray value of the pixel (x, y) of the captured image, and D(f) is the focus evaluation function value. It can be understood that when the captured image has high sharpness, the colors, textures, and edges in the image are clearer, and the gray values of neighboring pixels differ more, so the focus evaluation function value can be directly used as the current sharpness. After the processor 11 calculates the focus evaluation function value, it compares the focus evaluation function value with the preset sharpness.
  • When the focus evaluation function value is less than the preset sharpness, the processor 11 needs to control the imaging device 10 to switch from the spot focus-spot metering mode to the global focus-global metering mode to obtain a clearer captured image.
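  • The Brenner gradient above can be sketched in plain Python on a 2D grid of gray values; this follows the formula but is not the patent's implementation:

```python
def brenner_focus_value(image):
    """Brenner gradient: D(f) = sum over all pixels of the squared
    gray-level difference between pixels two columns apart,
    |f(x+2, y) - f(x, y)|^2. `image` is a 2D list of gray values
    (rows of equal length); a sharper image yields a larger value."""
    total = 0
    for row in image:
        for x in range(len(row) - 2):
            diff = row[x + 2] - row[x]
            total += diff * diff
    return total
```

  • A flat (defocused) patch scores 0, while a hard edge such as [0, 0, 255, 255] scores 2 × 255² = 130050.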
  • In still other embodiments, the processor 11 may determine the current sharpness based on both the motion data and the focus evaluation function value. At this time, detecting the current sharpness of the captured image of the imaging device 10 in step S1 includes:
  • S14: Obtain the motion data of the imaging device 10 and the focus evaluation function value of the captured image;
  • S15: Determine the current sharpness of the imaging device 10 according to the motion data and the focus evaluation function value.
  • In some embodiments, step S15 of determining the current sharpness of the imaging device 10 according to the motion data and the focus evaluation function value includes:
  • S151: Determine, based on a weighting method, the current sharpness of the imaging device 10 according to the motion data and the focus evaluation function value.
  • Steps S14, S15, and S151 may be implemented by the processor 11. That is, the processor 11 may be further configured to obtain the motion data of the imaging device 10 and the focus evaluation function value of the captured image, and to determine the current sharpness of the imaging device 10 according to the motion data and the focus evaluation function value. When the processor 11 executes step S151, the processor 11 actually determines the current sharpness of the imaging device 10 based on the weighting method and according to the motion data and the focus evaluation function value.
  • Taking the motion data including the shake amplitude as an example: the processor 11 first obtains the shake amplitude of the imaging device 10 and calculates the focus evaluation function value of the captured image. Obtaining the shake amplitude and calculating the focus evaluation function value may be performed simultaneously; alternatively, the processor 11 may first obtain the shake amplitude of the imaging device 10 and then calculate the focus evaluation function value of the captured image, or first calculate the focus evaluation function value of the captured image and then obtain the shake amplitude of the imaging device 10.
  • Subsequently, the processor 11 determines a jitter score according to the jitter amplitude of the imaging device 10, and determines a focus score according to the focus evaluation function value.
  • The jitter score has a mapping relationship with the jitter amplitude: the larger the jitter amplitude, the larger the corresponding jitter score, and the mapping relationship between the two is stored in the memory of the imaging device 10 in the form of a mapping table.
  • Similarly, the focus evaluation function value and the focus score also have a mapping relationship: the smaller the focus evaluation function value, the larger the corresponding focus score, and the mapping relationship between the two is also stored in the memory of the imaging device 10 in the form of a mapping table.
  • The processor 11 also needs to obtain a focus weight and a jitter weight, both of which are preset.
  • The processor 11 finds the jitter score corresponding to the currently obtained jitter amplitude from the mapping table of jitter amplitudes and jitter scores, and finds the focus score corresponding to the current focus evaluation function value from the mapping table of focus evaluation function values and focus scores. The processor 11 then calculates a total score value from the jitter score, the focus score, and their respective weights, and the current sharpness can be determined based on the total score value.
  • The total score value and the sharpness have a mapping relationship, which is stored in the memory of the imaging device 10 in the form of a mapping table. The processor 11 determines the current sharpness corresponding to the total score value through the mapping table, and controls the imaging device 10 to switch from the spot focus-spot metering mode to the global focus-global metering mode when the current sharpness is less than the preset sharpness. Alternatively, the processor 11 may also compare the total score value with a preset total score value to determine whether the current sharpness is less than the preset sharpness; that is, the current sharpness is considered to be less than the preset sharpness when the total score value is greater than the preset total score value, and the current sharpness is considered to be greater than the preset sharpness when the total score value is less than the preset total score value.
  • the processor 11 controls the switching of the shooting mode based on the determination result.
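The weighting method above (mapping tables from jitter amplitude and focus evaluation value to scores, preset weights, and a preset total score) might be sketched as follows. All table entries, weights, and thresholds here are illustrative placeholders, not values from the patent; a weighted sum is assumed as the combination rule.

```python
# Hedged sketch of the weighting method: scores come from mapping tables,
# and the total score is a weighted sum compared against a preset total.
def lookup(table, value):
    """Return the score of the largest table key not exceeding `value`."""
    keys = sorted(k for k in table if k <= value)
    return table[keys[-1]] if keys else 0

# Illustrative mapping tables (larger jitter -> larger jitter score;
# smaller focus evaluation value -> larger focus score).
JITTER_SCORE_TABLE = {0: 0, 5: 30, 10: 60, 20: 100}
FOCUS_SCORE_TABLE = {0: 100, 100: 60, 500: 30, 2000: 0}

def total_score(jitter_amplitude, focus_value, jitter_weight=0.5, focus_weight=0.5):
    jitter_score = lookup(JITTER_SCORE_TABLE, jitter_amplitude)
    focus_score = lookup(FOCUS_SCORE_TABLE, focus_value)
    return jitter_weight * jitter_score + focus_weight * focus_score

def should_switch_to_global(jitter_amplitude, focus_value, preset_total=50):
    # A total score above the preset total score means the current sharpness
    # is considered below the preset sharpness, triggering the switch.
    return total_score(jitter_amplitude, focus_value) > preset_total
```

Because both tables are oriented so that worse conditions give larger scores, a larger total score corresponds to lower sharpness, matching the comparison direction described above.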
  • When the motion data includes a plurality of values, taking motion data including the jitter amplitude and the pose change value as an example: the processor 11 first obtains the jitter amplitude and the pose change value of the imaging device 10, and calculates the focus evaluation function value of the captured image; the order in which the jitter amplitude, the pose change value, and the focus evaluation function value are obtained is not limited. Subsequently, the processor 11 determines a jitter score according to the jitter amplitude of the imaging device 10, determines a pose change score according to the pose change value, and determines a focus score according to the focus evaluation function value. The jitter score has a mapping relationship with the jitter amplitude.
  • The larger the jitter amplitude, the larger the corresponding jitter score, and the mapping relationship between the two is stored in the memory of the imaging device 10 in the form of a mapping table.
  • The pose change value and the pose change score also have a mapping relationship: the larger the pose change value, the larger the corresponding pose change score, and the mapping relationship between the two is likewise stored in the memory of the imaging device 10 in the form of a mapping table.
  • The focus evaluation function value and the focus score also have a mapping relationship: the smaller the focus evaluation function value, the larger the corresponding focus score, and the mapping relationship between the two is also stored in the memory of the imaging device 10 in the form of a mapping table.
  • The processor 11 also needs to obtain a jitter weight, a pose change weight, and a focus weight, all of which are preset.
  • The processor 11 finds the jitter score corresponding to the currently acquired jitter amplitude from the mapping table of jitter amplitudes and jitter scores, finds the pose change score corresponding to the current pose change value from the mapping table of pose change values and pose change scores, and finds the focus score corresponding to the current focus evaluation function value from the mapping table of focus evaluation function values and focus scores.
  • The processor 11 then calculates the total score value from the three scores and their respective weights, and the current sharpness can be determined based on the total score value.
  • The total score value and the sharpness have a mapping relationship, which is stored in the memory of the imaging device 10 in the form of a mapping table.
  • The processor 11 determines the current sharpness corresponding to the total score value through the mapping table, and controls the imaging device 10 to switch from the spot focus-spot metering mode to the global focus-global metering mode when the current sharpness is less than the preset sharpness. Alternatively, the processor 11 may also compare the total score value with a preset total score value to determine whether the current sharpness is less than the preset sharpness; that is, the current sharpness is considered to be less than the preset sharpness when the total score value is greater than the preset total score value, and the current sharpness is considered to be greater than the preset sharpness when the total score value is less than the preset total score value. The processor 11 then controls the switching of the shooting mode based on the determination result.
  • In other embodiments, determining the current sharpness of the imaging device 10 according to the motion data and the focus evaluation function value in step S15 includes:
  • S152: Determine that the current sharpness is less than the preset sharpness when the motion data is greater than a preset motion threshold and the focus evaluation function value is less than a preset threshold.
  • Step S152 may be implemented by the processor 11. That is, the processor 11 may be further configured to determine that the current sharpness is less than the preset sharpness when the motion data is greater than the preset motion threshold and the focus evaluation function value is less than the preset threshold.
  • Taking the motion data including the jitter amplitude as an example, the preset motion threshold includes a preset jitter threshold, and the processor 11 needs to jointly judge the relationship between the current sharpness and the preset sharpness according to the two parameters of the jitter amplitude and the focus evaluation function value.
  • If the jitter amplitude is greater than the preset jitter threshold and the focus evaluation function value is less than the preset threshold, the processor 11 determines that the current sharpness is less than the preset sharpness. If the jitter amplitude is less than the preset jitter threshold, the processor 11 determines that the current sharpness is greater than the preset sharpness, regardless of whether the focus evaluation function value is less than the preset threshold. If the focus evaluation function value is greater than the preset threshold, the processor 11 determines that the current sharpness is greater than the preset sharpness, regardless of whether the jitter amplitude is greater than the preset jitter threshold.
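The joint threshold test above reduces to a conjunction: the current sharpness is judged below the preset sharpness only when the jitter amplitude exceeds its threshold AND the focus evaluation value falls below its threshold. A minimal sketch (parameter names are illustrative):

```python
# Minimal sketch of the two-parameter joint judgment described above: both
# conditions must hold for the sharpness to be considered below the preset.
def sharpness_below_preset(jitter_amplitude, focus_value,
                           jitter_threshold, focus_threshold):
    return jitter_amplitude > jitter_threshold and focus_value < focus_threshold
```

If either condition fails, the current sharpness is treated as above the preset and no mode switch is triggered.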
  • When the motion data includes a plurality of values, the motion data may include at least one of the shake amplitude of the imaging device 10 in one or more directions, the pose change value in one or more directions, the motion speed in one or more directions, the acceleration in one or more directions, and the angular velocity in one or more directions. The following takes motion data that includes the shake amplitude in one direction and the pose change value in one direction as an example.
  • In this case, the preset motion threshold includes a preset jitter threshold and a preset pose change threshold.
  • The processor 11 needs to jointly judge the relationship between the current sharpness and the preset sharpness according to three parameters: the shake amplitude, the pose change value, and the focus evaluation function value.
  • If the shake amplitude is greater than the preset shake threshold, the pose change value is greater than the preset pose change threshold, and the focus evaluation function value is less than the preset threshold, the processor 11 determines that the current sharpness is less than the preset sharpness.
  • If the shake amplitude is less than the preset shake threshold, the processor 11 determines that the current sharpness is greater than the preset sharpness, regardless of whether the pose change value is greater than the preset pose change threshold or whether the focus evaluation function value is less than the preset threshold.
  • If the pose change value is less than the preset pose change threshold, the processor 11 determines that the current sharpness is greater than the preset sharpness, regardless of whether the shake amplitude is greater than the preset shake threshold or whether the focus evaluation function value is less than the preset threshold.
  • If the focus evaluation function value is greater than the preset threshold, the processor 11 determines that the current sharpness is greater than the preset sharpness, regardless of whether the shake amplitude is greater than the preset shake threshold or whether the pose change value is greater than the preset pose change threshold.
  • It should be noted that the method for determining the current sharpness of the imaging device 10 according to the motion data and the focus evaluation function value is not limited to the content described above; in actual applications, other methods may also be adopted as needed, as long as the shooting mode of the imaging device 10 can be adaptively switched and adjusted under the corresponding conditions, which is not specifically limited here.
  • In some embodiments, the image acquisition method according to the embodiment of the present invention further includes:
  • S3: Control the imaging device 10 to switch between the two shooting modes according to a control instruction.
  • step S3 may be implemented by the processor 11. That is, the processor 11 can also be used to control the imaging device 10 to switch between the two shooting modes according to a control instruction.
  • The control instruction may be input by a user, or may be issued autonomously by the processor 11 based on the switching criteria for the two shooting modes.
  • When the control instruction is input by a user, the user can input it through physical keys on the imaging device 10; for example, the user presses a button on the imaging device 10 to move, expand, or shrink the focus area of the imaging device 10. Alternatively, the user can input the control instruction through a touch screen of the imaging device 10; for example, the user performs operations such as tapping, pinching, or stretching on the touch screen to move, expand, or shrink the focus area of the imaging device 10. Alternatively, the user can input the control instruction through a device in communication with the imaging device 10, and the instruction is transmitted to the imaging device 10. For example, the imaging device 10 is mounted on a gimbal, and the gimbal is further mounted on a drone; the user inputs control instructions through a physical button or a touch screen on the remote controller, the remote controller sends the control instructions to the drone, and the drone then sends the control instructions to the imaging device 10. As another example, the imaging device 10 is mounted on a handheld gimbal, and the handheld gimbal can be connected in communication with a mobile electronic device; the user inputs the control instruction through the mobile electronic device, and the mobile electronic device transmits the control instruction to the handheld gimbal, which then transmits the control instruction to the imaging device 10; or the mobile electronic device can communicate directly with the imaging device 10 and transmit the control instruction to the imaging device 10; or the handheld part of the handheld gimbal is provided with a display screen, and the user can input control instructions through touch operations on that display screen.
  • When the control instruction is issued autonomously by the processor 11 based on the switching criteria for the two shooting modes, the processor 11 issues the control instruction autonomously based on the shooting mode switching criteria described in any one of the foregoing embodiments.
  • In this way, the imaging device 10 can not only switch the shooting mode based on a control instruction input by the user, but also switch the shooting mode automatically. For users with weaker photography skills, the automatic switching of the shooting mode can help them obtain higher-quality captured images; for users with strong photography skills, switching the shooting mode based on control instructions input by the user enables them to obtain captured images according to their own preferences. The user experience is thus greatly improved.
  • In some embodiments, the image acquisition method according to the embodiment of the present invention further includes:
  • S4: Process the captured image to determine a subject area within a predetermined focus area; and
  • S5: Control the imaging device 10 to focus on the subject area.
  • Steps S4 and S5 may be implemented by the processor 11. That is, the processor 11 may be further configured to process the captured image to determine a subject area within the predetermined focus area in the captured image, and to control the imaging device 10 to focus on the subject area.
  • Specifically, when the imaging device 10 works in the spot focus-spot metering mode, the imaging device 10 focuses on a predetermined focus area. Further, the imaging device 10 may process the captured image to determine whether a subject (for example, a human face) exists in the predetermined focus area. If a subject exists in the captured image, the area corresponding to the subject is the subject area, and the imaging device 10 further focuses on the subject area to make the subject clearer.
  • When there are multiple faces in the captured image, the processor 11 may perform identity authentication on the multiple faces after processing the captured image to recognize them. If a target user (for example, the holder of the imaging device 10) exists among the multiple faces, the processor 11 uses the area corresponding to the target user as the subject area, and the imaging device 10 further focuses on the subject area so that the target user is rendered the clearest.
  • In this way, the clarity of the subject area can be further improved, and the user experience is better.
  • When the processor 11 processes the captured image, it may process only the image of the predetermined focus area, thereby reducing the amount of data that the processor 11 needs to process.
  • In some embodiments, the image acquisition method according to the embodiment of the present invention further includes:
  • S22: Control the imaging device 10 to switch from the global focus-global metering mode to the spot focus-spot metering mode when the current sharpness is greater than a predetermined sharpness.
  • Step S22 may be implemented by the processor 11. That is, the processor 11 may be further configured to control the imaging device 10 to switch from the global focus-global metering mode to the spot focus-spot metering mode when the current sharpness is greater than the predetermined sharpness.
  • Specifically, when the imaging device 10 works in the global focus-global metering mode, the processor 11 also acquires the current sharpness of the captured image, where the current sharpness may be determined by the determination method described in any of the foregoing embodiments. After determining the current sharpness of the captured image, the processor 11 compares the current sharpness with the predetermined sharpness. If the current sharpness is less than the predetermined sharpness, the imaging device 10 remains in the global focus-global metering mode; if the current sharpness is greater than the predetermined sharpness, the imaging device 10 switches the shooting mode to the spot focus-spot metering mode.
  • It can be understood that when the motion of the imaging device 10 is drastic, the imaging device 10 needs to switch to the global focus-global metering mode so that the captured image has high overall sharpness. After the imaging device 10 has switched to the global focus-global metering mode, if it is detected that the sharpness of the captured image is greater than the predetermined sharpness, it means that the motion of the imaging device 10 tends to be smooth or the imaging device 10 is in a stationary state, or that a target subject has appeared in the scene. Therefore, the imaging device 10 needs to be controlled to switch from the global focus-global metering mode to the spot focus-spot metering mode to make the subject part clearer.
  • The predetermined sharpness and the preset sharpness may take the same value or different values. In some embodiments, the value of the predetermined sharpness is different from that of the preset sharpness, and the value of the predetermined sharpness is greater than that of the preset sharpness.
  • In this way, the following problem can be avoided: just after the imaging device 10 has switched to the global focus-global metering mode, the current sharpness of the captured image improves and exceeds the preset sharpness, and the imaging device 10 switches directly back to the spot focus-spot metering mode, causing the switching to be too frequent and increasing the power consumption of the imaging device 10.
  • In some embodiments, the image acquisition method according to the embodiment of the present invention further includes:
  • S6: Determine a current focus area of the imaging device 10 according to a historical focus record when the imaging device 10 switches from the global focus-global metering mode to the spot focus-spot metering mode.
  • Step S6 may be implemented by the processor 11. That is, the processor 11 may be further configured to determine the current focus area of the imaging device 10 according to the historical focus record when the imaging device 10 switches from the global focus-global metering mode to the spot focus-spot metering mode.
  • Specifically, the current focus area is the last focus area used by the imaging device 10 in the spot focus-spot metering mode. When the imaging device 10 switches from the global focus-global metering mode to the spot focus-spot metering mode, the current focus area is also the predetermined focus area.
  • The focus area in the historical focus record is the focus area determined by the imaging device 10 based on recognition of the subject. When the imaging device 10 determines a focus area, the coordinates of the pixels at multiple edges of the focus area are recorded and stored in the memory.
  • When switching back, the imaging device 10 determines the current focus area based on the coordinates of the edge pixels stored in the memory. Storing only the coordinates of the pixels at the edges, instead of all the pixel coordinates that fall within the historical focus area, reduces the amount of data stored.
  • In this way, the current focus area in the current spot focus-spot metering mode is determined by the historical focus record from the last spot focus-spot metering mode. In application scenarios such as changing the scene from the subject to the surrounding environment and then back from the surrounding environment to the subject, the imaging device can quickly focus on the subject part, the quality of the captured image is better, and the user experience is better.
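The edge-coordinate storage scheme above could look like the following sketch. It assumes a rectangular focus area described by two corner pixels (an illustrative assumption; the patent only says that edge pixel coordinates, rather than all interior pixels, are stored):

```python
# Illustrative sketch: record only the corner coordinates of a rectangular
# focus area instead of every pixel inside it, then restore the current
# focus area from the stored corners when switching back to spot mode.
def record_focus_area(memory, corners):
    """corners: [(x0, y0), (x1, y1)] - top-left and bottom-right pixels."""
    memory["historical_focus"] = list(corners)

def restore_focus_area(memory):
    (x0, y0), (x1, y1) = memory["historical_focus"]
    # Expand the stored corners back into the full set of focus-area pixels.
    return [(x, y) for y in range(y0, y1 + 1) for x in range(x0, x1 + 1)]
```

For an area of w×h pixels, this stores 2 coordinate pairs instead of w·h, which is the data-reduction benefit described above.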
  • In some embodiments, the image acquisition method according to the embodiment of the present invention further includes:
  • S7: Obtain the working mode of the imaging device 10.
  • Step S2 of controlling the imaging device 10 to switch between the two shooting modes according to the current sharpness, the two shooting modes including the spot focus-spot metering mode and the global focus-global metering mode, includes:
  • S23: Control the imaging device 10 to switch between the two shooting modes according to the working mode and the current sharpness.
  • Steps S7 and S23 may be implemented by the processor 11. That is, the processor 11 may also be configured to obtain the working mode of the imaging device 10 and to control the imaging device 10 to switch between the two shooting modes according to the working mode and the current sharpness.
  • The working mode may include a following mode and a free mode.
  • When the imaging device 10 works in the following mode, the imaging device 10 can track and shoot the subject in the scene. The subject may be stationary or moving; when the subject is moving, the imaging device 10 moves according to the moving state of the subject so that the subject is always within the field of view of the imaging device 10.
  • When the imaging device 10 works in the free mode, the imaging device 10 does not track the subject.
  • When the imaging device 10 works in the following mode, the scene is generally considered to contain a prominent subject, and the spot focus-spot metering mode is generally used for shooting.
  • During shooting, the processor 11 still obtains the current sharpness of the captured image, and the processor 11 can perform the following actions based on the current sharpness:
  • The priority of the working mode is higher than that of the current sharpness; in this case, regardless of the magnitude of the current sharpness, the imaging device 10 always works in the spot focus-spot metering mode; or
  • The priority of the current sharpness is higher than that of the working mode; in this case, if the current sharpness is less than the preset sharpness, the current working mode is not considered, and the processor 11 controls the imaging device 10 to switch from the spot focus-spot metering mode to the global focus-global metering mode.
  • It can be understood that when the priority of the working mode is higher than that of the current sharpness, the imaging device 10 always keeps working in the spot focus-spot metering mode, and the subject in the captured image is always clear. If the priority of the current sharpness is higher than that of the working mode, then once the current sharpness is less than the preset sharpness, the imaging device 10 switches from the spot focus-spot metering mode to the global focus-global metering mode, so that all areas in the captured image can be displayed clearly.
  • When the imaging device 10 works in the free mode, the scene shot by the imaging device 10 is generally considered to have no prominent subject, and the imaging device 10 usually uses the global focus-global metering mode for shooting so that all areas in the image are displayed clearly.
  • During shooting, the processor 11 still obtains the current sharpness of the captured image, and the processor 11 can perform the following actions based on the current sharpness:
  • The priority of the working mode is higher than that of the current sharpness; in this case, regardless of the magnitude of the current sharpness, the imaging device 10 always works in the global focus-global metering mode; or
  • The priority of the current sharpness is higher than that of the working mode; in this case, if the current sharpness is greater than the predetermined sharpness, the current working mode is not considered, and the processor 11 controls the imaging device 10 to switch from the global focus-global metering mode to the spot focus-spot metering mode.
  • It can be understood that when the priority of the working mode is higher than that of the current sharpness, the imaging device 10 always maintains the global focus-global metering mode for shooting, and all areas in the captured image can always be displayed clearly. If the priority of the current sharpness is higher than that of the working mode, then once the current sharpness is greater than the predetermined sharpness, the imaging device 10 switches from the global focus-global metering mode to the spot focus-spot metering mode, which not only ensures the overall clarity of the captured image but also optimizes the sharpness of the subject in the captured image.
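The priority logic for the following mode and the free mode described above can be sketched as a single decision function. The mode names and the boolean priority flag are illustrative; the patent describes both priority orderings without fixing one.

```python
# Hedged sketch of the working-mode / sharpness priority logic described above.
SPOT = "spot focus-spot metering"
GLOBAL = "global focus-global metering"

def next_shooting_mode(working_mode, current_sharpness,
                       preset_sharpness, predetermined_sharpness,
                       sharpness_has_priority):
    if working_mode == "following":
        # Following mode favors the spot mode unless sharpness outranks it.
        if sharpness_has_priority and current_sharpness < preset_sharpness:
            return GLOBAL
        return SPOT
    else:
        # Free mode favors the global mode unless sharpness outranks it.
        if sharpness_has_priority and current_sharpness > predetermined_sharpness:
            return SPOT
        return GLOBAL
```

When the working mode has priority, the function always returns the mode preferred by that working mode; when sharpness has priority, the comparison against the preset (or predetermined) sharpness can override it.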
  • In some embodiments, the current sharpness can be determined according to the motion data.
  • For example, the processor 11 judges the relationship between the current sharpness and the preset sharpness or the predetermined sharpness according to the relationship between the motion data and the preset motion threshold.
  • Alternatively, the processor 11 determines the current sharpness corresponding to the motion data from a preset mapping relationship between motion data and sharpness, and then compares the current sharpness with the preset sharpness or the predetermined sharpness.
  • Alternatively, the processor 11 directly uses the focus evaluation function value of the captured image as the current sharpness, and then compares the current sharpness with the preset sharpness or the predetermined sharpness.
  • Alternatively, the processor 11 jointly determines the current sharpness according to the motion data and the focus evaluation function value, and then compares the current sharpness with the preset sharpness or the predetermined sharpness.
  • For the specific manner, reference can be made to the foregoing content, which is not repeated here.
  • Any process or method description in a flowchart or otherwise described herein can be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing a particular logical function or step of the process. The scope of the preferred embodiments of the present invention includes alternative implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order, depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention pertain.
  • The logic and/or steps represented in a flowchart or otherwise described herein, for example, an ordered list of executable instructions that may be considered to implement logical functions, may be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device).
  • a "computer-readable medium” may be any device that can contain, store, communicate, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • computer-readable media include the following: electrical connections (electronic devices) with one or more wirings, portable computer disk cartridges (magnetic devices), random access memory (RAM), Read-only memory (ROM), erasable and editable read-only memory (EPROM or flash memory), fiber optic devices, and portable optical disk read-only memory (CDROM).
  • the computer-readable medium may even be paper or other suitable medium on which the program can be printed, because, for example, by optically scanning the paper or other medium, followed by editing, interpretation, or other suitable Processing to obtain the program electronically and then store it in computer memory.
  • A person of ordinary skill in the art can understand that all or part of the steps carried by the foregoing method embodiments can be completed by a program instructing the relevant hardware. The program can be stored in a computer-readable storage medium, and when executed, the program includes one of the steps of the method embodiments or a combination thereof.
  • In addition, each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist separately physically, or two or more units may be integrated into one module. The above integrated modules can be implemented in the form of hardware or in the form of software functional modules. When an integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • The aforementioned storage medium may be a read-only memory, a magnetic disk, or an optical disk.


Abstract

An image acquisition method, an imaging device (10), and a shooting system (100). The image acquisition method includes: detecting the current sharpness of a captured image of the imaging device (10) (S1); and controlling the imaging device (10) to switch between two shooting modes according to the current sharpness, the two shooting modes including a spot focus-spot metering mode and a global focus-global metering mode (S2).

Description

Image Acquisition Method, Imaging Device, and Shooting System
Technical Field
The present invention relates to the field of imaging technology, and in particular to an image acquisition method, an imaging device, and a shooting system.
Background
At present, cameras usually switch between a spot focus function and a global focus function based on changes in the distance between the subject in the shooting scene and the camera; such cameras are usually binocular cameras. However, current action cameras are usually monocular cameras and cannot perform distance detection. Therefore, they also cannot switch between the spot focus function and the global focus function through distance detection, which further affects the imaging quality of the images captured by the action camera.
Summary of the Invention
Embodiments of the present invention provide an image acquisition method, an imaging device, and a shooting system.
The image acquisition method according to an embodiment of the present invention includes: detecting the current sharpness of a captured image of an imaging device; and controlling the imaging device to switch between two shooting modes according to the current sharpness, the two shooting modes including a spot focus-spot metering mode and a global focus-global metering mode.
The imaging device according to an embodiment of the present invention includes a processor. The processor is configured to detect the current sharpness of a captured image of the imaging device, and to control the imaging device to switch between two shooting modes according to the current sharpness, the two shooting modes including a spot focus-spot metering mode and a global focus-global metering mode.
The shooting system according to an embodiment of the present invention includes the above imaging device and a carrier. The imaging device is mounted on the carrier.
In the image acquisition method, imaging device, and shooting system of the embodiments of the present invention, the imaging device is controlled to switch between the spot focus-spot metering mode and the global focus-global metering mode based on the current sharpness of the captured image, so that the shooting mode of the imaging device can be adaptively adjusted according to the current sharpness, which is conducive to obtaining higher imaging quality under different imaging conditions.
Additional aspects and advantages of embodiments of the present invention will be given in part in the following description, and in part will become apparent from the following description or be learned through practice of the embodiments of the present invention.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and easy to understand from the description of the embodiments in conjunction with the following drawings, in which:
FIG. 1 is a schematic structural diagram of a shooting system according to some embodiments of the present invention.
FIG. 2 to FIG. 17 are schematic flowcharts of image acquisition methods according to some embodiments of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below. Examples of the embodiments are shown in the drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary and are only used to explain the present invention, and should not be construed as limiting the present invention.
At present, sports have gradually become an important way to relieve stress or live healthily. Many people hope to use a camera to record themselves participating in sports and to share their various sports experiences with more friends through video. However, it is difficult for ordinary cameras to capture the exciting moments of sports. Therefore, action cameras that are waterproof and drop-resistant and that capture relatively stable and clear images or videos well satisfy the shooting needs of sports enthusiasts.
At present, action cameras are generally monocular cameras, and considering that they are mainly suitable for shooting sports scenes, action cameras usually only have a global focus-global metering function.
However, for actual shooting needs, the scene applicability of such action cameras is narrow and cannot satisfy users' diverse shooting needs.
On this basis, referring to FIG. 1, the present invention provides a shooting system 100. The shooting system 100 includes an imaging device 10 and a carrier 20. The imaging device 10 is mounted on the carrier 20.
The imaging device 10 may be a monocular imaging device 10, that is, a device with only one camera; or the imaging device 10 may be a binocular imaging device 10, that is, a device with two cameras; or the imaging device 10 may be a multi-camera imaging device 10, that is, a device with multiple cameras; or the imaging device 10 may be a device integrated with one or more cameras, such as a mobile phone. In the specific embodiments of the invention, the description takes the imaging device 10 being a monocular imaging device 10 as an example.
The carrier 20 may be a gimbal or a movable platform. The imaging device 10 may be mounted directly on a gimbal; or the imaging device 10 may be mounted directly on a movable platform; or the imaging device 10 may be mounted indirectly on a movable platform, that is, the imaging device 10 is first mounted on a gimbal, and the gimbal carrying the imaging device 10 is further mounted on the movable platform. The gimbal may be a handheld gimbal, and the movable platform may be a drone, a vehicle, a ship, and the like.
In a specific embodiment of the present invention, the imaging device 10 may be mounted directly on a gimbal, and the imaging device 10 and the gimbal may form an inseparable integrated structure.
In a specific embodiment of the present invention, the imaging device 10 is an action camera. The shooting modes of the action camera of the embodiments of the present invention include a spot focus-spot metering mode and a global focus-global metering mode. When a prominent subject exists in the shooting scene, the action camera of the embodiments of the present invention may work in the spot focus-spot metering mode; when no prominent subject exists in the shooting scene, the action camera of the embodiments of the present invention may work in the global focus-global metering mode. In this way, the shooting mode of the imaging device 10 can be adjusted under different shooting needs, which is conducive to obtaining higher imaging quality under different imaging conditions.
Referring to FIG. 1 and FIG. 2 together, the present invention also provides an image acquisition method for the above imaging device 10. The image acquisition method includes:
S1: Detect the current sharpness of a captured image of the imaging device 10; and
S2: Control the imaging device 10 to switch between two shooting modes according to the current sharpness, the two shooting modes including a spot focus-spot metering mode and a global focus-global metering mode.
Referring again to FIG. 1, the image acquisition method of the embodiments of the present invention may be implemented by the imaging device 10 of the embodiments of the present invention. Of course, the image acquisition method of any embodiment of the present invention may also be implemented by other devices different from the imaging device, which is not limited here. Here, only the imaging device 10 is taken as an example. The imaging device 10 includes a processor 11. Both step S1 and step S2 may be implemented by the processor 11. That is, the processor 11 may be configured to detect the current sharpness of the captured image of the imaging device 10, and to control the imaging device 10 to switch between the two shooting modes according to the current sharpness, the two shooting modes including the spot focus-spot metering mode and the global focus-global metering mode.
Specifically, when the imaging device 10 works in the spot focus-spot metering mode, the imaging device 10 focuses on a partial area of the scene of the imaging device 10 (generally, the partial area is the subject area of the scene). The imaging device 10 performs the focusing operation based on the pixel values of the multiple pixels corresponding to this partial area, and performs the metering operation based on the pixel values of the multiple pixels corresponding to the focused partial area. When the imaging device 10 works in the global focus-global metering mode, the imaging device 10 focuses on the entire area in the field of view of the imaging device 10. The imaging device 10 performs the focusing operation based on the pixel values of all pixels corresponding to the entire area, and performs the metering operation based on the pixel values of all pixels corresponding to the entire area.
At present, an imaging device 10 with the function of switching between the spot focus-spot metering mode and the global focus-global metering mode is usually a binocular imaging device 10. The binocular imaging device 10 measures the distance of objects in the scene based on the binocular stereo vision method, and uses the change in distance as the basis for switching between the spot focus-spot metering mode and the global focus-global metering mode: after the change in the distance of the subject in the scene exceeds a certain set value, the spot focus-spot metering mode is switched to the global focus-global metering mode. However, when the imaging device 10 is a monocular imaging device 10, the monocular imaging device 10 cannot obtain the distance between the objects in the scene and itself, and thus cannot switch shooting modes based on distance change information.
The image acquisition method of the embodiments of the present invention uses the detected current sharpness of the captured image of the imaging device 10 as the criterion for switching the shooting mode, in order to switch the shooting mode of the imaging device 10.
Specifically, referring to FIG. 3, step S2 of controlling the imaging device 10 to switch between the two shooting modes according to the current sharpness includes:
S21: Control the imaging device 10 to switch from the spot focus-spot metering mode to the global focus-global metering mode when the current sharpness is less than a preset sharpness.
Referring again to FIG. 1, step S21 may be implemented by the processor 11. That is, the processor 11 may be configured to control the imaging device 10 to switch from the spot focus-spot metering mode to the global focus-global metering mode when the current sharpness is less than the preset sharpness.
The captured image may be a static image or a dynamic image, such as an animated image or a video. The captured image may be a preview image, or may be the target image finally output to the user.
The current sharpness of the captured image can serve as the basis for switching the shooting mode.
It can be understood that when the imaging device 10 works in the spot focus-spot metering mode, there may be a prominent subject in the shooting scene. At this time, if the shooting scene changes greatly (for example, the imaging device 10 moves), there may be no prominent subject in the current shooting scene. If the imaging device 10 does not switch to the global focus-global metering mode at this time and still focuses on a partial area of the current shooting scene, then only a small portion of the objects in the captured image will be clear, most objects will be blurry, and the sharpness of the whole captured image will not be high. Therefore, when it is detected that the current sharpness of the captured image is less than the preset sharpness, the processor 11 needs to control the imaging device 10 to switch from the spot focus-spot metering mode to the global focus-global metering mode, so as to guarantee the imaging quality of the captured image.
The image acquisition method of the embodiments of the present invention controls the imaging device 10 to switch between the spot focus-spot metering mode and the global focus-global metering mode based on the current sharpness of the captured image, so that the shooting mode of the imaging device 10 can be adaptively adjusted according to the current sharpness, which is conducive to obtaining higher imaging quality under different imaging conditions.
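The adaptive switching of steps S1 and S2/S21 can be sketched as a small control loop. The sharpness detector here is a placeholder for any of the determination methods in this disclosure (motion data, focus evaluation value, or both); the string mode names are illustrative labels.

```python
# Minimal sketch of steps S1 and S21: detect the current sharpness of the
# captured image (step S1), then fall back from spot focus-spot metering to
# global focus-global metering when the sharpness is below the preset (S21).
def acquire_image_mode(current_mode, detect_sharpness, preset_sharpness):
    sharpness = detect_sharpness()  # step S1 (any detection method)
    if current_mode == "spot focus-spot metering" and sharpness < preset_sharpness:
        return "global focus-global metering"  # step S21
    return current_mode
```

This captures the one-way S21 transition described here; the reverse transition (S22, back to spot mode above a predetermined sharpness) is described separately above.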
Referring to FIG. 4, in some embodiments, step S1 of detecting the current sharpness of the captured image of the imaging device 10 includes:
S11: Detect motion data of the imaging device 10; and
S12: Determine the current sharpness according to the motion data.
Referring again to FIG. 1, both step S11 and step S12 may be implemented by the processor 11. That is, the processor 11 may be configured to detect the motion data of the imaging device 10 and to determine the current sharpness according to the motion data.
其中,成像装置10的运动数据包括成像装置10在一个或多个方向上的抖动幅度、在一个或多个方向上的位姿变化值、在一个或多个方向上的运动速度、在一个或多个方向上的加速度、在一个或多个方向上的角速度、在一个或多个方向上的角加速度中的至少一种。也即是说,成像装置10的运动数据可以仅包括上述六种数据中的任意一种,也可以同时包括其中的任意两种、任意三种、任意四种、任意五种或全部六种,例如同时包括一个或多个方向上的抖动幅度以及一个或多个方向上的位姿变化值两种。
成像装置10在一个或多个方向上的抖动幅度指的是成像装置10当前的位姿相对于当前的参考位姿的偏移量;成像装置10在一个或多个方向上的位姿变化值指的是成像装置10在一段时间内的位姿的变化量;成像装置10在一个或多个方向上的运动速度指的是成像装置10在拍摄过程中的多个时刻下的运动速度;成像装置10在一个或多个方向上的加速度指的是成像装置10在一段时间内的运动速度的变化值;成像装置10在一个或多个方向上的角速度指的是成像装置10在拍摄过程中的多个时刻下的角速度;成像装置10在一个或多个方向上的角加速度指的是成像装置10在一段时间内的角速度的变化值。
成像装置10的运动方向可以是横滚方向、俯仰方向、偏航方向中的任意一种,也可以是横滚方向和俯仰方向两种,或横滚方向和偏航方向两种,或俯仰方向和偏航方向两种,也可以是横滚方向、俯仰方向、以及偏航方向三种。
举例来说,当成像装置10的运动方向包括横滚方向时,抖动幅度指的是成像装置10在横滚方向上的抖动幅度,位姿变化值指的是成像装置10在横滚方向上的位姿变化值等。当成像装置10的运动方向包括横滚方向及偏航方向时,抖动幅度包括成像装置10在横滚方向上的抖动幅度及在偏航方向上的抖动幅度,位姿变化值包括成像装置10在横滚方向上的位姿变化值及在偏航方向上的位姿变化值等。当成像装置10的运动方向包括横滚方向、俯仰方向、偏航方向时,抖动幅度包括成像装置10在横滚方向上的抖动幅度、在偏航方向上的抖动幅度、及在俯仰方向上的抖动幅度,位姿变化值包括成像装置10在横滚方向上的位姿变化值、在偏航方向上的位姿变化值、及在俯仰方向上的位姿变化值等。
成像装置10的运动数据可以通过运动传感器获取,运动传感器可以装设在成像装置10上或者装设在成像装置10搭载的载体20上。当运动传感器装设在成像装置10上时,处理器11从运动传感器中直接读取成像装置10的运动数据;当运动传感器装设在载体20上时,首先由成像装置10的通信模块接收载体20发送的运动数据,通信模块再将接收到的运动数据传送到处理器11中,或者,可以基于载体20的运动数据、成像装置10与载体20之间的转换关系确定成像装置10的运动数据。其中,运动传感器可以是陀螺仪、加速度传感器、惯性测量单元等等。
成像装置10的运动数据表征了成像装置10的运动状态。处理器11可基于成像装置10的运动状态判断当前的拍摄场景是否发生了较大的变化,如超过预设的变化范围,并在拍摄场景发生了较大变化时,可以控制成像装置10从点对焦-点测光模式切换到全局对焦-全局测光模式。
具体地,请参阅图5,在一个实施例中,处理器11可基于下述方式判断当前的拍摄场景是否发生了较大的变化,也即是说,步骤S12根据运动数据确定当前清晰度包括:
S121:判断运动数据是否大于预设运动阈值;和
S122:若是,则确定当前清晰度小于预设清晰度。
请再参阅图1,步骤S121和步骤S122均可以由处理器11实现。也即是说,处理器11还可用于判断运动数据是否大于预设运动阈值,以及在运动数据大于预设运动阈值时确定当前清晰度小于预设清晰度。
具体地,预设清晰度对应有一个预设运动阈值,其中,预设清晰度和预设运动阈值均是基于前期的大量实验数据来确定的。在运动数据大于预设运动阈值时,说明此时成像装置10处于运动状态且运动较为快速。
当运动数据仅包括一个时,在该运动数据大于预设运动阈值时即确定当前清晰度小于预设清晰度。以运动数据为抖动幅度,预设运动阈值为预设抖动幅度阈值为例,当抖动幅度大于预设抖动幅度阈值时,说明成像装置10的运动较为剧烈,对应的拍摄的场景变化也较大,此时成像装置10基于点对焦-点测光模式拍摄的拍摄图像的当前清晰度会低于预设清晰度,处理器11需要控制成像装置10切换到全局对焦-全局测光模式下工作以确保拍摄图像具有较高的清晰度。
当运动数据包括多个时,当前清晰度与预设清晰度的大小关系可以由多个运动数据共同确定。以运动数据包括抖动幅度及位姿变化值为例,当抖动幅度大于预设抖动幅度阈值,并且位姿变化值大于预设位姿变化阈值时,确定当前清晰度小于预设清晰度;或者,为抖动幅度和位姿变化值分别设定一个抖动权值和一个位姿变化权值,并且基于抖动幅度确定一个抖动分值,基于位姿变化值确定一个位姿变化分值,根据抖动分值、抖动权值、位姿变化分值和位姿变化权值计算运动分值,当运动分值大于预设运动分值时,确定运动数据大于预设运动阈值,进一步确定当前清晰度小于预设清晰度。
可以理解,当运动数据包括多个时,确定当前清晰度与预设清晰度之间的大小关系除了上述说明的内容,在实际应用中,也可以采用其它方式,例如,可以对多个运动数据进行优先级排序,并根据优先级高低及运动数据来确定当前清晰度与预设清晰度之间的大小关系,此处不做具体限定。
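上述多运动数据的加权判断方式可示意如下。权值、线性映射及预设运动分值均为假设的示例数值:

```python
# 示意性草图:用权重法把抖动分值与位姿变化分值合成运动分值,再与预设运动分值比较。
SHAKE_WEIGHT, POSE_WEIGHT = 0.6, 0.4  # 抖动权值、位姿变化权值(假设)
PRESET_MOTION_SCORE = 50.0            # 预设运动分值(假设)

def shake_score(shake_amp: float) -> float:
    # 假设:抖动幅度(度)线性映射为分值,抖动越大分值越大,上限100
    return min(shake_amp * 20.0, 100.0)

def pose_score(pose_change: float) -> float:
    # 假设:位姿变化值(度)线性映射为分值,上限100
    return min(pose_change * 10.0, 100.0)

def motion_exceeds_threshold(shake_amp: float, pose_change: float) -> bool:
    """运动分值大于预设运动分值 → 确定当前清晰度小于预设清晰度。"""
    score = shake_score(shake_amp) * SHAKE_WEIGHT + pose_score(pose_change) * POSE_WEIGHT
    return score > PRESET_MOTION_SCORE
```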
请参阅图6,在一个实施例中,处理器11还可基于下述方式判断当前的拍摄场景是否发生了较大的变化,也即是说,步骤S12根据运动数据确定当前清晰度包括:
S123:获取预设运动数据与清晰度的映射关系;和
S124:根据映射关系确定与运动数据对应的当前清晰度。
请再参阅图1,步骤S123和步骤S124均可以由处理器11实现。也即是说,处理器11还可用于获取预设运动数据与清晰度的映射关系,以及根据映射关系确定与运动数据对应的当前清晰度。
具体地,预设运动数据与清晰度的映射关系包括以下几种:(1)预设运动数据和清晰度均为一个具体的值,一个预设运动数据与一个清晰度对应;(2)预设运动数据为一个范围,清晰度为一个具体的值,一个预设运动数据范围与一个清晰度对应;(3)预设运动数据为一个具体的值,清晰度为一个范围,一个预设运动数据与一个清晰度范围对应;(4)预设运动数据和清晰度均为一个范围,一个预设运动数据范围与一个清晰度范围对应。预设运动数据与清晰度的映射关系是基于前期的大量实验数据来确定的。预设运动数据与清晰度之间的映射关系可以以映射表的形式存储在成像装置10的存储器中。
示例性的,处理器11在成像装置10工作时不断获取成像装置10的运动数据,并基于获取到的运动数据在映射表中寻找与该运动数据对应的当前清晰度。随后,再将确定出来的当前清晰度与预设清晰度进行比较。若当前清晰度小于预设清晰度,则控制成像装置10切换到全局对焦-全局测光模式下工作以确保拍摄图像具有较高的清晰度。
可以理解的是,预设运动数据与清晰度的映射关系的具体体现形式除了上述的内容,在实际应用中,还可以采用其它体现形式,例如映射图,此处不做具体限定。
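步骤S123~S124所述的映射表查找可示意如下,此处采用形式(2)的"运动数据范围→清晰度"映射,表中数值均为假设示例:

```python
# 示意性草图:由运动数据在映射表中查找对应的当前清晰度。
import bisect

# 按运动数据区间上界升序排列:(区间上界, 对应清晰度),数值为假设
MAPPING_TABLE = [(1.0, 0.9), (3.0, 0.7), (6.0, 0.5), (float("inf"), 0.2)]

def sharpness_from_motion(motion: float) -> float:
    """返回与运动数据所落区间对应的清晰度,运动越剧烈清晰度越低。"""
    uppers = [u for u, _ in MAPPING_TABLE]
    idx = bisect.bisect_left(uppers, motion)  # 定位运动数据所在区间
    return MAPPING_TABLE[idx][1]
```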
请参阅图7,在某些实施方式中,成像装置10搭载在载体20上,运动数据包括成像装置10在一个或多个方向上的抖动幅度。步骤S11检测成像装置10的运动数据包括:
S111:获取载体20对成像装置10进行抖动补偿时的期望补偿幅度和载体20的实际补偿幅度;和
S112:根据期望补偿幅度与实际补偿幅度确定所述抖动幅度。
请再参阅图1,在某些实施方式中,步骤S111和步骤S112均可以由处理器11实现。也即是说,处理器11还可用于获取载体20对成像装置10进行抖动补偿时的期望补偿幅度和载体20的实际补偿幅度,根据期望补偿幅度与实际补偿幅度确定所述抖动幅度。
具体地,当载体20为云台时,在成像装置10运动过程中,若成像装置10发生抖动,则载体20会基于检测到的抖动幅度对成像装置10进行抖动补偿。通常,载体20可以基于成像装置10的抖动幅度确定一个期望补偿幅度,再基于期望补偿幅度做抖动补偿。但实际操作中,由于补偿精度或其它因素的影响,可能出现期望补偿幅度与实际补偿幅度不等的情况,此时处理器11将期望补偿幅度与实际补偿幅度之间的差值(具体地为实际补偿幅度减去期望补偿幅度的值)作为抖动幅度。例如,假设成像装置10在偏航方向上的姿态角相对于偏航角为0°的参考位置偏移了5°,则期望补偿幅度为在偏航方向上相对于偏航角为0°的位置偏移-5°。随后,载体20基于确定的期望补偿幅度做抖动补偿,向偏航角为0°的位置偏移-5°。但抖动补偿完毕后,成像装置10在偏航方向上的姿态角相对于偏航角为0°的位置仍偏移了0.5°,也即是说,实际补偿幅度为-4.5°,因此,处理器11可基于期望补偿幅度-5°和实际补偿幅度-4.5°计算出最终的抖动幅度0.5°。
处理器11在确定抖动幅度后,进一步地,可基于该抖动幅度确定拍摄图像的当前清晰度。具体地,可通过步骤S121及步骤S122所述的方式来确定拍摄图像的当前清晰度,或者通过步骤S123及步骤S124所述的方式来确定拍摄图像的当前清晰度。处理器11在确定当前清晰度后,可基于当前清晰度做成像装置10的拍摄模式的控制。
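步骤S111~S112所述的抖动幅度计算可示意如下,函数名为说明用的假设,数值对应正文中偏航方向的例子:

```python
# 示意性草图:根据期望补偿幅度与实际补偿幅度之差确定抖动幅度。
def shake_amplitude(expected_comp: float, actual_comp: float) -> float:
    """抖动幅度 = 实际补偿幅度 - 期望补偿幅度(单位:度)。"""
    return actual_comp - expected_comp
```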
请参阅图8,在某些实施方式中,运动数据包括成像装置10在一个或多个方向上的抖动幅度、以及成像装置10在一个或多个方向上的位姿变化值。步骤S11检测成像装置10的运动数据包括:
S113:获取成像装置10的位姿变化值;
S114:判断位姿变化值是否大于预设位姿变化值;
S115:在位姿变化值大于预设位姿变化值时,获取成像装置10的抖动幅度;
步骤S12根据运动数据确定当前清晰度包括:
S125:根据抖动幅度确定当前清晰度。
请再参阅图1,步骤S113、步骤S114、步骤S115和步骤S125均可以由处理器11实现。也即是说,处理器11还可用于获取成像装置10的位姿变化值,判断位姿变化值是否大于预设位姿变化值,在位姿变化值大于预设位姿变化值时,获取成像装置10的抖动幅度,以及根据抖动幅度确定当前清晰度。
具体地,处理器11首先获取成像装置10的位姿变化值,并比较位姿变化值与预设位姿变化值的大小,在位姿变化值大于预设位姿变化值时才进一步获取成像装置10的抖动幅度,而在位姿变化值小于预设位姿变化值时则可以不做动作。其中,位姿变化值仅包括一个方向上的位姿变化值时,例如,位姿变化值仅包括横滚方向上的位姿变化值,则对应的预设位姿变化值为横滚方向上的预设位姿变化值,即预设横滚位姿变化值,此时仅需要比较横滚方向上的位姿变化值与预设横滚位姿变化值的大小即可。位姿变化值包括多个方向上的位姿变化值时,例如,同时包括横滚方向、俯仰方向和偏航方向时,在横滚方向上的位姿变化值大于预设横滚位姿变化值、俯仰方向上的位姿变化值大于预设俯仰位姿变化值、及偏航方向上的位姿变化值大于预设偏航位姿变化值时确定位姿变化值大于预设位姿变化值;或者,分别为横滚方向上的位姿变化值、俯仰方向上的位姿变化值、及偏航方向上的位姿变化值对应分配横滚权值、俯仰权值、及偏航权值,根据横滚方向上的位姿变化值、横滚权值、俯仰方向上的位姿变化值、俯仰权值、偏航方向上的位姿变化值、及偏航权值来计算位姿变化值,再将位姿变化值与预设位姿变化值作比较。可以理解,在成像装置10的位姿变化值较小时,说明成像装置10的运动状态的改变较为缓慢,由于成像装置10的运动较为平缓,成像装置10可能不会出现抖动,或者抖动幅度很小可以忽略不计,或者成像装置10的拍摄场景的变化也较小。因此,此时成像装置10仍旧可以使用点对焦-点测光模式采集拍摄图像。而在成像装置10的位姿变化值较大时,说明成像装置10的运动状态的改变较为快速,由于成像装置10的运动较为剧烈,成像装置10在运动过程中极有可能出现较大的抖动幅度,或者成像装置10的拍摄场景的变化也较大。因此,此时成像装置10需要从点对焦-点测光模式切换到全局对焦-全局测光模式以获取较为清晰的拍摄图像,以保证拍摄图像的清晰度。
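上述先判断位姿变化值、仅在其超过预设值时才进一步获取抖动幅度的两级判断可示意如下。阈值数值与读取抖动幅度的回调均为假设:

```python
# 示意性草图:步骤S113~S125的两级判断。
PRESET_POSE_CHANGE = 2.0   # 预设位姿变化值(假设,单位:度)
PRESET_SHAKE = 0.4         # 预设抖动幅度阈值(假设,单位:度)

def sharpness_below_preset(pose_change: float, read_shake_amplitude) -> bool:
    """位姿变化值不大于预设值时不做动作(返回False,不读取抖动幅度);
    否则读取抖动幅度,并据其是否超阈值判定当前清晰度是否小于预设清晰度。"""
    if pose_change <= PRESET_POSE_CHANGE:
        return False
    return read_shake_amplitude() > PRESET_SHAKE
```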
请参阅图9,在某些实施方式中,除了上述的使用运动数据来确定当前清晰度的方式以外,还可以通过以下方式来确定当前清晰度。此时,步骤S1检测成像装置10的拍摄图像的当前清晰度包括:
S13:获取拍摄图像的聚焦评价函数值并作为当前清晰度。
请再参阅图1,在某些实施方式中,步骤S13可以由处理器11实现。也即是说,处理器11还可用于获取拍摄图像的聚焦评价函数值并作为当前清晰度。
具体地,聚焦评价函数值可用于评价拍摄图像的清晰度。当聚焦评价函数值较大时,拍摄图像的清晰度较高,当聚焦评价函数值较小时,拍摄图像的清晰度较低。聚焦评价函数值可基于聚焦评价函数计算得到。聚焦评价函数可为Brenner梯度函数、Tenengrad梯度函数、Laplacian梯度函数、灰度方差函数等等。以Brenner梯度函数为例,Brenner梯度函数通过计算相隔两个像素的灰度差的平方和来计算聚焦评价函数值,该函数的定义为:D(f)=∑_y∑_x|f(x+2,y)-f(x,y)|²,其中,f(x,y)为拍摄图像对应的像素点(x,y)的灰度值,D(f)为聚焦评价函数值。可以理解,当拍摄图像的清晰度较高时,图像的颜色、纹理、边缘更为清楚,相邻像素点之间的灰度值也具有较大的差值。因此,可以将聚焦评价函数值直接作为当前清晰度。处理器11计算出聚焦评价函数值后,将聚焦评价函数值与预设清晰度进行比较,若聚焦评价函数值小于预设清晰度,则说明此时拍摄图像较为模糊,处理器11需要控制成像装置10从点对焦-点测光模式切换到全局对焦-全局测光模式以获取较为清晰的拍摄图像。
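按 Brenner 梯度函数的定义 D(f)=∑_y∑_x|f(x+2,y)-f(x,y)|²,其计算可用如下纯 Python 草图示意(仅作说明,工程实现通常基于图像库的数组运算):

```python
# 示意性草图:计算 Brenner 梯度聚焦评价函数值,值越大图像越清晰。
def brenner(gray):
    """gray: 二维灰度列表(按行存储,行内下标为x,行号为y),返回聚焦评价函数值。"""
    total = 0
    for row in gray:                       # 遍历每一行 y
        for x in range(len(row) - 2):      # 对每个 x 计算 |f(x+2,y)-f(x,y)|²
            total += (row[x + 2] - row[x]) ** 2
    return total
```

例如,灰度完全平坦的图像其函数值为0,而含有灰度跳变(边缘)的图像函数值明显更大,与"清晰图像相邻像素灰度差更大"的描述一致。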
请一并参阅图10和图11,在某些实施方式中,处理器11可同时基于运动数据和聚焦评价函数值来确定当前清晰度。此时,步骤S1检测成像装置10的拍摄图像的当前清晰度包括:
S14:获取成像装置10的运动数据及拍摄图像的聚焦评价函数值;和
S15:根据运动数据及聚焦评价函数值确定成像装置10的当前清晰度。
其中,步骤S15根据运动数据及聚焦评价函数值确定成像装置10的当前清晰度包括:
S151:基于权重法,根据运动数据及聚焦评价函数值确定成像装置10的当前清晰度。
请再参阅图1,在某些实施方式中,步骤S14、步骤S15和步骤S151均可以由处理器11实现。也即是说,处理器11还可用于获取成像装置10的运动数据及拍摄图像的聚焦评价函数值,以及根据运动数据及聚焦评价函数值确定成像装置10的当前清晰度。在处理器11执行步骤S151时,处理器11实际上执行基于权重法,根据运动数据及聚焦评价函数值确定成像装置10的当前清晰度的动作。
具体地,当运动数据包括一个时,以运动数据包括抖动幅度为例,处理器11首先获取成像装置10的抖动幅度以及计算拍摄图像的聚焦评价函数值,其中,处理器11获取成像装置10的抖动幅度的动作以及计算拍摄图像的聚焦评价函数值的动作可以同时进行;或者,处理器11首先获取成像装置10的抖动幅度,再计算拍摄图像的聚焦评价函数值;或者处理器11首先计算拍摄图像的聚焦评价函数值,再获取成像装置10的抖动幅度。随后,处理器11根据成像装置10的抖动幅度确定抖动分值,并根据聚焦评价函数值确定聚焦分值。其中,抖动分值与抖动幅度具有映射关系,抖动幅度越大,对应的抖动分值越大,二者的映射关系以诸如映射表的形式存储在成像装置10的存储器中;同样地,聚焦评价函数值与聚焦分值也具有映射关系,聚焦评价函数值越小,对应的聚焦分值越大,二者的映射关系也以映射表的形式存储在成像装置10的存储器中。处理器11还需要获取抖动权值和聚焦权值,抖动权值和聚焦权值是预先设定的。处理器11从抖动分值与抖动幅度的映射表中找到与当前获取的抖动幅度对应的抖动分值,从聚焦评价函数值与聚焦分值的映射表中找到与当前的聚焦评价函数值对应的聚焦分值。随后,处理器11基于抖动分值、抖动权值、聚焦分值和聚焦权值确定总分值,总分值=抖动分值×抖动权值+聚焦分值×聚焦权值。处理器11计算出总分值后,可基于总分值确定当前清晰度。具体地,总分值与清晰度具有映射关系,二者的映射关系以诸如映射表的形式存储在成像装置10的存储器中,处理器11通过映射表确定与总分值对应的当前清晰度,并在当前清晰度小于预设清晰度时控制成像装置10从点对焦-点测光模式切换到全局对焦-全局测光模式;或者,处理器11也可通过比较总分值与预设总分值的大小来判断当前清晰度是否小于预设清晰度,即在总分值大于预设总分值时认为当前清晰度小于预设清晰度,在总分值小于预设总分值时认为当前清晰度大于预设清晰度。处理器11再基于判断结果进行拍摄模式切换的控制。
当运动数据包括多个时,以运动数据包括抖动幅度和位姿变化值为例,处理器11首先获取成像装置10的抖动幅度、位姿变化值以及计算拍摄图像的聚焦评价函数值,其中,成像装置10的抖动幅度、位姿变化值以及拍摄图像的聚焦评价函数值的获取顺序不做限制。随后,处理器11根据成像装置10的抖动幅度确定抖动分值,根据位姿变化值确定位姿变化分值,并根据聚焦评价函数值确定聚焦分值。其中,抖动分值与抖动幅度具有映射关系,抖动幅度越大,对应的抖动分值越大,二者的映射关系以诸如映射表的形式存储在成像装置10的存储器中;同样地,位姿变化值与位姿变化分值也具有映射关系,位姿变化值越大,位姿变化分值越大,二者的映射关系以诸如映射表的形式存储在成像装置10的存储器中;同样地,聚焦评价函数值与聚焦分值也具有映射关系,聚焦评价函数值越小,对应的聚焦分值越大,二者的映射关系也以映射表的形式存储在成像装置10的存储器中。处理器11还需获取抖动权值、位姿变化权值和聚焦权值,抖动权值、位姿变化权值和聚焦权值是预先设定的。处理器11从抖动分值与抖动幅度的映射表中找到与当前获取的抖动幅度对应的抖动分值,从位姿变化分值与位姿变化值的映射表中找到与当前的位姿变化值对应的位姿变化分值,从聚焦评价函数值与聚焦分值的映射表中找到与当前的聚焦评价函数值对应的聚焦分值。随后,处理器11基于抖动分值、抖动权值、位姿变化分值、位姿变化权值、聚焦分值和聚焦权值确定总分值,总分值=抖动分值×抖动权值+位姿变化分值×位姿变化权值+聚焦分值×聚焦权值。处理器11计算出总分值后,可基于总分值确定当前清晰度。具体地,总分值与清晰度具有映射关系,二者的映射关系以映射表的形式存储在成像装置10的存储器中,处理器11通过映射表确定与总分值对应的当前清晰度,并在当前清晰度小于预设清晰度时控制成像装置10从点对焦-点测光模式切换到全局对焦-全局测光模式;或者,处理器11也可通过比较总分值与预设总分值的大小来判断当前清晰度是否小于预设清晰度,即在总分值大于预设总分值时认为当前清晰度小于预设清晰度,在总分值小于预设总分值时认为当前清晰度大于预设清晰度。处理器11再基于判断结果进行拍摄模式切换的控制。
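上述权重法计算总分值并据此判断清晰度的过程可示意如下。各分值假定已由映射表查得,权值与预设总分值均为假设数值:

```python
# 示意性草图:步骤S151的权重法,总分值越大表示图像越可能模糊。
SHAKE_W, POSE_W, FOCUS_W = 0.4, 0.3, 0.3  # 抖动、位姿变化、聚焦权值(假设)
PRESET_TOTAL = 60.0                        # 预设总分值(假设)

def total_score(shake_s: float, pose_s: float, focus_s: float) -> float:
    """总分值 = 抖动分值×抖动权值 + 位姿变化分值×位姿变化权值 + 聚焦分值×聚焦权值。"""
    return shake_s * SHAKE_W + pose_s * POSE_W + focus_s * FOCUS_W

def sharpness_below_preset_by_score(shake_s, pose_s, focus_s) -> bool:
    # 总分值大于预设总分值 → 认为当前清晰度小于预设清晰度
    return total_score(shake_s, pose_s, focus_s) > PRESET_TOTAL
```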
请参阅图12,在某些实施方式中,步骤S15根据运动数据及聚焦评价函数值确定成像装置10的当前清晰度包括:
S152:当运动数据大于预设运动阈值,且聚焦评价函数值小于预设阈值时,确定当前清晰度小于预设清晰度。
请再参阅图1,在某些实施方式中,步骤S152可以由处理器11实现。也即是说,处理器11还可用于当运动数据大于预设运动阈值,且聚焦评价函数值小于预设阈值时,确定当前清晰度小于预设清晰度。
具体地,当运动数据包括一个时,以运动数据包括抖动幅度,预设运动阈值为预设抖动阈值为例,处理器11需要根据抖动幅度和聚焦评价函数值两个参数来共同判断当前清晰度与预设清晰度的大小。当抖动幅度大于预设抖动阈值,并且聚焦评价函数值小于预设阈值时,处理器11判定当前清晰度小于预设清晰度。若抖动幅度小于预设抖动阈值,则不论聚焦评价函数值是否小于预设阈值,处理器11均判定当前清晰度大于预设清晰度。若聚焦评价函数值大于预设阈值,则不论抖动幅度是否大于预设抖动阈值,处理器11均判定当前清晰度大于预设清晰度。
根据上述说明,当运动数据包括多个时,运动数据可以包括成像装置10在一个或多个方向上的抖动幅度、在一个或多个方向上的位姿变化值、在一个或多个方向上的运动速度、在一个或多个方向上的加速度、在一个或多个方向上的角速度、在一个或多个方向上的角加速度中的至少一种,以运动数据包括一个方向上的抖动幅度及一个方向上的位姿变化值,预设运动阈值包括预设抖动阈值和预设位姿变化阈值为例,处理器11需要根据抖动幅度、位姿变化值和聚焦评价函数值三个参数来共同判断当前清晰度与预设清晰度的大小。例如,当抖动幅度大于预设抖动阈值,并且位姿变化值大于预设位姿变化阈值,并且聚焦评价函数值小于预设阈值时,处理器11判定当前清晰度小于预设清晰度。或者,若抖动幅度小于预设抖动阈值,则不论位姿变化值是否大于预设位姿变化阈值,也不论聚焦评价函数值是否小于预设阈值,处理器11均判定当前清晰度大于预设清晰度。或者,若位姿变化值小于预设位姿变化阈值,则不论抖动幅度是否大于预设抖动阈值,也不论聚焦评价函数值是否小于预设阈值,处理器11均判定当前清晰度大于预设清晰度。或者,若聚焦评价函数值大于预设阈值,则不论抖动幅度是否大于预设抖动阈值,也不论位姿变化值是否大于预设位姿变化阈值,处理器11均判定当前清晰度大于预设清晰度。
可以理解,无论运动数据包括一个还是多个,根据运动数据及聚焦评价函数值确定成像装置10的当前清晰度的方法除了上述说明的内容,在实际应用中,还可以根据需要采用其它方法,只要使得在相应条件下,能够适应性切换调整成像装置10的拍摄模式即可,此处不做具体限定。
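步骤S152的联合判断条件(运动数据大于预设运动阈值且聚焦评价函数值小于预设阈值)可示意如下,两个阈值数值为假设:

```python
# 示意性草图:只有"运动剧烈"与"聚焦评价低"同时成立,才判定清晰度低于预设清晰度。
PRESET_SHAKE_TH = 0.4    # 预设抖动阈值(假设)
PRESET_FOCUS_TH = 150.0  # 聚焦评价函数值的预设阈值(假设)

def below_preset_sharpness(shake_amp: float, focus_value: float) -> bool:
    return shake_amp > PRESET_SHAKE_TH and focus_value < PRESET_FOCUS_TH
```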
请参阅图13,在某些实施方式中,本发明实施方式的图像获取方法还包括:
S3:根据控制指令控制成像装置10在两种拍摄模式中切换。
请再参阅图1,在某些实施方式中,步骤S3可以由处理器11实现。也即是说,处理器11还可用于根据控制指令控制成像装置10在两种拍摄模式中切换。
其中,控制指令可以是用户输入的,也可以是处理器11基于两种拍摄模式的切换标准自主下达的。
在控制指令为用户输入时,用户可以通过成像装置10上的物理按键输入,例如,用户按住成像装置10上的按钮以移动、扩大或缩小成像装置10的对焦区域;或者,用户可以触控成像装置10的触摸屏以实现控制指令的输入,例如,用户在触摸屏上进行点选、伸缩、拉长等操作以实现移动、扩大或缩小成像装置10的对焦区域;或者,用户通过载体20将控制指令传送给成像装置10,例如,成像装置10搭载在云台上,云台进一步搭载在无人机上,在无人机与遥控器配合的情景下,用户通过遥控器上的物理按键或触摸屏实现控制指令的输入,遥控器再将控制指令传送给无人机,无人机再将控制指令传送给成像装置10;又例如,成像装置10搭载在手持云台上,手持云台可以与移动电子设备通信连接,用户通过移动电子设备实现控制指令的输入,移动电子设备将控制指令传送给手持云台,手持云台再将控制指令传输给成像装置10,或者,移动电子设备可以直接与成像装置10通信连接,并将控制指令传送给成像装置10,或者,手持云台的手持部上设有显示屏,可以通过用户对显示屏的触控操作实现控制指令的输入。
在处理器11基于两种拍摄模式的切换标准自主下达控制指令时,处理器11基于上述任意一项实施方式所述的拍摄模式切换判断标准做控制指令的自主下达。
如此,成像装置10既可以基于用户输入的控制指令做拍摄模式切换,也可自动进行拍摄模式切换,对于摄影技术较弱的用户来说,拍摄模式的自动切换可以帮助用户获取较高质量的拍摄图像,而对于摄影技术较强的用户来说,基于用户输入的控制指令做拍摄模式切换可以使得用户根据自身的喜好来获取拍摄图像,用户体验能够得到较大改善。
请参阅图14,在某些实施方式中,在成像装置10工作在点对焦-点测光模式时,本发明实施方式的图像获取方法还包括:
S4:处理拍摄图像以在预定对焦区域中确定拍摄图像中的主体区域;和
S5:控制成像装置10对焦主体区域。
请再参阅图1,在某些实施方式中,步骤S4和步骤S5均可以由处理器11实现。也即是说,处理器11还可用于处理拍摄图像以在预定对焦区域中确定拍摄图像中的主体区域,控制成像装置10对焦主体区域。
具体地,在点对焦-点测光模式下,成像装置10对焦在预定对焦区域。进一步地,成像装置10还可以处理拍摄图像以判断预定对焦区域中是否存在有拍摄主体(例如,人脸等),若拍摄图像中存在有拍摄主体,则拍摄主体对应的区域即为主体区域,成像装置10进一步对焦主体区域以使拍摄主体更为清晰。
在某些情境下,预定对焦区域中可能存在多个拍摄主体,例如,预定对焦区域内存在多张人脸,那么处理器11在处理拍摄图像以识别出人脸之后,还可以对多个人脸进行身份认证,若多张人脸中存在目标用户(例如成像装置10的持有者等),则处理器11将目标用户对应的区域作为主体区域,成像装置10进一步对焦主体区域以使目标用户最为清晰。
如此,通过主体区域的确定,可以进一步提升主体区域的清晰度,用户使用体验更佳。
处理器11处理拍摄图像时,可以仅对预定对焦区域的图像做处理,从而减小处理器11所需处理的数据量。
请参阅图15,在某些实施方式中,本发明实施方式的图像获取方法还包括:
S22:在当前清晰度大于预定清晰度时,控制成像装置10从全局对焦-全局测光模式切换为点对焦-点测光模式。
请再参阅图1,在某些实施方式中,步骤S22可以由处理器11实现。也即是说,处理器11还可用于在当前清晰度大于预定清晰度时,控制成像装置10从全局对焦-全局测光模式切换为点对焦-点测光模式。
具体地,在成像装置10工作在全局对焦-全局测光模式下时,处理器11也会获取拍摄图像的当前清晰度,其中,拍摄图像的当前清晰度的确定方式可以是上述任意一项实施方式所述的确定方式。在确定拍摄图像的当前清晰度后,处理器11将当前清晰度与预定清晰度进行比较,若当前清晰度小于预定清晰度,则成像装置10保持工作在全局对焦-全局测光模式;若当前清晰度大于预定清晰度,则成像装置10将拍摄模式切换为点对焦-点测光模式。
可以理解,由于成像装置10的运动等导致拍摄的场景变化较大,成像装置10需要切换到全局对焦-全局测光模式以使得拍摄图像具有较高的清晰度。在成像装置10切换到全局对焦-全局测光模式下工作后,若检测到拍摄图像的清晰度大于预定清晰度,此时说明成像装置10的运动趋于平缓或者是处于静止状态,此时可能是场景中有目标的主体出现,因此,需要控制成像装置10从全局对焦-全局测光模式切换为点对焦-点测光模式,以使主体的部分更为清晰。
需要说明的是,预定清晰度与预设清晰度可以取相同的值,也可以取不同的值。在本发明的具体实施例中,预定清晰度的取值与预设清晰度的取值不同,且预定清晰度的取值大于预设清晰度的取值。如此,可以避免成像装置10刚切换到全局对焦-全局测光模式、拍摄图像的当前清晰度刚超过预设清晰度时,就直接切换回点对焦-点测光模式,从而导致切换过于频繁、增大成像装置10耗能的问题。
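上述"预定清晰度大于预设清晰度"的设计本质上是一个滞回(迟滞)区间:清晰度落在两个阈值之间时保持原模式,从而抑制频繁切换。可示意如下,阈值数值为假设:

```python
# 示意性草图:带滞回区间的双阈值模式切换。
PRESET_SHARPNESS = 0.5   # 预设清晰度:低于它从点对焦切到全局对焦(假设)
PREDEF_SHARPNESS = 0.7   # 预定清晰度:高于它从全局对焦切回点对焦(假设)

def next_mode(mode: str, sharpness: float) -> str:
    """mode 为 'spot'(点对焦-点测光)或 'global'(全局对焦-全局测光)。"""
    if mode == "spot" and sharpness < PRESET_SHARPNESS:
        return "global"
    if mode == "global" and sharpness > PREDEF_SHARPNESS:
        return "spot"
    return mode  # 清晰度落在滞回区间 [0.5, 0.7] 内时保持原模式
```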
请参阅图16,在某些实施方式中,本发明实施方式的图像获取方法还包括:
S6:在成像装置10从全局对焦-全局测光模式切换为点对焦-点测光模式时,根据历史对焦记录确定成像装置10的当前对焦区域。
请再参阅图1,在某些实施方式中,步骤S6可以由处理器11实现。也即是说,处理器11还可用于在成像装置10从全局对焦-全局测光模式切换为点对焦-点测光模式时,根据历史对焦记录确定成像装置10的当前对焦区域。
其中,当前对焦区域为成像装置10上一次在点对焦-点测光模式下的对焦区域。
具体地,如果历史对焦记录的对焦区域为成像装置10的预定对焦区域(即默认的对焦区域,无需用户设定),则成像装置10从全局对焦-全局测光模式切换为点对焦-点测光模式时,当前对焦区域也为预定对焦区域。
如果历史对焦记录的对焦区域为成像装置10基于主体的识别确定的对焦区域,则该对焦区域的多个边缘的像素的坐标会被记录并存储在存储器中。当成像装置10从全局对焦-全局测光模式切换为点对焦-点测光模式时,成像装置10会基于存储在存储器中的多个边缘的像素的坐标来确定当前对焦区域。其中,仅存储多个边缘的像素的坐标,而非存储落入到历史对焦区域的全部的像素坐标,可以减少数据的存储量。
如果历史对焦记录的对焦区域为用户指定的对焦区域,则该对焦区域的多个边缘的像素的坐标会被记录并存储在存储器中。当成像装置10从全局对焦-全局测光模式切换为点对焦-点测光模式时,成像装置10会基于存储在存储器中的多个边缘的像素的坐标来确定当前对焦区域。其中,仅存储多个边缘的像素的坐标,而非存储落入到历史对焦区域的全部的像素坐标可以减少数据的存储量。
如此,通过上一次点对焦-点测光模式下的历史对焦记录来确定当前的点对焦-点测光模式下的当前对焦区域,在一些应用情景,例如从场景从主体变换到周边环境,再从周边环境变换回主体的应用情景下,可以快速对焦到主体的部分,拍摄图像的质量更佳,用户体验更好。
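仅存储边缘像素坐标并据此还原对焦区域的做法可示意如下。此处以"仅存四个角点的轴对齐矩形区域"为简化假设,函数名为说明用:

```python
# 示意性草图:存储历史对焦区域的角点坐标,切回点对焦-点测光模式时还原区域。
def edge_coords(region):
    """region: 对焦区域内全部像素坐标 (x, y) 的集合,返回矩形的四个角点。"""
    xs = [x for x, _ in region]
    ys = [y for _, y in region]
    return [(min(xs), min(ys)), (max(xs), min(ys)),
            (min(xs), max(ys)), (max(xs), max(ys))]

def restore_region(corners):
    """由角点坐标还原出矩形对焦区域的全部像素坐标。"""
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    return {(x, y) for x in range(min(xs), max(xs) + 1)
                   for y in range(min(ys), max(ys) + 1)}
```

只需存 4 个角点而非区域内的全部像素坐标,即可在切换回点对焦-点测光模式时无损地还原矩形对焦区域,存储量大幅减少。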
请参阅图17,在某些实施方式中,本发明实施方式的图像获取方法还包括:
S7:获取成像装置10的工作模式;
步骤S2根据当前清晰度控制成像装置10在两种拍摄模式之间切换,两种拍摄模式包括点对焦-点测光模式和全局对焦-全局测光模式包括:
S23:根据工作模式及当前清晰度控制成像装置10在两种拍摄模式之间切换。
请再参阅图1,在某些实施方式中,步骤S7和步骤S23均可以由处理器11实现。也即是说,处理器11还可用于获取成像装置10的工作模式,以及根据工作模式及当前清晰度控制成像装置10在两种拍摄模式之间切换。
具体地,工作模式可以包括跟随模式和自由模式。成像装置10工作在跟随模式下时,成像装置10可以对场景中的主体进行跟踪拍摄,此时主体可能处于静止或运动状态,在主体处于运动状态下,成像装置10跟随主体的运动状态运动,以使主体始终处于成像装置10的视场内。成像装置10工作在自由模式下时,成像装置10不对主体进行跟踪。
在成像装置10工作在跟随模式下时,一般地,为了使主体更清晰,通常会采用点对焦-点测光模式进行拍摄。此时,处理器11仍旧会获取拍摄图像的当前清晰度,处理器11基于当前清晰度可执行以下动作:
(1)工作模式的优先级高于当前清晰度的优先级,此时不考虑当前清晰度的大小,成像装置10始终保持工作在点对焦-点测光模式下;或者
(2)当前清晰度的优先级高于工作模式的优先级,在当前清晰度小于预设清晰度的时候,此时不考虑当前的工作模式,处理器11控制成像装置10从点对焦-点测光模式切换为全局对焦-全局测光模式。
如此,当工作模式的优先级高于当前清晰度的优先级,成像装置10始终保持工作在点对焦-点测光模式,则拍摄图像中的主体始终保持清晰,若当前清晰度的优先级高于工作模式的优先级,一旦当前清晰度小于预设清晰度,成像装置10就从点对焦-点测光模式切换为全局对焦-全局测光模式,使得拍摄图像中的全部区域都能清晰显示。
在成像装置10工作在自由模式下时,一般地,自由模式下成像装置10拍摄的场景被认为不存在突出的主体,成像装置10通常会采用全局对焦-全局测光模式进行拍摄,以使得拍摄图像中的全部区域都能清晰显示。此时,处理器11仍旧会获取拍摄图像的当前清晰度,处理器11基于当前清晰度可执行以下动作:
(1)工作模式的优先级高于当前清晰度的优先级,此时不考虑当前清晰度的大小,成像装置10始终保持工作在全局对焦-全局测光模式下;或者
(2)当前清晰度的优先级高于工作模式的优先级,在当前清晰度大于预定清晰度的时候,此时不考虑当前的工作模式,处理器11控制成像装置10从全局对焦-全局测光模式切换为点对焦-点测光模式。
如此,当工作模式的优先级高于当前清晰度的优先级,成像装置10始终保持全局对焦-全局测光模式进行拍摄,则拍摄图像中的全部区域始终能清晰显示;若当前清晰度的优先级高于工作模式的优先级,一旦当前清晰度大于预定清晰度,成像装置10就从全局对焦-全局测光模式切换为点对焦-点测光模式,如此既保证了拍摄图像的整体清晰度,还可以优化拍摄图像中主体的清晰度。
其中,当前清晰度可以根据运动数据来确定,此时,处理器11根据运动数据与预设运动阈值的大小来判断当前清晰度与预设清晰度或预定清晰度的大小。或者,处理器11从预设运动数据和清晰度的映射关系中确认与运动数据对应的当前清晰度,再比较当前清晰度与预设清晰度或预定清晰度的大小。或者,处理器11直接将拍摄图像的聚焦评价函数值作为当前清晰度,再比较当前清晰度与预设清晰度或预定清晰度的大小。或者,处理器11根据运动数据和聚焦评价函数值共同确定当前清晰度,再比较当前清晰度与预设清晰度或预定清晰度的大小等。具体方式可以参照前述内容,此处不再赘述。
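跟随/自由工作模式与当前清晰度之间的优先级裁决可示意如下。优先级策略、模式名与阈值数值均为假设的示例:

```python
# 示意性草图:按优先级在工作模式默认拍摄模式与清晰度驱动的切换之间裁决。
def decide_mode(work_mode: str, mode_priority_high: bool, sharpness: float,
                preset: float = 0.5, predef: float = 0.7) -> str:
    """work_mode: 'follow'(跟随)或 'free'(自由);返回 'spot' 或 'global'。
    preset/predef 分别对应预设清晰度与预定清晰度(假设数值)。"""
    default = "spot" if work_mode == "follow" else "global"
    if mode_priority_high:          # 工作模式优先:不考虑清晰度,维持默认拍摄模式
        return default
    if default == "spot" and sharpness < preset:
        return "global"             # 清晰度优先:点对焦 → 全局对焦
    if default == "global" and sharpness > predef:
        return "spot"               # 清晰度优先:全局对焦 → 点对焦
    return default
```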
在本说明书的描述中,参考术语“一个实施方式”、“一些实施方式”、“示意性实施方式”、“示例”、“具体示例”、或“一些示例”等的描述意指结合所述实施方式或示例描述的具体特征、结构、材料或者特点包含于本发明的至少一个实施方式或示例中。在本说明书中,对上述术语的示意性表述不一定指的是相同的实施方式或示例。而且,描述的具体特征、结构、材料或者特点可以在任何的一个或多个实施方式或示例中以合适的方式结合。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于执行特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分, 并且本发明的优选实施方式的范围包括另外的执行,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本发明的实施例所属技术领域的技术人员所理解。
在流程图中表示或在此以其他方式描述的逻辑和/或步骤,例如,可以被认为是用于执行逻辑功能的可执行指令的定序列表,可以具体执行在任何计算机可读介质中,以供指令执行系统、装置或设备(如基于计算机的系统、包括处理器的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用,或结合这些指令执行系统、装置或设备而使用。就本说明书而言,"计算机可读介质"可以是任何可以包含、存储、通信、传播或传输程序以供指令执行系统、装置或设备或结合这些指令执行系统、装置或设备而使用的装置。计算机可读介质的更具体的示例(非穷尽性列表)包括以下:具有一个或多个布线的电连接部(电子装置),便携式计算机盘盒(磁装置),随机存取存储器(RAM),只读存储器(ROM),可擦除可编辑只读存储器(EPROM或闪速存储器),光纤装置,以及便携式光盘只读存储器(CDROM)。另外,计算机可读介质甚至可以是可在其上打印所述程序的纸或其他合适的介质,因为可以例如通过对纸或其他介质进行光学扫描,接着进行编辑、解译或必要时以其他合适方式进行处理来以电子方式获得所述程序,然后将其存储在计算机存储器中。
应当理解,本发明的各部分可以用硬件、软件、固件或它们的组合来执行。在上述实施方式中,多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来执行。例如,如果用硬件来执行,和在另一实施方式中一样,可用本领域公知的下列技术中的任一项或他们的组合来执行:具有用于对数据信号执行逻辑功能的逻辑门电路的离散逻辑电路,具有合适的组合逻辑门电路的专用集成电路,可编程门阵列(PGA),现场可编程门阵列(FPGA)等。
本技术领域的普通技术人员可以理解执行上述实施方法携带的全部或部分步骤是可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,该程序在执行时,包括方法实施例的步骤之一或其组合。
此外,在本发明各个实施例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式执行,也可以采用软件功能模块的形式执行。所述集成的模块如果以软件功能模块的形式执行并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。
上述提到的存储介质可以是只读存储器,磁盘或光盘等。尽管上面已经示出和描述了本发明的实施例,可以理解的是,上述实施例是示例性的,不能理解为对本发明的限制,本领域的普通技术人员在本发明的范围内可以对上述实施例进行变化、修改、替换和变型。

Claims (43)

  1. 一种图像获取方法,其特征在于,所述图像获取方法包括:
    检测成像装置的拍摄图像的当前清晰度;和
    根据所述当前清晰度控制所述成像装置在两种拍摄模式之间切换,两种所述拍摄模式包括点对焦-点测光模式和全局对焦-全局测光模式。
  2. 根据权利要求1所述的图像获取方法,其特征在于,所述根据所述当前清晰度控制所述成像装置在两种拍摄模式之间切换的步骤包括:
    在所述当前清晰度小于预设清晰度时,控制所述成像装置从所述点对焦-点测光模式切换为所述全局对焦-全局测光模式。
  3. 根据权利要求1所述的图像获取方法,其特征在于,所述根据所述当前清晰度控制所述成像装置在两种拍摄模式之间切换的步骤包括:
    在所述当前清晰度大于预定清晰度时,控制所述成像装置从所述全局对焦-全局测光模式切换为所述点对焦-点测光模式。
  4. 根据权利要求1所述的图像获取方法,其特征在于,所述检测成像装置的拍摄图像的当前清晰度的步骤包括:
    检测所述成像装置的运动数据;和
    根据所述运动数据确定所述当前清晰度。
  5. 根据权利要求4所述的图像获取方法,其特征在于,所述运动数据包括成像装置在一个或多个方向上的抖动幅度、在一个或多个方向上的位姿变化值、在一个或多个方向上的运动速度、在一个或多个方向上的加速度、在一个或多个方向上的角速度、在一个或多个方向上的角加速度中的至少一种。
  6. 根据权利要求5所述的图像获取方法,其特征在于,所述运动数据依据设置于所述成像装置的运动传感器获取;或
    所述成像装置搭载在载体上,所述运动数据依据所述载体的运动数据获取。
  7. 根据权利要求5所述的图像获取方法,其特征在于,所述根据所述运动数据确定所述当前清晰度的步骤包括:
    判断所述运动数据是否大于预设运动阈值;和
    若是,则确定所述当前清晰度小于预设清晰度。
  8. 根据权利要求5所述的图像获取方法,其特征在于,所述根据所述运动数据确定所述当前清晰度的步骤包括:
    获取预设运动数据与清晰度的映射关系;和
    根据所述映射关系确定与所述运动数据对应的所述当前清晰度。
  9. 根据权利要求5所述的图像获取方法,其特征在于,所述成像装置搭载在载体上,所述运动数据包括所述成像装置在一个或多个方向上的抖动幅度,所述检测所述成像装置的运动数据的步骤包括:
    获取所述载体对所述成像装置进行抖动补偿时的期望补偿幅度和所述载体的实际补偿幅度;和
    根据所述期望补偿幅度与所述实际补偿幅度确定所述抖动幅度。
  10. 根据权利要求9所述的图像获取方法,其特征在于,所述载体包括云台、可移动平台中的至少一种。
  11. 根据权利要求5所述的图像获取方法,其特征在于,所述运动数据包括所述成像装置在一个或多个方向上的抖动幅度、在一个或多个方向上的位姿变化值,所述检测所述成像装置的运动数据的步骤包括:
    获取所述成像装置的位姿变化值;
    判断所述位姿变化值是否大于预设位姿变化值;
    在所述位姿变化值大于所述预设位姿变化值时,获取所述成像装置的抖动幅度;
    所述根据所述运动数据确定所述当前清晰度包括:
    根据所述抖动幅度确定所述当前清晰度。
  12. 根据权利要求1所述的图像获取方法,其特征在于,所述检测成像装置的拍摄图像的当前清晰度的步骤包括:
    获取所述拍摄图像的聚焦评价函数值并作为所述当前清晰度。
  13. 根据权利要求1所述的图像获取方法,其特征在于,所述检测所述成像装置的当前清晰度的步骤包括:
    获取所述成像装置的运动数据及所述拍摄图像的聚焦评价函数值;和
    根据所述运动数据及所述聚焦评价函数值确定所述成像装置的当前清晰度。
  14. 根据权利要求13所述的图像获取方法,其特征在于,所述根据所述运动数据及所述聚焦评价函数值确定所述成像装置的当前清晰度的步骤包括:
    基于权重法,根据所述运动数据及所述聚焦评价函数值确定所述成像装置的当前清晰度;或
    当所述运动数据大于预设运动阈值,且所述聚焦评价函数值小于预设阈值时,确定所述当前清晰度小于预设清晰度。
  15. 根据权利要求1所述的图像获取方法,其特征在于,所述图像获取方法还包括:
    获取所述成像装置的工作模式;
    所述根据所述当前清晰度控制所述成像装置在两种拍摄模式之间切换,两种所述拍摄模式包括点对焦-点测光模式和全局对焦-全局测光模式的步骤包括:
    根据所述工作模式及所述当前清晰度控制所述成像装置在两种所述拍摄模式之间切换。
  16. 根据权利要求1所述的图像获取方法,其特征在于,所述图像获取方法还包括:
    根据控制指令控制所述成像装置在两种所述拍摄模式中切换。
  17. 根据权利要求1所述的图像获取方法,其特征在于,所述成像装置工作在所述点对焦-点测光模式时,所述图像获取方法还包括:
    处理所述拍摄图像以在预定对焦区域中确定所述拍摄图像中的主体区域;和
    控制所述成像装置对焦所述主体区域。
  18. 根据权利要求1所述的图像获取方法,其特征在于,所述图像获取方法还包括:
    在所述成像装置从所述全局对焦-全局测光模式切换为所述点对焦-点测光模式时,根据历史对焦记录确定所述成像装置的当前对焦区域。
  19. 根据权利要求18所述的图像获取方法,其特征在于,所述当前对焦区域为所述成像装置上一次在所述点对焦-点测光模式下的对焦区域。
  20. 根据权利要求1所述的图像获取方法,其特征在于,所述成像装置包括单目成像装置。
  21. 一种成像装置,其特征在于,所述成像装置包括处理器,所述处理器用于:
    检测所述成像装置的拍摄图像的当前清晰度;和
    根据所述当前清晰度控制所述成像装置在两种拍摄模式之间切换,两种所述拍摄模式包括点对焦-点测光模式和全局对焦-全局测光模式。
  22. 根据权利要求21所述的成像装置,其特征在于,所述处理器还用于:
    在所述当前清晰度小于预设清晰度时,控制所述成像装置从所述点对焦-点测光模式切换为所述全局对焦-全局测光模式。
  23. 根据权利要求21所述的成像装置,其特征在于,所述处理器还用于:
    在所述当前清晰度大于预定清晰度时,控制所述成像装置从所述全局对焦-全局测光模式切换为所述点对焦-点测光模式。
  24. 根据权利要求21所述的成像装置,其特征在于,所述处理器还用于:
    检测所述成像装置的运动数据;和
    根据所述运动数据确定所述当前清晰度。
  25. 根据权利要求24所述的成像装置,其特征在于,所述运动数据包括成像装置在一个或多个方向上的抖动幅度、在一个或多个方向上的位姿变化值、在一个或多个方向上的运动速度、在一个或多个方向上的加速度、在一个或多个方向上的角速度、在一个或多个方向上的角加速度中的至少一种。
  26. 根据权利要求25所述的成像装置,其特征在于,所述运动数据依据设置于所述成像装置的运动传感器获取;或
    所述成像装置搭载在载体上,所述运动数据依据所述载体的运动数据获取。
  27. 根据权利要求25所述的成像装置,其特征在于,所述处理器还用于:
    判断所述运动数据是否大于预设运动阈值;和
    若是,则确定所述当前清晰度小于预设清晰度。
  28. 根据权利要求25所述的成像装置,其特征在于,所述处理器还用于:
    获取预设运动数据与清晰度的映射关系;和
    根据所述映射关系确定与所述运动数据对应的所述当前清晰度。
  29. 根据权利要求25所述的成像装置,其特征在于,所述成像装置搭载在载体上,所述运动数据包括所述成像装置在一个或多个方向上的抖动幅度,所述处理器还用于:
    获取所述载体对所述成像装置进行抖动补偿时的期望补偿幅度和所述载体的实际补偿幅度;和
    根据所述期望补偿幅度与所述实际补偿幅度确定所述抖动幅度。
  30. 根据权利要求29所述的成像装置,其特征在于,所述载体包括云台、可移动平台中的至少一种。
  31. 根据权利要求25所述的成像装置,其特征在于,所述运动数据包括所述成像装置在一个或多个方向上的抖动幅度、在一个或多个方向上的位姿变化值,所述处理器还用于:
    获取所述成像装置的位姿变化值;
    判断所述位姿变化值是否大于预设位姿变化值;
    在所述位姿变化值大于所述预设位姿变化值时,获取所述成像装置的抖动幅度;
    所述根据所述运动数据确定所述当前清晰度包括:
    根据所述抖动幅度确定所述当前清晰度。
  32. 根据权利要求21所述的成像装置,其特征在于,所述处理器还用于:
    获取所述拍摄图像的聚焦评价函数值并作为所述当前清晰度。
  33. 根据权利要求21所述的成像装置,其特征在于,所述处理器还用于:
    获取所述成像装置的运动数据及所述拍摄图像的聚焦评价函数值;和
    根据所述运动数据及所述聚焦评价函数值确定所述成像装置的当前清晰度。
  34. 根据权利要求33所述的成像装置,其特征在于,所述处理器还用于:
    基于权重法,根据所述运动数据及所述聚焦评价函数值确定所述成像装置的当前清晰度;或
    当所述运动数据大于预设运动阈值,且所述聚焦评价函数值小于预设阈值时,确定所述当前清晰度小于预设清晰度。
  35. 根据权利要求21所述的成像装置,其特征在于,所述处理器还用于:
    获取所述成像装置的工作模式;和
    根据所述工作模式及所述当前清晰度控制所述成像装置在两种所述拍摄模式之间切换。
  36. 根据权利要求21所述的成像装置,其特征在于,所述处理器还用于:
    根据控制指令控制所述成像装置在两种所述拍摄模式中切换。
  37. 根据权利要求21所述的成像装置,其特征在于,所述成像装置工作在所述点对焦-点测光模式时,所述处理器还用于:
    处理所述拍摄图像以在预定对焦区域中确定所述拍摄图像中的主体区域;和
    控制所述成像装置对焦所述主体区域。
  38. 根据权利要求21所述的成像装置,其特征在于,所述处理器还用于:
    在所述成像装置从所述全局对焦-全局测光模式切换为所述点对焦-点测光模式时,根据历史对焦记录确定所述成像装置的当前对焦区域。
  39. 根据权利要求38所述的成像装置,其特征在于,所述当前对焦区域为所述成像装置上一次在所述点对焦-点测光模式下的对焦区域。
  40. 根据权利要求21所述的成像装置,其特征在于,所述成像装置包括单目成像装置。
  41. 一种拍摄系统,其特征在于,所述拍摄系统包括:
    权利要求21至40任意一项所述的成像装置;和
    载体,所述成像装置搭载在所述载体上。
  42. 根据权利要求41所述的拍摄系统,其特征在于,当所述载体为云台时,所述成像装置与所述云台为不可拆分的一体结构。
  43. 根据权利要求42所述的拍摄系统,其特征在于,所述云台包括手持云台。
PCT/CN2018/097421 2018-07-27 2018-07-27 图像获取方法、成像装置及拍摄系统 WO2020019295A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880038821.0A CN110754080B (zh) 2018-07-27 2018-07-27 图像获取方法、成像装置及拍摄系统
PCT/CN2018/097421 WO2020019295A1 (zh) 2018-07-27 2018-07-27 图像获取方法、成像装置及拍摄系统

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/097421 WO2020019295A1 (zh) 2018-07-27 2018-07-27 图像获取方法、成像装置及拍摄系统

Publications (1)

Publication Number Publication Date
WO2020019295A1 true WO2020019295A1 (zh) 2020-01-30

Family

ID=69180364

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/097421 WO2020019295A1 (zh) 2018-07-27 2018-07-27 图像获取方法、成像装置及拍摄系统

Country Status (2)

Country Link
CN (1) CN110754080B (zh)
WO (1) WO2020019295A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103905717A (zh) * 2012-12-27 2014-07-02 联想(北京)有限公司 一种切换方法、装置及电子设备
US20160004923A1 (en) * 2014-07-01 2016-01-07 Brain Corporation Optical detection apparatus and methods
CN106210495A (zh) * 2015-05-06 2016-12-07 小米科技有限责任公司 图像拍摄方法和装置
CN206181178U (zh) * 2016-08-31 2017-05-17 深圳零度智能飞行器有限公司 一种航拍相机
CN107465855A (zh) * 2017-08-22 2017-12-12 上海歌尔泰克机器人有限公司 图像的拍摄方法及装置、无人机

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103513395B (zh) * 2012-06-15 2018-05-04 中兴通讯股份有限公司 一种被动式自动聚焦方法及装置
CN104618654B (zh) * 2015-02-13 2017-10-13 成都品果科技有限公司 一种基于晃动检测的移动电子设备连续聚焦方法及系统
JP2017116840A (ja) * 2015-12-25 2017-06-29 オリンパス株式会社 撮像装置
CN107864340B (zh) * 2017-12-13 2019-07-16 浙江大华技术股份有限公司 一种摄影参数的调整方法及摄影设备

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112738503A (zh) * 2020-12-30 2021-04-30 凌云光技术股份有限公司 一种小景深镜头相机水平度判定调整的装置及方法
CN112817118A (zh) * 2021-01-18 2021-05-18 中国科学院上海技术物理研究所 一种红外自动对焦快速搜索方法
CN112793587A (zh) * 2021-02-26 2021-05-14 深圳裹动智驾科技有限公司 感知方法及系统
CN112793587B (zh) * 2021-02-26 2022-04-01 深圳安途智行科技有限公司 感知方法及系统
CN114185164A (zh) * 2021-12-17 2022-03-15 重庆切克威科技有限公司 显微镜的快速自动对焦方法
CN114185164B (zh) * 2021-12-17 2022-07-29 重庆切克威科技有限公司 显微镜的快速自动对焦方法
CN114245023A (zh) * 2022-02-24 2022-03-25 浙江华创视讯科技有限公司 一种聚焦处理方法及装置、摄像装置和存储介质
CN115546172A (zh) * 2022-10-19 2022-12-30 广州纳动半导体设备有限公司 基于机器视觉的芯片载板-基板近零间隙测量方法
CN117528259A (zh) * 2024-01-08 2024-02-06 深圳市浩瀚卓越科技有限公司 云台的智能拍摄补光方法、装置、设备及存储介质
CN117528259B (zh) * 2024-01-08 2024-03-26 深圳市浩瀚卓越科技有限公司 云台的智能拍摄补光方法、装置、设备及存储介质

Also Published As

Publication number Publication date
CN110754080A (zh) 2020-02-04
CN110754080B (zh) 2021-10-15

Similar Documents

Publication Publication Date Title
WO2020019295A1 (zh) 图像获取方法、成像装置及拍摄系统
US10075651B2 (en) Methods and apparatus for capturing images using multiple camera modules in an efficient manner
US9836639B2 (en) Systems and methods of light modulation in eye tracking devices
US20190109979A1 (en) Multiple lenses system, operation method and electronic device employing the same
JP5409189B2 (ja) 撮像装置及びその制御方法
WO2016132884A1 (ja) 情報処理装置および方法、並びにプログラム
US8994783B2 (en) Image pickup apparatus that automatically determines shooting mode most suitable for shooting scene, control method therefor, and storage medium
US9628717B2 (en) Apparatus, method, and storage medium for performing zoom control based on a size of an object
CN105025208B (zh) 摄像装置、照相机组件、远程操作装置、摄影方法、显示方法以及记录介质
CN109451240B (zh) 对焦方法、装置、计算机设备和可读存储介质
US20170094261A1 (en) Stereoscopic imaging
TW201541141A (zh) 使用多鏡頭的自動對焦系統及其方法
JP5703788B2 (ja) 撮像装置、画像処理装置、画像処理プログラム
US11388331B2 (en) Image capture apparatus and control method thereof
JP2015014672A (ja) カメラ制御装置、カメラシステム、カメラ制御方法、及びプログラム
TW201541143A (zh) 使用多鏡頭的自動對焦系統及其方法
CN111935389B (zh) 拍摄对象切换方法、装置、拍摄设备及可读存储介质
US20230069407A1 (en) Remote operation apparatus and computer-readable medium
JP2021071794A (ja) 主被写体判定装置、撮像装置、主被写体判定方法、及びプログラム
CN116095478A (zh) 影像撷取系统和调整焦点方法
JP7342883B2 (ja) 撮像制御装置、撮像装置、撮像制御方法
JP2018185576A (ja) 画像処理装置および画像処理方法
US11816864B2 (en) Tracking device and tracking method for tracking an object displayed on a display
JP2015019416A (ja) 撮像装置、その制御方法及びプログラム
US20220272274A1 (en) Imaging device, storage medium, and method of controlling imaging device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18928028

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18928028

Country of ref document: EP

Kind code of ref document: A1