WO2023197841A1 - Focusing method, camera device, unmanned aerial vehicle, and storage medium - Google Patents

Focusing method, camera device, unmanned aerial vehicle, and storage medium

Info

Publication number
WO2023197841A1
WO2023197841A1 (PCT/CN2023/083223)
Authority
WO
WIPO (PCT)
Prior art keywords
image
distance
image distance
target
code
Prior art date
Application number
PCT/CN2023/083223
Other languages
English (en)
French (fr)
Inventor
李昭早
Original Assignee
深圳市道通智能航空技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市道通智能航空技术股份有限公司
Publication of WO2023197841A1 publication Critical patent/WO2023197841A1/zh

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/673Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method

Definitions

  • the embodiments of the present application relate to the technical field of unmanned aerial vehicles, and in particular to a focusing method, a camera device, an unmanned aerial vehicle and a storage medium.
  • Drone focusing is generally divided into global focusing, regional focusing, point focusing, and so on.
  • current focusing methods easily lead to out-of-focus problems when the target is too small or the target moves. For example, during power-line inspections, the wires and insulators on the poles are far from the background, the targets are small, and the aircraft moves dynamically; traditional focusing algorithms often fail to focus on the targets and instead focus on the background, leaving the targets blurred.
  • Embodiments of the present application provide a focusing method, a camera device, a drone and a storage medium, which can improve focusing accuracy.
  • embodiments of the present application provide a focusing method, including:
  • the first correspondence reflects the correspondence between the target distance and the image distance
  • based on the first image distance, the lens scans within a first range to find a second image distance;
  • the sharpness of the lens's image at the second image distance is greater than the sharpness of its image at the first image distance.
  • within the first range, the sharpness of the lens's image is maximal at the second image distance.
  • scanning within a first range based on the first image distance to find a second image distance includes:
  • the image distance with the largest image definition is used as the second image distance.
  • scanning within a first range based on the first image distance to find a second image distance includes:
  • the second range extends from the fifth image distance code n-1 to the sixth image distance code n+1, wherein the fifth image distance code n-1 is the image distance one first-step-length before the third image distance code n, and the sixth image distance code n+1 is the image distance one first-step-length after the third image distance code n.
  • the fourth image distance code max is: if
  • FV n+1 is the sharpness of the image of the lens at the sixth image distance code n+1
  • FV n is the sharpness of the image of the lens at the third image distance code n
  • FV n-1 is the sharpness of the image of the lens at the fifth image distance code n-1 .
  • the method further includes:
  • the lens is adjusted based on the first image distance.
  • the depth information of the target includes the depth value of each pixel in the target
  • the first distance is the average of the depth values of each pixel in the target.
  • the first correspondence relationship includes at least two distances and at least two image distances, and each image distance corresponds to at least one distance.
  • obtaining the first image distance based on the first distance and the first correspondence includes:
  • the first image distance is the image distance corresponding to the second distance
  • the first image distance code is:
  • code m is the image distance corresponding to the third distance d m
  • code n is the image distance corresponding to the fourth distance d n .
  • the method further includes:
  • the average value of the sharpness values is used as the sharpness of the image corresponding to the current image distance.
  • the method further includes:
  • the lens is adjusted based on the second image distance.
  • embodiments of the present application also provide a camera device, including at least one processor, and
  • a memory communicatively connected to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor is able to execute the above method.
  • embodiments of the present application also provide an unmanned aerial vehicle, including the above-mentioned camera device.
  • embodiments of the present application further provide a computer-readable storage medium that stores computer-executable instructions.
  • when the computer-executable instructions are executed by a machine, the machine is caused to execute the method described above.
  • the focusing method, camera device, drone, and storage medium of the embodiments of this application first perform target recognition on the acquired image to obtain the first distance of the target, obtain a preliminary first image distance based on the first distance, and then find a second image distance with higher image sharpness within the first range based on the first image distance. Since focusing is achieved based on target recognition, that is, the target is recognized first and the first image distance is then obtained from the first distance of the target, the focusing accuracy for the target can be improved and the out-of-focus rate reduced. The method is not easily affected by target size, target movement speed, focal length, or background complexity.
  • the scanning range can be reduced to a smaller range. Compared with the full focal length climbing scan in a larger range, the focusing speed is faster and the efficiency is higher.
  • Figure 1 is a schematic structural diagram of a camera device according to an embodiment of the present application.
  • Figure 2 is a schematic structural diagram of a drone according to an embodiment of the present application.
  • Figure 3 is a schematic flow chart of the focusing method according to the embodiment of the present application.
  • Figure 4a is a schematic diagram of target recognition in the focusing method according to the embodiment of the present application.
  • Figure 4b is a schematic diagram of depth value adjustment in the focusing method according to the embodiment of the present application.
  • Figure 5 is a schematic diagram of the distance, lens and image distance positions in the focusing method according to the embodiment of the present application.
  • Figure 6 is a schematic diagram of the image distance scanning process in the focusing method according to the embodiment of the present application.
  • Figure 7a is a schematic diagram of the sharpness peak image distance point in the focusing method according to the embodiment of the present application.
  • Figure 7b is a schematic diagram of the sharpness peak image distance point in the focusing method according to the embodiment of the present application.
  • FIG. 1 is a schematic structural diagram of a camera device 10 provided in an embodiment of the present application.
  • the camera device 10 includes a processor 11 , a memory 12 and an optical component 13 .
  • the optical component 13 can use any suitable optical component in the prior art, including lenses, photosensitive elements, etc., to obtain image light signals, and convert the image light signals into image electrical signals to obtain image data.
  • the memory 12 can be used to store non-volatile software programs and non-volatile computer-executable program instructions.
  • the memory 12 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required for at least one function; the storage data area may store data created according to the use of the camera device, etc.
  • the memory 12 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • the memory 12 optionally includes memories remotely located relative to the processor 11 , and these remote memories can be connected to the camera device through a network.
  • Examples of the above-mentioned networks include but are not limited to the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
  • the processor 11 uses various interfaces and lines to connect the various parts of the entire camera device, and executes the various functions of the camera device and processes data by running or executing the software programs stored in the memory 12 and calling the data stored in the memory 12, for example to implement the focusing method described in any embodiment of this application.
  • the camera device of the embodiment of the present application can be used for a drone, and the drone can be any suitable type of drone, such as a fixed-wing drone, a rotary-wing drone, an unmanned airship, an unmanned hot air balloon, etc.
  • the camera device according to the embodiment of the present application can also be used in other machines that require camera functions, such as robots.
  • Figure 2 shows one structure of a drone. Please refer to Figure 2.
  • the drone 100 includes a fuselage 20, an arm 30 connected to the fuselage 20, a power device 40 provided on the arm 30, and the camera device 10 provided on the fuselage 20.
  • the power device includes, for example, a motor and a propeller connected to the motor.
  • the rotating shaft of the motor rotates to drive the propeller to rotate to provide lift to the drone.
  • the drone 100 may also include a gimbal 50, through which the camera device 10 is mounted on the fuselage 20.
  • the gimbal 50 is used to reduce or even eliminate the vibration transmitted by the power device 40 to the camera device 10, to ensure that the camera device 10 can capture stable and clear images or videos.
  • the UAV 100 may also include a vision system (not shown).
  • the vision system may include a vision camera and a vision chip for acquiring images of the surrounding environment, identifying targets, detecting depth information of targets, acquiring environment maps, and so on.
  • FIG 3 shows a flow chart of the focusing method according to the embodiment of the present application.
  • the focusing method can be executed by the camera device or the drone according to the embodiment of the present application.
  • the method includes:
  • after the camera device, or the drone through its camera device, obtains an environment image, it identifies the target in the image and detects the depth information of the target.
  • the target can be any target that needs to be tracked or observed, such as people, cars, animals, cables, insulators, eyes, etc.
  • the depth information of the target is, for example, the depth value of each pixel in the target, and the depth value is the distance value of the pixel from the lens.
  • a deep learning algorithm based on a neural network can be used for target recognition, and a monocular detection or binocular detection method can be used to detect the depth value of each pixel in the target.
  • the depth values of the target in the image can be retained, and the depth values of the background outside the target set to 0, so that when the sharpness or other parameters of the image are subsequently calculated, only the pixels related to the target are involved rather than all pixels in the image; this reduces the amount of calculation to a certain extent and improves the running speed.
  • Figure 4a shows a schematic diagram of target recognition by a camera device or a drone.
  • the target car is identified.
  • Figure 4b shows a schematic diagram of depth value processing of an image.
  • Figure 4b only the depth value of each pixel point related to the target is retained, and the depth value of the background part other than the target is set to 0.
  • the first distance is the distance between the target and the lens.
  • the first distance may be the average of the depth values of each pixel in the target.
  • the average may be, for example, an arithmetic mean or a weighted average.
  • the first distance may also be other mathematical values that can reflect the distance from the target to the lens.
  • when the depth values of the background part in the image are set to 0, assuming the width of the image is w and the height is h, and the depth value of each pixel (x, y) is expressed as d(x, y), then the first distance D can be expressed as the average of d(x, y) over all pixels where d(x, y) ≠ 0, i.e. D = ( Σ d(x, y) ) / N, where the sum runs over the pixels with d(x, y) ≠ 0 and N is the number of such pixels.
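This masking-and-averaging step can be sketched in a few lines. The sketch below is illustrative only; the function and variable names are mine, not the patent's, and NumPy is assumed merely as a convenient array library.

```python
import numpy as np

def first_distance(depth: np.ndarray, target_mask: np.ndarray) -> float:
    """Average depth over target pixels, with background depth treated as 0.

    depth:       (h, w) array of per-pixel distances d(x, y) to the lens.
    target_mask: (h, w) boolean array produced by the target detector.
    """
    # Zero out the background, as described above, so that later statistics
    # involve only target-related pixels.
    masked = np.where(target_mask, depth, 0.0)
    nonzero = masked[masked != 0]
    if nonzero.size == 0:
        raise ValueError("no target pixels with a valid depth value")
    return float(nonzero.mean())
```

A weighted average could be substituted for the arithmetic mean here without changing the interface, matching the alternatives the text mentions.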
  • the image distance (code value) is an important parameter in the camera device.
  • the lens can be adjusted based on the image distance to obtain a higher-definition image. How to adjust the lens based on the image distance can be referred to the existing technology, which will not be described again here.
  • the first correspondence reflects the correspondence between target distance and image distance.
  • the first correspondence relationship may be a mathematical formula, and the first image distance may be obtained based on the obtained first distance and the mathematical formula.
  • the first correspondence relationship may include at least two distances and at least two image distances, each image distance corresponding to at least one distance.
  • one image distance corresponds to one distance
  • one image distance corresponds to two distances.
  • the first correspondence relationship is a calibration table, please refer to Table 1.
  • Each distance value in the calibration table has a corresponding image distance value. After obtaining the first distance, the first image distance can be obtained by looking up the calibration table.
  • Table 1 only shows some of the distances and image distances as examples. In practical applications, more image distances and distances may be included: the more distance and image distance values there are, the smaller the difference between adjacent distances, and the higher the accuracy of the first image distance obtained.
  • the first corresponding relationship can be calibrated in advance.
  • the target can be placed at a series of distance points, such as 1 meter, 2 meters, 3 meters, and so on. The lens is focused at each distance point and the code value after focusing is recorded, to obtain the calibration table.
  • FIG. 5 schematically shows the relationship between the target distance (ie, the object distance) and the image distance.
  • the object distance and the image distance are located on both sides of the lens 131 respectively.
  • obtaining the first image distance based on the first distance and the first correspondence may be as follows: if the first distance matches a certain distance in the first correspondence, such as the second distance, then the image distance corresponding to the second distance is used as the first image distance.
  • taking Table 1 as an example, if the calculated first distance is 8 meters, the first image distance is the image distance code 4 corresponding to the 8-meter distance point.
  • if the first distance does not match any distance in the first correspondence but lies between two distances, for example between the third distance d m and the fourth distance d n,
  • the first image distance can be calculated from the image distance code m corresponding to the third distance d m and the image distance code n corresponding to the fourth distance d n.
  • the first image distance can be the midpoint of code m and code n , that is, the first image distance code is (code m + code n ) / 2.
  • the first image distance can also be calculated by linear interpolation, in which case the first image distance code is code m + (D - d m ) x (code n - code m ) / (d n - d m ), where D is the first distance.
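Since the lookup formulas above survive only as fragments, the sketch below assumes the standard midpoint and linear-interpolation forms; the function name and the (distance, code) table layout are my own illustration, not the patent's API.

```python
def first_image_distance(d: float, table: list, linear: bool = True) -> float:
    """Look up the image distance (code) for distance `d` in a calibration
    table of (distance, code) pairs.

    An exact match returns the table code; otherwise the code is obtained
    from the two bracketing entries, either by linear interpolation or by
    taking the midpoint of the two codes.
    """
    table = sorted(table)
    for dist, code in table:
        if dist == d:
            return float(code)
    for (dm, cm), (dn, cn) in zip(table, table[1:]):
        if dm < d < dn:
            if linear:
                return cm + (d - dm) * (cn - cm) / (dn - dm)
            return (cm + cn) / 2  # midpoint of the two codes
    raise ValueError("distance outside the calibration range")
```

Linear interpolation is usually preferable when calibration points are sparse, since the midpoint ignores where d actually falls between d m and d n.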
  • after acquiring the first image distance, the lens can first be moved directly to the first image distance, and a small-range image distance scan can then be performed based on it.
  • the purpose of the small-range image distance scan is to obtain an image distance with higher image sharpness. Moving the lens to the first image distance first has the effect of displaying a clear image more quickly; during the subsequent small-range scan, since the scanning range is relatively small, the change in image sharpness may be invisible to the naked eye, or visible but not obvious.
  • for viewers, the impression is that the camera device responds quickly and obtains clear images.
  • by the time a viewer looks closely, the camera device has completed the small-range image distance scan and the image has been adjusted to a higher sharpness; at this point, the viewer sees an even clearer image.
  • after the first image distance is obtained from the first distance of the target, the camera device can already obtain a relatively clear image; in order to obtain an even sharper image, scanning can continue within the first range based on the first image distance, to obtain a second image distance with higher image sharpness.
  • the second image distance may be the image distance point with the highest sharpness within the first range.
  • the image clarity of the lens is maximum at the second image distance.
  • the maximum sharpness found may contain errors and may not be the actual maximum sharpness; the embodiments of this application allow a slight error in the maximum sharpness.
  • the first range may be a small range near the first image distance, for example from a certain position at the near end of the first image distance to a certain position at the far end. For example, taking the first image distance as the center and extending 12 steps toward the near end and 12 steps toward the far end, the first range is (first image distance - 12 steps, first image distance + 12 steps).
  • the first range may not be 12 steps before and after the first image distance.
  • it may be 10 steps, 14 steps, or other steps before and after the first image distance.
  • the first range may be composed of some discrete image distance points from the proximal end to the far end, and each image distance point may be obtained in the first range with a first step.
  • each image distance point is (first image distance - 12 steps, first image distance - 10 steps, ..., first image distance - 2 steps, first image distance + 2 steps, ..., first image distance + 10 steps, first image distance + 12 steps), a total of 12 image distance points.
  • any suitable value may be selected for the first step length, such as 1 step, 1.5 steps, etc.; this application does not limit the value of the first step length. Relatively speaking, the smaller the first step length, the higher the accuracy, but operating efficiency suffers and the running speed decreases.
  • the image distance points within the first range can be scanned one by one to find the image distance point with the largest image definition to obtain the second image distance.
  • scanning the first range means traversing each image distance point within the first range: for each image distance point, the lens is moved to that point, the image of the lens at that point is acquired, and its sharpness is obtained. After the scan is completed, the image sharpness values corresponding to the image distance points are compared, and the image distance point with the largest image sharpness is the second image distance.
  • taking the first range (first image distance - 12 steps, first image distance + 12 steps) as an example, the lens is moved one by one from code 1 (first image distance - 12 steps) to code 12 (first image distance + 12 steps).
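The scan described above is a straightforward argmax over the image distance points. In the sketch below, `move_lens`, `capture`, and `sharpness` are hypothetical callbacks standing in for the focus-motor driver, the frame grabber, and the target-region sharpness statistic; none of these names come from the patent.

```python
def coarse_scan(first_code, step, half_steps, move_lens, capture, sharpness):
    """Scan image-distance points around `first_code` and return the code
    with the sharpest image (the 'third image distance') and its sharpness.

    move_lens(code) drives the focus motor to an image-distance code,
    capture() grabs a frame at the current code, and sharpness(img) is the
    target-region sharpness statistic.
    """
    best_code, best_fv = None, float("-inf")
    for k in range(-half_steps, half_steps + 1):
        code = first_code + k * step
        move_lens(code)               # pull the lens to this image distance
        fv = sharpness(capture())     # evaluate sharpness at this point
        if fv > best_fv:
            best_code, best_fv = code, fv
    return best_code, best_fv
```

With a larger `step` this is the coarse pass; the refinement around the best point (the second range) can then reuse the same callbacks with a smaller step.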
  • after the image distance with the largest image sharpness in the first range is found (which can be called the third image distance), in order to improve accuracy, a fourth image distance with greater sharpness can be sought within a second range based on the third image distance.
  • for example, a fourth image distance corresponding to the sharpness peak is found within the second range, and the fourth image distance is used as the second image distance.
  • the second range may be a small range near the third image distance, for example, from a certain position at the proximal end of the third image distance to a certain position at the far end of the third image distance.
  • for example, 1 step, 2 steps, or 3 steps before and after the third image distance, and so on.
  • the first range is (first image distance-12 steps, first image distance-10 steps,..., first image distance-2 steps, first image distance+2 steps,..., first image distance +10 steps, first image distance +12 steps)
  • the second range may be from the previous image distance point of the third image distance to the next image distance point of the third image distance.
  • the second range is (the fifth image distance code n-1 , the sixth image distance code n+1 ).
  • the fifth image distance code n-1 is the image distance one step before the third image distance code n
  • the sixth image distance code n+1 is the image distance one step after the third image distance code n .
  • the second step length may be 1 step length.
  • the fourth image distance may be obtained based on the properties of the parabola.
  • the fourth image distance code max is: if
  • FV n+1 is the sharpness of the image of the lens at the sixth image distance code n+1
  • FV n is the sharpness of the image of the lens at the third image distance code n
  • FV n-1 is the sharpness of the image of the lens at the fifth image distance code n-1 .
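The exact if/else expressions for code max did not survive extraction, so the sketch below uses a common stand-in, not necessarily the patent's exact formula: fit a parabola through the three (image distance, sharpness) samples and take its vertex.

```python
def refine_peak(code_n: float, step: float,
                fv_prev: float, fv_n: float, fv_next: float) -> float:
    """Estimate the image distance of the sharpness peak from three samples
    at code_n - step, code_n, and code_n + step via the parabola vertex.

    fv_prev, fv_n, fv_next are the sharpness values FV n-1, FV n, FV n+1.
    Falls back to code_n when the three points have no curvature.
    """
    denom = fv_prev - 2.0 * fv_n + fv_next
    if denom == 0:
        return code_n
    # Vertex of the parabola through the three samples.
    return code_n + 0.5 * step * (fv_prev - fv_next) / denom
```

Because the coarse scan guarantees FV n is the largest of the three samples, the fitted parabola opens downward and the vertex lies within one step of code n.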
  • the sharpness of the image may be a statistical value of sharpness of the image.
  • the sharpness value f(x,y) of each pixel point (x, y) in the image can be obtained based on the image.
  • how to obtain the sharpness value f(x,y) of each pixel in the image belongs to the prior art and will not be described again here.
  • the statistical value of the sharpness of the image may be computed over the sharpness values within the target region only, rather than over the entire image; that is, the points where d(x, y) is 0 in the aforementioned embodiments are excluded from the statistics.
  • the sharpness statistical value may be the average of the sharpness values of each pixel point of the target in the image.
  • the average value may be an arithmetic average or a weighted average.
  • the definition statistical value may also be other mathematical statistical values that can reflect the image definition.
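As one concrete (and assumed) choice of per-pixel sharpness value f(x, y), the sketch below uses a 4-neighbour Laplacian response and averages it over target pixels only, i.e. pixels where d(x, y) ≠ 0; the patent leaves the particular focus measure open, so this is illustration, not the claimed method.

```python
import numpy as np

def target_sharpness(gray: np.ndarray, depth: np.ndarray) -> float:
    """Sharpness statistic restricted to target pixels (depth != 0).

    gray:  (h, w) grayscale image.
    depth: (h, w) depth map with background values already set to 0.
    """
    g = gray.astype(np.float64)
    # 4-neighbour Laplacian magnitude as a simple per-pixel sharpness f(x, y),
    # computed on the interior of the image.
    lap = np.abs(
        -4.0 * g[1:-1, 1:-1]
        + g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
    )
    mask = depth[1:-1, 1:-1] != 0  # exclude the zeroed background
    if not mask.any():
        return 0.0
    return float(lap[mask].mean())
```

Restricting the statistic to the mask is what keeps a distant, textured background from dominating the focus score, which is the failure mode the Background section describes.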
  • a control chip can be provided in the camera device, and the focusing method of any embodiment of the present application can be executed by the control chip. When the camera device is used in a drone, the focusing method can also be executed by the drone's flight control chip.
  • the focusing method can also be jointly executed by two or more chips in the drone.
  • for example, the vision chip identifies the target and detects the depth information of the target (for example, step 101), while the control chip or flight control chip in the camera device controls the lens to focus (for example, steps 102 to 104).
  • the present invention realizes focusing based on target recognition. Compared with the traditional focusing algorithm of the entire area or the pointing rectangular area, it is accurate to every pixel of the target, and the accuracy is significantly improved.
  • the first image distance is first obtained based on the first distance of the target, and then a second image distance with higher image sharpness is found by scanning the first range based on the first image distance, which reduces the scanning range to a smaller one. Compared with the full-focal-range hill-climbing scanning method, the focusing speed is faster.
  • identifying the target first and then obtaining the first image distance based on the first distance of the target can improve the focus accuracy and reduce the out-of-focus rate. It is not easily affected by target size, target movement speed, focal length, and complex background. In portrait mode, movie mode, surround mode, motion following and other modes, the focus accuracy is better.
  • when the target is small, for example in the field of power inspection, cables and insulators can be identified and the focus accurately locked, which can improve the efficiency of inspection operations.
  • the lens is first pulled to the first image distance (a clearer image can be obtained at the first image distance), and subsequent small scanning is performed for refinement, which can take into account both focusing speed and image clarity.
  • Embodiments of the present application also provide a computer-readable storage medium that stores computer-executable instructions; the computer-executable instructions are executed by one or more processors, such as the processor 11 in Figure 1, enabling the one or more processors to execute the focusing method in any of the above method embodiments, for example method steps 101 to 104 in Figure 3 described above.
  • Embodiments of the present application also provide a computer program product.
  • the computer program product includes a computer program stored on a non-volatile computer-readable storage medium.
  • the computer program includes program instructions. When the program instructions are executed by a machine, the machine is caused to perform the above focusing method, for example the above-described method steps 101 to 104 in Figure 3.
  • the device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each embodiment can be implemented by means of software plus a general hardware platform, and of course can also be implemented by hardware.
  • the program can be stored in a computer-readable storage medium.
  • the storage medium can be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM) or a random access memory (Random Access Memory, RAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

Embodiments of the present application relate to a focusing method, a camera device, an unmanned aerial vehicle, and a storage medium. The method includes: identifying a target in an acquired image and obtaining depth information of the target; obtaining a first distance of the target based on the depth information of the target; obtaining a first image distance based on the first distance and a first correspondence; and scanning within a first range based on the first image distance to find a second image distance, where the sharpness of the lens's image at the second image distance is greater than the sharpness of the lens's image at the first image distance. Because the embodiments of the present application achieve focusing based on target recognition, that is, the target is recognized first and the first image distance is then obtained based on the first distance of the target, the focusing accuracy for the target can be improved. Moreover, a preliminary first image distance is obtained first, and a second image distance with higher image sharpness is then sought within the first range based on the first image distance; compared with hill-climbing scanning over the full focal range within a larger range, focusing is faster and more efficient.

Description

Focusing method, camera device, unmanned aerial vehicle, and storage medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on April 15, 2022, with application number CN2022104000202 and entitled "Focusing method, camera device, unmanned aerial vehicle, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present application relate to the technical field of unmanned aerial vehicles, and in particular to a focusing method, a camera device, an unmanned aerial vehicle, and a storage medium.
Background Art
To obtain clear images during aerial photography, a drone needs to focus. Drone focusing is generally divided into global focusing, regional focusing, point focusing, and so on. With current focusing methods, out-of-focus problems easily occur when the target is too small or the target moves. For example, in power-line inspection, the wires and insulators on the poles are far from the background, the targets are small, and the aircraft moves dynamically; traditional focusing algorithms often fail to focus on the target and instead focus on the background, leaving the target blurred.
Summary of the Invention
Embodiments of the present application provide a focusing method, a camera device, an unmanned aerial vehicle, and a storage medium that can improve focusing accuracy.
In a first aspect, embodiments of the present application provide a focusing method, including:
identifying a target in an acquired image, and obtaining depth information of the target;
obtaining a first distance of the target based on the depth information of the target;
obtaining a first image distance based on the first distance and a first correspondence, where the first correspondence reflects the correspondence between target distance and image distance; and
scanning within a first range based on the first image distance to find a second image distance, where the sharpness of the lens's image at the second image distance is greater than the sharpness of the lens's image at the first image distance.
In some embodiments, within the first range, the sharpness of the lens's image is maximal at the second image distance.
In some embodiments, scanning within the first range based on the first image distance to find the second image distance includes:
scanning within the first range with a first step length based on the first image distance, to find the image distance with the largest image sharpness; and
using the image distance with the largest image sharpness as the second image distance.
In some embodiments, scanning within the first range based on the first image distance to find the second image distance includes:
scanning within the first range with a first step length based on the first image distance, to find a third image distance with the largest image sharpness;
finding a fourth image distance corresponding to the sharpness peak within a second range based on the third image distance; and
using the fourth image distance as the second image distance.
In some embodiments, the second range extends from a fifth image distance code n-1 to a sixth image distance code n+1, where the fifth image distance code n-1 is the image distance one first-step-length before the third image distance code n, and the sixth image distance code n+1 is the image distance one first-step-length after the third image distance code n.
In some embodiments, the fourth image distance code max is: if
then,
otherwise,
where FV n+1 is the sharpness of the lens's image at the sixth image distance code n+1, FV n is the sharpness of the lens's image at the third image distance code n, and FV n-1 is the sharpness of the lens's image at the fifth image distance code n-1.
In some embodiments, after obtaining the first image distance based on the first distance and the first correspondence, the method further includes:
adjusting the lens based on the first image distance.
In some embodiments, the depth information of the target includes the depth value of each pixel in the target;
the first distance is the average of the depth values of the pixels in the target.
In some embodiments, the first correspondence includes at least two distances and at least two image distances, each image distance corresponding to at least one of the distances.
In some embodiments, obtaining the first image distance based on the first distance and the first correspondence includes:
if the first distance matches a second distance in the first correspondence, the first image distance is the image distance corresponding to the second distance;
if the first distance D lies between a third distance d m and a fourth distance d n, the first image distance code is:
where code m is the image distance corresponding to the third distance d m, and code n is the image distance corresponding to the fourth distance d n.
In some embodiments, the method further includes:
obtaining the image of the lens at the current image distance, and the sharpness value of each pixel in the image;
calculating the average of the sharpness values of the target's pixels in the image; and
using the average of the sharpness values as the sharpness of the image corresponding to the current image distance.
In some embodiments, the method further includes:
adjusting the lens based on the second image distance.
In a second aspect, embodiments of the present application further provide a camera device, including:
at least one processor; and
a memory communicatively connected to the at least one processor, the memory storing instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the above method.
In a third aspect, embodiments of the present application further provide an unmanned aerial vehicle, including the above camera device.
In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium storing computer-executable instructions which, when executed by a machine, cause the machine to perform the method described above.
Compared with the prior art, the present application has at least the following beneficial effects. The focusing method, camera device, drone, and storage medium of the embodiments of the present application first perform target recognition on the acquired image to obtain the first distance of the target, obtain a preliminary first image distance based on the first distance, and then search within the first range, based on the first image distance, for a second image distance with higher image sharpness. Because focusing is achieved based on target recognition, that is, the target is recognized first and the first image distance is then obtained based on the first distance of the target, the focusing accuracy for the target can be improved and the out-of-focus rate reduced. The method is not easily affected by target size, target movement speed, focal length, or background complexity.
Moreover, obtaining a preliminary first image distance first and then searching within the first range for a second image distance with higher image sharpness narrows the scanning range to a smaller one; compared with hill-climbing scanning over the full focal range within a larger range, focusing is faster and more efficient.
Brief Description of the Drawings
One or more embodiments are illustrated by the figures in the corresponding drawings. These illustrations do not limit the embodiments; elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures are not drawn to scale.
Figure 1 is a schematic structural diagram of the camera device according to an embodiment of the present application;
Figure 2 is a schematic structural diagram of the drone according to an embodiment of the present application;
Figure 3 is a schematic flow chart of the focusing method according to an embodiment of the present application;
Figure 4a is a schematic diagram of target recognition in the focusing method according to an embodiment of the present application;
Figure 4b is a schematic diagram of depth value adjustment in the focusing method according to an embodiment of the present application;
Figure 5 is a schematic diagram of the distance, lens, and image distance positions in the focusing method according to an embodiment of the present application;
Figure 6 is a schematic diagram of the image distance scanning process in the focusing method according to an embodiment of the present application;
Figure 7a is a schematic diagram of a sharpness-peak image distance point in the focusing method according to an embodiment of the present application;
Figure 7b is a schematic diagram of a sharpness-peak image distance point in the focusing method according to an embodiment of the present application.
Detailed Description
To make the purposes, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.
Please refer to Figure 1, a schematic structural diagram of the camera device 10 provided in an embodiment of the present application. The camera device 10 includes a processor 11, a memory 12, and an optical component 13. The optical component 13 can be any suitable optical component in the prior art, including a lens, a photosensitive element, and the like, and is used to acquire image light signals and convert them into image electrical signals to obtain image data.
The memory 12, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs and non-volatile computer-executable program instructions. The memory 12 may include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required for at least one function, and the data storage area may store data created according to the use of the camera device, etc.
In addition, the memory 12 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 12 optionally includes memories remotely located relative to the processor 11, and these remote memories can be connected to the camera device through a network.
Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor 11 connects the various parts of the entire camera device through various interfaces and lines, and executes the various functions of the camera device and processes data by running or executing the software programs stored in the memory 12 and calling the data stored in the memory 12, for example to implement the focusing method described in any embodiment of the present application.
The camera device of the embodiments of the present application can be used in a drone, and the drone can be any suitable type of drone, such as a fixed-wing drone, a rotary-wing drone, an unmanned airship, an unmanned hot-air balloon, and the like. In addition, the camera device of the embodiments of the present application can also be used in other machines that require camera functions, such as robots.
Figure 2 shows one structure of the drone. Referring to Figure 2, the drone 100 includes a fuselage 20, an arm 30 connected to the fuselage 20, a power device 40 provided on the arm 30, and the camera device 10 provided on the fuselage.
The power device includes, for example, a motor and a propeller connected to the motor; the motor's shaft rotates to drive the propeller and provide lift for the drone.
Referring to Figure 2, the drone 100 may also include a gimbal 50, through which the camera device 10 is mounted on the fuselage 20. The gimbal 50 is used to reduce or even eliminate the vibration transmitted from the power device 40 to the camera device 10, to ensure that the camera device 10 can capture stable, clear images or videos.
In other embodiments, the drone 100 may also include a vision system (not shown), which may include a vision camera and a vision chip, used for acquiring images of the surrounding environment, identifying targets, detecting depth information of targets, acquiring environment maps, and so on.
Those skilled in the art will understand that the above is only an illustration of the hardware structure of the drone 100 and the camera device 10. In practical applications, more components can be provided for the drone 100 and the camera device 10 according to actual functional needs, and of course one or more of these components can also be omitted as functional needs allow.
FIG. 3 shows a flowchart of the focusing method according to an embodiment of the present application. The focusing method may be performed by the camera device or the UAV of the embodiments of the present application. As shown in FIG. 3, the method includes:
101: Recognize a target in an acquired image, and acquire depth information of the target.
After the camera device, or the UAV through the camera device, acquires an image of the environment, it recognizes the target in the image and detects the depth information of the target. The target may be anything that needs to be tracked or observed, for example a person, a vehicle, an animal, a cable, an insulator or an eye. The depth information of the target is, for example, the depth value of each pixel of the target, the depth value being the distance from the pixel to the lens.
Specifically, in some embodiments, a neural-network-based deep learning algorithm may be used for target recognition, and monocular or binocular detection methods may be used to detect the depth value of each pixel of the target.
In some embodiments, after the target in the image is recognized, the depth values of the target may be retained while the depth values of the background outside the target are set to 0, so that when subsequently computing the sharpness of the image or other parameters, only the pixels related to the target are included, rather than all pixels in the image, which reduces the amount of computation to a certain extent and increases running speed.
FIG. 4a is a schematic diagram of target recognition by the camera device or UAV; in FIG. 4a the target vehicle is recognized. FIG. 4b is a schematic diagram of depth-value processing of the image; in FIG. 4b only the depth values of the pixels related to the target are retained, and the depth values of the background outside the target are set to 0.
102: Acquire a first distance of the target based on the depth information of the target.
The first distance is the distance from the target to the lens. In some embodiments, the first distance may be the average of the depth values of the pixels of the target, the average being, for example, an arithmetic mean or a weighted mean.
In other embodiments, the first distance may also be another mathematical value that reflects the distance from the target to the lens.
In some embodiments, where the background depth values in the image have been set to 0, let the width of the image be w, its height be h, and the depth value of each pixel (x, y) be denoted d(x, y). The first distance D can then be expressed as:
D = Σ d(x, y) / n, where d(x, y) ≠ 0,
the sum being taken over all pixels with d(x, y) ≠ 0, and n being the number of such pixels.
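The statistic above, the mean of the non-zero depth values, can be sketched as follows. This is a minimal illustration; the function name and the plain nested-list depth map are assumptions for the example, not from the patent.

```python
def first_distance(depth_map):
    """Mean of the non-zero depths d(x, y); 0 marks background pixels."""
    total, count = 0.0, 0
    for row in depth_map:
        for d in row:
            if d != 0:          # only target pixels participate
                total += d
                count += 1
    if count == 0:
        raise ValueError("no target pixels in depth map")
    return total / count

# e.g. a 2x3 depth map with three target pixels of depths 8, 10 and 12
print(first_distance([[0, 8, 0], [10, 0, 12]]))  # 10.0
```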
103: Acquire a first image distance based on the first distance and a first correspondence, wherein the first correspondence reflects the correspondence between target distance and image distance.
The image distance (code value) is an important parameter of the camera device; the lens can be adjusted based on the image distance so as to obtain a sharper image. How the lens is adjusted based on the image distance can be found in the prior art and is not repeated here.
The first correspondence reflects the correspondence between target distance and image distance. In some embodiments, the first correspondence may be a mathematical formula, and the first image distance can be obtained from the acquired first distance and the formula.
In other embodiments, the first correspondence may include at least two distances and at least two image distances, each image distance corresponding to at least one distance, for example one image distance per distance, or one image distance for two distances.
The first correspondence is, for example, a calibration table. Referring to Table 1, each distance value in the calibration table has a corresponding image-distance value. After the first distance is acquired, the first image distance can be obtained by looking it up in the calibration table.
Table 1
It should be noted that Table 1 only shows some distances and image distances by way of example; in practical applications, more image distances and distances may be included. The more distance and image-distance values there are and the smaller the difference between adjacent distances, the higher the precision of the obtained first image distance.
In practical applications, the first correspondence can be calibrated in advance. For example, the target can be placed at a series of distance points, such as 1 m, 2 m, 3 m and so on. At each distance point, the lens is focused and the code value after sharp focus is recorded, so as to obtain the calibration table.
FIG. 5 schematically shows the relationship between the target distance (i.e., the object distance) and the image distance; the object distance and the image distance are on opposite sides of the lens 131.
In some embodiments, obtaining the first image distance based on the first distance and the first correspondence may be as follows: if the first distance matches a distance in the first correspondence, for example a second distance, the image distance corresponding to the second distance is taken as the first image distance. Taking Table 1 as the first correspondence as an example, if the computed first distance is 8 m and an 8 m distance point exists in the first correspondence, the first image distance is code4, the image distance corresponding to the 8 m distance point.
If the first distance matches none of the distances in the first correspondence but lies between two distances, for example between a third distance dm and a fourth distance dn, the first image distance can be computed from the image distance codem corresponding to the third distance dm and the image distance coden corresponding to the fourth distance dn.
For example, the first image distance may be the midpoint of codem and coden, i.e., the first image distance code is:
code = (codem + coden) / 2
In other embodiments, the first image distance may also be computed by linear interpolation, in which case the first image distance code is:
code = codem + (coden - codem) × (D - dm) / (dn - dm)
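The exact-match and interpolation cases above can be sketched as follows. The calibration values and the function name are illustrative assumptions; a real table would come from the calibration procedure described earlier.

```python
def first_code(distance, table):
    """table: list of (distance, code) pairs sorted by distance."""
    for d, c in table:
        if d == distance:       # exact match: use the calibrated code
            return c
    # otherwise find the bracketing pair (dm, codem), (dn, coden)
    for (dm, cm), (dn, cn) in zip(table, table[1:]):
        if dm < distance < dn:
            # linear interpolation between the two calibrated codes
            return cm + (cn - cm) * (distance - dm) / (dn - dm)
    raise ValueError("distance outside calibration range")

table = [(1, 100), (2, 200), (4, 300)]   # illustrative values only
print(first_code(2, table))   # 200 (exact match)
print(first_code(3, table))   # 250.0 (interpolated)
```

The patent also allows the midpoint (codem + coden) / 2 instead of interpolation; that variant would simply ignore the distance term.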
In some embodiments, after the first image distance is acquired, the lens may first be moved directly to the first image distance, and a small-range image-distance scan may then be performed based on the first image distance. The purpose of the small-range scan is to find an image distance with higher image sharpness. Moving the lens to the first image distance first has the effect of displaying a sharp image more quickly. When the small-range scan is performed afterwards, because the scan range is relatively small, the change in image sharpness may be invisible to the naked eye, or visible but not obvious.
The impression for the viewer is that the camera device responds quickly and produces a sharp image, and by the time the viewer looks closely, the camera device has already completed the small-range scan and adjusted the image to one with higher sharpness, so the viewer then sees an even sharper image. Being able to quickly see a high-sharpness image makes for a good user experience.
104: Scan within a first range based on the first image distance to find a second image distance, the sharpness of the image through the lens at the second image distance being greater than the sharpness of the image at the first image distance.
After the first image distance is obtained from the first distance of the target, the camera device can already obtain a fairly sharp image. To obtain an even sharper image, scanning may continue within the first range based on the first image distance, so as to obtain a second image distance with higher image sharpness.
Specifically, in some embodiments, the second image distance may be the image-distance point with the highest sharpness within the first range; in other words, the sharpness of the image at the second image distance is maximal. Of course, in actual computation the maximum sharpness may be subject to error and may not be the true maximum; in the embodiments of the present application, a small error in the maximum sharpness is tolerated.
The first range may be a small range around the first image distance, for example from a certain position on the near side of the first image distance to a certain position on its far side. For example, taking the first image distance as the center and going 12 steps toward the near end and 12 steps toward the far end, the first range is (first image distance - 12 steps, first image distance + 12 steps).
In other embodiments, the first range need not be 12 steps before and after the first image distance; it may, for example, be 10 steps, 14 steps or another number of steps before and after the first image distance.
In some embodiments, for convenience of computation, the first range may consist of discrete image-distance points from the near end to the far end, obtained at a first step length within the first range. Taking a first step length of 2 steps as an example, the image-distance points are (first image distance - 12 steps, first image distance - 10 steps, …, first image distance - 2 steps, first image distance + 2 steps, …, first image distance + 10 steps, first image distance + 12 steps), 12 points in total.
In other embodiments, the first step length may take other values, for example 1 step or 1.5 steps; the present application does not limit the value of the first step length. In general, the smaller the first step length, the higher the precision, but running efficiency is affected and running speed is reduced.
Then, in some embodiments, the image-distance points within the first range may be scanned one by one to find the point with the highest image sharpness, thereby obtaining the second image distance. Scanning the first range means traversing each image-distance point in the first range: for each point, the lens is moved to that point, the image at that point is captured, and the sharpness of that image is acquired. After the scan, the sharpness values corresponding to the points are compared, and the point with the highest image sharpness is the second image distance.
Taking the first range (first image distance - 12 steps, first image distance - 10 steps, …, first image distance - 2 steps, first image distance + 2 steps, …, first image distance + 10 steps, first image distance + 12 steps) as an example, the lens is moved point by point from code1 (first image distance - 12 steps) to code12 (first image distance + 12 steps).
Referring to FIG. 6, the lens is first moved to code1 and the sharpness FV1 of the image there is acquired; the lens is then moved to code2 and the sharpness FV2 is acquired; and so on for code3, code4, …, up to code12. At each image-distance point the sharpness FV of the image is acquired, giving 12 sharpness values in total. Comparing these 12 values, if the maximum sharpness is determined to be FV6, then code6 corresponding to FV6 is the second image distance. Of course, in other embodiments, the lens may instead be moved point by point from code12 to code1 for the scan.
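The point-by-point scan above can be sketched as follows. This is a hedged illustration: the `sharpness` callback stands in for moving the lens to a code point and measuring the FV of the resulting image, and the names and default parameters are assumptions.

```python
def scan_for_best_code(center_code, sharpness, half_width=12, step=2):
    """Scan code points center_code ± half_width at the given step
    (skipping the centre itself, as in the 12-point example) and return
    the code point whose measured sharpness is highest."""
    candidates = [center_code + k
                  for k in range(-half_width, half_width + 1, step)
                  if k != 0]
    # move the lens to each candidate, measure FV, keep the best
    return max(candidates, key=sharpness)

# toy sharpness curve peaking at code 106
fv = lambda c: -(c - 106) ** 2
print(scan_for_best_code(100, fv))  # 106
```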
In other embodiments, after the image distance with the highest sharpness within the first range (which may be called a third image distance) is found, to improve precision, a fourth image distance with even higher sharpness may further be sought within a second range based on the third image distance. For example, the fourth image distance corresponding to the sharpness peak is sought within the second range, and the fourth image distance is taken as the second image distance.
The second range may be a small range around the third image distance, for example from a certain position on the near side of the third image distance to a certain position on its far side, such as 1 step, 2 steps or 3 steps before and after the third image distance.
In the embodiment where the first range is (first image distance - 12 steps, first image distance - 10 steps, …, first image distance - 2 steps, first image distance + 2 steps, …, first image distance + 10 steps, first image distance + 12 steps), the second range may be from the image-distance point preceding the third image distance to the point following it.
For example, if the third image distance is coden, the second range is (fifth image distance coden-1, sixth image distance coden+1), where the fifth image distance coden-1 is the image distance one first step length before the third image distance coden, and the sixth image distance coden+1 is the image distance one first step length after coden.
In some embodiments, scanning the second range for the fourth image distance corresponding to the sharpness peak may follow the scanning method described above: a second step length is taken, and the image-distance points within the second range are scanned one by one to find the sharpness peak and its corresponding fourth image distance. Specifically, the second step length may be 1 step.
In other embodiments, the fourth image distance may be obtained based on the properties of a parabola.
For example, the fourth image distance codemax is: if
2FVn - FVn-1 - FVn+1 ≠ 0,
then
codemax = coden + ((coden+1 - coden-1) / 4) × (FVn+1 - FVn-1) / (2FVn - FVn-1 - FVn+1);
otherwise,
codemax = coden;
where FVn+1 is the sharpness of the image at the sixth image distance coden+1, FVn is the sharpness of the image at the third image distance, and FVn-1 is the sharpness of the image at the fifth image distance.
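The branch structure above corresponds to the vertex of a parabola fitted through the three equally spaced (code, FV) samples, falling back to the middle point when the samples are collinear. A sketch under that assumption (the function and argument names are illustrative):

```python
def refine_code(code_prev, code_n, code_next, fv_prev, fv_n, fv_next):
    """Peak of the parabola through three equally spaced samples
    (code_prev, fv_prev), (code_n, fv_n), (code_next, fv_next)."""
    denom = 2 * fv_n - fv_prev - fv_next
    if denom == 0:
        return code_n            # flat / collinear: keep the middle point
    half_step = (code_next - code_prev) / 4.0
    return code_n + half_step * (fv_next - fv_prev) / denom

# toy FV curve with its true peak at code 10.5, sampled at 8, 10, 12
fv = lambda c: -(c - 10.5) ** 2
print(refine_code(8, 10, 12, fv(8), fv(10), fv(12)))  # 10.5
```

For an exactly parabolic FV curve this recovers the true peak between the sampled code points, which is the purpose of the refinement step.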
In some of these embodiments, the sharpness of an image may be a sharpness statistic of the image. After the image is acquired, the sharpness value f(x, y) of each pixel (x, y) can be obtained from the image; how the per-pixel sharpness values are obtained is known in the prior art, which may be referred to and is not repeated here.
The sharpness statistic of the image may be computed over the sharpness values within the target region only, rather than over the whole image; that is, pixels where d(x, y) is 0 in the foregoing embodiments do not participate in the statistic.
Specifically, the sharpness statistic may be the average of the sharpness values of the pixels of the target in the image, where the average may be an arithmetic mean or a weighted mean. In other embodiments, the sharpness statistic may also be another mathematical statistic that reflects image sharpness.
That is, the sharpness statistic FV1 is:
FV1 = Σ f(x, y) / n, where d(x, y) ≠ 0,
the sum being taken over all pixels with d(x, y) ≠ 0, and n being the number of such pixels.
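The target-only statistic FV1 above can be sketched as follows. The names and the nested-list inputs are illustrative assumptions; `focus_map` holds the per-pixel sharpness values f(x, y), however they were computed.

```python
def target_sharpness(focus_map, depth_map):
    """Mean of per-pixel sharpness f(x, y) over target pixels only,
    i.e. pixels whose depth d(x, y) is non-zero (background depth is 0)."""
    total, count = 0.0, 0
    for frow, drow in zip(focus_map, depth_map):
        for f, d in zip(frow, drow):
            if d != 0:
                total += f
                count += 1
    return total / count if count else 0.0

focus = [[1.0, 2.0], [3.0, 4.0]]
depth = [[0, 1], [1, 0]]         # two target pixels
print(target_sharpness(focus, depth))  # 2.5
```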
It should be noted that, in practical applications, a control chip may be provided in the camera device, and the focusing method of any embodiment of the present application may be performed by that control chip. Where the camera device is used in a UAV, the focusing method may also be performed by the UAV's flight-control chip. Where the UAV includes a vision system, the focusing method may also be performed jointly by two or more chips in the UAV; for example, the vision chip recognizes the target and detects its depth information (e.g., step 101), while the control chip in the camera device or the flight-control chip controls the lens for focusing (e.g., steps 102 to 104). The present invention achieves focusing based on target recognition; compared with traditional focusing algorithms that operate on the whole region or a designated rectangular region, it is precise to every pixel of the target, and accuracy is markedly improved.
In the embodiments of the present application, the first image distance is first obtained based on the first distance of the target, and the first range is then scanned based on the first image distance for a second image distance with higher image sharpness, which narrows the scan range to a small interval. Focusing is faster than with the full-focal-range hill-climbing scanning method.
Moreover, recognizing the target first and then obtaining the first image distance from the target's first distance improves focusing accuracy and reduces the defocus rate. The method is not easily affected by target size, target speed, focal length or background complexity, and focusing accuracy is better in portrait mode, short-video mode, orbit mode, motion-follow mode and the like.
When the target is small, for example in the field of power-line inspection, cables and insulators can be recognized and precisely locked onto for focusing, which improves the efficiency of inspection work.
In addition, the embodiments of the present application first move the lens to the first image distance (where a fairly sharp image can already be obtained) and then refine with the subsequent small-range scan, balancing focusing speed and image sharpness.
An embodiment of the present application further provides a computer-readable storage medium storing computer-executable instructions which, when executed by one or more processors, for example one processor 11 in FIG. 1, enable the one or more processors to perform the focusing method of any of the above method embodiments, for example to perform method steps 101 to 104 in FIG. 3 described above.
An embodiment of the present application further provides a computer program product comprising a computer program stored on a non-volatile computer-readable storage medium, the computer program comprising program instructions which, when executed by a machine, cause the machine to perform the focusing method described above, for example method steps 101 to 104 in FIG. 3 described above.
The device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the description of the embodiments above, a person of ordinary skill in the art will clearly understand that the embodiments can be implemented by software plus a general-purpose hardware platform, or of course by hardware. A person of ordinary skill in the art will understand that all or part of the processes of the methods of the above embodiments can be accomplished by a computer program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM) or the like.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Under the concept of the present application, the technical features of the above embodiments or of different embodiments may also be combined, the steps may be implemented in any order, and many other variations of the different aspects of the present application as described above exist, which are not provided in detail for the sake of brevity. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of the technical features therein may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (15)

  1. A focusing method, characterized by comprising:
    recognizing a target in an acquired image, and acquiring depth information of the target;
    acquiring a first distance of the target based on the depth information of the target;
    acquiring a first image distance based on the first distance and a first correspondence, wherein the first correspondence reflects the correspondence between target distance and image distance; and
    scanning within a first range based on the first image distance to find a second image distance, the sharpness of the image through the lens at the second image distance being greater than the sharpness of the image at the first image distance.
  2. The method according to claim 1, characterized in that, within the first range, the sharpness of the image at the second image distance is maximal.
  3. The method according to claim 1, characterized in that the scanning within a first range based on the first image distance to find a second image distance comprises:
    scanning within the first range at a first step length based on the first image distance to find the image distance with the highest image sharpness; and
    taking the image distance with the highest image sharpness as the second image distance.
  4. The method according to claim 1, characterized in that the scanning within a first range based on the first image distance to find a second image distance comprises:
    scanning within the first range at a first step length based on the first image distance to find a third image distance with the highest image sharpness;
    finding, within a second range based on the third image distance, a fourth image distance corresponding to the sharpness peak; and
    taking the fourth image distance as the second image distance.
  5. The method according to claim 4, characterized in that the second range extends from a fifth image distance coden-1 to a sixth image distance coden+1, wherein the fifth image distance coden-1 is the image distance one first step length before the third image distance coden, and the sixth image distance coden+1 is the image distance one first step length after the third image distance coden.
  6. The method according to claim 5, characterized in that the fourth image distance codemax is: if
    2FVn - FVn-1 - FVn+1 ≠ 0,
    then codemax = coden + ((coden+1 - coden-1) / 4) × (FVn+1 - FVn-1) / (2FVn - FVn-1 - FVn+1);
    otherwise, codemax = coden;
    wherein FVn+1 is the sharpness of the image at the sixth image distance coden+1, FVn is the sharpness of the image at the third image distance coden, and FVn-1 is the sharpness of the image at the fifth image distance coden-1.
  7. The method according to any one of claims 1 to 6, characterized in that, after the acquiring a first image distance based on the first distance and a first correspondence, the method further comprises:
    adjusting the lens based on the first image distance.
  8. The method according to any one of claims 1 to 6, characterized in that the depth information of the target includes the depth value of each pixel of the target; and
    the first distance is the average of the depth values of the pixels of the target.
  9. The method according to any one of claims 1 to 6, characterized in that the first correspondence includes at least two distances and at least two image distances, each of the image distances corresponding to at least one of the distances.
  10. The method according to claim 9, characterized in that the acquiring a first image distance based on the first distance and a first correspondence comprises:
    if the first distance matches a second distance in the first correspondence, the first image distance is the image distance corresponding to the second distance;
    if the first distance D lies between a third distance dm and a fourth distance dn, the first image distance code is:
    code = codem + (coden - codem) × (D - dm) / (dn - dm),
    wherein codem is the image distance corresponding to the third distance dm, and coden is the image distance corresponding to the fourth distance dn.
  11. The method according to any one of claims 1 to 6, characterized by further comprising:
    acquiring an image with the lens at a current image distance, and the sharpness value of each pixel in the image;
    computing the average of the sharpness values of the pixels of the target in the image; and
    taking the average of the sharpness values as the sharpness of the image corresponding to the current image distance.
  12. The method according to claim 1, characterized by further comprising:
    adjusting the lens based on the second image distance.
  13. A camera device, characterized by comprising:
    at least one processor, and
    a memory communicatively connected to the at least one processor, the memory storing instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1 to 12.
  14. An unmanned aerial vehicle, characterized by comprising the camera device according to claim 13.
  15. A computer-readable storage medium, characterized in that the computer-readable storage medium stores computer-executable instructions which, when executed by a machine, cause the machine to perform the method according to any one of claims 1 to 12.
PCT/CN2023/083223 2022-04-15 2023-03-23 Focusing method, camera device, unmanned aerial vehicle and storage medium WO2023197841A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210400020.2 2022-04-15
CN202210400020.2A CN114845050A (zh) 2022-04-15 Focusing method, camera device, unmanned aerial vehicle and storage medium
