CN114845050A - Focusing method, camera device, unmanned aerial vehicle and storage medium - Google Patents

Focusing method, camera device, unmanned aerial vehicle and storage medium

Info

Publication number
CN114845050A
CN114845050A
Authority
CN
China
Prior art keywords
image
distance
image distance
target
code
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210400020.2A
Other languages
Chinese (zh)
Inventor
李昭早
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Autel Intelligent Aviation Technology Co Ltd
Original Assignee
Shenzhen Autel Intelligent Aviation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Autel Intelligent Aviation Technology Co Ltd filed Critical Shenzhen Autel Intelligent Aviation Technology Co Ltd
Priority to CN202210400020.2A priority Critical patent/CN114845050A/en
Publication of CN114845050A publication Critical patent/CN114845050A/en
Priority to PCT/CN2023/083223 priority patent/WO2023197841A1/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H04N 23/673 Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method

Abstract

The embodiments of the application relate to a focusing method, a camera device, an unmanned aerial vehicle and a storage medium. The method includes the following steps: identifying a target in an acquired image and obtaining depth information of the target; acquiring a first distance of the target based on the depth information of the target; acquiring a first image distance based on the first distance and a first corresponding relationship; and scanning within a first range based on the first image distance to find a second image distance, where the sharpness of the image formed by the lens at the second image distance is greater than the sharpness of the image formed by the lens at the first image distance. In the embodiments of the application, focusing is realized on the basis of target recognition: the target is identified first, and the first image distance is then obtained from the first distance of the target, which improves the focusing accuracy for the target. Moreover, a preliminary first image distance is obtained first, and a second image distance with higher image sharpness is then searched for within the first range based on the first image distance; this is faster and more efficient than hill-climbing scanning over the full focus range.

Description

Focusing method, camera device, unmanned aerial vehicle and storage medium
Technical Field
The embodiment of the application relates to the technical field of unmanned aerial vehicles, in particular to a focusing method, a camera device, an unmanned aerial vehicle and a storage medium.
Background
Focusing is needed to obtain clear images when an unmanned aerial vehicle takes aerial photographs, and unmanned aerial vehicle focusing is generally divided into global focusing, area focusing, pointing focusing and the like. Existing focusing methods are prone to defocusing when the target is too small or is moving. For example, in power line inspection, the wires and insulators on a tower are far from the background, the target is small, and the aircraft is moving, so a traditional focusing algorithm often fails to focus on the target and instead focuses on the background, leaving the target blurred.
Disclosure of Invention
The embodiment of the application provides a focusing method, a camera device, an unmanned aerial vehicle and a storage medium, which can improve the focusing accuracy.
In a first aspect, an embodiment of the present application provides a focusing method, including:
identifying a target in the obtained image and obtaining depth information of the target;
acquiring a first distance of the target based on the depth information of the target;
acquiring a first image distance based on the first distance and a first corresponding relationship, wherein the first corresponding relationship reflects the correspondence between the target distance and the image distance;
and scanning within a first range based on the first image distance to find a second image distance, wherein the sharpness of the image formed by the lens at the second image distance is greater than the sharpness of the image formed by the lens at the first image distance.
In some embodiments, the sharpness of the image formed by the lens at the second image distance is the greatest within the first range.
In some embodiments, said scanning for a second image distance based on said first image distance in a first range comprises:
scanning within a first range at a first step length based on the first image distance, and searching for the image distance with the maximum image sharpness;
and taking the image distance with the maximum image sharpness as the second image distance.
In some embodiments, said scanning for a second image distance based on said first image distance within a first range comprises:
scanning within a first range at a first step length based on the first image distance, and searching for a third image distance with the maximum image sharpness;
finding a fourth image distance corresponding to the sharpness peak within a second range based on the third image distance;
and taking the fourth image distance as the second image distance.
In some embodiments, the second range is from a fifth image distance $code_{n-1}$ to a sixth image distance $code_{n+1}$, wherein the fifth image distance $code_{n-1}$ is the image distance one first step before the third image distance $code_n$, and the sixth image distance $code_{n+1}$ is the image distance one first step after the third image distance $code_n$.
In some embodiments, the fourth image distance $code_{max}$ is the vertex of a parabola fitted through the three sampled points: if

$$FV_{n-1} - 2FV_n + FV_{n+1} \neq 0,$$

then

$$code_{max} = code_n + \frac{s}{2}\cdot\frac{FV_{n-1} - FV_{n+1}}{FV_{n-1} - 2FV_n + FV_{n+1}};$$

if not, then $code_{max} = code_n$. Here $s$ is the first step length, $FV_{n+1}$ is the sharpness of the image formed by the lens at the sixth image distance $code_{n+1}$, $FV_n$ is the sharpness of the image at the third image distance $code_n$, and $FV_{n-1}$ is the sharpness of the image at the fifth image distance $code_{n-1}$.
In some embodiments, after acquiring the first image distance based on the first distance and the first corresponding relationship, the method further includes:
and adjusting the lens based on the first image distance.
In some embodiments, the depth information of the target includes depth values of respective pixel points in the target;
the first distance is an average value of depth values of all pixel points in the target.
In some embodiments, the first corresponding relationship comprises at least two distances and at least two image distances, each of the image distances corresponding to at least one of the distances.
In some embodiments, the obtaining a first image distance based on the first distance and a first corresponding relationship includes:
if the first distance matches a second distance in the first corresponding relationship, the first image distance is the image distance corresponding to the second distance;
if the first distance $D$ lies between a third distance $d_m$ and a fourth distance $d_n$, the first image distance $code$ is:

$$code = code_m + \frac{D - d_m}{d_n - d_m}\,(code_n - code_m),$$

wherein $code_m$ is the image distance corresponding to the third distance $d_m$, and $code_n$ is the image distance corresponding to the fourth distance $d_n$.
In some embodiments, the method further comprises:
acquiring an image formed by the lens at the current image distance and the sharpness value of each pixel point in the image;
calculating the average value of the sharpness values of the pixel points of the target in the image;
and taking the average value of the sharpness values as the sharpness of the image corresponding to the current image distance.
In some embodiments, the method further comprises:
and adjusting the lens based on the second image distance.
In a second aspect, an embodiment of the present application further provides an image pickup apparatus, including
At least one processor, and
a memory communicatively coupled to the at least one processor, the memory storing instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
In a third aspect, an embodiment of the present application further provides an unmanned aerial vehicle, including the above-mentioned camera device.
In a fourth aspect, embodiments of the present application also provide a computer-readable storage medium storing computer-executable instructions that, when executed by a machine, cause the machine to perform the method as described above.
Compared with the prior art, the application has at least the following beneficial effects. In the focusing method, camera device, unmanned aerial vehicle and storage medium of the application, target recognition is first performed on the acquired image to obtain the first distance of the target, a preliminary first image distance is obtained based on the first distance, and a second image distance with higher image sharpness is then searched for within a first range based on the first image distance. Focusing is realized on the basis of target recognition, that is, the target is identified first and the first image distance is then obtained from the first distance of the target, which improves the focusing accuracy for the target and reduces the defocus rate. The method is not easily affected by the size of the target, the moving speed of the target, the length of the focus range or the complexity of the background.
Moreover, a preliminary first image distance is obtained first, and a second image distance with higher image sharpness is then searched for within the first range based on the first image distance, so the scanning can be confined to a smaller range. Compared with hill-climbing scanning over the full focus range, focusing is faster and more efficient.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals denote similar elements; the figures are not drawn to scale unless otherwise specified.
Fig. 1 is a schematic structural diagram of an image pickup apparatus according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an unmanned aerial vehicle according to an embodiment of the present application;
FIG. 3 is a schematic flowchart illustrating a focusing method according to an embodiment of the present application;
FIG. 4a is a schematic diagram illustrating target recognition in a focusing method according to an embodiment of the present application;
FIG. 4b is a schematic diagram illustrating depth value adjustment in a focusing method according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating the positions of a distance, a lens and an image distance in a focusing method according to an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating an image distance scanning process in a focusing method according to an embodiment of the present application;
FIG. 7a is a schematic diagram of a sharpness-peak image distance point in a focusing method according to an embodiment of the present application;
fig. 7b is a schematic diagram of a sharpness-peak image distance point in a focusing method according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, a schematic structural diagram of an image capturing apparatus 10 according to an embodiment of the present disclosure is shown, where the image capturing apparatus 10 includes a processor 11, a memory 12, and an optical assembly 13. The optical component 13 may adopt any suitable optical component in the prior art, including a lens, a photosensitive element, and the like, and is configured to acquire an image optical signal and convert the image optical signal into an image electrical signal to obtain image data.
Memory 12, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable program instructions. The memory 12 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the image pickup apparatus, and the like.
Further, the memory 12 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 12 optionally includes memory located remotely from the processor 11, which may be connected to the camera over a network.
Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor 11 connects various parts of the entire image pickup apparatus using various interfaces and lines, executes various functions of the image pickup apparatus and processes data by running or executing software programs stored in the memory 12 and calling data stored in the memory 12, for example, implementing a focusing method as described in any of the embodiments of the present application.
The camera device of the embodiments of the application can be used in an unmanned aerial vehicle, and the unmanned aerial vehicle may be of any suitable type, for example a fixed-wing drone, a rotor drone, an unmanned airship or an unmanned hot-air balloon. In addition, the camera device of the embodiments of the present application may be used in other apparatuses that require an imaging function, such as a robot.
Fig. 2 shows a structure of the unmanned aerial vehicle. Referring to fig. 2, the drone 100 includes a fuselage 20, an arm 30 connected to the fuselage 20, a power device 40 disposed on the arm 30, and the camera device 10 disposed on the fuselage 20.
The power device 40 includes, for example, a motor and a propeller connected to the motor; the rotating shaft of the motor rotates to drive the propeller to rotate, thereby providing lift for the drone.
Still referring to fig. 2, the drone 100 may further include a gimbal 50, and the camera device 10 is mounted to the fuselage 20 through the gimbal 50. The gimbal 50 is used to reduce or even eliminate the vibration transmitted from the power device 40 to the camera device 10, so as to ensure that the camera device 10 can capture stable and clear images or video.
In other embodiments, the drone 100 may further include a vision system (not shown) that may include a vision camera and a vision chip for acquiring images of the surrounding environment, identifying targets, detecting depth information of targets, acquiring maps of the environment, and so on.
Those skilled in the art can understand that the above is only an illustration of the hardware structure of the drone 100 and the camera device 10, and in practical applications, more components may be provided for the drone 100 and the camera device 10 according to actual functional requirements, and of course, one or more of the components may also be omitted according to functional requirements.
Fig. 3 shows a flowchart of a focusing method according to an embodiment of the present application, which may be executed by the camera device or the drone of the embodiments of the present application. As shown in fig. 3, the method includes:
101: identifying a target in the acquired image and acquiring depth information of the target.
After the camera device or the unmanned aerial vehicle acquires an environment image through the camera device, the target in the image is identified and the depth information of the target is detected. The target may be any target to be tracked or observed, such as a person, a vehicle, an animal, a cable, an insulator or an eye. The depth information of the target is, for example, the depth value of each pixel point in the target, the depth value being the distance between the pixel point and the lens.
Specifically, in some embodiments, the target may be identified by using a neural network-based deep learning algorithm, and the depth value of each pixel point in the target may be detected by using a monocular detection method or a binocular detection method.
In some embodiments, after the target in the image is identified, the depth values of the target in the image may be retained while the depth values of the background portion outside the target are set to 0, so that when the sharpness or other parameters of the image are subsequently calculated, only the pixel points belonging to the target are involved rather than all the pixel points in the image; this reduces the amount of calculation to some extent and increases the running speed.
Fig. 4a shows a schematic diagram of target recognition by a camera or a drone, and a target automobile is recognized in fig. 4 a. Fig. 4b is a schematic diagram illustrating depth value processing performed on an image, in fig. 4b, only the depth values of the pixel points related to the target are retained, and the depth values of the background portion other than the target are set to 0.
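As an illustration of this masking step, the following is a minimal sketch assuming a NumPy depth map and a boolean target mask produced by the recognition network (the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def mask_background_depth(depth_map: np.ndarray, target_mask: np.ndarray) -> np.ndarray:
    """Keep depth values only for pixels of the recognized target.

    depth_map:   (h, w) array of per-pixel depth values d(x, y).
    target_mask: (h, w) boolean array, True where the pixel belongs to the target.
    Background pixels are set to 0 so that later statistics skip them.
    """
    return np.where(target_mask, depth_map, 0.0)
```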
102: and acquiring a first distance of the target based on the depth information of the target.
The first distance is a distance from the target to the lens, and in some embodiments, the first distance may be an average value of depth values of each pixel in the target, such as an arithmetic average value or a weighted average value.
In other embodiments, the first distance may be other mathematical values that reflect the object-to-lens distance.
In some embodiments, where the depth value of the background portion in the image is set to 0, assuming that the width of the image is $w$, the height of the image is $h$, and the depth value of each pixel point $(x, y)$ is denoted $d(x, y)$, the first distance $D$ may be expressed as:

$$D = \frac{1}{N}\sum_{x=1}^{w}\sum_{y=1}^{h} d(x, y), \qquad d(x, y) \neq 0,$$

where $N$ is the number of pixel points with $d(x, y) \neq 0$.
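A minimal sketch of this average, under the same assumptions as the masking example above (background pixels carry a depth value of 0 and are excluded):

```python
import numpy as np

def first_distance(masked_depth: np.ndarray) -> float:
    """Arithmetic mean of the non-zero depth values, i.e. the first distance D."""
    valid = masked_depth[masked_depth != 0]   # only pixels with d(x, y) != 0
    if valid.size == 0:
        raise ValueError("no target pixels with a valid depth value")
    return float(valid.mean())
```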
103: acquiring a first image distance based on the first distance and a first corresponding relationship, wherein the first corresponding relationship reflects the correspondence between the target distance and the image distance.
The image distance (code value) is an important parameter of the camera device, and the lens can be adjusted based on the image distance to obtain a sharp image. For how to adjust the lens based on the image distance, reference may be made to the prior art; it is not described herein.
The first corresponding relationship reflects the correspondence between the target distance and the image distance. In some embodiments, the first corresponding relationship may be a mathematical formula, and the first image distance may be obtained based on the acquired first distance and the mathematical formula.
In other embodiments, the first corresponding relationship may include at least two distances and at least two image distances, each image distance corresponding to at least one distance. For example, one image distance corresponds to one distance, or one image distance corresponds to two distances.
The first corresponding relationship is, for example, a calibration table. Referring to table 1, each distance value in the calibration table has a corresponding image distance value. After the first distance is obtained, the first image distance can be obtained by looking up the calibration table.
TABLE 1

    code value:  code_1  code_2  code_3  code_4  code_5  code_6  code_7  code_8
    Distance:    1 m     2 m     4 m     8 m     16 m    32 m    64 m    128 m
It should be noted that table 1 only shows some distances and image distances by way of example; in practical applications more image distances and distances may be included, and the more distance and image distance values there are and the smaller the difference between adjacent distances, the higher the accuracy of the obtained first image distance.
In practical applications, the first corresponding relationship may be calibrated in advance. For example, targets may be set at a series of distance points, such as 1 meter, 2 meters, 3 meters, and so on; the lens is focused at each distance point, and the code value after sharp focus is recorded, so as to obtain the calibration table.
Fig. 5 schematically shows the relationship between the distance (i.e., the object distance) and the image distance, which lie on opposite sides of the lens 131.
In some embodiments, the first image distance is obtained based on the first distance and the first corresponding relationship as follows: if the first distance matches a certain distance in the first corresponding relationship, for example the second distance, the image distance corresponding to the second distance is taken as the first image distance. Taking table 1 as the first corresponding relationship for illustration, if the calculated first distance is 8 meters and the 8-meter distance point exists in the first corresponding relationship, the first image distance is the image distance $code_4$ corresponding to the 8-meter distance point.
If the first distance does not match any distance in the first corresponding relationship but lies between two distances, such as the third distance $d_m$ and the fourth distance $d_n$, the first image distance may be calculated based on the image distance $code_m$ corresponding to the third distance $d_m$ and the image distance $code_n$ corresponding to the fourth distance $d_n$.
For example, the first image distance may be the median of $code_m$ and $code_n$, i.e. the first image distance $code$ is:

$$code = \frac{code_m + code_n}{2}.$$
In other embodiments, the first image distance may also be calculated by linear interpolation, in which case the first image distance $code$ is:

$$code = code_m + \frac{D - d_m}{d_n - d_m}\,(code_n - code_m).$$
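A minimal sketch of the table lookup with the two cases above, assuming a hypothetical calibration table in the spirit of Table 1 (the code values are illustrative placeholders, not calibrated data):

```python
import bisect

# Hypothetical calibration table: distances in meters and the code values
# recorded at those distances (illustrative values only).
DISTANCES = [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 128.0]
CODES = [100.0, 180.0, 250.0, 310.0, 360.0, 400.0, 430.0, 450.0]

def first_image_distance(d: float) -> float:
    """Return the first image distance for distance d by lookup or interpolation."""
    if d <= DISTANCES[0]:
        return CODES[0]
    if d >= DISTANCES[-1]:
        return CODES[-1]
    i = bisect.bisect_left(DISTANCES, d)
    if DISTANCES[i] == d:                       # exact match: the "second distance" case
        return CODES[i]
    d_m, d_n = DISTANCES[i - 1], DISTANCES[i]   # d lies between d_m and d_n
    code_m, code_n = CODES[i - 1], CODES[i]
    # linear interpolation between (d_m, code_m) and (d_n, code_n)
    return code_m + (d - d_m) / (d_n - d_m) * (code_n - code_m)
```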
in some embodiments, after the first image distance is acquired, the lens may be directly pulled to the first image distance, and then the small-range image distance scanning may be performed based on the first image distance. The purpose of the small-range image distance scanning is to acquire an image distance with higher image definition. The lens is firstly pulled to the first image distance, and the effect of the lens is that clear images can be displayed more quickly. Then, when a small-range image distance scan is performed again, since the scan range is relatively small, the change in the image definition may not be visible to the naked eye or may be visible to the naked eye but the change is not obvious.
The viewer thus gets the impression that the camera device responds very quickly with a clear image; by the time the viewer looks closely, the camera device has completed the small-range image distance scan and the image has been adjusted to be sharper, so the viewer sees a sharper image. For the viewer, a sharp image is available quickly, which makes for a better customer experience.
104: scanning within a first range based on the first image distance to find a second image distance, wherein the sharpness of the image formed by the lens at the second image distance is greater than the sharpness of the image formed by the lens at the first image distance.
After the first image distance is obtained from the first distance of the target, the camera device can already obtain a relatively clear image. To obtain a sharper image, it may continue to scan within the first range based on the first image distance to find a second image distance with higher image sharpness.
Specifically, in some embodiments, the second image distance may be the image distance point with the highest sharpness in the first range; in other words, the sharpness of the image formed by the lens at the second image distance is the maximum. Of course, in actual calculation there may be an error, so the maximum sharpness may not be the true maximum; in the embodiments of the application a slight error in the maximum sharpness is permitted.
The first range may be a small range near the first image distance, extending from a near-end position to a far-end position of the first image distance. For example, taking the first image distance as the center and going 12 steps toward the near end and 12 steps toward the far end, the first range is (first image distance − 12 steps, first image distance + 12 steps).
In other embodiments, the first range need not be 12 steps on either side of the first image distance; it may be, for example, 10 steps, 14 steps, or another number of steps on either side.
In some embodiments, for ease of calculation, the first range may consist of discrete image distance points from the near end to the far end, with the image distance points obtained within the first range at a first step length. Taking a first step length of 2 steps as an example, the image distance points are (first image distance − 12 steps, first image distance − 10 steps, …, first image distance − 2 steps, first image distance + 2 steps, …, first image distance + 10 steps, first image distance + 12 steps), 12 image distance points in total.
In other embodiments, the first step length may also be selected from other values, for example, 1 step length, 1.5 step length, and the like, and the value of the first step length is not limited in this application. Relatively speaking, the smaller the first step length value is, the higher the precision is, but the operation efficiency is affected, and the operation speed is reduced.
In some embodiments, the image distance points in the first range may be scanned one by one to find the image distance point with the maximum image sharpness, so as to obtain the second image distance. Scanning the first range means traversing each image distance point in the first range: for each point, the lens is pulled to that image distance, the image formed by the lens there is acquired, and its sharpness is obtained. After the scan is finished, the image sharpness values corresponding to the image distance points are compared, and the image distance point with the maximum image sharpness is the second image distance.
Taking the first range (first image distance − 12 steps, first image distance − 10 steps, …, first image distance − 2 steps, first image distance + 2 steps, …, first image distance + 10 steps, first image distance + 12 steps) as an example, the lens is pulled point by point from $code_1$ (first image distance − 12 steps) to $code_{12}$ (first image distance + 12 steps).
Referring to fig. 6, the lens is first pulled to $code_1$ and the sharpness $FV_1$ of the image at that point is obtained; the lens is then pulled to $code_2$ and the sharpness $FV_2$ is obtained, and so on for $code_3$, $code_4$, …, up to $code_{12}$. The sharpness $FV$ of the image is obtained at each image distance point, yielding 12 sharpness values. Comparing the 12 values, suppose the maximum sharpness is $FV_6$; the image distance $code_6$ corresponding to $FV_6$ is then the second image distance. Of course, in other embodiments the lens may instead be pulled point by point from $code_{12}$ to $code_1$ to perform the scan.
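A minimal sketch of this coarse scan, assuming hypothetical pull_lens and measure_sharpness callbacks for the lens actuator and the sharpness statistic (neither name comes from the patent):

```python
def coarse_scan(pull_lens, measure_sharpness, first_code: int,
                half_range: int = 12, step: int = 2) -> int:
    """Scan discrete code points around first_code and return the sharpest one.

    pull_lens(code):     moves the lens to the given image-distance code.
    measure_sharpness(): returns the sharpness FV of the current image.
    Mirrors the (first image distance - 12 steps, ..., + 12 steps) example.
    """
    best_code, best_fv = first_code, float("-inf")
    for code in range(first_code - half_range, first_code + half_range + 1, step):
        pull_lens(code)
        fv = measure_sharpness()
        if fv > best_fv:
            best_code, best_fv = code, fv
    return best_code
```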
In other embodiments, after the image distance with the maximum image sharpness (which may be referred to as the third image distance) is found in the first range, a fourth image distance with higher sharpness may be searched for within a second range based on the third image distance in order to improve accuracy. For example, the fourth image distance corresponding to the sharpness peak is found in the second range, and the fourth image distance is taken as the second image distance.
The second range may be a small range near the third image distance, extending from a near-end position to a far-end position of the third image distance, for example 1 step, 2 steps or 3 steps on either side of the third image distance.
In embodiments where the first range is (first image distance-12 step, first image distance-10 step, …, first image distance-2 step, first image distance +2 step, …, first image distance +10 step, first image distance +12 step), the second range may be from the image distance point immediately preceding the third image distance to the image distance point immediately following the third image distance.
For example, assume the third image distance is $code_n$; then the second range is (fifth image distance $code_{n-1}$, sixth image distance $code_{n+1}$), where the fifth image distance $code_{n-1}$ is the image distance one first step before the third image distance $code_n$, and the sixth image distance $code_{n+1}$ is the image distance one first step after it.
In some embodiments, to find the fourth image distance corresponding to the sharpness peak in the second range, a second step length is taken and each image distance point in the second range is scanned one by one with reference to the scanning method described above, so as to find the sharpness peak and the corresponding fourth image distance. Specifically, the second step length may be 1 step.
In other embodiments, the fourth image distance may be acquired based on the nature of the parabola.
For example, the fourth image distance $code_{max}$ is the vertex of a parabola fitted through the three sampled points: if

$$FV_{n-1} - 2FV_n + FV_{n+1} \neq 0,$$

then

$$code_{max} = code_n + \frac{s}{2}\cdot\frac{FV_{n-1} - FV_{n+1}}{FV_{n-1} - 2FV_n + FV_{n+1}};$$

if not, then $code_{max} = code_n$. Here $s$ is the first step length, $FV_{n+1}$ is the sharpness of the image formed by the lens at the sixth image distance $code_{n+1}$, $FV_n$ is the sharpness of the image at the third image distance $code_n$, and $FV_{n-1}$ is the sharpness of the image at the fifth image distance $code_{n-1}$.
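As a sketch of this refinement, the following assumes the standard vertex formula for a parabola fitted through three equally spaced samples; the exact branch structure published in the application may differ, so treat this as a stand-in with the same intent:

```python
def parabolic_peak(code_n: float, step: float,
                   fv_prev: float, fv_n: float, fv_next: float) -> float:
    """Vertex of the parabola through (code_n - step, FV_{n-1}),
    (code_n, FV_n) and (code_n + step, FV_{n+1})."""
    denom = fv_prev - 2.0 * fv_n + fv_next
    if denom == 0:          # degenerate (flat) case: keep the coarse maximum
        return code_n
    return code_n + 0.5 * step * (fv_prev - fv_next) / denom
```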
In some of these embodiments, the sharpness of the image may be a statistic of per-pixel sharpness values. After the image is acquired, the sharpness value $f(x, y)$ of each pixel point $(x, y)$ in the image can be obtained from the image; how to obtain the sharpness value of each pixel point in an image is known in the prior art and is not repeated herein.
The sharpness statistic of the image may count only the sharpness values within the target range rather than over the whole image; that is, the points where $d(x, y) = 0$ in the foregoing embodiments do not take part in the statistic.
Specifically, the sharpness statistic may be the average of the sharpness values of the target's pixel points in the image. The average may be an arithmetic mean or a weighted mean; in other embodiments, the sharpness statistic may be another mathematical statistic that reflects the sharpness of the image.
That is, the sharpness statistic $FV$ is:

$$FV = \frac{1}{N}\sum_{x=1}^{w}\sum_{y=1}^{h} f(x, y), \qquad d(x, y) \neq 0,$$

where $N$ is the number of pixel points with $d(x, y) \neq 0$.
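A minimal sketch of this statistic, reusing the masked depth map from the earlier examples; the per-pixel sharpness map f(x, y) is taken as an input here, since the patent defers its computation to the prior art:

```python
import numpy as np

def target_sharpness(f_map: np.ndarray, masked_depth: np.ndarray) -> float:
    """Average the per-pixel sharpness f(x, y) over target pixels only.

    Pixels whose masked depth is 0 (background) do not take part in the
    statistic, matching the FV definition above.
    """
    target = masked_depth != 0
    if not target.any():
        raise ValueError("no target pixels")
    return float(f_map[target].mean())
```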
It should be noted that, in practical applications, a control chip may be disposed in the camera device, and the focusing method of any embodiment of the present application may be executed by that control chip. Where the camera device is applied to an unmanned aerial vehicle, the focusing method may also be executed by the flight control chip of the drone. Where the drone includes a vision system, the focusing method may also be executed by two or more chips in the drone: for example, the vision chip identifies the target and detects the depth information of the target (e.g., step 101), while the control chip in the camera device or the flight control chip controls the lens to focus (e.g., steps 102 to 104). The method realizes focusing based on target recognition; compared with a traditional focusing algorithm over the full region or a pointed rectangular region, it is accurate to each pixel of the target, and the accuracy is significantly improved.
In the embodiments of the application, a first image distance is obtained based on the first distance of the target, and a second image distance with higher image sharpness is then scanned for within the first range based on the first image distance, so the scanning can be confined to a smaller range. The focusing speed is high compared with hill-climbing scanning over the full focus range.
In addition, the target is recognized first and the first image distance is then obtained based on the first distance of the target, which improves the focusing accuracy and reduces the defocus rate. The method is not easily affected by the size of the target, the moving speed of the target, the length of the focus range or the complexity of the background, and achieves better focusing accuracy in portrait, short-video, orbit, motion-tracking and other modes.
When the target is small, for example in the field of power inspection, cables and insulators can be recognized and the focus accurately locked on them, which can improve the efficiency of inspection operations.
In addition, in the embodiments of the application, the lens is first pulled to the first image distance (a clear image can already be obtained at the first image distance), and the subsequent small-range scan then performs fine adjustment, so that both focusing speed and image sharpness are taken into account.
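Tying the steps together, here is a hedged end-to-end sketch of steps 101 to 104 built from the helper functions above (pull_lens and measure_sharpness remain hypothetical callbacks, and a first step length of 2 is assumed for the fine scan):

```python
def focus(depth_map, target_mask, pull_lens, measure_sharpness):
    """Illustrative composition of the focusing method (not the patented code)."""
    masked = mask_background_depth(depth_map, target_mask)            # step 101
    d = first_distance(masked)                                        # step 102
    code1 = first_image_distance(d)                                   # step 103
    pull_lens(round(code1))                # show a near-sharp image immediately
    code3 = coarse_scan(pull_lens, measure_sharpness, round(code1))   # step 104, coarse
    fvs = []
    for c in (code3 - 2, code3, code3 + 2):  # second range: one first step either side
        pull_lens(c)
        fvs.append(measure_sharpness())
    code2 = parabolic_peak(code3, 2.0, fvs[0], fvs[1], fvs[2])        # fine refinement
    pull_lens(round(code2))
    return code2
```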
Embodiments of the present application also provide a computer-readable storage medium storing computer-executable instructions, which are executed by one or more processors, such as the processor 11 in fig. 1, to enable the one or more processors to perform the focusing method in any of the method embodiments, such as the method steps 101 to 104 in fig. 3 described above.
Embodiments of the present application also provide a computer program product comprising a computer program stored on a non-volatile computer-readable storage medium, the computer program comprising program instructions that, when executed by a machine, cause the machine to perform the above-mentioned focusing method. For example, method steps 101 to 104 in fig. 3 described above are performed.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that the embodiments may be implemented by software plus a general hardware platform, and may also be implemented by hardware. It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware related to instructions of a computer program, which can be stored in a computer readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; within the context of the present application, where technical features in the above embodiments or in different embodiments can also be combined, the steps can be implemented in any order and there are many other variations of the different aspects of the present application as described above, which are not provided in detail for the sake of brevity; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (15)

1. A focusing method, comprising:
identifying a target in the obtained image and obtaining depth information of the target;
acquiring a first distance of the target based on the depth information of the target;
acquiring a first image distance based on the first distance and a first corresponding relationship, wherein the first corresponding relationship reflects the correspondence between the target distance and the image distance;
and scanning within a first range based on the first image distance to find a second image distance, wherein the sharpness of the image formed by the lens at the second image distance is greater than the sharpness of the image formed by the lens at the first image distance.
2. The method of claim 1, wherein the sharpness of the image formed by the lens at the second image distance is the greatest within the first range.
3. The method of claim 1, wherein scanning for a second image distance based on the first image distance in a first range comprises:
scanning within a first range at a first step length based on the first image distance, and searching for the image distance with the maximum image sharpness;
and taking the image distance with the maximum image sharpness as the second image distance.
4. The method of claim 1, wherein scanning for a second image distance based on the first image distance within a first range comprises:
scanning within a first range at a first step length based on the first image distance, and searching for a third image distance with the maximum image sharpness;
finding a fourth image distance corresponding to the sharpness peak within a second range based on the third image distance;
and taking the fourth image distance as the second image distance.
5. The method of claim 4, wherein the second range is from a fifth image distance $code_{n-1}$ to a sixth image distance $code_{n+1}$, wherein the fifth image distance $code_{n-1}$ is the image distance one first step before the third image distance $code_n$, and the sixth image distance $code_{n+1}$ is the image distance one first step after the third image distance $code_n$.
6. The method of claim 5, wherein the fourth image distance $code_{max}$ is the vertex of a parabola fitted through the three sampled points: if

$$FV_{n-1} - 2FV_n + FV_{n+1} \neq 0,$$

then

$$code_{max} = code_n + \frac{s}{2}\cdot\frac{FV_{n-1} - FV_{n+1}}{FV_{n-1} - 2FV_n + FV_{n+1}};$$

if not, $code_{max} = code_n$, wherein $s$ is the first step length, $FV_{n+1}$ is the sharpness of the image formed by the lens at the sixth image distance $code_{n+1}$, $FV_n$ is the sharpness of the image at the third image distance $code_n$, and $FV_{n-1}$ is the sharpness of the image at the fifth image distance $code_{n-1}$.
7. The method according to any one of claims 1-6, wherein after acquiring the first image distance based on the first distance and the first corresponding relationship, the method further comprises:
and adjusting the lens based on the first image distance.
8. The method according to any one of claims 1 to 6, wherein the depth information of the target comprises depth values of respective pixel points in the target;
the first distance is an average value of depth values of all pixel points in the target.
9. The method according to any one of claims 1-6, wherein the first corresponding relationship comprises at least two distances and at least two image distances, each of the image distances corresponding to at least one of the distances.
10. The method of claim 9, wherein the obtaining the first image distance based on the first distance and the first corresponding relationship comprises:
if the first distance matches a second distance in the first corresponding relationship, the first image distance is the image distance corresponding to the second distance;
if the first distance $D$ lies between a third distance $d_m$ and a fourth distance $d_n$, the first image distance $code$ is:

$$code = code_m + \frac{D - d_m}{d_n - d_m}\,(code_n - code_m),$$

wherein $code_m$ is the image distance corresponding to the third distance $d_m$, and $code_n$ is the image distance corresponding to the fourth distance $d_n$.
11. The method of any one of claims 1-6, further comprising:
acquiring an image formed by the lens at the current image distance and the sharpness value of each pixel point in the image;
calculating the average value of the sharpness values of the pixel points of the target in the image;
and taking the average value of the sharpness values as the sharpness of the image corresponding to the current image distance.
12. The method of claim 1, further comprising:
and adjusting the lens based on the second image distance.
13. An image pickup apparatus, comprising
At least one processor, and
a memory communicatively coupled to the at least one processor, the memory storing instructions executable by the at least one processor to enable the at least one processor to perform the method of any of claims 1-12.
14. An unmanned aerial vehicle comprising the camera device of claim 13.
15. A computer-readable storage medium having computer-executable instructions stored thereon, which, when executed by a machine, cause the machine to perform the method of any one of claims 1-12.
CN202210400020.2A 2022-04-15 2022-04-15 Focusing method, camera device, unmanned aerial vehicle and storage medium Pending CN114845050A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210400020.2A CN114845050A (en) 2022-04-15 2022-04-15 Focusing method, camera device, unmanned aerial vehicle and storage medium
PCT/CN2023/083223 WO2023197841A1 (en) 2022-04-15 2023-03-23 Focusing method, photographic apparatus, unmanned aerial vehicle, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210400020.2A CN114845050A (en) 2022-04-15 2022-04-15 Focusing method, camera device, unmanned aerial vehicle and storage medium

Publications (1)

Publication Number Publication Date
CN114845050A true CN114845050A (en) 2022-08-02

Family

ID=82566082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210400020.2A Pending CN114845050A (en) 2022-04-15 2022-04-15 Focusing method, camera device, unmanned aerial vehicle and storage medium

Country Status (2)

Country Link
CN (1) CN114845050A (en)
WO (1) WO2023197841A1 (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101592165B1 (en) * 2013-08-30 2016-02-11 한국전력공사 Apparatus and method for acquiring image for unmanned aerial vehicle
CN110225249B (en) * 2019-05-30 2021-04-06 深圳市道通智能航空技术有限公司 Focusing method and device, aerial camera and unmanned aerial vehicle
CN113301314B (en) * 2020-06-12 2023-10-24 阿里巴巴集团控股有限公司 Focusing method, projector, imaging apparatus, and storage medium
CN114845050A (en) * 2022-04-15 2022-08-02 深圳市道通智能航空技术股份有限公司 Focusing method, camera device, unmanned aerial vehicle and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107197151A (en) * 2017-06-16 2017-09-22 广东欧珀移动通信有限公司 Atomatic focusing method, device, storage medium and electronic equipment
CN107566741A (en) * 2017-10-26 2018-01-09 广东欧珀移动通信有限公司 Focusing method, device, computer-readable recording medium and computer equipment
CN109831609A (en) * 2019-03-05 2019-05-31 上海炬佑智能科技有限公司 TOF depth camera and its Atomatic focusing method
CN114026841A (en) * 2019-06-29 2022-02-08 高通股份有限公司 Automatic focus extension
WO2021134179A1 (en) * 2019-12-30 2021-07-08 深圳市大疆创新科技有限公司 Focusing method and apparatus, photographing device, movable platform and storage medium
CN112752026A (en) * 2020-12-31 2021-05-04 深圳市汇顶科技股份有限公司 Automatic focusing method, automatic focusing device, electronic equipment and computer readable storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023197841A1 (en) * 2022-04-15 2023-10-19 深圳市道通智能航空技术股份有限公司 Focusing method, photographic apparatus, unmanned aerial vehicle, and storage medium

Also Published As

Publication number Publication date
WO2023197841A1 (en) 2023-10-19

Similar Documents

Publication Publication Date Title
CN110248097B (en) Focus tracking method and device, terminal equipment and computer readable storage medium
US10698308B2 (en) Ranging method, automatic focusing method and device
US20170293796A1 (en) Flight device and flight control method
CN109905604B (en) Focusing method and device, shooting equipment and aircraft
CN108140245B (en) Distance measurement method and device and unmanned aerial vehicle
KR20160033102A (en) Iris imaging apparatus and methods for configuring an iris imaging apparatus
CN110570454A (en) Method and device for detecting foreign matter invasion
CN112004025B (en) Unmanned aerial vehicle automatic driving zooming method, system and equipment based on target point cloud
CN107888819A (en) A kind of auto focusing method and device
CN106031148B (en) Imaging device, method of auto-focusing in an imaging device and corresponding computer program
CN109559353B (en) Camera module calibration method and device, electronic equipment and computer readable storage medium
CN111950426A (en) Target detection method and device and delivery vehicle
CN104184935A (en) Image shooting device and method
CN114845050A (en) Focusing method, camera device, unmanned aerial vehicle and storage medium
US9851549B2 (en) Rapid autofocus method for stereo microscope
CN112866553B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN113391644B (en) Unmanned aerial vehicle shooting distance semi-automatic optimization method based on image information entropy
CN108347577B (en) Imaging system and method
WO2021168707A1 (en) Focusing method, apparatus and device
CN112136312A (en) Method for obtaining target distance, control device and mobile platform
CN111260538A (en) Positioning and vehicle-mounted terminal based on long-baseline binocular fisheye camera
CN113409331B (en) Image processing method, image processing device, terminal and readable storage medium
US9953431B2 (en) Image processing system and method for detection of objects in motion
CN112866546B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN113959398B (en) Distance measurement method and device based on vision, drivable equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination