CN117479011A - Unmanned aerial vehicle automatic focusing method and device and electronic equipment - Google Patents

Unmanned aerial vehicle automatic focusing method and device and electronic equipment Download PDF

Info

Publication number
CN117479011A
CN117479011A CN202311375844.XA
Authority
CN
China
Prior art keywords
focusing
preset
focused
operator
subarea
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311375844.XA
Other languages
Chinese (zh)
Inventor
Sun Youbin (孙友彬)
Chen Su (陈溯)
Shen Yang (沈洋)
Wang Yahui (王亚辉)
Wang Faming (王发明)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Lingkong Electronic Technology Co Ltd
Original Assignee
Xian Lingkong Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Lingkong Electronic Technology Co Ltd filed Critical Xian Lingkong Electronic Technology Co Ltd
Priority to CN202311375844.XA priority Critical patent/CN117479011A/en
Publication of CN117479011A publication Critical patent/CN117479011A/en
Pending legal-status Critical Current

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/675Focus control based on electronic image sensor signals comprising setting of focusing regions

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

An unmanned aerial vehicle automatic focusing method, an unmanned aerial vehicle automatic focusing device, and electronic equipment relate to the technical field of automatic focusing. The method comprises the following steps: acquiring an original image, wherein the original image comprises a target to be focused; acquiring a preset focusing area, wherein the original image comprises the preset focusing area and the preset focusing area comprises the target to be focused; dividing the preset focusing area into regions to obtain focusing subareas, the preset focusing area comprising a plurality of focusing subareas; acquiring a first focusing subarea and a second focusing subarea, wherein the first focusing subarea and the second focusing subarea are any two focusing subareas in the preset focusing area; judging the magnitude relation between a first gradient value in the first focusing subarea and a second gradient value in the second focusing subarea; and if the first gradient value is larger than the second gradient value, taking the first focusing subarea as the focusing area. The technical scheme provided by the application alleviates the problem of long focusing time in automatic focusing technology, that is, the problem of low automatic focusing efficiency.

Description

Unmanned aerial vehicle automatic focusing method and device and electronic equipment
Technical Field
The application relates to the technical field of automatic focusing, in particular to an unmanned aerial vehicle automatic focusing method, an unmanned aerial vehicle automatic focusing device and electronic equipment.
Background
In recent years, autofocus technology has been widely used in many fields, and one commonly used method in autofocus is the hill climbing algorithm (Hill Climbing Algorithm).
The hill climbing algorithm is an optimization algorithm based on local search, used to find a locally optimal solution in a search space. Its basic idea is to start from a randomly selected starting point and repeatedly search the neighborhood of the current solution, moving to a neighbor solution with a better evaluation value (objective function value). This process continues until no neighbor solution better than the current one is found or a predetermined stop condition is reached. However, when this method is applied to autofocus, focusing still takes a long time; that is, autofocus efficiency is low.
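As a sketch of the conventional algorithm described above, the following hypothetical Python routine steps a lens position toward whichever neighbor scores higher on a sharpness function, stopping at a local maximum; the function names, step size, and position range are illustrative assumptions.

```python
def hill_climb_focus(sharpness, start, lo=0, hi=100, step=1):
    """Return the lens position where `sharpness` is locally maximal.

    `sharpness` is a stand-in for a real focus evaluation function that
    scores the image captured at a given lens position.
    """
    pos = start
    while True:
        # Candidate neighbor positions, clipped to the valid travel range.
        neighbors = [p for p in (pos - step, pos + step) if lo <= p <= hi]
        best = max(neighbors, key=sharpness, default=pos)
        if sharpness(best) <= sharpness(pos):
            return pos  # no better neighbor: local optimum reached
        pos = best

# Example: a single-peaked sharpness curve with its maximum at position 37.
print(hill_climb_focus(lambda p: -(p - 37) ** 2, start=5))  # → 37
```

Because each step requires re-evaluating the sharpness score, a long climb across many lens positions is what makes this baseline slow, which is the inefficiency the present application targets.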
Therefore, there is a need for an unmanned aerial vehicle automatic focusing method, device and electronic equipment.
Disclosure of Invention
The application provides an unmanned aerial vehicle automatic focusing method and device and electronic equipment, which solve the problem in automatic focusing technology that focusing is still time-consuming, that is, the problem of low automatic focusing efficiency.
In a first aspect of the present application, a method for automatic focusing of an unmanned aerial vehicle is provided, where the method is applied to the unmanned aerial vehicle and specifically includes the following steps: acquiring a plurality of images to be focused, the plurality of images to be focused comprising a first image to be focused; acquiring a first Laplace operator in the first image to be focused through evaluation with a Laplace evaluation function; judging whether the first operator is greater than or equal to a preset operator; if the first operator is greater than or equal to the preset operator, taking the first image to be focused as an original image, the original image comprising a target to be focused; acquiring a preset focusing area, wherein the original image comprises the preset focusing area and the preset focusing area comprises the target to be focused; dividing the preset focusing area into regions to obtain focusing subareas, the preset focusing area comprising a plurality of focusing subareas; acquiring a first focusing subarea and a second focusing subarea, wherein the first focusing subarea and the second focusing subarea are any two different focusing subareas in the preset focusing area; judging the magnitude relation between a first gradient value in the first focusing subarea and a second gradient value in the second focusing subarea, the first gradient value and the second gradient value being obtained through a hill climbing algorithm; and if the first gradient value is larger than the second gradient value, taking the first focusing subarea as the focusing area so that the unmanned aerial vehicle completes automatic focusing on the original image.
According to the technical scheme, the Laplace evaluation function first judges whether the sharpness of the image meets the requirement before the focusing strategy is executed, improving the accuracy of the automatic focusing process. The preset focusing area is then divided into a certain number of subareas, a gradient maximum is calculated within each subarea, and these maxima are compared, so that the maximum gradient value of the whole preset focusing area is found using the advantages of parallel calculation. This improves the efficiency of the overall automatic focusing and shortens its time consumption; the position of the maximum gradient value in the preset focusing area is the position where the target is imaged clearly.
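The parallel subarea comparison described above can be sketched as follows; the gradient measure and the thread-based parallelism are illustrative assumptions, not the patent's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def max_gradient(subregion):
    # Stand-in gradient measure: here simply the largest value in the
    # sub-region's 2-D grid of per-pixel gradient magnitudes.
    return max(max(row) for row in subregion)

def select_focus_subregion(subregions):
    """Return (index, gradient) of the sub-region with the largest maximum.

    Each sub-region's maximum is computed independently, so the per-region
    work can run in parallel; the final comparison picks the global best.
    """
    with ThreadPoolExecutor() as pool:
        maxima = list(pool.map(max_gradient, subregions))
    best = max(range(len(maxima)), key=maxima.__getitem__)
    return best, maxima[best]
```

The winning sub-region then serves as the focusing area, since the largest gradient indicates the sharpest local imaging.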
Optionally, acquiring a picture of the surrounding environment, wherein the environment picture comprises environmental features, and the environmental features comprise scenery features, target features, subject motion features and contrast features; and acquiring a shooting mode corresponding to the environmental features from a preset shooting mode repository, wherein the preset repository is used for storing the correspondence between environmental features and shooting modes.
Through the technical scheme, the unmanned aerial vehicle can select the shooting mode of the image in the preset shooting mode library according to the surrounding environment by constructing the preset shooting mode library, so that a more suitable shooting mode is selected.
Optionally, the plurality of images to be focused are acquired according to a shooting mode, where the shooting mode includes a close-range mode, a far-range mode, a deep-range mode, an HDR mode, and a following mode.
Through the technical scheme, the image to be focused with the target to be focused is acquired according to the selected shooting mode, so that the setting of the unmanned aerial vehicle camera can be optimized according to different shooting scenes and requirements, and the optimal image quality is obtained.
Optionally, if the first operator is smaller than the preset operator, acquiring a second image to be focused, and evaluating through the Laplace evaluation function to acquire a second operator in the second image to be focused; the plurality of images to be focused comprise the second image to be focused; judging whether the second operator is greater than or equal to the preset operator; and if the second operator is greater than or equal to the preset operator, taking the second image to be focused as the original image.
According to the technical scheme, if the Laplace evaluation function judges that the captured image does not meet the requirement, that is, it is not clear enough, an image is captured again and its sharpness is judged again with the Laplace evaluation function, until a clear image is captured, further improving the accuracy of the focusing process.
Optionally, judging whether the target to be focused belongs to an aerial target or a ground target; if the target to be focused is an aerial target, taking the first preset focusing area as the preset focusing area of the target to be focused; the first preset focusing area is the middle area of the original image; if the target to be focused is a ground target, taking the second preset focusing area as the preset focusing area of the target to be focused; the second preset focus area is a bottom area of the original image.
By the technical scheme, if the object to be focused belongs to an aerial object, a region to be focused with the area smaller than or equal to that of the original image is constructed by taking the geometric center of the original image as the center; if the object to be focused belongs to the ground object, the bottom of the original image is taken as a starting point, the geometric center direction of the original image is taken as a construction direction, and a region to be focused with the area smaller than or equal to the original image is constructed, so that the final calculated amount can be reduced by reducing the range of the focusing region, and the automatic focusing speed is accelerated.
Optionally, comparing the first gradient value, the second gradient value and a preset threshold value; the first gradient value and the second gradient value are larger than or equal to a preset threshold value.
Through the technical scheme, the preset threshold value is set, and the preset threshold value is used for eliminating smaller gradient values, so that the calculation speed is improved, and the automatic focusing efficiency is further improved.
Optionally, the preset threshold is an average value of a plurality of gradient values in a plurality of focus sub-areas.
Through the technical scheme, the preset threshold is calculated with the average-value formula x̄ = (x₁ + x₂ + … + xₙ)/n, where x̄ is the preset threshold and xᵢ is the gradient value in the i-th focusing subarea.
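A minimal sketch of the averaging rule above, assuming the subarea gradient values are already available as a list:

```python
from statistics import mean

def preset_threshold(gradients):
    """Preset threshold: the arithmetic mean of the sub-region gradients."""
    return mean(gradients)

def filter_gradients(gradients):
    """Discard gradients below the threshold before the maximum comparison."""
    t = preset_threshold(gradients)
    return [g for g in gradients if g >= t]
```

Dropping below-average subareas early is what reduces the number of comparisons and speeds up the final maximum search.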
In a second aspect of the present application, a device for automatic focusing of a drone is provided, where the device is a drone, and the device includes an acquisition module, a segmentation module, and a processing module.
The acquisition module is used for acquiring a plurality of images to be focused; the plurality of images to be focused comprise a first image to be focused; acquiring a preset focusing area, wherein the original image comprises the preset focusing area; the preset focusing area comprises a target to be focused; acquiring a first focusing subarea and a second focusing subarea, wherein the first focusing subarea and the second focusing subarea are any two different focusing subareas in a preset focusing area;
the segmentation module is used for carrying out region segmentation on the preset focusing region to obtain a focusing sub-region; the preset focusing area comprises a plurality of focusing subareas;
The processing module is used for obtaining a first Laplace operator in the first image to be focused through evaluation with a Laplace evaluation function; judging whether the first operator is greater than or equal to a preset operator; if the first operator is greater than or equal to the preset operator, taking the first image to be focused as an original image, the original image comprising a target to be focused; judging the magnitude relation between the first gradient value in the first focusing subarea and the second gradient value in the second focusing subarea, the first gradient value and the second gradient value being obtained through a hill climbing algorithm; and if the first gradient value is larger than the second gradient value, taking the first focusing subarea as the focusing area so that the unmanned aerial vehicle completes automatic focusing on the original image.
Optionally, the acquiring module is configured to acquire an ambient image, where the ambient image includes an ambient feature, and the ambient feature includes a scene feature, a target feature, a subject motion feature, and a contrast feature; and acquiring a shooting mode corresponding to the environmental characteristic from a preset shooting mode storage library, wherein the preset storage library is used for storing the corresponding relation between the environmental characteristic and the shooting mode.
Optionally, the acquiring module is configured to acquire a plurality of images to be focused according to a shooting mode, where the shooting mode includes a close-range mode, a far-range mode, a deep-range mode, an HDR mode, and a following mode.
Optionally, the acquiring module is configured to acquire the second image to be focused if the first operator is smaller than the preset operator, and acquire the second operator in the second image to be focused through evaluation with the Laplace evaluation function; the plurality of images to be focused comprise the second image to be focused; judging whether the second operator is greater than or equal to the preset operator; and if the second operator is greater than or equal to the preset operator, taking the second image to be focused as the original image.
Optionally, the segmentation module is used for judging whether the target to be focused belongs to an aerial target or a ground target; if the target to be focused is an aerial target, taking the first preset focusing area as the preset focusing area of the target to be focused; the first preset focusing area is the middle area of the original image; if the target to be focused is a ground target, taking the second preset focusing area as the preset focusing area of the target to be focused; the second preset focus area is a bottom area of the original image.
Optionally, the processing module is configured to compare the first gradient value, the second gradient value and a preset threshold value; the first gradient value and the second gradient value are larger than or equal to a preset threshold value.
In a third aspect, the present application provides an electronic device comprising a processor, a memory, a user interface and a network interface; the memory is used for storing instructions, the network interface is used for communicating with other devices, and the processor is used for executing the instructions stored in the memory to cause the electronic device to perform any of the methods above.
In a fourth aspect of the present application, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, performs any of the methods above.
One or more technical solutions provided in the embodiments of the present application at least have the following technical effects or advantages:
1. according to the technical scheme, the preset focusing area is divided into a certain number of subareas, gradient maximum values are calculated in each subarea, and the magnitudes of the maximum values are compared, so that the maximum gradient value of the whole preset focusing area is found by utilizing the advantage of parallel calculation, the efficiency of integral automatic focusing is improved, the time consumption of automatic focusing is shortened, and the position of the maximum gradient value of the preset focusing area is the position of clear imaging of a target.
2. By the Laplace evaluation function, whether the definition of the image meets the requirement can be judged before the focusing strategy is executed, and the accuracy of the automatic focusing process is improved.
3. The final calculated amount can be reduced by reducing the range of the focusing area, so that the automatic focusing speed is accelerated; and a preset threshold value is set, and the preset threshold value is used for eliminating smaller gradient values, so that the calculation speed is improved, and the automatic focusing efficiency is further improved.
Drawings
Fig. 1 is a schematic flow chart of a method for automatic focusing of a unmanned aerial vehicle according to an embodiment of the present application;
fig. 2a is a schematic diagram of one embodiment of a method for automatic focusing of a drone according to an embodiment of the present application;
fig. 2b is a schematic diagram of another embodiment of a method for automatic focusing of a drone according to an embodiment of the present application;
fig. 3 is a structural diagram of an apparatus for automatic focusing of a unmanned aerial vehicle according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Reference numerals illustrate: 31. an acquisition module; 32. a segmentation module; 33. a processing module; 400. an electronic device; 401. a processor; 402. a memory; 403. a user interface; 404. a network interface; 405. a communication bus.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in those embodiments; the described embodiments are obviously only some of the embodiments of the present application, not all of them.
The terminology used in the following embodiments of the application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification of this application, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in this application refers to and encompasses any and all possible combinations of one or more of the listed items.
The terms "first," "second," and the like are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include one or more such features; in the description of the embodiments of the present application, unless otherwise indicated, "a plurality" means two or more.
Before describing embodiments of the present application, some terms referred to in the embodiments of the present application will be first defined and described.
In order to make the technical scheme of the present invention better understood by those skilled in the art, the present invention will be further described in detail with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a method for automatic focusing of a drone according to an embodiment of the present invention is shown, where the method is applied to a drone, and the flowchart mainly includes the following steps: s101 to S106.
Step S101, acquiring an original image, wherein the original image comprises an object to be focused.
Specifically, an original image, which refers to an image that has not undergone autofocusing, can be photographed by a camera mounted on the unmanned aerial vehicle; the original image comprises an object to be focused, the object to be focused can be determined through the image of the previous frame or the images of the previous frames, namely, after the image comprising the object to be focused is determined, the image is used as the original image for focusing operation; the present application is directed to how to autofocus an object to be focused on an original image.
In one possible implementation, step S101 further includes: acquiring surrounding environment pictures, wherein the environment pictures comprise environment features, and the environment features comprise scenery features, target features, main motion features and contrast features; and acquiring a shooting mode corresponding to the environmental characteristic from a preset shooting mode storage library, wherein the preset storage library is used for storing the corresponding relation between the environmental characteristic and the shooting mode.
Specifically, before the unmanned aerial vehicle captures an image, it first acquires a picture of the surrounding environment through its camera and identifies the corresponding shooting mode in the preset shooting mode repository from the various features in the picture. The shooting modes are preset by the user controlling the unmanned aerial vehicle, with different modes set for the features of different environments. The unmanned aerial vehicle computes the similarity between the features in the picture of the surrounding environment and the features corresponding to each shooting mode preset by the user, obtains a plurality of similarity results, and takes the shooting mode corresponding to the largest similarity result as the shooting mode for the surrounding environment. In this embodiment, the user may change the correspondence between shooting modes and environmental features at any time according to actual shooting requirements; this correspondence is not limited in this embodiment.
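The similarity-based mode selection described above might be sketched as follows. The feature encoding, the cosine similarity measure, and the repository contents are all hypothetical assumptions; the patent does not specify a particular similarity calculation.

```python
import math

# Hypothetical preset shooting-mode repository: each mode maps to a feature
# vector (scenery, target, subject-motion, contrast), values invented here.
MODE_REPOSITORY = {
    "close-range": [0.9, 0.1, 0.2, 0.7],
    "deep-view":   [0.1, 0.9, 0.1, 0.5],
    "following":   [0.2, 0.3, 0.9, 0.4],
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def select_mode(env_features):
    """Pick the stored mode whose features are most similar to the scene's."""
    return max(MODE_REPOSITORY,
               key=lambda m: cosine(env_features, MODE_REPOSITORY[m]))
```

The repository is an ordinary mapping, so the user can add, remove, or re-weight mode entries at runtime, matching the paragraph's note that the correspondence is user-adjustable.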
In one possible implementation, step S101 further includes: according to a shooting mode, a plurality of images to be focused are acquired, wherein the shooting mode comprises a near view mode, a far view mode, a deep view mode, an HDR mode and a following mode.
Specifically, the unmanned aerial vehicle captures a plurality of images to be focused of the target to be focused according to the determined shooting mode. For example, when the unmanned aerial vehicle photographs a flower or a small object at close distance, a larger aperture (a smaller f-number, e.g., f/2.8 to f/5.6) is needed to achieve a shallow depth of field, a clear subject, and a blurred background; the shutter speed and ISO depend on the light conditions to ensure proper exposure, so the close-range mode can be selected. When the drone photographs a landscape or a city overview, a smaller aperture (a larger f-number, e.g., f/8 to f/16) is needed to ensure that both the foreground and the background remain clear, with the shutter speed and ISO again adjusted according to the light conditions, so the deep-view mode can be selected. When the drone shoots a scene with a wide range of brightness, such as a sunrise or sunset, the camera takes multiple pictures at different exposures and then combines them to preserve more detail, so the HDR mode can be selected; a typical HDR setting might include an exposure compensation of 0 followed by multiple exposures, typically +1 EV, -1 EV, and so on. When the drone shoots a moving subject, such as a sports game or a traveling group, it automatically follows the subject and uses continuous autofocus to keep the subject clear, with the shutter speed and ISO adjusted according to the light conditions and the movement speed, so the following mode can be selected. When the drone shoots a distant scene or the sky, such as clouds or mountains, a smaller aperture is needed to capture the detail, with the shutter speed and ISO depending on the actual light conditions, so the far-view mode can be selected.
In the present embodiment, the above-mentioned plurality of shooting modes are taken as an example, and in the case of practical application, the present embodiment is not limited to the above-mentioned plurality of shooting modes, and the type and the number of shooting modes in the unmanned aerial vehicle preset shooting mode repository are not limited.
In one possible implementation, step S101 further includes: acquiring a plurality of images to be focused; the plurality of images to be focused comprise a first image to be focused; acquiring a first Laplace operator in a first image to be focused through Laplace evaluation function evaluation; judging whether the first operator is greater than or equal to a preset operator; and if the first operator is greater than or equal to the preset operator, taking the first image to be focused as an original image.
Specifically, the sharpness of the original image is judged by a Laplace evaluation function defined over the image, in which M and N denote the height and width of the original image, F(x, y) denotes the gray value of the pixel with coordinates (x, y) in the original image, and S denotes the sampling interval, which can be set according to the actual situation.
Illustrating: setting a preset operator L 0 Calculating a first Laplace operator L in a first image to be focused 1 If L 1 Greater than or equal to a preset operator L 0 It is indicated that the original image is clear and that subsequent autofocus operations may be performed on the image.
In one possible implementation, step S101 further includes: if the first Laplace operator is smaller than the preset operator, acquiring a second image to be focused, and acquiring a second Laplace operator in the second image to be focused through Laplace evaluation function evaluation; the plurality of images to be focused comprise a second image to be focused; judging whether the second operator is greater than or equal to a preset operator; and if the second operator is greater than or equal to the preset operator, taking the second image to be focused as the original image.
Specifically, if the sharpness of the original image does not meet the preset condition, that is, the Laplace operator of the original image is smaller than the preset Laplace operator, a new original image is acquired again, the Laplace operator in the newly acquired original image is calculated, if the newly calculated Laplace operator is still smaller than the preset Laplace operator, the step is repeated until the image with the Laplace operator larger than the preset Laplace operator is acquired, and the image is the image with the sharpness meeting the preset condition.
Illustrating: if L 1 Less than a preset operator L 0 Re-acquiring an original image, wherein the image is a second image to be focused, and calculating a second Las operator L in the second image to be focused 2 Comparison of L 2 And L is equal to 0 If L is the size relation of 2 Greater than or equal to a preset operator L 0 It is indicated that the newly acquired original image is clear and that subsequent autofocus operations may be performed on the image.
Step S102, acquiring a preset focusing area; the original image comprises a preset focusing area; the preset focus area includes an object to be focused.
Specifically, according to the target to be focused in the original image determined in step S101, the target area to be focused is determined as the preset focusing area, and the target type of the target to be focused is determined; different target types use different focusing strategies, that is, different preset focusing areas are selected. It should be noted that the preset focusing area is obtained from a large amount of experimental data, and the target to be focused can be confined within the preset focusing area regardless of how the target is selected; the area of the preset focusing area is smaller than or equal to that of the original image and lies within the original image.
In one possible implementation, step S102 further includes: judging whether the object to be focused belongs to an aerial object or a ground object; if the target to be focused is an aerial target, taking the first preset focusing area as the preset focusing area of the target to be focused; the first preset focus area is a middle area of the original image.
Specifically, the target to be focused may be an aerial target or a ground target, and the unmanned aerial vehicle judges this from the features of the target to be focused in the image, for example scale information: judging by the scale of the target to be focused in the image. Aerial targets are usually of a larger scale, while ground targets are usually of a relatively smaller scale, so the category of a target can be judged preliminarily by detecting its scale. Other methods include judging by information such as the motion characteristics and geometric characteristics of the target to be focused, or by a combination of the above, which is not limited in this embodiment. If the target to be focused belongs to the aerial targets, a first area to be focused with an area smaller than or equal to that of the original image is constructed with the geometric center of the original image as its center.
Illustrating: referring to fig. 2a, an embodiment of an automatic focusing method for an unmanned aerial vehicle according to an embodiment of the present invention is shown, where when a target to be focused belongs to an aerial target, a preset focusing area may be set to a rectangular area with a length and width of 1/2 of the length and width of an original image, assuming that the original image is a rectangle with a length and width of M, N, the preset focusing area may be a rectangle with a length and width of M/2 and a length and width of N/2, respectively, and the geometric center of the preset area coincides with the geometric center of the original image. It should be noted that, the length and width of the first preset focusing area are 1/2 and the first preset focusing area is rectangular, which is only one case proposed for simplifying the description in this embodiment, in a real case, the first preset focusing area may also be an irregularly shaped graph, which only needs to ensure that the area of the first preset focusing area is smaller than or equal to the original image and is within the original image.
In one possible implementation, step S102 further includes: if the target to be focused is a ground target, taking the second preset focusing area as the preset focusing area of the target to be focused; the second preset focus area is a bottom area of the original image.
Specifically, if the target to be focused is a ground target, a second area to be focused with an area smaller than or equal to that of the original image is constructed, starting from the bottom of the original image and extending toward the geometric center of the original image.
For example: referring to fig. 2b, which shows another embodiment of the unmanned aerial vehicle automatic focusing method provided by an embodiment of the present invention, when the target to be focused is a ground target, the second preset focusing area may be set to a trapezoid whose top edge is 2/3 of the length of the original image and whose height is 1/2 of the width of the original image. Assuming the original image is a rectangle of length M and width N, the second preset focusing area may be a trapezoid with top base 2M/3, bottom base M and height N/2, whose bottom side coincides with the bottom side of the original image. It should be noted that the second preset focusing area for a ground target may also be an irregularly shaped region, provided its area is smaller than or equal to that of the original image and it lies within the original image.
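A point-in-region test for the trapezoid of this example can be sketched as follows. The base widths and height are reconstructed from the worked example (top base 2/3 of the image width, bottom base the full width, height half the image height), so treat them as assumptions.

```python
def in_bottom_trapezoid(x, y, img_w, img_h):
    """Second preset focus region for ground targets: a trapezoid with
    bottom base = full image width, top base = 2/3 of the width, and
    height = half the image height, bottom edge coincident with the
    image bottom (y grows downward)."""
    height = img_h / 2
    if y < img_h - height:          # above the trapezoid entirely
        return False
    # t is 1 at the trapezoid's top edge and 0 at the image bottom
    t = (img_h - y) / height
    # half-width shrinks linearly from img_w/2 at the bottom to img_w/3 at the top
    half_width = img_w / 2 - t * (img_w / 6)
    return abs(x - img_w / 2) <= half_width
```

A mask built from this predicate restricts gradient evaluation to the ground-facing part of the frame.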
Step S103, carrying out region segmentation on a preset focusing region to obtain a focusing sub-region; the preset focus area includes a plurality of focus sub-areas.
Specifically, a dividing window of fixed, preset size is defined. The window is moved horizontally from one end of the preset focusing area to the other, advancing by one window width each step, and the area covered by the window at each position is marked as one focusing sub-region. After the window reaches the other end, it is shifted vertically by one window height, and the horizontal sweep is repeated, until the whole preset focusing area has been divided.
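The sliding-window division above can be sketched as follows; the window size and the decision to discard a trailing partial strip are illustrative assumptions not fixed by the text.

```python
def split_into_subregions(region_w, region_h, win):
    """Sweep a fixed-size square window across the focus region,
    stepping one window width horizontally and one window height
    vertically, as in step S103. Returns the top-left corner of every
    focus sub-region; any trailing strip narrower than the window is
    discarded in this sketch."""
    return [(x, y)
            for y in range(0, region_h - win + 1, win)
            for x in range(0, region_w - win + 1, win)]
```

For a 100 x 60 region and a 20-pixel window this yields 5 columns by 3 rows, i.e. 15 non-overlapping focus sub-regions.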
Step S104, acquiring a first focusing subarea and a second focusing subarea, wherein the first focusing subarea and the second focusing subarea are any two focusing subareas in a preset focusing area.
Specifically, after the preset focusing area is divided into a certain number of focusing sub-regions in the preset dividing manner, any one of these regions is selected as the first focusing sub-region, and any other region is selected as the second focusing sub-region. Assuming the preset dividing manner yields n focusing sub-regions, up to n focusing sub-regions can be obtained in this way.
Step S105, judging the magnitude relation between the first gradient value in the first focusing sub-region and the second gradient value in the second focusing sub-region; the first gradient value and the second gradient value are obtained through a hill-climbing algorithm.
Specifically, the n gradient maxima of the n focusing sub-regions obtained in step S104 are calculated by the hill-climbing algorithm, and the magnitudes of these n gradient maxima are then compared.
For example, the hill-climbing algorithm is described using a single focusing sub-region. A starting pixel of the sub-region is selected as the starting point of the calculation; assume its coordinate is F0. The gradient values between F0 and each of its 8 adjacent coordinates are calculated, giving the 8 gradient values associated with F0 (F0 and its 8 adjacent coordinates form a grid). The magnitudes of these 8 gradient values are compared; the maximum is denoted x1, and the coordinate corresponding to it is denoted F1. The same steps are then repeated for F1: the gradient values between F1 and each of its 8 adjacent coordinates are calculated and compared, the maximum is denoted x2, and the corresponding coordinate is denoted F2. It is then judged whether x2 is greater than x1; if x2 is greater than x1, the above operation is repeated until some x(n+1) is less than xn. At that point xn is the maximum gradient value calculated with F0 as the starting point, and is denoted xmax0.
Several different starting pixels of the focusing sub-region are then selected in turn, and the above calculation steps are repeated to obtain the maximum gradient value reached from each starting point, denoted xmax1, xmax2, xmax3, ..., xmaxm, where xmaxm is the maximum gradient value calculated with the m-th starting pixel as the starting point. The largest of xmax1, xmax2, xmax3, ..., xmaxm is taken as the gradient maximum of the focusing sub-region.
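The multi-start hill climbing described above can be sketched as follows. For brevity the sketch assumes the per-pixel gradient values have already been computed into a 2-D map, whereas the text computes gradients between adjacent coordinates on the fly; the function and parameter names are illustrative.

```python
def subregion_gradient_max(grad, starts):
    """Multi-start hill climbing over a 2-D gradient map `grad`
    (list of rows). From each start pixel, repeatedly move to the
    8-neighbour with the largest gradient value until no neighbour
    improves (x(n+1) < xn); the sub-region gradient maximum is the
    best value reached over all starting points."""
    h, w = len(grad), len(grad[0])
    best = float("-inf")
    for (r, c) in starts:
        cur = grad[r][c]
        while True:
            neighbours = [(r + dr, c + dc)
                          for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                          if (dr, dc) != (0, 0)
                          and 0 <= r + dr < h and 0 <= c + dc < w]
            nr, nc = max(neighbours, key=lambda p: grad[p[0]][p[1]])
            if grad[nr][nc] <= cur:   # no neighbour improves: stop climbing
                break
            r, c, cur = nr, nc, grad[nr][nc]
        best = max(best, cur)
    return best
```

Multiple starting points are needed because a single climb can get trapped on a local maximum of the gradient map.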
In one possible implementation, step S105 further includes: comparing the first gradient value, the second gradient value and the preset threshold value; the first gradient value and the second gradient value are larger than or equal to a preset threshold value.
Specifically, a preset threshold is set and the gradient maximum of each focusing sub-region is compared with it. A gradient maximum smaller than the preset threshold is discarded; a gradient maximum greater than or equal to the preset threshold is retained. The subsequent magnitude comparison is then performed only among the retained gradient maxima.
In one possible implementation, step S105 further includes: the preset threshold is an average value of a plurality of gradient values in the focus subregions.
Specifically, the preset threshold is calculated using the averaging formula x = (x1 + x2 + ... + xn)/n, where x is the preset threshold and xi is the gradient maximum of the i-th focusing sub-region.
For example, assume the preset focusing area is divided into 5 focusing sub-regions whose gradient maxima are x1 to x5 respectively; the preset threshold is then (x1 + x2 + x3 + x4 + x5)/5.
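The thresholding of this step can be sketched as follows; the function name is an illustrative assumption.

```python
def retained_maxima(maxima):
    """Keep only the sub-region gradient maxima that reach the preset
    threshold, which the text defines as the mean of all maxima."""
    threshold = sum(maxima) / len(maxima)
    return [x for x in maxima if x >= threshold]
```

For the 5-sub-region example, maxima of [1, 2, 3, 4, 5] give a threshold of 3, so only [3, 4, 5] survive to the later sorting step.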
And S106, if the first gradient value is larger than the second gradient value, taking the first focusing subarea as a focusing area.
Specifically, a sorting operation is performed on the gradient maxima retained in step S105, and the focusing sub-region containing the largest gradient maximum is used as the focusing area for automatic focusing.
For example: if x1 to x5 are the gradient maxima of the first to fifth focusing sub-regions respectively, and the sorting operation yields x3 > x5 > x2 > x1 > x4, then the third focusing sub-region corresponding to x3 is used as the autofocus focusing area.
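The selection in steps S105 to S106 reduces to an argmax over the retained maxima, which can be sketched as:

```python
def pick_focus_subregion(maxima):
    """Return the 0-based index of the focus sub-region whose gradient
    maximum is largest; that sub-region becomes the autofocus area."""
    return max(range(len(maxima)), key=lambda i: maxima[i])
```

With maxima ordered as in the example (x3 largest), index 2, i.e. the third sub-region, is selected.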
The application also provides an unmanned aerial vehicle automatic focusing device, which comprises an acquisition module 31, a segmentation module 32 and a processing module 33.
An acquisition module 31 for acquiring a plurality of images to be focused; the plurality of images to be focused comprise a first image to be focused; acquiring a preset focusing area, wherein the original image comprises the preset focusing area; the preset focusing area comprises a target to be focused; acquiring a first focusing subarea and a second focusing subarea, wherein the first focusing subarea and the second focusing subarea are any two different focusing subareas in a preset focusing area;
the dividing module 32 is configured to divide a preset focusing area into areas to obtain focusing sub-areas; the preset focusing area comprises a plurality of focusing subareas;
a processing module 33, configured to obtain a first Laplace operator in the first image to be focused through Laplace evaluation function evaluation; judge whether the first operator is greater than or equal to a preset operator; if the first operator is greater than or equal to the preset operator, take the first image to be focused as an original image, the original image comprising an object to be focused; judge the magnitude relation between the first gradient value in the first focusing sub-region and the second gradient value in the second focusing sub-region, the first gradient value and the second gradient value being obtained through a hill-climbing algorithm; and if the first gradient value is larger than the second gradient value, take the first focusing sub-region as a focusing area so that the unmanned aerial vehicle can complete automatic focusing on the original image.
Optionally, the acquiring module 31 is configured to acquire an ambient image, where the ambient image includes an ambient feature, and the ambient feature includes a scene feature, a target feature, a subject motion feature, and a contrast feature; and acquiring a shooting mode corresponding to the environmental characteristic from a preset shooting mode storage library, wherein the preset storage library is used for storing the corresponding relation between the environmental characteristic and the shooting mode.
Optionally, the acquiring module 31 is configured to acquire a plurality of images to be focused according to a shooting mode, where the shooting mode includes a close-range mode, a far-range mode, a deep-range mode, an HDR mode, and a following mode.
Optionally, the obtaining module 31 is configured to obtain the second image to be focused if the first operator is smaller than the preset operator, and obtain the second operator in the second image to be focused through evaluation of a laplace evaluation function; the plurality of images to be focused comprise a second image to be focused; judging whether the second operator is greater than or equal to a preset operator; and if the second operator is greater than or equal to the preset operator, taking the second image to be focused as the original image.
Optionally, the segmentation module 32 is configured to determine whether the target to be focused belongs to an aerial target or a ground target; if the target to be focused is an aerial target, the first preset focusing area is taken as the preset focusing area of the target to be focused, the first preset focusing area being the middle area of the original image; if the target to be focused is a ground target, the second preset focusing area is taken as the preset focusing area of the target to be focused, the second preset focusing area being the bottom area of the original image.
Optionally, the processing module 33 is configured to compare the first gradient value, the second gradient value and a preset threshold value; the first gradient value and the second gradient value are larger than or equal to a preset threshold value.
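The Laplace evaluation performed by the processing module can be sketched as follows. The patent does not specify its exact evaluation function, so this sketch uses a common variant (sum of absolute responses of the 4-neighbour Laplacian kernel) purely as an assumption; higher scores indicate sharper images, and the score plays the role of the "operator" compared against the preset operator.

```python
def laplacian_sharpness(img):
    """Laplacian focus-evaluation sketch: convolve each interior pixel
    with the 4-neighbour Laplacian kernel [[0,1,0],[1,-4,1],[0,1,0]]
    and sum the absolute responses. `img` is a list of rows of grey
    values; a defocused (smooth) image scores near zero."""
    h, w = len(img), len(img[0])
    score = 0
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            lap = (img[r-1][c] + img[r+1][c] + img[r][c-1] + img[r][c+1]
                   - 4 * img[r][c])
            score += abs(lap)
    return score
```

An image to be focused whose score meets or exceeds the preset operator would then be accepted as the original image.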
It should be noted that: in the device provided in the above embodiment, when implementing the functions thereof, only the division of the above functional modules is used as an example, in practical application, the above functional allocation may be implemented by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to implement all or part of the functions described above. In addition, the embodiments of the apparatus and the method provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the embodiments of the method are detailed in the method embodiments, which are not repeated herein.
The application also discloses electronic equipment. Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device according to the disclosure in an embodiment of the present application. The electronic device 400 may include: at least one processor 401, a memory 402, a user interface 403, at least one network interface 404, and at least one communication bus 405.
Wherein a communication bus 405 is used to enable connected communications between these components.
The user interface 403 may include a Display screen (Display) and a Camera (Camera), and the optional user interface 403 may further include a standard wired interface and a standard wireless interface.
The network interface 404 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
Wherein the processor 401 may include one or more processing cores. The processor 401 connects the various parts within the overall drone with various interfaces and lines, performs various functions of the drone and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 402, and invoking data stored in the memory 402. Alternatively, the processor 401 may be implemented in at least one hardware form of digital signal processing (Digital Signal Processing, DSP), field programmable gate array (Field-Programmable Gate Array, FPGA), programmable logic array (Programmable Logic Array, PLA). The processor 401 may integrate one or a combination of several of a central processing unit (Central Processing Unit, CPU), an image processor (Graphics Processing Unit, GPU), a modem, etc. The CPU mainly processes an operating system, a user interface, an application program and the like; the GPU is used for rendering and drawing the content required to be displayed by the display screen; the modem is used to handle wireless communications. It will be appreciated that the modem may not be integrated into the processor 401 and may be implemented by a single chip.
The Memory 402 may include a random access Memory (Random Access Memory, RAM) or a Read-Only Memory (Read-Only Memory). Optionally, the memory 402 includes a non-transitory computer readable medium (non-transitory computer-readable storage medium). Memory 402 may be used to store instructions, programs, code sets, or instruction sets. The memory 402 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above-described various method embodiments, etc.; the storage data area may store data or the like involved in the above respective method embodiments. The memory 402 may also optionally be at least one storage device located remotely from the aforementioned processor 401. Referring to fig. 4, an operating system, a network communication module, a user interface module, and a drone autofocus application may be included in a memory 402 as a computer storage medium.
In the electronic device 400 shown in fig. 4, the user interface 403 is mainly used as an interface for providing input for a user, and obtains data input by the user; and the processor 401 may be used to invoke a drone autofocus application stored in the memory 402 that, when executed by the one or more processors 401, causes the electronic device 400 to perform the method as described in one or more of the embodiments above. It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided herein, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is merely a division of logical functions, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling, direct coupling, or communication connection shown or discussed between components may be indirect coupling or communication connection through some service interfaces, devices, or units, and may be electrical or in other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in whole or in part in the form of a software product stored in a memory, comprising several instructions for causing a computer device (which may be a personal computer, a drone or a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present application. And the aforementioned memory includes: various media capable of storing program codes, such as a U disk, a mobile hard disk, a magnetic disk or an optical disk.
As used in the above embodiments, the term "when" may be interpreted as "if", "after", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrase "upon determining" or "if (a stated condition or event) is detected" may be interpreted as "if it is determined", "in response to determining", "upon detecting (the stated condition or event)", or "in response to detecting (the stated condition or event)", depending on the context.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, produces a flow or function in accordance with embodiments of the present application, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website site, computer, drone, or data center to another website site, computer, drone, or data center via a wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless means. The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a drone, data center, or the like, which contains one or more available medium integration. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), etc.
Those of ordinary skill in the art will appreciate that implementing all or part of the above-described method embodiments may be accomplished by a computer program to instruct related hardware, the program may be stored in a computer readable storage medium, and the program may include the above-described method embodiments when executed. And the aforementioned storage medium includes: ROM or random access memory RAM, magnetic or optical disk, etc.
The above embodiments are merely for illustrating the technical solution of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the corresponding technical solutions from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method of unmanned aerial vehicle autofocus, the method comprising:
acquiring a plurality of images to be focused; the plurality of images to be focused include a first image to be focused;
Obtaining a first Laplace operator in the first image to be focused through Laplace evaluation function evaluation;
judging whether the first operator is greater than or equal to a preset operator or not;
if the first operator is greater than or equal to the preset operator, the first image to be focused is taken as an original image; the original image comprises an object to be focused;
acquiring a preset focusing area, wherein the original image comprises the preset focusing area; the preset focusing area comprises the target to be focused;
performing region segmentation on the preset focusing region to obtain a focusing sub-region; the preset focusing area comprises a plurality of focusing subareas;
acquiring a first focusing subarea and a second focusing subarea, wherein the first focusing subarea and the second focusing subarea are any two different focusing subareas in the preset focusing area;
judging the magnitude relation between a first gradient value in the first focusing subarea and a second gradient value in the second focusing subarea; the first gradient value and the second gradient value are obtained through a hill-climbing algorithm;
and if the first gradient value is larger than the second gradient value, taking the first focusing subarea as a focusing area so that the unmanned aerial vehicle can complete automatic focusing on the original image.
2. The method of claim 1, wherein a plurality of images to be focused are acquired at the acquiring; before the plurality of images to be focused includes the first image to be focused, the method further includes:
acquiring surrounding environment pictures, wherein the environment pictures comprise environment features, and the environment features comprise scenery features, target features, main body motion features and contrast features;
and acquiring a shooting mode corresponding to the environmental characteristic from a preset shooting mode storage library, wherein the preset storage library is used for storing the corresponding relation between the environmental characteristic and the shooting mode.
3. The method according to claim 2, wherein the acquiring a plurality of images to be focused specifically comprises:
and acquiring a plurality of images to be focused according to the shooting mode, wherein the shooting mode comprises a close-range mode, a far-range mode, a deep-range mode, an HDR mode and a following mode.
4. The method of claim 1, wherein after determining whether the first operator is greater than or equal to a preset operator, the method further comprises:
if the first operator is smaller than the preset operator, acquiring a second image to be focused, and acquiring a second operator in the second image to be focused through Laplace evaluation function evaluation; the plurality of images to be focused comprise a second image to be focused;
Judging whether the second operator is greater than or equal to a preset operator or not;
and if the second operator is greater than or equal to the preset operator, taking the second image to be focused as the original image.
5. The method according to claim 1, wherein the acquiring a preset focusing area specifically comprises:
judging whether the target to be focused belongs to an aerial target or a ground target; if the target to be focused is the aerial target, taking a first preset focusing area as a preset focusing area of the target to be focused; the first preset focusing area is the middle area of the original image;
if the target to be focused is the ground target, taking a second preset focusing area as the preset focusing area of the target to be focused; the second preset focusing area is the bottom area of the original image.
6. The method of claim 1, wherein prior to determining the magnitudes of the first gradient value in the first focus subregion and the second gradient value in the second focus subregion, the method further comprises:
comparing the first gradient value, the second gradient value and a preset threshold value;
The first gradient value and the second gradient value are larger than or equal to the preset threshold value.
7. The method of claim 6, wherein the preset threshold is an average value of a plurality of gradient values in the plurality of focusing subareas.
8. An apparatus for automatic focusing of a drone, characterized in that it comprises an acquisition module (31), a segmentation module (32) and a processing module (33),
the acquisition module (31) is used for acquiring a plurality of images to be focused; the plurality of images to be focused include a first image to be focused; acquiring a preset focusing area, wherein the original image comprises the preset focusing area; the preset focusing area comprises the target to be focused; acquiring a first focusing subarea and a second focusing subarea, wherein the first focusing subarea and the second focusing subarea are any two different focusing subareas in the preset focusing area;
the segmentation module (32) is used for carrying out region segmentation on the preset focusing region to obtain a focusing sub-region; the preset focusing area comprises a plurality of focusing subareas;
the processing module (33) is configured to acquire a first Laplace operator in the first image to be focused through Laplace evaluation function evaluation; judge whether the first operator is greater than or equal to a preset operator; if the first operator is greater than or equal to the preset operator, take the first image to be focused as an original image, the original image comprising an object to be focused; judge the magnitude relation between a first gradient value in the first focusing subarea and a second gradient value in the second focusing subarea, the first gradient value and the second gradient value being obtained through a hill-climbing algorithm; and if the first gradient value is larger than the second gradient value, take the first focusing subarea as a focusing area so that the unmanned aerial vehicle can complete automatic focusing on the original image.
9. An electronic device comprising a processor (401), a memory (402), a user interface (403) and a network interface (404), the memory (402) being configured to store instructions, the user interface (403) and the network interface (404) being configured to communicate to other devices, the processor (401) being configured to execute the instructions stored in the memory (402) to cause the electronic device (400) to perform the method according to any one of claims 1 to 7.
10. A computer readable storage medium storing instructions which, when executed, perform the method steps of any one of claims 1 to 7.
CN202311375844.XA 2023-10-23 2023-10-23 Unmanned aerial vehicle automatic focusing method and device and electronic equipment Pending CN117479011A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311375844.XA CN117479011A (en) 2023-10-23 2023-10-23 Unmanned aerial vehicle automatic focusing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN117479011A true CN117479011A (en) 2024-01-30

Family

ID=89630413

Country Status (1)

Country Link
CN (1) CN117479011A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination