WO2021168707A1 - Focusing method, device and equipment - Google Patents

Focusing method, device and equipment

Info

Publication number
WO2021168707A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
imaging image
imaging
area
focused
Prior art date
Application number
PCT/CN2020/076839
Other languages
English (en)
French (fr)
Inventor
任创杰
胡晓翔
封旭阳
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to CN202080004236.6A priority Critical patent/CN112585945A/zh
Priority to PCT/CN2020/076839 priority patent/WO2021168707A1/zh
Publication of WO2021168707A1 publication Critical patent/WO2021168707A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N23/67: Focus control based on electronic image sensor signals
    • H04N23/675: Focus control based on electronic image sensor signals comprising setting of focusing regions

Definitions

  • This application relates to the field of shooting technology, and in particular to a focusing method, device and equipment.
  • Focusing refers to the process of changing the distance between the lens and the image plane through the camera's focusing mechanism so that the image of the subject becomes clear. Focusing can include auto focus and manual focus.
  • in the related art, the target area selected for auto focus is a rectangular area.
  • for example, a rectangular area in the center of the frame may be selected as the target area.
  • however, the selected rectangular target area may include not only the subject to be photographed, such as a person, but also background such as buildings and trees.
  • as a result, the above method of focusing based on a rectangular target area has the problem of low focusing accuracy.
  • the embodiments of the present application provide a focusing method, device, and equipment to solve the problem of low focusing accuracy in the prior art method of focusing based on a rectangular target area.
  • an embodiment of the present application provides a focusing method, which is applied to a drone for power inspection.
  • the drone is provided with an image acquisition device, and the image acquisition device is used to shoot, during the power inspection of the drone, images including the power equipment to be inspected; the method includes:
  • an embodiment of the present application provides a focusing method, including:
  • the lens parameters of the image acquisition device are adjusted to adjust the clarity of the object corresponding to the area to be focused in the imaged image of the image acquisition device, so that the image acquisition device focuses on the object corresponding to the area to be focused.
  • an embodiment of the present application provides a drone, including: a drone body, a power system provided on the body, an image acquisition device, and a focusing device;
  • the power system is used to provide power for the UAV
  • the image acquisition device is used to shoot an image including the power equipment to be inspected during the power inspection of the drone;
  • the focusing device includes a memory and a processor
  • the memory is used to store program code
  • the processor calls the program code, and when the program code is executed, is used to perform the following operations:
  • an embodiment of the present application provides a focusing device, the device including: a memory and a processor;
  • the memory is used to store program code
  • the processor calls the program code, and when the program code is executed, is used to perform the following operations:
  • the lens parameters of the image acquisition device are adjusted to adjust the clarity of the object corresponding to the area to be focused in the imaged image of the image acquisition device, so that the image acquisition device focuses on the object corresponding to the area to be focused.
  • an embodiment of the present application provides a computer-readable storage medium, the computer-readable storage medium stores a computer program, the computer program includes at least one piece of code, and the at least one piece of code can be executed by a computer to control the computer to execute the method described in any one of the above first aspects.
  • an embodiment of the present application provides a computer-readable storage medium, the computer-readable storage medium stores a computer program, the computer program includes at least one piece of code, and the at least one piece of code can be executed by a computer to control the computer to execute the method described in any one of the above second aspects.
  • an embodiment of the present application provides a computer program, when the computer program is executed by a computer, it is used to implement the method described in any one of the above-mentioned first aspects.
  • an embodiment of the present application provides a computer program, when the computer program is executed by a computer, it is used to implement the method described in any one of the above second aspects.
  • the embodiments of the present application provide a focusing method, device, and equipment.
  • the imaging image is acquired by an image acquisition device, the foreground pixels in the imaging image are recognized by a preset algorithm, and at least a part of the area occupied by the foreground pixels in the imaging image is determined as the area to be focused, corresponding to the power equipment to be inspected, in the imaging image.
  • the lens parameters of the image acquisition device are then adjusted according to the area to be focused corresponding to the power equipment to be inspected, so as to adjust the sharpness of the power equipment to be inspected in the imaging image of the image acquisition device during the power inspection.
  • in this way, the focus of the image acquisition device is adjusted according to the area occupied by the power equipment to be inspected in the imaging image, so that the image acquisition device focuses on the power equipment to be inspected, which improves the accuracy of the focus and is beneficial to improving the clarity of the photographed power equipment to be inspected.
  • FIG. 1 is a schematic diagram of an application scenario of a focusing method provided by an embodiment of the application
  • FIG. 2 is a schematic flowchart of a focusing method provided by an embodiment of the application.
  • FIG. 3A is a schematic diagram of foreground pixels provided by an embodiment of the application.
  • FIG. 3B is a schematic diagram of a to-be-focused area provided by an embodiment of the application.
  • FIG. 4 is a schematic flowchart of a focusing method provided by another embodiment of this application.
  • Fig. 5 is a schematic structural diagram of a neural network model provided by an embodiment of the application.
  • FIG. 6 is a schematic flowchart of a focusing method provided by another embodiment of this application.
  • FIG. 7 is a schematic diagram of a preset direction provided by an embodiment of this application.
  • FIG. 8A is a schematic diagram of a region of interest provided by an embodiment of this application.
  • FIGS. 8B and 8C are schematic diagrams of the enlarged area provided by an embodiment of the application.
  • FIG. 9 is a schematic diagram of prompting the user of the area to be focused according to an embodiment of the application.
  • FIG. 10 is a schematic flowchart of a focusing method provided by another embodiment of this application.
  • FIG. 11 is a schematic structural diagram of a focusing device provided by an embodiment of the application.
  • Fig. 12 is a schematic structural diagram of a drone provided by an embodiment of the application.
  • the focusing method provided by the embodiment of the present application may be applied to the focusing system 10 shown in FIG. 1, and the focusing system 10 may include an image acquisition device 11 and a focusing device 12.
  • the image acquisition device 11 is used to acquire images; the focusing device 12 can acquire an imaging image from the image acquisition device and process it using the focusing method provided in the embodiments of the application, so as to adjust the sharpness of the subject to be photographed in the imaging image of the image acquisition device 11, enabling the image acquisition device 11 to focus on the subject to be photographed.
  • the image acquisition device 11 includes a visible light camera, an infrared camera, and the like.
  • the focusing system 10 can be applied to any scene that requires focusing control.
  • the focusing system 10 may be applied to a digital camera, a smart phone that provides a shooting function, a drone, and the like.
  • an imaging image is collected by an image acquisition device, a preset algorithm is used to identify foreground pixels in the imaging image, and at least a part of the area occupied by the foreground pixels in the imaging image is determined as the area to be focused in the imaging image.
  • FIG. 2 is a schematic flowchart of a focusing method provided by an embodiment of this application.
  • the execution subject of this embodiment may be a focusing device.
  • the method of this embodiment may include:
  • step 201 an imaging image is collected by an image acquisition device, and a preset algorithm is used to identify foreground pixels in the obtained imaging image.
  • the imaging image is used to perform focus control on the image acquisition device.
  • the imaging image may include foreground objects and background objects.
  • the foreground objects can be considered as the subject to be photographed.
  • the photographing subject may be, for example, a person, an animal, an electric device, or the like.
  • the image is composed of individual pixels, and the imaged image is no exception.
  • the pixels in the imaging image can be divided into foreground pixels and background pixels based on the foreground objects and background objects in the imaging image.
  • the foreground pixel in the imaging image may refer to the pixel occupied by the foreground object in the imaging image
  • the background pixel in the imaging image may refer to the pixel occupied by the background object in the imaging image.
  • a preset algorithm may be used to identify the foreground pixel corresponding to the foreground object from the pixels of the imaging image.
  • the foreground pixel corresponding to the foreground object can be identified from all pixels of the imaging image, or the foreground pixel corresponding to the foreground object can be identified from the pixels corresponding to a partial area of the imaging image.
  • the specific algorithm used to identify the foreground pixels in the imaging image can be flexibly designed according to requirements.
  • a neural network model can be used to identify foreground pixels, which is beneficial to reduce the difficulty of algorithm design and improve recognition accuracy.
  • Step 202 Determine at least a part of the area occupied by the foreground pixels in the imaging image as an area to be focused in the imaging image.
  • the area to be focused may refer to the area targeted by the image acquisition device for focusing. Since the foreground object is usually the subject to be photographed, the area to be focused in the imaging image is closely tied to the foreground pixels in the imaging image.
  • the area to be focused can be determined according to the foreground pixels in the imaging image. Specifically, the area occupied by the foreground pixels in the imaging image may be determined as the area to be focused in the imaging image.
  • the area to be focused may be as shown in the area filled with diagonal lines in FIG. 3B.
  • a small box in FIGS. 3A and 3B can represent a pixel; a number 0 in a small box can represent a background pixel, and a number 1 in a small box can represent a foreground pixel.
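  • The mapping from such a 0/1 foreground mask to an area to be focused, as illustrated by FIGS. 3A and 3B, can be sketched as follows (a minimal Python illustration; the mask values and function name are assumptions, not part of the application):

```python
def focus_area(mask):
    """Collect the (row, col) positions of foreground pixels (value 1).

    The area occupied by these positions corresponds to the area to be
    focused; background pixels (value 0) are excluded.
    """
    return {(r, c)
            for r, row in enumerate(mask)
            for c, value in enumerate(row)
            if value == 1}

# A 4x4 imaging image: 1 marks foreground pixels, 0 marks background pixels.
mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
area = focus_area(mask)  # {(1, 1), (1, 2), (2, 1), (2, 2)}
```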
  • Step 203 Adjust the lens parameters of the image acquisition device according to the area to be focused, so as to adjust the clarity of the object corresponding to the area to be focused in the imaged image of the image acquisition device, so that the image acquisition device focuses on the object corresponding to the area to be focused.
  • the lens parameters of the image acquisition device are adjusted.
  • focusing, also called focus adjustment, does not change the focal length of the lens but changes the image distance.
  • the distance between the imaging surface and the lens is adjusted so that the distance from the imaging surface to the optical center equals the image distance, so that the object can be clearly imaged onto the film (photosensitive element).
  • the process of adjusting the image acquisition device so that the subject to be photographed is clearly imaged is the focusing process.
  • the focusing process can be realized based on the area to be focused; that is, the lens parameters of the image acquisition device are adjusted so that the object corresponding to the area to be focused (that is, the object corresponding to the foreground pixels) can be clearly imaged. Here, a clear image of the object corresponding to the area to be focused can be understood as the image acquisition device focusing on that object.
  • since the object corresponding to the foreground pixels is usually the subject to be photographed, the subject to be photographed can thereby be clearly imaged.
  • the lens parameter adjusted according to the area to be focused may specifically be a parameter that affects the distance between the imaging surface and the lens of the image acquisition device.
  • the focus ring can be used to change the distance from the sharpest plane of imaging to the lens. Based on this, the lens parameters adjusted according to the area to be focused can specifically be the rotation direction and number of rotations of the focus ring.
  • in this embodiment, the imaging image is collected by the image acquisition device, the foreground pixels in the imaging image are recognized by a preset algorithm, at least a part of the area occupied by the foreground pixels in the imaging image is determined as the area to be focused in the imaging image, and the lens parameters of the image acquisition device are adjusted according to the area to be focused, so as to adjust the sharpness of the object corresponding to the area to be focused in the imaging image of the image acquisition device. In this way, the focus of the image acquisition device is adjusted according to the area occupied by the foreground pixels in the imaging image.
  • since the subject to be photographed is usually in the foreground of the lens, and the area occupied by the foreground pixels does not include background information, the image acquisition device can focus on the object corresponding to the foreground pixels. Compared with the related art, in which the image area used for focus control includes background information, the problem of low focusing accuracy caused by the influence of background information is avoided, the focusing accuracy is improved, and the sharpness of the photographed object is improved.
  • FIG. 4 is a schematic flowchart of a focusing method provided by another embodiment of the application. This embodiment mainly describes an optional implementation manner based on the embodiment shown in FIG. 2. As shown in Figure 4, the method of this embodiment may include:
  • Step 401 Acquire an imaging image through an image acquisition device, and input the imaging image into a pre-trained first neural network model to obtain a first output result.
  • the first output result includes the confidence that each pixel in the imaging image is a foreground pixel.
  • the first neural network model may specifically be a convolutional neural network (Convolutional Neural Networks, CNN) model.
  • the structure of the first neural network model may be as shown in FIG. 5, for example.
  • the first neural network model may include multiple computing nodes, and each computing node may include a convolution (Conv) layer, batch normalization (BN), and an activation function ReLU.
  • the nodes can be connected by skip connections; input data of size K × H × W can be fed into the first neural network model, and after processing by the first neural network model, output data of size C × H × W can be obtained.
  • K can represent the number of input channels
  • K can be equal to 3, corresponding to the red (R), green (G) and blue (B) channels respectively
  • H can represent the height of the imaging image
  • W can indicate the width of the imaged image
  • C equal to 2 can indicate that the number of output channels is 2.
  • the first output result of the first neural network model may include the confidence feature maps respectively output by the two output channels. These two output channels may correspond one-to-one to two object categories, namely the foreground category and the background category; the pixel value of the confidence feature map of a single object category characterizes the probability that the pixel belongs to that category.
  • for example, if the pixel value at pixel position (100, 100) in confidence feature map 1 is 90, it can indicate that the probability that the pixel at pixel position (100, 100) is a foreground pixel is 90%; if the pixel value at pixel position (100, 80) in confidence feature map 2 is 20, it can indicate that the probability that the pixel at pixel position (100, 80) is a foreground pixel is 20%.
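  • Such per-channel confidence values are commonly produced by a per-pixel softmax over the output channels. The following sketch (the logit values are illustrative assumptions, not values from the application) reproduces confidences like 90% and 20%:

```python
import math

def foreground_confidence(fg_logit, bg_logit):
    """Per-pixel softmax over the two output channels (foreground vs
    background), returning the foreground confidence as a percentage."""
    e_fg, e_bg = math.exp(fg_logit), math.exp(bg_logit)
    return round(100 * e_fg / (e_fg + e_bg))

# A 1 x 2 slice of the 2 x H x W network output: one row, two pixels.
fg_logits = [2.2, -1.4]
bg_logits = [0.0, 0.0]
confidences = [foreground_confidence(f, b)
               for f, b in zip(fg_logits, bg_logits)]
# confidences == [90, 20]
```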
  • Step 402 Determine foreground pixels in the imaging image according to the first output result.
  • a pixel in the imaging image whose confidence of being a foreground pixel is greater than a preset threshold may be determined as a foreground pixel.
  • for example, the pixels in confidence feature map 1 whose probability of being a foreground pixel is greater than 80% may be determined as foreground pixels.
  • as an alternative to step 401 and step 402, the foreground pixels in the imaging image may be identified based on a pre-trained second neural network model.
  • that is, step 401 and step 402 can be replaced with the following step A and step B.
  • Step A Collect an imaging image through an image acquisition device, and input the imaging image into a pre-trained second neural network model to obtain a second output result.
  • the second output result includes the confidence that each pixel in the imaging image belongs to each foreground category of at least one foreground category;
  • Step B Determine pixels of each foreground category in the imaging image according to the second output result to obtain foreground pixels in the imaging image.
  • the second neural network model may specifically be a CNN model, and its structure is similar to the structure of the first neural network model shown in FIG. 5.
  • the second output result of the second neural network model may include the confidence feature maps respectively output by the C output channels, where C is greater than 2.
  • these C output channels may correspond one-to-one to C object categories, which may specifically include multiple specific foreground categories and a background category; the pixel value of the confidence feature map of a single object category characterizes the probability that the pixel belongs to that category.
  • for example, the second output result can include confidence feature map 3, confidence feature map 4, and confidence feature map 5, where confidence feature map 3 corresponds to specific foreground category 1, confidence feature map 4 corresponds to specific foreground category 2, and confidence feature map 5 corresponds to the background category.
  • then, if the pixel value at pixel position (100, 100) in confidence feature map 3 is 90, it can indicate that the probability that the pixel at (100, 100) is a foreground pixel of specific foreground category 1 is 90%; if the pixel value at pixel position (100, 60) in confidence feature map 4 is 85, it can indicate that the probability that the pixel at (100, 60) is a foreground pixel of specific foreground category 2 is 85%; if the pixel value at pixel position (100, 80) in confidence feature map 5 is 20, it can indicate that the probability that the pixel at (100, 80) is a foreground pixel is 20%.
  • a pixel in the imaging image whose confidence of being a pixel of some foreground category is greater than a preset threshold may be determined as a foreground pixel.
  • for example, the pixels in confidence feature map 3 and confidence feature map 4 whose probability of being a foreground-category pixel is greater than 80% can be determined as foreground pixels.
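  • With C output channels, the same thresholding can be applied per foreground category. A hedged sketch (the category names and threshold are illustrative assumptions):

```python
def select_by_category(category_maps, foreground_categories, threshold=80):
    """Collect, per foreground category, the pixels whose confidence in that
    category's map exceeds the threshold; their union gives the foreground
    pixels of the imaging image."""
    result = {}
    for name in foreground_categories:
        cmap = category_maps[name]
        result[name] = [(r, c)
                        for r, row in enumerate(cmap)
                        for c, conf in enumerate(row)
                        if conf > threshold]
    return result

# Three 1x2 confidence feature maps: two foreground categories, one background.
category_maps = {
    "foreground_1": [[90, 10]],
    "foreground_2": [[5, 85]],
    "background":   [[10, 15]],
}
per_category = select_by_category(category_maps,
                                  ["foreground_1", "foreground_2"])
# per_category == {"foreground_1": [(0, 0)], "foreground_2": [(0, 1)]}
```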
  • by using the second neural network model to identify foreground pixels, foreground pixels of a specific category in the imaging image can be identified, so that focusing can be based on the area occupied by foreground pixels of a specific category. This is conducive to improving the pertinence of identifying foreground pixels, thereby improving the accuracy of focusing.
  • Step 403 Determine at least a part of the area occupied by the foreground pixels in the imaging image as an area to be focused in the imaging image.
  • step 403 is similar to step 202, and will not be repeated here.
  • Step 404 Adjust the lens parameters of the image acquisition device according to the area to be focused, so as to adjust the clarity of the object corresponding to the area to be focused in the imaged image of the image acquisition device, so that the image acquisition device focuses on the object corresponding to the area to be focused.
  • the lens parameters of the image acquisition device may be adjusted according to the area to be focused until the quality of the image of the area to be focused meets a certain condition. Since the sharpness of an image is an important indicator to measure the quality of the image, the quality of the image in the area to be focused meets a certain condition, which can indicate that the sharpness of the image in the area to be focused meets a certain condition. When the sharpness of the image of the area to be focused meets certain conditions, it may indicate that the image of the object corresponding to the area to be focused is clear, that is, the image acquisition device focuses on the object corresponding to the area to be focused.
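  • One common way to realize "adjust the lens parameters until the sharpness of the area to be focused meets a condition" is contrast-detection autofocus: evaluate a sharpness metric on the focus area at different lens positions and keep the sharpest. A minimal sketch (the gradient-energy metric and the simulated capture function are assumptions, not the application's specific method):

```python
def sharpness(img):
    """Gradient-energy sharpness metric: sum of squared differences between
    horizontally and vertically adjacent pixels of the focus-area image."""
    h, w = len(img), len(img[0])
    total = 0
    for r in range(h):
        for c in range(w):
            if c + 1 < w:
                total += (img[r][c + 1] - img[r][c]) ** 2
            if r + 1 < h:
                total += (img[r + 1][c] - img[r][c]) ** 2
    return total

def autofocus(capture, lens_positions):
    """Return the lens position whose captured focus-area image is sharpest."""
    return max(lens_positions, key=lambda pos: sharpness(capture(pos)))

# Simulated capture: contrast in the focus area peaks at lens position 3.
def capture(pos):
    contrast = 10 - 2 * abs(pos - 3)
    return [[0, contrast], [0, 0]]

best = autofocus(capture, range(7))  # best == 3
```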
  • in this embodiment, the first output result is obtained by inputting the imaging image into the pre-trained first neural network model, the foreground pixels in the imaging image are determined according to the first output result, at least part of the area occupied by the foreground pixels in the imaging image is determined as the area to be focused in the imaging image, and the lens parameters of the image acquisition device are adjusted according to the area to be focused, so as to adjust the clarity of the object corresponding to the area to be focused in the imaging image of the image acquisition device.
  • in this way, the neural network model determines the foreground pixels in the imaging image, and the focus of the image acquisition device is adjusted based on the area occupied by the foreground pixels.
  • FIG. 6 is a schematic flowchart of a focusing method provided by another embodiment of this application. Based on the embodiment shown in FIG. 2, this embodiment mainly describes another optional implementation manner. As shown in FIG. 6, the method of this embodiment may include:
  • Step 601 Acquire an imaging image through an image acquisition device, and determine a region of interest in the imaging image.
  • in machine vision and image processing, the region of interest can refer to an area that needs to be processed, outlined from the image being processed in the form of a box, circle, ellipse, irregular polygon, or the like.
  • the region of interest in the imaging image can be automatically obtained based on various operators and functions.
  • the region of interest may be, for example, the region corresponding to the tracking frame in the target tracking algorithm.
  • the region of interest may also be other types of regions, which is not limited in this application. It should be noted that the specific method for determining the region of interest can be flexibly implemented according to requirements, which is not limited in this application.
  • Step 602 Use a preset algorithm to process the image corresponding to the region of interest in the imaging image to identify foreground pixels corresponding to the region of interest in the imaging image.
  • the image corresponding to the region of interest may be an image of the region of interest in the imaging image. That is, the corresponding foreground pixels can be identified based on the image of the region of interest in the imaging image.
  • optionally, the region of interest may be expanded by at least one pixel in a preset direction in the imaging image to obtain an expanded region; the image corresponding to the region of interest is then the image of the expanded region in the imaging image. On this basis, the image corresponding to the region of interest can include not only the image content in the region of interest, but also the image content adjacent to the region of interest. When, for some reason, the region of interest does not include the complete subject to be photographed, this avoids the problem that only the foreground pixels within the region of interest are recognized, so that not all foreground pixels of the subject to be photographed could be identified based on the region of interest.
  • the preset direction may include one or more of the eight directions shown in FIG. 7.
  • assuming that the region of interest in the imaging image is as shown in FIG. 8A,
  • and taking direction 1 of the eight directions as the preset direction as an example,
  • the expanded region obtained by expanding the region of interest by one pixel in the preset direction can be as shown in FIG. 8B.
  • when the preset direction includes all eight directions shown in FIG. 7, the expanded region can be as shown in FIG. 8C.
  • a small box in FIG. 8A, FIG. 8B, and FIG. 8C can represent a pixel.
  • the above takes the case where the number of pixels expanded in each direction is the same as an example; it is understandable that the number of pixels expanded in different directions can also be different.
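  • Expanding a rectangular region of interest, with clamping at the image borders, can be sketched as follows (axis-aligned directions only; expanding in two adjacent directions also covers the diagonal between them; the tuple layout is an assumption):

```python
def expand_roi(roi, image_w, image_h, pixels=1,
               directions=("left", "right", "up", "down")):
    """Expand a region of interest (x0, y0, x1, y1), given as inclusive pixel
    coordinates, by `pixels` in each requested direction, clamped so the
    expanded region stays inside the image."""
    x0, y0, x1, y1 = roi
    if "left" in directions:
        x0 = max(0, x0 - pixels)
    if "right" in directions:
        x1 = min(image_w - 1, x1 + pixels)
    if "up" in directions:
        y0 = max(0, y0 - pixels)
    if "down" in directions:
        y1 = min(image_h - 1, y1 + pixels)
    return (x0, y0, x1, y1)

# Expand a 2x2 region by one pixel in all directions inside a 10x10 image.
expanded = expand_roi((4, 4, 5, 5), 10, 10)  # (3, 3, 6, 6)
# At the image border the expansion is clamped:
clamped = expand_roi((0, 0, 1, 1), 10, 10)   # (0, 0, 2, 2)
```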
  • the image corresponding to the region of interest in the imaging image may be processed based on the neural network model to identify the foreground pixels corresponding to the region of interest in the imaging image.
  • for example, the image corresponding to the region of interest may be input to the pre-trained first neural network model to obtain an output result, the output result including the confidence that each pixel in the image corresponding to the region of interest is a foreground pixel; according to the output result, the foreground pixels in the image corresponding to the region of interest are determined to obtain the foreground pixels corresponding to the region of interest in the imaging image.
  • alternatively, the image corresponding to the region of interest may be input to the pre-trained second neural network model to obtain an output result, the output result including the confidence that each pixel in the image corresponding to the region of interest belongs to each foreground category of at least one foreground category; according to the output result, the pixels of each foreground category in the image corresponding to the region of interest are determined to obtain the foreground pixels corresponding to the region of interest in the imaging image.
  • Step 603 Determine at least a part of the area occupied by the foreground pixels in the imaging image as an area to be focused in the imaging image.
  • step 603 is similar to step 202, and will not be repeated here.
  • Step 604 Adjust the lens parameters of the image acquisition device according to the area to be focused to adjust the clarity of the object corresponding to the area to be focused in the imaged image of the image acquisition device, so that the image acquisition device can focus
  • the area to be focused corresponds to an object.
  • step 604 is similar to step 203 and step 404, and will not be repeated here.
  • in this embodiment, the imaging image is acquired by the image acquisition device, the region of interest in the imaging image is determined, and a preset algorithm is used to process the image corresponding to the region of interest in the imaging image to identify the foreground pixels corresponding to the region of interest in the imaging image, so that the focus of the image acquisition device can be adjusted according to the area occupied by the foreground pixels corresponding to the region of interest in the imaging image.
  • since the foreground pixels corresponding to the region of interest usually correspond to an object of interest, adjusting the focus of the acquisition device based on the area occupied by those foreground pixels means the subject focused on can be the object of interest, thereby ensuring the accuracy of the subject to be photographed.
  • the method may further include: prompting the user of the area to be focused on in the shooting interface, so that the user can learn the currently focused area.
  • the imaged image may be displayed to the user, and the area to be focused may be marked in the imaged image.
  • the user may be prompted with the area to be focused in the manner shown in FIG. 9, and the area framed by the black frame in the imaging image in FIG. 9 may indicate the area to be focused.
  • the area to be focused may also be prompted to the user in other ways, which is not limited in this application.
  • in some embodiments, the user may perform an adjustment operation on the prompted area to be focused, and the adjustment operation can be flexibly implemented according to requirements.
  • the adjustment operation includes one or more of the following: a position adjustment operation, a shape adjustment operation, or a size adjustment operation.
  • the user can adjust the focus area of the image acquisition device as needed, so that the image acquisition device can be based on the adjusted area to be focused by the user. Focusing on the focus area is conducive to improving the flexibility of focusing.
  • FIG. 10 is a schematic flowchart of a focusing method provided by yet another embodiment of this application. Based on the foregoing embodiments, this embodiment mainly describes a specific implementation in which the focusing method is applied to power inspection by a drone. An image acquisition device is provided on the drone and is used to shoot images including the power equipment to be inspected during the power inspection. As shown in FIG. 10, the method of this embodiment may include:
  • Step 101: Collect an imaging image through the image acquisition device, and use a preset algorithm to identify the foreground pixels in the imaging image.
  • In this step, the power equipment to be inspected may serve as the subject to be photographed, and may include, for example, electric wires, utility poles, and the solar panels of photovoltaic power stations.
  • Of course, in other embodiments the power equipment to be inspected may also be other equipment, which is not limited in this application.
  • Step 102: Determine at least a part of the area occupied by the foreground pixels in the imaging image as the area to be focused corresponding to the power equipment to be inspected.
  • For example, whether the current foreground pixels are pixels corresponding to the power equipment to be inspected may be determined directly from the shape they form. For instance, when the shape formed by the current foreground pixels is linear, it can be determined that they are pixels corresponding to a wire to be inspected. There is also the situation in which some foreground pixels do not belong to the power equipment to be inspected; interfering foreground pixels can then be eliminated by similar means, for example by retaining the foreground pixels that compose the preset shape and deleting those that do not.
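As one illustration of the shape test described above, line-shaped foreground components (e.g. wires) can be separated from compact interfering blobs by labelling connected components and keeping only elongated ones. The 4-connectivity and the bounding-box aspect-ratio threshold below are assumptions made for this sketch, not details from the application:

```python
from collections import deque

def keep_line_shaped(mask, min_aspect=4.0):
    """Keep only foreground components whose bounding box is elongated
    (aspect ratio >= min_aspect), as a stand-in for the 'preset shape'
    test that retains wire-like foreground pixels and discards the rest.
    `mask` is a list of lists of 0/1; returns a filtered copy."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                # Flood-fill one 4-connected component.
                comp, queue = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                ys = [p[0] for p in comp]
                xs = [p[1] for p in comp]
                dy = max(ys) - min(ys) + 1
                dx = max(xs) - min(xs) + 1
                if max(dy, dx) / min(dy, dx) >= min_aspect:
                    for y, x in comp:
                        out[y][x] = 1
    return out

# Demo: a 1x8 horizontal run (wire-like) plus a 2x2 blob (interference).
mask = [[0] * 10 for _ in range(6)]
for x in range(1, 9):
    mask[1][x] = 1
for y in (3, 4):
    for x in (2, 3):
        mask[y][x] = 1
filtered = keep_line_shaped(mask)
```

On the sample mask, the 1×8 horizontal run survives the filter while the 2×2 blob is discarded.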
  • If the foreground pixels in the imaging image are pixels corresponding to the power equipment to be inspected, that is, the power equipment serves as the foreground of the imaging image, then the area to be focused corresponding to the power equipment can be determined based on those foreground pixels.
  • If the foreground pixels in the imaging image are not pixels corresponding to the power equipment to be inspected, that is, the power equipment does not serve as the foreground of the imaging image, then the area occupied by the foreground pixels cannot be used to focus on the equipment, and so the area to be focused corresponding to the power equipment cannot be determined from those pixels. In that case, focus control can be performed by the method of step 101 to step 103 on the basis of a new imaging image used for the image acquisition device to focus.
  • Specifically, a preset algorithm can be used to identify the foreground pixels in the new imaging image and to determine whether they are pixels corresponding to the power equipment to be inspected. If they are, the area they occupy in the new imaging image is determined as the area to be focused for the power equipment, and the lens parameters of the image acquisition device are adjusted according to that area, so as to adjust the sharpness of the power equipment in the imaging image and thereby focus on it.
  • Optionally, in order for the image acquisition device to obtain a new imaging image with the power equipment as its foreground, the attitude of the drone and/or of the pan/tilt carrying the image acquisition device can be controlled to change the field of view of the device, and imaging images are acquired continuously until a new imaging image is obtained whose foreground pixels correspond to the power equipment to be inspected.
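The retry behaviour described above — adjust the drone or gimbal attitude, re-capture, and stop once the foreground pixels correspond to the power equipment — amounts to a simple control loop. All three callables below (`capture`, `is_device_foreground`, `adjust_view`) are hypothetical stand-ins for the camera, the classifier, and the attitude controller; the application does not name such interfaces:

```python
def acquire_device_foreground(capture, is_device_foreground, adjust_view, max_tries=10):
    """Keep adjusting the drone/gimbal attitude and re-capturing until the
    foreground of the frame corresponds to the power equipment under
    inspection; give up after max_tries attempts."""
    for _ in range(max_tries):
        frame = capture()
        if is_device_foreground(frame):
            return frame
        adjust_view()  # change the field of view and try again
    return None

# Demo with fake stand-ins: the third captured frame finally has the
# power equipment ("wire") as its foreground.
frames = iter(["sky", "tree", "wire"])
moves = []
frame = acquire_device_foreground(
    capture=lambda: next(frames),
    is_device_foreground=lambda f: f == "wire",   # placeholder classifier
    adjust_view=lambda: moves.append("adjust"),   # placeholder attitude change
)
```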
  • In this step, since the power equipment to be inspected serves as the foreground of the imaging image, the area occupied by the foreground pixels is the area corresponding to the power equipment in the imaging image.
  • Moreover, because images of the power equipment must be taken during power inspection, the area to be focused can be used to focus on the power equipment, so that the equipment is imaged clearly and the image acquisition device can capture a clear picture of it.
  • Step 103: Adjust the lens parameters of the image acquisition device according to the area to be focused corresponding to the power equipment to be inspected, so as to adjust the clarity of the power equipment in the imaging image, so that the image acquisition device focuses on the power equipment.
  • In this step, for example, step 103 may specifically include: adjusting the lens parameters of the image acquisition device according to the area to be focused until the quality of the image of that area meets a certain condition.
  • For a detailed description of adjusting the lens parameters until the image quality of the area to be focused meets a certain condition, refer to the relevant description of the foregoing embodiments, which will not be repeated here.
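One common way to make "adjust the lens parameters until the image quality of the area to be focused meets a certain condition" concrete is contrast-detection autofocus: sweep the lens position and stop when a sharpness measure of the focus region (here, the variance of a discrete Laplacian) exceeds a threshold. This is only a sketch under that assumption; the application does not commit to a particular quality measure:

```python
import numpy as np

def sharpness(region):
    """Variance of a discrete Laplacian - a common contrast-based focus measure."""
    lap = (-4 * region[1:-1, 1:-1]
           + region[:-2, 1:-1] + region[2:, 1:-1]
           + region[1:-1, :-2] + region[1:-1, 2:])
    return float(lap.var())

def focus(render, positions, threshold):
    """Sweep hypothetical lens positions, render the focus region at each,
    and stop once the sharpness measure meets the required condition;
    fall back to the sharpest position seen if none qualifies."""
    best_pos, best_score = None, -1.0
    for p in positions:
        s = sharpness(render(p))
        if s >= threshold:
            return p
        if s > best_score:
            best_pos, best_score = p, s
    return best_pos

def render(p):
    """Hypothetical renderer: the focus region is a checkerboard whose
    contrast falls off as the lens position moves away from the in-focus
    position 5 (a crude stand-in for defocus blur)."""
    base = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
    return base / (1 + abs(p - 5))

chosen = focus(render, range(10), threshold=10.0)
```

In the toy renderer, contrast peaks at position 5, so the sweep stops there.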
  • In this embodiment, an imaging image is collected by the image acquisition device, the foreground pixels in it are recognized by a preset algorithm, at least part of the area they occupy is determined as the area to be focused corresponding to the power equipment to be inspected, and the lens parameters of the image acquisition device are adjusted according to that area to adjust the clarity of the power equipment in the imaging image. In this way, during power inspection the focus of the image acquisition device can be adjusted according to the area occupied by the power equipment in the imaging image, so that the device focuses on the power equipment, which improves the accuracy of focusing and helps improve the clarity of the photographed equipment.
  • On the basis of the embodiment shown in FIG. 10, the following steps may be included before step 101: controlling the drone to fly to a target waypoint where images need to be taken; and adjusting, according to the inspection parameters corresponding to the target waypoint, the attitude of the drone and/or of the pan/tilt carrying the image acquisition device, so that the power equipment to be inspected can serve as the foreground in the current imaging of the device. On this basis, an imaging image can be obtained in which the power equipment is the foreground and which is used for the image acquisition device to focus.
  • Optionally, the drone may automatically fly to the target waypoint according to a cruise route, or may be manually controlled by the user, through a control device, to fly to the target waypoint.
  • Since the image acquisition device is mounted on the drone, the attitude of the drone affects the field of view of the device, so adjusting the attitude of the drone can make the power equipment to be inspected serve as the foreground in the current imaging.
  • Optionally, the image acquisition device may be mounted on the drone through a pan/tilt.
  • The pan/tilt can be used to change the orientation of the image acquisition device and thereby its field of view, so the attitude of the pan/tilt also affects the field of view; adjusting the attitude of the pan/tilt can likewise make the power equipment to be inspected serve as the foreground in the current imaging.
  • Optionally, after step 103 the method may further include: shooting an image including the power equipment to be inspected when the image acquisition device is focused on it. On this basis, a clear image of the power equipment can be stored, so that subsequent fault detection can be performed on the equipment based on that image.
  • FIG. 11 is a schematic structural diagram of a focusing device provided by an embodiment of this application.
  • the device 110 may include a processor 111 and a memory 112.
  • The memory 112 is used to store program code;
  • The processor 111 calls the program code and, when the program code is executed, is configured to perform the following operations:
  • collecting an imaging image through an image acquisition device, and using a preset algorithm to identify the foreground pixels in the imaging image;
  • determining at least a part of the area occupied by the foreground pixels in the imaging image as the area to be focused in the imaging image;
  • adjusting, according to the area to be focused, the lens parameters of the image acquisition device, so as to adjust the clarity of the object corresponding to the area to be focused in the imaged image, so that the image acquisition device focuses on that object.
  • The focusing device provided in this embodiment can be used to implement the technical solutions of the method embodiments of FIG. 2, FIG. 4, and FIG. 6; its implementation principles and technical effects are similar to those of the method embodiments and will not be repeated here.
  • FIG. 12 is a schematic structural diagram of an unmanned aerial vehicle provided by an embodiment of the application.
  • As shown in FIG. 12, the drone 120 may include: a fuselage 121, a power system 122 provided on the fuselage 121, an image acquisition device 123, and a focusing device 124;
  • the power system 122 is used to provide power for the drone;
  • the image acquisition device 123 is used to shoot an image including the power equipment to be inspected during the power inspection of the drone;
  • the focusing device 124 includes a memory and a processor
  • the memory is used to store program code
  • The processor calls the program code and, when the program code is executed, is configured to perform the following operations:
  • collecting an imaging image through the image acquisition device, and using a preset algorithm to identify the foreground pixels in the imaging image;
  • determining at least a part of the area occupied by the foreground pixels as the area to be focused corresponding to the power equipment to be inspected;
  • adjusting, according to that area, the lens parameters of the image acquisition device, so as to adjust the clarity of the power equipment to be inspected in the imaging image, so that the image acquisition device focuses on the power equipment.
  • Optionally, the drone 120 may further include a pan/tilt 125, and the image acquisition device 123 may be mounted on the fuselage 121 through the pan/tilt 125.
  • Of course, besides the devices listed above, the drone may also include other components or devices, which are not listed here one by one.
  • A person of ordinary skill in the art can understand that all or part of the steps of the foregoing method embodiments can be implemented by hardware instructed by a program.
  • The aforementioned program can be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.


Abstract

A focusing method, device, and apparatus. The method includes: collecting an imaging image through an image acquisition device, and using a preset algorithm to identify the foreground pixels in the imaging image; determining at least a part of the area occupied by the foreground pixels in the imaging image as the area to be focused corresponding to the power equipment to be inspected; and adjusting the lens parameters of the image acquisition device according to that area, so as to adjust the clarity of the power equipment to be inspected in the imaging image of the image acquisition device. This application enables the image acquisition device to focus on the power equipment to be inspected, improves the accuracy of focusing, and helps improve the clarity of the photographed power equipment.

Description

对焦方法、装置及设备 技术领域
本申请涉及拍摄技术领域,尤其涉及一种对焦方法、装置及设备。
背景技术
对焦是指通过相机对焦机构变动物距和像距的位置,使被拍物成像清晰的过程。对焦可以包括自动对焦和手动对焦。
目前,对于自动对焦,需要先选定目标区域,然后根据该目标区域进行对焦。通常,自动对焦选定的目标区域是矩形区域,以中心对焦为例,可以选定画面中心的矩形区域为目标区域。然而,由于被拍物并非固定为矩形,因此选定的矩形目标区域中除了包括被拍物例如人,还可能包括背景例如建筑物、树木。
因此,上述根据矩形目标区域进行对焦的方式,存在对焦准确度较低的问题。
发明内容
本申请实施例提供一种对焦方法、装置及设备,用以解决现有技术中根据矩形目标区域进行对焦的方式,存在对焦准确度较低的问题。
第一方面,本申请实施例提供一种对焦方法,应用于无人机进行电力巡检,所述无人机上设有图像获取装置,所述图像获取装置用于在所述无人机进行电力巡检的过程中拍摄包括待巡检电力设备的图像;所述方法包括:
通过图像获取装置采集成像图像,采用预设的算法识别所述成像图像中的前景像素;
将所述成像图像中所述前景像素所占的至少部分区域,确定为所述成像 图像中与所述待巡检电力设备对应的待对焦区域;
根据所述成像图像中与所述待巡检电力设备对应的待对焦区域,调整所述图像获取装置的镜头参数,以调整所述待巡检电力设备在所述图像获取装置的成像图像中的清晰度,以便所述图像获取装置对焦所述待巡检电力设备。
第二方面,本申请实施例提供一种对焦方法,包括:
通过图像获取装置采集成像图像,采用预设的算法识别所述成像图像中的前景像素;
将所述成像图像中所述前景像素所占的至少部分区域,确定为所述成像图像中的待对焦区域;
根据所述待对焦区域,调整所述图像获取装置的镜头参数,以调整所述待对焦区域对应物体在所述图像获取装置的成像图像中的清晰度,以便所述图像获取装置对焦所述待对焦区域对应物体。
第三方面,本申请实施例提供一种无人机,所述无人机包括机身、设置于所述机身上的动力系统、图像获取装置和对焦装置;
所述动力系统,用于为所述无人机提供动力;
所述图像获取装置,用于在所述无人机进行电力巡检的过程中拍摄包括待巡检电力设备的图像;
所述对焦装置包括存储器和处理器;
所述存储器,用于存储程序代码;
所述处理器,调用所述程序代码,当程序代码被执行时,用于执行以下操作:
通过图像获取装置采集成像图像,采用预设的算法识别所述成像图像中的前景像素;
将所述成像图像中所述前景像素所占的至少部分区域,确定为所述成像图像中与所述待巡检电力设备对应的待对焦区域;
根据所述成像图像中与所述待巡检电力设备对应的待对焦区域,调整所述图像获取装置的镜头参数,以调整所述待巡检电力设备在所述图像获取装置的成像图像中的清晰度,以便所述图像获取装置对焦所述待巡检电力设备。
第四方面,本申请实施例提供一种对焦装置,所述装置包括:存储器和处理器;
所述存储器,用于存储程序代码;
所述处理器,调用所述程序代码,当程序代码被执行时,用于执行以下操作:
通过图像获取装置采集成像图像,采用预设的算法识别所述成像图像中的前景像素;
将所述成像图像中所述前景像素所占的至少部分区域,确定为所述成像图像中的待对焦区域;
根据所述待对焦区域,调整所述图像获取装置的镜头参数,以调整所述待对焦区域对应物体在所述图像获取装置的成像图像中的清晰度,以便所述图像获取装置对焦所述待对焦区域对应物体。
第五方面,本申请实施例提供一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序包含至少一段代码,所述至少一段代码可由计算机执行,以控制所述计算机执行上述第一方面任一项所述的方法。
第六方面,本申请实施例提供一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序包含至少一段代码,所述至少一段代码可由计算机执行,以控制所述计算机执行上述第二方面任一项所述的方法。
第七方面,本申请实施例提供一种计算机程序,当所述计算机程序被计算机执行时,用于实现上述第一方面任一项所述的方法。
第八方面,本申请实施例提供一种计算机程序,当所述计算机程序被计算机执行时,用于实现上述第二方面任一项所述的方法。
本申请实施例提供一种对焦方法、装置及设备,通过图像获取装置采集成像图像,采用预设的算法识别成像图像中的前景像素,将成像图像中前景像素所占的至少部分区域确定为成像图像中与待巡检电力设备对应的待对焦区域,并根据成像图像中与待巡检电力设备对应的待对焦区域,调整图像获取装置的镜头参数,以调整待巡检电力设备在图像获取装置的成像图像中的清晰度,实现了在电力巡检过程中能够根据成像图像中待巡检电力设备所占区域调整图像获取装置的对焦,使得图像获取装置能够针对待巡检电力设备对焦,提高了对焦的准确度,有利于提高所拍摄的待巡检电力设备的清晰度。
附图说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作一简单地介绍,显而易见地,下面描述中的附图是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本申请实施例提供的对焦方法的应用场景示意图;
图2为本申请一实施例提供的对焦方法的流程示意图;
图3A为本申请实施例提供的前景像素的示意图;
图3B为本申请实施例提供的待对焦区域的示意图;
图4为本申请另一实施例提供的对焦方法的流程示意图;
图5为本申请实施例提供的神经网络模型的结构示意图;
图6为本申请又一实施例提供的对焦方法的流程示意图;
图7为本申请实施例提供的预设方向的示意图;
图8A为本申请实施例提供的感兴趣区域的示意图;
图8B和图8C为本申请实施例提供的扩大后区域的示意图;
图9为本申请实施例提供的向用户提示待对焦区域的示意图;
图10为本申请又一实施例提供的对焦方法的流程示意图;
图11为本申请一实施例提供的对焦装置的结构示意图;
图12为本申请一实施例提供的无人机的结构示意图。
具体实施方式
为使本申请实施例的目的、技术方案和优点更加清楚,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请实施例提供的对焦方法可以应用于如图1所示的对焦系统10,对焦系统10可以包括图像获取装置11和对焦装置12。其中,图像获取装置11用于采集图像;对焦装置12可以从图像获取装置获得成像图像,并基于成像图像 采用本申请实施例提供的对焦方法进行处理,以调整待拍摄主体在图像获取装置11的成像图像中的清晰度,使得图像获取装置11能够针对待拍摄主体合焦。所述图像获取装置11包括可见光摄像机、红外相机等。
需要说明的是,对焦系统10可以应用于任何需要进行对焦控制的场景。示例性的,对焦系统10可以应用于数码相机、提供拍摄功能的智能手机、无人机等。
本申请实施例提供的对焦方法,通过图像获取装置采集成像图像,采用预设的算法识别成像图像中的前景像素,将成像图像中前景像素所占的至少部分区域确定为成像图像中的待对焦区域,并根据待对焦区域调整图像获取装置的镜头参数,以调整待对焦区域对应物体在图像获取装置的成像图像中的清晰度,实现了根据成像图像中前景像素所占区域调整图像获取装置的对焦。
下面结合附图,对本申请的一些实施方式作详细说明。在不冲突的情况下,下述的实施例及实施例中的特征可以相互组合。
图2为本申请一实施例提供的对焦方法的流程示意图,本实施例的执行主体可以为对焦装置。如图2所示,本实施例的方法可以包括:
步骤201,通过图像获取装置采集成像图像,采用预设的算法,识别获得的成像图像中的前景像素。
本步骤中,所述成像图像用于对所述图像获取装置进行对焦控制,所述成像图像中可以包括前景物体和背景物体,其中前景物体可以认为是待拍摄主体,根据拍摄需求的不同,待拍摄主体例如可以为人物、动物、电力设备等。
图像是由一个个的像素组成,成像图像也不例外。成像图像中的像素可以基于成像图像中的前景物体和背景物体划分为前景像素和背景像素。其中,成像图像中的前景像素可以是指成像图像中前景物体所占的像素,成像图像中的背景像素可以是指成像图像中背景物体所占的像素。
具体的,可以采用预设的算法,从所述成像图像的像素中识别出前景物体对应的前景像素。示例性的,可以从所述成像图像的所有像素中识别出前景物体对应的前景像素,或者,可以从所述成像图像的部分区域对应的像素中识别出前景物体对应的前景像素。需要说明的是,对于识别成像图像中前景像素所采用的具体算法,可以根据需求灵活设计。可选的,可以采用神经 网络模型识别前景像素,有利于降低算法设计难度,提高识别准确性。
步骤202,将所述成像图像中所述前景像素所占的至少部分区域,确定为所述成像图像中的待对焦区域。
本步骤中,所述待对焦区域可以是指图像获取装置对焦所针对的区域。由于前景物体通常为待拍摄主体,因此所述成像图像中的待对焦区域可以与所述成像图像中前景像素所占的区域一致。
基于此,可以根据所述成像图像中的前景像素,确定待对焦区域。具体的,可以将所述成像图像中所述前景像素所占的区域确定为所述成像图像中的待对焦区域。
以成像图像中的前景像素和背景像素如图3A所示为例,待对焦区域可以如图3B中斜线填充区域所示。图3A和图3B中一个小方框可以表示一个像素,小方框中的数字0可以表示背景像素,小方框中的数字1可以表示前景像素。
步骤203,根据所述待对焦区域,调整所述图像获取装置的镜头参数,以调整所述待对焦区域对应物体在所述图像获取装置的成像图像中的清晰度,以便所述图像获取装置对焦所述待对焦区域对应物体。
本步骤中,基于所述图像获取装置需要针对所述待对焦区域对焦的需求,调整所述图像获取装置的镜头参数。其中,对焦也称为调焦,其并不是改变镜头的焦距而是改变像距,调整成像面和镜头距离,使成像面到光心的距离等于像距,使物体可以清晰的成像到胶片(感光元件)上。调整图像获取装置使待拍摄主体成像清晰的过程就是对焦过程。具体的,可以基于所述待对焦区域实现对焦过程,即调整图像获取装置的镜头参数使待对焦区域对应物体(即前景像素对应物体)成像清晰,其中,待对焦区域对应物体成像清晰可以理解为图像获取装置对焦所述待对焦区域对应物体,由于前景像素对应物体通常为待拍摄主体,从而能够使得待拍摄主体成像清晰。
根据所述待对焦区域所调整的镜头参数具体可以为影响所述图像获取装置成像面和镜头距离的参数。在一个实施例中,对焦环可以用于改变成像最清晰的平面到镜头的距离,基于此根据待对焦区域所调整的镜头参数具体可以为对焦环的转动方向和转动圈数。
本实施例中,通过图像获取装置采集成像图像,采用预设的算法识别成像图像中的前景像素,将成像图像中前景像素所占的至少部分区域确定为成 像图像中的待对焦区域,并根据待对焦区域调整图像获取装置的镜头参数,以调整待对焦区域对应物体在图像获取装置的成像图像中的清晰度,实现了根据成像图像中前景像素所占区域调整图像获取装置的对焦,由于待拍摄主体通常是作为镜头的前景,且前景像素所占区域中不包括背景信息,因此使得图像获取装置能够针对前景像素对应物体对焦,与相关技术中进行对焦控制所基于的图像区域中包括背景信息相比,避免了由于背景信息的影响导致对焦准确度较低的问题,提高了对焦的准确度,从而提高了所拍摄物体的清晰度。
图4为本申请另一实施例提供的对焦方法的流程示意图,本实施例在图2所示实施例的基础上,主要描述了一种可选的实现方式。如图4所示,本实施例的方法可以包括:
步骤401,通过图像获取装置采集成像图像,将所述成像图像输入预先训练好的第一神经网络模型,得到第一输出结果,所述第一输出结果包括所述成像图像中各像素是前景像素的置信度。
本步骤中,所述第一神经网络模型具体可以为卷积神经网络(Convolutional Neural Networks,CNN)模型。所述第一神经网络模型的结构例如可以如图5所示。如图5所示,所述第一神经网络模型可以包括多个计算节点,每个计算节点中可以包括卷积(Conv)层、批量归一化(Batch Normalization,BN)以及激活函数ReLU,计算节点之间可以采用跳跃连接(Skip Connection)方式连接,K×H×W的输入数据可以输入所述第一神经网络模型,经过所述第一神经网络模型处理后,可以获得C×H×W的输出数据。其中,K可以表示输入通道的个数,K可以等于3,分别对应红(R,red)、绿(G,green)和蓝(B,blue)共三个通道;H可以表示成像图像的高,W可以表示成像图像的宽,C等于2可以表示输出通道个数为2。
所述第一神经网络模型的第一输出结果可以包括2个输出通道分别输出的置信度特征图,这2个输出通道可以与2个对象类别一一对应,这2个对象类别可以分别为前景类别和背景类别,单个对象类别的置信度特征图的像素值用于表征像素是所述对象类别的概率。
假设第一输出结果可以包括置信度特征图1和置信度特征图2,且置信度特征图1对应前景类别,置信度特征图2对应背景类别,则置信度特征图1中像素位置(100,100)的像素值是90,可以表示像素位置(100,100)的像素其是 前景像素的概率为90%,置信度特征图2中像素位置(100,80)的像素值是20,可以表示像素位置(100,80)的像素其是前景像素的概率为20%。
步骤402,根据所述第一输出结果,确定所述成像图像中的前景像素。
本步骤中,示例性的,可以基于所述第一输出结果,将所述成像图像中是前景像素的置信度大于预设阈值的像素确定为前景像素。例如,可以基于前述置信度特征图1,将置信度特征图1中是前景像素的概率大于80%的像素确定为前景像素。
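The thresholding step just described — treating a pixel as foreground when its confidence of being foreground exceeds a preset threshold (80% in the worked example) — reduces to a single comparison over the confidence feature map. A minimal sketch follows; the 0–100 scale and the toy values mirror the worked example in the text and are illustrative only:

```python
import numpy as np

# Toy confidence feature map on a 0-100 scale, as in the worked example
# (a value of 90 means a 90% probability of being a foreground pixel).
confidence = np.array([[90, 20, 85],
                       [30, 95, 10],
                       [82, 40, 79]])

# Pixels whose foreground confidence exceeds the preset threshold of 80%
# are determined to be foreground pixels.
foreground = confidence > 80
```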
对于步骤401和步骤402,可替换的,可以基于预先训练好的第二神经网络模型识别成像图像中的前景像素。示例性的,步骤401和步骤402可以替换为如下步骤A和步骤B。
步骤A,通过图像获取装置采集成像图像,将所述成像图像输入预先训练好的第二神经网络模型,得到第二输出结果,所述第二输出结果包括所述成像图像中各像素是至少一个前景类别中各前景类别像素的置信度;
步骤B,根据所述第二输出结果,确定所述成像图像中各前景类别的像素,以得到所述成像图像中的前景像素。
其中,所述第二神经网络模型具体可以为CNN模型,其结构与图5所示的第一神经网络模型的结构类似,区别主要在于输出通道的个数C大于2。
所述第二神经网络模型的第二输出结果可以包括C个输出通道分别输出的置信度特征图,C大于2,这C个输出通道可以与C个对象类别一一对应,这C个对象类别具体可以包括多个特定前景类别和背景类别,单个对象类别的置信度特征图的像素值用于表征像素是所述对象类别的概率。
假设第二输出结果可以包括置信度特征图3、置信度特征图4和置信度特征图5,且置信度特征图3对应特定前景类别1,置信度特征图4对应特定前景类别2,置信度特征图5对应背景类别,则置信度特征图3中像素位置(100,100)的像素值是90,可以表示像素位置(100,100)的像素其是特定前景类别1的前景像素的概率为90%,置信度特征图4中像素位置(100,60)的像素值是85,可以表示像素位置(100,60)的像素其是特定前景类别2的前景像素的概率为85%,置信度特征图5中像素位置(100,80)的像素值是20,可以表示像素位置(100,80)的像素其是前景像素的概率为20%。
示例性的,可以基于所述第二输出结果,将所述成像图像中是前景类别像素的置信度大于预设阈值的像素确定为前景像素。例如,可以基于前述置 信度特征图3和置信度特征图4,将置信度特征图3和置信度特征图4中是前景类别像素的概率大于80%的像素确定为前景像素。
通过采用第二神经网络模型识别前景像素,能够实现识别成像图像中特定类别的前景像素,以实现基于特定类别的前景像素所占的区域进行对焦,有利于提高识别前景像素的针对性,从而提高对焦的准确性。
步骤403,将所述成像图像中所述前景像素所占的至少部分区域,确定为所述成像图像中的待对焦区域。
需要说明的是,步骤403与步骤202类似,在此不再赘述。
步骤404,根据所述待对焦区域,调整所述图像获取装置的镜头参数,以调整所述待对焦区域对应物体在所述图像获取装置的成像图像中的清晰度,以便所述图像获取装置对焦所述待对焦区域对应物体。
本步骤中,示例性的,可以根据所述待对焦区域,调整所述图像获取装置的镜头参数,直至所述待对焦区域的图像其质量满足一定条件。由于图像的清晰度是衡量图像质量优劣的重要指标,因此待对焦区域的图像的质量满足一定条件,可以表示待对焦区域的图像的清晰度满足一定条件。在待对焦区域的图像的清晰度满足一定条件情况下,可以表示待对焦区域对应物体成像清晰,即图像获取装置对焦所述待对焦区域对应物体。
需要说明的是,该一定条件可以根据对于图像质量的具体要求进行灵活实现,本申请对此不作限定。
本实施例中,通过将所述成像图像输入预先训练好的第一神经网络模型得到第一输出结果,根据所述第一输出结果确定成像图像中的前景像素,将成像图像中前景像素所占的至少部分区域确定为成像图像中的待对焦区域,并根据待对焦区域调整图像获取装置的镜头参数,以调整待对焦区域对应物体在图像获取装置的成像图像中的清晰度,实现了基于神经网络模型确定成像图像中的前景像素,并基于前景像素所占区域调整图像获取装置的对焦。
图6为本申请又一实施例提供的对焦方法的流程示意图,本实施例在图2所示实施例的基础上,主要描述了另一种可选的实现方式。如图6所示,本实施例的方法可以包括:
步骤601,通过图像获取装置采集成像图像,确定所述成像图像中的感兴趣区域。
本步骤中,感兴趣区域(region of interest,ROI)可以是指机器视觉、图 像处理中,从被处理的图像以方框、圆、椭圆、不规则多边形等方式勾勒出需要处理的区域。示例性的,可以基于各种算子(Operator)和函数自动求得所述成像图像中的感兴趣区域。所述感兴趣区域例如可以为目标跟踪算法中跟踪框对应区域,当然,在其他实施例中,感兴趣区域还可以为其他类型区域,本申请对此不作限定。需要说明的是,对于确定感兴趣区域的具体方式,可以根据需求灵活实现,本申请对此不作限定。
步骤602,采用预设的算法处理所述成像图像中所述感兴趣区域对应的图像,以识别所述成像图像中所述感兴趣区域对应的前景像素。
本步骤中,可选的,所述感兴趣区域对应的图像可以为所述成像图像中所述感兴趣区域的图像。即,可以基于成像图像中感兴趣区域的图像识别对应的前景像素。
或者,在确定成像图像中的感兴趣区域之后,可以在所述成像图像中,将所述感兴趣区域沿预设方向扩大至少一个像素,得到扩大后区域;所述感兴趣区域对应的图像为所述成像图像中所述扩大后区域的图像。基于此,感兴趣区域对应的图像不但可以包括感兴趣区域中的图像内容,还可以包括感兴趣区域相邻的图像内容,在由于一定原因导致感兴趣区域中未能包括完整的待拍摄主体情况下,能够避免由于仅识别感兴趣区域的图像中的前景像素,导致基于感兴趣区域未能全部识别待拍摄主体的前景像素的问题。
其中,所述预设方向可以包括如图7所示的8个方向中的一个或多个。例如,假设成像图像中的感兴趣区域如图8A所示,且预设方向为8个方向中的方向1为例,将感兴趣区域沿预设方向扩大一个像素所得到的扩大后区域可以如图8B所示。又例如,假设成像图像中的感兴趣区域如图8A所示,且预设方向包括图7所示的8个方向为例,将感兴趣区域沿预设方向扩大一个像素所得到的扩大后区域可以如图8C所示。需要说明的是,图8A、图8B和图8C中一个小方框可以表示一个像素,图8C中以各方向扩大的像素数量相同为例,可以理解的是不同方向扩大的像素数量也可以不同。
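A minimal sketch of enlarging the region of interest by one pixel along preset directions (the eight unit directions of FIG. 7), assuming the ROI is given as a binary mask; the mask representation and direction encoding are assumptions of this sketch. With all eight directions, a single-pixel ROI grows into its 3×3 neighbourhood, analogous to the FIG. 8C example:

```python
import numpy as np

def expand_roi(roi, directions):
    """Expand a binary ROI mask by one pixel along the given (dy, dx)
    unit offsets, mirroring the 'enlarge the region of interest along
    preset directions' step."""
    out = roi.copy()
    h, w = roi.shape
    for dy, dx in directions:
        ys, xs = np.nonzero(roi)
        ys2, xs2 = ys + dy, xs + dx
        keep = (ys2 >= 0) & (ys2 < h) & (xs2 >= 0) & (xs2 < w)
        shifted = np.zeros_like(roi)
        shifted[ys2[keep], xs2[keep]] = 1
        out |= shifted
    return out

# The eight unit directions (cf. FIG. 7).
eight = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
roi = np.zeros((5, 5), dtype=int)
roi[2, 2] = 1
expanded = expand_roi(roi, eight)  # grows into a 3x3 block centred on (2, 2)
```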
类似的,步骤602中可以基于神经网络模型处理成像图像中感兴趣区域对应的图像,以识别所述成像图像中所述感兴趣区域对应的前景像素。可选的,可以将所述感兴趣区域对应的图像输入预先训练好的第一神经网络模型,得到输出结果,该输出结果包括所述感兴趣区域对应的图像中各像素是前景像素的置信度,并根据该输出结果,确定所述感兴趣区域对应的图像中的前景 像素,以得到所述成像图像中感兴趣区域对应的前景像素。或者,可以将所述感兴趣区域对应的图像输入预先训练好的第二神经网络模型,得到输出结果,该输出结果包括所述感兴趣区域对应的图像中各像素是至少一个前景类别中各前景类别像素的置信度,根据该输出结果,确定所述感兴趣区域对应的图像中各前景类别的像素,以得到所述成像图像中感兴趣区域对应的前景像素。
需要说明的是,基于第一神经网络模型或第二神经网络模型处理成像图像中感兴趣区域对应图像的实现方式,与前述图4所示实施例中基于第一神经网络模型或第二神经网络模型处理成像图像的具体方式类似,在此不再赘述。
步骤603,将所述成像图像中所述前景像素所占的至少部分区域,确定为所述成像图像中的待对焦区域。
需要说明的是,步骤603与步骤202类似,在此不再赘述。
步骤604,根据所述待对焦区域,调整所述图像获取装置的镜头参数,以调整所述待对焦区域对应物体在所述图像获取装置的成像图像中的清晰度,以便所述图像获取装置对焦所述待对焦区域对应物体。
需要说明的是,步骤604与步骤203、步骤404类似,在此不再赘述。
本实施例中,通过图像获取装置采集成像图像,确定成像图像中的感兴趣区域,采用预设的算法处理成像图像中感兴趣区域对应的图像以识别成像图像中感兴趣区域对应的前景像素,将成像图像中前景像素所占的至少部分区域确定为成像图像中的待对焦区域,并根据待对焦区域调整图像获取装置的镜头参数,以调整待对焦区域对应物体在图像获取装置的成像图像中的清晰度,实现了根据成像图像中感兴趣区域对应的前景像素所占区域调整图像获取装置的对焦,由于感兴趣区域中通常可以包括感兴趣的物体,因此感兴趣区域对应的前景像素通常为感兴趣物体对应像素,基于感兴趣区域对应的前景像素所占区域调整获取装置的对焦,能够实现对焦所针对的待拍摄主体为感兴趣的物体,从而能够确保待拍摄主体的准确性。
在上述实施例的基础上,可选的,还可以包括:在拍摄界面中向用户提示所述待对焦区域,以便用户能够获知当前对焦的区域。示例性的,可以向用户显示成像图像,并在成像图像中标注待对焦区域。例如,可以通过图9的 方式向用户提示待对焦区域,图9中的成像图像中黑色框所框出的区域可以表示待对焦区域。当然,在其他实施例中,也可以通过其他方式向用户提示待对焦区域,本申请对此不作限定。
在拍摄界面中向用户提示待对焦区域的基础上,可选的,还可以包括:基于所述用户针对所述待对焦区域的调整操作,得到调整后的待对焦区域;以及,根据所述调整后的待对焦区域,调整所述图像获取装置的镜头参数,直至所述待对焦区域的图像其质量满足一定条件。其中,所述调整操作可以根据需求灵活实现,示例性的,所述调整操作包括下述中的一种或多种:位置调整操作、形状调整操作或大小调整操作。
通过获取调整操作,并根据基于调整操作得到的调整后的待对焦区域调整图像获取装置的镜头参数,使得用户能够根据需要调整图像获取装置的对焦区域,以便图像获取装置能够基于用户调整后的待对焦区域对焦,有利于提高对焦的灵活性。
图10为本申请又一实施例提供的对焦方法的流程示意图,本实施例在前述实施例的基础上,主要描述了对焦方法应用于无人机电力巡检的一种具体实现方式,其中,无人机上设有图像获取装置,所述图像获取装置用于在所述无人机进行电力巡检的过程中拍摄包括待巡检电力设备的图像。如图10所示,本实施例的方法可以包括:
步骤101,通过图像获取装置采集成像图像,采用预设的算法识别所述成像图像中的前景像素。
本步骤中,待巡检电力设备可以作为待拍摄主体,待巡检电力设备例如可以包括电线、电线杆、光伏发电站的太阳电池板等。当然,在其他实施例中,待巡检电力设备还可以为其他设备,本申请对此不作限定。
需要说明的是,识别成像图像中前景像素的具体方式,可以参见前述实施例的相关描述,在此不再赘述。
步骤102,将所述成像图像中所述前景像素所占的至少部分区域,确定为所述成像图像中与所述待巡检电力设备对应的待对焦区域。
又例如,可以直接基于前景像素所构成的形状,来确定当前前景像素是否为待巡检电力设备对应像素。例如,当前景像素构成的形状为线型时,可以确定当前前景像素为待巡检电线对应的像素。还存在一种情况是,有部分前景像素不属于待巡检电力设备,这时可以通过相似的手段将干扰前景像素剔除。例如,将组成预设形状的前景像素保留,将未组成预设形状的前景像素删除。
若成像图像中的前景像素是待巡检电力设备对应的像素,即待巡检电力设备作为成像图像的前景,则表示可以基于所述前景像素确定成像图像中待巡检电力设备对应的待对焦区域。
若成像图像中前景像素不是待巡检电力设备对应的像素,即待巡检电力设备未作为成像图像的前景,则由于前景像素所占区域无法用于待巡检设备对焦,因此无法基于所述前景像素确定成像图像中待巡检电力设备对应的待对焦区域。之后,可以基于用于所述图像获取装置对焦的新成像图像,采用步骤101-步骤103的方法进行对焦控制,具体的,可以采用预设的算法识别所述新成像图像中的前景像素,确定所述新成像图像中的前景像素是否为待巡检电力设备对应像素,以及在所述新成像图像中的前景像素是待巡检电力设备对应像素情况下,将所述新成像图像中所述前景像素所占的区域,确定为所述新成像图像中针对所述待巡检电力设备的待对焦区域,根据所述新成像图像中针对所述待巡检电力设备的待对焦区域,调整所述图像获取装置的镜头参数,以调整所述待巡检电力设备在所述图像获取装置的成像图像中的清晰度,以便所述图像获取装置对焦所述待巡检电力设备。
可选的,为了使得图像获取装置能够获得待巡检电力设备作为前景的新成像图像,可以控制无人机和/或用于搭载图像获取装置的云台的姿态,以改变图像获取装置的视野范围,不断获取成像图像,直至能够得到前景像素为待巡检电力设备对应像素的新成像图像,以便于能够针对待巡检电力设备对焦。
本步骤中,由于待巡检电力设备是作为成像图像的前景,因此成像图像中前景像素所占的区域即为成像图像中待巡检电力设备对应的区域。并且,由于在电力巡检的过程中需要针对待巡检电力设备拍摄图像,因此所述待对焦区域可以用于待巡检电力设备的对焦,以便图像获取装置中待巡检电力设备能够成像清晰,从而使得图像获取装置能够拍摄到清晰的待巡检电力设备。
步骤103,根据所述成像图像中与所述待巡检电力设备对应的待对焦区域,调整所述图像获取装置的镜头参数,以调整所述待巡检电力设备在所述图像获取装置的成像图像中的清晰度,以便所述图像获取装置对焦所述待巡检电力设备。
本步骤中,示例性的,步骤103具体可以包括:根据所述待对焦区域,调整所述图像获取装置的镜头参数,直至所述待对焦区域的图像其质量满足一定条件。关于调整镜头参数直至待对焦区域的图像其质量满足一定条件的具体说明,可以参见前述实施例的相关描述,在此不再赘述。
本实施例中,通过图像获取装置采集成像图像,采用预设的算法识别成像图像中的前景像素,将成像图像中前景像素所占的至少部分区域确定为成像图像中与待巡检电力设备对应的待对焦区域,并根据成像图像中与待巡检电力设备对应的待对焦区域,调整图像获取装置的镜头参数,以调整待巡检电力设备在图像获取装置的成像图像中的清晰度,实现了在电力巡检过程中能够根据成像图像中待巡检电力设备所占区域调整图像获取装置的对焦,使得图像获取装置能够针对待巡检电力设备对焦,提高了对焦的准确度,有利于提高所拍摄的待巡检电力设备的清晰度。
在图10所示实施例的基础上,在步骤101之前还可以包括如下步骤:控制所述无人机飞行至需要拍摄图像的目标航点;根据所述目标航点对应的巡检参数,调整所述无人机的姿态和/或用于搭载所述图像获取装置的云台的姿态,以便在所述图像获取装置的当前成像中所述待巡检电力设备能够作为前景。基于此,使得能够获得所述待巡检电力设备作为前景且用于所述图像获取装置对焦的成像图像。
可选的,所述无人机可以根据巡航航线自动飞行至需要拍摄图像的目标航点,或者,所述无人机可以根据控制设备的控制,由用户手动控制飞行至需要拍摄图像的目标航点。
由于图像获取装置设置在无人机上,因此,无人机的姿态可以影响图像获取装置的视野范围,从而可以通过调整无人机的姿态使得在图像获取装置的当前成像中待巡检电力设备能够作为前景。
可选的,图像获取装置可以通过云台设置在无人机上,云台可以用于改变图像获取装置的朝向,从而改变图像获取装置的视野范围,因此云台的姿态也可以影响图像获取装置的视野范围,从而可以通过调整云台的姿态使得图像获取装置的当前成像中待巡检电力设备能够作为前景。
可选的,步骤103之后还可以包括:在所述图像获取装置对焦所述待巡检电力设备的情况下,拍摄包括所述待巡检电力设备的图像。基于此,可以存储待巡检电力设备的清晰图像,以便后续能够根据待巡检电力设备的清晰图 像进一步针对待巡检电力设备进行故障检测。
图11为本申请一实施例提供的对焦装置的结构示意图,如图11所示,该装置110可以包括:处理器111和存储器112。
所述存储器112,用于存储程序代码;
所述处理器111,调用所述程序代码,当程序代码被执行时,用于执行以下操作:
通过图像获取装置采集成像图像,采用预设的算法识别所述成像图像中的前景像素;
将所述成像图像中所述前景像素所占的至少部分区域,确定为所述成像图像中的待对焦区域;
根据所述待对焦区域,调整所述图像获取装置的镜头参数,以调整所述待对焦区域对应物体在所述图像获取装置的成像图像中的清晰度,以便所述图像获取装置对焦所述待对焦区域对应物体。
本实施例提供的对焦装置,可以用于执行前述图2、图4、图6方法实施例的技术方案,其实现原理和技术效果与方法实施例类似,在此不再赘述。
图12为本申请一实施例提供的无人机的结构示意图,如图12所示,该无人机120可以包括:机身121、设置于所述机身121上的动力系统122、图像获取装置123和对焦装置124;
所述动力系统122,用于为所述无人机提供动力;
所述图像获取装置123,用于在所述无人机进行电力巡检的过程中拍摄包括待巡检电力设备的图像;
所述对焦装置124包括存储器和处理器;
所述存储器,用于存储程序代码;
所述处理器,调用所述程序代码,当程序代码被执行时,用于执行以下操作:
通过图像获取装置采集成像图像,采用预设的算法识别所述成像图像中的前景像素;
将所述成像图像中所述前景像素所占的至少部分区域,确定为所述成像图像中与所述待巡检电力设备对应的待对焦区域;
根据所述成像图像中与所述待巡检电力设备对应的待对焦区域,调整所述图像获取装置的镜头参数,以调整所述待巡检电力设备在所述图像获取装 置的成像图像中的清晰度,以便所述图像获取装置对焦所述待巡检电力设备。
可选的,无人机120还可以包括云台125,图像获取装置123可以通过云台125设置在机身121上。当然,无人机除上述列出装置外,还可以包括其他元件或装置,这里不一一例举。
本领域普通技术人员可以理解:实现上述各方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成。前述的程序可以存储于一计算机可读取存储介质中。该程序在执行时,执行包括上述各方法实施例的步骤;而前述的存储介质包括:ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
最后应说明的是:以上各实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述各实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分或者全部技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (46)

  1. 一种对焦方法,应用于无人机进行电力巡检,其特征在于,所述无人机上设有图像获取装置,所述图像获取装置用于在所述无人机进行电力巡检的过程中拍摄包括待巡检电力设备的图像;所述方法包括:
    通过图像获取装置采集成像图像,采用预设的算法识别所述成像图像中的前景像素;
    将所述成像图像中所述前景像素所占的至少部分区域,确定为所述成像图像中与所述待巡检电力设备对应的待对焦区域;
    根据所述成像图像中与所述待巡检电力设备对应的待对焦区域,调整所述图像获取装置的镜头参数,以调整所述待巡检电力设备在所述图像获取装置的成像图像中的清晰度,以便所述图像获取装置对焦所述待巡检电力设备。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    确定所述成像图像中的感兴趣区域;
    所述采用预设的算法识别所述成像图像中的前景像素,包括:
    采用预设的算法处理所述成像图像中所述感兴趣区域对应的图像,以识别所述成像图像中所述感兴趣区域对应的前景像素。
  3. 根据权利要求2所述的方法,其特征在于,所述感兴趣区域对应的图像为所述成像图像中所述感兴趣区域的图像。
  4. 根据权利要求2所述的方法,其特征在于,所述方法还包括:
    在所述成像图像中,将所述感兴趣区域沿预设方向扩大至少一个像素,得到扩大后区域;
    所述感兴趣区域对应的图像为所述成像图像中所述扩大后区域的图像。
  5. 根据权利要求1-4任一项所述的方法,其特征在于,所述采用预设的算法识别所述成像图像中的前景像素,包括:
    将所述成像图像输入预先训练好的第一神经网络模型,得到第一输出结果,所述第一输出结果包括所述成像图像中各像素是前景像素的置信度;
    根据所述第一输出结果,确定所述成像图像中的前景像素。
  6. 根据权利要求5所述的方法,其特征在于,所述根据所述第一输出结果,确定所述成像图像中的前景像素,包括:
    根据所述第一输出结果,将所述成像图像中是前景像素的置信度大于预设阈值的像素确定为前景像素。
  7. 根据权利要求1-4任一项所述的方法,其特征在于,所述采用预设的算法识别所述成像图像中的前景像素,包括:
    将所述成像图像输入预先训练好的第二神经网络模型,得到第二输出结果,所述第二输出结果包括所述成像图像中各像素是至少一个前景类别中各前景类别像素的置信度;
    根据所述第二输出结果,确定所述成像图像中各前景类别的像素,以得到所述成像图像中的前景像素。
  8. 根据权利要求1-4任一项所述的方法,其特征在于,所述根据所述成像图像中与所述待巡检电力设备对应的待对焦区域,调整所述图像获取装置的镜头参数,以调整所述待巡检电力设备在所述图像获取装置的成像图像中的清晰度,使所述图像获取装置针对所述待巡检电力设备合焦,包括:
    根据所述待对焦区域,调整所述图像获取装置的镜头参数,直至所述待对焦区域的图像其质量满足一定条件。
  9. 根据权利要求1-4任一项所述的方法,其特征在于,所述方法还包括:
    控制所述无人机飞行至需要拍摄图像的目标航点;
    根据所述目标航点对应的巡检参数,调整所述无人机的姿态和/或用于搭载所述图像获取装置的云台的姿态,以便在所述图像获取装置的当前成像中所述待巡检电力设备能够作为前景。
  10. 根据权利要求1-4任一项所述的方法,其特征在于,所述方法还包括:在所述图像获取装置对焦所述待巡检电力设备的情况下,拍摄包括所述待巡检电力设备的图像。
  11. 一种对焦方法,其特征在于,包括:
    通过图像获取装置采集成像图像,采用预设的算法识别所述成像图像中的前景像素;
    将所述成像图像中所述前景像素所占的至少部分区域,确定为所述成像图像中的待对焦区域;
    根据所述待对焦区域,调整所述图像获取装置的镜头参数,以调整所述待对焦区域对应物体在所述图像获取装置的成像图像中的清晰度,以便所述图像获取装置对焦所述待对焦区域对应物体。
  12. 根据权利要求11所述的方法,其特征在于,所述方法还包括:
    确定所述成像图像中的感兴趣区域;
    所述采用预设的算法识别所述成像图像中的前景像素,包括:
    采用预设的算法处理所述成像图像中所述感兴趣区域对应的图像,以识别所述成像图像中所述感兴趣区域对应的前景像素。
  13. 根据权利要求12所述的方法,其特征在于,所述感兴趣区域对应的图像为所述成像图像中所述感兴趣区域的图像。
  14. 根据权利要求12所述的方法,其特征在于,所述方法还包括:
    在所述成像图像中,将所述感兴趣区域沿预设方向扩大至少一个像素,得到扩大后区域;
    所述感兴趣区域对应的图像为所述成像图像中所述扩大后区域的图像。
  15. 根据权利要求11-14任一项所述的方法,其特征在于,所述采用预设的算法识别所述成像图像中的前景像素,包括:
    将所述成像图像输入预先训练好的第一神经网络模型,得到第一输出结果,所述第一输出结果包括所述成像图像中各像素是前景像素的置信度;
    根据所述第一输出结果,确定所述成像图像中的前景像素。
  16. 根据权利要求15所述的方法,其特征在于,所述根据所述第一输出结果,确定所述成像图像中的前景像素,包括:
    根据所述第一输出结果,将所述成像图像中是前景像素的置信度大于预设阈值的像素确定为前景像素。
  17. 根据权利要求11-14任一项所述的方法,其特征在于,所述采用预设的算法识别所述成像图像中的前景像素,包括:
    将所述成像图像输入预先训练好的第二神经网络模型,得到第二输出结果,所述第二输出结果包括所述成像图像中各像素是至少一个前景类别中各前景类别像素的置信度;
    根据所述第二输出结果,确定所述成像图像中各前景类别的像素,以得到所述成像图像中的前景像素。
  18. 根据权利要求11-14任一项所述的方法,其特征在于,所述根据所述待对焦区域,调整所述图像获取装置的镜头参数,以使所述图像获取装置针对所述待对焦区域对应物体合焦,包括:
    根据所述待对焦区域,调整所述图像获取装置的镜头参数,直至所述待 对焦区域的图像其质量满足一定条件。
  19. 根据权利要求11-14任一项所述的方法,其特征在于,所述方法还包括:
    在拍摄界面中向用户提示所述待对焦区域。
  20. 根据权利要求19所述的方法,其特征在于,所述方法还包括:
    基于所述用户针对所述待对焦区域的调整操作,得到调整后的待对焦区域;
    根据所述调整后的待对焦区域,调整所述图像获取装置的镜头参数,直至所述待对焦区域的图像其质量满足一定条件。
  21. 根据权利要求20所述的方法,其特征在于,所述调整操作包括下述中的一种或多种:
    位置调整操作、形状调整操作或大小调整操作。
  22. 一种无人机,其特征在于,所述无人机包括机身、设置于所述机身上的动力系统、图像获取装置和对焦装置;
    所述动力系统,用于为所述无人机提供动力;
    所述图像获取装置,用于在所述无人机进行电力巡检的过程中拍摄包括待巡检电力设备的图像;
    所述对焦装置包括存储器和处理器;
    所述存储器,用于存储程序代码;
    所述处理器,调用所述程序代码,当程序代码被执行时,用于执行以下操作:
    通过图像获取装置采集成像图像,采用预设的算法识别所述成像图像中的前景像素;
    将所述成像图像中所述前景像素所占的至少部分区域,确定为所述成像图像中与所述待巡检电力设备对应的待对焦区域;
    根据所述成像图像中与所述待巡检电力设备对应的待对焦区域,调整所述图像获取装置的镜头参数,以调整所述待巡检电力设备在所述图像获取装置的成像图像中的清晰度,以便所述图像获取装置对焦所述待巡检电力设备。
  23. 根据权利要求22所述的无人机,其特征在于,所述处理器还用于:
    确定所述成像图像中的感兴趣区域;
    所述处理器用于采用预设的算法识别所述成像图像中的前景像素,具体 包括:
    采用预设的算法处理所述成像图像中所述感兴趣区域对应的图像,以识别所述成像图像中所述感兴趣区域对应的前景像素。
  24. 根据权利要求23所述的无人机,其特征在于,所述感兴趣区域对应的图像为所述成像图像中所述感兴趣区域的图像。
  25. 根据权利要求23所述的无人机,其特征在于,所述处理器还用于:
    在所述成像图像中,将所述感兴趣区域沿预设方向扩大至少一个像素,得到扩大后区域;
    所述感兴趣区域对应的图像为所述成像图像中所述扩大后区域的图像。
  26. 根据权利要求22-25任一项所述的无人机,其特征在于,所述处理器用于采用预设的算法识别所述成像图像中的前景像素,具体包括:
    将所述成像图像输入预先训练好的第一神经网络模型,得到第一输出结果,所述第一输出结果包括所述成像图像中各像素是前景像素的置信度;
    根据所述第一输出结果,确定所述成像图像中的前景像素。
  27. 根据权利要求26所述的无人机,其特征在于,所述处理器用于根据所述第一输出结果,确定所述成像图像中的前景像素,具体包括:
    根据所述第一输出结果,将所述成像图像中是前景像素的置信度大于预设阈值的像素确定为前景像素。
  28. 根据权利要求22-25任一项所述的无人机,其特征在于,所述处理器用于采用预设的算法识别所述成像图像中的前景像素,具体包括:
    将所述成像图像输入预先训练好的第二神经网络模型,得到第二输出结果,所述第二输出结果包括所述成像图像中各像素是至少一个前景类别中各前景类别像素的置信度;
    根据所述第二输出结果,确定所述成像图像中各前景类别的像素,以得到所述成像图像中的前景像素。
  29. 根据权利要求22-25任一项所述的无人机,其特征在于,所述处理器用于根据所述成像图像中与所述待巡检电力设备对应的待对焦区域,调整所述图像获取装置的镜头参数,以调整所述待巡检电力设备在所述图像获取装置的成像图像中的清晰度,使所述图像获取装置针对所述待巡检电力设备合 焦,具体包括:
    根据所述待对焦区域,调整所述图像获取装置的镜头参数,直至所述待对焦区域的图像其质量满足一定条件。
  30. 根据权利要求22-25任一项所述的无人机,其特征在于,所述处理器还用于:
    控制所述无人机飞行至需要拍摄图像的目标航点;
    根据所述目标航点对应的巡检参数,调整所述无人机的姿态和/或用于搭载所述图像获取装置的云台的姿态,以便在所述图像获取装置的当前成像中所述待巡检电力设备能够作为前景。
  31. 根据权利要求22-25任一项所述的无人机,其特征在于,所述处理器还用于:在所述图像获取装置对焦所述待巡检电力设备的情况下,拍摄包括所述待巡检电力设备的图像。
  32. 一种对焦装置,其特征在于,所述装置包括:存储器和处理器;
    所述存储器,用于存储程序代码;
    所述处理器,调用所述程序代码,当程序代码被执行时,用于执行以下操作:
    通过图像获取装置采集成像图像,采用预设的算法识别所述成像图像中的前景像素;
    将所述成像图像中所述前景像素所占的至少部分区域,确定为所述成像图像中的待对焦区域;
    根据所述待对焦区域,调整所述图像获取装置的镜头参数,以调整所述待对焦区域对应物体在所述图像获取装置的成像图像中的清晰度,以便所述图像获取装置对焦所述待对焦区域对应物体。
  33. 根据权利要求32所述的装置,其特征在于,所述处理器还用于:
    确定所述成像图像中的感兴趣区域;
    所述处理器用于采用预设的算法识别所述成像图像中的前景像素,具体包括:
    采用预设的算法处理所述成像图像中所述感兴趣区域对应的图像,以识别所述成像图像中所述感兴趣区域对应的前景像素。
  34. 根据权利要求33所述的装置,其特征在于,所述感兴趣区域对应的图像为所述成像图像中所述感兴趣区域的图像。
  35. 根据权利要求33所述的装置,其特征在于,所述处理器还用于:
    在所述成像图像中,将所述感兴趣区域沿预设方向扩大至少一个像素,得到扩大后区域;
    所述感兴趣区域对应的图像为所述成像图像中所述扩大后区域的图像。
  36. 根据权利要求32-35任一项所述的装置,其特征在于,所述处理器用于采用预设的算法识别所述成像图像中的前景像素,具体包括:
    将所述成像图像输入预先训练好的第一神经网络模型,得到第一输出结果,所述第一输出结果包括所述成像图像中各像素是前景像素的置信度;
    根据所述第一输出结果,确定所述成像图像中的前景像素。
  37. 根据权利要求36所述的装置,其特征在于,所述处理器用于根据所述第一输出结果,确定所述成像图像中的前景像素,具体包括:
    根据所述第一输出结果,将所述成像图像中是前景像素的置信度大于预设阈值的像素确定为前景像素。
  38. 根据权利要求32-35任一项所述的装置,其特征在于,所述处理器用于采用预设的算法识别所述成像图像中的前景像素,具体包括:
    将所述成像图像输入预先训练好的第二神经网络模型,得到第二输出结果,所述第二输出结果包括所述成像图像中各像素是至少一个前景类别中各前景类别像素的置信度;
    根据所述第二输出结果,确定所述成像图像中各前景类别的像素,以得到所述成像图像中的前景像素。
  39. 根据权利要求32-35任一项所述的装置,其特征在于,所述处理器用于根据所述待对焦区域,调整所述图像获取装置的镜头参数,以使所述图像获取装置针对所述待对焦区域对应物体合焦,具体包括:
    根据所述待对焦区域,调整所述图像获取装置的镜头参数,直至所述待对焦区域的图像其质量满足一定条件。
  40. 根据权利要求32-35任一项所述的装置,其特征在于,所述处理器还用于:
    在拍摄界面中向用户提示所述待对焦区域。
  41. 根据权利要求40所述的装置,其特征在于,所述处理器还用于:
    基于所述用户针对所述待对焦区域的调整操作,得到调整后的待对焦区域;
    根据所述调整后的待对焦区域,调整所述图像获取装置的镜头参数,直至所述待对焦区域的图像其质量满足一定条件。
  42. 根据权利要求41所述的装置,其特征在于,所述调整操作包括下述中的一种或多种:
    位置调整操作、形状调整操作或大小调整操作。
  43. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有计算机程序,所述计算机程序包含至少一段代码,所述至少一段代码可由计算机执行,以控制所述计算机执行如权利要求1-10任一项所述的方法。
  44. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有计算机程序,所述计算机程序包含至少一段代码,所述至少一段代码可由计算机执行,以控制所述计算机执行如权利要求11-21任一项所述的方法。
  45. 一种计算机程序,其特征在于,当所述计算机程序被计算机执行时,用于实现如权利要求1-10任一项所述的方法。
  46. 一种计算机程序,其特征在于,当所述计算机程序被计算机执行时,用于实现如权利要求11-21任一项所述的方法。
PCT/CN2020/076839 2020-02-26 2020-02-26 对焦方法、装置及设备 WO2021168707A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080004236.6A CN112585945A (zh) 2020-02-26 2020-02-26 对焦方法、装置及设备
PCT/CN2020/076839 WO2021168707A1 (zh) 2020-02-26 2020-02-26 对焦方法、装置及设备


Publications (1)

Publication Number Publication Date
WO2021168707A1 true WO2021168707A1 (zh) 2021-09-02

Family

ID=75145418

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/076839 WO2021168707A1 (zh) 2020-02-26 2020-02-26 Focusing method, apparatus and device

Country Status (2)

Country Link
CN (1) CN112585945A (zh)
WO (1) WO2021168707A1 (zh)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120087572A1 * 2010-10-11 2012-04-12 Goksel Dedeoglu Use of Three-Dimensional Top-Down Views for Business Analytics
CN102780847A (zh) * 2012-08-14 2012-11-14 Beijing Hanbang Gaoke Digital Technology Co., Ltd. Camera autofocus control method for moving targets
CN103235602A (zh) * 2013-03-25 2013-08-07 Electric Power Research Institute of Shandong Electric Power Group Corporation Automatic photographing control device and control method for a power-line inspection UAV
CN105629631A (zh) * 2016-02-29 2016-06-01 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Control method, control device and electronic device
US20170006211A1 * 2015-07-01 2017-01-05 Sony Corporation Method and apparatus for autofocus area selection by detection of moving objects
CN108924419A (zh) * 2018-07-09 2018-11-30 Zhangzhou Power Supply Company of State Grid Fujian Electric Power Co., Ltd. UAV camera zoom control system for power transmission line inspection

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6163453B2 (ja) * 2014-05-19 2017-07-12 Honda Motor Co., Ltd. Object detection device, driving assistance device, object detection method, and object detection program
CN104658011B (zh) * 2015-01-31 2017-09-29 Beijing Institute of Technology Intelligent traffic moving object detection and tracking method
CN104766052B (zh) * 2015-03-24 2018-10-16 Guangzhou Shiyuan Electronic Technology Co., Ltd. Face recognition method and system, user terminal, and server
US9989965B2 * 2015-08-20 2018-06-05 Motionloft, Inc. Object detection and analysis via unmanned aerial vehicle
CN105391939B (zh) * 2015-11-04 2017-09-29 Tencent Technology (Shenzhen) Co., Ltd. UAV shooting control method and apparatus, UAV shooting method, and UAV
KR101769601B1 (ko) * 2016-07-13 2017-08-18 Idea Co., Ltd. Unmanned aerial vehicle with automatic tracking function
CN107465855B (zh) * 2017-08-22 2020-05-29 Shanghai Goertek Robotics Co., Ltd. Image shooting method and apparatus, and unmanned aerial vehicle
CN107729808B (zh) * 2017-09-08 2020-05-01 Electric Power Research Institute of State Grid Shandong Electric Power Company Intelligent image acquisition system and method for UAV inspection of power transmission lines
CN108984657B (zh) * 2018-06-28 2020-12-01 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image recommendation method and apparatus, terminal, and readable storage medium
CN108810418B (zh) * 2018-07-16 2020-09-11 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, mobile terminal, and computer-readable storage medium
CN109343573A (zh) * 2018-10-31 2019-02-15 Yunnan Zhaoxun Technology Co., Ltd. Power equipment inspection image acquisition and processing system based on light-field photography
CN109743499A (zh) * 2018-12-29 2019-05-10 Wuhan Yunheng Intelligent Technology Co., Ltd. Zoom UAV for image recognition and zoom UAV control method
CN109886209A (zh) * 2019-02-25 2019-06-14 Chengdu Kuangshi Jinzhi Technology Co., Ltd. Abnormal behavior detection method and apparatus, and vehicle-mounted device
CN110133440B (zh) * 2019-05-27 2021-07-06 NARI Technology Co., Ltd. Power-line inspection UAV and inspection method based on tower model matching and visual navigation
CN110149482B (zh) * 2019-06-28 2021-02-02 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Focusing method and apparatus, electronic device, and computer-readable storage medium


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113992823A (zh) * 2021-09-27 2022-01-28 Jinhua Power Supply Company of State Grid Zhejiang Electric Power Co., Ltd. Intelligent fault diagnosis method for primary and secondary equipment based on multiple information sources
CN113992823B (zh) * 2021-09-27 2023-12-08 Jinhua Power Supply Company of State Grid Zhejiang Electric Power Co., Ltd. Intelligent fault diagnosis method for primary and secondary equipment based on multiple information sources
CN114845041A (zh) * 2021-12-30 2022-08-02 Qizhiming Optoelectronic Intelligent Technology (Suzhou) Co., Ltd. Focusing method and apparatus for nanoparticle imaging, and storage medium
CN114845041B (zh) * 2021-12-30 2024-03-15 Qizhiming Optoelectronic Intelligent Technology (Suzhou) Co., Ltd. Focusing method and apparatus for nanoparticle imaging, and storage medium

Also Published As

Publication number Publication date
CN112585945A (zh) 2021-03-30

Similar Documents

Publication Publication Date Title
CN111272148B (zh) Adaptive imaging quality optimization method for autonomous UAV inspection of power transmission lines
WO2018201809A1 (zh) Dual-camera-based image processing apparatus and method
WO2021189456A1 (zh) UAV inspection method and apparatus, and UAV
WO2020200093A1 (zh) Focusing method and apparatus, photographing device, and aircraft
CN104065859B (zh) Method and camera device for acquiring an all-in-focus image
CN111583116A (zh) Video panorama stitching and fusion method and system based on multi-camera cross photography
US20170366804A1 Light field collection control methods and apparatuses, light field collection devices
US8340512B2 Auto focus technique in an image capture device
CN108549413A (zh) Gimbal rotation control method and apparatus, and unmanned aerial vehicle
CN112425148B (zh) Imaging device, unmanned mobile body, imaging method, system, and recording medium
WO2019037038A1 (zh) Image processing method and apparatus, and server
WO2021168707A1 (zh) Focusing method, apparatus and device
WO2021134179A1 (zh) Focusing method and apparatus, photographing device, movable platform, and storage medium
WO2020024112A1 (zh) Shooting processing method, device, and storage medium
US10602064B2 Photographing method and photographing device of unmanned aerial vehicle, unmanned aerial vehicle, and ground control device
WO2021037286A1 (zh) Image processing method, apparatus, device, and storage medium
CN113391644B (zh) Semi-automatic UAV shooting-distance optimization method based on image information entropy
CN104184935A (zh) Image capturing device and method
CN112307912A (zh) Camera-based method and system for determining personnel trajectories
CN115578662A (zh) UAV front-end image processing method and system, storage medium, and device
CN109587392B (zh) Adjustment method and apparatus for monitoring device, storage medium, and electronic device
CN114020039A (zh) Automatic focusing system and method for UAV tower inspection
JP2017139646A (ja) Imaging device
CN114866705B (zh) Automatic exposure method, storage medium, and electronic device
CN116185065A (zh) UAV inspection method and apparatus, and non-volatile storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20921664

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20921664

Country of ref document: EP

Kind code of ref document: A1