CN115578662A - Unmanned aerial vehicle front-end image processing method, system, storage medium and equipment - Google Patents


Info

Publication number
CN115578662A
CN115578662A (application number CN202211471383.1A)
Authority
CN
China
Prior art keywords
image
target
image frame
unmanned aerial
aerial vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211471383.1A
Other languages
Chinese (zh)
Inventor
于晓艳
刘天立
刘越
杨仁明
孙磊
吴见
孙晓斌
姜可孟
魏传虎
苑雨薇
张毅
齐帅
李缘
吕建红
耿凯伦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Intelligent Technology Co Ltd
Original Assignee
State Grid Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Intelligent Technology Co Ltd filed Critical State Grid Intelligent Technology Co Ltd
Priority to CN202211471383.1A priority Critical patent/CN115578662A/en
Publication of CN115578662A publication Critical patent/CN115578662A/en
Pending legal-status Critical Current

Classifications

    • G06V20/17 — Terrestrial scenes taken from planes or by drones
    • G06N3/04 — Neural network architecture, e.g. interconnection topology
    • G06N3/084 — Learning methods: backpropagation, e.g. using gradient descent
    • G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/80 — Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature extraction or classification level
    • G06V10/82 — Image or video recognition or understanding using neural networks
    • G06V2201/07 — Target detection


Abstract

The invention belongs to the technical field of unmanned aerial vehicle front-end image processing, and provides an unmanned aerial vehicle front-end image processing method, system, storage medium and device. The method comprises: judging the type of electric power component to be photographed according to the component name of the unmanned aerial vehicle's shooting point, calling a pre-trained target recognition model, and performing target recognition on image frames in the video stream; extracting all recognition results of the component type to be photographed, calculating the area of each target, taking the target with the largest area as the final target and confirming its coordinate information; calculating the offset of the final target position from the image center and controlling the gimbal to adjust its angle so that the target lies at the lens center, thereby realizing image rectification; and judging the exposure state of the current image frame from the histogram of the rectified frame, then adjusting the camera exposure so that the frame brightness falls within the normal range. The method significantly improves image quality.

Description

Unmanned aerial vehicle front-end image processing method, system, storage medium and equipment
Technical Field
The invention belongs to the technical field of front-end image processing of unmanned aerial vehicles, and particularly relates to a front-end image processing method, a front-end image processing system, a storage medium and front-end image processing equipment of an unmanned aerial vehicle.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Traditional power transmission line inspection is performed manually; it is labor-intensive, inefficient, and physically demanding, and can no longer meet the needs of modern power grid construction and development. Unmanned aerial vehicles, being efficient and flexible, have gradually replaced manual work and become a routine line inspection means. Although a ground pilot operating the unmanned aerial vehicle improves inspection efficiency, problems remain, such as uneven inspection quality, long inspection times, and limited inspection range. With the development of 3D point cloud technology, route planning, drone nests, and artificial intelligence, autonomous unmanned aerial vehicle inspection without manual participation is being rolled out comprehensively, breaking the limitations of time and environment and greatly improving inspection efficiency. At the present stage, however, the shooting points during autonomous inspection depend entirely on the route planned beforehand; some points at which electric power components are photographed deviate, so that the component to be photographed is incomplete in the image or lies at the image edge. In addition, because autonomous inspection is no longer limited to fixed times, the inspection time and weather are uncertain, and the captured images may be overexposed or underexposed, which directly affects later image analysis.
In conventional power component identification, an edge detection or threshold segmentation method is generally used on the server side to identify power components in an image. For pictures with complex backgrounds, however, such methods suffer poor recognition accuracy because of the greater interference. The prior art provides a standardized data acquisition system and method for line inspection that combines a pre-planned inspection route with aircraft camera parameters to photograph the line at fixed points, but this scheme depends on GPS positioning accuracy and route planning accuracy, and the photographed images may deviate. The prior art also provides an intelligent image acquisition system and method for unmanned aerial vehicle inspection of power transmission lines that identifies typical components in tower image frames through an improved Faster R-CNN network and then adjusts the camera to improve image quality, but the captured images still suffer from overexposure and underexposure caused by time and weather.
Disclosure of Invention
In order to solve the technical problems in the background art, the invention provides an unmanned aerial vehicle front-end image rectification and automatic exposure method and system, which can automatically identify an electric power component according to the planned route information and the shooting point name, adjust the gimbal angle to place the component at the center of the image, and thereby complete image rectification; combined with automatic exposure, the brightness of the captured image is kept within the normal range, image quality is markedly improved, and a good foundation is laid for subsequent image analysis.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention provides a method for processing a front-end image of an unmanned aerial vehicle.
In one or more embodiments, a drone front-end image processing method includes:
judging the type of an electric power component to be shot according to the name of the component at the shooting point of the unmanned aerial vehicle, calling a pre-trained target recognition model, and performing target recognition on an image frame in a video stream;
extracting all recognition results of the types of the power components to be shot, calculating the area of each target, taking the target with the largest area as a final target and confirming coordinate information of the final target;
calculating the pixel-level offset of the final target position from the image center to control the gimbal angle so that the target lies at the lens center, realizing image rectification;
and judging the exposure state of the current image frame according to the histogram of the current rectified image frame, and further adjusting the exposure amount of the camera so as to enable the brightness of the image frame to be within a normal range.
In the process of image rectification, an adjustment parameter of the pan/tilt head is obtained according to the relationship among the image coordinate system, the camera coordinate system and the world coordinate system based on the pixel-level offset of the position of the final target from the image center position.
As an embodiment, the exposure state includes normal exposure, overexposure, and underexposure.
In one embodiment, if more than 60% of the histogram of the current rectified image frame is distributed on the left side, the exposure state of the current image frame is underexposure.
In one embodiment, if more than 60% of the histogram of the current rectified image frame is distributed on the right side, the exposure state of the current image frame is overexposure.
In one embodiment, if the histogram distribution of the current rectified image frame is balanced, the exposure state of the current image frame is normal exposure.
The invention provides a front-end image processing system of an unmanned aerial vehicle in a second aspect.
In one or more embodiments, a drone front-end image processing system includes:
the target identification module is used for judging the type of the electric power component to be shot according to the component name of the shooting point of the unmanned aerial vehicle, calling a pre-trained target identification model and carrying out target identification on the image frame in the video stream;
a target determination module for extracting all recognition results of types of power components to be photographed, calculating the area of each target, taking the target with the largest area as a final target and confirming coordinate information thereof;
the image deviation rectifying module is used for calculating the pixel-level deviation of the position of the final target from the central position of the image so as to control the tripod head to adjust the angle to enable the target to be positioned at the central position of the lens to realize image deviation rectification;
and the exposure adjusting module is used for judging the exposure state of the current image frame according to the histogram of the current rectified image frame so as to adjust the exposure of the camera, so that the brightness of the image frame is in a normal range.
As an implementation manner, in the image rectification module, based on the pixel-level offset of the position of the final target from the image center position, the adjustment parameter of the pan/tilt head is obtained according to the relationship between the image coordinate system, the camera coordinate system, and the world coordinate system.
In one embodiment, in the exposure adjustment module, the exposure state includes normal exposure, overexposure and underexposure.
In one embodiment, in the exposure adjustment module, if more than 60% of the histogram of the current rectified image frame is distributed on the left side, the exposure state of the current image frame is underexposure.
In one embodiment, in the exposure adjustment module, if more than 60% of the histogram of the current rectified image frame is distributed on the right side, the exposure state of the current image frame is overexposure.
As an embodiment, in the exposure adjustment module, if the histogram distribution of the current rectified image frame is balanced, the exposure state of the current image frame is normal exposure.
A third aspect of the invention provides a computer-readable storage medium.
In one or more embodiments, a computer-readable storage medium has stored thereon a computer program which, when executed by a processor, carries out the steps in the drone front end image processing method as described above.
A fourth aspect of the invention provides an electronic device.
In one or more embodiments, an electronic device includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor executes the program to implement the steps in the unmanned aerial vehicle front-end image processing method as described above.
Compared with the prior art, the invention has the beneficial effects that:
the unmanned aerial vehicle front-end image rectification and automatic exposure technology is innovatively provided, the problems of incomplete shooting, overexposure and underexposure of an inspection image component are solved, and the rectification of the image is realized by controlling the angle of a tripod head to be adjusted according to the pixel-level offset of the position of a final target from the central position of an image so that the target is positioned at the central position of a lens; according to the histogram of the current image frame after deviation rectification, the exposure state of the current image frame is judged, and then the exposure of the camera is adjusted, so that the brightness of the image frame is in a normal range, the quality of the unmanned aerial vehicle autonomous inspection image is improved, and a foundation is laid for subsequent image analysis and defect identification.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention; they do not limit the invention.
FIG. 1 is a flow chart of a method for correcting the front-end image of an unmanned aerial vehicle and automatically exposing the front-end image of the unmanned aerial vehicle according to an embodiment of the invention;
FIG. 2 is a schematic diagram of the unmanned aerial vehicle front-end image rectification process according to an embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
Example one
Referring to fig. 1, the embodiment provides an unmanned aerial vehicle front-end image processing method, which specifically includes the following steps:
step 1: the method comprises the steps of judging the type of an electric power component to be shot according to the name of the component at the shooting point of the unmanned aerial vehicle, calling a pre-trained target recognition model, and carrying out target recognition on an image frame in a video stream.
Before step 1, the method further comprises the following steps of judging the type of the power component to be shot according to the component name of the shooting position of the unmanned aerial vehicle:
receiving an autonomous inspection task, analyzing a pre-planned route, and starting autonomous inspection according to the route;
and when the unmanned aerial vehicle reaches the waypoint, obtaining the part name of the shooting point position.
In step 1, the target identification model adopts a Yolov5s model.
It should be understood that, in other embodiments, the target recognition model can be implemented with other existing target recognition algorithms; those skilled in the art can make a specific selection according to the actual situation, and details are not repeated here.
Specifically, the training process of the Yolov5s model is as follows:
(a.1) Sample library construction. A sample library is built from unmanned aerial vehicle inspection images of power transmission lines in the Shandong area. The labeling information required for target recognition is produced with the open-source LabelImg software and stored as xml. The sample library is divided into a training set, a validation set and a test set in a ratio of 7:…. All xml labels are converted to the txt format used by the Yolo model, and the txt labels and corresponding images are saved into the training, validation and test sets.
(a.2) Image preprocessing. The brightness of normally exposed images is adjusted with a Gamma transform to simulate overexposed and underexposed images. This both triples the amount of original data and adds overexposed and underexposed samples, so that the model can still accurately identify the target component in overexposed and underexposed scenes. The Gamma transform is

s = c · r^γ

where r is the input image, s is the output image, c is a gray-scale factor (usually taken as 1), and γ is the gamma factor: when γ > 1 the image becomes darker, and when γ < 1 the image becomes brighter.
In the embodiment, the training images are subjected to brightness adjustment by using Gamma transformation, so that the over-exposed training images and the under-exposed training images are obtained, and the recognition rate of the model on the parts in the over-exposed training images and the under-exposed training images is improved; the sample library is enriched, and one normal training picture is changed into 3 training pictures (a normal picture, an overexposed picture and an underexposed picture).
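As an illustrative sketch (not code from the patent), the Gamma transform s = c·r^γ used for this augmentation can be applied to normalized grayscale values as follows; the function name and toy pixel values are assumptions:

```python
def gamma_transform(pixels, gamma, c=1.0):
    """Apply s = c * r**gamma to grayscale values normalized to [0, 1].

    gamma > 1 darkens the image (simulating underexposure);
    gamma < 1 brightens it (simulating overexposure).
    """
    return [min(1.0, max(0.0, c * (r ** gamma))) for r in pixels]

# One normally exposed sample becomes three training samples.
normal = [0.1, 0.4, 0.6, 0.9]                        # toy luminance values
underexposed = gamma_transform(normal, gamma=2.2)    # darker variant
overexposed = gamma_transform(normal, gamma=0.45)    # brighter variant
```

In practice the transform would be applied per pixel to the full image array before saving the two augmented copies alongside the original.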
(a.3) Model training. The images are input into the network, the number of label categories, their names, the number of iterations and so on are set, and training starts. At the input end, Mosaic data augmentation and adaptive anchor-box calculation are adopted, and input pictures are spliced by random scaling, random cropping and random arrangement to enrich the backgrounds and small targets of the detected objects. Before an image enters the backbone network, image features are increased through the Focus structure's slicing operation and convolution operations; in the backbone, CSPDarknet53 extracts rich features, reducing the model's parameter count and FLOPS while improving inference speed and accuracy. In the Neck structure, FPN + PAN fuses network features top-down and bottom-up. At the output end, CIOU_Loss + DIOU_nms is used to compute the position loss and screen target boxes. Forward propagation computes the error between the ground-truth target and the network prediction, and backpropagation then continually minimizes this error to update and optimize the network parameters, so that the model achieves a good recognition rate on the target classes contained in the training set. During training, the performance of the model and the convergence of the network are verified regularly on the validation set; if underfitting or overfitting occurs, a loss curve is drawn, the training parameters are modified, and the model is optimized and upgraded.
(a.4) Model evaluation. The trained model is used to identify the test set, and the recognition rate R of each component class is taken as the evaluation index:

R = N_c / (N_c + N_m)

where N_c denotes the number of correctly identified samples and N_m denotes the number of unidentified (missed) samples.
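The evaluation metric is simple enough to state in code; this sketch assumes per-class counts of correct and missed detections are already available:

```python
def recognition_rate(n_correct: int, n_missed: int) -> float:
    """Per-class recognition rate R = N_c / (N_c + N_m),
    i.e. the fraction of ground-truth instances the model found."""
    return n_correct / (n_correct + n_missed)

# e.g. 90 insulators found out of 100 annotated instances -> R = 0.9
```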
When the unmanned aerial vehicle starts autonomous inspection, it reaches a waypoint according to the parsed route information. If the component to be photographed at the waypoint belongs to the insulator class, the insulator model is called; otherwise the wire and ground-wire model is called. Frames are taken from the video stream to identify the electric power components, and all component types and position coordinates in the frame are output.
If the component name of the shooting position is tower overall view, tower head, tower body, or another component unrelated to insulators and wires, the photograph is taken directly without target recognition.
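The waypoint-driven model dispatch described above can be sketched as a lookup; the part names and model file names below are hypothetical placeholders, not identifiers from the patent:

```python
# Hypothetical part-name sets; the real lists depend on the route data.
INSULATOR_PARTS = {"insulator", "insulator_string"}
NO_DETECTION_PARTS = {"tower_overview", "tower_head", "tower_body"}

def select_model(part_name: str):
    """Return the model to call for a waypoint's part name,
    or None when the photo should be taken without recognition."""
    if part_name in NO_DETECTION_PARTS:
        return None                     # shoot directly, skip detection
    if part_name in INSULATOR_PARTS:
        return "insulator_yolov5s.pt"   # assumed model file name
    return "wire_yolov5s.pt"            # conductor / ground-wire model
```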
Step 2: all recognition results of the component type to be photographed are extracted, the area of each target is calculated, the target with the largest area is taken as the final target, and its coordinate information is confirmed.
The specific process of step 2 is as follows:
and extracting one or more position coordinates of the part from all the results according to the part name to be shot by the waypoint in the route information. If the number of the coordinate positions of the type of component is multiple, calculating the area occupied by all the coordinate position frames, and taking an identification frame with the largest area as a final target position; if only one coordinate position of the type of part is available, the identification frame is directly used as a final target position; if the part has no identification frame in the frame, the automatic exposure link is directly entered.
Step 3: the pixel-level offset of the final target position from the image center is calculated, and the gimbal angle is controlled accordingly so that the target lies at the lens center, realizing image rectification.
During image rectification, the gimbal adjustment parameters are obtained from the pixel-level offset of the final target position from the image center, according to the relationships among the image coordinate system, the camera coordinate system and the world coordinate system. As shown in fig. 2, the black frame represents the whole picture and frames of the same color represent components of the same type; the whole process from recognition to rectification is complete once the camera has brought the identified target component to the center of the picture.
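The patent does not spell out the coordinate-system algebra, but under a simple pinhole-camera assumption the pixel-level offset converts to pan/tilt angles as sketched below; fx and fy (focal lengths in pixels) would come from camera calibration and are assumed values here:

```python
import math

def gimbal_adjustment(target_center, image_size, fx, fy):
    """Convert the pixel offset of the target from the image center
    into pan/tilt angles in degrees (pinhole-camera sketch)."""
    (u, v), (w, h) = target_center, image_size
    du = u - w / 2.0                      # positive: target right of center
    dv = v - h / 2.0                      # positive: target below center
    pan = math.degrees(math.atan2(du, fx))
    tilt = math.degrees(math.atan2(dv, fy))
    return pan, tilt

# Target already centered in a 1920x1080 frame -> no adjustment needed.
pan, tilt = gimbal_adjustment((960, 540), (1920, 1080), fx=1400, fy=1400)
```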
Step 4: the exposure state of the current image frame is judged from the histogram of the rectified frame, and the camera exposure is then adjusted so that the frame brightness lies within the normal range. The exposure states are normal exposure, overexposure and underexposure.
After the electric power component to be photographed has been adjusted to the center of the picture, the current video frame is taken and its brightness component Y is computed. Using the standard luma weighting, the formula is

Y = 0.299·R + 0.587·G + 0.114·B

where R, G and B are the three color components of the current frame.
The histogram is computed after down-sampling the image. For faster calculation, one pixel is taken from every N pixels of the luminance component of the current frame to form a down-sampled image, and its gray histogram is computed:

h(k) = the number of pixels (x, y) with f(x, y) = k,  k = 0, 1, …, 255

where f(x, y) is the pixel value at coordinate (x, y) and k is the gray level.

The histogram is then normalized:

p(k) = h(k) / M

where M is the total number of pixels in the down-sampled image.
adjusting exposure compensation amount according to histogram distribution:
if the part of the histogram of the current rectified image frame, which exceeds 60%, is distributed on the left side, the exposure state of the current image frame is underexposed. In this case, the camera aperture is increased or the shutter speed is decreased to increase the exposure amount.
If the part of the histogram of the current rectified image frame, which exceeds 60%, is distributed on the right side, the exposure state of the current image frame is overexposure. In this case, the camera aperture is reduced or the shutter speed is increased to reduce the exposure amount.
And if the histogram distribution of the current rectified image frame is balanced, the exposure state of the current image frame is normal exposure.
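The three rules above can be combined into a small classifier; this sketch assumes a dark/bright split at gray level 128 (the midpoint, not stated in the patent) and the 60% threshold from the text, and operates on a list of luminance values from the down-sampled frame:

```python
def exposure_state(y_pixels, threshold=0.6):
    """Classify a frame as underexposed / overexposed / normal from the
    normalized gray histogram of its (down-sampled) luminance values."""
    hist = [0] * 256
    for y in y_pixels:
        hist[int(y)] += 1
    p = [count / len(y_pixels) for count in hist]   # normalized histogram
    dark = sum(p[:128])                             # mass on the left side
    if dark > threshold:
        return "underexposed"    # widen aperture / slow the shutter
    if 1.0 - dark > threshold:
        return "overexposed"     # narrow aperture / speed up the shutter
    return "normal"
```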
The relationship between the histogram distribution and the exposure adjustment is shown in Table 1.

TABLE 1 Histogram distribution vs. exposure

Histogram distribution                     Exposure state   Adjustment
More than 60% on the left (dark) side      Underexposed     Increase aperture or slow the shutter
More than 60% on the right (bright) side   Overexposed      Reduce aperture or speed up the shutter
Balanced                                   Normal           None
In this embodiment, front-end identification of power components is performed with the lightweight, low-latency and high-precision Yolov5s model, improving recognition accuracy; the corresponding Yolov5s model is called according to the shooting point name in the route information to identify the component, and the gimbal angle is controlled so that the component lies at the image center, which solves the problems of incompletely photographed components and components located at the image edge and realizes image rectification; automatic exposure keeps the brightness of the captured image within the normal range, solving the overexposure and underexposure caused by time and weather; the invention improves the quality of autonomous unmanned aerial vehicle inspection images and lays a foundation for subsequent image analysis and defect identification.
Example two
The embodiment provides an unmanned aerial vehicle front-end image processing system, which specifically comprises the following modules:
(1) The target identification module is used for judging the type of the electric power component to be shot according to the component name of the shooting point of the unmanned aerial vehicle, calling a pre-trained target identification model and carrying out target identification on the image frame in the video stream;
(2) The target determining module is used for extracting all recognition results of the types of the power components to be shot, calculating the area of each target, taking the target with the largest area as a final target and confirming coordinate information of the final target;
(3) The image rectification module is used for calculating the pixel-level offset of the final target's position from the image center position, so as to control the pan-tilt to adjust its angle until the target is located at the center of the lens, thereby realizing image rectification;
In the image rectification module, the adjustment parameters of the pan-tilt are obtained from the pixel-level offset of the final target from the image center position, according to the relationships among the image coordinate system, the camera coordinate system and the world coordinate system.
(4) And the exposure adjusting module is used for judging the exposure state of the current image frame according to the histogram of the current rectified image frame so as to adjust the exposure of the camera, so that the brightness of the image frame is in a normal range.
Specifically, in the exposure adjustment module, the exposure state includes normal exposure, overexposure, and underexposure.
In the exposure adjusting module, if more than 60% of the histogram of the current rectified image frame is distributed on the left side, the exposure state of the current image frame is underexposure.
In the exposure adjusting module, if more than 60% of the histogram of the current rectified image frame is distributed on the right side, the exposure state of the current image frame is overexposure.
In the exposure adjusting module, if the histogram distribution of the current rectified image frame is balanced, the exposure state of the current image frame is normal exposure.
It should be noted that the modules in this embodiment correspond one-to-one to the steps in the first embodiment, and their specific implementation processes are the same, so they are not described again here.
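The pan-tilt adjustment in the image rectification module converts the pixel-level offset of the target from the image center into angle corrections. Below is a minimal sketch under a simple pinhole-camera assumption, with focal lengths `fx`, `fy` given in pixels (from calibration); the patent's full image/camera/world coordinate-system derivation is not reproduced here, so treat this as an approximation for illustration.

```python
import math

def gimbal_adjustment(cx_target, cy_target, img_w, img_h, fx, fy):
    """Convert a target's pixel offset from the image centre into pan/tilt
    angle corrections (degrees), using a pinhole-camera approximation.

    A point offset by dx pixels horizontally subtends an angle of
    atan(dx / fx) about the camera's optical axis, and likewise vertically.
    """
    dx = cx_target - img_w / 2.0   # positive: target right of centre
    dy = cy_target - img_h / 2.0   # positive: target below centre
    pan_deg = math.degrees(math.atan2(dx, fx))
    tilt_deg = math.degrees(math.atan2(dy, fy))
    return pan_deg, tilt_deg
```

Commanding the pan-tilt by these two angles (with the sign convention matched to the gimbal's axes) moves the target toward the lens center, after which the offset is recomputed and the loop repeats until it is within tolerance.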
Embodiment 3
This embodiment provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the unmanned aerial vehicle front-end image processing method described above are implemented.
Embodiment 4
This embodiment provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the steps of the unmanned aerial vehicle front-end image processing method described above are implemented.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An unmanned aerial vehicle front-end image processing method is characterized by comprising the following steps:
judging the type of an electric power component to be shot according to the name of the component at the shooting point of the unmanned aerial vehicle, calling a pre-trained target recognition model, and performing target recognition on an image frame in a video stream;
extracting all recognition results of the types of the power components to be shot, calculating the area of each target, taking the target with the largest area as a final target and confirming coordinate information of the final target;
calculating the pixel-level offset of the position of the final target from the central position of the image, so as to control the pan-tilt to adjust its angle until the target is positioned at the central position of the lens, realizing image rectification;
and judging the exposure state of the current image frame according to the histogram of the current rectified image frame, and further adjusting the exposure amount of the camera so as to enable the brightness of the image frame to be within a normal range.
2. The unmanned aerial vehicle front-end image processing method of claim 1, wherein in the process of image rectification, the adjustment parameters of the pan-tilt are obtained according to the relationship between the image coordinate system, the camera coordinate system and the world coordinate system based on the pixel-level offset of the position of the final target from the image center position.
3. The unmanned aerial vehicle front-end image processing method of claim 1, wherein the exposure state comprises normal exposure, overexposure, and underexposure.
4. The unmanned aerial vehicle front-end image processing method as claimed in claim 1 or 3, wherein if more than 60% of the histogram of the current rectified image frame is distributed on the left side, the exposure state of the current image frame is underexposed.
5. The unmanned aerial vehicle front-end image processing method as claimed in claim 1 or 3, wherein if more than 60% of the histogram of the current rectified image frame is distributed on the right side, the exposure state of the current image frame is overexposure.
6. The unmanned aerial vehicle front-end image processing method as claimed in claim 1 or 3, wherein if the histogram distribution of the current rectified image frame is balanced, the exposure state of the current image frame is normal exposure.
7. An unmanned aerial vehicle front-end image processing system, characterized by comprising:
the target identification module is used for judging the type of the electric power component to be shot according to the component name of the shooting point of the unmanned aerial vehicle, calling a pre-trained target identification model and carrying out target identification on the image frame in the video stream;
a target determination module for extracting all recognition results of types of power components to be photographed, calculating the area of each target, taking the target with the largest area as a final target and confirming coordinate information thereof;
the image rectification module is used for calculating the pixel-level offset of the position of the final target from the central position of the image, so as to control the pan-tilt to adjust its angle until the target is positioned at the central position of the lens, realizing image rectification;
and the exposure adjusting module is used for judging the exposure state of the current image frame according to the histogram of the current rectified image frame so as to adjust the exposure of the camera, so that the brightness of the image frame is in a normal range.
8. The unmanned aerial vehicle front-end image processing system of claim 7, wherein in the image rectification module, the adjustment parameters of the pan-tilt are obtained according to the relationship of an image coordinate system, a camera coordinate system and a world coordinate system based on a pixel-level offset of the position of the final target from the image center position.
9. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the steps in the drone front-end image processing method according to any one of claims 1 to 6.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the steps in the drone front end image processing method according to any one of claims 1-6.
CN202211471383.1A 2022-11-23 2022-11-23 Unmanned aerial vehicle front-end image processing method, system, storage medium and equipment Pending CN115578662A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211471383.1A CN115578662A (en) 2022-11-23 2022-11-23 Unmanned aerial vehicle front-end image processing method, system, storage medium and equipment

Publications (1)

Publication Number Publication Date
CN115578662A true CN115578662A (en) 2023-01-06

Family

ID=84590175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211471383.1A Pending CN115578662A (en) 2022-11-23 2022-11-23 Unmanned aerial vehicle front-end image processing method, system, storage medium and equipment

Country Status (1)

Country Link
CN (1) CN115578662A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105827995A (en) * 2016-03-30 2016-08-03 金三立视频科技(深圳)有限公司 Automatic exposure method and system based on histogram
CN106657803A (en) * 2016-12-26 2017-05-10 中国科学院长春光学精密机械与物理研究所 Automatic exposure method for high-speed camera applied to electro-optic theodolite
CN107729808A (en) * 2017-09-08 2018-02-23 State Grid Shandong Electric Power Research Institute Intelligent image acquisition system and method for unmanned aerial vehicle inspection of power transmission lines
WO2020029732A1 (en) * 2018-08-06 2020-02-13 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Panoramic photographing method and apparatus, and imaging device
CN112164015A (en) * 2020-11-30 2021-01-01 中国电力科学研究院有限公司 Monocular vision autonomous inspection image acquisition method and device and power inspection unmanned aerial vehicle
CN112947519A (en) * 2021-02-05 2021-06-11 北京御航智能科技有限公司 Unmanned aerial vehicle inspection method and device and edge calculation module
CN113408510A (en) * 2021-08-23 2021-09-17 中科方寸知微(南京)科技有限公司 Transmission line target deviation rectifying method and system based on deep learning and one-hot coding
CN113554083A (en) * 2021-07-16 2021-10-26 京东方科技集团股份有限公司 Multi-exposure image sample generation method and device, computer equipment and medium
CN113850799A (en) * 2021-10-14 2021-12-28 长春工业大学 YOLOv 5-based trace DNA extraction workstation workpiece detection method
CN114430462A (en) * 2022-04-07 2022-05-03 北京御航智能科技有限公司 Unmanned aerial vehicle autonomous photographing parameter adjusting method, device, equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Liu Jiazheng et al.: "Automatic tree species image recognition based on deep learning", no. 01, pages 142-148 *
Peng Hao: "Research on insulator detection in unmanned aerial vehicle inspection images based on YOLOv5", China Master's Theses Full-text Database, Engineering Science and Technology II, vol. 2022, no. 04, pages 042-222 *
Liu Chang'an et al.: "Key-frame extraction preprocessing for power tower inspection video from flying robots", vol. 43, no. 1, pages 477-480 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117274843A (en) * 2023-11-15 2023-12-22 安徽继远软件有限公司 Unmanned aerial vehicle front end defect identification method and system based on lightweight edge calculation
CN117274843B (en) * 2023-11-15 2024-04-19 安徽继远软件有限公司 Unmanned aerial vehicle front end defect identification method and system based on lightweight edge calculation

Similar Documents

Publication Publication Date Title
CN111272148B (en) Unmanned aerial vehicle autonomous inspection self-adaptive imaging quality optimization method for power transmission line
CN110113538B (en) Intelligent shooting equipment, intelligent control method and device
CN111770285B (en) Exposure brightness control method and device, electronic equipment and storage medium
CN108401154B (en) Image exposure degree non-reference quality evaluation method
CN111583116A (en) Video panorama stitching and fusing method and system based on multi-camera cross photography
CN108322666B (en) Method and device for regulating and controlling camera shutter, computer equipment and storage medium
CN105635565A (en) Shooting method and equipment
CN113052151B (en) Unmanned aerial vehicle automatic landing guiding method based on computer vision
CN114430462B (en) Unmanned aerial vehicle autonomous photographing parameter adjusting method, device, equipment and storage medium
CN105578062A (en) Light metering mode selection method and image acquisition device utilizing same
CN113391644B (en) Unmanned aerial vehicle shooting distance semi-automatic optimization method based on image information entropy
CN115578662A (en) Unmanned aerial vehicle front-end image processing method, system, storage medium and equipment
CN105979152A (en) Smart shooting system
CN113382143B (en) Automatic exposure adjusting method for binocular camera of fire-fighting robot
CN114037895A (en) Unmanned aerial vehicle pole tower inspection image identification method
CN112585945A (en) Focusing method, device and equipment
CN116185065A (en) Unmanned aerial vehicle inspection method and device and nonvolatile storage medium
WO2021189429A1 (en) Image photographing method and device, movable platform, and storage medium
CN113472998B (en) Image processing method, image processing device, electronic equipment and storage medium
CN115100500A (en) Target detection method and device and readable storage medium
CN111429366B (en) Single-frame low-light image enhancement method based on brightness conversion function
CN107395953A (en) A kind of imaging parameters optimization method of panorama camera
CN109889734A (en) A kind of exposure compensating method of adjustment for the shooting of more camera lenses
CN113992845B (en) Image shooting control method and device and computing equipment
WO2023240651A1 (en) Image processing method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination