CN116664934A - Unmanned aerial vehicle lens processing method, unmanned aerial vehicle lens processing device, computer equipment and storage medium


Info

Publication number
CN116664934A
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
image
detected
feature extraction
Prior art date
Legal status: Pending (the status is an assumption, not a legal conclusion)
Application number
CN202310636859.0A
Other languages
Chinese (zh)
Inventor
王晓聪
叶洪江
陆海应
陈创升
何治安
肖铭杰
游亚雄
骆杰平
彭章
胡树坚
杨智泉
李永潮
Current Assignee
Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Original Assignee
Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority claimed from: CN202310636859.0A
Published as: CN116664934A

Classifications

    • G06V 10/765: image or video recognition using machine-learning classification, using rules for classification or partitioning of the feature space
    • G06T 7/0002: image analysis; inspection of images, e.g. flaw detection
    • G06V 10/7715: feature extraction, e.g. by transforming the feature space; mappings, e.g. subspace methods
    • G06V 10/774: generating sets of training patterns; bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application relates to an unmanned aerial vehicle lens processing method, an unmanned aerial vehicle lens processing device, computer equipment, a storage medium and a computer program product. The method comprises the following steps: acquiring an image to be detected, obtained by shooting through the unmanned aerial vehicle lens of a target unmanned aerial vehicle, the image being used for detecting the pollutant attachment condition of the lens surface; extracting image features from the image to be detected according to preset feature extraction information to obtain a feature extraction result corresponding to the image to be detected; determining a pollutant detection result for the unmanned aerial vehicle lens, comprising a pollution type and a pollution degree, according to the feature extraction result; and determining a target cleaning mode according to the pollution type and the pollution degree and sending a corresponding cleaning instruction to the target unmanned aerial vehicle, instructing it to perform a cleaning operation on the lens in the target cleaning mode. With this method, the unmanned aerial vehicle camera can be cleaned automatically, promptly and effectively, improving the shooting efficiency of the unmanned aerial vehicle and guaranteeing its shooting quality.

Description

Unmanned aerial vehicle lens processing method, unmanned aerial vehicle lens processing device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and apparatus for processing a lens of an unmanned aerial vehicle, a computer device, a storage medium, and a computer program product.
Background
With the development of unmanned aerial vehicle technology, unmanned aerial vehicles are widely used in fields such as agriculture, surveying and mapping, and environmental monitoring. When an unmanned aerial vehicle executes an autonomous flight task, various pollutants such as dust, haze and water droplets can attach to the surface of its lens. The attached pollutants reduce image clarity, degrade the shooting results, and can also affect the control and navigation of the unmanned aerial vehicle.
In the conventional approach, cleaning is performed manually: workers must travel to remote airfield sites to maintain the unmanned aerial vehicle. Pollutant attachment therefore cannot be handled in time while the unmanned aerial vehicle is executing a task, maintenance efficiency is low, and the task execution effect is poor.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an unmanned aerial vehicle lens processing method, apparatus, computer device, storage medium, and computer program product that can improve the unmanned aerial vehicle lens processing efficiency.
In a first aspect, the present application provides a method for processing a lens of an unmanned aerial vehicle, where the method includes:
acquiring an image to be detected, which is obtained by shooting an unmanned aerial vehicle lens of a target unmanned aerial vehicle; the image to be detected is used for detecting the pollutant attachment condition of the surface of the unmanned aerial vehicle lens;
extracting image features of the image to be detected according to preset feature extraction information to obtain feature extraction results corresponding to the image to be detected; the preset feature extraction information is determined based on pollutant detection on the surface of the unmanned aerial vehicle lens;
determining a pollutant detection result aiming at the unmanned aerial vehicle lens according to the feature extraction result; the pollutant detection result comprises a pollution type and a pollution degree;
determining a target cleaning mode according to the pollution type and the pollution degree, and sending a cleaning instruction corresponding to the target cleaning mode to the target unmanned aerial vehicle; the cleaning instruction is used for indicating the target unmanned aerial vehicle to execute the cleaning operation of the unmanned aerial vehicle lens according to the target cleaning mode.
In one embodiment, the extracting the image features of the image to be detected according to the preset feature extraction information to obtain a feature extraction result corresponding to the image to be detected includes:
acquiring a gray level image corresponding to the image to be detected, and adjusting the gray level number of the gray level image;
determining frequency information corresponding to each pixel in the adjusted gray level image, and generating a pixel characteristic matrix according to the frequency information corresponding to each pixel;
and extracting to obtain a feature extraction result corresponding to the image to be detected by adopting the pixel feature matrix and the preset feature extraction information.
In one embodiment, the extracting the feature extraction result corresponding to the image to be detected by using the pixel feature matrix and the preset feature extraction information includes:
normalizing the pixel characteristic matrix;
extracting different types of features from the processed pixel feature matrix according to the preset feature extraction information; the preset feature extraction information is used for indicating the type of the feature to be extracted;
and obtaining the feature vector of the image to be detected according to the extracted features of different types, and taking the feature vector as the feature extraction result.
In one embodiment, the determining, according to the feature extraction result, a pollutant detection result for the lens of the unmanned aerial vehicle includes:
inputting the feature vector of the image to be detected into a pre-trained pollutant classification model;
obtaining a pollutant detection result of the unmanned aerial vehicle lens according to pollutant prediction information output by the pollutant classification model;
the pre-trained pollutant classification model comprises a support vector machine, wherein the support vector machine is used for classifying pollutants of different types according to an optimal decision boundary.
In one embodiment, before the step of extracting the image features of the image to be detected according to the preset feature extraction information to obtain the feature extraction result corresponding to the image to be detected, the method further includes:
performing image preprocessing on the image to be detected according to the preprocessing operation information to obtain a processed image to be detected;
and executing the step of extracting the image features of the image to be detected according to preset feature extraction information by adopting the processed image to be detected to obtain a feature extraction result corresponding to the image to be detected.
In one embodiment, the target unmanned aerial vehicle is configured with a monitoring device, and after the step of sending the cleaning instruction corresponding to the target cleaning mode to the target unmanned aerial vehicle, the method further includes:
acquiring the lens state information collected by the monitoring device; the lens state information is obtained by monitoring the cleaning state of the unmanned aerial vehicle lens in real time;
generating a cleaning feedback result for the unmanned aerial vehicle lens according to the lens state information; and the cleaning feedback result is used for representing the processing condition of executing cleaning operation on the unmanned aerial vehicle lens.
In a second aspect, the present application further provides an unmanned aerial vehicle lens processing apparatus, the apparatus comprising:
the to-be-detected image acquisition module is used for acquiring a to-be-detected image obtained by shooting through the unmanned aerial vehicle lens of the target unmanned aerial vehicle; the image to be detected is used for detecting the pollutant attachment condition of the surface of the unmanned aerial vehicle lens;
the image feature extraction module is used for extracting image features of the image to be detected according to preset feature extraction information to obtain feature extraction results corresponding to the image to be detected; the preset feature extraction information is determined based on pollutant detection on the surface of the unmanned aerial vehicle lens;
the pollutant detection result obtaining module is used for determining a pollutant detection result aiming at the unmanned aerial vehicle lens according to the feature extraction result; the pollutant detection result comprises a pollution type and a pollution degree;
the cleaning instruction sending module is used for determining a target cleaning mode according to the pollution type and the pollution degree and sending a cleaning instruction corresponding to the target cleaning mode to the target unmanned aerial vehicle; the cleaning instruction is used for indicating the target unmanned aerial vehicle to execute the cleaning operation of the unmanned aerial vehicle lens according to the target cleaning mode.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
the to-be-detected image acquisition module is used for acquiring a to-be-detected image obtained by shooting through the unmanned aerial vehicle lens of the target unmanned aerial vehicle; the image to be detected is used for detecting the pollutant attachment condition of the surface of the unmanned aerial vehicle lens;
the image feature extraction module is used for extracting image features of the image to be detected according to preset feature extraction information to obtain feature extraction results corresponding to the image to be detected; the preset feature extraction information is determined based on pollutant detection on the surface of the unmanned aerial vehicle lens;
the pollutant detection result obtaining module is used for determining a pollutant detection result aiming at the unmanned aerial vehicle lens according to the feature extraction result; the pollutant detection result comprises a pollution type and a pollution degree;
the cleaning instruction sending module is used for determining a target cleaning mode according to the pollution type and the pollution degree and sending a cleaning instruction corresponding to the target cleaning mode to the target unmanned aerial vehicle; the cleaning instruction is used for indicating the target unmanned aerial vehicle to execute the cleaning operation of the unmanned aerial vehicle lens according to the target cleaning mode.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
the to-be-detected image acquisition module is used for acquiring a to-be-detected image obtained by shooting through the unmanned aerial vehicle lens of the target unmanned aerial vehicle; the image to be detected is used for detecting the pollutant attachment condition of the surface of the unmanned aerial vehicle lens;
the image feature extraction module is used for extracting image features of the image to be detected according to preset feature extraction information to obtain feature extraction results corresponding to the image to be detected; the preset feature extraction information is determined based on pollutant detection on the surface of the unmanned aerial vehicle lens;
the pollutant detection result obtaining module is used for determining a pollutant detection result aiming at the unmanned aerial vehicle lens according to the feature extraction result; the pollutant detection result comprises a pollution type and a pollution degree;
the cleaning instruction sending module is used for determining a target cleaning mode according to the pollution type and the pollution degree and sending a cleaning instruction corresponding to the target cleaning mode to the target unmanned aerial vehicle; the cleaning instruction is used for indicating the target unmanned aerial vehicle to execute the cleaning operation of the unmanned aerial vehicle lens according to the target cleaning mode.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprising a computer program which, when executed by a processor, performs the steps of:
the to-be-detected image acquisition module is used for acquiring a to-be-detected image obtained by shooting through the unmanned aerial vehicle lens of the target unmanned aerial vehicle; the image to be detected is used for detecting the pollutant attachment condition of the surface of the unmanned aerial vehicle lens;
the image feature extraction module is used for extracting image features of the image to be detected according to preset feature extraction information to obtain feature extraction results corresponding to the image to be detected; the preset feature extraction information is determined based on pollutant detection on the surface of the unmanned aerial vehicle lens;
the pollutant detection result obtaining module is used for determining a pollutant detection result aiming at the unmanned aerial vehicle lens according to the feature extraction result; the pollutant detection result comprises a pollution type and a pollution degree;
the cleaning instruction sending module is used for determining a target cleaning mode according to the pollution type and the pollution degree and sending a cleaning instruction corresponding to the target cleaning mode to the target unmanned aerial vehicle; the cleaning instruction is used for indicating the target unmanned aerial vehicle to execute the cleaning operation of the unmanned aerial vehicle lens according to the target cleaning mode.
According to the unmanned aerial vehicle lens processing method and apparatus, computer device, storage medium and computer program product above, an image to be detected is obtained by shooting through the unmanned aerial vehicle lens of the target unmanned aerial vehicle and is used to detect the pollutant attachment condition of the lens surface. Image features are then extracted from the image according to preset feature extraction information, which is determined based on pollutant detection on the lens surface, to obtain a corresponding feature extraction result. A pollutant detection result for the lens, comprising a pollution type and a pollution degree, is determined according to the feature extraction result. A target cleaning mode is determined according to the pollution type and the pollution degree, and a cleaning instruction corresponding to the target cleaning mode is sent to the target unmanned aerial vehicle, instructing it to perform the cleaning operation on the lens in the target cleaning mode. Automatic cleaning of the unmanned aerial vehicle camera is thereby realized: by monitoring the cleanliness of the lens surface, the system automatically senses whether the lens is polluted and performs the corresponding cleaning operation. The lens can therefore be cleaned promptly and effectively, the shooting efficiency of the unmanned aerial vehicle is improved, and the shooting quality is guaranteed.
Drawings
Fig. 1 is a flow chart of a method for processing a lens of a drone in one embodiment;
FIG. 2 is a flow chart of an image feature extraction step in one embodiment;
fig. 3 is a flow chart of a method for processing a lens of a drone according to another embodiment;
fig. 4 is a block diagram of a lens processing device of the unmanned aerial vehicle in one embodiment;
fig. 5 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In one embodiment, as shown in fig. 1, a method for processing a lens of an unmanned aerial vehicle is provided. This embodiment is described as applied to a terminal for illustration; it can be understood that the method can also be applied to a server, or to a system comprising the terminal and the server and implemented through interaction between them. In this embodiment, the method includes the following steps:
Step 101, acquiring an image to be detected, which is obtained by shooting through the unmanned aerial vehicle lens of a target unmanned aerial vehicle;
As an example, the cleanliness of the lens surface can be monitored for one or more unmanned aerial vehicles, and each unmanned aerial vehicle to be monitored can be taken as a target unmanned aerial vehicle.
The image to be detected can be used for detecting the pollutant attachment condition of the surface of the unmanned aerial vehicle lens so as to judge whether the unmanned aerial vehicle lens is polluted or not based on the pollutant attachment condition, namely, the cleanliness of the surface of the unmanned aerial vehicle lens.
In practical application, the unmanned aerial vehicle to be monitored can be used as a target unmanned aerial vehicle, and aiming at the target unmanned aerial vehicle, the image data shot by the unmanned aerial vehicle lens of the unmanned aerial vehicle can be obtained as an image to be detected.
For example, the lens equipped on the unmanned aerial vehicle may shoot in real time, and the captured image data (i.e. the image to be detected) may then be transmitted to a processing unit. The processing unit may reside in an unmanned aerial vehicle lens monitoring terminal, with data transferred over the communication link between the unmanned aerial vehicle and the terminal, or it may reside in a monitoring device configured on the unmanned aerial vehicle itself, so that detection can then be performed on the captured image data.
Step 102, extracting image features of the image to be detected according to preset feature extraction information to obtain feature extraction results corresponding to the image to be detected;
The preset feature extraction information may be determined based on pollutant detection on the surface of the lens of the unmanned aerial vehicle; for example, in order to detect pollutants on the lens surface, features such as the contrast, correlation, energy and entropy of the image may be extracted.
After the image to be detected is obtained, a gray level image corresponding to it can be acquired and its number of gray levels adjusted. Then, for each pixel in the adjusted gray level image, the corresponding frequency information can be determined, and a pixel feature matrix generated from the frequency information of all pixels. The feature extraction result corresponding to the image to be detected can then be extracted using the pixel feature matrix and the preset feature extraction information.
In an alternative embodiment, the preprocessing operation may be performed on the acquired image to be detected, for example, but not limited to, operations such as image noise reduction and gray scale processing, and further image feature extraction may be performed on the processed image to be detected, so that accuracy and reliability of feature extraction may be further improved.
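The embodiment does not fix a particular preprocessing pipeline; as one illustrative sketch (the function name, the luminance weights and the 3x3 mean filter are our own assumptions, not prescribed by this application), grayscale conversion and simple noise reduction might look like:

```python
import numpy as np

def preprocess(image_rgb: np.ndarray) -> np.ndarray:
    """Convert an RGB frame to grayscale and apply a 3x3 mean filter."""
    # Luminance-weighted grayscale conversion (ITU-R BT.601 weights).
    gray = image_rgb @ np.array([0.299, 0.587, 0.114])
    # Simple 3x3 mean filter for noise reduction; borders use reflection padding.
    padded = np.pad(gray, 1, mode="reflect")
    smoothed = sum(
        padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    return smoothed

# Hypothetical 8x8 RGB frame standing in for a lens image.
frame = np.random.default_rng(0).integers(0, 256, size=(8, 8, 3)).astype(float)
out = preprocess(frame)
print(out.shape)  # (8, 8)
```

Any denoising filter (median, Gaussian) could be substituted; the point is only that feature extraction then operates on the smoothed gray image.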
Specifically, feature extraction can be performed on a preprocessed image (i.e., a processed image to be detected) based on a GLCM (Gray-level Co-occurrence Matrix) image processing technology, for example, features such as contrast, correlation, energy, entropy and the like of the image can be mainly extracted, so as to obtain a feature extraction result corresponding to the image to be detected. Therefore, the GLCM algorithm is adopted to detect pollutants, so that the method has higher accuracy and reliability, and the problems of misjudgment and missed judgment can be effectively avoided.
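The four texture features named above can be computed directly from a normalized co-occurrence matrix. The following sketch (the function name and the toy diagonal matrix are ours) uses the standard Haralick-style definitions of contrast, correlation, energy and entropy:

```python
import numpy as np

def glcm_features(p: np.ndarray) -> dict:
    """Compute contrast, correlation, energy and entropy from a
    normalized gray-level co-occurrence matrix p (entries sum to 1)."""
    levels = p.shape[0]
    i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    nz = p[p > 0]  # drop zero entries to avoid log(0) in the entropy term
    return {
        "contrast": ((i - j) ** 2 * p).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j),
        "energy": (p ** 2).sum(),
        "entropy": -(nz * np.log2(nz)).sum(),
    }

# Toy 4-level GLCM concentrated on the diagonal (a smooth, uniform patch).
p = np.diag([0.4, 0.3, 0.2, 0.1])
feats = glcm_features(p)
print(feats["contrast"])  # 0.0 for a purely diagonal GLCM
```

A heavily polluted lens tends to blur local texture, which shifts mass toward the GLCM diagonal and lowers contrast, so these four numbers form a plausible feature vector for the classifier described below.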
Step 103, determining a pollutant detection result aiming at the unmanned aerial vehicle lens according to the feature extraction result; the pollutant detection result comprises a pollution type and a pollution degree;
in a specific implementation, the feature vector of the image to be detected can be input into a pre-trained pollutant classification model, and then a pollutant detection result of the unmanned aerial vehicle lens can be obtained according to pollutant prediction information output by the pollutant classification model, wherein the pollutant detection result can comprise a pollution type and a pollution degree.
In an example, the pre-trained pollutant classification model may include a support vector machine, which classifies different types of pollutants according to an optimal decision boundary. The extracted features (i.e. the feature extraction result) can be passed into a classifier (i.e. the pre-trained pollutant classification model) such as a support vector machine; by adopting the support vector machine (Support Vector Machine, SVM) algorithm, the lens cleanliness (e.g. the cleanliness of the lens surface of the unmanned aerial vehicle) can be classified according to the feature vector. When the unmanned aerial vehicle lens is found to be polluted, the pollution type and the pollution degree can be further determined, so that the lens can be treated with the corresponding cleaning mode.
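A minimal sketch of this classification step, using scikit-learn's SVC; the feature values, class labels and hyperparameters below are invented for illustration and are not taken from the application:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Hypothetical training set: 4-dim GLCM feature vectors
# (contrast, correlation, energy, entropy) with made-up class centers.
centers = {
    "clean": [0.1, 0.9, 0.6, 1.0],
    "dust": [2.0, 0.4, 0.2, 3.0],
    "water_drop": [0.8, 0.7, 0.3, 2.0],
}
X, y = [], []
for label, c in centers.items():
    X.append(rng.normal(c, 0.05, size=(30, 4)))  # 30 noisy samples per class
    y += [label] * 30
X = np.vstack(X)

clf = SVC(kernel="rbf", C=10.0)  # SVM with an RBF-kernel decision boundary
clf.fit(X, y)
print(clf.predict([[2.0, 0.4, 0.2, 3.0]]))  # expected: ['dust']
```

In practice the model would be trained offline on labeled lens images and only `predict` would run on the monitoring terminal; the pollution degree could be a second label or a regression output.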
Step 104, determining a target cleaning mode according to the pollution type and the pollution degree, and sending a cleaning instruction corresponding to the target cleaning mode to the target unmanned aerial vehicle.
The cleaning instruction may be used to instruct the target unmanned aerial vehicle to perform a cleaning operation on the lens of the unmanned aerial vehicle according to a target cleaning mode.
In practical application, the correspondence among pollution types, pollution degrees and cleaning modes can be preset. After the unmanned aerial vehicle lens is judged to be polluted, the target cleaning mode can then be determined according to the pollution type and the pollution degree, and the cleaning instruction corresponding to the target cleaning mode sent to the target unmanned aerial vehicle.
Optionally, when the unmanned aerial vehicle lens is polluted, the unmanned aerial vehicle lens monitoring terminal can send a cleaning instruction through the corresponding communication module to the main controller of the unmanned aerial vehicle (i.e. the target unmanned aerial vehicle), instructing it to start its self-cleaning system and clean the lens.
For example, a plurality of cleaning modes can be preset for different pollution types and pollution degrees, including water spraying, air flow, laser ablation and ultrasonic vibration, so that the optimal cleaning mode can be selected according to the actual situation for automatic cleaning.
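The correspondence table itself is not specified in this embodiment; a hypothetical mapping from (pollution type, pollution degree) to one of the listed cleaning modes, with a fallback default, might be as simple as:

```python
# Hypothetical correspondence table; the application names water spraying,
# air flow, laser ablation and ultrasonic vibration as candidate modes,
# but the exact pairings below are our own illustration.
CLEANING_MODES = {
    ("dust", "light"): "air_flow",
    ("dust", "heavy"): "water_spray",
    ("water_drop", "light"): "air_flow",
    ("water_drop", "heavy"): "ultrasonic_vibration",
    ("stubborn_residue", "heavy"): "laser_ablation",
}

def target_cleaning_mode(pollution_type: str, degree: str) -> str:
    """Look up the target cleaning mode, falling back to water spraying."""
    return CLEANING_MODES.get((pollution_type, degree), "water_spray")

print(target_cleaning_mode("dust", "light"))  # air_flow
```

The cleaning instruction sent to the main controller would then carry the selected mode as a field.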
Compared with the traditional method, the technical scheme of this embodiment automatically senses whether the unmanned aerial vehicle lens is polluted by monitoring the cleanliness of the lens surface, and performs the corresponding cleaning operation for different pollution types and degrees. The camera can thus be cleaned promptly and effectively with a high degree of automation and no manual intervention, reducing maintenance cost, enabling all-weather automatic cleaning, and improving the shooting efficiency and quality of the unmanned aerial vehicle.
According to the unmanned aerial vehicle lens processing method above, the image to be detected is obtained by shooting through the unmanned aerial vehicle lens of the target unmanned aerial vehicle; image features are extracted from it according to preset feature extraction information to obtain the corresponding feature extraction result; the pollutant detection result for the lens is determined according to that result; and the target cleaning mode is then determined according to the pollution type and the pollution degree, with the corresponding cleaning instruction sent to the target unmanned aerial vehicle. Automatic cleaning of the unmanned aerial vehicle camera is thereby realized: whether the lens is polluted is sensed automatically and the corresponding cleaning operation performed, so the camera can be cleaned promptly and effectively, the shooting efficiency of the unmanned aerial vehicle is improved, and the shooting quality is guaranteed.
In one embodiment, as shown in fig. 2, the extracting the image features of the image to be detected according to the preset feature extraction information to obtain the feature extraction result corresponding to the image to be detected may include the following steps:
step 201, acquiring a gray level image corresponding to the image to be detected, and adjusting the gray level number of the gray level image;
step 202, determining frequency information corresponding to each pixel in the adjusted gray level image, and generating a pixel characteristic matrix according to the frequency information corresponding to each pixel;
and 203, extracting to obtain a feature extraction result corresponding to the image to be detected by adopting the pixel feature matrix and the preset feature extraction information.
In practical application, the gray-level co-occurrence matrix (GLCM) is used as an image feature extraction method that describes the spatial relationship between different gray levels in an image. In the process of performing image processing based on the GLCM, the image to be detected can first be converted into a gray image, and a calculation window size and one or more calculation directions can be defined; for example, a calculation window of fixed size can be determined for the gray image, and one or more directions, such as the horizontal, vertical, and diagonal directions, can be designated. Further, gray-level normalization can be performed on the gray image; for example, the number of gray levels in the gray image can be reduced to a designated number (for example, 8 or 16 levels) to obtain an adjusted gray image.
In an example, after the gray-level normalization, the frequency of occurrence of pixel pairs in the adjusted gray image (i.e., the frequency information) may be calculated. For example, for each pixel, the gray-level combination formed by the pixel and its adjacent pixel in the specified direction may be obtained, and the frequency of occurrence of each combination may be counted. The counted frequency information may then be stored in a matrix, such as a GLCM; that is, a pixel feature matrix may be generated according to the frequency information corresponding to each pixel.
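The pixel-pair frequency counting described above can be sketched with NumPy. This is a minimal illustration for a single direction (horizontal, distance 1), assuming the gray image has already been quantized to a small number of levels:

```python
import numpy as np

def glcm_horizontal(gray: np.ndarray, levels: int) -> np.ndarray:
    """Count co-occurrences of gray-level pairs (i, j) for horizontally
    adjacent pixels at distance 1; gray values must lie in [0, levels)."""
    glcm = np.zeros((levels, levels), dtype=np.int64)
    left = gray[:, :-1].ravel()   # each pixel that has a right neighbour
    right = gray[:, 1:].ravel()   # its right neighbour
    np.add.at(glcm, (left, right), 1)  # accumulate pair frequencies
    return glcm

# 2x3 toy image quantized to 4 gray levels
img = np.array([[0, 1, 1],
                [2, 3, 3]])
m = glcm_horizontal(img, levels=4)
# horizontal pairs are (0,1), (1,1), (2,3), (3,3)
print(m[0, 1], m[1, 1], m[3, 3])
```

Additional directions (vertical, diagonal) would be counted analogously by shifting the image along the corresponding axis.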
For example, the gray image may be loaded with the cv2.imread function and the number of gray levels defined; the GLCM itself may then be computed with, for example, the graycomatrix function of the scikit-image library (OpenCV does not provide a built-in GLCM function).
In this embodiment, a gray image corresponding to the image to be detected is acquired and its number of gray levels is adjusted; then, for each pixel in the adjusted gray image, the frequency information corresponding to the pixel is determined, and a pixel feature matrix is generated according to the frequency information corresponding to each pixel. The feature extraction result corresponding to the image to be detected is then extracted using the pixel feature matrix and the preset feature extraction information, so that pollutant detection can be performed based on the GLCM image processing technique, which helps to improve detection accuracy and reliability and avoid erroneous and missed judgments.
In an embodiment, the extracting, by using the pixel feature matrix and the preset feature extraction information, the feature extraction result corresponding to the image to be detected may include the following steps:
normalizing the pixel characteristic matrix; extracting different types of features from the processed pixel feature matrix according to the preset feature extraction information; the preset feature extraction information is used for indicating the type of the feature to be extracted; and obtaining the feature vector of the image to be detected according to the extracted features of different types, and taking the feature vector as the feature extraction result.
In a specific implementation, the GLCM (i.e., the pixel feature matrix) may be normalized; for example, each element of the GLCM may be divided by the sum of all elements to obtain a normalized GLCM, i.e., the processed pixel feature matrix.
In an example, the preset feature extraction information and the normalized GLCM matrix may be combined, and the texture features may be calculated, for example, various texture features such as contrast, correlation, energy, entropy, etc. (i.e., different types of features) may be extracted from the normalized GLCM matrix, and then the extracted texture features may be used as feature vectors (i.e., feature extraction results) of the image to be detected, which may be used for tasks such as image classification and recognition. Therefore, the cleaning degree of the unmanned aerial vehicle lens can be actively perceived based on the GLCM image processing technology, and whether pollutants exist on the surface of the lens can be judged through feature extraction and classification recognition.
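As a sketch of the texture-feature computation described above, contrast, energy, and entropy can be computed from a normalized GLCM as follows. The formulas used are the standard Haralick definitions; the embodiment itself does not fix a specific formula set:

```python
import numpy as np

def glcm_features(p: np.ndarray) -> dict:
    """Compute texture features from p, a normalized GLCM summing to 1."""
    n = p.shape[0]
    i, j = np.indices((n, n))
    contrast = np.sum(((i - j) ** 2) * p)   # local gray-level variation
    energy = np.sum(p ** 2)                 # uniformity of the texture
    nz = p[p > 0]                           # avoid log(0)
    entropy = -np.sum(nz * np.log2(nz))     # randomness of the texture
    return {"contrast": contrast, "energy": energy, "entropy": entropy}

# uniform 2x2 normalized GLCM: every gray-level pair equally likely
p = np.full((2, 2), 0.25)
f = glcm_features(p)
print(round(f["contrast"], 2), round(f["energy"], 2), round(f["entropy"], 2))
```

The resulting values would be concatenated into the feature vector used as the feature extraction result.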
For example, the Bhattacharyya distance between the GLCM-derived histogram and a reference histogram may be calculated using the cv2.compareHist function (with the cv2.HISTCMP_BHATTACHARYYA method) as a measure of texture dissimilarity.
As another example, the steps of the GLCM algorithm may be implemented in Python using the OpenCV and scikit-image libraries (OpenCV does not itself provide a GLCM function, so the graycomatrix and graycoprops functions of scikit-image are used here):
import cv2
from skimage.feature import graycomatrix, graycoprops
# load the image as a gray image
img = cv2.imread('image.jpg', 0)
# define the number of gray levels
levels = 256
# calculate the normalized GLCM (distance 1, horizontal direction)
glcm = graycomatrix(img, distances=[1], angles=[0], levels=levels, normed=True)
# calculate the statistical information
contrast = graycoprops(glcm, 'contrast')[0, 0]
# display the result
print("Contrast:", contrast)
In this embodiment, the pixel feature matrix is normalized; different types of features are then extracted from the processed pixel feature matrix according to the preset feature extraction information, and a feature vector of the image to be detected is obtained from the extracted features of different types as the feature extraction result. In this way, pollution of the unmanned aerial vehicle lens can be perceived automatically, and the accuracy of pollutant detection is improved.
In one embodiment, the determining the pollutant detection result for the unmanned aerial vehicle lens according to the feature extraction result may include the following steps:
inputting the feature vector of the image to be detected into a pre-trained pollutant classification model; and obtaining a pollutant detection result of the unmanned aerial vehicle lens according to the pollutant prediction information output by the pollutant classification model.
The pre-trained contaminant classification model may include a support vector machine, which is a classification model that may be used to classify different types of contaminants according to an optimal decision boundary.
In an example, when feature classification is performed according to the feature vector (i.e., the feature extraction result) of the image to be detected, the feature vector may be input into a classifier (i.e., the pre-trained contaminant classification model) for classification, such as a support vector machine (SVM), an artificial neural network (ANN), or a convolutional neural network (CNN), which is not particularly limited in this embodiment.
In yet another example, based on the support vector machine SVM, the following steps may be taken:
1. Collect and preprocess data: data may be collected and preprocessed, including but not limited to data cleansing, noise removal, and the like.
2. Extract features: feature extraction may be performed on the data, converting each sample from its original form into a feature vector that describes its characteristics.
3. Label the data: the data may be labeled, for example by marking the category to which each data point belongs.
4. Divide the data: the data may be divided into a training set and a testing set.
5. Train the model: the SVM model is trained on the training set to determine the optimal decision boundary and the support vectors. Based on the SVM algorithm, data of different categories are separated by searching for an optimal decision boundary; for two-dimensional data this boundary is a straight line, and for higher-dimensional data it is a hyperplane. The SVM algorithm determines the optimal decision boundary and support vectors by solving a convex quadratic programming problem, for which a standard optimization algorithm can be used.
6. Evaluate the model: the model performance is evaluated on the testing set, yielding metrics including accuracy, recall, F1 score, and the like.
7. Apply the model: the trained SVM model (i.e., the pre-trained pollutant classification model) can be used for the classification task, and follow-up operations can be performed according to the prediction results output by the model.
In an alternative embodiment, a kernel function may be selected during training of the SVM model to map the low-dimensional features into a high-dimensional feature space; the kernel function may be, for example, a linear kernel, a polynomial kernel, or a Gaussian kernel.
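As a brief illustration of the kernel choice mentioned above, scikit-learn's SVC accepts the kernel as a single parameter. The toy data below is an assumption used only to show that the kernels can be swapped without changing the rest of the training code:

```python
import numpy as np
from sklearn.svm import SVC

# toy, linearly separable 2-D data: class 0 near the origin, class 1 farther out
X = np.array([[0.0, 0.0], [0.2, 0.1], [2.0, 2.0], [2.2, 1.9]])
y = np.array([0, 0, 1, 1])

# the kernel maps low-dimensional features into a higher-dimensional space;
# 'poly' (polynomial) is a further option alongside the two shown here
for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel).fit(X, y)
    print(kernel, clf.predict([[0.1, 0.1], [2.1, 2.0]]))
```

On non-linearly-separable contaminant features, a Gaussian (RBF) kernel would typically be preferred over the linear kernel.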
For example, the support vector machine (SVM) algorithm may be implemented using the scikit-learn library in Python:
# import the necessary libraries and datasets
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn import svm
# load the iris dataset
iris = datasets.load_iris()
# divide the dataset into a training set and a testing set
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.4, random_state=0)
# create an SVM classifier
clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train)
# predict on the test set
y_pred = clf.predict(X_test)
# output the accuracy of the model
print("Accuracy:", clf.score(X_test, y_test))
In this embodiment, the feature vector of the image to be detected is input into the pre-trained pollutant classification model, and the pollutant detection result of the unmanned aerial vehicle lens is obtained according to the pollutant prediction information output by the model. Pollution of the unmanned aerial vehicle lens can thus be perceived automatically and the pollutant classified, providing data support for performing the corresponding cleaning operation for different pollution types and degrees, and improving the shooting efficiency of the unmanned aerial vehicle.
In one embodiment, before the step of extracting the image features of the image to be detected according to the preset feature extraction information to obtain the feature extraction result corresponding to the image to be detected, the method may further include the following steps:
performing image preprocessing on the image to be detected according to the preprocessing operation information to obtain a processed image to be detected; and executing the step of extracting the image features of the image to be detected according to preset feature extraction information by adopting the processed image to be detected to obtain a feature extraction result corresponding to the image to be detected.
In practical application, the acquired image to be detected can be preprocessed according to the preprocessing operation information; the preprocessing operations can include, but are not limited to, image noise reduction, gray-scale conversion, and the like. Image feature extraction can then be performed on the processed image to be detected, which can further improve the accuracy and reliability of the feature extraction.
In this embodiment, the image to be detected is preprocessed according to the preprocessing operation information to obtain a processed image to be detected, and then the processed image to be detected is adopted to execute the step of extracting the image features of the image to be detected according to the preset feature extraction information to obtain the feature extraction result corresponding to the image to be detected, so that the accuracy and reliability of feature extraction can be improved.
In one embodiment, the target unmanned aerial vehicle is configured with a monitoring device, and after the step of sending the cleaning instruction corresponding to the target cleaning mode to the target unmanned aerial vehicle, the method may further include the following steps:
acquiring lens state information acquired by the monitoring device; the lens state information is obtained by monitoring the cleaning state of the unmanned aerial vehicle lens in real time; generating a cleaning feedback result for the unmanned aerial vehicle lens according to the lens state information; and the cleaning feedback result is used for representing the processing condition of executing cleaning operation on the unmanned aerial vehicle lens.
In a specific implementation, the target unmanned aerial vehicle can be configured with a monitoring device, the cleaning state (namely, lens state information) of the lens of the unmanned aerial vehicle can be monitored in real time by adopting the monitoring device, and the monitoring result (namely, cleaning feedback result) can be fed back to the main controller and the operator through the communication module so as to further carry out unmanned aerial vehicle maintenance and service. Therefore, the unmanned aerial vehicle shooting device can be suitable for unmanned aerial vehicle shooting under different environments and conditions, has wide application, and is simple in structure, high in reliability and easy to maintain and replace.
In this embodiment, the lens state information collected by the monitoring device is acquired, and a cleaning feedback result for the unmanned aerial vehicle lens is generated according to the lens state information, so that the unmanned aerial vehicle camera can be cleaned in time and the result fed back effectively, guaranteeing the shooting quality of the unmanned aerial vehicle.
In one embodiment, as shown in fig. 3, a flow diagram of another unmanned aerial vehicle lens processing method is provided. In this embodiment, the method includes the steps of:
In step 301, an image to be detected obtained by shooting with the unmanned aerial vehicle lens of the target unmanned aerial vehicle is acquired, and the image to be detected is preprocessed according to the preprocessing operation information to obtain a processed image to be detected.
In step 302, a gray image corresponding to the image to be detected is acquired and its number of gray levels is adjusted; for each pixel in the adjusted gray image, the frequency information corresponding to the pixel is determined, and a pixel feature matrix is generated according to the frequency information corresponding to each pixel.
In step 303, the pixel feature matrix is normalized, and different types of features are extracted from the processed pixel feature matrix according to the preset feature extraction information.
In step 304, a feature vector of the image to be detected is obtained from the extracted features of different types as the feature extraction result.
In step 305, the feature vector of the image to be detected is input into the pre-trained pollutant classification model, and the pollutant detection result of the unmanned aerial vehicle lens is obtained according to the pollutant prediction information output by the model.
In step 306, a target cleaning mode is determined according to the pollution type and the pollution degree, and a cleaning instruction corresponding to the target cleaning mode is sent to the target unmanned aerial vehicle.
In step 307, the lens state information collected by the monitoring device is acquired, and a cleaning feedback result for the unmanned aerial vehicle lens is generated according to the lens state information.
It should be noted that for the specific limitations of the above steps, reference may be made to the specific limitations of the unmanned aerial vehicle lens processing method above, which are not repeated here.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with at least some of the other steps or sub-steps.
Based on the same inventive concept, an embodiment of the application also provides an unmanned aerial vehicle lens processing device for implementing the above unmanned aerial vehicle lens processing method. The implementation of the solution provided by the device is similar to that described for the method above, so for the specific limitations of the one or more unmanned aerial vehicle lens processing device embodiments below, reference may be made to the limitations of the unmanned aerial vehicle lens processing method above, which are not repeated here.
In one embodiment, as shown in fig. 4, there is provided an unmanned aerial vehicle lens processing apparatus, including:
the to-be-detected image acquisition module 401 is configured to acquire an to-be-detected image obtained by shooting a lens of an unmanned aerial vehicle of the target unmanned aerial vehicle; the image to be detected is used for detecting the pollutant attachment condition of the surface of the unmanned aerial vehicle lens;
the image feature extraction module 402 is configured to extract image features of the image to be detected according to preset feature extraction information, so as to obtain a feature extraction result corresponding to the image to be detected; the preset feature extraction information is determined based on pollutant detection on the surface of the unmanned aerial vehicle lens;
a pollutant detection result obtaining module 403, configured to determine a pollutant detection result for the lens of the unmanned aerial vehicle according to the feature extraction result; the pollutant detection result comprises a pollution type and a pollution degree;
the cleaning instruction sending module 404 is configured to determine a target cleaning mode according to the pollution type and the pollution level, and send a cleaning instruction corresponding to the target cleaning mode to the target unmanned aerial vehicle; the cleaning instruction is used for indicating the target unmanned aerial vehicle to execute the cleaning operation of the unmanned aerial vehicle lens according to the target cleaning mode.
In one embodiment, the image feature extraction module 402 includes:
the gray level number adjusting sub-module is used for acquiring a gray level image corresponding to the image to be detected and adjusting the gray level number of the gray level image;
the characteristic matrix generation sub-module is used for determining frequency information corresponding to each pixel in the adjusted gray image, and generating a pixel characteristic matrix according to the frequency information corresponding to each pixel;
and the feature extraction result obtaining submodule is used for extracting and obtaining a feature extraction result corresponding to the image to be detected by adopting the pixel feature matrix and the preset feature extraction information.
In one embodiment, the feature extraction result obtaining submodule includes:
the normalization processing unit is used for performing normalization processing on the pixel characteristic matrix;
the feature extraction unit is used for extracting different types of features from the processed pixel feature matrix according to the preset feature extraction information; the preset feature extraction information is used for indicating the type of the feature to be extracted;
and the feature vector obtaining unit is used for obtaining the feature vector of the image to be detected according to the extracted features of the different types, and taking the feature vector as the feature extraction result.
In one embodiment, the contaminant detection result obtaining module 403 includes:
the model processing submodule is used for inputting the feature vector of the image to be detected into a pre-trained pollutant classification model;
the pollutant classification sub-module is used for obtaining a pollutant detection result of the unmanned aerial vehicle lens according to the pollutant prediction information output by the pollutant classification model;
the pre-trained pollutant classification model comprises a support vector machine, wherein the support vector machine is used for classifying pollutants of different types according to an optimal decision boundary.
In one embodiment, the apparatus further comprises:
the preprocessing module is used for preprocessing the image to be detected according to the preprocessing operation information to obtain a processed image to be detected;
and executing a feature extraction module, which is used for extracting the image features of the image to be detected according to preset feature extraction information by adopting the processed image to be detected, and obtaining a feature extraction result corresponding to the image to be detected.
In one embodiment, the target drone is configured with a monitoring device, the device further comprising:
the monitoring module is used for acquiring the lens state information acquired by the monitoring device; the lens state information is obtained by monitoring the cleaning state of the unmanned aerial vehicle lens in real time;
The cleaning feedback module is used for generating a cleaning feedback result aiming at the unmanned aerial vehicle lens according to the lens state information; and the cleaning feedback result is used for representing the processing condition of executing cleaning operation on the unmanned aerial vehicle lens.
The modules in the unmanned aerial vehicle lens processing device can be all or partially realized by software, hardware and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure of which may be as shown in fig. 5. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input means. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program when executed by a processor implements a method of unmanned aerial vehicle lens processing.
It will be appreciated by those skilled in the art that the structure shown in FIG. 5 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
acquiring an image to be detected, which is obtained by shooting an unmanned aerial vehicle lens of a target unmanned aerial vehicle; the image to be detected is used for detecting the pollutant attachment condition of the surface of the unmanned aerial vehicle lens;
extracting image features of the image to be detected according to preset feature extraction information to obtain feature extraction results corresponding to the image to be detected; the preset feature extraction information is determined based on pollutant detection on the surface of the unmanned aerial vehicle lens;
determining a pollutant detection result aiming at the unmanned aerial vehicle lens according to the characteristic extraction result; the pollutant detection result comprises a pollution type and a pollution degree;
Determining a target cleaning mode according to the pollution type and the pollution degree, and sending a cleaning instruction corresponding to the target cleaning mode to the target unmanned aerial vehicle; the cleaning instruction is used for indicating the target unmanned aerial vehicle to execute the cleaning operation of the unmanned aerial vehicle lens according to the target cleaning mode.
In one embodiment, the steps of the unmanned aerial vehicle lens processing method in the other embodiments described above are also implemented when the processor executes the computer program.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring an image to be detected, which is obtained by shooting an unmanned aerial vehicle lens of a target unmanned aerial vehicle; the image to be detected is used for detecting the pollutant attachment condition of the surface of the unmanned aerial vehicle lens;
extracting image features of the image to be detected according to preset feature extraction information to obtain feature extraction results corresponding to the image to be detected; the preset feature extraction information is determined based on pollutant detection on the surface of the unmanned aerial vehicle lens;
determining a pollutant detection result aiming at the unmanned aerial vehicle lens according to the characteristic extraction result; the pollutant detection result comprises a pollution type and a pollution degree;
Determining a target cleaning mode according to the pollution type and the pollution degree, and sending a cleaning instruction corresponding to the target cleaning mode to the target unmanned aerial vehicle; the cleaning instruction is used for indicating the target unmanned aerial vehicle to execute the cleaning operation of the unmanned aerial vehicle lens according to the target cleaning mode.
In one embodiment, the computer program when executed by the processor further implements the steps of the unmanned aerial vehicle lens processing method in the other embodiments described above.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of:
acquiring an image to be detected, which is obtained by shooting an unmanned aerial vehicle lens of a target unmanned aerial vehicle; the image to be detected is used for detecting the pollutant attachment condition of the surface of the unmanned aerial vehicle lens;
extracting image features of the image to be detected according to preset feature extraction information to obtain feature extraction results corresponding to the image to be detected; the preset feature extraction information is determined based on pollutant detection on the surface of the unmanned aerial vehicle lens;
determining a pollutant detection result aiming at the unmanned aerial vehicle lens according to the characteristic extraction result; the pollutant detection result comprises a pollution type and a pollution degree;
Determining a target cleaning mode according to the pollution type and the pollution degree, and sending a cleaning instruction corresponding to the target cleaning mode to the target unmanned aerial vehicle; the cleaning instruction is used for indicating the target unmanned aerial vehicle to execute the cleaning operation of the unmanned aerial vehicle lens according to the target cleaning mode.
In one embodiment, the computer program when executed by the processor further implements the steps of the unmanned aerial vehicle lens processing method in the other embodiments described above.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processor referred to in the embodiments provided in the present application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, or the like, but is not limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples illustrate only a few embodiments of the application and are described in detail herein without thereby limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of the application should be assessed as that of the appended claims.

Claims (10)

1. A method for processing a lens of an unmanned aerial vehicle, the method comprising:
acquiring an image to be detected, obtained by photographing an unmanned aerial vehicle lens of a target unmanned aerial vehicle; the image to be detected is used for detecting the pollutant attachment condition of the surface of the unmanned aerial vehicle lens;
extracting image features of the image to be detected according to preset feature extraction information to obtain feature extraction results corresponding to the image to be detected; the preset feature extraction information is determined based on pollutant detection on the surface of the unmanned aerial vehicle lens;
determining a pollutant detection result for the unmanned aerial vehicle lens according to the feature extraction result; the pollutant detection result comprises a pollution type and a pollution degree;
determining a target cleaning mode according to the pollution type and the pollution degree, and sending a cleaning instruction corresponding to the target cleaning mode to the target unmanned aerial vehicle; the cleaning instruction is used for instructing the target unmanned aerial vehicle to execute a cleaning operation on the unmanned aerial vehicle lens in the target cleaning mode.
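The claim-1 flow from pollutant detection result to cleaning instruction can be sketched as follows. The pollution types, degree thresholds, and cleaning-mode names here are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch of the claim-1 pipeline; all names and thresholds
# are assumptions made for illustration only.

def detect_contaminant(feature_vector):
    """Stand-in for the trained classifier: returns (pollution type, degree)."""
    # Assumed rule: mean feature energy maps to a pollution type and degree.
    energy = sum(feature_vector) / len(feature_vector)
    ptype = "dust" if energy < 0.5 else "water_stain"
    degree = "light" if energy < 0.3 else "heavy"
    return ptype, degree

def choose_cleaning_mode(ptype, degree):
    """Map (pollution type, pollution degree) to a target cleaning mode."""
    modes = {
        ("dust", "light"): "air_blow",
        ("dust", "heavy"): "brush",
        ("water_stain", "light"): "wipe",
        ("water_stain", "heavy"): "wash_and_wipe",
    }
    return modes[(ptype, degree)]

ptype, degree = detect_contaminant([0.2, 0.1, 0.3])
print(choose_cleaning_mode(ptype, degree))  # air_blow
```

The chosen mode would then be serialized into the cleaning instruction sent to the target unmanned aerial vehicle.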
2. The method according to claim 1, wherein the extracting the image features of the image to be detected according to the preset feature extraction information to obtain the feature extraction result corresponding to the image to be detected includes:
acquiring a grayscale image corresponding to the image to be detected, and adjusting the number of gray levels of the grayscale image;
determining frequency information corresponding to each pixel in the adjusted grayscale image, and generating a pixel feature matrix according to the frequency information corresponding to each pixel;
and extracting a feature extraction result corresponding to the image to be detected by using the pixel feature matrix and the preset feature extraction information.
3. The method according to claim 2, wherein the extracting, by using the pixel feature matrix and the preset feature extraction information, the feature extraction result corresponding to the image to be detected comprises:
normalizing the pixel feature matrix;
extracting different types of features from the processed pixel feature matrix according to the preset feature extraction information; the preset feature extraction information is used for indicating the type of the feature to be extracted;
and obtaining the feature vector of the image to be detected according to the extracted features of different types, and taking the feature vector as the feature extraction result.
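A minimal sketch of the gray-level feature extraction described in claims 2 and 3, assuming a histogram-style frequency matrix and "energy"/"entropy" as the preset feature types (the patent specifies neither the matrix construction nor the feature types; both are stand-ins):

```python
import math

def quantize(gray_image, levels=8):
    """Reduce 0-255 gray values to a smaller number of gray levels (claim 2)."""
    return [[pixel * levels // 256 for pixel in row] for row in gray_image]

def frequency_matrix(quantized, levels=8):
    """Count how often each gray level occurs -> pixel feature matrix.
    A plain histogram is used here as an assumed stand-in."""
    counts = [0] * levels
    for row in quantized:
        for p in row:
            counts[p] += 1
    return counts

def normalize(counts):
    """Normalize the pixel feature matrix (claim 3, first step)."""
    total = sum(counts)
    return [c / total for c in counts]

def extract_features(probs, kinds=("energy", "entropy")):
    """Extract the feature types named by the preset extraction information."""
    feats = {}
    if "energy" in kinds:
        feats["energy"] = sum(p * p for p in probs)
    if "entropy" in kinds:
        feats["entropy"] = -sum(p * math.log2(p) for p in probs if p > 0)
    return feats

img = [[0, 32, 64], [128, 160, 255]]
probs = normalize(frequency_matrix(quantize(img)))
print(extract_features(probs))
```

The resulting feature dictionary would be flattened into the feature vector that claim 3 takes as the feature extraction result.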
4. The method according to claim 3, wherein the determining a pollutant detection result for the unmanned aerial vehicle lens according to the feature extraction result comprises:
inputting the feature vector of the image to be detected into a pre-trained pollutant classification model;
obtaining a pollutant detection result of the unmanned aerial vehicle lens according to pollutant prediction information output by the pollutant classification model;
the pre-trained pollutant classification model comprises a support vector machine, wherein the support vector machine is used for classifying pollutants of different types according to an optimal decision boundary.
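Claim 4's support-vector-machine classifier can be sketched with scikit-learn; the library choice, the two-feature vectors, and the contaminant labels are assumptions for illustration, not part of the patent:

```python
# Hypothetical sketch of claim 4: an SVM separating contaminant types
# by a maximum-margin ("optimal") decision boundary.
from sklearn.svm import SVC

# Synthetic (energy, entropy) feature vectors labelled by contaminant type.
X = [[0.9, 0.4], [0.8, 0.5], [0.2, 2.4], [0.1, 2.6]]
y = ["dust", "dust", "water_stain", "water_stain"]

clf = SVC(kernel="linear")  # linear maximum-margin decision boundary
clf.fit(X, y)

print(clf.predict([[0.85, 0.45]])[0])  # prints "dust"
```

In practice the model would be trained offline on labelled lens images and only the prediction step would run in the detection loop.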
5. The method according to claim 1, wherein before the step of extracting image features of the image to be detected according to preset feature extraction information to obtain feature extraction results corresponding to the image to be detected, the method further comprises:
performing image preprocessing on the image to be detected according to preprocessing operation information to obtain a processed image to be detected;
and executing, by using the processed image to be detected, the step of extracting image features of the image to be detected according to the preset feature extraction information to obtain the feature extraction result corresponding to the image to be detected.
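The preprocessing step of claim 5 can be sketched as applying a preset sequence of operations before feature extraction; the operation names ("denoise", "stretch") and the nested-list image format are illustrative assumptions:

```python
# Hypothetical sketch of claim 5: apply preset preprocessing operations
# to the image to be detected before feature extraction.

def denoise(img):
    """3x1 horizontal mean filter as a stand-in denoising step."""
    out = []
    for row in img:
        new = []
        for i, p in enumerate(row):
            left = row[max(i - 1, 0)]
            right = row[min(i + 1, len(row) - 1)]
            new.append((left + p + right) // 3)
        out.append(new)
    return out

def stretch(img):
    """Linear contrast stretch to the full 0-255 range."""
    lo = min(min(r) for r in img)
    hi = max(max(r) for r in img)
    scale = 255 / (hi - lo) if hi > lo else 0
    return [[round((p - lo) * scale) for p in r] for r in img]

OPS = {"denoise": denoise, "stretch": stretch}

def preprocess(img, operation_info=("denoise", "stretch")):
    """Apply the preset operations in order, as claim 5 describes."""
    for name in operation_info:
        img = OPS[name](img)
    return img

print(preprocess([[10, 20, 30], [40, 50, 60]]))
```

The processed image would then feed the feature extraction step of claim 1 unchanged.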
6. The method according to any one of claims 1 to 5, wherein the target unmanned aerial vehicle is configured with a monitoring device, and after the step of sending the cleaning instruction corresponding to the target cleaning mode to the target unmanned aerial vehicle, the method further comprises:
acquiring lens state information acquired by the monitoring device; the lens state information is obtained by monitoring the cleaning state of the unmanned aerial vehicle lens in real time;
generating a cleaning feedback result for the unmanned aerial vehicle lens according to the lens state information; the cleaning feedback result is used for characterizing the processing status of the cleaning operation executed on the unmanned aerial vehicle lens.
7. An unmanned aerial vehicle lens processing apparatus, the apparatus comprising:
the to-be-detected image acquisition module is used for acquiring an image to be detected obtained by photographing the unmanned aerial vehicle lens of the target unmanned aerial vehicle; the image to be detected is used for detecting the pollutant attachment condition of the surface of the unmanned aerial vehicle lens;
the image feature extraction module is used for extracting image features of the image to be detected according to preset feature extraction information to obtain feature extraction results corresponding to the image to be detected; the preset feature extraction information is determined based on pollutant detection on the surface of the unmanned aerial vehicle lens;
the pollutant detection result obtaining module is used for determining a pollutant detection result for the unmanned aerial vehicle lens according to the feature extraction result; the pollutant detection result comprises a pollution type and a pollution degree;
the cleaning instruction sending module is used for determining a target cleaning mode according to the pollution type and the pollution degree and sending a cleaning instruction corresponding to the target cleaning mode to the target unmanned aerial vehicle; the cleaning instruction is used for instructing the target unmanned aerial vehicle to execute a cleaning operation on the unmanned aerial vehicle lens in the target cleaning mode.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 6.
9. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 6.
CN202310636859.0A 2023-05-31 2023-05-31 Unmanned aerial vehicle lens processing method, unmanned aerial vehicle lens processing device, computer equipment and storage medium Pending CN116664934A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310636859.0A CN116664934A (en) 2023-05-31 2023-05-31 Unmanned aerial vehicle lens processing method, unmanned aerial vehicle lens processing device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116664934A (en) 2023-08-29

Family

ID=87723718

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination