CN115331129A - Junk data identification method based on unmanned aerial vehicle and artificial intelligence - Google Patents

Junk data identification method based on unmanned aerial vehicle and artificial intelligence

Info

Publication number
CN115331129A
Authority
CN
China
Prior art keywords
garbage
image
aerial vehicle
unmanned aerial
artificial intelligence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211256439.1A
Other languages
Chinese (zh)
Other versions
CN115331129B (en)
Inventor
陈钢
刘攀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Byte Technology Qingdao Co Ltd
Original Assignee
Byte Technology Qingdao Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Byte Technology Qingdao Co Ltd filed Critical Byte Technology Qingdao Co Ltd
Priority to CN202211256439.1A priority Critical patent/CN115331129B/en
Publication of CN115331129A publication Critical patent/CN115331129A/en
Application granted granted Critical
Publication of CN115331129B publication Critical patent/CN115331129B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/17Terrestrial scenes taken from planes or by drones
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to image recognition technology, and in particular to a junk data identification method based on an unmanned aerial vehicle and artificial intelligence, comprising the following steps: collecting a large volume of high-definition video data by unmanned aerial vehicle flights, and applying inter-frame difference processing to the video to obtain effective high-definition pictures; preprocessing the extracted frames to obtain a data set; extracting image features, dividing the image into grids, and using a classifier to determine whether garbage is present in each grid; and performing garbage recognition and classification on the grids that contain garbage. The unmanned aerial vehicle acquires garbage images intelligently, which saves manpower and reduces the workload of staff; the improved inter-frame difference algorithm yields effective high-definition garbage pictures, and the improved filtering algorithm further improves image clarity; dividing the image into grids and classifying them makes it easy to extract and process the garbage in the images and enables intelligent garbage recognition; combining this with a classifier sorts the garbage into categories, which facilitates recycling.

Description

Junk data identification method based on unmanned aerial vehicle and artificial intelligence
Technical Field
The invention relates to an image processing technology, in particular to a junk data identification method based on an unmanned aerial vehicle and artificial intelligence.
Background
In the past, when information technology was not widespread, the responsible management departments usually relied on manual inspection to find garbage. In recent years, remote monitoring with fixed cameras has been used to watch over each site, but this approach has drawbacks such as blind spots in camera coverage and a large capital investment. Although these methods can collect garbage images, the images still need to be classified accurately and the collection channels are diverse; when the number of images is large or the covered area is wide, manual identification produces a huge workload, simple intelligent recognition cannot complete all the tasks, and recognition efficiency is relatively low. Moreover, most current garbage data recognition stops at the identification stage; as the state tightens its control over garbage sorting, identification alone gradually fails to meet people's needs.
Disclosure of Invention
The invention aims to overcome the drawbacks identified in the background by providing a junk data identification method based on an unmanned aerial vehicle and artificial intelligence.
The technical scheme adopted by the invention is as follows:
the junk data identification method based on the unmanned aerial vehicle and artificial intelligence comprises the following steps:
s1.1: collecting high-definition video image data with large data volume through unmanned aerial vehicle operation, and performing interframe difference processing on videos to obtain effective high-definition pictures;
s1.2: preprocessing the frame-extracted image to obtain a data set;
s1.3: the classifier determines whether garbage exists in the grids or not by carrying out grid division on the feature extraction of the image;
s1.4: and performing garbage recognition and classification on the cells with the garbage.
As a preferred technical scheme of the invention: the inter-frame difference algorithm in S1.1 is as follows:
take two consecutive frames of images and compute their difference image
D_k(x, y) = |f_k(x, y) - f_{k-1}(x, y)|,
which is binarized as
R_k(x, y) = 1 (foreground) if D_k(x, y) > T, and R_k(x, y) = 0 (background) otherwise,
where D_k(x, y) is the difference image between the two successive frames, f_k(x, y) and f_{k-1}(x, y) are the gray values of the corresponding pixels in the two adjacent frames, T is the threshold selected when binarizing the difference image, 1 represents the foreground, 0 represents the background, and β is the suppression coefficient.
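A minimal sketch of this step, assuming grayscale frames read with OpenCV; the file name, the threshold T, the motion cut-off of 0.01, and the use of β as a threshold scale factor are illustrative assumptions (the patent names β without giving its exact formula):

```python
import cv2
import numpy as np

def frame_difference(prev_gray: np.ndarray, cur_gray: np.ndarray,
                     T: float = 25.0, beta: float = 1.0) -> np.ndarray:
    """Binarize the absolute difference of two consecutive grayscale frames.
    Assumption: the suppression coefficient beta scales the threshold T."""
    diff = cv2.absdiff(cur_gray, prev_gray)                 # D_k(x, y)
    _, mask = cv2.threshold(diff, beta * T, 1, cv2.THRESH_BINARY)
    return mask                                             # 1 = foreground, 0 = background

# Usage: walk the UAV video and keep frames with enough foreground motion.
cap = cv2.VideoCapture("uav_footage.mp4")                   # hypothetical file name
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY) if ok else None
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = frame_difference(prev_gray, gray)
    if mask.mean() > 0.01:                                  # enough motion -> "effective" frame
        cv2.imwrite("effective_frame.png", frame)
    prev_gray = gray
cap.release()
```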
As a preferred technical scheme of the invention: in S1.2, after the difference image produced by the inter-frame difference processing is obtained, the image is preprocessed; the preprocessing comprises color inversion, filtering and enhancement of the image.
As a preferred technical scheme of the invention: the filtering process establishes a rectangular coordinate system with the image starting point at the lower-left corner of the difference image as the origin, and takes the point (x_m, y_m) as the far corner of the rectangle enclosing the image, obtaining the global statistics
μ = (1 / (x_m · y_m)) · Σ_{x=1..x_m} Σ_{y=1..y_m} f(x, y),
σ² = (1 / (x_m · y_m)) · Σ_{x=1..x_m} Σ_{y=1..y_m} (f(x, y) - μ)²,
where μ is the pixel mean over the whole image enclosed by (x_m, y_m) and σ² is the corresponding pixel variance.
As a preferred technical scheme of the invention: let (x_1, y_1) enclose a rectangular region in the coordinate system of the difference image; take the coordinates (i, j) of any point inside that rectangular region and compute the mean and variance of the points inside the current rectangular region,
m = (1 / (x_1 · y_1)) · Σ f(i, j),
s² = (1 / (x_1 · y_1)) · Σ (f(i, j) - m)²,
from which the gray value g(i, j) of the point (i, j) after filtering is obtained.
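A rough sketch of this kind of local-statistics filtering: the global mean μ and variance σ² and the per-window mean m and variance s² are computed and combined into the filtered gray value. The blending rule below (a Lee-filter-style weighting of the local deviation) is an assumption for illustration only, since the patent's own combination formula is not reproduced here:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_statistics_filter(img: np.ndarray, win: int = 5) -> np.ndarray:
    """Smooth `img` using global (mu, sigma2) and local (m, s2) statistics.
    The blend k = sigma2 / (sigma2 + s2) is an assumed, Lee-style choice."""
    f = img.astype(np.float64)
    mu, sigma2 = f.mean(), f.var()                      # global pixel mean / variance
    m = uniform_filter(f, size=win)                     # local mean per window
    s2 = np.maximum(uniform_filter(f * f, size=win) - m ** 2, 0.0)  # local variance
    k = sigma2 / (sigma2 + s2 + 1e-9)
    g = m + k * (f - m)                                 # filtered gray value g(i, j)
    return np.clip(g, 0, 255).astype(np.uint8)
```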
As a preferred technical scheme of the invention: multi-directional template operators are added to the Canny edge extraction algorithm, and edge detection is performed on the image.
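For the edge-detection step, a minimal sketch using OpenCV's Canny detector; the four directional 3×3 templates and the fixed response threshold of 100 stand in for the patent's multi-directional operators, whose exact kernels are not given, so they are assumptions:

```python
import cv2
import numpy as np

def directional_canny(gray: np.ndarray, low: int = 50, high: int = 150) -> np.ndarray:
    """Canny edges merged with responses of directional line templates."""
    edges = cv2.Canny(gray, low, high)
    kernels = [
        np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]], np.float32),  # 0 degrees
        np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]], np.float32),  # 45 degrees
        np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]], np.float32),  # 90 degrees
        np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]], np.float32),  # 135 degrees
    ]
    for k in kernels:
        resp = cv2.filter2D(gray, cv2.CV_32F, k)
        _, d = cv2.threshold(np.abs(resp), 100, 255, cv2.THRESH_BINARY)
        edges = cv2.bitwise_or(edges, d.astype(np.uint8))
    return edges
```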
As a preferred technical scheme of the invention: in S1.3, the preprocessed image is divided into grids based on the image scale:
let the original data set be M, with width x and height y, where every sub-image m ∈ M;
the division number W of the original data set M is determined from the grid partitioning factor and the number of feature points in the image.
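A sketch of the grid-division step; because the formula for W is not reproduced here, the sketch takes a rows × cols split (which would be derived from W, the grid partitioning factor and the feature-point count) as an input:

```python
import numpy as np

def split_into_grids(img: np.ndarray, rows: int, cols: int) -> list:
    """Split an image of height y and width x into rows*cols equal cells.
    rows and cols are assumed to be derived from the division number W."""
    y, x = img.shape[:2]
    cells = []
    for r in range(rows):
        for c in range(cols):
            cell = img[r * y // rows:(r + 1) * y // rows,
                       c * x // cols:(c + 1) * x // cols]
            cells.append(cell)
    return cells
```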
As a preferred technical scheme of the invention: the targets in the grids are identified by a cascade classifier. Let P be the set of grids containing garbage, N the set of grids without garbage, f the false detection rate, d the detection rate, and let a standard (target) false detection rate be given; set the initial detection rate and false detection rate, and let i = 1; while the current false detection rate is still above the target, set i = i + 1 and update the detection rate and false detection rate of the current stage; when the stage condition is satisfied, a cascade classifier with n features is trained, yielding the set P and the set N, until the detection rate and false detection rate of the target classifier are reached.
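The loop above follows the usual cascade-training pattern (stage-wise detection and false-detection targets). A schematic sketch under that reading, illustrated with scikit-learn's AdaBoostClassifier on precomputed feature vectors rather than the Haar-feature training the patent implies; X_pos and X_neg are assumed to be NumPy arrays of per-grid feature vectors:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def train_cascade(X_pos, X_neg, F_target=1e-3, max_stages=20):
    """Stage-wise cascade: each stage is a small boosted classifier; negatives
    rejected by a stage are discarded, so later stages see harder samples.
    The overall rates F (false detection) and D (detection) are tracked as
    products of the per-stage rates."""
    stages, F, D, i = [], 1.0, 1.0, 0
    while F > F_target and len(X_neg) > 0 and i < max_stages:
        i += 1
        X = np.vstack([X_pos, X_neg])
        y = np.r_[np.ones(len(X_pos)), np.zeros(len(X_neg))]
        stage = AdaBoostClassifier(n_estimators=10 * i).fit(X, y)  # n_i features grow per stage
        stages.append(stage)
        d_i = stage.predict(X_pos).mean()                  # stage detection rate
        f_i = stage.predict(X_neg).mean()                  # stage false-detection rate
        D *= d_i
        F *= max(f_i, 1e-9)
        X_neg = X_neg[stage.predict(X_neg) == 1]           # keep only false positives
    return stages, D, F
```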
As a preferred technical scheme of the invention: for the set P of grids containing garbage, splitting and classification recognition continue until each grid contains at most one garbage target.
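The recursive refinement can be sketched as follows, with `count_garbage(cell)` a hypothetical detector call (for example, the cascade above applied to the cell) returning how many garbage targets the cell contains:

```python
import numpy as np
from typing import Callable, List

def refine(cell: np.ndarray, count_garbage: Callable[[np.ndarray], int],
           min_size: int = 32) -> List[np.ndarray]:
    """Recursively quarter a grid cell until each returned cell holds at
    most one garbage target, or the cell becomes too small to split."""
    if count_garbage(cell) <= 1 or min(cell.shape[:2]) < 2 * min_size:
        return [cell]
    h, w = cell.shape[:2]
    quarters = [cell[:h // 2, :w // 2], cell[:h // 2, w // 2:],
                cell[h // 2:, :w // 2], cell[h // 2:, w // 2:]]
    out: List[np.ndarray] = []
    for q in quarters:
        out.extend(refine(q, count_garbage, min_size))
    return out
```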
As a preferred technical scheme of the invention: in S1.4, for the set of grids containing garbage, the recyclable garbage output is defined as a, the kitchen garbage output as b, the harmful garbage output as c, and the other garbage output as d; let the classification error be e_t, and define the weight coefficient α_t within each grid in terms of e_t; the outputs a, b, c and d are then formed from the classification values of the classification samples produced by the t weak classifiers, weighted by α_t.
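One way to read this combination step is AdaBoost-style weighted voting over the four categories (a = recyclable, b = kitchen, c = harmful, d = other). The weight formula α_t = ½·ln((1 − e_t)/e_t) below is the standard AdaBoost choice and is an assumption, since the patent's own expression is not reproduced here:

```python
import numpy as np

def combine_weak_outputs(votes: np.ndarray, errors: np.ndarray) -> str:
    """votes: (t, 4) one-hot outputs of t weak classifiers over the classes
    (a, b, c, d); errors: (t,) classification error e_t of each weak classifier."""
    alphas = 0.5 * np.log((1.0 - errors) / np.clip(errors, 1e-9, None))  # assumed weights
    scores = (alphas[:, None] * votes).sum(axis=0)       # weighted score per category
    return "abcd"[int(np.argmax(scores))]

# Example: three weak classifiers, two vote "a" (recyclable), one votes "b".
votes = np.array([[1, 0, 0, 0], [1, 0, 0, 0], [0, 1, 0, 0]], float)
errors = np.array([0.10, 0.20, 0.45])
print(combine_weak_outputs(votes, errors))   # -> 'a'
```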
Compared with the prior art, the junk data identification method based on an unmanned aerial vehicle and artificial intelligence has the following beneficial effects:
the unmanned aerial vehicle acquires garbage images intelligently, which saves manpower and reduces the workload of staff; the improved inter-frame difference algorithm yields effective high-definition garbage pictures, and the improved filtering algorithm further improves image clarity; dividing the image into grids and classifying them makes it easy to extract and process the garbage in the images and enables intelligent garbage recognition; combining this with a classifier sorts the garbage into categories, which facilitates recycling.
Drawings
FIG. 1 is a flow chart of a method of a preferred embodiment of the present invention.
Detailed Description
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other, and the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a preferred embodiment of the present invention provides a junk data identification method based on an unmanned aerial vehicle and artificial intelligence, including the following steps:
s1.1: collecting high-definition video image data with large data volume through unmanned aerial vehicle operation, and performing interframe difference processing on videos to obtain effective high-definition pictures;
s1.2: preprocessing the frame-extracted image to obtain a data set;
s1.3: performing grid division by extracting the features of the image, and determining whether garbage exists in a grid by a classifier;
s1.4: and performing garbage recognition and classification on the cells with the garbage.
The inter-frame difference algorithm in S1.1 is as follows:
take two consecutive frames of images and compute their difference image
D_k(x, y) = |f_k(x, y) - f_{k-1}(x, y)|,
which is binarized as
R_k(x, y) = 1 (foreground) if D_k(x, y) > T, and R_k(x, y) = 0 (background) otherwise,
where D_k(x, y) is the difference image between the two successive frames, f_k(x, y) and f_{k-1}(x, y) are the gray values of the corresponding pixels in the two adjacent frames, T is the threshold selected when binarizing the difference image, 1 represents the foreground, 0 represents the background, and β is the suppression coefficient.
In S1.2, after the difference image produced by the inter-frame difference processing is obtained, the image is preprocessed; the preprocessing comprises color inversion, filtering and enhancement of the image.
The filtering process establishes a rectangular coordinate system with the image starting point at the lower-left corner of the difference image as the origin, and takes the point (x_m, y_m) as the far corner of the rectangle enclosing the image, obtaining the global statistics
μ = (1 / (x_m · y_m)) · Σ_{x=1..x_m} Σ_{y=1..y_m} f(x, y),
σ² = (1 / (x_m · y_m)) · Σ_{x=1..x_m} Σ_{y=1..y_m} (f(x, y) - μ)²,
where μ is the pixel mean over the whole image enclosed by (x_m, y_m) and σ² is the corresponding pixel variance.
Let (x_1, y_1) enclose a rectangular region in the coordinate system of the difference image; take the coordinates (i, j) of any point inside that rectangular region and compute the mean and variance of the points inside the current rectangular region,
m = (1 / (x_1 · y_1)) · Σ f(i, j),
s² = (1 / (x_1 · y_1)) · Σ (f(i, j) - m)²,
from which the gray value g(i, j) of the point (i, j) after filtering is obtained.
Multi-directional template operators are added to the Canny edge extraction algorithm, and edge detection is performed on the image.
In S1.3, the preprocessed image is divided into grids based on the image scale:
let the original data set be M, with width x and height y, where every sub-image m ∈ M;
the division number W of the original data set M is determined from the grid partitioning factor and the number of feature points in the image.
The targets in the grids are identified by a cascade classifier. Let P be the set of grids containing garbage, N the set of grids without garbage, f the false detection rate, d the detection rate, and let a standard (target) false detection rate be given; set the initial detection rate and false detection rate, and let i = 1; while the current false detection rate is still above the target, set i = i + 1 and update the detection rate and false detection rate of the current stage; when the stage condition is satisfied, a cascade classifier with n features is trained, yielding the set P and the set N, until the detection rate and false detection rate of the target classifier are reached.
For the set P of grids containing garbage, splitting and classification recognition continue until each grid contains at most one garbage target.
In S1.4, for the set of grids containing garbage, the recyclable garbage output is defined as a, the kitchen garbage output as b, the harmful garbage output as c, and the other garbage output as d; let the classification error be e_t, and define the weight coefficient α_t within each grid in terms of e_t; the outputs a, b, c and d are then formed from the classification values of the classification samples produced by the t weak classifiers, weighted by α_t.
In this embodiment, it is assumed that one frame image contains one piece of waste paper and one piece of fruit peel.
The unmanned aerial vehicle collects garbage video data from each location, and the garbage image data in the video are extracted with the improved inter-frame difference algorithm: two consecutive frames containing the waste paper and the piece of peel are taken, their difference image is computed as
D_k(x, y) = |f_k(x, y) - f_{k-1}(x, y)|,
and binarized as
R_k(x, y) = 1 (foreground) if D_k(x, y) > T, and R_k(x, y) = 0 (background) otherwise,
where D_k(x, y) is the difference image between the two successive frames, f_k(x, y) and f_{k-1}(x, y) are the gray values of the corresponding pixels in the two adjacent frames, T is the threshold selected when binarizing the difference image, 1 represents the foreground, 0 represents the background, and β is the suppression coefficient.
The garbage image obtained from the difference is preprocessed, including color inversion, image filtering and image enhancement. The color inversion is expressed as
g(x, y) = 255 - f(x, y),
that is, the inverted pixel value equals 255 minus the current pixel value.
The filtering processing steps are as follows:
a rectangular coordinate system is established with the image starting point at the lower-left corner of the difference image as the origin, and the point (x_m, y_m) is taken as the far corner of the rectangle enclosing the image, giving the global statistics
μ = (1 / (x_m · y_m)) · Σ_{x=1..x_m} Σ_{y=1..y_m} f(x, y),
σ² = (1 / (x_m · y_m)) · Σ_{x=1..x_m} Σ_{y=1..y_m} (f(x, y) - μ)²,
where μ is the pixel mean over the whole image enclosed by (x_m, y_m) and σ² is the corresponding pixel variance.
Let (x_1, y_1) enclose a rectangular region in the coordinate system of the difference image; take the coordinates (i, j) of any point inside that rectangular region and compute the mean and variance of the points inside the current rectangular region,
m = (1 / (x_1 · y_1)) · Σ f(i, j),
s² = (1 / (x_1 · y_1)) · Σ (f(i, j) - m)²,
from which the gray value g(i, j) of the point (i, j) after filtering is obtained.
Multi-directional template operators are added to the Canny edge extraction algorithm, and edge detection is performed on the image.
Features are extracted from the preprocessed image, and the image is divided into grids according to the extracted features; the complete image is divided as follows:
let the original data set be M, with width x and height y, where every sub-image m ∈ M;
the number of grid divisions W of the original data set M is determined from the grid partitioning factor and the number of feature points in the image.
The image is divided into four grids of the same size: the upper-left grid m1, the upper-right grid m2, the lower-left grid m3 and the lower-right grid m4. The waste paper and the peel both lie in the lower-right grid m4, with the waste paper in the upper-left part of m4 and the peel in the upper-right part of m4.
Whether a garbage target exists in each divided grid is confirmed by the cascade classifier:
the targets in the grids are identified by the cascade classifier. Let P be the set of grids containing garbage, N the set of grids without garbage, f the false detection rate, d the detection rate, and let a standard (target) false detection rate be given; set the initial detection rate and false detection rate, and let i = 1; while the current false detection rate is still above the target, set i = i + 1 and update the detection rate and false detection rate of the current stage; when the stage condition is satisfied, a cascade classifier with n features is trained, yielding the set P and the set N, until the detection rate and false detection rate of the target classifier are reached.
The detection result is P = {m4} and N = {m1, m2, m3}, i.e. only the lower-right grid contains garbage.
For the set P of grids containing garbage, splitting and classification recognition continue until each grid contains at most one garbage target.
The grid m4 is divided further into four grids of the same size: the upper-left grid m41, the upper-right grid m42, the lower-left grid m43 and the lower-right grid m44. Detection on these grids gives P = {m41, m42} and N = {m43, m44}.
For the set of grids containing garbage, the recyclable garbage output is defined as a, the kitchen garbage output as b, the harmful garbage output as c, and the other garbage output as d; let the classification error be e_t, and define the weight coefficient α_t within each grid in terms of e_t; the outputs a, b, c and d are then formed from the classification values of the classification samples produced by the t weak classifiers, weighted by α_t.
For the grids m41 and m42, the classifier outputs a and b respectively, completing the identification of the garbage.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present specification describes embodiments, not every embodiment includes only a single embodiment, and such description is for clarity purposes only, and it is to be understood that all embodiments may be combined as appropriate by one of ordinary skill in the art to form other embodiments as will be apparent to those of skill in the art from the description herein.

Claims (10)

1. A junk data identification method based on an unmanned aerial vehicle and artificial intelligence is characterized in that: the method comprises the following steps:
s1.1: collecting high-definition video image data with large data volume through unmanned aerial vehicle operation, and performing interframe difference processing on videos to obtain effective high-definition pictures;
s1.2: preprocessing the frame-extracted image to obtain a data set;
s1.3: performing grid division by extracting the features of the image, and determining whether garbage exists in a grid by a classifier;
s1.4: and performing garbage recognition and classification on the cells with the garbage.
2. The junk data identification method based on an unmanned aerial vehicle and artificial intelligence according to claim 1, wherein the inter-frame difference algorithm in S1.1 is as follows:
take two consecutive frames of images and compute their difference image
D_k(x, y) = |f_k(x, y) - f_{k-1}(x, y)|,
which is binarized as
R_k(x, y) = 1 (foreground) if D_k(x, y) > T, and R_k(x, y) = 0 (background) otherwise,
where D_k(x, y) is the difference image between the two successive frames, f_k(x, y) and f_{k-1}(x, y) are the gray values of the pixels of the two adjacent frames, T is the threshold selected for binarizing the difference image, 1 represents the foreground, 0 represents the background, and β is the suppression coefficient.
3. The junk data identification method based on an unmanned aerial vehicle and artificial intelligence according to claim 1, wherein in S1.2, after the difference image produced by the inter-frame difference processing is obtained, the image is preprocessed; the preprocessing comprises color inversion, filtering and enhancement of the image.
4. The junk data identification method based on an unmanned aerial vehicle and artificial intelligence according to claim 3, wherein the filtering process establishes a rectangular coordinate system with the image starting point at the lower-left corner of the difference image as the origin, and takes the point (x_m, y_m) as the far corner of the rectangle enclosing the image, obtaining
μ = (1 / (x_m · y_m)) · Σ_{x=1..x_m} Σ_{y=1..y_m} f(x, y),
σ² = (1 / (x_m · y_m)) · Σ_{x=1..x_m} Σ_{y=1..y_m} (f(x, y) - μ)²,
where μ is the pixel mean over the whole image enclosed by (x_m, y_m) and σ² is the corresponding pixel variance.
5. The junk data identification method based on an unmanned aerial vehicle and artificial intelligence according to claim 4, wherein (x_1, y_1) is made to enclose a rectangular region in the coordinate system of the difference image; the coordinates (i, j) of any point inside that rectangular region are taken, and the mean and variance of the points inside the current rectangular region are computed,
m = (1 / (x_1 · y_1)) · Σ f(i, j),
s² = (1 / (x_1 · y_1)) · Σ (f(i, j) - m)²,
from which the gray value g(i, j) of the point (i, j) after filtering is obtained.
6. The junk data identification method based on an unmanned aerial vehicle and artificial intelligence according to claim 1, wherein multi-directional template operators are added to the Canny edge extraction algorithm, and edge detection is performed on the image.
7. The junk data identification method based on an unmanned aerial vehicle and artificial intelligence according to claim 1, wherein in S1.3, the preprocessed image is divided into grids based on the image scale:
let the original data set be M, with width x and height y, where every sub-image m ∈ M;
the number of grid divisions W of the original data set M is determined from the grid partitioning factor and the number of feature points in the image.
8. The junk data identification method based on an unmanned aerial vehicle and artificial intelligence according to claim 7, wherein the targets in the grids are identified by a cascade classifier; P is the set of grids containing garbage, N is the set of grids without garbage, the false detection rate is f, the detection rate is d, and a standard (target) false detection rate is given; the initial detection rate and false detection rate are set, and i = 1; while the current false detection rate is still above the target, i = i + 1 and the detection rate and false detection rate of the current stage are updated; when the stage condition is satisfied, a cascade classifier with n features is trained, yielding the set P and the set N, until the detection rate and false detection rate of the target classifier are reached.
9. The junk data identification method based on an unmanned aerial vehicle and artificial intelligence according to claim 8, wherein for the set P of grids containing garbage, splitting and classification recognition continue until each grid contains at most one garbage target.
10. The junk data identification method based on an unmanned aerial vehicle and artificial intelligence according to claim 1, wherein in S1.4, for the set of grids containing garbage, the recyclable garbage output is defined as a, the kitchen garbage output as b, the harmful garbage output as c, and the other garbage output as d; the classification error is e_t, and the weight coefficient α_t within each grid is defined in terms of e_t; the outputs a, b, c and d are formed from the classification values of the classification samples produced by the t weak classifiers, weighted by α_t.
CN202211256439.1A 2022-10-14 2022-10-14 Junk data identification method based on unmanned aerial vehicle and artificial intelligence Active CN115331129B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211256439.1A CN115331129B (en) 2022-10-14 2022-10-14 Junk data identification method based on unmanned aerial vehicle and artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211256439.1A CN115331129B (en) 2022-10-14 2022-10-14 Junk data identification method based on unmanned aerial vehicle and artificial intelligence

Publications (2)

Publication Number Publication Date
CN115331129A true CN115331129A (en) 2022-11-11
CN115331129B (en) 2023-03-24

Family

ID=83915049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211256439.1A Active CN115331129B (en) 2022-10-14 2022-10-14 Junk data identification method based on unmanned aerial vehicle and artificial intelligence

Country Status (1)

Country Link
CN (1) CN115331129B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106886795A (en) * 2017-02-17 2017-06-23 北京维弦科技有限责任公司 Object identification method based on the obvious object in image
CN208165794U (en) * 2017-11-14 2018-11-30 中国矿业大学 A kind of intelligent classification dustbin
CN108960343A (en) * 2018-08-02 2018-12-07 霍金阁 A kind of solid waste recognition methods, system, device and readable storage medium storing program for executing
CN109050811A (en) * 2018-08-30 2018-12-21 深圳市研本品牌设计有限公司 It is a kind of for clearing up the unmanned plane and storage medium of rubbish
CN110254993A (en) * 2018-12-27 2019-09-20 广东拜登网络技术有限公司 Garbage classification and collection system based on mesh-managing
US20210357879A1 (en) * 2019-04-08 2021-11-18 Jiangxi University Of Science And Technology Automatic classification system
CN110781896A (en) * 2019-10-17 2020-02-11 暨南大学 Track garbage identification method, cleaning method, system and resource allocation method
CN111559588A (en) * 2020-05-18 2020-08-21 广东邮电职业技术学院 Intelligent garbage can for classified garbage throwing and classified garbage throwing method
WO2022001961A1 (en) * 2020-06-28 2022-01-06 深圳天感智能有限公司 Detection method, detection device and detection system for moving target thrown from height
CN113569954A (en) * 2021-07-29 2021-10-29 皖江工学院 Intelligent wild animal classification and identification method
CN113671975A (en) * 2021-08-11 2021-11-19 上海电机学院 Coastal garbage cleaning device and method based on machine vision technology
CN114202708A (en) * 2021-12-10 2022-03-18 深圳市旗扬特种装备技术工程有限公司 Garbage type identification method and system, electronic equipment and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
方路平 et al.: "目标检测算法研究综述" (A Survey of Object Detection Algorithms), 《计算机工程与应用》 (Computer Engineering and Applications) *

Also Published As

Publication number Publication date
CN115331129B (en) 2023-03-24

Similar Documents

Publication Publication Date Title
US11263434B2 (en) Fast side-face interference resistant face detection method
CN108615226B (en) Image defogging method based on generation type countermeasure network
CN107480643B (en) Intelligent garbage classification processing robot
CN101216943B (en) A method for video moving object subdivision
CN105678213B (en) Dual-mode mask person event automatic detection method based on video feature statistics
CN111126184B (en) Post-earthquake building damage detection method based on unmanned aerial vehicle video
CN111222478A (en) Construction site safety protection detection method and system
CN110334660A (en) A kind of forest fire monitoring method based on machine vision under the conditions of greasy weather
CN110309765B (en) High-efficiency detection method for video moving target
CN105869174A (en) Sky scene image segmentation method
CN113255605A (en) Pavement disease detection method and device, terminal equipment and storage medium
CN113221976A (en) Multi-video-frame black smoke diesel vehicle detection method and system based on space-time optical flow network
CN113313107A (en) Intelligent detection and identification method for multiple types of diseases on cable surface of cable-stayed bridge
CN111461076A (en) Smoke detection method and smoke detection system combining frame difference method and neural network
CN116665092A (en) Method and system for identifying sewage suspended matters based on IA-YOLOV7
CN117576632B (en) Multi-mode AI large model-based power grid monitoring fire early warning system and method
CN105354547A (en) Pedestrian detection method in combination of texture and color features
CN110472567A (en) A kind of face identification method and system suitable under non-cooperation scene
Muchtar et al. A unified smart surveillance system incorporating adaptive foreground extraction and deep learning-based classification
CN115331129B (en) Junk data identification method based on unmanned aerial vehicle and artificial intelligence
CN115294486B (en) Method for identifying and judging illegal garbage based on unmanned aerial vehicle and artificial intelligence
CN107133965A (en) One kind is based on computer graphic image morphological image segmentation method
CN107403192B (en) Multi-classifier-based rapid target detection method and system
CN114694090A (en) Campus abnormal behavior detection method based on improved PBAS algorithm and YOLOv5
CN106530300A (en) Flame identification algorithm of low-rank analysis

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant