CN114241332A - Deep learning-based solid waste field identification method and device and storage medium - Google Patents

Deep learning-based solid waste field identification method and device and storage medium

Info

Publication number
CN114241332A
CN114241332A
Authority
CN
China
Prior art keywords
sample
deep learning
solid waste
detection
sample image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111554678.0A
Other languages
Chinese (zh)
Inventor
王恒俭
兰德顺
付湘鄂
黎臣
郑高强
苏浪华
马占军
周国龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Bowo Wisdom Technology Co ltd
Original Assignee
Shenzhen Bowo Wisdom Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Bowo Wisdom Technology Co ltd filed Critical Shenzhen Bowo Wisdom Technology Co ltd
Priority to CN202111554678.0A priority Critical patent/CN114241332A/en
Publication of CN114241332A publication Critical patent/CN114241332A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of environmental monitoring, and in particular to a deep learning-based solid waste field identification method, device, and storage medium. The method includes: acquiring a satellite remote sensing image of a region to be detected; and inputting the satellite remote sensing image into a pre-trained deep learning network detection model to obtain a solid waste detection result for the region. Because solid waste is detected from satellite remote sensing images by a deep learning model, no manual on-site inspection is needed, which saves labor and time costs while improving detection efficiency.

Description

Deep learning-based solid waste field identification method and device and storage medium
Technical Field
The invention relates to the technical field of environmental monitoring, and in particular to a deep learning-based solid waste field identification method and device and a storage medium.
Background
Classified by source, solid waste storage yards mainly hold domestic garbage, construction waste, industrial and mining deposits, and the like. Because yard types are numerous and their characteristics, sizes, and forms vary widely, supervision of solid waste storage yards currently relies mainly on manual patrol, carried out over assigned grids at fixed times, fixed intervals, and fixed points. During routine grid inspections, law enforcement officers who spot suspicious vehicles carrying solid waste out of an enterprise obtain an initial lead by tracking the vehicles and analyzing their routes.
The manual patrol method is accurate, but it depends on the experience of law enforcement personnel and is labor-intensive. As China increases investment in science and technology and comprehensively advances the "zero-waste city" pilot construction, the area that must be patrolled keeps growing; relying on manual patrol alone would consume enormous labor and time costs, and the working efficiency is low.
Disclosure of Invention
The invention mainly addresses the technical problem that existing manual inspection of solid waste storage yards is costly and inefficient.
A solid waste field identification method based on deep learning comprises the following steps:
acquiring a satellite remote sensing image of a region to be detected;
and inputting the satellite remote sensing image into a pre-trained deep learning network detection model to obtain a solid waste detection result of the area to be detected.
In one embodiment, the solid waste detection result includes the type of the yard, the range of the yard, and the location information.
In one embodiment, the deep learning network model is trained by:
establishing a sample library: establishing a storage yard sample image library;
training: training a deep learning network initial model by adopting partial sample images in a sample image library;
a detection step: testing the trained network initial model by using the rest sample images in the sample image library;
a judging step: determining from the test result whether the detection precision of the trained initial model meets a preset requirement; if so, taking the currently trained initial network model as the network detection model; otherwise, repeating the training step and the detection step until the detection precision of the trained model meets the preset requirement.
In one embodiment, the sample library establishment comprises: and acquiring a satellite remote sensing image of the sample storage yard as a sample image, and dividing the sample image into sample image blocks with the size of 224 × 224.
In one embodiment, the sample library establishment further comprises:
sample image preprocessing: performing atmospheric correction, orthorectification, panchromatic data orthorectification and multispectral-panchromatic fusion processing on the obtained sample image to obtain image data with meter-level spatial resolution;
sample image delineation: delineating yard sample data and assigning yard types on the sample images by visual interpretation combined with the spatial data of existing yard maps.
A solid waste field recognition device based on deep learning comprises:
the acquisition module is used for acquiring a satellite remote sensing image of a region to be detected;
and the detection module is used for inputting the satellite remote sensing image into a pre-trained deep learning network detection model to obtain a solid waste detection result of the area to be detected.
In one embodiment, the solid waste detection result includes the type of the yard, the range of the yard, and the location information.
In one embodiment, the deep learning network model is trained by:
establishing a sample library: establishing a storage yard sample image library;
training: training a deep learning network initial model by adopting most sample images in a sample image library and XML files of storage yard information in the images;
a detection step: testing the trained network initial model by using the rest small part of sample images in the sample image library and the XML file of the storage yard information in the images;
a judging step: determining from the test result whether the detection precision of the trained initial model meets a preset requirement; if so, taking the currently trained initial network model as the network detection model; otherwise, repeating the training step and the detection step until the detection precision of the trained model meets the preset requirement.
In one embodiment, the sample library establishment comprises: and acquiring a satellite remote sensing image of the sample storage yard as a sample image, and dividing the sample image into sample image blocks with the size of 224 × 224.
A computer readable storage medium having stored thereon a program executable by a processor to implement a method as described above.
According to the deep learning-based solid waste field identification method of the above embodiment, the method includes: acquiring a satellite remote sensing image of a region to be detected; and inputting the satellite remote sensing image into a pre-trained deep learning network detection model to obtain a solid waste detection result for the region. Because solid waste is detected from satellite remote sensing images by a deep learning model, no manual on-site inspection is needed, which saves labor and time costs while improving detection efficiency.
Drawings
Fig. 1 is a flowchart of a solid waste field identification method according to an embodiment of the present application;
FIG. 2 is a flowchart of a detection model training method according to an embodiment of the present application;
fig. 3 is a schematic diagram of a network structure of VGG16 according to an embodiment of the present application;
FIG. 4 is a flow chart of a model training and detection method according to an embodiment of the present application;
FIG. 5a is a remote sensing image of a steel material yard according to an embodiment of the present application;
FIG. 5b is an image of a steel material field captured during a field examination according to an embodiment of the present application;
FIG. 6a is a remote sensing image of a sand and stone material yard according to an embodiment of the present disclosure;
FIG. 6b is a photograph of a pile of sand and stone material taken in the field according to the embodiment of the present application;
FIG. 7a is a remote sensing image of a coal material yard according to an embodiment of the present disclosure;
FIG. 7b is a field photographic image of a coal pile according to an embodiment of the present disclosure;
FIG. 8a is a remote sensing image of a construction material yard according to an embodiment of the present application;
FIG. 8b is a photograph of the construction material yard in the embodiment of the present application;
FIG. 9 is a diagram of yard sections detected by the network model according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of an identification device according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a pooling process according to an embodiment of the present application;
FIG. 12 is a schematic diagram of anchor frame generation according to an embodiment of the present application;
FIG. 13 is a schematic diagram illustrating anchor frame size acquisition according to an embodiment of the present application;
FIG. 14 is a flow chart illustrating yard identification according to an embodiment of the present disclosure;
fig. 15 is a schematic diagram of the determined anchor frame and the real yard position according to the embodiment of the present application.
Detailed Description
The present invention will be described in further detail with reference to the following detailed description and accompanying drawings. Wherein like elements in different embodiments are numbered with like associated elements. In the following description, numerous details are set forth in order to provide a better understanding of the present application. However, those skilled in the art will readily recognize that some of the features may be omitted or replaced with other elements, materials, methods in different instances. In some instances, certain operations related to the present application have not been shown or described in detail in order to avoid obscuring the core of the present application from excessive description, and it is not necessary for those skilled in the art to describe these operations in detail, so that they may be fully understood from the description in the specification and the general knowledge in the art.
Furthermore, the features, operations, or characteristics described in the specification may be combined in any suitable manner to form various embodiments. Likewise, the various steps or actions in the method descriptions may be reordered, as will be apparent to one of ordinary skill in the art. Thus, the various sequences in the specification and drawings are for the purpose of describing certain embodiments only and are not intended to imply a required sequence unless otherwise indicated where such a sequence must be followed.
As deep learning matures, multiple types of targets can be extracted using deep learning models. The applicant studied intelligent identification of storage yards by means of neural networks. Using an AI model and remote sensing imagery, the storage yards are classified, a large number of samples is selected, and the characteristics of each yard type are learned automatically. This process builds an extraction model for each yard type automatically, without manual feature selection and combination, thereby enabling automatic yard extraction and analysis.
However, little work currently uses AI models and remote sensing technology to intelligently identify solid waste storage yards in complex scenes. In this application, the Pearl River Delta urban agglomeration is selected as the test area, and the environmentally damaging yard types are chosen on the basis of extensive early-stage field investigation. A large number of samples is then selected for each yard type, and finally a VGG16-Frcnn neural network learns from the samples to obtain an extraction model for each yard type, thereby identifying storage yards in complex scenes.
The first embodiment is as follows:
referring to fig. 1 and fig. 3, the present embodiment provides a method for identifying a solid waste area based on deep learning, which includes:
step 101: and acquiring a satellite remote sensing image of the area to be detected. In the embodiment, a high-precision satellite remote sensing device is adopted to acquire image data with meter-level spatial resolution.
Step 102: inputting the satellite remote sensing image into a pre-trained deep learning network detection model to obtain a solid waste detection result for the region to be detected. In this embodiment, the network detection model can detect the yard type, range, and location information; for example, the yard type of the current region to be detected may be identified as a steel material yard, a sand and stone yard, a coal yard, a building material yard, a household garbage yard, a construction waste yard, or the like. The yard range and location information are also marked on the map using the map information: as shown in fig. 9, after the yard position in each region is detected, the corresponding position on the map is marked, where the boxed areas indicate the yard ranges.
As shown in fig. 2 and fig. 3, the deep learning network model of the present embodiment is obtained by training through the following method:
step 201: establishing a sample library: and establishing a storage yard sample image library. In the embodiment, high-resolution remote sensing image data such as GF2, GF7 and the like are adopted.
Step 202: training: training the initial deep learning network models with the majority of the sample images in the sample image library. For example, in this embodiment the initial network model is a vgg16 network model, model1; model1 is trained with 80% of the sample images in the sample library and can identify the image blocks in which a yard is present. An FRCNN model, model2, is trained with the sample image library and the yard distribution ranges in the corresponding sample images; the yard images detected by model1 are then further processed with model2 to detect the yard range within each image block.
Please refer to fig. 5a, fig. 6a, fig. 7a, and fig. 8a, which respectively show the range and the position of each yard detected by the network detection model2.
Step 203: a detection step: and testing the trained network initial model by using the residual small part of sample images in the sample image library. For example, 20% sample images in the sample image library are used to test the recognition accuracy of the trained vgg16 network model 1. The recognition accuracy of the trained FRCNN network model2 was tested using 20% sample images in the sample image library and the XML file of the yard distribution range in the images.
Step 204: determining from the test result whether the detection precision of the trained initial model meets the preset requirement; if so, taking the currently trained initial network model as the network detection model; otherwise, repeating the training step and the detection step until the detection precision of the trained model meets the preset requirement.
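The training-and-testing cycle of steps 202-204 can be sketched as follows. This is a minimal illustration only: `train_step`, `evaluate`, and `threshold` are hypothetical placeholders, not names given in the patent.

```python
def train_until_accurate(train_step, evaluate, threshold, max_rounds=100):
    """Cycle the training step and the detection (testing) step until the
    model's accuracy on the held-out sample images meets the requirement."""
    for round_no in range(1, max_rounds + 1):
        train_step()            # training step: fit on ~80% of the samples
        accuracy = evaluate()   # detection step: test on the remaining ~20%
        if accuracy >= threshold:
            # the currently trained model becomes the detection model
            return round_no, accuracy
    raise RuntimeError("accuracy requirement not met within max_rounds")
```

The stopping condition mirrors the judging step: the loop ends only once the measured precision reaches the preset requirement.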
In this embodiment, the process of establishing the sample library includes:
1.1, preprocessing a sample image: and performing atmospheric correction, orthorectification, panchromatic data orthorectification and multispectral-panchromatic fusion processing on the obtained sample image to obtain image data with meter-level spatial resolution.
1.2. Sample image delineation: yard sample data are delineated and yard types assigned on the sample images by visual interpretation combined with the spatial data of existing yard maps. For example, the sample images are divided into steel material yards, sand and stone yards, coal yards, building material yards, household garbage yards, construction waste yards, and the like, with 100-200 sample images delineated for each yard type.
The sample image naming convention is ullon_ullat_rdlon_rdlat_wgs84_gf2.tif. The parameters in the file name mean: ullon and ullat are the longitude and latitude of the sample image's upper-left corner; rdlon and rdlat are the longitude and latitude of its lower-right corner; wgs84 is the sample image coordinate system; and gf2 is the satellite sensor to which the sample image belongs. The basic yard information and the positions (distribution range) of the four corner points in each sample image are recorded in an XML file, whose structure is as follows:
[The XML file structure appears as figures in the original publication.]
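As an illustration of the naming convention above, a small parser might look like this. The function name and return structure are assumptions for illustration, not part of the patent:

```python
def parse_sample_name(filename):
    """Split a sample name of the form ullon_ullat_rdlon_rdlat_wgs84_gf2.tif
    into upper-left / lower-right coordinates, coordinate system, and sensor."""
    stem = filename.rsplit(".", 1)[0]
    ullon, ullat, rdlon, rdlat, crs, sensor = stem.split("_")
    return {
        "upper_left": (float(ullon), float(ullat)),
        "lower_right": (float(rdlon), float(rdlat)),
        "crs": crs,        # coordinate system, e.g. wgs84
        "sensor": sensor,  # satellite sensor, e.g. gf2
    }
```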
1.3. Manual field verification of yard information: based on the positions and types of the delineated samples (see fig. 5b, fig. 6b, fig. 7b, and fig. 8b), the sites are surveyed and photographed in the field, the yard information is verified, and erroneous yard sample information is corrected or removed.
In this embodiment, when the satellite remote sensing image of a sample yard is acquired as a sample image, the sample image is divided into sample image blocks of size 224 × 224. Similarly, when the satellite remote sensing image of the region to be detected is processed, the image to be detected is divided into 224 × 224 image blocks, and the network detection model1 screens the tens of millions of image blocks for those suspected of containing a storage yard.
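Splitting an image into 224 × 224 blocks amounts to computing non-overlapping tile bounds. A minimal sketch (the function name and the choice to skip partial edge blocks are assumptions):

```python
def tile_bounds(width, height, tile=224):
    """Return (x0, y0, x1, y1) pixel bounds of the non-overlapping
    tile x tile blocks covering the image; partial edge blocks are skipped."""
    return [(x0, y0, x0 + tile, y0 + tile)
            for y0 in range(0, height - tile + 1, tile)
            for x0 in range(0, width - tile + 1, tile)]
```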
The structure of the vgg16 network model of this embodiment includes: convolutional layers, pooling layers, and fully connected layers.
The convolution process of a convolutional layer uses a convolution kernel that scans across each layer's pixel matrix according to the stride; at each position, the scanned values are multiplied by the numbers at the corresponding positions in the kernel and summed, and the resulting values form a new matrix.
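The scan-multiply-sum operation just described can be written out directly; a didactic sketch using plain lists, not the patent's implementation:

```python
def conv2d(image, kernel, stride=1):
    """Slide the kernel over the image with the given stride; at each
    position multiply overlapping values elementwise and sum them."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(0, h - kh + 1, stride):
        row = []
        for j in range(0, w - kw + 1, stride):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out
```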
The pooling (Pooling) process of the pooling layer: a pooling operation is equivalent to a dimensionality reduction; there are maximum pooling and average pooling, with maximum pooling (max pooling) most commonly used. The pooling layers steadily reduce the spatial size of the data, so the number of parameters and the amount of computation decrease accordingly, which controls overfitting to some extent.
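Max pooling over 2 × 2 windows with stride 2, the common case mentioned above, keeps the largest value in each window and halves each spatial dimension; a minimal sketch:

```python
def max_pool(feature_map, size=2, stride=2):
    """Keep the maximum value in each size x size window, stepping by
    stride, which shrinks the feature map's spatial dimensions."""
    h, w = len(feature_map), len(feature_map[0])
    out = []
    for i in range(0, h - size + 1, stride):
        row = []
        for j in range(0, w - size + 1, stride):
            row.append(max(feature_map[i + a][j + b]
                           for a in range(size) for b in range(size)))
        out.append(row)
    return out
```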
The full connection (fully connected) of the fully connected layer: for layers n-1 and n, every node of layer n-1 is connected to all nodes of layer n; that is, when each node of layer n computes its value, the input to its activation function is the weighted sum over all nodes of layer n-1.
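The weighted sum feeding each layer-n node can be sketched as follows; the weights and biases here are illustrative values only:

```python
def fully_connected(inputs, weights, biases):
    """Each layer-n node receives the weighted sum of ALL layer n-1
    nodes plus a bias; that sum is what feeds the activation function."""
    return [sum(x * w for x, w in zip(inputs, row)) + b
            for row, b in zip(weights, biases)]
```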
The structure of the VGG16-Frcnn neural network of the present example is as follows:
The feature map extraction module: this embodiment uses the part of the vgg16 neural network architecture before the fully connected layers (model1). The feature maps corresponding to the sample images are obtained by applying a series of convolution (convolution), feature activation (ReLU), and pooling (Pooling) operations to the sample images. As shown in fig. 11, the input image (224 × 224) is downsampled by a factor of 16 by the pooling operations, and the output feature map has dimension (18 × 18).
Region candidate box generation network, RPN (Region Proposal Network): the feature map extracted in step 1 is processed to obtain region candidate boxes (Proposals). This part is mainly implemented with anchor frames (Anchors).
2.1 convolution operation: firstly, performing convolution and feature activation operation on the features in the step 1.
2.2 Anchor frame generation. An anchor frame is a rectangle on the image: its vertices record position information and its area records range information. The anchor frames are formed from windows with different aspect ratios and different areas.
Each pixel point in the feature map generates 9 anchor frames according to the window sizes (8, 16, 32) and aspect ratios (1:1, 1:2, 2:1); in total, (18 × 9) anchor frames are generated. An anchor frame is represented by a vector (x, y, w, h), giving the coordinates of its center point on the image and its length and width (18 × 9 × 4). Each anchor frame is also classified as positive or negative, recording whether it contains yard information (18 × 9 × 2). The anchor frame generation principle is shown in fig. 12.
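Generating the 9 anchors (3 window sizes × 3 aspect ratios) at one feature-map point can be sketched as below. The area-preserving way of applying the ratio is an assumption for illustration; the patent only states the window sizes and ratios:

```python
def anchors_at(cx, cy, sizes=(8, 16, 32), ratios=((1, 1), (1, 2), (2, 1))):
    """Return 9 anchor boxes (x, y, w, h) centred at (cx, cy): one per
    window size and aspect ratio, each keeping roughly the window's area."""
    boxes = []
    for s in sizes:
        area = float(s * s)
        for rw, rh in ratios:
            w = (area * rw / rh) ** 0.5   # width so that w:h matches rw:rh
            boxes.append((cx, cy, w, area / w))
    return boxes
```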
2.3 Region candidate box generation. Whether an anchor frame contains yard information is determined by judging whether it is positive or negative. A regression operation is performed between the anchor frames that contain yard information and the real yard positions in the image (recorded in the XML file); by adjusting an anchor frame's offset and scale, anchor frames close to the real yard positions are obtained as candidate regions (Proposals). As shown in fig. 15, the dotted frame is the real yard distribution range recorded in the XML file, and the large-dot frame is a generated anchor frame that is brought as close as possible to the real yard position by adjusting its offset and scale.
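The offset-and-scale adjustment can be sketched with the standard box-regression transform; this exact parameterization (fractional centre shifts, log-scale size factors) is an assumption, as the patent does not spell it out:

```python
import math

def apply_deltas(anchor, deltas):
    """Shift an anchor's centre by (dx, dy) fractions of its size and
    rescale width/height by exp(dw), exp(dh), moving the box toward
    the real yard position."""
    x, y, w, h = anchor
    dx, dy, dw, dh = deltas
    return (x + dx * w, y + dy * h, w * math.exp(dw), h * math.exp(dh))
```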
Region of interest pooling layer (ROI Pooling): the Proposals (candidate boxes) obtained by the linear regression model for all positive anchors differ in size and shape, but the fully connected layers at the end of the network, used for classification and further regression, require inputs of a fixed size. To address this, as shown in fig. 13, model2 first downsamples Proposals of different sizes (M × N) by a factor of 16 to obtain (M/16 × N/16) sizes, and then obtains a fixed Proposal size (7 × 7) through a pooling-layer operation.
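Pooling a variable-sized region down to a fixed 7 × 7 grid can be sketched as below. This is a simplified illustration that operates directly on a cropped region as a list of rows, ignoring the 16× downsampling bookkeeping:

```python
def roi_pool(region, out_size=7):
    """Max-pool a variable-sized region into a fixed out_size x out_size
    grid, as the ROI Pooling layer requires for the fully connected layers."""
    h, w = len(region), len(region[0])
    pooled = []
    for i in range(out_size):
        # bin boundaries; max(..., +1) keeps every bin at least 1 cell wide
        y0, y1 = i * h // out_size, max((i + 1) * h // out_size, i * h // out_size + 1)
        row = []
        for j in range(out_size):
            x0, x1 = j * w // out_size, max((j + 1) * w // out_size, j * w // out_size + 1)
            row.append(max(region[y][x] for y in range(y0, y1)
                                        for x in range(x0, x1)))
        pooled.append(row)
    return pooled
```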
4: Classification: using the obtained Proposal feature maps, the specific category of each Proposal is computed through a fully connected layer and a normalized exponential (softmax), outputting a yard-category probability vector; at the same time, the position offset of each Proposal is obtained by a second anchor-frame position regression, which regresses a more accurate target detection frame. The technical route of steps 2, 3, and 4 in model2 is shown in fig. 14.
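The normalized exponential that turns per-class scores into the yard-category probability vector is the standard softmax; a minimal sketch:

```python
import math

def softmax(scores):
    """Convert per-class scores into probabilities that sum to 1;
    the max score is subtracted first for numerical stability."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```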
In one embodiment, the detection model continues to be updated and trained during operation; for example, the yard sample images in the sample library are enriched so that more yard types can be identified and identification accuracy improves, without rebuilding a feature combination model. In this embodiment, the yard categories of the region to be detected are identified by the network model, and manual verification shows an identification accuracy of 82%.
Example two:
the present embodiment provides a solid waste field recognition device based on deep learning, as shown in fig. 10, the recognition device includes: an acquisition module 301 and a detection module 302.
The acquisition module 301 is configured to acquire a satellite remote sensing image of an area to be detected.
The detection module 302 is configured to input the satellite remote sensing image into a pre-trained deep learning network detection model to obtain a solid waste detection result of the area to be detected.
Example three:
the present embodiment provides a computer-readable storage medium having a program stored thereon, the program being executable by a processor to implement the solid waste field identification method provided in the above embodiment.
The skilled person will appreciate that all or part of the functions of the methods in the above embodiments may be implemented by hardware, or may be implemented by computer programs. When all or part of the functions of the above embodiments are implemented by a computer program, the program may be stored in a computer-readable storage medium, and the storage medium may include: a read only memory, a random access memory, a magnetic disk, an optical disk, a hard disk, etc., and the program is executed by a computer to realize the above functions. For example, the program may be stored in a memory of the device, and when the program in the memory is executed by the processor, all or part of the functions described above may be implemented. In addition, when all or part of the functions in the above embodiments are implemented by a computer program, the program may be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disk, a flash disk, or a removable hard disk, and may be downloaded or copied to a memory of a local device, or may be version-updated in a system of the local device, and when the program in the memory is executed by a processor, all or part of the functions in the above embodiments may be implemented.
The present invention has been described in terms of specific examples, which are provided to aid understanding of the invention and are not intended to be limiting. For a person skilled in the art to which the invention pertains, several simple deductions, modifications or substitutions may be made according to the idea of the invention.

Claims (10)

1. A solid waste field identification method based on deep learning is characterized by comprising the following steps:
acquiring a satellite remote sensing image of a region to be detected;
and inputting the satellite remote sensing image into a pre-trained deep learning network detection model to obtain a solid waste detection result of the area to be detected.
2. The deep learning-based solid waste site identification method according to claim 1, wherein the solid waste detection result includes a type of a yard, a yard range, and location information.
3. The deep learning-based solid waste field identification method according to claim 1, wherein the deep learning network model is obtained by training through the following method:
establishing a sample library: establishing a storage yard sample image library;
training: training a deep learning network initial model by adopting partial sample images in a sample image library;
a detection step: testing the trained network initial model by using the rest sample images in the sample image library;
a judging step: determining from the test result whether the detection precision of the trained initial model meets a preset requirement; if so, taking the currently trained initial network model as the network detection model; otherwise, repeating the training step and the detection step until the detection precision of the trained model meets the preset requirement.
4. The deep learning-based solid waste field identification method according to claim 3, wherein establishing the sample library comprises: acquiring a satellite remote sensing image of a sample storage yard as a sample image, and dividing the sample image into sample image blocks of size 224 × 224.
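A minimal sketch of the 224 × 224 tiling described in this claim, assuming the sample image is a NumPy array and that partial edge tiles are discarded (the claim does not specify edge handling):

```python
import numpy as np

def tile_image(image, tile=224):
    """Split a remote-sensing image array of shape (H, W, C) into
    tile x tile sample blocks, discarding incomplete edge tiles."""
    h, w = image.shape[:2]
    blocks = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            blocks.append(image[y:y + tile, x:x + tile])
    return blocks
```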
5. The deep learning-based solid waste field identification method according to claim 4, wherein the sample library establishment further comprises:
sample image preprocessing: performing atmospheric correction, orthorectification, panchromatic data orthorectification, and multispectral-panchromatic fusion on the acquired sample image to obtain image data with meter-level spatial resolution;
sample image delineation: delineating storage yard sample data and classifying the storage yard type on the sample image by visual interpretation combined with the spatial data of existing storage yard maps.
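The preprocessing chain of this claim can be sketched as an ordered pipeline; the step functions below are hypothetical placeholders (a real implementation would call a remote-sensing toolchain), and each merely records that it ran:

```python
# Placeholder steps mirroring claim 5; in practice these would invoke a
# remote-sensing library's atmospheric correction, orthorectification,
# and pan-sharpening routines.
def atmospheric_correction(img): img["steps"].append("atmospheric"); return img
def orthorectify(img):           img["steps"].append("ortho");       return img
def orthorectify_pan(img):       img["steps"].append("pan_ortho");   return img
def fuse_ms_pan(img):            img["steps"].append("fusion");      return img

def preprocess_sample(img):
    """Apply the claim-5 preprocessing chain in the claimed order."""
    for step in (atmospheric_correction, orthorectify,
                 orthorectify_pan, fuse_ms_pan):
        img = step(img)
    return img
```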
6. A deep learning-based solid waste field identification device, characterized by comprising:
the acquisition module is used for acquiring a satellite remote sensing image of a region to be detected;
and the detection module is used for inputting the satellite remote sensing image into a pre-trained deep learning network detection model to obtain a solid waste detection result of the area to be detected.
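The two claimed modules can be sketched as a small class; the image source and the pretrained detection model are injected as plain callables, since the claim does not fix their implementations:

```python
class SolidWasteFieldDetector:
    """Sketch of the claimed device: an acquisition module and a
    detection module. `acquire_fn` and `model_fn` are illustrative
    stand-ins for the real image source and pretrained network."""

    def __init__(self, acquire_fn, model_fn):
        self.acquire_fn = acquire_fn  # acquisition module
        self.model_fn = model_fn      # detection module (pretrained model)

    def detect(self, region):
        image = self.acquire_fn(region)  # satellite remote sensing image
        return self.model_fn(image)      # solid waste detection result
```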
7. The deep learning-based solid waste field identification device according to claim 6, wherein the solid waste detection result includes the yard type, the yard extent, and location information.
8. The deep learning-based solid waste field identification device according to claim 7, wherein the deep learning network detection model is obtained by training as follows:
a sample library establishing step: establishing a storage yard sample image library;
a training step: training an initial deep learning network model with a majority of the sample images in the sample image library;
a detection step: testing the trained initial network model with the remaining minority of sample images in the sample image library;
judging, according to the test result, whether the detection precision of the trained initial model meets a preset requirement; if so, taking the currently trained initial network model as the network detection model; otherwise, repeating the training step and the detection step until the detection precision of the trained initial model meets the preset requirement.
9. The deep learning-based solid waste field identification device according to claim 8, wherein establishing the sample library comprises: acquiring a satellite remote sensing image of a sample storage yard as a sample image, and dividing the sample image into sample image blocks of size 224 × 224.
10. A computer-readable storage medium, characterized in that the medium has stored thereon a program which is executable by a processor to implement the method according to any one of claims 1-5.
CN202111554678.0A 2021-12-17 2021-12-17 Deep learning-based solid waste field identification method and device and storage medium Pending CN114241332A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111554678.0A CN114241332A (en) 2021-12-17 2021-12-17 Deep learning-based solid waste field identification method and device and storage medium


Publications (1)

Publication Number Publication Date
CN114241332A (en) 2022-03-25

Family

ID=80758437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111554678.0A Pending CN114241332A (en) 2021-12-17 2021-12-17 Deep learning-based solid waste field identification method and device and storage medium

Country Status (1)

Country Link
CN (1) CN114241332A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180075417A1 (en) * 2016-09-14 2018-03-15 International Business Machines Corporation Drone and drone-based system for collecting and managing waste for improved sanitation
CN110598784A (en) * 2019-09-11 2019-12-20 北京建筑大学 Machine learning-based construction waste classification method and device
CN111767822A (en) * 2020-06-23 2020-10-13 浙江大华技术股份有限公司 Garbage detection method and related equipment and device
CN112836615A (en) * 2021-01-26 2021-05-25 西南交通大学 Remote sensing image multi-scale solid waste detection method based on deep learning and global reasoning
CN113392788A (en) * 2021-06-23 2021-09-14 中国科学院空天信息创新研究院 Construction waste identification method and device
CN113516059A (en) * 2021-06-23 2021-10-19 南京华高生态环境遥感技术研究院有限公司 Solid waste identification method and device, electronic device and storage medium
CN113673586A (en) * 2021-08-10 2021-11-19 北京航天创智科技有限公司 Mariculture area classification method fusing multi-source high-resolution satellite remote sensing images

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114898226A (en) * 2022-05-31 2022-08-12 北京百度网讯科技有限公司 Map data processing method and device, electronic equipment and storage medium
CN114898226B (en) * 2022-05-31 2024-03-26 北京百度网讯科技有限公司 Map data processing method, map data processing device, electronic equipment and storage medium
CN115272853A (en) * 2022-07-27 2022-11-01 清华大学 Industrial wasteland identification method and product based on artificial intelligence technology and big data
CN116563724A (en) * 2023-04-20 2023-08-08 生态环境部卫星环境应用中心 Urban solid waste extraction method and system based on multisource high-resolution satellite remote sensing image
CN116563724B (en) * 2023-04-20 2024-05-14 生态环境部卫星环境应用中心 Urban solid waste extraction method and system based on multisource high-resolution satellite remote sensing image

Similar Documents

Publication Publication Date Title
Tan et al. Automatic detection of sewer defects based on improved you only look once algorithm
CN109409263B (en) Method for detecting urban ground feature change of remote sensing image based on Siamese convolutional network
CN110598784B (en) Machine learning-based construction waste classification method and device
CN114241332A (en) Deep learning-based solid waste field identification method and device and storage medium
Lechner et al. Remote sensing of small and linear features: Quantifying the effects of patch size and length, grid position and detectability on land cover mapping
CN111339858A (en) Oil and gas pipeline marker identification method based on neural network
Li et al. Automatic bridge crack identification from concrete surface using ResNeXt with postprocessing
CN111914767B (en) Scattered sewage enterprise detection method and system based on multi-source remote sensing data
CN112257799A (en) Method, system and device for detecting household garbage target
CN104376303A (en) Vehicle low-resolution imaging method
CN116740652B (en) Method and system for monitoring rust area expansion based on neural network model
CN117422699B (en) Highway detection method, highway detection device, computer equipment and storage medium
Xia et al. Geographically local representation learning with a spatial prior for visual localization
CN113378642B (en) Method for detecting illegal occupation buildings in rural areas
CN110399868B (en) Coastal wetland bird detection method
Luo et al. High-precise water extraction based on spectral-spatial coupled remote sensing information
CN117011759A (en) Method and system for analyzing multi-element geological information of surrounding rock of tunnel face by drilling and blasting method
Fusco et al. An application of artificial intelligence to support the discovering of roman centuriation remains
CN108109156A (en) SAR image Approach for road detection based on ratio feature
CN114882375A (en) Intelligent identification method and device for tailing pond
Wu et al. Spring Point Detection of High Resolution Image Based on YOLOV3
CN118470333B (en) Geographic environment semantic segmentation method and system based on remote sensing image
WO2024198068A1 (en) Pavement distress matching and continuous tracking method
CN112800895B (en) Method for identifying building based on deep learning algorithm
Atkinson Issues of uncertainty in super-resolution mapping and the design of an inter-comparison study

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination