CN114898210B - Neural network-based remote sensing image target identification method - Google Patents

Info

Publication number
CN114898210B
CN114898210B (application CN202210504266.4A)
Authority
CN
China
Prior art keywords: image, pixel, sub, remote sensing, noise reduction
Legal status: Active
Application number
CN202210504266.4A
Other languages
Chinese (zh)
Other versions
CN114898210A
Inventor
黄冬虹
倪燕
王璐
Current Assignee
Qingyan Lingzhi Information Consulting Beijing Co ltd
Original Assignee
Qingyan Lingzhi Information Consulting Beijing Co ltd
Application filed by Qingyan Lingzhi Information Consulting Beijing Co ltd filed Critical Qingyan Lingzhi Information Consulting Beijing Co ltd
Priority to CN202210504266.4A
Publication of CN114898210A
Application granted
Publication of CN114898210B
Active legal status
Anticipated expiration

Classifications

    • G06V20/10 Terrestrial scenes (G06V20/00 Scenes; scene-specific elements)
    • G06V10/143 Sensing or illuminating at different wavelengths (image acquisition)
    • G06V10/30 Noise filtering (image preprocessing)
    • G06V10/36 Applying a local operator, e.g. median filtering (image preprocessing)
    • G06V10/82 Recognition using neural networks (pattern recognition or machine learning)
    • G06V20/13 Satellite images (terrestrial scenes)
    • G06V20/17 Terrestrial scenes taken from planes or by drones

Abstract

The invention discloses a neural-network-based remote sensing image target identification method comprising the following steps: S1, acquiring a visible light remote sensing image and an infrared remote sensing image through a shooting device; S2, graying the visible light remote sensing image to obtain a grayed image; S3, performing adaptive noise reduction on the grayed image to obtain a noise-reduced image; S4, performing median noise reduction on the infrared remote sensing image to obtain a noise-reduced infrared remote sensing image; S5, performing edge detection on the noise-reduced infrared remote sensing image to obtain an edge pixel point set S_1; S6, obtaining the set S_2 of pixel points in the noise-reduced image corresponding to S_1, and performing edge repair on the pixel points of S_2 to obtain a repaired image; S7, inputting the repaired image into a preselected trained neural network model for target recognition to obtain the recognition result. The method improves the accuracy of neural network recognition on visible light remote sensing images.

Description

Neural network-based remote sensing image target identification method
Technical Field
The invention relates to the field of image recognition, in particular to a remote sensing image target recognition method based on a neural network.
Background
A remote sensing image is a film or digital photograph recording the magnitude of the electromagnetic waves of various ground objects. As the technology matured, remote sensing images came to be used in many production activities, such as agricultural production, city planning and resource statistics. In the prior art, before target identification is performed on a visible light remote sensing image, noise reduction is usually applied first to limit the influence of noise on the final recognition accuracy. However, noise reduction also removes detail information from the visible light remote sensing image, which degrades accuracy when the image is fed into a neural network for recognition.
Disclosure of Invention
The invention aims to disclose a neural-network-based remote sensing image target recognition method, addressing the prior-art problem that noise reduction removes detail information from the visible light remote sensing image and thereby degrades the accuracy of subsequent neural network recognition.
In order to achieve the purpose, the invention adopts the following technical scheme:
a remote sensing image target identification method based on a neural network comprises the following steps:
S1, acquiring a visible light remote sensing image visrsimg and an infrared remote sensing image infrarsimg through a shooting device;
S2, graying visrsimg to obtain a grayed image grayimg;
S3, performing adaptive noise reduction on grayimg to obtain a noise-reduced image afgrayimg;
S4, performing median noise reduction on infrarsimg to obtain a noise-reduced infrared remote sensing image afinfrsimg;
S5, performing edge detection on afinfrsimg to obtain an edge pixel point set S_1;
S6, obtaining the set S_2 of pixel points in afgrayimg corresponding to S_1, and performing edge repair on the pixel points of S_2 to obtain a repaired image fiximg;
S7, inputting fiximg into the preselected trained neural network model for target recognition, obtaining the recognition result.
Preferably, the shooting device includes a drone, a fixed-wing aircraft or a satellite.
Preferably, S2 includes:
graying visrsimg using the following formula:
grayimg(x,y) = w1×R(x,y) + w2×G(x,y) + w3×B(x,y)
where (x,y) are the coordinates of a pixel point; w1, w2 and w3 are preset first, second and third processing coefficients; grayimg(x,y) is the pixel value at (x,y) in the grayed image grayimg; and R(x,y), G(x,y) and B(x,y) are the pixel values at (x,y) in images R, G and B, which are the images of the red, green and blue components of visrsimg in the RGB color model.
Preferably, S3 includes: performing adaptive partitioning on grayimg, dividing grayimg into a number of sub-regions;
then performing adaptive noise reduction on each sub-region of grayimg to obtain the noise-reduced image afgrayimg.
Preferably, the adaptive partitioning of grayimg into sub-regions proceeds as follows:
In the first partition, divide grayimg into C sub-regions with the same number of pixel points, and store all sub-regions obtained in this partition into a set U_1.
Compute the quality score of each sub-region in U_1; store the sub-regions whose quality score exceeds a set quality score threshold into the further-partition set alrU_1, and store the sub-regions whose quality score is less than or equal to the threshold into the result partition set finU.
In the k-th partition, k ≥ 2, divide each element of alrU_{k-1} into C sub-regions with the same number of pixel points, and store all sub-regions obtained in this partition into a set U_k.
Compute the quality score of each sub-region in U_k; store the sub-regions whose quality score exceeds the threshold into the further-partition set alrU_k, and store the rest into the result partition set finU.
If alrU_k contains no elements, the partitioning stops and the elements of finU are the finally obtained sub-regions.
Preferably, the quality score is calculated by a formula, reproduced in the original as image BDA0003636758110000031, over the following quantities: quasco represents the quality score; set represents the collection of pixel points in the sub-region; grayimg_i represents the pixel value in grayimg of pixel point i of set; numset represents the total number of pixel points in set; gvsv represents a preset pixel-value variance reference value; imggra_i represents the gradient magnitude of pixel point i of set; igvsv represents a preset gradient-magnitude variance reference value; and numfr represents the number of pixel points in the sub-region whose pixel value exceeds the adaptive pixel value threshold.
Preferably, the adaptive noise reduction of each sub-region of grayimg to obtain the noise-reduced image afgrayimg includes:
For a sub-region chiarea, if the variance of the pixel values of the pixels in chiarea is smaller than a set variance threshold, denoise chiarea as follows:
afgrayimg_d = mid(Ω_d)
where afgrayimg_d is the pixel value of pixel point d of chiarea in the noise-reduced image afgrayimg; Ω_d is the set of pixel points within the circular region of radius R centred on pixel point d in chiarea; and mid(Ω_d) takes the median of the pixel values of the pixel points in Ω_d.
If the variance of the pixel values of the pixels in chiarea is greater than or equal to the set variance threshold, denoise chiarea with a wavelet noise reduction algorithm.
Apply this noise reduction to each sub-region to obtain the noise-reduced image afgrayimg.
The method performs noise reduction on the visible light remote sensing image and on the infrared remote sensing image acquired at the same time, then obtains the set S_1 of edge pixel points in the infrared remote sensing image, then obtains the set S_2 of corresponding pixel points in the noise-reduced image, and finally performs edge repair on the pixel points of S_2 in the noise-reduced image to obtain the repaired image. This arrangement effectively restores the edge detail information of the pixel points in the repaired image, and thereby improves the accuracy of the final target detection result.
Drawings
The invention is further illustrated by means of the attached drawings, but the embodiments in the drawings do not constitute any limitation to the invention, and for a person skilled in the art, other drawings can be obtained on the basis of the following drawings without inventive effort.
Fig. 1 is a diagram of an exemplary embodiment of a neural network-based remote sensing image target identification method according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
The embodiment shown in fig. 1 provides a method for identifying a target in a remote sensing image based on a neural network, which comprises the following steps:
S1, acquiring a visible light remote sensing image visrsimg and an infrared remote sensing image infrarsimg through a shooting device;
S2, graying visrsimg to obtain a grayed image grayimg;
S3, performing adaptive noise reduction on grayimg to obtain a noise-reduced image afgrayimg;
S4, performing median noise reduction on infrarsimg to obtain a noise-reduced infrared remote sensing image afinfrsimg;
S5, performing edge detection on afinfrsimg to obtain an edge pixel point set S_1;
S6, obtaining the set S_2 of pixel points in afgrayimg corresponding to S_1, and performing edge repair on the pixel points of S_2 to obtain a repaired image fiximg;
S7, inputting fiximg into the preselected trained neural network model for target recognition, obtaining the recognition result.
The method performs noise reduction on the visible light remote sensing image and on the infrared remote sensing image acquired at the same time, then obtains the set S_1 of edge pixel points in the infrared remote sensing image, then obtains the set S_2 of corresponding pixel points in the noise-reduced image, and finally performs edge repair on the pixel points of S_2 in the noise-reduced image to obtain the repaired image. This arrangement effectively restores the edge detail information of the pixel points in the repaired image, and thereby improves the accuracy of the final target detection result.
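Read end to end, steps S1 to S7 chain as in the sketch below; every helper is a placeholder stub standing in for the stages detailed later in the description, not the patent's implementation (numpy assumed):

```python
import numpy as np

# Placeholder stubs, one per pipeline stage; names are ours.
def to_gray(rgb):          return rgb.mean(axis=-1)   # S2 placeholder
def adaptive_denoise(g):   return g                   # S3 placeholder
def median_denoise(ir):    return ir                  # S4 placeholder
def edge_pixels(ir):       return {(0, 0)}            # S5 placeholder
def repair(g, s2):         return g                   # S6 placeholder
def recognize(img):        return ["target"]          # S7 placeholder

def pipeline(visrsimg, infrarsimg):
    grayimg = to_gray(visrsimg)               # S2: graying
    afgrayimg = adaptive_denoise(grayimg)     # S3: adaptive noise reduction
    afinfrsimg = median_denoise(infrarsimg)   # S4: median noise reduction
    S1 = edge_pixels(afinfrsimg)              # S5: edge pixel set S_1
    S2 = S1                                   # S6: same coordinates in afgrayimg
    fiximg = repair(afgrayimg, S2)            # S6: edge repair
    return recognize(fiximg)                  # S7: neural network recognition
```

Swapping any stub for a real implementation (e.g. the adaptive partition-and-denoise of S3) leaves the flow unchanged.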
Preferably, the recognition result includes the number of targets present in the repaired image fiximg and the name of each target.
Preferably, the neural network model comprises a convolutional layer, a fully-connected layer and a classification layer, the convolutional layer including a pooling layer.
The convolutional layer convolves the input repaired image fiximg to obtain a convolution feature map; the pooling layer pools the convolution feature map to obtain a pooled feature map;
the fully-connected layer maps the feature space computed by the convolutional layer to the sample label space;
and the classification layer outputs the classification result.
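As an illustration of that layer stack, a bare-bones numpy forward pass; this is not the patent's trained model, and the kernel count, kernel size and five-class output are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernels):
    # valid cross-correlation of a 2-D image with a bank of kernels
    kh, kw = kernels.shape[1:]
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((len(kernels), oh, ow))
    for c, k in enumerate(kernels):
        for y in range(oh):
            for x in range(ow):
                out[c, y, x] = (img[y:y + kh, x:x + kw] * k).sum()
    return out

def maxpool2(fm):
    # 2x2 max pooling (assumes even spatial dimensions)
    return np.maximum.reduce([fm[:, 0::2, 0::2], fm[:, 1::2, 0::2],
                              fm[:, 0::2, 1::2], fm[:, 1::2, 1::2]])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(img, kernels, W, b):
    fm = np.maximum(conv2d(img, kernels), 0.0)  # convolutional layer + ReLU
    feat = maxpool2(fm).ravel()                 # pooling layer + flatten
    return softmax(W @ feat + b)                # fully-connected + classification

# toy 10x10 "repaired image", 2 random 3x3 kernels, 5 hypothetical classes;
# pooled features: 2 * 4 * 4 = 32
img = rng.random((10, 10))
kernels = rng.standard_normal((2, 3, 3))
W, b = rng.standard_normal((5, 32)), np.zeros(5)
probs = forward(img, kernels, W, b)
```

The classification layer here is the softmax; in practice the model would be trained end to end before use.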
Preferably, the shooting device includes a drone, a fixed-wing aircraft or a satellite.
Preferably, S2 includes:
graying visrsimg using the following formula:
grayimg(x,y) = w1×R(x,y) + w2×G(x,y) + w3×B(x,y)
where (x,y) are the coordinates of a pixel point; w1, w2 and w3 are preset first, second and third processing coefficients; grayimg(x,y) is the pixel value at (x,y) in the grayed image grayimg; and R(x,y), G(x,y) and B(x,y) are the pixel values at (x,y) in images R, G and B, which are the images of the red, green and blue components of visrsimg in the RGB color model.
Preferably, w1 is 0.11, w2 is 0.6 and w3 is 0.3.
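As a minimal sketch of the graying step S2 with the stated coefficients (numpy assumed; the helper name to_gray is ours, not the patent's):

```python
import numpy as np

def to_gray(rgb, w1=0.11, w2=0.6, w3=0.3):
    """Weighted graying grayimg(x,y) = w1*R + w2*G + w3*B.
    Defaults are the coefficients stated in the text (0.11, 0.6, 0.3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return w1 * r + w2 * g + w3 * b

# toy 1x2 "visible light image": one greenish pixel, one pure blue pixel
img = np.array([[[100, 200, 50], [0, 0, 255]]], dtype=float)
gray = to_gray(img)  # [[146.0, 76.5]]
```

Note that the stated coefficients weight the green channel most heavily, as common luminance formulas do.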
Preferably, S3 includes: performing adaptive partitioning on grayimg, dividing grayimg into a number of sub-regions;
then performing adaptive noise reduction on each sub-region of grayimg to obtain the noise-reduced image afgrayimg.
In the prior art, a single noise reduction method, such as a Gaussian noise reduction function, is usually applied directly to the whole grayed image. This has drawbacks: noise is unevenly distributed across the grayed image, and processing the image as a whole ignores that distribution difference, so normal areas are over-smoothed and detail information is lost. Whole-image noise reduction also tends to take too long. The invention therefore adopts an adaptive noise reduction scheme that selects different noise reduction methods for different sub-regions, balancing noise reduction accuracy against noise reduction efficiency, and thus resolves these prior-art problems.
Preferably, the adaptive partitioning of grayimg into sub-regions proceeds as follows:
In the first partition, divide grayimg into C sub-regions with the same number of pixel points, and store all sub-regions obtained in this partition into a set U_1.
Compute the quality score of each sub-region in U_1; store the sub-regions whose quality score exceeds a set quality score threshold into the further-partition set alrU_1, and store the sub-regions whose quality score is less than or equal to the threshold into the result partition set finU.
In the k-th partition, k ≥ 2, divide each element of alrU_{k-1} into C sub-regions with the same number of pixel points, and store all sub-regions obtained in this partition into a set U_k.
Compute the quality score of each sub-region in U_k; store the sub-regions whose quality score exceeds the threshold into the further-partition set alrU_k, and store the rest into the result partition set finU.
If alrU_k contains no elements, the partitioning stops and the elements of finU are the finally obtained sub-regions.
For partitioning, the application uses repeated partitioning rather than a single pass, because a single partition cannot guarantee that pixels with similar pixel-value distributions end up in the same sub-region, which would hamper the subsequent adaptive noise reduction. The application therefore keeps partitioning, guided by the quality score, so that in each finally obtained sub-region the pixels have pixel-value distributions that are as similar as possible, improving the accuracy of the subsequent adaptive noise reduction.
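The repeated partitioning can be sketched as an iterative split loop. This is a sketch under assumptions: C is taken as 4 (a quadtree-style split into equal-pixel quadrants), and the sub-region variance stands in for the patent's quality score quasco, whose exact formula is not reproduced in this text:

```python
import numpy as np

def partition(img, score_fn, score_thr, min_size=2):
    """Iterative adaptive partitioning (C = 4, quadtree flavour).
    Regions are (y0, y1, x0, x1) index ranges; score_fn stands in for
    the quality score: regions scoring above score_thr are split again,
    the rest go to the result partition set finU."""
    h, w = img.shape
    todo = [(0, h, 0, w)]   # alrU_k: regions still to be split
    finU = []               # result partition set
    while todo:
        nxt = []
        for y0, y1, x0, x1 in todo:
            too_small = min(y1 - y0, x1 - x0) < 2 * min_size
            if too_small or score_fn(img[y0:y1, x0:x1]) <= score_thr:
                finU.append((y0, y1, x0, x1))
                continue
            ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
            nxt += [(y0, ym, x0, xm), (y0, ym, xm, x1),
                    (ym, y1, x0, xm), (ym, y1, xm, x1)]
        todo = nxt
    return finU

# one bright quadrant in a dark image: a single split isolates it
img = np.zeros((8, 8))
img[:4, :4] = 255.0
parts = partition(img, lambda r: r.var(), score_thr=10.0)
```

With a heterogeneous root region and homogeneous quadrants, one split suffices and the loop terminates with four sub-regions covering the whole image.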
Preferably, the quality score is calculated by a formula, reproduced in the original as image BDA0003636758110000061, over the following quantities: quasco represents the quality score; set represents the collection of pixel points in the sub-region; grayimg_i represents the pixel value in grayimg of pixel point i of set; numset represents the total number of pixel points in set; gvsv represents a preset pixel-value variance reference value; imggra_i represents the gradient magnitude of pixel point i of set; igvsv represents a preset gradient-magnitude variance reference value; and numfr represents the number of pixel points in the sub-region whose pixel value exceeds the adaptive pixel value threshold.
The quality score mainly considers the pixel values and the gradient magnitudes: the variance of each is calculated, and the smaller the variances, the higher the similarity between the pixel points in the sub-region; conversely, low similarity calls for further partitioning. The number of pixel points above the adaptive threshold is also considered: the larger this number, the lower the similarity between the pixel points. As a result, the pixel-value distributions within each finally obtained region are as similar as possible, which facilitates denoising each region with a single noise reduction function.
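Since the quality score formula itself appears in the original only as an image, the sketch below is one plausible form consistent with the description: pixel-value variance scaled by gvsv, plus gradient-magnitude variance scaled by igvsv, plus the fraction of pixels above the adaptive threshold. The exact combination is an assumption, not the patent's formula:

```python
import numpy as np

def quality_score(pixels, grads, gvsv, igvsv, thr):
    """Hypothetical quasco: larger = less homogeneous sub-region,
    i.e. partition further.  pixels / grads are the sub-region's pixel
    values and gradient magnitudes; thr is the adaptive pixel value
    threshold (Otsu in the patent)."""
    numset = pixels.size
    numfr = int(np.sum(pixels > thr))   # pixels above the threshold
    return (pixels.var() / gvsv
            + grads.var() / igvsv
            + numfr / numset)

flat  = quality_score(np.full(16, 100.0), np.zeros(16), 1000.0, 1000.0, 128.0)
mixed = quality_score(np.array([0.0, 255.0] * 8), np.zeros(16), 1000.0, 1000.0, 128.0)
# a uniform region scores 0.0; the mixed region scores much higher
```

Whatever the exact formula, the ordering matters: homogeneous sub-regions fall below the threshold and stop splitting, mixed ones continue.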
Preferably, the adaptive pixel value threshold is obtained as follows:
compute an image segmentation threshold for the sub-region with the Otsu algorithm, and take that segmentation threshold as the adaptive pixel value threshold.
In this way the pixel value threshold adapts to the actual distribution of pixel values in the sub-region, improving the accuracy of the quality score.
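A minimal Otsu implementation over a sub-region's grey-level histogram (numpy and 8-bit grey levels assumed):

```python
import numpy as np

def otsu_threshold(pixels, nbins=256):
    """Otsu's method: pick the threshold that maximises the
    between-class variance w0*w1*(mu0 - mu1)^2 of the histogram."""
    hist, _ = np.histogram(pixels, bins=nbins, range=(0, nbins))
    p = hist.astype(float) / hist.sum()
    levels = np.arange(nbins)
    best_t, best_var = 0, -1.0
    for t in range(1, nbins):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0   # mean of the dark class
        mu1 = (levels[t:] * p[t:]).sum() / w1   # mean of the bright class
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# bimodal sub-region: the threshold lands between the two modes
t = otsu_threshold(np.array([10.0] * 50 + [200.0] * 50))
```

Because Otsu works on the sub-region's own histogram, the threshold automatically tracks each region's pixel-value distribution, which is exactly the adaptivity the text asks for.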
Preferably, the adaptive noise reduction of each sub-region of grayimg to obtain the noise-reduced image afgrayimg includes:
For a sub-region chiarea, if the variance of the pixel values of the pixels in chiarea is smaller than a set variance threshold, denoise chiarea as follows:
afgrayimg_d = mid(Ω_d)
where afgrayimg_d is the pixel value of pixel point d of chiarea in the noise-reduced image afgrayimg; Ω_d is the set of pixel points within the circular region of radius R centred on pixel point d in chiarea; and mid(Ω_d) takes the median of the pixel values of the pixel points in Ω_d.
If the variance of the pixel values of the pixels in chiarea is greater than or equal to the set variance threshold, denoise chiarea with a wavelet noise reduction algorithm.
Apply this noise reduction to each sub-region to obtain the noise-reduced image afgrayimg.
The variance mainly reflects the degree of difference between pixels: the greater the difference, the more noise pixels there are and the more uneven the pixel value distribution. Sub-regions whose variance is below the set threshold are therefore denoised by taking the median of the pixel values within the circular neighbourhood, while sub-regions whose variance is at or above the threshold are denoised with wavelets, which give a better noise reduction effect at the cost of a longer processing time than the neighbourhood median. This balances noise reduction effect against processing time, shortening the processing time while maintaining the noise reduction effect.
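The variance dispatch can be sketched as below. Two simplifications are ours: the median is taken over a square window as an approximation of the patent's circular window of radius R, and the wavelet branch is passed in as a callable so that any wavelet denoiser (e.g. one built on PyWavelets) can be plugged in:

```python
import numpy as np

def median_filter(sub, radius=1):
    """Per-pixel median over a (2*radius+1) square window, clipped at
    the borders (a square window approximating the circular Omega_d)."""
    h, w = sub.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = np.median(sub[y0:y1, x0:x1])
    return out

def denoise_subregion(sub, var_thr, wavelet_denoise, radius=1):
    """Dispatch from the text: cheap median filtering for low-variance
    sub-regions, the slower wavelet route otherwise."""
    if sub.var() < var_thr:
        return median_filter(sub, radius)
    return wavelet_denoise(sub)

# an isolated impulse in a flat sub-region is removed by the median branch
sub = np.zeros((3, 3))
sub[1, 1] = 100.0
clean = denoise_subregion(sub, var_thr=2000.0, wavelet_denoise=lambda s: s)
```

Lowering var_thr below the sub-region's variance routes the same input to the wavelet callable instead.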
Preferably, S4 includes:
taking the visible light remote sensing image visrsimg as the reference image, registering the infrared remote sensing image infrarsimg to obtain a registered infrared image reginfrsimg;
and applying a median filtering algorithm to reginfrsimg to obtain the noise-reduced infrared remote sensing image afinfrsimg.
Because the visible light remote sensing image and the infrared remote sensing image are acquired by different lenses, the same objects can appear slightly differently in the two images; registration removes the slight differences caused by the differing viewing angles.
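The patent does not name the edge detector used in S5; a Sobel gradient-magnitude detector is a common choice and serves as a sketch here (registration is assumed to have already been done, e.g. with a feature-matching library):

```python
import numpy as np

def sobel_edges(img, mag_thr):
    """Return the edge pixel point set S_1: interior pixels whose Sobel
    gradient magnitude exceeds mag_thr."""
    kx = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])
    ky = kx.T
    h, w = img.shape
    S1 = set()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2]
            gx, gy = (win * kx).sum(), (win * ky).sum()
            if np.hypot(gx, gy) > mag_thr:
                S1.add((y, x))
    return S1

# a vertical intensity step in a toy infrared image yields edges along it
ir = np.zeros((5, 6))
ir[:, 3:] = 255.0
S1 = sobel_edges(ir, mag_thr=100.0)
```

Since the infrared image is registered to the visible-light image, the coordinates in S1 can be looked up directly in afgrayimg to form S_2.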
Preferably, S6 includes:
performing edge repair on each pixel point pxnode in S_2 using the following formula, rendered in the original as image BDA0003636758110000081 and reconstructed here from the ratio relation described below:
fiximg(pxnode) = afgrayimg(stnode) × grayimg(pxnode) / grayimg(stnode)
where fiximg(pxnode) is the pixel value of pxnode in the repaired image fiximg; grayimg(pxnode) is the pixel value of the pixel corresponding to pxnode in grayimg; grayimg(stnode) is the pixel value of the pixel corresponding to the reference pixel point stnode in grayimg; and afgrayimg(stnode) is the pixel value of the reference pixel point stnode in afgrayimg.
The edge repair is based on the ratio between the pixel values of the two compared pixel points in the grayed image: that ratio is applied in afgrayimg to restore the pixel value of pxnode, and reproducing the ratio in the repaired image enhances the pixel values of the edge pixel points, realising the edge repair and increasing the detail information contained in the repaired image.
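The ratio-based repair can be sketched as follows. The rule for choosing the reference pixel stnode is not reproduced in this text, so it is passed in as a callable (ref_of); that abstraction is ours:

```python
import numpy as np

def repair_edges(afgrayimg, grayimg, S2, ref_of):
    """Edge repair by the ratio relation of the text:
    fiximg(p) = afgrayimg(ref) * grayimg(p) / grayimg(ref),
    where ref = ref_of(p) is the reference pixel stnode for p."""
    fiximg = afgrayimg.astype(float).copy()
    for p in S2:
        ref = ref_of(p)
        if grayimg[ref] != 0:   # guard against division by zero
            fiximg[p] = afgrayimg[ref] * grayimg[p] / grayimg[ref]
    return fiximg

# grayimg says pixel (0,0) is twice as bright as the reference (0,1);
# the repair restores that 2:1 ratio on top of the denoised reference
grayimg = np.array([[100.0, 50.0]])
afgrayimg = np.array([[70.0, 40.0]])
fiximg = repair_edges(afgrayimg, grayimg, [(0, 0)], lambda p: (0, 1))
```

Here the denoised edge pixel (70.0) is lifted back to 80.0, recovering the contrast that noise reduction had flattened.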
While embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
It should be noted that, functional units/modules in the embodiments of the present invention may be integrated into one processing unit/module, or each unit/module may exist alone physically, or two or more units/modules are integrated into one unit/module. The integrated units/modules may be implemented in the form of hardware, or may be implemented in the form of software functional units/modules.
From the above description of embodiments, it is clear for a person skilled in the art that the embodiments described herein can be implemented in hardware, software, firmware, middleware, code or any appropriate combination thereof. For a hardware implementation, a processor may be implemented in one or more of the following units: an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, other electronic units designed to perform the functions described herein, or a combination thereof. For a software implementation, some or all of the procedures of an embodiment may be performed by a computer program instructing associated hardware.
In practice, the program may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. Computer-readable media can include, but is not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.

Claims (4)

1. A remote sensing image target identification method based on a neural network is characterized by comprising the following steps:
S1, acquiring a visible light remote sensing image visrsimg and an infrared remote sensing image infrarsimg through a shooting device;
S2, graying visrsimg to obtain a grayed image grayimg;
S3, performing adaptive noise reduction on grayimg to obtain a noise-reduced image afgrayimg;
S4, performing median noise reduction on infrarsimg to obtain a noise-reduced infrared remote sensing image afinfrsimg;
S5, performing edge detection on afinfrsimg to obtain an edge pixel point set S_1;
S6, obtaining the set S_2 of pixel points in afgrayimg corresponding to S_1, and performing edge repair on the pixel points of S_2 to obtain a repaired image fiximg;
S7, inputting fiximg into a preselected trained neural network model for target recognition to obtain a recognition result;
the S3 comprises the following steps: performing self-adaptive partition processing on the grayimg, and dividing the grayimg into a plurality of sub-regions;
respectively carrying out self-adaptive noise reduction processing on each subarea in the gray img to obtain a noise-reduced image afgray img;
the performing adaptive partition processing on grayimg to divide it into a plurality of sub-regions comprises:
partitioning in an adaptive manner:
first partition: dividing grayimg into C sub-regions with the same number of pixel points, and storing all sub-regions obtained by this partition into a set U1;
respectively calculating the quality score of each sub-region in U1, storing the sub-regions whose quality score is greater than a set quality score threshold into a further-partition set alrU1, and storing the sub-regions whose quality score is less than or equal to the set quality score threshold into a result partition set finU;
kth partition, k ≥ 2: dividing each element in alrU(k-1) into C sub-regions with the same number of pixel points, and storing all sub-regions obtained by the current partition into a set Uk;
respectively calculating the quality score of each sub-region in Uk, storing the sub-regions whose quality score is greater than the set quality score threshold into a further-partition set alrUk, and storing the sub-regions whose quality score is less than or equal to the set quality score threshold into the result partition set finU;
if alrUk contains fewer than 1 element (i.e. is empty), taking the elements in the result partition set finU as the finally obtained sub-regions;
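The iterative partitioning above can be sketched as follows. `quality_score` here is a hypothetical stand-in (the patent's actual formula is given only as an image), using plain pixel-value variance; a `min_size` guard is added so the recursion terminates:

```python
# Sketch of the adaptive partitioning loop: each round splits every high-score
# region into C equal-pixel sub-regions; low-score regions go to finU.
# A sub-region is modelled as a flat list of pixel values. quality_score is a
# hypothetical stand-in for the patent's formula (which survives only as an image).

def quality_score(region):
    n = len(region)
    mean = sum(region) / n
    return sum((p - mean) ** 2 for p in region) / n  # pixel-value variance

def split_equal(region, c):
    """Split a region into c parts with equal pixel counts (remainder dropped)."""
    size = len(region) // c
    return [region[i * size:(i + 1) * size] for i in range(c)]

def adaptive_partition(grayimg_pixels, c=4, score_threshold=100.0, min_size=4):
    fin_u = []                               # result partition set finU
    alr_u = split_equal(grayimg_pixels, c)   # first partition -> U1
    while alr_u:                             # stop when alrU_k is empty
        next_alr = []                        # children to score in round k+1
        for region in alr_u:
            # high score and still big enough to split into c*min_size children
            if quality_score(region) > score_threshold and len(region) >= c * min_size:
                next_alr.extend(split_equal(region, c))
            else:
                fin_u.append(region)         # store into finU
        alr_u = next_alr
    return fin_u
```

The `min_size` cutoff is an addition for termination; the claim itself stops only when no sub-region exceeds the quality score threshold.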
the quality score is calculated by the following formula:
Figure FDA0004053929510000021
in the formula, quasco represents the quality score; set represents the collection of pixel points in the sub-region; grayimg_i represents the pixel value of pixel point i of set in grayimg; numset represents the total number of pixel points in set; gvsv represents a preset pixel-value variance reference value; imggra_i represents the gradient magnitude of pixel point i in set; igvsv represents a preset gradient-magnitude variance reference value; and numfr represents the number of pixel points in the sub-region whose pixel value is greater than an adaptive pixel-value threshold;
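The quality score formula itself is present only as an image in the source, so the sketch below is an assumption, not the patent's formula: it combines the quantities the text names (pixel-value variance normalized by gvsv, gradient-magnitude variance normalized by igvsv, and the fraction numfr/numset) in one plausible way, and takes the sub-region mean as the "adaptive pixel-value threshold":

```python
# Hypothetical reconstruction of the quality score "quasco". The real formula is
# an image in the source; this merely combines the named terms plausibly.

def variance(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def quasco(pixels, gradients, gvsv=400.0, igvsv=100.0):
    numset = len(pixels)                  # total pixel points in the sub-region
    # adaptive pixel-value threshold: taken here as the sub-region mean (assumption)
    thresh = sum(pixels) / numset
    numfr = sum(1 for p in pixels if p > thresh)
    return (variance(pixels) / gvsv      # normalized pixel-value variance
            + variance(gradients) / igvsv  # normalized gradient-magnitude variance
            + numfr / numset)            # fraction of above-threshold pixels
```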
the S6 comprises the following steps:
performing edge repair processing on each pixel point pxnode in S2 by using the following formula:
Figure FDA0004053929510000022
in the formula, fiximg(pxnode) represents the pixel value of pxnode in the repaired image fiximg; grayimg(pxnode) represents the pixel value of the pixel corresponding to pxnode in grayimg; grayimg(stnode) represents the pixel value of the pixel corresponding to the reference pixel stnode in grayimg; and afgrayimg(stnode) represents the pixel value of the reference pixel stnode in afgrayimg.
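The repair formula is likewise only an image in the source. One common repair of this shape, sketched below purely as a hypothetical reconstruction from the variables named (not the patent's actual formula), rescales the original grayimg value at pxnode by the ratio in which the reference pixel stnode changed between grayimg and the denoised afgrayimg:

```python
# Hypothetical edge repair: transfer the denoised-to-original ratio observed at
# the reference pixel stnode onto the edge pixel pxnode. This is an assumed
# reconstruction; the patent's formula survives only as an image.

def repair_pixel(grayimg, afgrayimg, pxnode, stnode):
    x, y = pxnode
    sx, sy = stnode
    if grayimg[sy][sx] == 0:          # avoid division by zero at the reference
        return grayimg[y][x]
    scale = afgrayimg[sy][sx] / grayimg[sy][sx]
    return round(grayimg[y][x] * scale)
```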
2. The neural-network-based remote sensing image target identification method as claimed in claim 1, wherein the shooting device comprises an unmanned aerial vehicle, a fixed-wing aircraft and a satellite.
3. The neural-network-based remote sensing image target identification method as claimed in claim 1, wherein the S2 comprises:
graying visrsimg using the following formula:
grayimg(x,y) = w1×R(x,y) + w2×G(x,y) + w3×B(x,y)
in the formula, (x, y) represents the coordinates of a pixel point; w1, w2 and w3 respectively represent a preset first, second and third processing coefficient; grayimg(x, y) represents the pixel value of the pixel point with coordinates (x, y) in the grayscale image grayimg; R(x, y), G(x, y) and B(x, y) represent the pixel values of the pixel point with coordinates (x, y) in image R, image G and image B, respectively; image R, image G and image B are the images corresponding to the red, green and blue components of visrsimg in the RGB color model, respectively.
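The claim-3 formula maps directly to code. The weight values below are placeholders (the claim only calls them "preset"); 0.299/0.587/0.114 are the common ITU-R BT.601 luma coefficients:

```python
# Claim 3's graying: grayimg(x,y) = w1*R(x,y) + w2*G(x,y) + w3*B(x,y).
# The default weights are placeholder values, not ones fixed by the patent.

def gray_from_rgb(R, G, B, w1=0.299, w2=0.587, w3=0.114):
    """R, G, B are same-sized 2-D lists holding the colour components of visrsimg."""
    height, width = len(R), len(R[0])
    return [[round(w1 * R[y][x] + w2 * G[y][x] + w3 * B[y][x])
             for x in range(width)]
            for y in range(height)]
```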
4. The neural-network-based remote sensing image target identification method as claimed in claim 1, wherein the respectively performing adaptive noise reduction processing on each sub-region of grayimg to obtain the noise-reduced image afgrayimg comprises:
for a sub-region chiarea, if the variance of the pixel values of the pixels in chiarea is smaller than a set variance threshold, performing noise reduction on chiarea in the following manner:
afgrayimg_d = mid(Ω_d)
in the formula, afgrayimg_d represents the pixel value of pixel point d of chiarea in the noise-reduced image afgrayimg; Ω_d represents the set of pixels in the region of radius R centered at pixel point d in chiarea; mid(Ω_d) represents taking the median of the pixel values of the pixels in Ω_d;
if the variance of the pixel values of the pixels in chiarea is greater than or equal to the set variance threshold, performing noise reduction on chiarea with a wavelet noise reduction algorithm;
respectively performing the above noise reduction processing on each sub-region to obtain the noise-reduced image afgrayimg.
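Claim 4's per-region dispatch can be sketched as follows: low-variance sub-regions get the median filter afgrayimg_d = mid(Ω_d), while high-variance ones would go to a wavelet denoiser, which is stubbed here (a real version could use e.g. PyWavelets; the claim does not specify the wavelet algorithm):

```python
# Sketch of claim 4: median filtering for low-variance sub-regions, wavelet
# denoising (stubbed) otherwise. A sub-region is a 2-D list of pixel values.
import statistics

def region_variance(region):
    flat = [p for row in region for p in row]
    return statistics.pvariance(flat)

def median_filter(region, radius=1):
    h, w = len(region), len(region[0])
    out = [row[:] for row in region]
    for y in range(h):
        for x in range(w):
            # Omega_d: the square neighbourhood of radius R around pixel d
            neigh = [region[j][i]
                     for j in range(max(0, y - radius), min(h, y + radius + 1))
                     for i in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = statistics.median(neigh)   # mid(Omega_d)
    return out

def wavelet_denoise(region):
    # placeholder: the claim names a wavelet noise reduction algorithm here
    return [row[:] for row in region]

def denoise_subregion(region, variance_threshold=500.0):
    if region_variance(region) < variance_threshold:
        return median_filter(region)
    return wavelet_denoise(region)
```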
CN202210504266.4A 2022-05-10 2022-05-10 Neural network-based remote sensing image target identification method Active CN114898210B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210504266.4A CN114898210B (en) 2022-05-10 2022-05-10 Neural network-based remote sensing image target identification method

Publications (2)

Publication Number Publication Date
CN114898210A CN114898210A (en) 2022-08-12
CN114898210B true CN114898210B (en) 2023-03-03

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115511833B (en) * 2022-09-28 2023-06-27 广东百能家居有限公司 Glass surface scratch detecting system
CN116308888B (en) * 2023-05-19 2023-08-11 南方电网数字平台科技(广东)有限公司 Operation ticket management system based on neural network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101472036A (en) * 2007-12-29 2009-07-01 明基电通信息技术有限公司 Image processing method capable of removing flaw and device applied the method
CN110490914A (en) * 2019-07-29 2019-11-22 广东工业大学 It is a kind of based on brightness adaptively and conspicuousness detect image interfusion method
CN111833275A (en) * 2020-07-20 2020-10-27 山东师范大学 Image denoising method based on low-rank analysis
CN112907460A (en) * 2021-01-25 2021-06-04 宁波市鄞州区测绘院 Remote sensing image enhancement method
CN112950490A (en) * 2021-01-25 2021-06-11 宁波市鄞州区测绘院 Unmanned aerial vehicle remote sensing mapping image enhancement processing method
CN113850176A (en) * 2021-09-22 2021-12-28 北京航空航天大学 Fine-grained weak-feature target emergence detection method based on multi-mode remote sensing image

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009126621A2 (en) * 2008-04-07 2009-10-15 Tufts University Methods and apparatus for image restoration
CN110675345A (en) * 2019-09-25 2020-01-10 中国人民解放军61646部队 Fuzzy completion processing method and device for remote sensing image to-be-repaired area
CN113298808B (en) * 2021-06-22 2022-03-18 哈尔滨工程大学 Method for repairing building shielding information in tilt-oriented remote sensing image
CN113989662B (en) * 2021-10-18 2023-02-03 中国电子科技集团公司第五十二研究所 Remote sensing image fine-grained target identification method based on self-supervision mechanism


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant