CN114415142B - Rain clutter identification method and system based on navigation radar - Google Patents


Info

Publication number
CN114415142B
CN114415142B (granted publication of application CN202210105646.0A; earlier publication CN114415142A)
Authority
CN
China
Prior art keywords
image
pixel
number value
value
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN202210105646.0A
Other languages
Chinese (zh)
Other versions
CN114415142A
Inventor
杨婧
周双林
王晓谊
夏文涛
刘波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Highlandr Digital Technology Co ltd
Original Assignee
Beijing Highlandr Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Highlandr Digital Technology Co ltd filed Critical Beijing Highlandr Digital Technology Co ltd
Priority to CN202210105646.0A
Publication of CN114415142A
Application granted
Publication of CN114415142B

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02: Details of systems according to group G01S13/00
    • G01S7/41: Using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/414: Discriminating targets with respect to background clutter
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02: Details of systems according to group G01S13/00
    • G01S7/28: Details of pulse systems
    • G01S7/285: Receivers
    • G01S7/292: Extracting wanted echo-signals
    • G01S7/2923: Extracting wanted echo-signals based on data belonging to a number of consecutive radar periods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

Embodiments of the invention disclose a rain clutter identification method based on a navigation radar, comprising the following steps: applying median filtering and minimum filtering to a first image to obtain a second image and a third image, respectively; computing gradient values of the second image in all directions with convolution templates for multiple directions, selecting the maximum value, and applying a threshold test to obtain a fourth image; applying connected-domain processing to the fourth image to obtain a fifth image; performing target area detection on the fifth image using the pixel value information of the second image and the connected-domain information of the fifth image to obtain a sixth image; replacing the pixel values of the target area of the second image with the pixel values of the target area of the sixth image and the pixel values of the third image to obtain a seventh image; and smoothing the seventh image to obtain an eighth image. Embodiments of the invention also disclose a rain clutter identification system based on the navigation radar. The invention identifies the rain clutter, after which rain clutter suppression and target detection can be carried out more effectively.

Description

Rain clutter identification method and system based on navigation radar
Technical Field
The invention relates to the technical field of radar, in particular to a rain clutter identification method and system based on a navigation radar.
Background
A radar cannot resolve the spacing between adjacent raindrops, so the echo video produced by rain reflections forms loose, cotton-like regions of continuous bright spots with no distinct edges on the screen. The heavier the rainfall, the larger the raindrops, the shorter the radar operating wavelength, and the wider the antenna beam and transmitted pulse, the stronger the rain reflection, and echoes of small targets inside the rain area are submerged; conversely, the rain reflection is weaker. In the prior art, radar echo video images are passed through a differentiating circuit to suppress rain and snow interference. The wide interference video pulse formed by raindrops contains a broad, gently varying direct-current component; differentiation filters out this component while retaining a small number of edge components, so large continuous interference echoes such as rain and snow can be suppressed.
However, the differentiating circuit suppresses land-target echoes along with rain and snow echoes, and because it is applied over the full range and in all directions, echoes of small targets outside the rain clutter region are also very likely to be suppressed.
Disclosure of Invention
In order to solve the above problems, an object of the present invention is to provide a method and a system for identifying a rain clutter area based on a navigation radar, which can better perform rain clutter suppression and target detection after identifying the rain clutter area.
The embodiment of the invention provides a rain clutter identification method based on a navigation radar, which comprises the following steps:
s1, performing median filtering processing and minimum filtering processing on the first image respectively to obtain a second image and a third image, wherein the first image is an original radar echo video image;
s2, calculating gradient values of the second image in all directions by adopting convolution templates in multiple directions, selecting the maximum value in the gradient values and carrying out preset threshold judgment to obtain a fourth image;
s3, performing connected domain processing on the fourth image to obtain a fifth image;
s4, performing target area detection on the fifth image by using the pixel value information of the second image and the connected domain information of the fifth image to obtain a sixth image;
s5, replacing the pixel value of the target area of the second image by the pixel value of the target area of the sixth image and the pixel value of the third image to obtain a seventh image;
and S6, smoothing the seventh image to obtain an eighth image.
As a further improvement of the present invention, said S2 includes:
the convolution templates in the multiple directions are respectively convolved with the second image to obtain multiple filtering images;
for each pixel point, taking the maximum value of the gradient values in the plurality of filtering images as the gradient value of the pixel point to obtain a spatial filtering image;
and carrying out thresholding processing on the spatial domain filtering image based on a preset threshold value to obtain the fourth image.
As a further improvement of the invention, the method employs convolution templates in eight directions, 0 °, 45 °, 90 °, 135 °, 180 °, 225 °, 270 °, and 315 °, respectively.
As a further improvement of the present invention, in S3, performing connected domain processing on each foreground pixel point in the fourth image by using an eight-neighborhood seed filling method includes:
selecting a foreground pixel point from the fourth image as a seed, merging the foreground pixel points adjacent to the seed into the same pixel point set according to the set condition of the connected domain to obtain a connected domain, and marking all foreground pixel points in the connected domain by adopting the same sequence number value.
As a further improvement of the present invention, said S4 includes:
scanning the fifth image row by row and column by column; when marks with the same sequence number value appear in a row or column with several 0s between them, checking a condition on the pixel points corresponding to those 0s and, if the condition is met, assigning the sequence number value to those pixel points so as to extract a target area and obtain a sixth image.
As a further improvement of the present invention, when marks with the same sequence number value appear with several 0s between them during row scanning or column scanning, checking the condition on the pixel points corresponding to those 0s and, if the condition is met, assigning the sequence number value to those pixel points comprises:
S41, when scanning the ith row or ith column of the fifth image, finding the first sequence number value that is not 0, recording this value as label, and recording its starting pixel coordinate as startID;
S42, continuing to scan backward from the starting pixel coordinate to find a sequence number value that is not 0; if this value equals label, recording the current pixel coordinate as the ending pixel coordinate of that sequence number value, denoted endID;
S43, continuing to scan backward from the current pixel coordinate, repeating S42 to update endID until a new sequence number value newlabel that is not 0 and is not equal to label appears, and recording the pixel coordinate newstartID corresponding to the new sequence number value;
S44, sequentially extracting from the second image the pixel values within the coordinate range from startID to endID in the ith row or ith column; assuming one coordinate in this range is [i, j], if the pixel value at [i, j] is greater than the pixel value at the start coordinate [i, startID] and the sequence number value at [i, j] in the fifth image is 0, updating the sequence number value at [i, j] in the fifth image to label;
S45, updating label to the new sequence number value newlabel, and updating startID to the pixel coordinate newstartID corresponding to the new sequence number value;
S46, repeating S41-S45 until the row scan or column scan is completed.
The embodiment of the invention also provides a rain clutter recognition system based on the navigation radar, which comprises:
the sequencing filtering processing module is used for respectively carrying out median filtering processing and minimum filtering processing on the first image to obtain a second image and a third image, wherein the first image is an original radar echo video image;
the edge detection module is used for calculating gradient values of the second image in all directions by adopting convolution templates in multiple directions, selecting the maximum value in the gradient values and judging a preset threshold value to obtain a fourth image;
the connected domain processing module is used for carrying out connected domain processing on the fourth image to obtain a fifth image;
the target area detection module is used for carrying out target area detection on the fifth image by utilizing the pixel value information of the second image and the connected domain information of the fifth image to obtain a sixth image;
the rain clutter region identification module is used for replacing the pixel value of the target region of the second image by using the pixel value of the target region of the sixth image and the pixel value of the third image to obtain a seventh image;
and the smoothing processing module is used for smoothing the seventh image to obtain an eighth image.
As a further improvement of the present invention, the edge detection module includes:
the convolution templates in the multiple directions are respectively convolved with the second image to obtain multiple filtering images;
for each pixel point, taking the maximum value of the gradient values in the plurality of filtering images as the gradient value of the pixel point to obtain a spatial filtering image;
and carrying out thresholding processing on the spatial domain filtering image based on a preset threshold value to obtain the fourth image.
As a further improvement of the invention, the method employs convolution templates in eight directions, 0 °, 45 °, 90 °, 135 °, 180 °, 225 °, 270 °, and 315 °, respectively.
As a further improvement of the present invention, the connected domain processing module comprises:
selecting a foreground pixel point from the fourth image as a seed, merging the foreground pixel points adjacent to the seed into the same pixel point set according to the set condition of the connected domain to obtain a connected domain, and marking all foreground pixel points in the connected domain by adopting the same sequence number value.
As a further improvement of the present invention, the target area detection module includes:
scanning the fifth image row by row and column by column; when marks with the same sequence number value appear in a row or column with several 0s between them, checking a condition on the pixel points corresponding to those 0s and, if the condition is met, assigning the sequence number value to those pixel points so as to extract a target area and obtain a sixth image.
As a further improvement of the present invention, the target area detection module includes:
the starting pixel coordinate marking module is used for finding a first sequence number value which is not 0 when scanning the ith row or ith column of the fifth image, marking the sequence number value as label, and recording the starting pixel coordinate of the sequence number value as startID;
the end pixel coordinate marking module is used for continuously scanning backwards from the initial pixel coordinate, searching a serial number value which is not 0, and recording the current pixel coordinate as an end pixel coordinate of the serial number value and recording the end pixel coordinate as an endID if the serial number value is equal to label;
the ending pixel coordinate updating module is used for continuing to scan backward from the current pixel coordinate, repeating the process of the ending pixel coordinate marking module to update endID until a new sequence number value newlabel that is not 0 and is not equal to label appears, and recording the pixel coordinate newstartID corresponding to the new sequence number value;
a 0 sequence number value updating module, configured to sequentially extract pixel values in coordinate ranges of startID and endID in an ith row or ith column from the second image, assume that one of the coordinates in the coordinate ranges of startID and endID is [ i, j ], and if a pixel value corresponding to a coordinate [ i, j ] is greater than a pixel value corresponding to a start coordinate [ i, startID ] and a sequence number value in the coordinate [ i, j ] in the fifth image is 0, update a sequence number value corresponding to the coordinate [ i, j ] in the fifth image to label;
a label sequence number value updating module used for updating label to a new sequence number value newlabel and updating startID to a pixel coordinate newstartID corresponding to the new sequence number value;
and the repeating module is used for repeating the processes of the starting pixel coordinate marking module, the ending pixel coordinate updating module, the 0 serial number value updating module and the label serial number value updating module until the line scanning or the column scanning is finished.
Embodiments of the present invention also provide an electronic device, which includes a memory and a processor, where the memory is configured to store one or more computer instructions, and the one or more computer instructions are executed by the processor to implement the method.
Embodiments of the present invention also provide a computer-readable storage medium, on which a computer program is stored, the computer program being executed by a processor to implement the method.
The invention has the beneficial effects that:
the rain clutter region in the original radar echo image is identified, rain clutter suppression and target detection can be better carried out after identification, small targets outside the rain clutter region can be better reserved, and the improvement effect on the echo of a continental land target is also better.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic flowchart of a method for identifying rain clutter based on a navigation radar according to an exemplary embodiment of the present invention;
FIG. 2 is a schematic illustration of a first image according to an exemplary embodiment of the present invention;
FIG. 3 is a schematic diagram of a second image in accordance with an exemplary embodiment of the present invention;
FIG. 4 is a diagram illustrating an eight-direction convolution template in accordance with an exemplary embodiment of the present invention;
FIG. 5 is a diagram illustrating a fourth image in accordance with an exemplary embodiment of the present invention;
FIG. 6 is a schematic illustration of a sixth image in accordance with an exemplary embodiment of the present invention;
fig. 7 is a diagram illustrating a seventh image according to an exemplary embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that, if directional indications (such as up, down, left, right, front, and back … …) are involved in the embodiment of the present invention, the directional indications are only used for explaining the relative position relationship between the components, the motion situation, and the like under a certain posture (as shown in the drawing), and if the certain posture is changed, the directional indications are changed accordingly.
In addition, in the description of the present invention, the terms used are for illustrative purposes only and are not intended to limit the scope of the present invention. The terms "comprises" and/or "comprising" are used to specify the presence of stated elements, steps, operations, and/or components, but do not preclude the presence or addition of one or more other elements, steps, operations, and/or components. The terms "first," "second," and the like may be used to describe various elements, not necessarily order, and not necessarily limit the elements. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified. These terms are only used to distinguish one element from another. These and/or other aspects will become apparent to those of ordinary skill in the art in view of the following drawings, and the description of the embodiments of the present invention will be more readily understood by those of ordinary skill in the art. The figures depict described embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated in the present application may be employed without departing from the principles described in the present application.
The embodiment of the invention provides a rain clutter identification method based on a navigation radar, which comprises the following steps of:
s1, performing median filtering processing and minimum filtering processing on the first image respectively to obtain a second image and a third image, wherein the first image is an original radar echo video image;
s2, calculating gradient values of the second image in all directions by adopting convolution templates in multiple directions, selecting the maximum value in the gradient values and carrying out preset threshold judgment to obtain a fourth image;
s3, performing connected domain processing on the fourth image to obtain a fifth image;
s4, performing target area detection on the fifth image by using the pixel value information of the second image and the connected domain information of the fifth image to obtain a sixth image;
s5, replacing the pixel value of the target area in the second image by the pixel value of the target area of the sixth image and the pixel value of the third image to obtain a seventh image;
and S6, smoothing the seventh image to obtain an eighth image.
The method of the invention first identifies the rain clutter area in the original radar echo image and then suppresses it (a differentiating circuit may be used for the suppression), so small targets outside the rain clutter area are well preserved and the echo of land targets is improved. After small targets are removed by median filtering, large targets are removed by edge detection, connected-domain processing, and target area detection, and the remaining region is regarded as the rain clutter area. The method has good real-time performance and can process the original radar echo image in real time.
In S1, the first image is subjected to a sorting filtering process, which includes a median filtering process and a minimum filtering process.
A median filter is applied to the first image. The median filter is a nonlinear filter: it removes small targets in an image while retaining large, sheet-like targets and rain clutter, preserves detail, and does not blur edges, which benefits the subsequent edge detection. Its basic principle is to replace the value at a point with the median of the values in a neighborhood of that point, so that the surrounding pixel values approach the true values. The method applies median filtering with an 8 × 5 neighborhood to the first image (hereinafter image1, shown in FIG. 2) to obtain the second image (hereinafter image2, shown in FIG. 3).
A minimum filter is applied to the first image. The minimum filter takes the first sample when the window elements are sorted in ascending order, i.e., the minimum. The method filters image1 with an 8 × 5 neighborhood, replaces the center value of the current window with the minimum sample value, and saves the filtered result as the third image (hereinafter image3).
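As an illustrative sketch of the two sorting filters (assuming NumPy and SciPy; the 8 × 5 window comes from the text, while the toy echo image and the `scipy.ndimage` calls are stand-ins for the patent's own implementation):

```python
import numpy as np
from scipy.ndimage import median_filter, minimum_filter

# Hypothetical echo image: a bright 2x2 "small target" on a dark background.
image1 = np.zeros((20, 20), dtype=np.uint8)
image1[5:7, 5:7] = 200

# 8x5 neighborhood as described in the text.
image2 = median_filter(image1, size=(8, 5))   # removes small, isolated targets
image3 = minimum_filter(image1, size=(8, 5))  # keeps the local minimum
```

With a 40-pixel window, the 4-pixel target never dominates the window median, so it vanishes from image2 while large sheet-like regions would survive.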
In an alternative embodiment, the S2 includes:
the convolution templates in the multiple directions are respectively convolved with the second image to obtain multiple filtering images;
for each pixel point, taking the maximum value of the gradient values in the plurality of filtering images as the gradient value of the pixel point to obtain a spatial filtering image;
and carrying out thresholding processing on the spatial domain filtering image based on a preset threshold value to obtain the fourth image.
When the method of the invention is used for carrying out edge detection on the image after the sorting filtering processing, gradient values in all directions are firstly calculated, the intensity change value of each pixel point in the image is determined by utilizing the maximum gradient value, and possible edge pixel points are reserved by selecting a threshold value.
In an alternative embodiment, the method uses convolution templates for eight directions, 0 °, 45 °, 90 °, 135 °, 180 °, 225 °, 270 °, 315 °, respectively.
The method of the invention adopts convolution templates of eight directions of 0 degree, 45 degree, 90 degree, 135 degree, 180 degree, 225 degree, 270 degree and 315 degree to calculate the image gradient value, and the convolution templates of the eight directions are shown in figure 4. The edge detection of the convolution template in eight directions can better detect the edge of each direction of the image.
In the method, the convolution templates for the eight directions are each convolved with the median-filtered image2, and the eight filtered images are denoted g1(x, y), g2(x, y), ..., g8(x, y). The final spatial-domain filtered image g(x, y) takes, at each pixel, the maximum of the gradient values in the eight directions: g(x, y) = max(|g1(x, y)|, |g2(x, y)|, ..., |g8(x, y)|). Thresholding is then applied to the spatial-domain filtered image: pixel values below the preset threshold are set to 0 and those above it are set to 255, giving the final edge detection image, which is saved as the fourth image (hereinafter image4, shown in FIG. 5).
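The patent's eight templates are given in its FIG. 4, which is not reproduced here; the sketch below substitutes Kirsch-style compass masks with the same structure (one 3 × 3 template per 45° direction), and the test image and threshold value are illustrative:

```python
import numpy as np
from scipy.ndimage import convolve

def compass_masks():
    # One mask per 45-degree direction: three 5s rotate around the ring of
    # the 3x3 template (a stand-in for the patent's FIG. 4 templates).
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [5, 5, 5, -3, -3, -3, -3, -3]
    masks = []
    for s in range(8):
        m = np.zeros((3, 3))
        for k, (r, c) in enumerate(ring):
            m[r, c] = vals[(k - s) % 8]
        masks.append(m)
    return masks

image2 = np.zeros((10, 10))
image2[:, 5:] = 100.0  # a vertical edge between columns 4 and 5

# g(x, y) = max over the eight directions of |g_i(x, y)|
g = np.max([np.abs(convolve(image2, m)) for m in compass_masks()], axis=0)

threshold = 500.0  # illustrative preset threshold
image4 = np.where(g > threshold, 255, 0)
```

Only pixels adjacent to the intensity step exceed the threshold; the uniform interior of the bright region, where all eight responses cancel, maps to 0.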
In an optional implementation manner, in S3, performing connected-domain processing on each foreground pixel point in the fourth image by the eight-neighborhood seed filling method comprises:
selecting a foreground pixel point from the fourth image as a seed, merging the foreground pixel points adjacent to the seed into the same pixel point set according to the set condition of the connected domain to obtain a connected domain, and marking all foreground pixel points in the connected domain by adopting the same sequence number value.
Connected-domain processing methods include the two-pass scanning method and the seed filling method. This method adopts seed filling: a foreground pixel point is selected as a seed, foreground pixel points adjacent to the seed are merged into the same pixel point set according to the two basic conditions of a connected domain (equal pixel values and adjacent positions), and the resulting pixel point set is a connected domain. The same connected domain is marked with the same sequence number value, and the processed result is saved as the fifth image (hereinafter image5).
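A minimal sketch of eight-neighborhood seed filling as described above (an illustrative implementation, not the patent's code; foreground pixels are assumed to have value 255 and sequence number values start at 1):

```python
import numpy as np

def label_components(binary):
    """Eight-neighborhood seed filling: every foreground pixel (255) in the
    same connected region receives the same positive sequence number value."""
    labels = np.zeros(binary.shape, dtype=int)
    next_label = 0
    h, w = binary.shape
    for r in range(h):
        for c in range(w):
            if binary[r, c] == 255 and labels[r, c] == 0:
                next_label += 1
                stack = [(r, c)]            # the seed
                labels[r, c] = next_label
                while stack:                # flood out to all 8 neighbors
                    y, x = stack.pop()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and binary[ny, nx] == 255
                                    and labels[ny, nx] == 0):
                                labels[ny, nx] = next_label
                                stack.append((ny, nx))
    return labels

# Two separate blobs receive two distinct sequence number values
image4 = np.zeros((8, 8), dtype=np.uint8)
image4[1:3, 1:3] = 255
image4[5:7, 5:7] = 255
image5 = label_components(image4)
```

An equivalent result could also be obtained from a library routine such as OpenCV's `connectedComponents` with 8-connectivity.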
In an alternative embodiment, the S4 includes:
scanning the fifth image row by row and column by column; when, during a row or column scan, marks with the same sequence number value appear with several 0s between them, checking a condition on the pixel points corresponding to those 0s and, if it is met, assigning the sequence number value to those pixel points so as to extract a target area and obtain a sixth image (hereinafter image6).
The method scans the connected-domain-processed image5 row by row and column by column. If the same sequence number value is found twice in a row or column with several 0s in between, the pixel points corresponding to those 0s are tested against a condition; if it is met, the sequence number value is assigned to them. Once the row and column scans are complete, the target area can be extracted; image6 with the extracted target area is shown in FIG. 6.
In an optional implementation manner, when marks with the same sequence number value appear with several 0s between them during row scanning or column scanning, performing the condition judgment on the pixel points corresponding to those 0s and, if the condition is met, assigning the sequence number value to those pixel points comprises:
S41, when scanning the ith row or ith column of the fifth image, finding the first sequence number value that is not 0, recording this value as label, and recording its starting pixel coordinate as startID;
S42, continuing to scan backward from the starting pixel coordinate to find a sequence number value that is not 0; if this value equals label, recording the current pixel coordinate as the ending pixel coordinate of that sequence number value, denoted endID;
S43, continuing to scan backward from the current pixel coordinate, repeating S42 to update endID until a new sequence number value newlabel that is not 0 and is not equal to label appears, and recording the pixel coordinate newstartID corresponding to the new sequence number value;
S44, sequentially extracting from the second image the pixel values within the coordinate range from startID to endID in the ith row or ith column; assuming one coordinate in this range is [i, j], if the pixel value at [i, j] is greater than the pixel value at the start coordinate [i, startID] and the sequence number value at [i, j] in the fifth image is 0, updating the sequence number value at [i, j] in the fifth image to label;
S45, updating label to the new sequence number value newlabel, and updating startID to the pixel coordinate newstartID corresponding to the new sequence number value;
S46, repeating S41-S45 until the entire row scan or column scan is completed.
According to the method, a large target area is detected after connected-domain processing. The detection process considers both row scanning and column scanning and also judges the echo pixel values, further ensuring that the same target area is identified; the seventh image, after the rain clutter area has been identified, is shown in FIG. 7.
The embodiment of the invention provides a rain clutter recognition system based on a navigation radar, which comprises:
the sequencing filtering processing module is used for respectively carrying out median filtering processing and minimum filtering processing on the first image to obtain a second image and a third image, wherein the first image is an original radar echo video image;
the edge detection module is used for calculating gradient values of the second image in all directions by adopting convolution templates in multiple directions, selecting the maximum value in the gradient values and judging a preset threshold value to obtain a fourth image;
the connected domain processing module is used for carrying out connected domain processing on the fourth image to obtain a fifth image;
the target area detection module is used for carrying out target area detection on the fifth image by utilizing the pixel value information of the second image and the connected domain information of the fifth image to obtain a sixth image;
the rain clutter region identification module is used for replacing the pixel value of the target region of the second image by using the pixel value of the target region of the sixth image and the pixel value of the third image to obtain a seventh image;
and the smoothing processing module is used for smoothing the seventh image to obtain an eighth image.
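The six modules above form a sequential pipeline. As an illustration only, the flow can be sketched in Python with NumPy/SciPy. Everything here is an assumption for readability: `rain_clutter_pipeline` is a hypothetical name, a Sobel pair stands in for the patent's eight directional templates, the S4 gap-filling step is omitted, and the replacement rule in S5 follows one plausible reading of the text.

```python
import numpy as np
from scipy import ndimage

def rain_clutter_pipeline(first_image, edge_threshold=30.0, size=3):
    """Simplified sketch of the six-stage pipeline; image names follow the
    patent's numbering (first image = raw echo video, eighth = output)."""
    img = np.asarray(first_image, dtype=float)
    # S1: rank-order filtering of the raw radar echo image.
    second = ndimage.median_filter(img, size=size)    # second image
    third = ndimage.minimum_filter(img, size=size)    # third image
    # S2: directional gradients; only horizontal/vertical Sobel shown here,
    # standing in for the eight directional templates of the patent.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx = np.abs(ndimage.convolve(second, kx))
    gy = np.abs(ndimage.convolve(second, kx.T))
    fourth = np.maximum(gx, gy) > edge_threshold      # fourth image
    # S3: connected-domain labelling with an eight-neighbourhood structure.
    fifth, _ = ndimage.label(fourth, structure=np.ones((3, 3), int))
    # S4: target-area detection; the row/column gap filling of S41-S46 is
    # omitted in this sketch, so the sixth image equals the fifth.
    sixth = fifth
    # S5: keep the median-filtered echo inside target areas and use the
    # minimum-filtered value elsewhere, suppressing rain clutter.
    seventh = np.where(sixth > 0, second, third)
    # S6: smoothing of the identified image.
    eighth = ndimage.uniform_filter(seventh, size=size)
    return eighth
```

The key design point is the pairing of the two rank-order filters: the median filter preserves target echoes for detection, while the minimum filter strongly attenuates the diffuse rain clutter that is swapped in outside detected target areas.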
In an alternative embodiment, the edge detection module includes:
performing convolution processing on the convolution templates in the multiple directions and the second image respectively to obtain multiple filtering images;
for each pixel point, taking the maximum value of the gradient values in the plurality of filtering images as the gradient value of the pixel point to obtain a spatial filtering image;
and carrying out thresholding processing on the spatial domain filtering image based on a preset threshold value to obtain the fourth image.
In an alternative embodiment, the method employs convolution templates for eight directions, 0 °, 45 °, 90 °, 135 °, 180 °, 225 °, 270 °, and 315 °, respectively.
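The patent does not give the coefficients of the eight directional templates. One common choice for eight compass directions in 45° steps is the Kirsch kernel set, shown below purely as an assumption for illustration; `kirsch_templates` and `edge_map` are hypothetical names.

```python
import numpy as np

# Kirsch 0-degree compass kernel; the remaining seven directions are
# obtained by rotating the outer ring of coefficients in 45-degree steps.
KIRSCH_0 = np.array([[ 5,  5,  5],
                     [-3,  0, -3],
                     [-3, -3, -3]], float)

def kirsch_templates():
    """Generate all eight directional kernels by ring rotation."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    base = [KIRSCH_0[r, c] for r, c in ring]
    kernels = []
    for shift in range(8):
        k = np.zeros((3, 3))             # centre coefficient stays 0
        for idx, (r, c) in enumerate(ring):
            k[r, c] = base[(idx - shift) % 8]
        kernels.append(k)
    return kernels

def edge_map(image, threshold):
    """Fourth image: per-pixel maximum of the eight directional gradient
    magnitudes, thresholded against a preset value."""
    from scipy.ndimage import convolve
    img = np.asarray(image, dtype=float)
    grads = [np.abs(convolve(img, k)) for k in kirsch_templates()]
    return np.max(grads, axis=0) > threshold
```

Taking the maximum over all eight responses makes the edge strength roughly isotropic, so edges of the rain clutter region are picked up regardless of their orientation relative to the radar sweep.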
In an optional embodiment, the connected domain processing module includes:
selecting a foreground pixel point from the fourth image as a seed, merging foreground pixel points adjacent to the seed into the same pixel point set according to the set connected-domain condition to obtain a connected domain, and marking all foreground pixel points in the connected domain with the same sequence number value.
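The eight-neighbourhood seed-filling step can be sketched as a breadth-first flood fill; `label_components` is a hypothetical name, and this is a minimal illustration rather than the patented implementation.

```python
import numpy as np
from collections import deque

def label_components(binary):
    """Eight-neighbourhood seed-filling labelling: every foreground pixel
    (nonzero) in the same connected domain receives the same sequence
    number value; background pixels stay 0."""
    binary = np.asarray(binary)
    labels = np.zeros(binary.shape, dtype=int)
    rows, cols = binary.shape
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r, c] and labels[r, c] == 0:
                next_label += 1          # new seed -> new sequence number
                labels[r, c] = next_label
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    for dy in (-1, 0, 1):        # eight-neighbourhood
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and binary[ny, nx]
                                    and labels[ny, nx] == 0):
                                labels[ny, nx] = next_label
                                queue.append((ny, nx))
    return labels
```

Because diagonal neighbours are merged, a diagonal chain of edge pixels counts as one connected domain, which matters for thin, slanted rain-cell boundaries.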
In an optional embodiment, the target area detection module includes:
and scanning the fifth image row by row and column by column; when marks with the same sequence number value appear with several 0s between them, performing condition judgment on the pixel points corresponding to those 0s, and if the condition is met, assigning the sequence number value to the pixel points so as to extract a target area and obtain a sixth image.
In an optional embodiment, the target area detection module includes:
the starting pixel coordinate marking module is used for finding the first sequence number value that is not 0 when scanning the ith row or ith column of the fifth image, marking the sequence number value as label, and recording the starting pixel coordinate of the sequence number value as startID;
the ending pixel coordinate marking module is used for continuing to scan backward from the starting pixel coordinate and searching for a sequence number value that is not 0; if the sequence number value is equal to label, the current pixel coordinate is recorded as the ending pixel coordinate of the sequence number value, denoted endID;
the ending pixel coordinate updating module is used for continuing to scan backward from the current pixel coordinate, repeating the process of the ending pixel coordinate marking module to update endID until a new sequence number value newlabel that is not 0 and is not equal to label appears, and recording the pixel coordinate newstartID corresponding to the new sequence number value;
the 0 sequence number value updating module is configured to sequentially extract, from the second image, the pixel values within the coordinate range from startID to endID in the ith row or ith column; assuming one of the coordinates in that range is [i, j], if the pixel value corresponding to coordinate [i, j] is greater than the pixel value corresponding to the starting coordinate [i, startID] and the sequence number value at coordinate [i, j] in the fifth image is 0, the sequence number value corresponding to coordinate [i, j] in the fifth image is updated to label;
the label sequence number value updating module is used for updating label to the new sequence number value newlabel and updating startID to the pixel coordinate newstartID corresponding to the new sequence number value;
and the repeating module is used for repeating the processes of the starting pixel coordinate marking module, the ending pixel coordinate marking module, the ending pixel coordinate updating module, the 0 sequence number value updating module and the label sequence number value updating module until the row scan or the column scan is completed.
The disclosure also relates to an electronic device comprising a server, a terminal and the like. The electronic device includes: at least one processor; a memory communicatively coupled to the at least one processor; and a communication component communicatively coupled to the storage medium, the communication component receiving and transmitting data under control of the processor; wherein the memory stores instructions executable by the at least one processor to implement the method of the above embodiments.
In an alternative embodiment, the memory is used as a non-volatile computer-readable storage medium for storing non-volatile software programs, non-volatile computer-executable programs, and modules. The processor executes various functional applications of the device and data processing, i.e., implements the method, by executing nonvolatile software programs, instructions, and modules stored in the memory.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store a list of options, etc. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and such remote memory may be connected to the external device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory and, when executed by the one or more processors, perform the methods of any of the method embodiments described above.
The above product can execute the method provided by the embodiments of the present application and has the corresponding functional modules and beneficial effects of executing the method; for technical details not described in detail in this embodiment, reference may be made to the method provided by the embodiments of the present application.
The present disclosure also relates to a computer-readable storage medium for storing a computer-readable program for causing a computer to perform some or all of the above-described method embodiments.
That is, as can be understood by those skilled in the art, all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing related hardware, where the program is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other various media capable of storing program code.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Furthermore, those of ordinary skill in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
It will be understood by those skilled in the art that while the present invention has been described with reference to exemplary embodiments, various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims (9)

1. A rain clutter identification method based on a navigation radar is characterized by comprising the following steps:
s1, performing median filtering processing and minimum filtering processing on the first image respectively to obtain a second image and a third image, wherein the first image is an original radar echo video image;
s2, calculating gradient values of the second image in all directions by adopting convolution templates in multiple directions, selecting the maximum value in the gradient values and carrying out preset threshold judgment to obtain a fourth image;
s3, performing connected domain processing on the fourth image to obtain a fifth image;
s4, performing target area detection on the fifth image by using the pixel value information of the second image and the connected domain information of the fifth image to obtain a sixth image;
s5, replacing the pixel value of the target area of the second image with the pixel value of the target area of the sixth image and the pixel value of the third image to obtain a seventh image;
and S6, smoothing the seventh image to obtain an eighth image.
2. The method of claim 1, wherein the S2 includes:
the convolution templates in the multiple directions are respectively convolved with the second image to obtain multiple filtering images;
for each pixel point, taking the maximum value of the gradient values in the plurality of filtering images as the gradient value of the pixel point to obtain a spatial filtering image;
and carrying out thresholding processing on the spatial domain filtering image based on a preset threshold value to obtain the fourth image.
3. The method of claim 1 or 2, wherein the method employs convolution templates in eight directions, 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°, respectively.
4. The method according to claim 1, wherein in S3, for each foreground pixel in the fourth image, performing connected domain processing by using an eight-neighborhood seed filling method includes:
selecting a foreground pixel point from the fourth image as a seed, merging the foreground pixel points adjacent to the seed into the same pixel point set according to the set condition of the connected domain to obtain a connected domain, and marking all foreground pixel points in the connected domain by adopting the same sequence number value.
5. The method of claim 4, wherein the S4 includes:
and scanning the fifth image row by row and column by column; when marks with the same sequence number value appear with several 0s between them, performing condition judgment on the pixel points corresponding to those 0s, and if the condition is met, assigning the sequence number value to the pixel points so as to extract a target area and obtain a sixth image.
6. The method of claim 5, wherein when marks with the same sequence number value appear during the row scanning or column scanning and several 0s lie between them, performing condition judgment on the pixel points corresponding to the several 0s, and assigning the sequence number value to those pixel points if the condition is met, comprises:
S41, when scanning the ith row or ith column of the fifth image, finding the first sequence number value that is not 0, recording the sequence number value as label, and recording the starting pixel coordinate of the sequence number value as startID;
S42, continuing to scan backward from the starting pixel coordinate and searching for a sequence number value that is not 0; if the sequence number value is equal to label, recording the current pixel coordinate as the ending pixel coordinate of the sequence number value, denoted endID;
S43, continuing to scan backward from the current pixel coordinate, repeating S42 to update endID until a new sequence number value newlabel that is not 0 and is not equal to label appears, and recording the pixel coordinate newstartID corresponding to the new sequence number value;
S44, sequentially extracting, from the second image, the pixel values within the coordinate range from startID to endID in the ith row or ith column; assuming one of the coordinates in that range is [i, j], if the pixel value corresponding to coordinate [i, j] is greater than the pixel value corresponding to the starting coordinate [i, startID] and the sequence number value at coordinate [i, j] in the fifth image is 0, updating the sequence number value corresponding to coordinate [i, j] in the fifth image to label;
S45, updating label to the new sequence number value newlabel, and updating startID to the pixel coordinate newstartID corresponding to the new sequence number value;
S46, repeating S41-S45 until the row scan or column scan is completed.
7. A rain clutter recognition system based on a navigation radar, the system comprising:
the sequencing filtering processing module is used for respectively carrying out median filtering processing and minimum filtering processing on the first image to obtain a second image and a third image, wherein the first image is an original radar echo video image;
the edge detection module is used for calculating gradient values of the second image in all directions by adopting convolution templates in multiple directions, selecting the maximum value in the gradient values and judging a preset threshold value to obtain a fourth image;
the connected domain processing module is used for carrying out connected domain processing on the fourth image to obtain a fifth image;
the target area detection module is used for carrying out target area detection on the fifth image by utilizing the pixel value information of the second image and the connected domain information of the fifth image to obtain a sixth image;
the rain clutter region identification module is used for replacing the pixel value of the target region of the second image by using the pixel value of the target region of the sixth image and the pixel value of the third image to obtain a seventh image;
and the smoothing processing module is used for smoothing the seventh image to obtain an eighth image.
8. An electronic device comprising a memory and a processor, wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method of any of claims 1-6.
9. A computer-readable storage medium, on which a computer program is stored, the computer program being executable by a processor for implementing the method according to any one of claims 1-6.
CN202210105646.0A 2022-01-28 2022-01-28 Rain clutter identification method and system based on navigation radar Expired - Fee Related CN114415142B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210105646.0A CN114415142B (en) 2022-01-28 2022-01-28 Rain clutter identification method and system based on navigation radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210105646.0A CN114415142B (en) 2022-01-28 2022-01-28 Rain clutter identification method and system based on navigation radar

Publications (2)

Publication Number Publication Date
CN114415142A CN114415142A (en) 2022-04-29
CN114415142B true CN114415142B (en) 2022-08-16

Family

ID=81279999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210105646.0A Expired - Fee Related CN114415142B (en) 2022-01-28 2022-01-28 Rain clutter identification method and system based on navigation radar

Country Status (1)

Country Link
CN (1) CN114415142B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115128570B (en) * 2022-08-30 2022-11-25 北京海兰信数据科技股份有限公司 Radar image processing method, device and equipment

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006072255A1 (en) * 2005-01-10 2006-07-13 Navico Danmark A/S Digital radar system with clutter reduction
JP5658871B2 (en) * 2009-11-02 2015-01-28 古野電気株式会社 Signal processing apparatus, radar apparatus, signal processing program, and signal processing method
CN104199009B (en) * 2014-09-18 2016-07-13 中国民航科学技术研究院 A kind of radar image clutter suppression method based on time domain specification
CN104915932B (en) * 2015-05-19 2018-04-27 中国电子科技集团公司第五十研究所 Hologram radar image preprocessing and target extraction method based on target signature
CN110738106A (en) * 2019-09-05 2020-01-31 天津大学 optical remote sensing image ship detection method based on FPGA
CN111199537A (en) * 2019-12-20 2020-05-26 中电科技(合肥)博微信息发展有限责任公司 VTS system radar target trace extraction method based on image processing, terminal device and computer readable storage medium
CN111767806B (en) * 2020-06-12 2023-05-05 北京理工大学 Ultra-narrow pulse radar ship target identification method based on Attribute
CN112649793A (en) * 2020-12-29 2021-04-13 西安科锐盛创新科技有限公司 Sea surface target radar trace condensation method and device, electronic equipment and storage medium
CN112634275B (en) * 2021-03-11 2021-06-01 北京海兰信数据科技股份有限公司 Radar echo image processing method and system
CN113064162B (en) * 2021-04-02 2023-03-14 中国科学院空天信息创新研究院 Detection method and device applied to radar system for detecting foreign matters on airfield runway

Also Published As

Publication number Publication date
CN114415142A (en) 2022-04-29

Similar Documents

Publication Publication Date Title
CN108280450B (en) Expressway pavement detection method based on lane lines
CN109242791B (en) Batch repair method for damaged plant leaves
US10037597B2 (en) Image inpainting system and method for using the same
CN111399507B (en) Method for determining boundary line in grid map and method for dividing grid map
CN108921813B (en) Unmanned aerial vehicle detection bridge structure crack identification method based on machine vision
CN114415142B (en) Rain clutter identification method and system based on navigation radar
CN107909571A (en) A kind of weld beam shape method, system, equipment and computer-readable storage medium
CN109840463B (en) Lane line identification method and device
CN112580447B (en) Edge second-order statistics and fusion-based power line detection method
CN113780110A (en) Method and device for detecting weak and small targets in image sequence in real time
Duong et al. Near real-time ego-lane detection in highway and urban streets
CN111833367A (en) Image processing method and device, vehicle and storage medium
CN109087347B (en) Image processing method and device
CN113450402B (en) Navigation center line extraction method for vegetable greenhouse inspection robot
CN113628202A (en) Determination method, cleaning robot and computer storage medium
CN113421210A (en) Surface point cloud reconstruction method based on binocular stereo vision
CN116030430A (en) Rail identification method, device, equipment and storage medium
CN116912115A (en) Underwater image self-adaptive enhancement method, system, equipment and storage medium
CN117031424A (en) Water surface target detection tracking method based on navigation radar
CN115223031B (en) Monocular frame ranging method and device, medium and curtain wall robot
CN114612628A (en) Map beautifying method, device, robot and storage medium
CN109493301B (en) Map image processing method and device and robot
CN109766889B (en) Rail image recognition post-processing method based on curve fitting
CN110728228A (en) Seismic wave image fault identification method based on ant colony tracking algorithm
CN115908184B (en) Automatic removal method and device for mole pattern

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220816