CN112561940B - Dense multi-target parameter extraction method and device and terminal equipment

Info

Publication number
CN112561940B
Authority
CN
China
Prior art keywords: detected, image, target, pixel point, determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011422326.5A
Other languages
Chinese (zh)
Other versions
CN112561940A (en)
Inventor
史林
刘利民
曾瑞
马俊涛
黄欣鑫
韩壮志
尹园威
吕萌
李钦
王丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Army Engineering University of PLA
Original Assignee
Army Engineering University of PLA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Army Engineering University of PLA
Priority to CN202011422326.5A
Publication of CN112561940A
Application granted
Publication of CN112561940B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention is applicable to the technical field of radar and provides a dense multi-target parameter extraction method, a device, and a terminal device. The method comprises the following steps: reading a preprocessed image to be detected in any range gate; calculating the maximum values of the gray gradient in the image to be detected and determining the edges of a plurality of targets therein; binarizing the image to be detected with the target edges determined and determining the corresponding binary image matrix; determining the minimum circumscribed rectangle data of each target area according to the binary image matrix; calculating the pixel point with the maximum amplitude in each piece of minimum circumscribed rectangle data and taking it as the particle corresponding to each target; and determining time, velocity, and amplitude data of each target according to its particle. The time, velocity, and amplitude of each target can thereby be effectively extracted, realizing the conversion from digital signals to physical parameters.

Description

Dense multi-target parameter extraction method and device and terminal equipment
Technical Field
The invention belongs to the technical field of radar, and particularly relates to a dense multi-target parameter extraction method, a device, and a terminal device.
Background
Target parameter extraction is the processing procedure from an echo-signal data file to a target-parameter file and is the key step from digital signals to physical parameters. The key processing step of the whole velocity-measuring radar system is to accurately extract the time, frequency, and amplitude information of projectile particles from the noise-reduced echo data. After constant false alarm processing, the signal-to-noise ratio of the projectile echo signal is greatly improved; however, a large number of targets exist in each range gate, whereas the conventional single-target parameter extraction method generally operates in only one dimension, the time domain or the frequency domain, judging that a target exists when the echo amplitude exceeds a specific threshold and then determining the parameter information from the position of the echo peak. As a result, the target echo-signal data set cannot be converted into a target-parameter data set, parameter information such as the time required for data processing cannot be effectively obtained, and further data processing cannot be carried out.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method, an apparatus, and a terminal device for extracting dense multi-target parameters, which aim to solve the problem in the prior art that parameter information such as time required for data processing cannot be effectively obtained.
In order to achieve the above object, a first aspect of the embodiments of the present invention provides a method for dense multi-target parameter extraction, including:
reading a preprocessed image to be detected in any range gate, wherein the image to be detected comprises a plurality of densely arranged targets and is a two-dimensional time-frequency image;
calculating the maximum value of the gray gradient in the image to be detected, determining the edges of the multiple targets in the image to be detected, performing binarization processing on the image to be detected with the edges of the multiple targets determined, and determining a binary image matrix corresponding to the image to be detected;
determining minimum circumscribed rectangle data corresponding to each target area according to the binary image matrix;
calculating the pixel point with the maximum amplitude in each piece of minimum circumscribed rectangle data, and taking that pixel point as the particle (trajectory point) corresponding to each target;
determining time, velocity, and amplitude data of the target according to each particle.
As another embodiment of the present application, the calculating a maximum value of a gray gradient in the image to be detected and determining edges of the plurality of targets in the image to be detected includes:
respectively carrying out plane convolution operation on the image to be detected by adopting a preset transverse convolution factor and a preset longitudinal convolution factor to obtain a gradient value detected by a transverse edge and a gradient value detected by a longitudinal edge;
calculating the sum of the square of the gradient value detected by the transverse edge and the square of the gradient value detected by the longitudinal edge to obtain the gradient-amplitude square sum of the current pixel point;
when the sum of squares of the gradient amplitudes is larger than or equal to the square of a preset threshold value, determining the current pixel point as an edge point;
and, according to the method for determining that the current pixel point is an edge point, processing every pixel point in the image to be detected to determine the edges of the plurality of targets.
As another embodiment of the present application, the method further includes:
and when the sum of the squares of the gradient amplitudes is not greater than the square of a preset threshold value, determining that the current pixel point is a non-edge point.
As another embodiment of the present application, a method for calculating a preset threshold includes:
calculating the average pixel value of all pixel points in the image to be detected;
and multiplying the average pixel value by a preset weighting factor to obtain the preset threshold value.
As another embodiment of the present application, the determining the minimum circumscribed rectangle data corresponding to each target area according to the binary image matrix includes:
according to the binary image matrix, sequentially carrying out label processing on each pixel point in the image to be detected to obtain the label of each pixel point;
and calculating the four outermost tangent points of the edge of each target area according to the labels of the pixel points to obtain the minimum circumscribed rectangle data of each target area.
As another embodiment of the present application, sequentially labeling each pixel point in the image to be detected according to the binary image matrix to obtain a label of each pixel point, including:
if the pixel values of the pixel point (i-1, j) and the pixel point (i, j-1) are both 0, assigning a new label H(i, j) to the pixel point (i, j);
if the pixel point (i-1, j) has a label H(i-1, j) and the pixel point (i, j-1) has no label, setting the label of the pixel point (i, j) to H(i, j) = H(i-1, j);
if the pixel point (i, j-1) has a label H(i, j-1) and the pixel point (i-1, j) has no label, setting the label of the pixel point (i, j) to H(i, j) = H(i, j-1);
if the pixel point (i-1, j) has the label H(i-1, j), the pixel point (i, j-1) has the label H(i, j-1), and H(i-1, j) < H(i, j-1), setting the label of the pixel point (i, j) to H(i, j) = H(i-1, j);
and calculating the label of each pixel point according to the above method for determining the label of the pixel point (i, j).
As another embodiment of the present application, the calculating a pixel point with a maximum amplitude in each of the minimum circumscribed rectangle data, and taking the pixel point with the maximum amplitude as a particle corresponding to each target includes:
determining the pixel position range occupied by each target according to each minimum circumscribed rectangle data;
and sequencing the amplitudes of the pixel points in each pixel position range, and determining the pixel point with the maximum amplitude in each pixel position range as a particle corresponding to each target.
A second aspect of an embodiment of the present invention provides an apparatus for dense multi-target parameter extraction, including:
the reading module is used for reading a preprocessed image to be detected in any range gate, wherein the image to be detected comprises a plurality of densely arranged targets and is a two-dimensional time-frequency image;
the target edge extraction module is used for calculating the maximum value of the gray gradient in the image to be detected, determining the edges of the targets in the image to be detected, carrying out binarization processing on the image to be detected with the edges of the targets determined, and determining a binary image matrix corresponding to the image to be detected;
the target area determining module is used for determining the minimum circumscribed rectangle data corresponding to each target area according to the binary image matrix;
the particle determining module is used for calculating a pixel point with the maximum amplitude value in each minimum circumscribed rectangle data, and taking the pixel point with the maximum amplitude value as a particle corresponding to each target;
and the parameter conversion module is used for determining time, speed and amplitude data of the target according to each particle.
A third aspect of an embodiment of the present invention provides a terminal device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the dense multi-target parameter extraction method according to any one of the above embodiments.
A fourth aspect of an embodiment of the present invention provides a computer-readable storage medium, including: the computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the method for dense multi-target parameter extraction as described in any of the above embodiments.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects. A large number of targets exist in each range gate and the number of targets varies, so the conventional single-target processing method cannot meet the requirements. In the embodiments, the target edges are first extracted and the image is then binarized to obtain the binary image matrix corresponding to the image to be detected; the connected regions are determined and labeled to locate the target regions; and the time, velocity, and amplitude data of each target are obtained from the maximum-amplitude pixel point in its target region. The time, velocity, and amplitude of each target can thus be effectively extracted, realizing the conversion from digital signals to physical parameters.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained from them by those of ordinary skill in the art without inventive effort.
FIG. 1 is a schematic flow chart of an implementation of a method for dense multi-target parameter extraction according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of determining the edges of a plurality of targets in an image to be detected according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an edge detection result according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the labeling result for 5 projectiles according to an embodiment of the present invention;
FIG. 5 is an exemplary diagram of an apparatus for dense multi-target parameter extraction provided by an embodiment of the present invention;
fig. 6 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Fig. 1 is a schematic view of an implementation flow of the method for extracting dense multi-target parameters according to the embodiment of the present invention, which is described in detail as follows.
Step 101, reading a preprocessed image to be detected in any range gate, wherein the image to be detected comprises a plurality of densely arranged targets and is a two-dimensional time-frequency image.
Optionally, the preprocessing in this step may be time-frequency resolution and noise reduction processing on the original image to be detected.
In this embodiment, the acquired echo data form a two-dimensional time-frequency image; a large number of targets exist in each range gate, and the number of targets varies. The conventional single-target parameter extraction method generally operates in one dimension, the time domain or the frequency domain: it judges that a target exists when the echo amplitude exceeds a specific threshold and then determines the parameter information from the position of the echo peak. For dense multi-target echoes, however, multiple targets exist in each range gate, the echo is a two-dimensional time-frequency image, and the number of targets is unknown. In this case, setting a uniform threshold in the conventional way easily causes targets to be lost or split, and the conventional method cannot obtain effective peak values from a multi-target two-dimensional time-frequency image. Consequently, the target echo-signal data set cannot be converted into a target-parameter data set, parameter information such as the time required for data processing cannot be effectively obtained, and further data processing cannot be carried out.
102, calculating the maximum value of the gray gradient in the image to be detected, determining the edges of the multiple targets in the image to be detected, performing binarization processing on the image to be detected with the edges of the multiple targets determined, and determining a binary image matrix corresponding to the image to be detected.
An image edge refers to a set of pixel points whose gray level exhibits a step change or a roof-like change. Image edges have two characteristics: the gray level changes slowly parallel to the edge direction and changes sharply perpendicular to it. The principle of edge detection is therefore to locate edges by measuring the gray-level changes in an image.
Optionally, as shown in fig. 2, when the maximum value of the gray gradient in the image to be measured is calculated in this step and the edges of the multiple targets in the image to be measured are determined, the following steps may be included.
Step 201, performing plane convolution operation on the image to be detected by using a preset transverse convolution factor and a preset longitudinal convolution factor respectively to obtain a gradient value detected by a transverse edge and a gradient value detected by a longitudinal edge.
Optionally, the preset transverse convolution factor Kx and the preset longitudinal convolution factor Ky are given in the original publication as formula images, which are not reproduced here.
Optionally, the gradient value of transverse edge detection is obtained according to Gx = Kx * A, where * denotes plane convolution, Gx is the gradient value of transverse edge detection, and A is the image to be detected.
Optionally, the gradient value of longitudinal edge detection is obtained analogously according to Gy = Ky * A, where Gy is the gradient value of longitudinal edge detection.
Step 202, calculating the sum of the square of the gradient value detected by the transverse edge and the square of the gradient value detected by the longitudinal edge to obtain the sum of the squares of the gradient amplitudes of the current pixel points.
Optionally, the gradient-amplitude square sum of the current pixel point is obtained according to G = Gx² + Gy², where G is the sum of the squares of the gradient amplitudes of the current pixel point.
Step 203, when the sum of squares of the gradient amplitudes is greater than or equal to the square of a preset threshold, determining that the current pixel point is an edge point.
Optionally, the method for calculating the preset threshold in this step may include:
calculating the average pixel value of all pixel points in the image to be detected;
and multiplying the average pixel value by a preset weighting factor to obtain the preset threshold value.
Optionally, the preset weighting factor may be set according to a requirement, and a value of the preset weighting factor is not limited in this embodiment.
And 204, when the sum of the squares of the gradient amplitudes is not greater than the square of a preset threshold value, determining that the current pixel point is a non-edge point.
Step 205, according to the above method for determining that the current pixel point is an edge point, processing every pixel point in the image to be detected and determining the edges of the plurality of targets.
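By way of illustration, the following is a minimal Python/NumPy sketch of steps 201 to 205. It assumes the preset transverse and longitudinal convolution factors are the standard Sobel kernels (the patent gives the factors only as formula images) and uses a hypothetical value for the preset weighting factor:

```python
import numpy as np
from scipy.ndimage import convolve

def detect_edges(A, weight=4.0):
    """Edge-point map for image A (steps 201-205). `weight` is a hypothetical
    preset weighting factor; Sobel kernels stand in for the patent's
    unreproduced convolution factors."""
    Kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)   # assumed transverse convolution factor
    Ky = Kx.T                                  # assumed longitudinal convolution factor
    Gx = convolve(A.astype(float), Kx)         # transverse edge-detection gradient (step 201)
    Gy = convolve(A.astype(float), Ky)         # longitudinal edge-detection gradient (step 201)
    G = Gx ** 2 + Gy ** 2                      # gradient-amplitude square sum (step 202)
    T = weight * A.mean()                      # preset threshold: weighted mean pixel value
    return G >= T ** 2                         # edge points where G >= T^2 (steps 203-204)
```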
FIG. 3 shows the edge-detection result for the measured rifle-projectile signal in the fourth range gate after constant false alarm processing. The 10 regions represent 10 projectiles, and the outer boundary contour of each region can be obtained to support region positioning.
After the edges of the plurality of targets in the image to be detected are determined, binarization is performed. Image binarization converts the gray-level image into a binary image represented by '0' and '1': pixels on the edges and in the interiors of the detected targets are assigned '1', and all other pixels are assigned '0', yielding the binary image matrix corresponding to the image to be detected.
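For the binarization itself, a sketch under the assumption that the detected edge contours are closed; `binary_fill_holes` is one standard routine for assigning '1' to the interior of closed contours, which the patent describes without naming a method:

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def binarize(edges):
    """Binary image matrix: '1' on target edges and in target interiors, '0' elsewhere."""
    return binary_fill_holes(edges).astype(np.uint8)
```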
And 103, determining the minimum circumscribed rectangle data corresponding to each target area according to the binary image matrix.
Through the binarization processing, the contour of the target area is obtained, and the target area is located to acquire coordinate point information of the target area. Therefore, in this step, the following processing may be performed:
according to the binary image matrix, labeling each pixel point in the image to be detected in sequence to obtain the label of each pixel point, so that the connected pixel points have the same label, and the unconnected pixel points have different labels; and calculating the outermost four tangent points of the edge of each target area according to the labels of the pixel points to obtain the minimum circumscribed rectangle data of each target area.
Optionally, when labeling each pixel point in the image to be detected in sequence according to the binary image matrix to obtain the label of each pixel point, the two previously scanned neighboring pixel points (i-1, j) and (i, j-1) of the target pixel point are judged in sequence, which specifically includes:
if the pixel values of the pixel point (i-1, j) and the pixel point (i, j-1) are both 0, a new label H(i, j) is assigned to the pixel point (i, j);
if the pixel point (i-1, j) has a label H(i-1, j) and the pixel point (i, j-1) has no label, the label of the pixel point (i, j) is set to H(i, j) = H(i-1, j);
if the pixel point (i, j-1) has a label H(i, j-1) and the pixel point (i-1, j) has no label, the label of the pixel point (i, j) is set to H(i, j) = H(i, j-1);
if the pixel point (i-1, j) has a label H(i-1, j), the pixel point (i, j-1) has a label H(i, j-1), and H(i-1, j) < H(i, j-1), the label of the pixel point (i, j) is set to H(i, j) = H(i-1, j), i.e. the smaller label is kept;
and the label of each pixel point is calculated according to the above method for determining the label of the pixel point (i, j). As illustrated in fig. 4, in the labeling result for 5 projectiles, each white area represents one projectile and the labels of the pixel points within each area are identical.
After the pixel points are labeled, the labels are arranged in order, and different target areas in the time-frequency data are assigned different label values according to their time order, which facilitates subsequent processing.
Label processing yields an array matrix of the same size as the original image to be detected in which each target area carries a distinct label value, so the geometric characteristic parameters of the target areas can be extracted. When extracting projectile traces, since the trace points are mostly strip-shaped, it suffices to determine the minimum circumscribed rectangle surrounding each target area; this rectangle is an intuitive reflection of how elongated the target area is. The minimum circumscribed rectangle can therefore be obtained by taking the outermost 4 tangent points of the edge of the target area as its boundary characteristic points, as sketched below.
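The following sketch implements the raster-scan labeling rules and the minimum circumscribed rectangle extraction described above; the union-find merge is an implementation detail standing in for the patent's keep-the-smaller-label rule:

```python
import numpy as np

def label_binary(B):
    """Two-pass raster-scan labeling of binary matrix B (0 background, 1 target).
    Connected pixel points receive the same label; label equivalences are
    merged with a small union-find."""
    rows, cols = B.shape
    H = np.zeros((rows, cols), dtype=int)
    parent = {}                                # union-find over provisional labels

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]      # path halving
            a = parent[a]
        return a

    next_label = 1
    for i in range(rows):
        for j in range(cols):
            if B[i, j] == 0:
                continue
            up = H[i - 1, j] if i > 0 else 0    # previously scanned neighbor (i-1, j)
            left = H[i, j - 1] if j > 0 else 0  # previously scanned neighbor (i, j-1)
            if up == 0 and left == 0:           # both neighbors background: new label
                H[i, j] = next_label
                parent[next_label] = next_label
                next_label += 1
            elif up and not left:               # only (i-1, j) labeled
                H[i, j] = up
            elif left and not up:               # only (i, j-1) labeled
                H[i, j] = left
            else:                               # both labeled: keep the smaller root
                a, b = find(up), find(left)
                parent[max(a, b)] = min(a, b)
                H[i, j] = min(a, b)
    for i in range(rows):                       # second pass: flatten equivalences
        for j in range(cols):
            if H[i, j]:
                H[i, j] = find(H[i, j])
    return H

def min_bounding_rects(H):
    """Minimum circumscribed rectangle per label, from the outermost tangent
    points (row/column extrema) of each labeled region."""
    rects = {}
    for lbl in np.unique(H[H > 0]):
        ys, xs = np.nonzero(H == lbl)
        rects[int(lbl)] = (int(ys.min()), int(ys.max()), int(xs.min()), int(xs.max()))
    return rects
```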
And 104, calculating the pixel point with the maximum amplitude in each minimum circumscribed rectangle data, and taking the pixel point with the maximum amplitude as the particle point corresponding to each target.
Optionally, when the pixel point with the largest amplitude value in each minimum circumscribed rectangle data is calculated in this step and the pixel point with the largest amplitude value is taken as the particle corresponding to each target, the method may include:
determining the pixel position range occupied by each target according to each minimum circumscribed rectangle data;
and sequencing the amplitudes of the pixel points in each pixel position range, and determining the pixel point with the maximum amplitude in each pixel position range as a particle corresponding to each target.
Optionally, the particles are the trajectory points required for initial-velocity extrapolation.
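A sketch of step 104, assuming A is the amplitude matrix of the time-frequency image and rects maps labels to (row_min, row_max, col_min, col_max) tuples as produced by the labeling sketch above:

```python
import numpy as np

def extract_particles(A, rects):
    """Maximum-amplitude pixel point inside each minimum circumscribed
    rectangle, taken as the particle of the corresponding target."""
    particles = {}
    for lbl, (r0, r1, c0, c1) in rects.items():
        patch = A[r0:r1 + 1, c0:c1 + 1]        # pixel position range of this target
        dr, dc = np.unravel_index(np.argmax(patch), patch.shape)
        particles[lbl] = (r0 + dr, c0 + dc, float(patch[dr, dc]))
    return particles
```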
Step 105, determining time, velocity, and amplitude data of the target according to each particle.
Time, speed and amplitude data of the particles can be obtained through coordinate conversion, and conversion of the digital signals to physical parameters is achieved.
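The conversion depends on axis scales the patent does not state. The sketch below assumes rows index time bins of width dt seconds, columns index Doppler bins of width df Hz, and the usual radar Doppler relation v = λ·f_d/2 for wavelength lam; all three are assumptions:

```python
def to_physical(particles, dt, df, lam):
    """Convert (time_bin, doppler_bin, amplitude) particles into physical
    time (s), velocity (m/s), and amplitude under the assumed axis scales."""
    return [{"time": t_bin * dt,                    # bin index -> seconds
             "velocity": lam * (f_bin * df) / 2.0,  # Doppler: v = lambda * f_d / 2
             "amplitude": amp}
            for (t_bin, f_bin, amp) in particles.values()]
```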
According to the dense multi-target parameter extraction method, the edges of the plurality of targets in the image to be detected are determined by calculating the maximum values of the gray gradient in the image; the edge-detected image is binarized to obtain the corresponding binary image matrix; the minimum circumscribed rectangle data of each target area are determined from the binary image matrix; the maximum-amplitude pixel point within each minimum circumscribed rectangle is taken as the particle of the corresponding target; and the time, velocity, and amplitude data of each target are determined from its particle. The time, velocity, and amplitude of each target can thus be effectively extracted, realizing the conversion from digital signals to physical parameters, reducing the amount of data computation, improving data precision, and laying a foundation for the subsequent data fitting.
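Composing the sketches above end to end on one range gate's preprocessed image (all numeric values hypothetical):

```python
# A: preprocessed 2-D time-frequency amplitude matrix for one range gate
edges = detect_edges(A, weight=4.0)        # steps 201-205: target edges
B = binarize(edges)                        # binary image matrix
H = label_binary(B)                        # connected-region labels
rects = min_bounding_rects(H)              # minimum circumscribed rectangles
particles = extract_particles(A, rects)    # one particle per target
params = to_physical(particles, dt=1e-4, df=25.0, lam=0.0087)  # assumed scales
```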
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Fig. 5 is a diagram illustrating an example of an apparatus for dense multi-target parameter extraction according to an embodiment of the present invention, corresponding to the method for dense multi-target parameter extraction described in the foregoing embodiment. As shown in fig. 5, the apparatus may include: a read module 501, a target edge extraction module 502, a target region determination module 503, a particle determination module 504, and a parameter conversion module 505.
The reading module 501 is configured to read a preprocessed image to be detected in any range gate, where the image to be detected comprises a plurality of densely arranged targets and is a two-dimensional time-frequency image;
a target edge extraction module 502, configured to calculate a maximum value of a gray gradient in the image to be detected, determine edges of the multiple targets in the image to be detected, perform binarization processing on the image to be detected, where the edges of the multiple targets are determined, and determine a binary image matrix corresponding to the image to be detected;
a target area determining module 503, configured to determine, according to the binary image matrix, minimum circumscribed rectangle data corresponding to each target area;
a particle determining module 504, configured to calculate a pixel point with a largest amplitude in each of the minimum circumscribed rectangle data, and use the pixel point with the largest amplitude as a particle corresponding to each target;
a parameter conversion module 505 for determining time, velocity and amplitude data of the target from each particle.
Optionally, the target edge extracting module 502 calculates a maximum value of a gray gradient in the image to be detected, and when determining the edges of the multiple targets in the image to be detected, is configured to:
respectively carrying out plane convolution operation on the image to be detected by adopting a preset transverse convolution factor and a preset longitudinal convolution factor to obtain a gradient value detected by a transverse edge and a gradient value detected by a longitudinal edge;
calculating the sum of the square of the gradient value detected by the transverse edge and the square of the gradient value detected by the longitudinal edge to obtain the gradient-amplitude square sum of the current pixel point;
when the sum of squares of the gradient amplitudes is larger than or equal to the square of a preset threshold value, determining the current pixel point as an edge point;
and, according to the method for determining that the current pixel point is an edge point, processing every pixel point in the image to be detected to determine the edges of the plurality of targets.
Optionally, the target edge extracting module 502 is further configured to:
and when the sum of the squares of the gradient amplitudes is not greater than the square of a preset threshold value, determining that the current pixel point is a non-edge point.
Optionally, the target edge extracting module 502 is further configured to:
calculating the average pixel value of all pixel points in the image to be detected;
and multiplying the average pixel value by a preset weighting factor to obtain the preset threshold value.
Optionally, when determining the minimum circumscribed rectangle data corresponding to each target area according to the binary image matrix, the target area determining module 503 may be configured to:
according to the binary image matrix, labeling each pixel point in the image to be detected in sequence to obtain the label of each pixel point;
and calculating the four outermost tangent points of the edge of each target area according to the labels of the pixel points to obtain the minimum circumscribed rectangle data of each target area.
Optionally, the target area determining module 503 sequentially performs label processing on each pixel point in the image to be detected according to the binary image matrix, and when obtaining a label of each pixel point, may be configured to:
if the pixel values of the pixel point (i-1, j) and the pixel point (i, j-1) are both 0, a new label H(i, j) is assigned to the pixel point (i, j);
if the pixel point (i-1, j) has a label H(i-1, j) and the pixel point (i, j-1) has no label, the label of the pixel point (i, j) is set to H(i, j) = H(i-1, j);
if the pixel point (i, j-1) has a label H(i, j-1) and the pixel point (i-1, j) has no label, the label of the pixel point (i, j) is set to H(i, j) = H(i, j-1);
if the pixel point (i-1, j) has a label H(i-1, j), the pixel point (i, j-1) has a label H(i, j-1), and H(i-1, j) < H(i, j-1), the label of the pixel point (i, j) is set to H(i, j) = H(i-1, j);
and the label of each pixel point is calculated according to the above method for determining the label of the pixel point (i, j).
Optionally, the particle determining module 504 may calculate a pixel point with the largest amplitude in each minimum circumscribed rectangle data, and when the pixel point with the largest amplitude is used as a particle corresponding to each target, may be configured to:
determining the pixel position range occupied by each target according to each minimum circumscribed rectangle data;
and sequencing the amplitudes of the pixel points in each pixel position range, and determining the pixel point with the maximum amplitude in each pixel position range as a particle corresponding to each target.
In the device for dense multi-target parameter extraction, the target edge extraction module calculates the maximum values of the gray gradient in the image to be detected, determines the edges of the plurality of targets, binarizes the edge-detected image, and determines the corresponding binary image matrix; the target area determining module determines the minimum circumscribed rectangle data of each target area according to the binary image matrix; the particle determining module calculates the maximum-amplitude pixel point in each piece of minimum circumscribed rectangle data and takes it as the particle of the corresponding target; and the parameter conversion module determines the time, velocity, and amplitude data of the target according to each particle. The time, velocity, and amplitude of each target can thus be effectively extracted, realizing the conversion from digital signals to physical parameters, reducing the amount of data computation, improving data precision, and laying a foundation for the subsequent data fitting.
Fig. 6 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 6, the terminal device 600 of this embodiment includes: a processor 601, a memory 602, and a computer program 603, such as a dense multi-target parameter extraction program, stored in the memory 602 and executable on the processor 601. When executing the computer program 603, the processor 601 implements the steps in the above embodiments of the dense multi-target parameter extraction method, such as steps 101 to 105 shown in fig. 1 or steps 201 to 205 shown in fig. 2, as well as the functions of the modules in the above apparatus embodiments, such as the functions of modules 501 to 505 shown in fig. 5.
Illustratively, the computer program 603 may be partitioned into one or more program modules, which are stored in the memory 602 and executed by the processor 601 to implement the present invention. The one or more program modules may be a series of computer program instruction segments capable of performing specific functions, used to describe the execution process of the computer program 603 in the apparatus for dense multi-target parameter extraction or in the terminal device 600. For example, the computer program 603 may be divided into a reading module 501, a target edge extraction module 502, a target region determination module 503, a particle determination module 504, and a parameter conversion module 505; the specific functions of the modules are as shown in fig. 5 and are not described here again.
The terminal device 600 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The terminal device may include, but is not limited to, a processor 601, a memory 602. Those skilled in the art will appreciate that fig. 6 is merely an example of a terminal device 600 and does not constitute a limitation of terminal device 600 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the terminal device may also include input-output devices, network access devices, buses, etc.
The Processor 601 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 602 may be an internal storage unit of the terminal device 600, such as a hard disk or a memory of the terminal device 600. The memory 602 may also be an external storage device of the terminal device 600, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 600. Further, the memory 602 may also include both an internal storage unit and an external storage device of the terminal device 600. The memory 602 is used for storing the computer programs and other programs and data required by the terminal device 600. The memory 602 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one type of logical function division, and other division manners may be available in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments described above may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein.

Claims (10)

1. A method for dense multi-target parameter extraction is characterized by comprising the following steps:
reading a preprocessed image to be detected in any range gate, wherein a plurality of targets exist in the range gate, the image to be detected comprises a plurality of densely arranged targets, and the image to be detected is a two-dimensional time-frequency image;
calculating the maximum value of the gray gradient in the image to be detected, determining the edges of the multiple targets in the image to be detected, performing binarization processing on the image to be detected with the edges of the multiple targets determined, and determining a binary image matrix corresponding to the image to be detected;
determining minimum circumscribed rectangle data corresponding to each target area according to the binary image matrix;
calculating a pixel point with the maximum amplitude in each piece of minimum circumscribed rectangle data, and taking the pixel point with the maximum amplitude as the particle corresponding to each target, the particle being the trajectory point required for initial-velocity extrapolation of the target;
determining time, velocity, and amplitude data of the target according to each particle.
2. The method for dense multi-target parameter extraction according to claim 1, wherein the calculating a maximum value of a gray gradient in the image to be detected and determining edges of the plurality of targets in the image to be detected includes:
respectively carrying out plane convolution operation on the image to be detected by adopting a preset transverse convolution factor and a preset longitudinal convolution factor to obtain a gradient value detected by a transverse edge and a gradient value detected by a longitudinal edge;
calculating the sum of the square of the gradient value detected by the transverse edge and the square of the gradient value detected by the longitudinal edge to obtain the square sum of the gradient amplitude of the current pixel point;
when the sum of squares of the gradient amplitudes is larger than or equal to the square of a preset threshold value, determining the current pixel point as an edge point;
and, according to the method for determining that the current pixel point is an edge point, processing every pixel point in the image to be detected to determine the edges of the plurality of targets.
3. The method for dense multi-target parameter extraction as recited in claim 2, further comprising:
and when the sum of the squares of the gradient amplitudes is not greater than the square of a preset threshold value, determining that the current pixel point is a non-edge point.
4. The method for dense multi-target parameter extraction as claimed in claim 2, wherein the method for calculating the preset threshold value comprises:
calculating the average pixel value of all pixel points in the image to be detected;
and multiplying the average pixel value by a preset weighting factor to obtain the preset threshold value.
5. The method for dense multi-target parameter extraction according to any one of claims 1 to 4, wherein the determining the minimum bounding rectangle data corresponding to each target region according to the binary image matrix comprises:
according to the binary image matrix, sequentially carrying out label processing on each pixel point in the image to be detected to obtain the label of each pixel point;
and calculating the four outermost tangent points of the edge of each target area according to the labels of the pixel points to obtain the minimum circumscribed rectangle data of each target area.
6. The method for dense multi-target parameter extraction according to claim 5, wherein said sequentially labeling each pixel in the image to be detected according to the binary image matrix to obtain the label of each pixel comprises:
if the pixel values of the pixel point (i-1, j) and the pixel point (i, j-1) are both 0, a new label H(i, j) is assigned to the pixel point (i, j);
if the pixel point (i-1, j) has a label H(i-1, j) and the pixel point (i, j-1) has no label, the label of the pixel point (i, j) is set to H(i, j) = H(i-1, j);
if the pixel point (i, j-1) has a label H(i, j-1) and the pixel point (i-1, j) has no label, the label of the pixel point (i, j) is set to H(i, j) = H(i, j-1);
if the pixel point (i-1, j) has a label H(i-1, j), the pixel point (i, j-1) has a label H(i, j-1), and H(i-1, j) < H(i, j-1), the label of the pixel point (i, j) is set to H(i, j) = H(i-1, j);
and the label of each pixel point is calculated according to the method for determining the label of the pixel point (i, j).
7. The method for dense multi-target parameter extraction according to claim 5, wherein the step of calculating the pixel point with the maximum amplitude in each of the minimum bounding rectangle data, and using the pixel point with the maximum amplitude as the particle corresponding to each target comprises:
determining the pixel position range occupied by each target according to each minimum circumscribed rectangle data;
and sequencing the amplitudes of the pixel points in each pixel position range, and determining the pixel point with the maximum amplitude in each pixel position range as a particle corresponding to each target.
8. An apparatus for dense multi-target parameter extraction, comprising:
a reading module, used for reading a preprocessed image to be detected in any range gate, wherein a plurality of targets exist in the range gate, the image to be detected comprises a plurality of densely arranged targets, and the image to be detected is a two-dimensional time-frequency image;
the target edge extraction module is used for calculating the maximum value of the gray gradient in the image to be detected, determining the edges of the targets in the image to be detected, carrying out binarization processing on the image to be detected with the edges of the targets determined, and determining a binary image matrix corresponding to the image to be detected;
the target area determining module is used for determining the minimum circumscribed rectangle data corresponding to each target area according to the binary image matrix;
the particle determining module is used for calculating a pixel point with the maximum amplitude in each piece of minimum circumscribed rectangle data and taking the pixel point with the maximum amplitude as the particle corresponding to each target, the particle being the trajectory point required for initial-velocity extrapolation of the target;
and the parameter conversion module is used for determining time, speed and amplitude data of the target according to each particle.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202011422326.5A 2020-12-08 2020-12-08 Dense multi-target parameter extraction method and device and terminal equipment Active CN112561940B (en)

Priority Applications (1)

CN202011422326.5A (priority and filing date 2020-12-08): Dense multi-target parameter extraction method and device and terminal equipment, granted as CN112561940B

Applications Claiming Priority (1)

CN202011422326.5A (priority and filing date 2020-12-08): Dense multi-target parameter extraction method and device and terminal equipment, granted as CN112561940B

Publications (2)

CN112561940A: published 2021-03-26
CN112561940B: granted 2022-07-12

Family

ID=75059674

Family Applications (1)

CN202011422326.5A (Active; priority and filing date 2020-12-08): Dense multi-target parameter extraction method and device and terminal equipment

Country Status (1)

CN: CN112561940B

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113870293B (en) * 2021-09-27 2022-10-14 东莞拓斯达技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN114707560B (en) * 2022-05-19 2024-02-09 北京闪马智建科技有限公司 Data signal processing method and device, storage medium and electronic device
CN114994671B (en) * 2022-05-31 2023-11-28 南京慧尔视智能科技有限公司 Target detection method, device, equipment and medium based on radar image
CN116740115B (en) * 2023-08-14 2023-11-17 国网电商科技有限公司 Image edge detection method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104021553B (en) * 2014-05-30 2016-12-07 哈尔滨工程大学 A kind of sonar image object detection method based on pixel layering
CN109102518A (en) * 2018-08-10 2018-12-28 广东工业大学 A kind of method of Image Edge-Detection, system and associated component
CN110873877B (en) * 2019-04-25 2021-04-23 北京航空航天大学 Method and device for determining target motion track
CN111445699B (en) * 2020-04-13 2021-10-26 黑龙江工程学院 Intersection traffic conflict discrimination method based on real-time vehicle track

Also Published As

CN112561940A: published 2021-03-26

Similar Documents

Publication Publication Date Title
CN112561940B (en) Dense multi-target parameter extraction method and device and terminal equipment
Zhai et al. Inshore ship detection via saliency and context information in high-resolution SAR images
CN109299720B (en) Target identification method based on contour segment spatial relationship
CN111340787B (en) Method and device for detecting and identifying wire defects of power transmission line and computer equipment
CN110031843B (en) ROI (region of interest) -based SAR (synthetic Aperture Radar) image target positioning method, system and device
CN113009442B (en) Method and device for identifying multipath target of radar static reflecting surface
CN111444964A (en) Multi-target rapid image matching method based on self-adaptive ROI (region of interest) division
CN110889843A (en) SAR image ship target detection method based on maximum stable extremal region
CN116128849A (en) Method, device, equipment and storage medium for detecting underwater cracks of concrete dam
CN112630742B (en) Method, device and equipment for processing high sidelobe Doppler stripe and storage medium
CN111445510A (en) Method for detecting straight line in image
CN109801428B (en) Method and device for detecting edge straight line of paper money and terminal
CN114742849B (en) Leveling instrument distance measuring method based on image enhancement
CN112233042B (en) Method for rapidly generating large-scene SAR image containing non-cooperative target
CN110728311A (en) Image processing method, device and storage medium
CN111401377B (en) Meter data reading method and device, electronic equipment and storage medium
Wang et al. Improved Morphological Band‐Pass Filtering Algorithm and Its Application in Circle Detection
CN106845489B (en) SAR image target feature extraction method based on improved Krawtchouk moment
CN110766005B (en) Target feature extraction method and device and terminal equipment
CN110335219B (en) Correction method and correction device for pixel distortion and terminal
CN113298759A (en) Water area detection method and device, electronic equipment and storage medium
CN113408538A (en) SVM-based radar RD image weak target detection method and system, storage medium and electronic terminal
CN113391276B (en) Radar blocking detection method and device and terminal equipment
CN116416251B (en) Method and related device for detecting quality of whole-core flame-retardant conveying belt based on image processing
CN112835001B (en) Sea surface target radar trace condensation method, electronic equipment and storage medium

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant