CN115685170B - Active sonar target echo detection method based on reinforcement learning - Google Patents

Active sonar target echo detection method based on reinforcement learning

Info

Publication number
CN115685170B
CN115685170B (application CN202310004992.4A)
Authority
CN
China
Prior art keywords
value
signal
active sonar
echo
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310004992.4A
Other languages
Chinese (zh)
Other versions
CN115685170A (en)
Inventor
张艳
王振
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Guoshu Information Technology Co ltd
Original Assignee
Qingdao Guoshu Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Guoshu Information Technology Co ltd filed Critical Qingdao Guoshu Information Technology Co ltd
Priority to CN202310004992.4A priority Critical patent/CN115685170B/en
Publication of CN115685170A publication Critical patent/CN115685170A/en
Application granted granted Critical
Publication of CN115685170B publication Critical patent/CN115685170B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 - Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30 - Assessment of water resources

Abstract

The invention belongs to the technical field of underwater acoustic engineering and specifically discloses an active sonar target echo detection method based on reinforcement learning. The method comprises the following steps: acquiring historical state data of active sonar signal echoes; quantizing and standardizing the historical state data and performing gray-value processing, so that the features of the sonar signal are embedded in the generated gray-scale maps; treating each gray-scale map as one set of historical state data and creating a label for each set; establishing a reward system in which the reward value is calculated from the degree of correlation between the signal powers; inputting the historical state data into a reinforcement learning model for training; and using the trained reinforcement learning model to judge the state of input active sonar signal echo data and find the corresponding target echo position. By processing the received active sonar signal with reinforcement learning and detecting whether a target echo exists, the invention improves detection accuracy and robustness.

Description

Active sonar target echo detection method based on reinforcement learning
Technical Field
The invention belongs to the technical field of underwater acoustic engineering and relates to an active sonar target echo detection method based on reinforcement learning.
Background
Active sonar is an underwater device that transmits sound waves and acquires information by processing the echoes returned from objects. It is now widely used in ocean mapping, marine fishery, the military, and other fields.
Because the marine environment is complex and variable, marine acoustic channels are both time-varying and space-varying. By the time the echo reflected from an object reaches the receiver through such a channel, it is often severely distorted, which makes detecting and processing the echo signal difficult.
To achieve high resolution of objects, active sonar typically emits high-frequency sound waves.
However, high-frequency sound waves attenuate far more severely than lower-frequency waves as they propagate through the sea, so the signal received by the active sonar often has a very small amplitude and is easily masked by ocean noise, reverberation, and the like.
In addition, receiving high-frequency acoustic signals requires a higher sampling rate, which correspondingly increases the difficulty of real-time processing.
The advent of intelligent algorithms has made the detection and processing of echo signals feasible. However, because the characteristics of the echo signal change with the environment, its features may become blurred.
Although fuzzy logic can accommodate arbitrarily complex environmental changes during subsequent processing, its reasoning becomes very complex and difficult to debug as the number of input and output variables grows.
Disclosure of Invention
The invention aims to provide an active sonar target echo detection method based on reinforcement learning, which processes the received active sonar signal and detects whether a target echo exists, thereby improving detection accuracy and robustness.
In order to achieve the above purpose, the invention adopts the following technical scheme:
the active sonar target echo detection method based on reinforcement learning comprises the following steps:
step 1, acquiring historical state data of active sonar signal echoes;
firstly, the historical state data are quantized and standardized; the preprocessed signals are then segmented and converted to gray values, so that the features of the sonar signal are embedded in the generated gray-scale images;
each generated gray-scale image constitutes one set of historical state data; a label is created for each set of historical state data to mark the position of its echo, and each labeled set serves as one set of training data;
step 2, establishing a reward system, and calculating the reward value from the degree of correlation between the signal powers;
step 3, inputting each set of historical state data into the reinforcement learning model to make a preliminary decision;
during training, each set of historical state data and the corresponding preliminary decision data are input into a pre-established convolutional network model, which extracts the features of the active sonar signal and quantifies them to obtain the state change and the reward value;
the state change is the difference between the current state data and the next state data; the data table structure is updated according to each set of historical state data with its corresponding preliminary decision data, state change, and reward value;
the data table structure collects the state of each step while the reinforcement learning model is learning, and once the model is trained it determines the action to execute through a table-lookup operation;
the estimated value network is updated by minimizing the difference between the value of a preset target and the output value; an objective function is constructed from the difference between the current value and the previous value, and the weights of the estimated value network are updated by gradient descent;
here, the preset target refers to the minimum distance to the target echo, and the output refers to the actual distance to the target;
and step 4, using the trained convolutional network model and the data table structure to judge the state of the input active sonar echo data until the target echo position corresponding to the echo signal data is found.
The invention has the following advantages:
as described above, the present invention describes an active sonar target echo detection method based on reinforcement learning. According to the detection method, the active sonar received signal is processed by using the reinforcement learning method, and whether the target echo exists or not is detected, so that the detection accuracy and the detection robustness are improved. The method of the invention uses a convolution feature extraction mode, so that the target echo detection is more accurate, and the capacity of the target echo detection is more generalized by adding and holding a large number of data sets, and the echo signals in different states can be detected. The introduction of reinforcement learning is equivalent to adding a brain to the judgment of the signal echo, wherein the action utility of Q is used for evaluating the advantages and disadvantages of taking a certain action under a specific state, only the next step of information is utilized by utilizing the Markov property, the system is searched according to the strategy guidance, and the state value is updated in each step of exploration, which is equivalent to extracting accurate characteristics, judging whether the signal echo is the target echo through the machine brain, and giving a next step of state prompt, thereby improving the judgment efficiency of the target echo.
Drawings
Fig. 1 is a flow chart of an active sonar target echo detection method based on reinforcement learning in an embodiment of the invention.
Fig. 2 is a flow chart of signal processing in an embodiment of the invention.
FIG. 3 is a diagram of a network architecture of a reinforcement learning model in an embodiment of the invention.
Fig. 4 is a flowchart of determining the existence of an active sonar target echo based on a reinforcement learning model in an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings and specific embodiments:
as shown in fig. 1, the active sonar target echo detection method based on reinforcement learning comprises the following steps:
step 1, acquiring historical state data of active sonar signal echoes.
First, the historical state data are quantized and standardized; the preprocessed signals are then segmented and converted to gray values, and the features of the sonar signal are embedded in the generated gray-scale images.
The segmentation refers to ordered segmentation of the active sonar signal: each new segment contains 1/3 of the previous segment, so that feature learning has continuity across segment boundaries.
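A minimal sketch of this overlapping segmentation, assuming the raw echo is a one-dimensional NumPy array; the window length seg_len is not specified in the patent and is chosen arbitrarily here:

```python
import numpy as np

def segment_signal(signal: np.ndarray, seg_len: int = 1024) -> list:
    """Ordered segmentation of an active sonar echo in which each new
    segment reuses the last 1/3 of the previous one, so that feature
    learning stays continuous across segment boundaries."""
    step = seg_len - seg_len // 3            # advance by 2/3 of a window
    return [signal[s:s + seg_len] for s in range(0, len(signal) - seg_len + 1, step)]

# Example: a synthetic 8192-sample echo yields overlapping 1024-sample windows.
echo = np.random.randn(8192)
segments = segment_signal(echo)
print(len(segments), segments[0].shape)
```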
In each generated gray-scale map, the black-to-white variation across four consecutive pixels corresponds to one returned sine wave.
Each generated gray-scale map is one set of historical state data.
As shown in fig. 2, the process of converting a signal into a gray signal map using the mean and variance of the signal is as follows:
step 1.1 calculating the variance G of the active sonar signal by the following formula v
G v =∑ w i=1b j=1 (G(i,j)-g) 2 /(w·b)。
Wherein w and b respectively represent the number of rows and columns in the active sonar signal amplitude diagram, wherein i epsilon [1, w ], j epsilon [1, b ]; g (i, j) represents the amplitude of the signal at the position (i, j) in the graph, and G represents the data average of the active sonar signal amplitude graph.
Step 1.2: adjust the amplitude map of the active sonar signal by the following formula:
G(i,j) = [G(i,j) - min[G]] / max[G].
Wherein min[G] represents the minimum of the active sonar signal gray values G, and max[G] represents the maximum of G.
Step 1.3: normalize the adjusted active sonar signal map using the gray-level mean and variance.
When G(i,j) ≥ g, the adjusted gray value G_n(i,j) is given by the first normalization formula (shown in the original as Figure GDA0004119248950000031).
When G(i,j) < g, G_n(i,j) is given by the second normalization formula (Figure GDA0004119248950000032).
Wherein G_n(i,j) represents the adjusted gray value, which is used to make the features more pronounced where they would otherwise be obscured.
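The gray-value processing of steps 1.1 to 1.3 can be sketched as follows. This is only an illustration: the two branch formulas of step 1.3 are given in the original only as figure references, so a symmetric mean/standard-deviation scaling around the gray mean is assumed here.

```python
import numpy as np

def to_gray_map(amplitude: np.ndarray) -> np.ndarray:
    """Convert a 2-D active sonar amplitude map G into a normalized gray-scale image."""
    G = amplitude.astype(float)
    g = G.mean()                              # data average of the amplitude map
    G_v = np.sum((G - g) ** 2) / G.size       # step 1.1: variance G_v

    # Step 1.2: adjust the amplitude map, G(i,j) = [G(i,j) - min[G]] / max[G].
    G = (G - G.min()) / (G.max() + 1e-12)

    # Step 1.3: normalize with the gray mean and variance; the exact branch
    # formulas are not reproduced in the text, so this scaling is an assumption.
    g_adj = G.mean()
    sigma = np.sqrt(G_v) + 1e-12
    G_n = np.where(G >= g_adj,
                   0.5 + 0.5 * (G - g_adj) / sigma,
                   0.5 - 0.5 * (g_adj - G) / sigma)
    return np.clip(G_n, 0.0, 1.0)

gray = to_gray_map(np.abs(np.random.randn(64, 64)))
print(gray.shape, float(gray.min()), float(gray.max()))
```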
A label is created for each set of historical state data to mark the position of its echo, and each labeled set of historical state data is used as one set of training data for training the reinforcement learning model.
Step 2, establish a reward system, and calculate the reward value from the degree of correlation between the signal powers.
This embodiment gives a specific reward calculation for CW (continuous wave) echoes, as follows:
P_K = β / (d · I^T Q_K^-1 I).
Wherein P_K is the reward value that reinforces the current behavior: the higher the degree of correlation between the active sonar echo signal and the transmitted signal, the higher the reward value P_K; the lower the correlation, the smaller the reward value.
β is a reverberation suppression factor, and I is the identity matrix; Q_K is the covariance matrix of the echo signal segments, obtained by multiplying the echo signal of the desired signal by the noise matrix.
d represents the sequence length after signal segmentation, d = N/K, where N is the rank of the signal covariance matrix estimate; K denotes the initial filter order, i.e. the number of signal segments, and K is a one-dimensional matrix.
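A hedged sketch of the reward computation. The estimation of Q_K from the raw echo is only loosely described above, so Q_K, d, and β are taken as inputs here; because I is the identity matrix, the quadratic form I^T Q_K^-1 I is reduced to a scalar with a trace, which is an assumption of this sketch.

```python
import numpy as np

def reward_cw(Q_K: np.ndarray, d: int, beta: float = 1.0) -> float:
    """Reward P_K = beta / (d * I^T Q_K^-1 I) for a CW echo segment."""
    Q_inv = np.linalg.pinv(Q_K)               # pseudo-inverse for numerical stability
    return float(beta / (d * np.trace(Q_inv) + 1e-12))

# Toy usage: a K x K covariance matrix estimated from K sub-segments of length d.
K, d = 8, 512
sub_segments = np.random.randn(K, d)
print(reward_cw(np.cov(sub_segments), d))
```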
According to the invention, a correlation criterion is established from the original characteristics of the transmitted signal: when the detected signal matches the power of the transmitted signal, the highest reward is given, and when the correlation with the power spectrum of the original signal is small, the lowest reward, namely a reward of 0, is given.
A low correlation here means that the correlation with the power spectrum of the original signal is smaller than a preset correlation threshold.
To tell the model where the target is during training, a reward value must be provided; it is calculated as shown in the formula above, and a reward is granted each time the model moves closer to the target.
This guides the convolutional network model to find the target and to learn the features near the target in detail.
Step 3, input each set of historical state data into the reinforcement learning model to make a preliminary decision (that is, the network's judgment is trained on the training set, and the generated data table is continuously overwritten and updated).
In this embodiment a reinforcement learning model with the structure shown in fig. 3 is used. The state S of each step and the reward r of the next step are passed into G, and the action a of the next step is determined through signal convolution processing. To improve the learning ability, learning starts from the front and the back of the signal at the same time, and learning is judged to be finished when the forward and backward searches both find the target.
Thanks to this structure, a good learning effect can be obtained with a small number of data sets.
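A minimal sketch of the bidirectional search described above; decide_action stands in for the convolutional policy (a hypothetical callback, not part of the patent) and returns 'left', 'right' or 'stop' for a segment index:

```python
def bidirectional_episode(segments, decide_action):
    """Scan the segment list from both ends at once; the episode ends when
    both scans have stopped on a target (or when the scans meet)."""
    front, back = 0, len(segments) - 1
    front_done = back_done = False
    while not (front_done and back_done) and front <= back:
        if not front_done:
            front_done = decide_action(front) == "stop"
            front += 0 if front_done else 1
        if not back_done:
            back_done = decide_action(back) == "stop"
            back -= 0 if back_done else 1
    return front, back

# Toy policy: pretend the echo sits in segment 5 of 12.
print(bidirectional_episode(list(range(12)),
                            lambda i: "stop" if i == 5 else ("right" if i < 5 else "left")))
```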
During training, each set of historical state data and the corresponding preliminary decision data are input into the pre-established convolutional network model; the model extracts the features of the active sonar signal and quantifies them to obtain the state change and the reward value.
The gray-value image data generated by the standardization are input into the convolutional network model to extract their main features, and the next motion state, namely left, stop, or right, is determined from these features.
When the next motion state is left, the echo signal is judged to be on the left side, and the search for the echo continues to the left.
When the next motion state is right, the echo signal is judged to be on the right side, and the search for the echo continues to the right.
When the next motion state is stop, the current position is where the echo is located, and the search stops there.
The state change is the difference between the current state data and the next state data; the data table structure is updated according to each set of historical state data with its corresponding preliminary decision data, state change, and reward value.
The data table structure collects the state of each step while the reinforcement learning model is learning; once the model is trained, the action to execute is determined through a table-lookup operation.
The data table structure is represented as Qtable = (s, [a, r], s*). Wherein s represents the current state, s* represents the next state, a represents the action number, and r represents the reward value that the next action will obtain.
After the state s and the reward value r of the next step are obtained, the value of the current state is evaluated by the following formula:
Q = r + γ max_s Q(s, a; θ). Wherein θ represents the input parameter value, γ represents the attenuation factor, and max_s Q(s, a; θ) represents the largest future reward given the new state and action.
Q is the evaluation of the current operation; it is continuously updated in the following steps until it reaches the optimal value.
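A sketch of the data table structure and the value evaluation above, using a dictionary keyed by state; the action names follow the left/stop/right states of the description, while the discount value is an assumption:

```python
from collections import defaultdict

GAMMA = 0.9                      # attenuation factor, value assumed here

# Each entry mirrors Qtable = (s, [a, r], s*); for table lookup it is keyed
# by state, with one value per possible action.
q_table = defaultdict(lambda: {"left": 0.0, "stop": 0.0, "right": 0.0})

def evaluate_state(reward: float, next_state) -> float:
    """Q = r + gamma * max Q(s', a): value of the current step once the
    next state and its reward are known."""
    return reward + GAMMA * max(q_table[next_state].values())

def act(state) -> str:
    """Table-lookup action selection used once the model is trained."""
    return max(q_table[state], key=q_table[state].get)

# Toy usage: record a value for state 6, then evaluate a transition into it.
q_table[6]["left"] = 0.4
print(evaluate_state(reward=0.2, next_state=6), act(6))
```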
The estimated value network is updated by minimizing the difference between the value of the preset target and the output value; an objective function is constructed from the difference between the current value and the previous value, and the weights of the estimated value network are updated by gradient descent.
Here, the preset target refers to the minimum distance to the target echo, and the output refers to the actual distance to the target.
The update process of the estimation network is as follows:
Q*(s, a) = Q(s, a) + α(r + max_s Q(s', a') - Q(s, a)).
L(θ) = E(TargetQ_θ - Q*(s, a)).
Wherein Q*(s, a) represents the updated Q value after the action is completed; Q(s, a) represents the Q value of the current action; α represents the learning efficiency; s' and a' represent the next state and the next action, respectively.
max_s Q(s', a') represents the maximum reward available in that case, and TargetQ_θ represents the output of the convolutional network model.
L(θ) represents the loss function under parameter θ; the loss function updates the network weights by back-propagating its gradient, and the optimal result is obtained when L(θ) converges, at which point the action network is the optimal control strategy.
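A hedged sketch of this estimation-network update written with PyTorch; the network layout, learning rate, and α value are assumptions, since the patent does not disclose them. The printed update formula omits the discount factor, so none is applied to the max term here, and the squared error is used so the loss can be minimized by gradient descent.

```python
import torch
import torch.nn as nn

ALPHA = 0.1                                   # learning efficiency, value assumed

# A small CNN standing in for the estimation (value) network.
value_net = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 3),                          # one value per action: left / stop / right
)
optimizer = torch.optim.SGD(value_net.parameters(), lr=1e-3)

def train_step(gray_map, action, reward, next_gray_map):
    """One update: Q*(s,a) = Q(s,a) + alpha * (r + max Q(s',a') - Q(s,a)),
    followed by gradient descent on the squared difference between the
    network output TargetQ and Q*(s,a)."""
    q_sa = value_net(gray_map)[0, action]
    with torch.no_grad():
        q_star = q_sa + ALPHA * (reward + value_net(next_gray_map).max() - q_sa)
    loss = (q_sa - q_star) ** 2               # squared form of L(theta) = E(TargetQ_theta - Q*(s,a))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with two random 64x64 gray-scale maps.
s, s_next = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
print(train_step(s, action=1, reward=0.5, next_gray_map=s_next))
```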
The saved convolutional network model is the trained convolutional network model (the sonar signal feature extraction network model).
Through the update formula of the estimation network, error judgment against the original signal is carried out continuously until the target signal is found, so that the loss between the target signal and the original network structure is minimized; training in this way lets the model quickly find the position of the target echo signal.
The action network and the estimation network balance dynamic performance and robustness; the estimation network can accurately estimate the running state of the current echo detection system, which has guiding significance for the control process and can be updated online.
Step 4, use the trained convolutional network model and the data table structure Qtable to judge the state of the input active sonar signal echo data until the target echo position corresponding to the echo signal data is found.
As shown in fig. 4, judging the existence of an active sonar target echo based on the reinforcement learning model comprises the following steps:
and 4.1, carrying out quantization and standardized preprocessing on the input active sonar signals.
The input original signal has the problems of jump points, singular values and the like, the problems increase difficulty for the subsequent feature discovery, and the initial step of preprocessing the input original signal can enable the feature judgment and calculation to be more accurate.
Step 4.2. Orderly dividing the active sonar signal, wherein in the dividing of the active sonar signal, the signal of the next section contains 1/3 part of the previous section, so as to enable feature learning to have continuity.
Step 4.3, obtaining a trained convolutional network model and a Q table, namely a storage state data table, according to the step 3; the Q table stores the feature extraction and learning of the signals for the next action judgment in reinforcement learning, and the convolution network model stores the feature extraction and learning of the signals.
And 4.4, identifying the active sonar detection signal through a convolution network model to obtain signal characteristics so as to infer and judge the next step by using a Q table.
And 4.5, quickly deducing whether the next step is ending judgment or continuing judgment by inquiring the Q table structure.
And 4.6, when the next step is finished by inquiring the Q table structure, the data of the section is a data section containing echo signals, and meanwhile, the distance position of the sonar from the target is deduced according to the position of the echo.
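A minimal sketch of this detection loop; segment_signal, to_gray_map, and policy stand in for the trained components sketched earlier, and the 32-row reshape of a segment is an assumption made purely for illustration:

```python
import numpy as np

def detect_echo(signal, segment_signal, to_gray_map, policy, start=0, max_steps=100):
    """Steps 4.1-4.6: preprocess and segment the received signal, then
    repeatedly classify the current segment and move left, right, or stop
    according to the table-lookup policy."""
    # Step 4.1: simple standardization of the raw signal (an assumption).
    signal = (signal - signal.mean()) / (signal.std() + 1e-12)
    # Step 4.2: ordered segmentation with 1/3 overlap.
    segments = segment_signal(signal)

    idx = start
    for _ in range(max_steps):
        gray = to_gray_map(segments[idx].reshape(32, -1))   # step 4.4: extract features
        action = policy(gray)                               # step 4.5: Q-table lookup
        if action == "stop":                                # step 4.6: echo segment found
            return idx
        idx = max(idx - 1, 0) if action == "left" else min(idx + 1, len(segments) - 1)
    return None                                             # no echo found within budget

# Toy usage with stand-in components.
toy_signal = np.random.randn(8192)
toy_segment = lambda x: [x[i:i + 1024] for i in range(0, len(x) - 1023, 683)]
toy_policy = lambda gray: "stop" if gray.mean() > 0.5 else "right"
print(detect_echo(toy_signal, toy_segment, lambda seg: np.abs(seg), toy_policy))
```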
The foregoing description is, of course, merely illustrative of preferred embodiments of the present invention, and it should be understood that the present invention is not limited to the above-described embodiments, but is intended to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.

Claims (7)

1. The active sonar target echo detection method based on reinforcement learning is characterized by comprising the following steps of:
step 1, acquiring historical state data of active sonar signal echoes;
firstly, the historical state data are quantized and standardized; the preprocessed signals are then segmented and converted to gray values, so that the features of the sonar signal are embedded in the generated gray-scale images;
each generated gray-scale image constitutes one set of historical state data; a label is created for each set of historical state data to mark the position of its echo; each labeled set of historical state data serves as one set of training data for the reinforcement learning model;
step 2, establishing a reward system, and calculating the reward value from the degree of correlation between the signal powers;
step 3, inputting each set of historical state data into the reinforcement learning model to make a preliminary decision;
during training, each set of historical state data and the corresponding preliminary decision data are input into a pre-established convolutional network model, which extracts the features of the active sonar signal and quantifies them to obtain the state change and the reward value;
wherein the state change is the difference between the current state data and the next state data; the data table structure is updated according to each set of historical state data with its corresponding preliminary decision data, state change, and reward value;
the data table structure collects the state of each step while the reinforcement learning model is learning, and once the model is trained it determines the action to execute through a table-lookup operation;
the estimated value network is updated by minimizing the difference between the value of a preset target and the output value; an objective function is constructed from the difference between the current value and the previous value, and the weights of the estimated value network are updated by gradient descent;
here, the preset target refers to the minimum distance to the target echo, and the output refers to the actual distance to the target;
and step 4, using the trained convolutional network model and the data table structure to judge the state of the input active sonar signal echo data until the target echo position corresponding to the active sonar signal echo data is found.
2. The reinforcement learning-based active sonar target echo detection method of claim 1, wherein,
in the step 1, the gray-value processing of the segmented signals is as follows:
step 1.1, calculating the variance G_v of the active sonar signal by the following formula:
G_v = Σ_{i=1}^{w} Σ_{j=1}^{b} (G(i,j) - g)^2 / (w·b);
wherein w and b respectively represent the numbers of rows and columns of the active sonar signal amplitude map, i ∈ [1, w], j ∈ [1, b]; G(i,j) represents the signal amplitude at position (i,j) in the map, and g represents the mean of the active sonar signal amplitude map;
step 1.2, adjusting the amplitude map of the active sonar signal by the following formula:
G(i,j) = [G(i,j) - min[G]] / max[G];
wherein min[G] represents the minimum of the active sonar signal gray values G, and max[G] represents the maximum of G;
step 1.3, normalizing the adjusted active sonar signal map using the gray-level mean and variance:
when G(i,j) ≥ g, the adjusted gray value G_n(i,j) is given by the first normalization formula (Figure FDA0004083457590000011);
when G(i,j) < g, G_n(i,j) is given by the second normalization formula (Figure FDA0004083457590000012).
3. The reinforcement learning-based active sonar target echo detection method of claim 1, wherein,
in the step 2, the reward value is calculated as follows: P_K = β / (d · I^T Q_K^-1 I);
wherein P_K is the reward value that reinforces the current behavior: the higher the degree of correlation between the active sonar echo signal and the transmitted signal, the higher the reward value P_K; the lower the correlation, the smaller the reward value;
β is a reverberation suppression factor, and I is the identity matrix; Q_K is the covariance matrix of the echo signal segments, obtained by multiplying the echo signal of the desired signal by the noise matrix;
d represents the sequence length after signal segmentation, d = N/K; wherein N is the rank of the signal covariance matrix estimate; K denotes the initial filter order, i.e. the number of signal segments, and K is a one-dimensional matrix.
4. The reinforcement learning-based active sonar target echo detection method of claim 1, wherein,
in the step 3, the data table structure is represented as Qtable = (s, [a, r], s*); wherein s represents the current state, s* represents the next state, a represents the action number, and r represents the reward value that the next action will obtain;
after the state s and the reward value r of the next step are obtained, the value of the current state is evaluated by the following formula:
Q = r + γ max_s Q(s, a; θ);
wherein θ represents the input parameter value, γ represents the attenuation factor, and max_s Q(s, a; θ) represents the largest future reward given the new state and action; Q is the evaluation value of the current operation.
5. The reinforcement learning-based active sonar target echo detection method of claim 1, wherein,
in the step 3, the generated active sonar signal gray-scale image is input into the convolutional network model to extract its main features, and the next motion state, namely left, stop, or right, is determined from these features;
when the next motion state is left, the echo signal is judged to be on its left side, and the search for the echo continues to the left;
when the next motion state is right, the echo signal is judged to be on its right side, and the search for the echo continues to the right;
when the next motion state is stop, the echo target is at the current position, and the search stops there.
6. The reinforcement learning-based active sonar target echo detection method of claim 4, wherein,
in the step 3, the update process of the estimation network is as follows:
Q*(s, a) = Q(s, a) + α(r + max_s Q(s', a') - Q(s, a));
L(θ) = E(TargetQ_θ - Q*(s, a));
wherein Q*(s, a) represents the updated Q value after the action is completed; Q(s, a) represents the Q value of the current action; α represents the learning efficiency; s' and a' represent the next state and the next action, respectively; max_s Q(s', a') represents the maximum reward available in that case, and TargetQ_θ represents the output of the convolutional network model;
L(θ) represents the loss function under parameter θ; the loss function updates the network weights by back-propagating its gradient, and the optimal result is obtained when L(θ) converges, at which point the action network is the optimal control strategy;
the saved convolutional network model is the trained convolutional network model.
7. The reinforcement learning-based active sonar target echo detection method of claim 1, wherein,
the step 4 specifically comprises the following steps:
step 4.1, noise reduction processing is carried out on the input active sonar signals;
step 4.2, dividing the active sonar signals;
step 4.3, obtaining the trained convolutional network model and the Q table according to step 3; the Q table is used for the next-action judgment in reinforcement learning, and the convolutional network model stores the learned feature extraction of the signals;
step 4.4, identifying the active sonar detection signal with the convolutional network model to obtain the signal features for the next-step inference and judgment with the Q table;
step 4.5, quickly inferring, by querying the Q table structure, whether the next step is to end the judgment or to continue;
and step 4.6, when the query of the Q table structure indicates that the next step is to end, the current segment is a data segment containing the echo signal, and the distance from the sonar to the target is deduced from the position of the echo.
CN202310004992.4A 2023-01-04 2023-01-04 Active sonar target echo detection method based on reinforcement learning Active CN115685170B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310004992.4A CN115685170B (en) 2023-01-04 2023-01-04 Active sonar target echo detection method based on reinforcement learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310004992.4A CN115685170B (en) 2023-01-04 2023-01-04 Active sonar target echo detection method based on reinforcement learning

Publications (2)

Publication Number Publication Date
CN115685170A CN115685170A (en) 2023-02-03
CN115685170B true CN115685170B (en) 2023-05-09

Family

ID=85057290

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310004992.4A Active CN115685170B (en) 2023-01-04 2023-01-04 Active sonar target echo detection method based on reinforcement learning

Country Status (1)

Country Link
CN (1) CN115685170B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009068989A (en) * 2007-09-13 2009-04-02 Nec Corp Active sonar device and reverberation removal method by the same
CN102768354A (en) * 2011-05-05 2012-11-07 中国科学院声学研究所 Method and system for obtaining echo data of underwater target

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5555532A (en) * 1984-05-23 1996-09-10 The United States Of America As Represented By The Secretary Of The Navy Method and apparatus for target imaging with sidelooking sonar
FR3083633B1 (en) * 2018-07-05 2020-05-29 Thales METHOD AND DEVICE FOR DISPLAYING HIGH DYNAMIC SONAR OR RADAR DATA
CN111652149A (en) * 2020-06-04 2020-09-11 青岛理工大学 Deep convolutional neural network-based benthonic oil sonar detection image identification method
CN112526524B (en) * 2020-12-09 2022-06-17 青岛澎湃海洋探索技术有限公司 Underwater fishing net detection method based on forward-looking sonar image and AUV platform
CN112731410B (en) * 2020-12-25 2021-11-05 上海大学 Underwater target sonar detection method based on CNN
CN113466839B (en) * 2021-09-03 2021-12-07 北京星天科技有限公司 Side-scan sonar sea bottom line detection method and device
CN114444571A (en) * 2021-12-23 2022-05-06 中国船舶重工集团公司第七一五研究所 Sonar target individual identification method for autonomous learning


Also Published As

Publication number Publication date
CN115685170A (en) 2023-02-03

Similar Documents

Publication Publication Date Title
CN112001270B (en) Ground radar automatic target classification and identification method based on one-dimensional convolutional neural network
CN110852158B (en) Radar human motion state classification algorithm and system based on model fusion
CN116405109B (en) Optical module communication self-adaptive modulation method based on linear direct drive
CN110675410A (en) Side-scan sonar sunken ship target unsupervised detection method based on selective search algorithm
CN108646249B (en) Parameterized leakage target detection method suitable for partial uniform reverberation background
Huang et al. Radar waveform recognition based on multiple autocorrelation images
CN113992299B (en) Ship noise spectrum modulation method and device
CN115685170B (en) Active sonar target echo detection method based on reinforcement learning
CN109584256A (en) A kind of pulsar DM algorithm for estimating based on Hough straight-line detection
CN111929666B (en) Weak underwater sound target line spectrum autonomous extraction method based on sequential environment learning
CN112287752A (en) Method for extracting early fault characteristics of rotating shaft of hydroelectric generator
CN112765550A (en) Target behavior segmentation method based on Wi-Fi channel state information
Xu et al. Shipwrecks detection based on deep generation network and transfer learning with small amount of sonar images
CN117115436A (en) Ship attitude detection method and device, electronic equipment and storage medium
JP5078669B2 (en) Target detection apparatus, target detection method, and target detection program
US6376831B1 (en) Neural network system for estimating conditions on submerged surfaces of seawater vessels
CN114298094B (en) Automatic line spectrum extraction method based on principal component analysis
CN115951328A (en) Wind speed estimation method and device of wind lidar based on probability density constraint
CN216748058U (en) Interference characteristic parameter recognition device
Mao et al. Modulation Recognition Algorithm of Radar Signal Based on ICanny-CNN
CN117434153B (en) Road nondestructive testing method and system based on ultrasonic technology
CN116699617A (en) Floating target identification method by utilizing centroid time sequence information
CN116092484B (en) Signal detection method and system based on distributed optical fiber sensing in high-interference environment
CN116930901A (en) Two-stage sea surface radar target accurate detection method and system based on sequence reconstruction
De-shan et al. Robust object profile extraction based on multi-beam sonar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant