CN116500625B - Recovery imaging method, device, system, electronic equipment and readable storage medium - Google Patents

Info

Publication number: CN116500625B
Application number: CN202310781024.4A
Authority: CN (China)
Prior art keywords: snapshot, sonar, arrival, signal, receiving
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN116500625A
Inventors: 张祺, 丁飞
Current Assignee: Tianjin Zhihai Technology Co ltd
Original Assignee: Tianjin Zhihai Technology Co ltd
Application filed by Tianjin Zhihai Technology Co ltd
Priority to CN202310781024.4A
Publication of CN116500625A
Publication of CN116500625B (application granted)

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00: Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/88: Sonar systems specially adapted for specific applications
    • G01S15/89: Sonar systems specially adapted for specific applications for mapping or imaging
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/52: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
    • G01S7/539: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30: Assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The embodiment of the application discloses a recovery imaging method, apparatus, system, electronic device and readable storage medium. The method includes: while the sonar radiates a transmit signal toward the water bottom, receiving, through the acoustic transducer array, the snapshot observation signal vectors corresponding to Q snapshots; calculating, from a pre-acquired receiving array flow pattern and the snapshot observation signal vectors, the direction-of-arrival angle vector corresponding to each snapshot; determining, from the direction-of-arrival angle vector corresponding to each snapshot, the M target position coordinate points corresponding to that snapshot; and generating a three-dimensional interference image from the G×Q×M target position coordinate points. While the sonar moves through the water it radiates the transmit signal G times toward the water bottom; Q, M and G are positive integers.

Description

Recovery imaging method, device, system, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of information processing technologies, and in particular, to a method, an apparatus, a system, an electronic device, and a readable storage medium for recovering imaging.
Background
Professional ocean exploration currently relies mainly on sonar detection technology: a beam is transmitted toward the water bottom, covering water-column targets and the water bottom within a certain range; the transmitted signal is reflected and scattered by the water-column targets or the water bottom to form backscattered energy, which is received and processed by the acoustic transducer array to form echo signals; after a series of processing steps on the echo signals, a three-dimensional interference image indicating the underwater situation is finally generated.
Because underwater conditions are complex and both the existing side-scan imaging schemes and the multi-beam sonar imaging schemes have limitations, it is currently difficult to generate three-dimensional interference images that describe the underwater situation well.
Disclosure of Invention
The embodiment of the application provides a recovery imaging method, a device, a system, electronic equipment and a readable storage medium, which can solve the problem that a three-dimensional interference image for describing the underwater situation is difficult to generate at present.
In a first aspect, an embodiment of the present application provides a method for restoring imaging, including:
under the condition that the sonar radiates a transmitting signal to the water bottom, receiving snapshot observation signal vectors corresponding to the Q snapshots respectively through the acoustic transducer array;
according to a pre-acquired receiving array flow pattern and snapshot observation signal vectors, respectively calculating corresponding arrival direction angle vectors of each snapshot;
according to the direction-of-arrival angle vector corresponding to each snapshot, M target position coordinate points corresponding to each snapshot are respectively determined;
generating a three-dimensional interference image according to the G×Q×M target position coordinate points; wherein, while the sonar moves through the water, the sonar radiates the transmit signal G times toward the water bottom, and Q, M and G are positive integers.
In a second aspect, embodiments of the present application provide a recovery imaging system, the system comprising: wet end, dry end and sparse direction of arrival solver, wherein,
the wet end includes at least: the system comprises an acoustic transducer array, a transmitting signal generating module, an echo signal receiving and processing module, a first communication time sequence control module and a power supply module:
wherein the acoustic transducer array is used for radiating a transmitting signal to the water bottom and receiving an echo signal;
a transmission signal generation module for driving a line array in the acoustic transducer array to generate acoustic power;
the echo signal receiving and processing module is used for receiving echo signals corresponding to the Q snapshots respectively and converting the echo signals into snapshot observation signal vectors corresponding to the Q snapshots respectively;
the first communication time sequence control module is used for establishing communication connection with the dry end;
the power supply module is used for supplying power to the acoustic transducer array, the transmitting signal generating module, the echo signal receiving and processing module and the first communication time sequence control module;
the sparse direction-of-arrival calculating device is used for respectively calculating the direction-of-arrival angle vector corresponding to each snapshot according to the pre-acquired receiving array flow pattern and the snapshot observation signal vector;
The dry end comprises at least: the system comprises display control equipment, a second communication time sequence control module and power supply equipment;
the display control equipment is used for respectively determining M target position coordinate points corresponding to each snapshot according to the direction-of-arrival angle vector corresponding to each snapshot, and for generating a three-dimensional interference image according to the G×Q×M target position coordinate points; while the sonar moves through the water, the sonar radiates the transmit signal G times toward the water bottom, Q, M and G are positive integers, and the sonar comprises the acoustic transducer array;
the second communication time sequence control module is used for establishing communication connection with the wet end;
and the power supply equipment is used for supplying power to the display control equipment and the second communication time sequence control module.
In a third aspect, an embodiment of the present application provides a recovery imaging apparatus, including:
the receiving module is used for receiving snapshot observation signal vectors corresponding to the Q snapshots respectively through the acoustic transducer array under the condition that the sonar radiates the emission signal to the water bottom;
the resolving module is used for respectively resolving the direction-of-arrival angle vector corresponding to each snapshot according to the pre-acquired receiving array flow pattern and the snapshot observation signal vector;
the determining module is used for respectively determining M target position coordinate points corresponding to each snapshot according to the arrival direction angle vector corresponding to each snapshot;
The generating module is used for generating a three-dimensional interference image according to the G×Q×M target position coordinate points; while the sonar moves through the water, the sonar radiates the transmit signal G times toward the water bottom, and Q, M and G are positive integers.
In a fourth aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory storing computer program instructions; the processor, when executing the computer program instructions, implements the method as in the first aspect or any of the possible implementations of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement a method as in the first aspect or any of the possible implementations of the first aspect.
In the embodiment of the application, while the sonar radiates a transmit signal toward the water bottom, the acoustic transducer array receives the snapshot observation signal vectors corresponding to the Q snapshots, and the direction-of-arrival angle vector corresponding to each snapshot is calculated from the pre-acquired receiving array flow pattern and the snapshot observation signal vectors, so that the direction-of-arrival angle vector of each snapshot can be obtained quickly and accurately. From the direction-of-arrival angle vector corresponding to each snapshot, the M target position coordinate points corresponding to that snapshot can be determined. Finally, as the sonar moves through the water it radiates the transmit signal G times toward the water bottom and correspondingly receives G echo signals, which are converted into snapshot observation signal vectors; a three-dimensional interference image can therefore be generated quickly and accurately from the G×Q×M target position coordinate points, improving the imaging precision and resolution of the three-dimensional interference image.
Drawings
In order to more clearly illustrate the technical solution of the embodiments of the present application, the drawings that are needed to be used in the embodiments of the present application will be briefly described, and it is possible for a person skilled in the art to obtain other drawings according to these drawings without inventive effort.
FIG. 1 is a schematic illustration of a side scan sonar measurement;
FIG. 2 is a schematic diagram of another side-scan sonar measurement;
FIG. 3 is a flowchart of a method for restoring imaging according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a coordinate point of a target position according to an embodiment of the present application;
FIG. 5 is a schematic representation of a three-dimensional interference image provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a recovery imaging system according to an embodiment of the present application;
fig. 7 is a schematic diagram of parallel connection of array elements in an array according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an acoustic transducer array layout provided by an embodiment of the present application;
fig. 9 is a schematic coverage area diagram of a transmitting beam and a receiving beam according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a multi-frequency simultaneous detection acoustic transducer array according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a recovery imaging device according to an embodiment of the present application;
Fig. 12 is a schematic hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application will be described in detail below, and in order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail below with reference to the accompanying drawings and the detailed embodiments. It should be understood that the specific embodiments described herein are merely configured to illustrate the application and are not configured to limit the application. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the application by showing examples of the application.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises an element.
First, technical terms and technical principles related to the embodiments of the present application will be described.
Sonar is electronic equipment that uses underwater sound waves to detect, locate and communicate with underwater targets; it is the most widely used and most important device in underwater acoustics.
The number of snapshots refers to the ratio of the total ping time t_ping to the minimum processing (snapshot) time t_SP.
An echo is a signal that reaches a given point by a path other than the direct path. When a signal meets a reflecting object, part of its energy is absorbed and the remainder is reflected, producing an attenuated and delayed copy of the original signal that is superimposed on it.
The direction of arrival (Direction of Arrival, DOA) refers to the direction from which spatial signals arrive, i.e. the angle at which each signal reaches a reference array element of the array, referred to simply as the direction of arrival.
First, the principle of a multi-beam sonar system. Within a single ping time, i.e. the interval [t_0, t_0 + t_ping] needed to complete the radiation of a transmit signal and the reception of its echo signal, the multi-beam sonar radiates a signal toward the water bottom at time t_0, forming a wide beam with opening angle θ_wide and a narrow beam with opening angle θ_narrow. The transmit beam covers water-column targets and the water bottom within a certain range; the transmitted signal is reflected and scattered by the water-column targets and the water bottom to form backscattered energy, which is received and processed by the acoustic transducer array to form echo signals.
The wide beam opening angle θ_wide is divided in advance into M determined directions (M a positive integer) in an equiangular manner, with angular resolution Δθ = θ_wide/M. Digital beamforming is used to separate the echo signal of each direction θ_m. From the echo signal of each direction θ_m, M discrete distance values L_m are determined, i.e. the distances between the multi-beam sonar and the water-column targets or the water bottom in the M directions within the beam opening angle θ_wide.
With the multi-beam sonar moving under water at known depth, heading and speed v, and with the M discrete distance values L_m and the corresponding angles θ_m known, the true depth of the water-column targets or the water bottom in the M directions can be further solved. Repeating the single-ping process K times yields M×K discrete distance values, which form a two-dimensional image displayed as a point cloud representing the true depth of the water-column targets or the water bottom, i.e. the topography within the water body. In essence, the multi-beam sonar system solves for the time information indirectly from the amplitude information, and forms a two-dimensional image directly from the time and angle information.
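To make the background concrete, the following is a minimal sketch (in Python, with hypothetical names; not part of the patent) of how the M beamformed slant ranges L_m of one ping are converted into depth and across-track coordinates, and how K pings stack into the two-dimensional point cloud described above. The assumption that the beam angles are measured from the vertical is made for illustration only.

```python
import numpy as np

def multibeam_ping_to_points(beam_angles_deg, slant_ranges_m):
    """Convert one ping of beamformed slant ranges L_m (one per beam angle
    theta_m, assumed measured from the vertical) into (y, z) points, where
    y is the across-track offset and z the depth below the sonar."""
    theta = np.deg2rad(np.asarray(beam_angles_deg))
    L = np.asarray(slant_ranges_m)
    y = L * np.sin(theta)   # horizontal (across-track) distance
    z = L * np.cos(theta)   # vertical distance, i.e. depth
    return np.stack([y, z], axis=-1)

# Repeating the single-ping process K times and offsetting each ping along
# track by v * t_ping yields M x K points, displayed as a 2-D point cloud.
```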
At present, multi-beam sonar equipment cannot produce three-dimensional interference images that characterize depth and backscattered energy simultaneously. Second, the principle of a side-scan sonar system. Within the single ping time [t_0, t_0 + t_ping], the side-scan sonar radiates a signal toward the water bottom at time t_0; the two broadsides form wide beams with opening angle θ_wide and narrow beams with opening angle θ_narrow. The transmit beam covers water-column targets and the water bottom within a certain range; the transmitted signal is reflected and scattered by the water-column targets or the water bottom to form backscattered energy, which is received and processed by the acoustic transducer array to form an echo signal. The drawbacks of the side-scan sonar system include the following. First, the P discrete amplitude values depend on distance, propagation loss in the water, transmit beam angle, TVG (time-varying gain) compensation and the surface characteristics of the water-column target or the water bottom, so the depth cannot be accurately represented or inverted, and the description of the target or the water bottom is only an approximate estimate;
secondly, when the water bottom fluctuates severely, changes abruptly or slopes steeply, the echo signals no longer satisfy the assumption that arrival time is determined by the distance between the water-column target or the water bottom and the sonar, and the resulting space-time confusion causes description errors;
specifically, as shown in fig. 1, when the water bottom has no severe fluctuation, abrupt change or steep slope, as shown in fig. 1(a), the echo signal generated directly below the sonar (point O1) reaches the sonar first, and the echo signal generated where the slant range LSR meets the water bottom (point O4) reaches the sonar last.
In this case an interference imaging sonar can work normally. In the case of a steep water-bottom slope, as shown in FIG. 1(b), in the extreme case echo signals from the water bottom (O1-O2) arrive only after the two-way travel time corresponding to point O1, and after pulse compression the amplitude time-series envelope (1-2) is inconsistent with the water bottom (O1-O2), producing an erroneous description.
In the case of severe fluctuation and abrupt change of the water bottom, as shown in fig. 1(c), the echo signal generated at point O1 should in theory reach the sonar first; but because of the severe fluctuation and abrupt change, point O6 is in fact closer to the sonar, and its echo arrives earlier than that of point O1. An erroneous description is therefore produced.
Thirdly, due to the pulse compression method and the multipath effect of the water body, target echo signals with the same arrival time but different directions cannot be distinguished.
As shown in FIG. 2, the side-scan sonar radiates a transmit signal toward the water bottom from point O. The echo signals generated at point 1 and point 1', which lie at the same slant range, reach the sonar at the same time; after pulse compression their amplitudes are superimposed, and the two points at different positions collapse into one point.
Similarly, each point between point 1 and point 2 has the same arrival time as the corresponding point between point 1' and point 2', and their amplitudes are superimposed. The actual water bottom is concave, but the amplitude time-series envelope characterizes it as convex, producing a description error relative to the actual water bottom.
Next, the principle of multibeam sonar interferometric imaging is involved.
The multi-beam sonar system uses digital beamforming (Digital Beamforming) to separate the echo signals of the M directions from the mixed echo signal, correlates the echo signal of each direction with the transmit signal to obtain the arrival time (TOA) of the echo in that direction, and at the same time convolves the mixed echo signals of the M directions with the transmit signal to obtain an amplitude time series of P discrete points, where P is a positive integer; the sounding operation and the side-scan operation are thus performed simultaneously within a single ping time. However, the side-scan result does not truly reflect distance, so it is difficult to couple it with the sounding result to jointly form a three-dimensional interference image.
Finally, interference imaging sonar principles are involved.
The wet end of the interference imaging sonar moves under water with known depth, heading and speed v. Within the single-ping interval [t_0, t_0 + t_ping], the interference imaging sonar radiates a signal toward the water bottom at time t_0; the two broadsides form wide beams with opening angle θ_wide and narrow beams with opening angle θ_narrow, and the transmit beam covers water-column targets and the water bottom within a certain range. The transmitted signal is reflected and scattered by the water-column targets or the water bottom to form backscattered energy, which is received and processed by the acoustic transducer array to form an echo signal.
With a snapshot time t_SP determined in advance, the interference imaging sonar processes the echo signals according to the snapshot time t_SP to form snapshot observation signal vectors Y; a target position coordinate point is then calculated from the snapshot observation signal vector Y using some algorithm, which may include: the differential phase algorithm, the high-resolution subarray space fitting algorithm, the point-by-point DOA algorithm, or the CAATI algorithm.
The interference imaging sonar system essentially makes direct use of three kinds of information: the direction-of-arrival angle θ (angle information), the amplitude corresponding to that angle (amplitude information), and the time information. Compared with side-scan sonar, the interference imaging sonar therefore adds the direction-of-arrival angle θ (angle information), so the true depth can be calculated in combination with the time information; compared with a multi-beam sonar system, it couples the sounding information and the side-scan information simultaneously to obtain a three-dimensional interference image.
At present, the differential phase algorithm can only estimate one target at a time and is not suitable for interference imaging sonar applications; the high-resolution subarray space fitting algorithm must compute the incidence angles for all subarray selection schemes and choose the one with the smallest standard deviation, which involves a large amount of computation, depends on the number of snapshots, and is inefficient.
The point-by-point DOA algorithm mainly uses the ESPRIT algorithm, assisted by spatial smoothing and noise whitening, and must ensure that the covariance matrix of the observed signal vector Y is full rank, so its computational complexity is high. Moreover, there is no effective method for estimating the number of target echo signals that share the same arrival time but come from different directions, so presetting that number inevitably leads to missed targets, pseudo-solutions or no solution; the algorithm depends heavily on the premise that the topography has no severe fluctuation, abrupt change or steep slope; and it cannot solve the core problem of distinguishing target echo signals with the same arrival time but different directions.
The CAATI algorithm is subject to the full-rank condition of the Prony-method matrix, and the number of target echo signals with the same arrival time but different directions that it can handle is tied to the number N of line arrays in the single-side acoustic transducer array through the relation N = 2 × (number of such targets).
Therefore, the current three-dimensional interference imaging map algorithm has limitations, which make it difficult to generate three-dimensional interference images for describing the underwater condition. The following describes a recovery imaging method provided in the embodiment of the present application in detail.
Fig. 3 is a flowchart of a method for recovering imaging according to an embodiment of the present application.
As shown in fig. 3, the method for restoring imaging may include steps 310 to 340, and the method is applied to a restoring imaging apparatus, specifically as follows:
step 310, under the condition that the sonar radiates a transmitting signal to the water bottom, receiving snapshot observation signal vectors corresponding to the Q snapshots respectively through the acoustic transducer array;
step 320, according to the pre-acquired receiving array flow pattern and snapshot observation signal vector, respectively calculating the corresponding direction-of-arrival angle vector of each snapshot;
step 330, determining M target position coordinate points corresponding to each snapshot according to the direction-of-arrival angle vector corresponding to each snapshot;
step 340, generating a three-dimensional interference image according to the G×Q×M target position coordinate points; wherein, while the sonar moves through the water, the sonar radiates the transmit signal G times toward the water bottom, and Q, M and G are positive integers.
In the recovery imaging method provided by the application, while the sonar radiates a transmit signal toward the water bottom, the acoustic transducer array receives the snapshot observation signal vectors corresponding to the Q snapshots, and the direction-of-arrival angle vector corresponding to each snapshot is calculated from the pre-acquired receiving array flow pattern and the snapshot observation signal vectors, so that the direction-of-arrival angle vector of each snapshot can be obtained quickly and accurately; from the direction-of-arrival angle vector corresponding to each snapshot, the M target position coordinate points corresponding to that snapshot can be determined. Finally, as the sonar moves through the water it radiates the transmit signal G times toward the water bottom and correspondingly receives G echo signals, which are converted into snapshot observation signal vectors; a three-dimensional interference image can therefore be generated quickly and accurately from the G×Q×M target position coordinate points, improving the imaging precision and resolution of the three-dimensional interference image.
The following describes the contents of steps 310-340, respectively:
involving step 310.
Under the condition that the sonar radiates emission signals to the water bottom, the sound transducer array is used for receiving snapshot observation signal vectors corresponding to the Q snapshots respectively.
In one possible embodiment, before step 310, further includes:
acquiring an inclined distance length and a distance resolution, wherein the inclined distance length is the farthest detection length of the sonar;
determining preset snapshot time according to the slant distance length and the distance resolution;
receiving, by the acoustic transducer array, snapshot observation signal vectors corresponding to the Q snapshots, respectively, comprising:
based on preset snapshot time, snapshot observation signal vectors corresponding to the Q snapshots are received through the acoustic transducer array.
The acoustic transducer array receives echo signals, and then forms snapshot observation signal vectors corresponding to Q snapshots respectively by utilizing an echo signal receiving and processing module.
Specifically, given the single ping time t_ping, or equivalently the slant range length LSR, the ping is divided into Q snapshots according to the distance resolution ΔL = c·t_SP/2, where t_SP is the preset snapshot time, LSR = Q·ΔL, and c is the sound velocity in water. Let q ∈ {q | 0 ≤ q ≤ Q−1, q an integer} be the snapshot index.
Wherein, presetting the snapshot time is convenient for the digital system to realize discrete time signal processing. Accordingly, the snapshot observation signal vectors corresponding to the Q snapshots respectively can be received by the acoustic transducer array based on the preset snapshot time.
The smaller the preset snapshot time is, the more snapshot observation signal vectors can be collected, and the more the finally determined target position coordinate point is accurate.
Determining the preset snapshot time according to the slant range length and the distance resolution includes:
determining the number of snapshots according to the length of the inclined distance and the distance resolution;
and determining preset snapshot time according to the number of snapshots and preset time length, wherein the preset time length is the time length between the time that the sonar radiates a once-transmitted signal to the water bottom and the time that the acoustic transducer array receives an echo signal.
Here Q = LSR/ΔL, where Q is the number of snapshots, LSR is the slant range length and ΔL is the distance resolution. Thus the number of snapshots can be determined from the slant range length and the distance resolution.
Further, t_SP = t_ping/Q, where t_ping is the preset time length between the moment the sonar radiates a transmit signal toward the water bottom and the moment the acoustic transducer array finishes receiving the echo signal, and t_SP is the preset snapshot time.
Wherein c is the sound velocity in water, and Q is a positive integer.
Correspondingly, receiving, by the acoustic transducer array, snapshot observation signal vectors corresponding to the Q snapshots respectively, including: based on preset snapshot time, snapshot observation signal vectors corresponding to the Q snapshots are received through the acoustic transducer array.
For example, the preset snapshot time is 0.1 seconds, that is, the snapshot observation signal vector can be collected at 0.1 seconds, 0.2 seconds, 0.3 seconds, and the like.
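The relations above (Q = LSR/ΔL, ΔL = c·t_SP/2 and t_SP = t_ping/Q) can be gathered into one small helper. This is only an illustrative sketch of the reconstructed formulas; the function name and the example values are assumptions, not figures from the patent.

```python
def snapshot_parameters(slant_range_m, range_resolution_m, sound_speed_mps=1500.0):
    """Derive the snapshot count Q, the preset snapshot time t_SP and the ping
    time t_ping from the slant range LSR and the distance resolution dL."""
    Q = int(round(slant_range_m / range_resolution_m))   # Q = LSR / dL
    t_sp = 2.0 * range_resolution_m / sound_speed_mps    # from dL = c * t_SP / 2
    t_ping = Q * t_sp                                     # since t_SP = t_ping / Q
    return Q, t_sp, t_ping

# Example (assumed values): LSR = 75 m, dL = 0.1 m, c = 1500 m/s
# -> Q = 750 snapshots, t_SP ≈ 133 µs, t_ping = 0.1 s.
```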
Receiving, by the acoustic transducer array and based on the preset snapshot time, the snapshot observation signal vectors corresponding to the Q snapshots respectively includes:
acquiring echo frequency band signals output by a line array;
converting the echo band signal into an echo baseband signal;
and converting the echo baseband signal into the snapshot observation signal vector.
Specifically, the sonar first generates a transmit signal F. The echo band signal output by an arbitrary line array is then acquired, and the echo signal receiving and processing module converts this echo band signal from band to baseband to form an echo baseband signal. Finally, according to the preset snapshot time t_SP, the baseband signal is digitized through sampling, analog-to-digital conversion and the like to obtain the q-th snapshot observation signal vector Y_q.
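A minimal sketch of this receive chain is given below, assuming complex demodulation to baseband and a uniform snapshot grid; the sample rate, carrier frequency, low-pass filtering details and the choice of averaging each snapshot block are illustrative assumptions rather than the patent's prescribed processing.

```python
import numpy as np

def band_to_baseband(echo_band, fs, f_carrier):
    """Demodulate real passband echo samples to complex baseband.
    echo_band: array of shape (n_line_arrays, n_samples)."""
    t = np.arange(echo_band.shape[-1]) / fs
    # mix down by the carrier; a low-pass filter would normally follow (omitted)
    return echo_band * np.exp(-2j * np.pi * f_carrier * t)

def snapshot_vector(echo_baseband, fs, t_sp, q):
    """Form the q-th snapshot observation vector Y_q: one complex sample per
    line array, taken from the q-th snapshot interval of length t_SP."""
    n_per_snap = max(1, int(round(t_sp * fs)))
    block = echo_baseband[:, q * n_per_snap:(q + 1) * n_per_snap]
    return block.mean(axis=1)   # shape (n_line_arrays,) -> Y_q
```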
Involving step 320.
And respectively calculating the corresponding direction-of-arrival angle vector of each snapshot according to the pre-acquired receiving array flow pattern and the snapshot observation signal vector.
In one possible embodiment, step 320 includes:
according to the flow pattern of the receiving array, respectively calculating the amplitude vector corresponding to each snapshot observation signal vector;
and determining the direction-of-arrival angle vector according to the amplitude vector.
Y_q = A·X_q, where Y_q is the q-th snapshot observation signal vector, A is the receiving array flow pattern, and X_q is the amplitude vector corresponding to that snapshot observation signal vector. The amplitude vector represents the amplitudes associated with the candidate direction-of-arrival angles.
The sparse direction-of-arrival solver resolves, for the q-th snapshot, the amplitude vector X_q = [r_0, …, r_{M−1}]^T corresponding to the direction-of-arrival angles, together with the direction-of-arrival angle vector Θ_q = [θ_0, …, θ_{M−1}]^T.
The direction-of-arrival angle vector Θ_q can be used to calculate the target position coordinate points (y_m, z_m), where z_m characterizes the depth measurement information, i.e. the depth. The amplitude vector X_q = [r_0, …, r_{M−1}]^T corresponding to the direction-of-arrival angles characterizes the side-scan information, i.e. the target backscattered energy.
Therefore, the direction-of-arrival angle vector corresponding to each snapshot can be rapidly and accurately calculated according to the pre-acquired receiving array flow pattern and the snapshot observation signal vector.
In one possible embodiment, before step 320, further includes:
acquiring the number of single-side line arrays of the sound transducer array and the number of target directions of wide beam opening angles;
And determining a receiving array flow pattern according to the number of arrays, the number of target directions and a preset mathematical model.
The receiving array flow pattern A is determined from the number N of line arrays on a single side of the acoustic transducer array and the number M of target directions within the wide beam opening angle.
Here θ_wide is the wide beam opening angle, Δθ is the direction-of-arrival angular resolution, M = θ_wide/Δθ is the number of target directions obtained by dividing the wide beam opening angle at the angular resolution Δθ, and m is the index over the M target directions.
From the single-broadside wide beam opening angle θ_wide and the direction-of-arrival angular resolution Δθ, the value of M is determined by the formula M = θ_wide/Δθ. In particular, when the required Δθ is small enough, M becomes correspondingly large, and the receiving array flow pattern A then constitutes an "overcomplete dictionary (Over complete dictionary)".
At this time, according to the mathematical model Y = A·X, X is a sparse vector: only a limited number of its elements are non-zero, and the rest are zero.
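The patent does not fix a particular sparse solver, so the sketch below is only one possible illustration: it builds an overcomplete dictionary A for an assumed uniform geometry of N line arrays spaced at d = λ/2, with candidate angles covering the wide beam opening angle at resolution Δθ, and recovers a sparse X from Y = A·X with orthogonal matching pursuit. All names and the choice of algorithm are assumptions.

```python
import numpy as np

def receive_array_flow_pattern(n_lines, candidate_angles_rad, d_over_lambda=0.5):
    """Overcomplete dictionary A of shape (N, M): one steering vector per
    candidate direction, for N line arrays at assumed spacing d = lambda/2."""
    n = np.arange(n_lines)[:, None]
    return np.exp(-2j * np.pi * d_over_lambda * n * np.sin(candidate_angles_rad)[None, :])

def omp(A, y, k):
    """Orthogonal matching pursuit: find a k-sparse x with y ≈ A @ x."""
    residual = y.astype(complex).copy()
    support = []
    x = np.zeros(A.shape[1], dtype=complex)
    coeffs = np.zeros(0, dtype=complex)
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.conj().T @ residual)))
        if idx not in support:
            support.append(idx)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x

# Per snapshot q: X_q = omp(A, Y_q, k). The indices of the non-zero entries give
# the direction-of-arrival angle vector and their magnitudes the amplitude vector.
```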
Involving step 330.
According to the direction-of-arrival angle vector corresponding to each snapshot, M target position coordinate points corresponding to each snapshot are respectively determined;
in one possible embodiment, step 330 includes:
obtaining distance resolution and projection distance, wherein the projection distance is the distance between a sonar and a projection point of the sonar at the water bottom;
And respectively determining M target position coordinate points corresponding to each snapshot according to the distance resolution, the projection distance and the direction-of-arrival angle vector.
The angle between a target (or its echo signal) and the normal of the acoustic transducer array is defined as θ, i.e. the direction-of-arrival angle θ. Given the projection distance H between the sonar and its projection point on the water bottom, the M target position coordinate points corresponding to the q-th snapshot can be solved from the direction-of-arrival angle vector Θ_q, the distance resolution ΔL and the snapshot index q. Using the index m, any such point is expressed as (y_m, z_m), where z_m characterizes the vertical distance, i.e. the depth, and y_m characterizes the horizontal distance between the target and the sonar transducer array, as shown in the following formula:

z_m = (H + q·ΔL)·cos(θ_m),  y_m = (H + q·ΔL)·sin(θ_m)    (1)

where H is the projection distance between the sonar and its projection point on the water bottom, ΔL is the distance resolution, Θ_q = [θ_0, …, θ_{M−1}]^T is the direction-of-arrival angle vector, and (y_m, z_m) is the target position coordinate point.

As shown in FIG. 4, suppose there is a target 1 in the q-th snapshot, with coordinates (y_1, z_1). From the triangular geometry of target 1 in the figure:

L_1 = H + q·ΔL

where H is the distance between the sonar and its projection point on the water bottom and L_1 is the slant range from the sonar to target 1; then

z_1 = L_1·cos(θ_1),  y_1 = L_1·sin(θ_1).
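A sketch of step 330 under the geometry described above (slant range L_q = H + q·ΔL, with depth and horizontal offset obtained from the cosine and sine of the direction-of-arrival angle) is shown below; the function name and the exact angle convention are illustrative assumptions.

```python
import numpy as np

def target_points(q, doa_angles_rad, projection_distance_m, range_resolution_m):
    """M target position coordinate points (y_m, z_m) for the q-th snapshot.
    projection_distance_m is H, the distance from the sonar to its projection
    point on the water bottom; range_resolution_m is dL."""
    L_q = projection_distance_m + q * range_resolution_m   # slant range of snapshot q
    y = L_q * np.sin(doa_angles_rad)                        # horizontal distance
    z = L_q * np.cos(doa_angles_rad)                        # vertical distance (depth)
    return np.stack([y, z], axis=-1)                        # shape (M, 2)
```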
Involving step 340.
Generating a three-dimensional interference image according to the G×Q×M target position coordinate points; while the sonar moves through the water, the sonar radiates the transmit signal G times toward the water bottom, and Q, M and G are positive integers.
In the process that the sonar moves in water, the sonar radiates G times of emission signals to the water, the acoustic transducer array receives G times of echo signals, and the snapshot observation signal vector is obtained by converting the echo signals.
In one possible embodiment, step 340 includes:
generating G snapshot images according to the G×Q×M target position coordinate points, wherein each snapshot image is generated from Q×M target position coordinate points;
and generating a three-dimensional interference image according to the G snapshot images.
For the q-th snapshot, the interference-imaging color coding of the M target position coordinate points corresponding to that snapshot can be realized using a point-cloud-coordinate interference-imaging information fusion display method, forming M color-coded target position coordinates; each snapshot image is thus generated from Q×M target position coordinate points. Here q is any positive integer in the interval [1, Q].
The method further includes: transforming between the local coordinate system and the selected geographic coordinate system, and post-processing the target position coordinate points to achieve the best measurement and display effect, where the post-processing of the target position coordinate points includes adding, modifying and deleting points.
Generating G snapshot images from the G×Q×M target position coordinate points amounts to single-ping imaging: snapshot imaging is repeated Q times on the left and right sides simultaneously, realizing single-ping imaging out to the farthest detection distance (slant range LSR) on both sides; Q interference imaging data sets in total are formed in order of the snapshot index q, giving at most Q×M discrete color-coded target position coordinate points.
Next, the three-dimensional interference image is generated from the G snapshot images.
As the interference imaging sonar moves under water, let g ∈ {g | 0 ≤ g ≤ G−1, g an integer} be the ping index. In time, G single-ping images in total (ping 1 to ping G) are formed along the direction of motion in order of the ping index g; combined with the sonar speed v, the G single-ping images are assembled and displayed on the dry-end display and control device.
As shown in fig. 5, the G×Q×M discrete color-coded target position coordinate points can be displayed as a three-dimensional interference imaging map in a point cloud manner.
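One way to picture the fusion into a single display is sketched below: the per-snapshot points of every ping are stacked into one color-coded point cloud, with the along-track position taken as g·v·t_ping and the amplitude vector X_q used as the color channel. The data layout is hypothetical and only illustrates the G×Q×M assembly.

```python
import numpy as np

def assemble_point_cloud(pings, speed_mps, t_ping_s):
    """pings: list over g of lists over q of (points, amplitudes), where points
    is an (M, 2) array of (y, z) and amplitudes an (M,) array of backscatter
    values. Returns an (n, 4) array of [x_along_track, y, z, color]."""
    rows = []
    for g, ping in enumerate(pings):
        x_along = g * speed_mps * t_ping_s          # along-track position of ping g
        for points, amps in ping:
            for (y, z), r in zip(points, amps):
                rows.append([x_along, y, z, abs(r)])  # amplitude -> color code
    return np.array(rows)

# The G*Q*M rows can then be rendered as a 3-D point cloud, mapping the fourth
# column to a color scale that shows the backscattered energy.
```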
In the field of sonar detection, an interference imaging sonar sparse recovery imaging system consisting of a dry end and a wet end is constructed. An echo transient sparse recovery imaging method is used to obtain the amplitude vector X_q corresponding to the direction-of-arrival angles and the direction-of-arrival angle vector Θ_q, and the interference imaging sonar sparse recovery imaging method generates a three-dimensional interference image through snapshot imaging, single-ping imaging and three-dimensional interference imaging; it is suitable for interference imaging sonar.
Further, according to the system and method of the embodiments of the application, the receiving array of a near-bottom multi-beam sonar or side-scan sonar can be transformed into an N-line array to form the receiving array flow pattern A, so that the interference imaging function can be realized and the working modes of multi-beam sonar systems and side-scan sonar systems can be extended.
In summary, in the embodiment of the application, while the sonar radiates a transmit signal toward the water bottom, the acoustic transducer array receives the snapshot observation signal vectors corresponding to the Q snapshots, and the direction-of-arrival angle vector corresponding to each snapshot is calculated from the pre-acquired receiving array flow pattern and the snapshot observation signal vectors, so that the direction-of-arrival angle vector of each snapshot can be obtained quickly and accurately; from the direction-of-arrival angle vector corresponding to each snapshot, the M target position coordinate points corresponding to that snapshot can be determined. Finally, as the sonar moves through the water it radiates the transmit signal G times toward the water bottom and correspondingly receives G echo signals, which are converted into snapshot observation signal vectors; a three-dimensional interference image can therefore be generated quickly and accurately from the G×Q×M target position coordinate points, improving the imaging precision and resolution of the three-dimensional interference image.
As shown in fig. 6, the recovery imaging system is specifically shown as follows:
a restoration imaging system comprising: wet end 610, dry end 620, and sparse direction of arrival solver 630, wherein,
the wet end 610 includes at least: the system comprises an acoustic transducer array, a transmitting signal generating module, an echo signal receiving and processing module, a first communication time sequence control module and a power supply module:
wherein the acoustic transducer array is used for radiating a transmitting signal to the water bottom and receiving an echo signal;
a transmission signal generation module for driving a line array in the acoustic transducer array to generate acoustic power;
the echo signal receiving and processing module is used for receiving echo signals corresponding to the Q snapshots respectively and converting the echo signals into snapshot observation signal vectors corresponding to the Q snapshots respectively;
the first communication time sequence control module is used for establishing communication connection with the dry end;
the power supply module is used for supplying power to the acoustic transducer array, the transmitting signal generating module, the echo signal receiving and processing module and the first communication time sequence control module;
the sparse direction-of-arrival calculating means 630 is configured to calculate a direction-of-arrival angle vector corresponding to each snapshot according to a pre-acquired receiving array flow pattern and a snapshot observation signal vector;
The dry end 620 includes at least: the system comprises display control equipment, a second communication time sequence control module and power supply equipment;
the display control equipment is used for respectively determining M target position coordinate points corresponding to each snapshot according to the direction-of-arrival angle vector corresponding to each snapshot, and for generating a three-dimensional interference image according to the G×Q×M target position coordinate points; while the sonar moves through the water, the sonar radiates the transmit signal G times toward the water bottom, Q, M and G are positive integers, and the sonar comprises the acoustic transducer array;
the second communication time sequence control module is used for establishing communication connection with the wet end;
and the power supply equipment is used for supplying power to the display control equipment and the second communication time sequence control module.
In one possible embodiment, the sparse direction of arrival solver is integrated at the wet end if the computational force of the sparse direction of arrival solver is the first computational force;
under the condition that the calculation force of the sparse direction-of-arrival calculation device is the second calculation force, integrating the sparse direction-of-arrival calculation device at the dry end; wherein the first computing force is less than the second computing force.
Depending on the computing capability of the sparse direction-of-arrival solver: when its computing power is the first (lower) computing power, the solver is typically an embedded system or a programmable logic device, such as a field programmable gate array (Field Programmable Gate Array, FPGA), a digital signal processor (Digital Signal Processor, DSP) or a micro control unit (Microcontroller Unit, MCU), and considering the convenience of signal processing it can be integrated at the wet end.
In the case where the computational power of the sparse direction-of-arrival solver is the second computational power, a high-computational-power sparse direction-of-arrival solver, typically a high-computational-power computing system, such as a server, a multi-core computer, or the like, may be employed, and the sparse direction-of-arrival solver may be integrated at the dry end in consideration of its power consumption, volume, or the like.
Specifically, the wet end is designed and manufactured as an underwater towfish. Apart from the structural shell, as shown in fig. 6, the wet end includes at least: 1. an acoustic transducer array; 2. a transmit signal generation module; 3. a transceiver conversion device; 4. an echo signal receiving and processing module; 5. a towfish attitude measurement module; 6. a communication timing control module; 7. a control unit; 8. a power supply module.
The transducer array is an array formed by arranging a plurality of transducers in a certain configuration; through the transducer array, the direction and energy of the transmit beam can be effectively controlled. A towfish is the towed sensor body of a measuring instrument pulled behind a survey vessel, such as the towed body of a side-scan sonar or a marine magnetometer.
Considering that there is a spatial separation between the wet end (located at depth under water) and the dry end (deployed on a surface carrier), transmitting the echo signal from the wet end to the dry end for signal processing is not conducive to preserving the signal-to-noise ratio; therefore, in one possible embodiment of the present application, the echo signal is processed close to its source, in the wet-end signal processing system.
In one possible embodiment of the application, the wet-end signal processing system is integrated in a watertight electronic cabin and connected to the port and starboard acoustic transducer arrays by watertight cables; the acoustic transducer arrays and the watertight electronic cabin are mounted together in a non-sealed towfish structural shell, which reduces the design difficulty and improves the suitability.
The watertight electronic cabin is a container for installing underwater electronic equipment; it physically isolates the internal devices from the seawater environment and protects their reliable operation.
The following description will be given in order:
the acoustic transducer array is formed by selecting materials with excellent electro-acoustic reversible conversion performance, such as piezoelectric ceramics, that is, materials and processes which can make the characteristics of array elements consistent, and the array elements are arranged according to a specific layout.
In one possible embodiment, the acoustic transducer array is deployed in a transmit receive co-location. Namely, in the single ping process, after the acoustic transducer array finishes transmitting signal radiation, the acoustic transducer array is switched to a receiving mode through the transceiver conversion device, and the back scattering energy reception is finished. Thus, the complexity of the system is simplified, the acoustic transducer array is used for transmitting and receiving, and the transmitting acoustic power and receiving performance are improved obviously. It will be appreciated that the acoustic transducer array may be deployed in a manner that selects a transmit-receive split.
At a specific operating frequency f, the operating band of the acoustic transducer array spans (f − B/2) to (f + B/2), where B is the operating bandwidth.
In one possible embodiment, a large bandwidth may be selected, i.e. the operating band of the acoustic transducer array extends close to f + B/2. The transmit signal scheme may include, but is not limited to: single-frequency pulse signals, linear frequency modulation (LFM) signals, coded phase-modulated signals, pseudo-random signals, hyperbolic frequency-hopping signals, inter-pulse modulated signals, and the like.
The single-side acoustic transducer array can adopt an N-line-array layout (N ≥ 2, N a positive integer), which keeps the receiving array flow pattern A (array manifold) simple and makes the echo transient sparse recovery imaging method convenient to implement. The receiving array flow pattern is a matrix formed by the set of steering vectors.
Specifically, N line arrays with identical characteristics (N ≥ 2, N a positive integer) can be arranged along the fore-and-aft direction on the left and right sides of the underwater towfish, with the value of N calculated and determined from the sparsity k and the number of target directions M according to N ≥ k·lg M. The number of target directions M = θ_wide/Δθ indicates the number of target directions within the wide beam opening angle, divided at the direction-of-arrival angular resolution Δθ.
In one possible embodiment, adjacent array elements in each line array are arranged at equal spacing d = λ/2, where λ is the operating wavelength, c = λ·f, c is the sound velocity in water and f is the operating frequency.
If a phase unwrapping algorithm is introduced, the adjacent line array pitch should be configured according to the phase unwrapping algorithm. The adjacent line arrays can be arranged at equal intervals d, phase ambiguity is avoided when the echo transient sparse recovery imaging method is applied, and meanwhile algorithm complexity is simplified, wherein d=λ/2.
In one possible embodiment, all array elements in each line array are electrically connected in parallel. That is, in the N-line array (N ≥ 2), provided the array elements are not electrically shorted, the electrical pins pin1 and pin2 of the elements are connected in parallel; this simplifies and controls the transmission of multiple array elements, satisfies the receiving array flow pattern requirement, and reduces the software and hardware resource requirements, as shown in fig. 7. Each line array has OUT+ and OUT− electrical interfaces.
In one possible embodiment, the transmit and receive beams of the acoustic transducer array on one broadside are essentially identical. For a conventional radar the transmit and receive beams are identical, while some special radar systems use different transmit and receive beams.
Taking a single broadside as an example, measurement along the main-lobe beam direction of the array can be selected: the wide beam opening angle θ_wide, perpendicular to the towfish motion direction, is large; the narrow beam opening angle θ_narrow, along the towfish motion direction, is small; and the acoustic transducer array is given a depression angle, so that the transmit and receive beams cover a large area and the slant range LSR is large. Signal-processing weighting techniques can be used to improve the single-side beam opening angle and directivity; details are omitted.
When determining the narrow beam opening angle θ_narrow, the towfish speed v and the underwater sound speed c are considered together so that, while the towfish moves, the transmitted signal and the echo signal remain within the coverage of the transmit and receive beams.
When LSR>>The transmitting signal 0 is sent out, and the echo signal generated by the farthest distance (the inclined distance LSR) from the sonar is generatedThe sonar is reached at the latest moment. At 0 to%>In time, sonar underwater movement distance +.>Ensure->Therefore, narrow beam opening angle +.>The relation among the towing speed v and the underwater sound speed c is shown in the formula (2). Thereby, the narrow single broadside beam opening angle can be effectively defined +>
tan(φ_AZ) ≥ 2·v / c        (2)
When LH/λ >> 1, where LH is the physical length of the array, the approximate relation among the narrow beam opening angle φ_AZ, the physical length LH and the operating wavelength λ is given in formula (3):
φ_AZ ≈ 0.88·λ / LH        (3)
Further, the number of array elements in each line array can be calculated as approximately LH/d. Thereby, the physical length LH of the acoustic transducer array and the number of array elements in each line array are effectively bounded.
Illustratively, assuming a towing speed v = 3 m/s (about 6 knots) and an underwater sound speed c = 1500 m/s, formula (2) shows that the chosen narrow beam opening angle supports a maximum towed-fish speed of v = 6.54 m/s (about 12.8 knots).
The acoustic transducer array operates at f = 450 kHz (λ = 3.34 mm) with an operating bandwidth B = 40 kHz; the port and starboard sides are each provided with N = 6 line arrays; adjacent array elements in each line array are spaced at d = 1.67 mm, and adjacent line arrays are also spaced at d = 1.67 mm; the narrow beam opening angle then follows from formula (3); the physical length LH of the acoustic transducer array is approximately 0.334 m, and the number of array elements in each line array is approximately LH/d = 200.
An example of the acoustic transducer array layout (port and starboard) is shown in fig. 8; N = 6, i.e. the acoustic transducer array on each side comprises six line arrays.
A schematic diagram of the transmit and receive beam coverage is shown in fig. 9.
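The numbers in the example above can be reproduced with a short script. This is a sketch under two assumptions: formula (2) is taken in the reconstructed form tan(φ_AZ) ≥ 2v/c, and formula (3) in the conventional line-array beamwidth approximation φ_AZ ≈ 0.88·λ/LH; the value φ_AZ = 0.5° is likewise an assumed illustrative choice consistent with the quoted 6.54 m/s.

```python
import math

c = 1500.0          # underwater sound speed, m/s
f = 450e3           # operating frequency, Hz
lam = c / f         # operating wavelength, ~3.33 mm
d = lam / 2.0       # element spacing d = lambda/2, ~1.67 mm

phi_az = math.radians(0.5)   # assumed narrow beam opening angle (illustrative)

# Formula (2), reconstructed form: tan(phi_az) >= 2*v/c  ->  v_max = c*tan(phi_az)/2
v_max = c * math.tan(phi_az) / 2.0          # ~6.54 m/s (about 12.8 knots)

# Formula (3), reconstructed form: phi_az ~ 0.88*lam/LH  ->  LH ~ 0.88*lam/phi_az
LH = 0.88 * lam / phi_az                    # ~0.33-0.34 m physical array length

n_elements = round(LH / d)                  # ~200 elements per line array

print(f"lambda = {lam*1e3:.2f} mm, d = {d*1e3:.2f} mm")
print(f"v_max = {v_max:.2f} m/s, LH = {LH:.3f} m, elements/line ~ {n_elements}")
```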
In one possible embodiment, in order to satisfy the requirement of simultaneous detection at a plurality of operating frequencies, as shown in fig. 10, the line arrays of each operating frequency are laid out as described in the above embodiments. As noted, the operating frequency of the acoustic transducer array is high, typically greater than 300 kHz and up to 1-2 MHz in some high-resolution scenes, so the spacing d between adjacent line arrays is small; because of the manufacturing requirements of the acoustic transducer, it can be difficult to insert the line arrays of another operating frequency into an acoustic transducer array of one operating frequency (assuming 3 line arrays, i.e. 1-1, 1-2, 1-3). When this is possible, the acoustic transducer array of the other operating frequency (assuming 3 line arrays, i.e. 2-1, 2-2, 2-3) is inserted, so that the layout (1-1, 2-1, 1-2, 2-2, 1-3, 2-3) is formed.
It should be noted that, if the manufacturing process of the acoustic transducer allows, another line array of a different frequency may be inserted into the line-array spacing d between every two line arrays of the same frequency, i.e. laid out in an overlapped manner, so as to miniaturize the transducer array. If the manufacturing process does not allow this, line arrays of different frequencies cannot be laid out in an overlapped manner.
Alternatively, simultaneous detection at multiple operating frequencies may be realized as follows: the line arrays of different operating frequencies form independent acoustic transducer array groups, and these groups are arranged in sequence along the stern-to-bow direction of the towed fish, forming a multi-frequency simultaneous-detection acoustic transducer array. Because the center positions of the acoustic transducer array groups are spatially offset from one another, a correction can be applied in the signal-processing or image-processing stage, which is not described in detail here. Multi-frequency simultaneous detection can thus be realized.
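The interleaved layout described above can be written down mechanically; the sketch below just reproduces the (1-1, 2-1, 1-2, 2-2, 1-3, 2-3) ordering for the 3 + 3 example, with the labels taken from the text:

```python
def interleave_layouts(group1, group2):
    """Interleave two equal-length line-array groups element by element,
    e.g. (1-1,1-2,1-3) and (2-1,2-2,2-3) -> (1-1,2-1,1-2,2-2,1-3,2-3)."""
    out = []
    for a, b in zip(group1, group2):
        out.extend([a, b])
    return out

f1_arrays = ["1-1", "1-2", "1-3"]   # line arrays of the first operating frequency
f2_arrays = ["2-1", "2-2", "2-3"]   # line arrays of the second operating frequency
print(interleave_layouts(f1_arrays, f2_arrays))
# ['1-1', '2-1', '1-2', '2-2', '1-3', '2-3']
```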
Transmit mode: the transmit-receive switching device delivers the transmit signal generated by the transmit signal generation module to the OUT+ and OUT- electrical interfaces of the acoustic transducer array with minimum loss. Receive mode: the transmit-receive switching device connects the OUT+ and OUT- electrical interfaces of the acoustic transducer array to the echo signal receiving and processing module. This is realized with conventional technology and is not described in detail. When the acoustic transducer array uses separate transmit and receive elements, this device can be omitted.
Regarding the transmit signal generation module.
The transmit signal generation module drives the line arrays to produce the desired acoustic power. Its functions include, but are not limited to: 1. generating the transmit-signal baseband; 2. converting the transmit signal from baseband to the operating frequency band; 3. power-amplifying the transmit signal in the operating band.
According to the number N of single-side line arrays, a transmit signal generation module may be configured for each line array, or the same transmit signal generation module may be shared by the N line arrays. This is realized with conventional technology and is not described in detail.
Illustratively, a single-frequency interferometric imaging sonar sparse recovery imaging system with operating frequency f transmits a signal F as shown in formula (4), where the amplitude of the transmitted signal F is U:
F(t) = U·sin(2π·f·t)        (4)
Wherein t is time.
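A minimal transmit-signal sketch, assuming formula (4) is the plain continuous-wave tone F(t) = U·sin(2πft) gated to a pulse of length T_p; the pulse length and sampling rate below are assumptions, since neither is specified here:

```python
import numpy as np

def cw_pulse(U: float, f: float, T_p: float, fs: float) -> np.ndarray:
    """Single-frequency transmit pulse F(t) = U*sin(2*pi*f*t) for 0 <= t < T_p."""
    n = int(round(T_p * fs))          # number of samples in the pulse
    t = np.arange(n) / fs
    return U * np.sin(2.0 * np.pi * f * t)

# Illustrative parameters: amplitude U = 1.0, operating frequency f = 450 kHz,
# pulse length 100 us, sampling rate 4 MHz (all assumed for the sketch).
tx = cw_pulse(U=1.0, f=450e3, T_p=100e-6, fs=4e6)
print(tx.shape)   # (400,)
```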
Regarding the echo signal receiving and processing module.
The functional circuits of the echo signal receiving and processing module include, but are not limited to: low-noise reception of the echo signal, at least one stage of amplitude amplification, at least one stage of analog band-pass filtering, time-varying gain compensation (TVG), and band-to-baseband conversion to form the echo baseband signal. These are realized with conventional technology and are not described in detail.
According to the preset snapshot time, the echo signal receiving and processing module samples and analog-to-digital converts the echo baseband signal to form the snapshot observation signal vector Y.
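How the module might slice the digitized echo baseband into the Q snapshot observation vectors is sketched below; the channel count, sampling rate, snapshot time and random data are placeholders, and the sketch deliberately omits the TVG and band-to-baseband steps already listed above:

```python
import numpy as np

def form_snapshot_vectors(baseband: np.ndarray, fs: float,
                          snapshot_time: float, Q: int) -> np.ndarray:
    """Take one complex sample per receive channel at each preset snapshot
    instant t_q = q * snapshot_time, giving Q observation vectors Y_q of length N."""
    idx = (np.arange(Q) * snapshot_time * fs).astype(int)
    return baseband[:, idx].T                     # shape (Q, N)

# Placeholder echo baseband: N = 6 channels, 0.1 s at fs = 200 kHz (assumed values).
rng = np.random.default_rng(0)
baseband = rng.standard_normal((6, 20000)) + 1j * rng.standard_normal((6, 20000))
Y = form_snapshot_vectors(baseband, fs=200e3, snapshot_time=1e-3, Q=100)
print(Y.shape)                                    # (100, 6)
```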
Regarding the towed-fish attitude measurement module.
The towed-fish attitude measurement module is configured to acquire six-degree-of-freedom attitude data of the towed fish in real time, in order to correct changes in the coverage of the transmit and receive beams caused by changes in the underwater motion attitude of the towed fish, which would otherwise degrade the three-dimensional interferometric image. An inertial navigation device, an acceleration and attitude measurement device, a MEMS system or the like may be selected; details are not repeated.
Regarding the communication timing control module.
The communication timing control module is used to establish communication with the dry end and to implement data and control-instruction transmission and timing control, including but not limited to over electrical and optical media.
Regarding the control unit.
The control unit is used for overall control, computation, data transfer, storage and communication of the system; an MCU, FPGA, DSP or other programmable logic device may be selected, combined with the mass storage devices and external circuits it requires.
Regarding the power supply module. The power supply module is used for power supply and conversion.
As shown in fig. 6, in addition to the structural portion, the dry end includes at least: 1. display control equipment; 2. a communication timing control module; 3. auxiliary sensors; 4. a power supply device.
Regarding the display control equipment.
The display control equipment comprises human-machine interaction and display devices, and is used to generate control instructions, configure parameters, and carry out computation and information processing; it further calculates, generates and displays the three-dimensional interferometric image from the auxiliary sensor information, the direction-of-arrival angles and their corresponding amplitudes. A portable computer or a server may be selected.
Regarding the communication timing control module.
The communication timing control module is used to establish communication with the wet end and to implement data and control-instruction transmission and timing control, including but not limited to over electrical and optical media.
Regarding the auxiliary sensors.
The auxiliary sensors acquire the other information required for generating and registering the three-dimensional interferometric image, and include, but are not limited to: a sound velocity meter, a position acquisition device (e.g. a GPS positioning system), etc.
Regarding the power supply equipment, which is used for power supply and conversion.
In summary, in the embodiments of the application, the recovery imaging system is divided into a wet end and a dry end, and the sparse direction-of-arrival solving device is placed at the wet end or the dry end according to its computing power. The design focuses on the acoustic transducer array, which satisfies the sparse recovery processing requirements while remaining simple to manufacture, improving the effectiveness of the recovery imaging system.
Based on the above-mentioned recovery imaging method shown in fig. 3, an embodiment of the present application further provides a recovery imaging apparatus, as shown in fig. 11, the apparatus 1100 may include:
a receiving module 1110, configured to receive, by using the acoustic transducer array, snapshot observation signal vectors corresponding to the Q snapshots respectively under the condition that the sonar radiates a transmission signal to the water bottom;
the resolving module 1120 is configured to respectively resolve the direction-of-arrival angle vector corresponding to each snapshot according to the pre-acquired receiving array flow pattern and the snapshot observation signal vector;
the determining module 1130 is configured to determine M target position coordinate points corresponding to each snapshot according to the direction-of-arrival angle vector corresponding to each snapshot;
a generating module 1140, configured to generate a three-dimensional interference image according to the G×Q×M target position coordinate points; in the process that the sonar moves in water, the sonar radiates the transmission signal G times to the water bottom, and Q, M and G are positive integers.
In one possible embodiment, the resolving module 1120 is specifically configured to:
according to the flow pattern of the receiving array, respectively calculating the amplitude vector corresponding to each snapshot observation signal vector;
and determining the direction-of-arrival angle vector according to the amplitude vector.
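The two steps above (an amplitude vector from the receive array flow pattern, then direction-of-arrival angles from the amplitudes) are not tied to one particular solver in this description. As a hedged illustration, the sketch below uses orthogonal matching pursuit, a generic sparse-recovery technique that merely stands in for whatever solver the sparse direction-of-arrival solving device actually implements; the array geometry and angle grid in the demo are assumptions.

```python
import numpy as np

def omp(A: np.ndarray, y: np.ndarray, k: int) -> np.ndarray:
    """Orthogonal matching pursuit (k >= 1): recover a k-sparse amplitude vector x
    from y ~ A @ x, where A is the N x M receive array flow pattern (manifold)."""
    residual = y.astype(complex)
    support = []
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(k):
        corr = np.abs(A.conj().T @ residual)
        if support:
            corr[support] = 0.0                  # do not reselect chosen directions
        support.append(int(np.argmax(corr)))
        As = A[:, support]
        coeffs, *_ = np.linalg.lstsq(As, y, rcond=None)
        residual = y - As @ coeffs
    x[support] = coeffs
    return x

# Tiny demo: N = 6 half-wavelength-spaced line arrays (assumed geometry and grid).
N, wavelength, d = 6, 3.33e-3, 1.67e-3
angles = np.arange(-30.0, 30.0, 0.5)             # candidate DOA grid, degrees
A = np.exp(2j * np.pi * d / wavelength
           * np.arange(N)[:, None] * np.sin(np.radians(angles)))   # N x M manifold

y = A[:, [40, 85]] @ np.array([1.0, 0.6])        # two on-grid echoes, no noise
x_hat = omp(A, y, k=2)                           # sparse amplitude vector
top = np.sort(np.argsort(np.abs(x_hat))[::-1][:2])
print("estimated DOA angles (deg):", angles[top], "amplitudes:", np.abs(x_hat[top]))
```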
In one possible embodiment, the determining module 1130 is specifically configured to:
Obtaining distance resolution and projection distance, wherein the projection distance is the distance between a sonar and a projection point of the sonar at the water bottom;
and respectively determining M target position coordinate points corresponding to each snapshot according to the distance resolution, the projection distance and the direction-of-arrival angle vector.
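A sketch of one plausible geometric convention for this step (assumed, since the text does not spell it out): the snapshot index fixes the slant range r = q·ΔR, the direction-of-arrival angle is measured from the downward vertical, and the projection distance H is the sonar height above the bottom; the numeric values are illustrative.

```python
import numpy as np

def snapshot_target_points(q: int, range_res: float, H: float,
                           doa_deg: np.ndarray, along_track_x: float) -> np.ndarray:
    """Map the M direction-of-arrival angles of snapshot q to 3-D points.
    Assumed convention: slant range r = q * range_res, DOA measured from the
    downward vertical, H = projection distance (sonar height above the bottom)."""
    r = q * range_res
    theta = np.radians(doa_deg)
    y = r * np.sin(theta)               # across-track offset
    z = H - r * np.cos(theta)           # height relative to the water bottom
    x = np.full_like(y, along_track_x)  # along-track position of this ping
    return np.column_stack([x, y, z])

# range_res ~ 0.019 m from the assumed relation c/(2B) with c = 1500 m/s, B = 40 kHz.
pts = snapshot_target_points(q=120, range_res=1500.0 / (2 * 40e3), H=2.0,
                             doa_deg=np.array([35.0, 42.5, 57.0]), along_track_x=3.3)
print(pts)
```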
In one possible embodiment, the apparatus 1100 may further include:
the first acquisition module is used for acquiring the slant range length and the distance resolution, wherein the slant range length is the farthest detection length of the sonar;
the first determining module is used for determining the preset snapshot time according to the slant range length and the distance resolution;
the receiving module 1110 is specifically configured to:
based on preset snapshot time, snapshot observation signal vectors corresponding to the Q snapshots are received through the acoustic transducer array.
In a possible embodiment, the first determining module is specifically configured to:
determining the number of snapshots according to the slant range length and the distance resolution;
and determining the preset snapshot time according to the number of snapshots and a preset time length, wherein the preset time length is the time between the moment the sonar radiates one transmission signal to the water bottom and the moment the acoustic transducer array receives the echo signal.
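The timing arithmetic of these two steps can be sketched as follows, assuming the distance resolution is c/(2B), the preset time length is the two-way travel time to the farthest slant range, and that this window is divided evenly among the Q snapshots; the 30 m slant range length is an assumed example value.

```python
import math

c = 1500.0                 # underwater sound speed, m/s
B = 40e3                   # operating bandwidth, Hz
LSR = 30.0                 # slant range length (farthest detection length), assumed

range_res = c / (2.0 * B)                  # distance resolution, ~0.019 m (assumed c/(2B))
Q = math.ceil(LSR / range_res)             # number of snapshots
echo_window = 2.0 * LSR / c                # preset time length: transmit -> farthest echo
snapshot_time = echo_window / Q            # preset snapshot time, ~2*range_res/c

print(f"range_res = {range_res*1e3:.1f} mm, Q = {Q}, "
      f"snapshot_time = {snapshot_time*1e6:.2f} us")
```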
In one possible embodiment, the apparatus 1100 may further include:
the second acquisition module is used for acquiring the number of single-side line arrays of the acoustic transducer array and the number of target directions within the wide beam opening angle;
And the second determining module is used for determining a receiving array flow pattern according to the number of arrays, the number of target directions and a preset mathematical model.
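The preset mathematical model is not reproduced here; a common choice, shown below purely as an assumption, is the plane-wave steering-vector model for N equally spaced line arrays, with one steering vector per candidate direction on a grid of angular resolution Δθ spanning the wide beam opening angle (the 60° opening angle in the demo is illustrative):

```python
import numpy as np

def receive_array_manifold(N: int, spacing: float, wavelength: float,
                           wide_beam_deg: float, delta_theta_deg: float) -> np.ndarray:
    """Build the N x M receive array flow pattern A: one plane-wave steering
    vector per candidate direction inside the wide beam opening angle."""
    angles = np.arange(-wide_beam_deg / 2.0, wide_beam_deg / 2.0, delta_theta_deg)
    n = np.arange(N)[:, None]                               # line-array index
    phase = 2.0 * np.pi * spacing / wavelength * n * np.sin(np.radians(angles))
    return np.exp(1j * phase)                               # shape (N, M)

# Example with the earlier single-side layout: N = 6 line arrays at d = lambda/2.
A = receive_array_manifold(N=6, spacing=1.67e-3, wavelength=3.34e-3,
                           wide_beam_deg=60.0, delta_theta_deg=0.5)
print(A.shape)      # (6, 120) -> M = 120 target directions
```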
In one possible embodiment, the generating module 1140 is specifically configured to:
generating G snapshot images according to the G×Q×M target position coordinate points, wherein each snapshot image is generated from Q×M target position coordinate points;
and generating a three-dimensional interference image according to the G snapshot images.
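A minimal sketch of this final assembly step: each of the G pings contributes a snapshot image of Q×M target points, the pings are stacked along track, and the resulting G·Q·M point cloud is what the three-dimensional interference image is rendered from. The geometry and the toy DOA generator below are the same assumptions used in the earlier sketches.

```python
import numpy as np

def build_point_cloud(G: int, Q: int, doa_per_snapshot, range_res: float,
                      H: float, along_track_step: float) -> np.ndarray:
    """Stack G snapshot images (each Q x M target points) into one point cloud.
    doa_per_snapshot(g, q) must return the M DOA angles (deg) of ping g, snapshot q."""
    points = []
    for g in range(G):
        x = g * along_track_step                    # along-track position of ping g
        for q in range(1, Q + 1):
            r = q * range_res                       # slant range of this snapshot
            theta = np.radians(doa_per_snapshot(g, q))
            y = r * np.sin(theta)
            z = H - r * np.cos(theta)
            points.append(np.column_stack([np.full_like(y, x), y, z]))
    return np.vstack(points)                        # shape (G*Q*M, 3)

# Toy DOA generator standing in for the sparse solver output (M = 3 angles).
cloud = build_point_cloud(G=4, Q=10, range_res=0.019, H=2.0, along_track_step=0.05,
                          doa_per_snapshot=lambda g, q: np.array([30.0, 45.0, 60.0]))
print(cloud.shape)      # (4*10*3, 3) = (120, 3)
```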
In summary, in the embodiments of the application, under the condition that the sonar radiates a transmitting signal to the water bottom, the acoustic transducer array receives the snapshot observation signal vectors respectively corresponding to the Q snapshots, and the direction-of-arrival angle vector corresponding to each snapshot is calculated from the pre-acquired receive array flow pattern and the snapshot observation signal vectors, so that the direction-of-arrival angle vector of each snapshot can be obtained quickly and accurately. From the direction-of-arrival angle vector of each snapshot, the M target position coordinate points of that snapshot are determined. Finally, as the sonar moves through the water it radiates the transmitting signal G times to the water bottom and correspondingly receives G echo signals, which are converted into snapshot observation signal vectors, so that the three-dimensional interference image can be generated quickly and accurately from the G×Q×M target position coordinate points, improving the imaging precision and resolution of the three-dimensional interference image.
Fig. 12 shows a schematic hardware structure of an electronic device according to an embodiment of the present application.
A processor 1201 may be included in an electronic device, as well as a memory 1202 in which computer program instructions are stored.
In particular, the processor 1201 may include a Central Processing Unit (CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or may be configured as one or more integrated circuits that implement embodiments of the present application.
Memory 1202 may include mass storage for data or instructions. By way of example, and not limitation, memory 1202 may include a Hard Disk Drive (HDD), floppy Disk Drive, flash memory, optical Disk, magneto-optical Disk, magnetic tape, or universal serial bus (Universal Serial Bus, USB) Drive, or a combination of two or more of the above. Memory 1202 may include removable or non-removable (or fixed) media where appropriate. Memory 1202 may be internal or external to the integrated gateway disaster recovery device, where appropriate. In a particular embodiment, the memory 1202 is a non-volatile solid-state memory. In particular embodiments, memory 1202 includes Read Only Memory (ROM). The ROM may be mask programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically Erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory, or a combination of two or more of these, where appropriate.
Processor 1201 implements the recovery imaging method of any of the embodiments shown in the figures by reading and executing the computer program instructions stored in memory 1202.
In one example, the electronic device may also include a communication interface 1203 and a bus 1210. As shown in fig. 12, the processor 1201, the memory 1202, and the communication interface 1203 are connected to each other via a bus 1210 and perform communication with each other.
The communication interface 1203 is mainly used for implementing communication among the modules, devices, units and/or apparatuses in the embodiment of the present application.
Bus 1210 includes hardware, software, or both, coupling components of an electronic device to each other. By way of example, and not limitation, the buses may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an infiniband interconnect, a Low Pin Count (LPC) bus, a memory bus, a micro channel architecture (MCa) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a video electronics standards association local (VLB) bus, or other suitable bus, or a combination of two or more of the above. Bus 1210 may include one or more buses, where appropriate. Although embodiments of the application have been described and illustrated with respect to a particular bus, the application contemplates any suitable bus or interconnect.
The electronic device may perform the recovery imaging method in the embodiment of the present application, thereby implementing the recovery imaging method described in connection with fig. 3.
In addition, in connection with the recovery imaging method in the above embodiments, an embodiment of the present application may provide a computer-readable storage medium. Computer program instructions are stored on the computer-readable storage medium; when executed by a processor, the computer program instructions implement the recovery imaging method described in connection with fig. 3.
It should be understood that the application is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and shown, and those skilled in the art can make various changes, modifications and additions, or change the order between steps, after appreciating the spirit of the present application.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this disclosure describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, or may be performed in a different order from the order in the embodiments, or several steps may be performed simultaneously.
In the foregoing, only the specific embodiments of the present application are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present application is not limited thereto, and any equivalent modifications or substitutions can be easily made by those skilled in the art within the technical scope of the present application, and they should be included in the scope of the present application.

Claims (12)

1. A recovery imaging method, the method comprising:
under the condition that the sonar radiates a transmitting signal to the water bottom, receiving snapshot observation signal vectors corresponding to the Q snapshots respectively through the acoustic transducer array;
According to a pre-acquired receiving array flow pattern and the snapshot observation signal vector, respectively calculating a direction-of-arrival angle vector corresponding to each snapshot;
according to the direction of arrival angle vector corresponding to each snapshot, M target position coordinate points corresponding to each snapshot are respectively determined;
generating a three-dimensional interference image according to the G×Q×M target position coordinate points; in the process that the sonar moves in water, the sonar radiates the transmitting signal G times to the water bottom, and Q, M and G are positive integers.
2. The method according to claim 1, wherein the calculating the corresponding direction-of-arrival angle vector of each snapshot from the pre-acquired receiving array flow pattern and the snapshot observation signal vector, respectively, comprises:
according to the receiving array flow pattern, respectively calculating amplitude vectors corresponding to each snapshot observation signal vector;
and determining the direction-of-arrival angle vector according to the amplitude vector.
3. The method according to claim 1, wherein determining M target position coordinate points corresponding to each snapshot according to the direction-of-arrival angle vector corresponding to each snapshot includes:
Obtaining a distance resolution and a projection distance, wherein the projection distance is the distance between the sonar and a projection point of the sonar at the water bottom;
and respectively determining M target position coordinate points corresponding to each snapshot according to the distance resolution, the projection distance and the direction-of-arrival angle vector.
4. The method of claim 1, wherein, in the case where the sonar radiates the transmitting signal to the water bottom, before the receiving, through the acoustic transducer array, of the snapshot observation signal vectors respectively corresponding to the Q snapshots, the method further comprises:
acquiring a slant range length and a distance resolution, wherein the slant range length is the farthest detection length of the sonar;
determining a preset snapshot time according to the slant range length and the distance resolution;
the receiving, by the acoustic transducer array, snapshot observation signal vectors corresponding to the Q snapshots respectively includes:
based on the preset snapshot time, snapshot observation signal vectors respectively corresponding to the Q snapshots are received through the acoustic transducer array.
5. The method of claim 4, wherein said determining a preset snapshot time according to said slant range length and said distance resolution comprises:
determining the number of snapshots according to the slant range length and the distance resolution;
and determining the preset snapshot time according to the number of snapshots and a preset time length, wherein the preset time length is the time between the moment the sonar radiates one transmitting signal to the water bottom and the moment the acoustic transducer array receives the echo signal.
6. The method of claim 1, wherein before the calculating the corresponding direction-of-arrival angle vector for each snapshot separately from the pre-acquired receive array flow pattern and the snapshot observation signal vector, the method further comprises:
acquiring the number of single-side line arrays of the acoustic transducer array and the number of target directions within the wide beam opening angle;
and determining the flow pattern of the receiving array according to the number of the arrays, the number of the target directions and a preset mathematical model.
7. The method of claim 1, wherein generating a three-dimensional interference image from G x Q x M target position coordinate points comprises:
generating G snapshot images according to the G×Q×M target position coordinate points, wherein each snapshot image is generated from Q×M target position coordinate points;
and generating the three-dimensional interference image according to the G snapshot images.
8. A recovery imaging system, the system comprising: a wet end, a dry end and a sparse direction-of-arrival solver, wherein,
the wet end comprises at least: an acoustic transducer array, a transmitting signal generating module, an echo signal receiving and processing module, a first communication time sequence control module and a power supply module;
wherein the acoustic transducer array is used for radiating a transmitting signal to the water bottom and receiving an echo signal;
the transmitting signal generating module is used for driving a line array in the acoustic transducer array to generate acoustic power;
the echo signal receiving and processing module is used for receiving echo signals corresponding to the Q snapshots respectively and converting the echo signals into snapshot observation signal vectors corresponding to the Q snapshots respectively;
the first communication time sequence control module is used for establishing communication connection with the dry end;
the power supply module is used for supplying power to the acoustic transducer array, the transmitting signal generating module, the echo signal receiving and processing module and the first communication time sequence control module;
the sparse direction-of-arrival solver is used for respectively calculating the direction-of-arrival angle vector corresponding to each snapshot according to a pre-acquired receiving array flow pattern and the snapshot observation signal vectors;
The dry end includes at least: the system comprises display control equipment, a second communication time sequence control module and power supply equipment;
the display control device is used for respectively determining the M target position coordinate points corresponding to each snapshot according to the direction-of-arrival angle vector corresponding to each snapshot, and for generating a three-dimensional interference image according to the G×Q×M target position coordinate points; in the process that the sonar moves in water, the sonar radiates the transmitting signal G times to the water bottom, Q, M and G are positive integers, and the sonar comprises the acoustic transducer array;
the second communication time sequence control module is used for establishing communication connection with the wet end;
and the power supply equipment is used for supplying power to the display control equipment and the second communication time sequence control module.
9. The system of claim 8, wherein the sparse direction-of-arrival solver is integrated at the wet end if the computing power of the sparse direction-of-arrival solver is a first computing power;
and the sparse direction-of-arrival solver is integrated at the dry end if the computing power of the sparse direction-of-arrival solver is a second computing power; wherein the first computing power is less than the second computing power.
10. A recovery imaging apparatus, the apparatus comprising:
the receiving module is used for receiving snapshot observation signal vectors corresponding to the Q snapshots respectively through the acoustic transducer array under the condition that the sonar radiates the emission signal to the water bottom;
the resolving module is used for respectively resolving the direction-of-arrival angle vector corresponding to each snapshot according to the pre-acquired receiving array flow pattern and the snapshot observation signal vectors;
the determining module is used for respectively determining M target position coordinate points corresponding to each snapshot according to the direction-of-arrival angle vector corresponding to each snapshot;
the generating module is used for generating a three-dimensional interference image according to the G×Q×M target position coordinate points; in the process that the sonar moves in water, the sonar radiates the emission signal G times to the water bottom, and Q, M and G are positive integers.
11. An electronic device, the device comprising: a processor and a memory storing computer program instructions; the processor, when executing the computer program instructions, implements the recovery imaging method according to any one of claims 1-7.
12. A computer-readable storage medium, having stored thereon computer program instructions which, when executed by a processor, implement the recovery imaging method according to any one of claims 1-7.
CN202310781024.4A 2023-06-29 2023-06-29 Recovery imaging method, device, system, electronic equipment and readable storage medium Active CN116500625B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310781024.4A CN116500625B (en) 2023-06-29 2023-06-29 Recovery imaging method, device, system, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310781024.4A CN116500625B (en) 2023-06-29 2023-06-29 Recovery imaging method, device, system, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN116500625A CN116500625A (en) 2023-07-28
CN116500625B true CN116500625B (en) 2023-10-20

Family

ID=87321740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310781024.4A Active CN116500625B (en) 2023-06-29 2023-06-29 Recovery imaging method, device, system, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116500625B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102176008A (en) * 2010-12-22 2011-09-07 中国船舶重工集团公司第七一五研究所 Phased azimuth filtering method for three-dimensional stratum imaging
CN102707258A (en) * 2012-06-05 2012-10-03 西安交通大学苏州研究院 Joint estimation method for azimuth angle and elevation angle of signal on basis of L-type sensor array
CN107037406A (en) * 2017-04-10 2017-08-11 南京理工大学 A kind of robust adaptive beamforming method
CN112119367A (en) * 2018-01-08 2020-12-22 脸谱科技有限责任公司 Method, apparatus and system for generating haptic stimulus and tracking user motion
CN108732549A (en) * 2018-05-21 2018-11-02 南京信息工程大学 A kind of array element defect MIMO radar DOA estimation method based on covariance matrix reconstruct
CN109471063A (en) * 2018-11-06 2019-03-15 江西师范大学 Concentrating rate high-resolution Wave arrival direction estimating method based on delay snap
CN109765562A (en) * 2018-12-10 2019-05-17 中国科学院声学研究所 A kind of three-dimensional looking forward sound sonar system and method
CN109696651A (en) * 2019-01-29 2019-04-30 电子科技大学 It is a kind of based on M estimation low number of snapshots under Wave arrival direction estimating method
CN110045323A (en) * 2019-03-14 2019-07-23 电子科技大学 A kind of relatively prime battle array robust adaptive beamforming algorithm based on matrix fill-in
CN114296087A (en) * 2021-12-13 2022-04-08 哈尔滨工程大学 On-line Bayes compression underwater imaging method, system, equipment and medium
CN115808659A (en) * 2022-12-19 2023-03-17 中国人民解放军火箭军工程大学 Robust beam forming method and system based on low-complexity uncertain set integration

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Audri Biswas et al., "Multiresolution Compressive Sensing algorithm to detect off-grid direction of arrival", 2016 10th International Conference on Signal Processing and Communication Systems (ICSPCS), pp. 1-6. *

Also Published As

Publication number Publication date
CN116500625A (en) 2023-07-28

Similar Documents

Publication Publication Date Title
EP3096159B1 (en) Sonar systems and methods using interferometry and beamforming for 3d imaging
US8767509B2 (en) Method and device for measuring a contour of the ground
CN106249224A (en) Multibeam forward looking sonar system and detection method
Hansen Synthetic aperture sonar technology review
US11846704B2 (en) Acoustic doppler system and method
US8456954B1 (en) Holographic navigation
JPH05249239A (en) Three-dimensional measurement and topography imaging sonar
WO2017158659A1 (en) Acoustic measurement device, acoustic measurement method, shaking component detection device, shaking component detection method, multi-beam acoustic measurement device, and synthetic aperture sonar
US11774587B2 (en) Multimission and multispectral sonar
WO2008105932A2 (en) System and method for forward looking sonar
US20180224544A1 (en) Forward scanning sonar system and method with angled fan beams
CN103226192B (en) Device and method for selecting signal, and radar apparatus
JP3515751B2 (en) Reconstruction method of three-dimensional submarine structure
JP5767002B2 (en) Ultrasonic transmission / reception device and fish quantity detection method
CA2928461A1 (en) Forward scanning sonar system and method with angled fan beams
EP3064958B1 (en) Systems and associated methods for producing a 3d sonar image
CN116500625B (en) Recovery imaging method, device, system, electronic equipment and readable storage medium
KR101331333B1 (en) Method and device for measuring a profile of the ground
EP3325997A1 (en) Forward scanning sonar system and method with angled fan beams
CA2993361A1 (en) Forward scanning sonar system and method with angled fan beams
JP3583908B2 (en) Target measuring device
JP6757083B2 (en) Echo sounder and multi-beam echo sounder
Olivieri Bio-inspired broadband SONAR technology for small UUVs
US20230043880A1 (en) Target velocity vector display system, and target velocity vector display method and program
CN116643282A (en) Towed line array synthetic aperture sonar system and detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant