CN112710988B - Tank armored vehicle sound vibration artificial intelligence detection positioning method - Google Patents
- Publication number
- CN112710988B (application CN202011629284.2A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S5/00—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
- G01S5/18—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
- G01S5/22—Position of source determined by co-ordinating a plurality of position lines defined by path-difference measurements
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses a tank armored vehicle sound and vibration artificial intelligence detection and positioning method, in the technical field of target detection and positioning. At least three groups of acoustic cross arrays are arranged above the ground to capture and detect the airborne sound produced by a running tank armored vehicle, and at least three groups of vibration sensors are arranged on or under the ground to capture and detect the vibration it produces. An RBF neural network is trained on the detection outputs of the acoustic cross arrays and of the vibration sensors, using measured inputs and expected outputs, and a first and a second target position are obtained from the training results. A weighting function then combines the first and second target positions into the final position of the tank armored vehicle. The method requires few array elements, has a simple form, allows convenient and rapid array deployment, and provides omnidirectional search in both the plane and in space.
Description
Technical Field
The invention relates to the technical field of target detection and positioning, in particular to a tank armored vehicle sound vibration artificial intelligence detection and positioning method.
Background
In the complex battlefield environment, various noises such as gunshots, artillery fire, and helicopter sound mix together, strongly interfering with acoustic positioning and measurement technology; positioning accuracy and identification accuracy drop sharply, and the battlefield environment does not permit the use of complex array detectors.
At present, target positioning methods mainly comprise beamforming-based positioning, positioning based on time difference of arrival, and positioning based on high-resolution estimation. In the complex battlefield environment, the accuracy of any traditional single positioning method is difficult to guarantee, and positioning may even fail.
Therefore, in order to better suppress the interference of different battlefield noises with target positioning accuracy, the traditional positioning methods need to be improved based on the characteristics of a moving armored vehicle.
Disclosure of Invention
Therefore, in order to overcome the defects, the embodiment of the invention provides a tank armored vehicle sound vibration artificial intelligence detection and positioning method.
Therefore, the tank armored vehicle sound vibration artificial intelligence detection positioning method provided by the embodiment of the invention comprises the following steps:
arranging at least three groups of sound cross arrays above the ground, and capturing and detecting sound in air formed by the running of the tank armored vehicle;
arranging at least three groups of vibration sensors on the ground or underground, and capturing and detecting vibration formed by the running of the tank armored vehicle;
respectively obtaining the target direction angles measured by the acoustic cross arrays, taking the angles as the input of an RBF neural network, taking the determinant factors obtained from the angles as its expected output, training the network with this input and expected output, inputting the current angles into the trained network to obtain the current determinant factors, and calculating a first target position from the current determinant factors; meanwhile, respectively obtaining the times at which the vibration sensors measure the vibration, taking the times as the input of the RBF neural network, taking the propagation-speed mean values obtained from the times as its expected output, training the network with this input and expected output, inputting the current times into the trained network to obtain the current propagation-speed mean values, and calculating a second target position from the current propagation-speed mean values;
and performing weighted calculation on the first target position and the second target position by using a weighting function to obtain the final position of the tank armored vehicle.
Preferably, the number of groups of acoustic cross arrays is three: a first group of acoustic cross arrays S1, a second group S2, and a third group S3.
Preferably, each group of acoustic cross arrays comprises a first acoustic sensor s1, a second acoustic sensor s2, a third acoustic sensor s3, and a fourth acoustic sensor s4. The sensors are uniformly distributed around the center, lie in the same plane, and are arranged in a cross, with equal distances from the cross center to each sensor.
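As a hedged illustration of this geometry (the function name, center coordinates, and axis alignment are assumptions, not taken from the patent):

```python
def cross_array_positions(cx, cy, r):
    """Sensor positions s1..s4 of one acoustic cross array: one sensor
    on each half-axis at distance r from the center (cx, cy), all four
    in the same plane and arranged in a cross."""
    return [
        (cx + r, cy),  # s1
        (cx, cy + r),  # s2
        (cx - r, cy),  # s3
        (cx, cy - r),  # s4
    ]
```

Every sensor is equidistant from the center, matching the equal-distance condition stated above.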
Preferably, the step of obtaining the target direction angles measured by the acoustic cross arrays, taking the angles as the input of the RBF neural network, taking the determinant factors obtained from the angles as its expected output, training the network with this input and expected output, inputting the current angles into the trained network to obtain the current determinant factors, and calculating the first target position from them comprises:
separately acquiring the target direction angles θ1, θ2, and θ3 measured by the first, second, and third groups of acoustic cross arrays S1, S2, and S3; taking θ1, θ2, and θ3 as the input of the RBF neural network and the determinant factors D1, D2, and D3 obtained from them as its expected output; training the network with this input and expected output; inputting the current θ1, θ2, and θ3 into the trained network to obtain the current D1, D2, and D3; and calculating the first target position from the current D1, D2, and D3.
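The patent does not specify the RBF network's internals. As an illustrative sketch only, a minimal Gaussian RBF network, with centres fixed at the training inputs and output weights solved by least squares, could map (θ1, θ2, θ3) to (D1, D2, D3); the class name and the sigma parameter are assumptions:

```python
import numpy as np

class RBFNet:
    """Minimal Gaussian RBF network: centres fixed at the training
    inputs, output weights solved by linear least squares."""

    def __init__(self, sigma=1.0):
        self.sigma = sigma

    def _phi(self, X):
        # Gaussian activations: pairwise squared distances to the centres
        d2 = ((X[:, None, :] - self.centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def fit(self, X, Y):
        self.centres = np.asarray(X, float)
        Phi = self._phi(self.centres)
        self.W, *_ = np.linalg.lstsq(Phi, np.asarray(Y, float), rcond=None)
        return self

    def predict(self, X):
        return self._phi(np.asarray(X, float)) @ self.W

# Train on measured (θ1, θ2, θ3) → (D1, D2, D3) pairs, then feed the
# current direction angles to obtain the current determinant factors.
```

With centres at the training points, the Gaussian kernel matrix is positive definite for distinct inputs, so the network interpolates the training data exactly.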
Preferably, the number of groups of vibration sensors is three, comprising a first vibration sensor M1, a second vibration sensor M2, and a third vibration sensor M3, distributed in a triangle.
Preferably, the step of respectively obtaining the times at which the vibration sensors measure the vibration, taking the times as the input of the RBF neural network, taking the propagation-speed mean values obtained from the times as its expected output, training the network with this input and expected output, inputting the current times into the trained network to obtain the current propagation-speed mean values, and calculating the second target position from them comprises:
respectively acquiring the first time t1, second time t2, and third time t3 at which the first, second, and third vibration sensors M1, M2, and M3 measure the vibration; taking t1, t2, and t3 as the input of the RBF neural network and the propagation-speed mean values v̄1, v̄2, and v̄3 obtained from the times as its expected output; training the network with this input and expected output; inputting the current t1, t2, and t3 into the trained network to obtain the current v̄1, v̄2, and v̄3; and calculating the second target position from the current v̄1, v̄2, and v̄3.
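The patent derives the propagation-speed mean values from the measured times. A minimal stand-in for that mapping, assuming the sensor-to-source path lengths are known (for example from a calibration shot, an assumption not stated in the patent), is:

```python
def mean_velocities(times, distances):
    """Mean propagation speed along each sensor's path: distance / time.
    `times` are the measured arrival times t1..t3 and `distances` the
    assumed-known path lengths; illustrative only, not the patent's
    exact derivation of the velocity means."""
    return [d / t for d, t in zip(distances, times)]
```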
Preferably, the weighting function adopts a terrain weighting method, and the final position of the tank armored vehicle is calculated as:

hard-ground terrain: z = 0.63·zcs + 0.37·zps,

soft-ground terrain: z = 0.37·zcs + 0.63·zps,

where zps is the first target position and zcs is the second target position.
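In code, the terrain-weighted fusion above amounts to the following (the positions may be scalars or NumPy coordinate arrays; the function and parameter names are illustrative):

```python
def fuse_positions(z_ps, z_cs, terrain):
    """Terrain-weighted fusion of the sound-based estimate z_ps and the
    vibration-based estimate z_cs, with the patent's coefficients."""
    if terrain == "hard":    # towns, mountains: favour vibration
        w_cs, w_ps = 0.63, 0.37
    elif terrain == "soft":  # plains, deserts: favour sound
        w_cs, w_ps = 0.37, 0.63
    else:
        raise ValueError("terrain must be 'hard' or 'soft'")
    return w_cs * z_cs + w_ps * z_ps
```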
The technical scheme of the embodiment of the invention has the following advantages:
In the tank armored vehicle sound and vibration artificial intelligence detection and positioning method of this embodiment, the acoustic cross arrays and the vibration sensor array require few array elements, take a simple form, are very convenient and fast to deploy, and provide omnidirectional search in the plane and in space. Acoustic and vibration detection and positioning are combined, with the acoustic and vibration sensors working and measuring simultaneously, which improves detection and positioning accuracy. The RBF neural network computes the first and second target positions, and the weighted calculation fuses them, realizing combined sound-vibration, artificial-intelligence detection and positioning of the tank armored vehicle, suppressing the interference of gunfire with acoustic detection and positioning, and improving the detection and positioning accuracy for the tank armored vehicle.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic block diagram showing a specific example of a tank armored vehicle sound and vibration artificial intelligence detection and positioning system in embodiment 1 of the invention;
FIG. 2 is a flowchart showing a specific example of a tank armored vehicle sound-vibration artificial intelligence detection positioning method in embodiment 2 of the present invention;
FIG. 3 is a schematic diagram of the layout of the acoustic cross array;
FIG. 4 is a schematic layout of three acoustic cross arrays;
fig. 5 is a schematic arrangement diagram of three sets of vibration sensors.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
In describing the present invention, it is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The term "and/or" includes any and all combinations of one or more of the associated listed items. The terms "center" and the like indicate an orientation or positional relationship based on that shown in the drawings, which is merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the referred device or element must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. The terms "first," "second," "third," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The terms "mounted," "connected," and "coupled" are to be construed broadly and may, for example, be fixedly coupled, detachably coupled, or integrally coupled; can be mechanically or electrically connected; the two elements may be directly connected or indirectly connected through an intermediate medium, or may be communicated with each other inside the two elements, or may be wirelessly connected or wired connected. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
While the exemplary embodiments are described as performing an exemplary process using multiple units, it is understood that the exemplary process can also be performed by one or more modules. In addition, it is to be understood that the term controller/control unit refers to a hardware device that includes a memory and a processor. The memory is configured as a memory module and the processor is specifically configured to execute the processes stored in the memory module to thereby execute one or more processes.
Moreover, certain drawings in this specification are flow charts illustrating methods and systems. It will be understood that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be loaded onto a computer or other programmable apparatus to produce a machine, such that the instructions which execute on the computer or other programmable apparatus create means for implementing the functions specified in the flowchart block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.
Accordingly, blocks of the flowchart illustrations support combinations of means for performing the specified functions and combinations of steps for performing the specified functions. It will also be understood that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by special purpose hardware-based computer systems which perform the specified functions or steps, or combinations of special purpose hardware and computer instructions.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1
This embodiment provides a tank armored vehicle sound and vibration artificial intelligence detection and positioning system, as shown in FIG. 1, comprising: at least three groups of acoustic cross arrays 1, at least three groups of vibration sensors 2, an artificial intelligence positioning unit 3, and a signal processing unit 4;
the artificial intelligence positioning unit 3 is connected to the at least three groups of acoustic cross arrays 1 and the at least three groups of vibration sensors 2. It obtains the target direction angles measured by the acoustic cross arrays, takes the angles as the input of an RBF neural network and the determinant factors obtained from the angles as its expected output, trains the network with this input and expected output, inputs the current angles into the trained network to obtain the current determinant factors, and calculates and outputs a first target position from them. It likewise obtains the vibration times measured by the vibration sensors, takes the times as the input of the RBF neural network and the propagation-speed mean values obtained from the times as its expected output, trains the network with this input and expected output, inputs the current times into the trained network to obtain the current propagation-speed mean values, and calculates and outputs a second target position from them;
and the signal processing unit 4 is connected with the artificial intelligence positioning unit 3 and is used for performing weighting calculation on the first target position and the second target position by using a weighting function to obtain the final position of the tank armored vehicle.
Preferably, the number of groups of acoustic cross arrays is three, a first group S1, a second group S2, and a third group S3, uniformly arranged at a height of about 2 meters above the ground and lying in the same plane.
Preferably, as shown in FIG. 3, each group of acoustic cross arrays comprises a first acoustic sensor s1, a second acoustic sensor s2, a third acoustic sensor s3, and a fourth acoustic sensor s4, forming a four-element cross array. The elements are uniformly distributed around the center, lie in the same plane, and are arranged in a cross, with distance r from the cross center to each sensor. As a special form of circular array, the cross array provides omnidirectional planar and spatial search, requires few elements, has a simple form, and is very convenient to deploy.
Preferably, the number of groups of vibration sensors is three, comprising a first vibration sensor M1, a second vibration sensor M2, and a third vibration sensor M3, distributed in a triangle to form a triangular vibration array.
Preferably, the artificial intelligence positioning unit is configured to separately acquire the target direction angles θ1, θ2, and θ3 measured by the first, second, and third groups of acoustic cross arrays S1, S2, and S3, take θ1, θ2, and θ3 as the input of the RBF neural network and the determinant factors D1, D2, and D3 obtained from them as its expected output, train the network with this input and expected output, input the current θ1, θ2, and θ3 into the trained network to obtain the current D1, D2, and D3, and calculate the first target position from the current D1, D2, and D3.
Preferably, the artificial intelligence positioning unit is further configured to separately acquire the first time t1, second time t2, and third time t3 at which the first, second, and third vibration sensors M1, M2, and M3 measure the vibration, take t1, t2, and t3 as the input of the RBF neural network and the propagation-speed mean values v̄1, v̄2, and v̄3 obtained from the times as its expected output, train the network with this input and expected output, input the current t1, t2, and t3 into the trained network to obtain the current v̄1, v̄2, and v̄3, and calculate the second target position from the current v̄1, v̄2, and v̄3.
Preferably, the weighting function adopts a terrain weighting method: on hard-ground terrain such as towns and mountains the weights favor the vibration positioning result, while on soft, unobstructed terrain such as plains and deserts they favor the acoustic positioning result. For example, the final position of the tank armored vehicle is calculated as:
hard-ground terrain: z = 0.63·zcs + 0.37·zps,

soft-ground terrain: z = 0.37·zcs + 0.63·zps,

where zps is the first target position and zcs is the second target position.
Preferably, the tank armored vehicle sound and vibration artificial intelligence detection positioning system further comprises: and the wireless transceiving unit 5 is connected with the signal processing unit 4 and is used for transmitting data between the signal processing unit 4 and a remote computer terminal.
The above tank armored vehicle sound and vibration artificial intelligence detection and positioning system, through its acoustic cross arrays and vibration sensor array, requires few array elements, has a simple form, is very convenient and fast to deploy, and provides omnidirectional search in the plane and in space. It combines acoustic and vibration detection and positioning, computes the first and second target positions with the RBF neural network, and fuses them by weighted calculation, realizing combined sound-vibration, artificial-intelligence detection and positioning of the tank armored vehicle, suppressing the interference of gunfire with acoustic detection and positioning, and improving the detection and positioning accuracy for the tank armored vehicle.
Example 2
The embodiment provides a sound vibration artificial intelligence detection and positioning method for a tank armored vehicle, which can be applied to the sound vibration artificial intelligence detection and positioning system for the tank armored vehicle in embodiment 1, and as shown in fig. 2, the method comprises the following steps:
S1, arranging at least three groups of acoustic cross arrays above the ground to capture and detect the airborne sound produced by the running tank armored vehicle. Preferably, there are three groups, a first S1, a second S2, and a third S3, uniformly arranged at a height of about 2 meters above the ground and lying in the same plane. As shown in FIG. 3, each group comprises a first acoustic sensor s1, a second s2, a third s3, and a fourth s4, forming a four-element cross array; the elements are uniformly distributed around the center, lie in the same plane, and are arranged in a cross, with distance r from the cross center to each sensor. As a special form of circular array, the cross array provides omnidirectional planar and spatial search, requires few elements, has a simple form, and is very convenient to deploy.
S2, arranging at least three groups of vibration sensors on or under the ground to capture and detect the vibration produced by the running tank armored vehicle. Preferably, there are three sensors, a first M1, a second M2, and a third M3, distributed in a triangle to form a triangular vibration array.
S3, respectively obtaining the target direction angles measured by the acoustic cross arrays, taking the angles as the input of an RBF neural network, taking the determinant factors obtained from the angles as its expected output, training the network with this input and expected output, inputting the current angles into the trained network to obtain the current determinant factors, and calculating a first target position from the current determinant factors; meanwhile, respectively obtaining the times at which the vibration sensors measure the vibration, taking the times as the input of the RBF neural network, taking the propagation-speed mean values obtained from the times as its expected output, training the network with this input and expected output, inputting the current times into the trained network to obtain the current propagation-speed mean values, and calculating a second target position from the current propagation-speed mean values;
Preferably, the target direction angles θ1, θ2, and θ3 measured by the first, second, and third groups of acoustic cross arrays S1, S2, and S3 are acquired separately; θ1, θ2, and θ3 are taken as the input of the RBF neural network, and the determinant factors D1, D2, and D3 obtained from them as its expected output; the network is trained with this input and expected output; the current θ1, θ2, and θ3 are input into the trained network to obtain the current D1, D2, and D3; and the first target position is calculated from the current D1, D2, and D3;
Preferably, obtaining the determinant factors D1, D2, and D3 from the target direction angles θ1, θ2, and θ3 comprises the following steps:
Assume that the tank armored vehicle target P and the three groups of acoustic cross arrays lie in the same plane. Because the spacing between arrays is much larger than the element spacing within an array, each group can be treated as a point. As shown in FIG. 4, in the planar rectangular coordinate system xoy, the positions of the first, second, and third groups S1, S2, and S3 are (x1, y1), (x2, y2), and (x3, y3) respectively. Draw a first positioning line L1 through S1, a second positioning line L2 through S2, and a third positioning line L3 through S3; the positioning-line equation is:
x·sinθi - y·cosθi = ci,

where ci = xi·sinθi - yi·cosθi, i = 1, 2, 3;
The first positioning line L1 and the second positioning line L2 intersect at point C (xC, yC), the first L1 and third L3 intersect at point B (xB, yB), and the second L2 and third L3 intersect at point A (xA, yA); the intersection coordinates follow from the positioning-line equations.
the coordinates of the midpoints D, E, F of the line segments BC, AC, AB are (x)D,yD),(xE,yE),(xF,yF),
The distances from the target to the first, second, and third groups of acoustic cross arrays S1, S2, and S3 are then obtained as r1 = |S1D|, r2 = |S2E|, and r3 = |S3F| respectively;
With the three positioning-line intersections (xA, yA), (xB, yB) and (xC, yC) known, the position (x0, y0) of the target P can be obtained in terms of the determinant factors D1, D2 and D3, which establishes the relationship between the target direction angles θ1, θ2, θ3 and the determinant factors D1, D2, D3.
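The pairwise intersections A, B and C above follow directly from the positioning-line equation x sin θi − y cos θi = ci by Cramer's rule. The following is a minimal sketch of that intersection step only (the function names are our own, and the patent's explicit determinant-factor formula for (x0, y0), omitted in this text, is not reproduced):

```python
import math

def line_const(x, y, theta):
    # c_i = x_i*sin(theta_i) - y_i*cos(theta_i) for the positioning line
    # x*sin(theta) - y*cos(theta) = c passing through sensor (x, y).
    return x * math.sin(theta) - y * math.cos(theta)

def intersect(p1, th1, p2, th2):
    # Intersection of two positioning lines, solved by Cramer's rule.
    c1 = line_const(p1[0], p1[1], th1)
    c2 = line_const(p2[0], p2[1], th2)
    det = math.sin(th2 - th1)  # 2x2 determinant of the line system
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique intersection")
    x = (c2 * math.cos(th1) - c1 * math.cos(th2)) / det
    y = (c2 * math.sin(th1) - c1 * math.sin(th2)) / det
    return x, y
```

The 2×2 determinant sin(θj − θi) vanishes when two bearings are parallel, in which case that sensor pair contributes no intersection point.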
The acoustic cross array shown in FIG. 3 uses beamforming to measure the position of the tank armored vehicle target. The minimum distance between the first acoustic sensor s1, the second acoustic sensor s2, the third acoustic sensor s3 and the fourth acoustic sensor s4 of the acoustic cross array is less than half the minimum wavelength of the signal.
Beamforming is an important method in array signal processing and a principal means of spatial spectrum analysis; here it is used mainly to determine the bearing of a target from the base array.
Assume that at a certain time t the target direction angle at the center point O is θ and that the sound signal generated by the target arrives at the center point O as f(t). The signals received by the first acoustic sensor s1, the second acoustic sensor s2, the third acoustic sensor s3 and the fourth acoustic sensor s4 are then f[t − τ1], f[t − τ2], f[t − τ3] and f[t − τ4], where τi is the time difference of the received signal between the center point O and the i-th acoustic sensor, i = 1, 2, 3, 4;
In the field of acoustic signal processing, the signals encountered are often narrowband real signals, containing an envelope and phase-modulation term and one or more high-frequency carrier terms. For a single-frequency-modulated source signal:
F(t) = u(t)cos[ω0t + v(t)],
the sound source signal is represented by the analytic form Fa(t) of F(t):
Fa(t) = u(t)exp{j[ω0t + v(t)]}.
The signal received by the i-th array element is si(t) = Fa(t − τi) + ni(t), where ni(t) is the noise received by the i-th array element;
the target direction angle is obtained from the signals received by the first acoustic sensor s1, the second acoustic sensor s2, the third acoustic sensor s3 and the fourth acoustic sensor s4.
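A delay-and-sum beamformer over the four cross-array elements can estimate θ by scanning candidate bearings and picking the one that maximizes the aligned output power. The sketch below is a hedged illustration under a far-field plane-wave assumption with integer-sample alignment; the function names and the scanning scheme are our own, not the patent's exact procedure:

```python
import math

def steering_delays(positions, theta, c=343.0):
    # Far-field model: relative delay of each element with respect to the
    # array center O for a plane wave arriving from direction angle theta.
    return [-(x * math.cos(theta) + y * math.sin(theta)) / c
            for x, y in positions]

def beamform_bearing(positions, signals, fs, c=343.0, n_steps=360):
    # Delay-and-sum: scan candidate bearings, align the sampled signals
    # by integer-sample shifts, pick the bearing of maximum output power.
    best_theta, best_power = 0.0, -1.0
    n = len(signals[0])
    for k in range(n_steps):
        theta = 2.0 * math.pi * k / n_steps
        shifts = [round(t * fs) for t in steering_delays(positions, theta, c)]
        power = 0.0
        for i in range(n):
            s = 0.0
            for sig, sh in zip(signals, shifts):
                j = i + sh
                if 0 <= j < n:
                    s += sig[j]
            power += s * s
        if power > best_power:
            best_theta, best_power = theta, power
    return best_theta
```

With one-degree scan steps the estimate is quantized; a finer grid or interpolation around the peak improves resolution.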
Preferably, the first time t1, the second time t2 and the third time t3 at which the first vibration sensor M1, the second vibration sensor M2 and the third vibration sensor M3 measure the vibration are acquired separately. The times t1, t2 and t3 are used as the input of the RBF neural network, and the propagation velocity means v̄1, v̄2 and v̄3 obtained from t1, t2 and t3 are used as the expected output of the RBF neural network. The RBF neural network is trained with this input and expected output; the current times t1, t2 and t3 are then input into the trained RBF neural network to obtain the current propagation velocity means v̄1, v̄2 and v̄3, from which the second target position is calculated;
preferably, obtaining the propagation velocity means v̄1, v̄2 and v̄3 from the first time t1, the second time t2 and the third time t3 comprises the following steps:
As shown in FIG. 5, in the rectangular coordinate system xoy, S denotes the point vibration source (the target) and M1, M2, M3 denote the first, second and third vibration sensors respectively. The vibration propagates in circles centered on the point vibration source, and its propagation velocity is not constant because of the geological conditions.
The coordinates of the three vibration sensor array elements are M1(0, 0), M2(d, 0) and M3(0, d). The difference between the distances from the target to array element i and to array element j follows from the measured arrival times, where 1 ≤ i < j ≤ 3 and v̄i is the mean propagation velocity from the target to array element i;
this yields two locus lines l1: y = ax + b and l2: y = nx + m. Solving them simultaneously gives the target point coordinates, and the resulting value of y yields the relationship between the times t1, t2, t3 and the propagation velocity means v̄1, v̄2 and v̄3.
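Once the two locus lines l1: y = ax + b and l2: y = nx + m are known, the target point is simply their intersection. A minimal sketch (the coefficient names follow the text; how a, b, n, m are derived from t1, t2, t3 is omitted in this excerpt and is not reproduced here):

```python
def locate_from_lines(a, b, n, m):
    # Intersection of l1: y = a*x + b and l2: y = n*x + m
    # gives the target point (x0, y0).
    if a == n:
        raise ValueError("lines are parallel; no unique target point")
    x0 = (m - b) / (a - n)
    y0 = a * x0 + b
    return x0, y0
```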
Preferably, the RBF neural network is trained using the input and the expected output as follows:
Define X = (x1, x2, …, xn)T as the network input vector, Y = (y1, y2, …, ys)T as the network output and φi(·) as the radial basis function of the i-th hidden-layer node. The output function of the RBF neural network is
y = Σ(i=1..m) wi φ(‖x − ci‖),
where m is the number of hidden-layer neuron nodes, i.e. the number of radial basis function centers, and the coefficients wi are the connection weights. Here φ(·) is a radial basis function, typically the Gaussian φ(‖x − ci‖) = exp(−‖x − ci‖²/(2ξi²)); ‖x − ci‖ is the Euclidean norm, ci is the i-th center of the RBF and ξi is the i-th width of the RBF, from which the network output is obtained.
Thus the matrix expression of the RBF network can be written as
D = HW + E,
where the expected output vector is D = (d1, d2, …, dp)T, the error vector between the expected output and the network output is E = (e1, e2, …, ep)T, the weight vector is W = (w1, w2, …, wm)T and the regression matrix is H = (h1, h2, …, hm)T;
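The matrix model D = HW + E can be solved for the weight vector W in the least-squares sense via the normal equations (HᵀH)W = HᵀD. The following pure-Python sketch illustrates the matrix form only; it is not the patent's iterative gradient update, and the function name is our own:

```python
def solve_weights(H, D):
    # Least-squares weights W for the model D ≈ H W via the normal
    # equations (H^T H) W = H^T D, solved by Gaussian elimination
    # with partial pivoting.
    m = len(H[0])
    A = [[sum(H[r][i] * H[r][j] for r in range(len(H))) for j in range(m)]
         for i in range(m)]
    b = [sum(H[r][i] * D[r] for r in range(len(H))) for i in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for j in range(col, m):
                A[r][j] -= f * A[col][j]
            b[r] -= f * b[col]
    W = [0.0] * m
    for i in reversed(range(m)):
        W[i] = (b[i] - sum(A[i][j] * W[j] for j in range(i + 1, m))) / A[i][i]
    return W
```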
Taking the influence of all training samples into account, the adjustment amounts of ci, ξi and wi are obtained by gradient descent,
where φi(xj) is the output of the i-th hidden node for input xj and η1, η2, η3 are the corresponding learning rates; ci(t) and ci(t + 1) denote ci at the t-th and (t + 1)-th iterations, ξi(t) and ξi(t + 1) denote ξi at those iterations, and wi(t) and wi(t + 1) denote wi at those iterations. The mean square error obtained from the cost function E serves as the stopping condition: when the mean square error between the actual output and the expected output is less than the set threshold, the network is considered trained.
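The gradient updates for ci, ξi and wi described above can be sketched for a one-dimensional input as follows. This is a minimal illustration with Gaussian basis functions; the class name, initialization and width clamp are our own choices, with learning rates η1, η2, η3 as in the text:

```python
import math, random

def gaussian(x, c, xi):
    # Gaussian radial basis function with center c and width xi.
    return math.exp(-((x - c) ** 2) / (2 * xi ** 2))

class RBFNet:
    # Minimal 1-D RBF network trained by stochastic gradient descent on
    # the squared error; eta = (eta1, eta2, eta3) are the learning rates
    # for the centers c_i, widths xi_i and weights w_i respectively.
    def __init__(self, m, eta=(0.02, 0.02, 0.1), seed=0):
        rng = random.Random(seed)
        self.c = [rng.uniform(0.0, 1.0) for _ in range(m)]
        self.xi = [0.5] * m
        self.w = [rng.uniform(-0.5, 0.5) for _ in range(m)]
        self.eta1, self.eta2, self.eta3 = eta

    def forward(self, x):
        self.h = [gaussian(x, c, xi) for c, xi in zip(self.c, self.xi)]
        return sum(w * h for w, h in zip(self.w, self.h))

    def train_step(self, x, d):
        # One gradient step on E = (d - y)^2 / 2 for a single sample.
        y = self.forward(x)
        e = d - y
        for i in range(len(self.w)):
            h = self.h[i]
            self.c[i] += self.eta1 * e * self.w[i] * h * (x - self.c[i]) / self.xi[i] ** 2
            self.xi[i] += self.eta2 * e * self.w[i] * h * (x - self.c[i]) ** 2 / self.xi[i] ** 3
            self.xi[i] = max(self.xi[i], 1e-2)  # keep widths positive
            self.w[i] += self.eta3 * e * h
        return e * e
```

Training stops, as in the text, once the mean square error over the training set falls below a chosen threshold.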
And S4, performing a weighted calculation on the first target position and the second target position using a weighting function to obtain the final position of the tank armored vehicle.
Preferably, the weighting function adopts a terrain weighting method: on hard-ground terrain such as towns and mountains the weighting factors favor the vibration positioning result, while on soft-ground unobstructed terrain such as plains and deserts they favor the acoustic positioning result. For example, the final position of the tank armored vehicle is calculated as:
hard ground topography: z is 0.63Zcs+0.37zps,
Soft ground topography: z is 0.37Zcs+0.63zps,
Wherein z ispsIs a first target position, zcsIs the second target position.
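The terrain-weighted fusion can be sketched as follows (assuming each position estimate is a 2-D point; the 0.63/0.37 coefficients are the example values given in the text, and the function name is our own):

```python
def fuse_positions(z_cs, z_ps, terrain):
    # Terrain-weighted fusion of the vibration estimate z_cs and the
    # acoustic estimate z_ps; hard ground favors vibration, soft ground
    # favors acoustics.
    weights = {"hard": (0.63, 0.37), "soft": (0.37, 0.63)}
    a, b = weights[terrain]
    return (a * z_cs[0] + b * z_ps[0], a * z_cs[1] + b * z_ps[1])
```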
The tank armored vehicle sound-vibration artificial intelligence detection and positioning method requires few array elements, has a simple form, and is very convenient and fast to deploy thanks to the acoustic cross array and vibration sensor array arrangement; it provides omnidirectional search capability in both the plane and in space. Acoustic detection positioning and vibration detection positioning are combined: the RBF neural network computes the first target position and the second target position, and a weighted calculation fuses them. This realizes combined sound-vibration, artificial-intelligence detection and positioning of the tank armored vehicle, eliminates the interference of gunfire noise with acoustic detection positioning, and improves the detection and positioning accuracy for the tank armored vehicle.
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaustively list all embodiments, and obvious variations or modifications derived therefrom remain within the scope of the invention.
Claims (7)
1. A tank armored vehicle sound vibration artificial intelligence detection positioning method is characterized by comprising the following steps:
arranging at least three groups of acoustic cross arrays above the ground to capture and detect the airborne sound generated by the travel of the tank armored vehicle;
arranging at least three groups of vibration sensors on or under the ground to capture and detect the vibration generated by the travel of the tank armored vehicle;
respectively obtaining the target direction angles measured by the acoustic cross arrays, using the target direction angles as the input of an RBF neural network, using the determinant factors obtained from the target direction angles as the expected output of the RBF neural network, training the RBF neural network with the input and the expected output, inputting the current target direction angles into the trained RBF neural network to obtain the current determinant factors, and calculating a first target position from the current determinant factors; meanwhile, respectively obtaining the times at which the vibration sensors measure the vibration, using the times as the input of the RBF neural network, using the propagation velocity means obtained from the times as the expected output of the RBF neural network, training the RBF neural network with the input and the expected output, inputting the current times into the trained RBF neural network to obtain the current propagation velocity means, and calculating a second target position from the current propagation velocity means;
and performing weighted calculation on the first target position and the second target position by using a weighting function to obtain the final position of the tank armored vehicle.
2. The method of claim 1, wherein the number of groups of acoustic cross arrays is three: a first group of acoustic cross arrays S1, a second group of acoustic cross arrays S2 and a third group of acoustic cross arrays S3.
3. The method of claim 2, wherein each group of acoustic cross arrays comprises a first acoustic sensor s1, a second acoustic sensor s2, a third acoustic sensor s3 and a fourth acoustic sensor s4; the acoustic sensors are uniformly distributed around the center, lie in the same plane and are arranged in a cross shape, and the distances from the cross center to each acoustic sensor are equal.
4. The method as claimed in claim 2 or 3, wherein the steps of obtaining the target direction angles measured by the acoustic cross arrays, using the target direction angles as the input of the RBF neural network, using the determinant factors obtained from the target direction angles as the expected output of the RBF neural network, training the RBF neural network with the input and the expected output, inputting the current target direction angles into the trained RBF neural network to obtain the current determinant factors, and calculating the first target position according to the current determinant factors comprise:
separately acquiring the target direction angles θ1, θ2 and θ3 measured by the first group of acoustic cross arrays S1, the second group of acoustic cross arrays S2 and the third group of acoustic cross arrays S3; using the target direction angles θ1, θ2 and θ3 as the input of the RBF neural network and the determinant factors D1, D2 and D3 obtained from θ1, θ2 and θ3 as the expected output of the RBF neural network; training the RBF neural network with the input and the expected output; inputting the current target direction angles θ1, θ2 and θ3 into the trained RBF neural network to obtain the current determinant factors D1, D2 and D3; and calculating the first target position from the current determinant factors D1, D2 and D3.
5. The method according to any one of claims 1 to 4, wherein the number of vibration sensors is three, comprising a first vibration sensor M1, a second vibration sensor M2 and a third vibration sensor M3 arranged in a triangle.
6. The method of claim 5, wherein the steps of obtaining the time for the vibration sensor to measure the vibration, using the time as an input of the RBF neural network, using the mean propagation velocity obtained from the time as an expected output of the RBF neural network, training the RBF neural network using the input and the expected output, inputting the current time into the trained RBF neural network to obtain a current mean propagation velocity, and calculating the second target position according to the current mean propagation velocity comprise:
respectively acquiring the first time t1, the second time t2 and the third time t3 at which the first vibration sensor M1, the second vibration sensor M2 and the third vibration sensor M3 measure the vibration; using t1, t2 and t3 as the input of the RBF neural network and the propagation velocity means v̄1, v̄2 and v̄3 obtained from t1, t2 and t3 as the expected output of the RBF neural network; training the RBF neural network with the input and the expected output; inputting the current times t1, t2 and t3 into the trained RBF neural network to obtain the current propagation velocity means v̄1, v̄2 and v̄3; and calculating the second target position from the current propagation velocity means.
7. The method of any one of claims 1-6, wherein the weighting function adopts terrain weighting and the final position of the tank armored vehicle is calculated as:
hard-ground topography: z = 0.63zcs + 0.37zps,
soft-ground topography: z = 0.37zcs + 0.63zps,
where zps is the first target position and zcs is the second target position.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011629284.2A CN112710988B (en) | 2020-12-30 | 2020-12-30 | Tank armored vehicle sound vibration artificial intelligence detection positioning method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112710988A CN112710988A (en) | 2021-04-27 |
CN112710988B true CN112710988B (en) | 2022-06-21 |
Family
ID=75547710
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011629284.2A Expired - Fee Related CN112710988B (en) | 2020-12-30 | 2020-12-30 | Tank armored vehicle sound vibration artificial intelligence detection positioning method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112710988B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5576972A (en) * | 1992-05-08 | 1996-11-19 | Harrison; Dana C. | Intelligent area monitoring system |
US5995910A (en) * | 1997-08-29 | 1999-11-30 | Reliance Electric Industrial Company | Method and system for synthesizing vibration data |
CN104102838A (en) * | 2014-07-14 | 2014-10-15 | 河海大学 | Transformer noise prediction method based on wavelet neural network and wavelet technology |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6493689B2 (en) * | 2000-12-29 | 2002-12-10 | General Dynamics Advanced Technology Systems, Inc. | Neural net controller for noise and vibration reduction |
CN104302016A (en) * | 2014-09-16 | 2015-01-21 | 北京市信息技术研究所 | Wireless sensor network architecture based on multifunctional combination sensors |
CN109688990A (en) * | 2016-09-06 | 2019-04-26 | 新感知公司 | For providing a user the method and system of attached sensory information |
US10895639B2 (en) * | 2017-11-30 | 2021-01-19 | Avanti R&D, Inc. | Sensor platform improvement utilizing sound sensor analysis |
DE112019000049T5 (en) * | 2018-02-18 | 2020-01-23 | Nvidia Corporation | OBJECT DETECTION AND DETECTION SECURITY SUITABLE FOR AUTONOMOUS DRIVING |
KR102181643B1 (en) * | 2019-08-19 | 2020-11-23 | 엘지전자 주식회사 | Method and apparatus for determining goodness of fit related to microphone placement |
Non-Patent Citations (4)
Title |
---|
Multi-sensor integration for on-line tool wear estimation through radial basis function networks and fuzzy neural network; R.J. Kuo and P.H. Cohen; Neural Networks; 1999; full text *
Design of a bore-diameter measurement system for small-caliber guns based on mutual-inductance displacement sensors; Pei Jinding et al.; Computer Measurement & Control; 2018; full text *
Expressway traffic incident detection method based on noise and vibration; Xiong Lieqiang et al.; Journal of Wuhan University of Technology (Transportation Science & Engineering); Apr. 2005 (No. 02); full text *
Circuit breaker fault diagnosis method based on acoustic-vibration joint feature entropy; Zhao Shutao et al.; Journal of North China Electric Power University (Natural Science Edition); Nov. 2016 (No. 06); full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101476860B (en) | Magnetic positioning method and device in high background magnetic field | |
US6176837B1 (en) | Motion tracking system | |
JPH06221805A (en) | Device and method for determining position and direction of separated body | |
CN108363041B (en) | Unmanned aerial vehicle sound source positioning method based on K-means clustering iteration | |
CN110082611B (en) | Positioning method of electric field measuring device | |
CN106842080B (en) | A kind of magnetic field measuring device posture swing interference minimizing technology | |
CN110160528B (en) | Mobile device pose positioning method based on angle feature recognition | |
CN109633724A (en) | Passive object localization method based on single star Yu more earth station's combined measurements | |
US5537511A (en) | Neural network based data fusion system for source localization | |
CN114325584B (en) | Synthetic aperture-based multi-array-element ultrasonic sound source three-dimensional imaging method and system | |
CN113483885B (en) | Composite pulse vibration source positioning method based on scorpion hair seam coupling positioning mechanism | |
CN112710988B (en) | Tank armored vehicle sound vibration artificial intelligence detection positioning method | |
CN112710989B (en) | Tank armored vehicle sound vibration artificial intelligence detection positioning system | |
CN110686684A (en) | Optical collaborative orbit determination method for small celestial body surrounding detector | |
CN118011499A (en) | Full-aviation equivalent demagnetizing transient electromagnetic-based unmanned detection plane platform and method | |
CN112147577B (en) | Explosion target passive positioning system and method based on seismic wave feature analysis | |
JP2000146509A (en) | Measuring system of magnetic motion capture device | |
CN113551663B (en) | System and method for resolving aircraft attitude by combining images and geomagnetism | |
CN113503891B (en) | SINSDVL alignment correction method, system, medium and equipment | |
CN115560757A (en) | Neural network-based unmanned aerial vehicle direct positioning correction method under random attitude error condition | |
CN113155154A (en) | Error correction method based on attitude and mileage of sensor and camera | |
KR20220036583A (en) | Apparatus and method for detecting and identifying buried objects based on artificial intelligence | |
CN115790574B (en) | Unmanned aerial vehicle optical flow positioning method and device and unmanned aerial vehicle | |
CN112666600B (en) | Method and device for inspecting attitude angle of submarine node instrument | |
CN114543789B (en) | Star map identification method and system based on one-dimensional convolutional neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |

Granted publication date: 20220621 |