CN103984936A - Multi-sensor multi-feature fusion recognition method for three-dimensional dynamic target recognition

Info

Publication number: CN103984936A
Application number: CN201410231814.6A
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 刘博 (Liu Bo)
Original assignee: China Aeronautical Radio Electronics Research Institute
Current assignee: China Aeronautical Radio Electronics Research Institute (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Priority/filing date: 2014-05-29
Publication date: 2014-08-13
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)

Abstract

The invention discloses a multi-sensor multi-feature fusion recognition method for three-dimensional dynamic target recognition. The method comprises the following steps. First, the target to be recognized is projected onto two-dimensional planes and a two-dimensional projection image library is established. Second, the images are converted to grayscale and binarized, Hu moment features and Zernike moment features are extracted, and an image feature-moment information database is established. Third, two BP neural networks are trained on the two kinds of image feature-moment information respectively. Fourth, the target image sequences to be recognized acquired by different sensors are preprocessed, Hu moment features and Zernike moment features are extracted, and the two kinds of feature-moment information are input to the two trained BP neural networks respectively; the basic probability assignment function is computed, the resulting basic probability assignments are fused in the time domain and the spatial domain based on D-S evidence theory to obtain recognition result information, a decision is made on that information according to a judgment rule, and the final target recognition result is obtained. The method increases the probability of correct target recognition.

Description

Multi-sensor multi-feature fusion recognition method for three-dimensional dynamic target recognition
Technical field
The present invention relates to information fusion recognition technology using homogeneous sensors on multiple aircraft, and in particular to a multi-sensor, multi-feature, multi-level fusion recognition method combining BP neural networks with D-S evidence theory. It is applicable to cooperative aircraft target type recognition across multiple aircraft or platforms.
Background technology
In modern warfare, information superiority is a key factor shaping the overall strategic situation, and imaging reconnaissance and target recognition are the principal means of obtaining information. Military targets such as military aircraft are of great strategic importance and play a critical role in war. Rapidly finding these strategic targets in large volumes of aerial reconnaissance image data, and efficiently identifying the type of a military aircraft target, helps commanders grasp enemy movements in real time, perform decision analysis, and react quickly to win. Research on military aircraft target classification and recognition therefore has important theoretical significance and great practical application value, as well as strategic importance and social benefit for national defense.
Researchers at home and abroad have carried out extensive work in the field of target recognition and proposed many methods for target classification and recognition. Although many methods are available, most current approaches classify a target using only the partial feature information acquired by a single sensor, or using the same kind of feature information acquired by multiple sensors.
When identifying a target, the features extracted by a single sensor cannot describe the target completely, owing to the limitations of the sensor's own detection characteristics; likewise, the same kind of feature information extracted by multiple sensors covers the target's feature space only partially, so the probability of correct classification is low. In fact, many different features can describe the same target, and they can easily be extracted with existing sensors. If multiple independent, complementary feature vectors of the target acquired by several sensors are used simultaneously, a fairly complete description of the target can be obtained, which greatly improves the probability of correct recognition.
Summary of the invention
In view of the defects of existing classification and recognition methods, the object of the present invention is to provide a multi-sensor multi-feature fusion recognition method for three-dimensional dynamic target type recognition. The method effectively combines the BP neural network method with D-S evidence theory, uses the Hu invariant-moment features and Zernike invariant-moment features of the target simultaneously, fuses the recognition results of different sensors in both the time domain and the spatial domain, and finally judges the fused result against a decision rule to obtain the final recognition result. The method achieves a high probability of correct classification and strong robustness against interference and errors.
The object of the invention is achieved through the following technical solution:
A multi-sensor multi-feature fusion recognition method for three-dimensional dynamic target recognition comprises the following steps:
Step 1: project the target to be recognized onto two-dimensional planes and establish a two-dimensional projection image library;
Step 2: preprocess the images in the two-dimensional projection image library by grayscale conversion and binarization, extract Hu moment features and Zernike moment features from each preprocessed image, and establish an image feature-moment information database;
Step 3: input the computed Hu moment and Zernike moment feature information into a first BP neural network and a second BP neural network respectively, train both networks, and save the two trained BP neural networks;
Step 4: preprocess the target image sequences to be recognized acquired by the different sensors and extract Hu moment features and Zernike moment features; input the two kinds of feature-moment information into the first and second BP neural networks trained in step 3 respectively; compute the basic probability assignment function; fuse the resulting basic probability assignments in the time domain and the spatial domain using D-S evidence theory to obtain recognition result information; make a decision on the recognition result information according to the decision rule; and finally obtain the target recognition result.
According to the above feature, step 1 uses software modeling to generate a two-dimensional projection image library of the target under different attitudes.
According to the above feature, the grayscale conversion in step 2 uses the weighted-average method: the R, G, and B components are given different weights according to their importance or other criteria and then averaged, that is:

$$R = G = B = W_R R + W_G G + W_B B,$$

where $W_R$, $W_G$, $W_B$ are the weights of R, G, B respectively, and choosing $W_G > W_R > W_B$ yields a reasonable grayscale image.

According to the above feature, when $W_G = 0.59$, $W_R = 0.30$, $W_B = 0.11$, i.e. $R = G = B = 0.30R + 0.59G + 0.11B$, the most reasonable grayscale image is obtained.
According to the above feature, the concrete method of step 3 is:
First, three sensors are used to extract images of different dimensions respectively;
Then, for each coordinate dimension of each target type, a number of images are chosen at random, and Hu moments and Zernike moments are extracted from every image;
Finally, the Hu moment features obtained from the randomly chosen images of the target to be recognized are input into the first BP neural network, the Zernike moments are input into the second BP neural network, and the first and second BP neural networks are saved after training.
According to the above feature, the basic probability assignment function in step 4 is:

$$m(\omega_i) = \frac{|y_i|}{\sum_{j=1}^{N} |y_j|}, \quad i = 1, 2, \ldots, N.$$
According to the above feature, the concrete method of time-domain fusion in step 4 is:

First, the accumulated information that sensor $i$ holds about the target up to time $k-1$ is determined by the accumulated basic probability assignments $m_j^i(k-1)$ and the accumulated uncertainty $\theta^i(k-1)$ assigned to the frame of discernment;

Then, at time $k$, sensor $i$ obtains a new measured basic probability assignment $m_{jk}^i$ about the target, with measurement uncertainty $\theta_k^i$;

Applying the Dempster combination rule, the accumulated basic probability assignment about the target at time $k$, $m_j^i(k)$, $i = 1, \ldots, M$, $j = 1, \ldots, N$, is computed as:

$$m_j^i(k) = \frac{m_j^i(k-1)\, m_{jk}^i + m_j^i(k-1)\, \theta_k^i + \theta^i(k-1)\, m_{jk}^i}{1 - K_k^i},$$

where $K_k^i = \sum_{j \neq l} m_{lk}^i\, m_j^i(k-1)$;

The accumulated uncertainty about target recognition at time $k$ is:

$$\theta^i(k) = \frac{\theta^i(k-1)\, \theta_k^i}{1 - K_k^i};$$

Finally, repeating the above process yields the time-domain target recognition fusion result of each sensor.
According to the above feature, the concrete method of spatial-domain fusion in step 4 is:

First, each sensor obtains the accumulated basic probability assignment and the accumulated uncertainty of target recognition recursively in the time domain;

Then the time-domain accumulated information of the $M$ sensors is fused in the spatial domain by the Dempster combination rule; the final temporal/spatial accumulated target recognition fusion result of sensors $i$ and $l$ is:

$$m_j^{il}(k) = \frac{m_j^i(k)\, m_j^l(k) + m_j^i(k)\, \theta^l(k) + \theta^i(k)\, m_j^l(k)}{1 - K_k^{il}},$$

where $K_k^{il} = \sum_{j \neq n} m_j^i(k)\, m_n^l(k)$;

The accumulated temporal/spatial uncertainty obtained from sensors $i$ and $l$ is:

$$\theta^{il}(k) = \frac{\theta^i(k)\, \theta^l(k)}{1 - K_k^{il}};$$

Finally, repeating the above process yields the accumulated temporal/spatial target recognition fusion result over all $M$ sensors.
According to the above feature, the decision method in step 4 is:

Let $\Theta$ be the frame of discernment and $m$ the basic probability assignment function after the time-domain and spatial-domain fusion based on the Dempster combination rule, and suppose $A_1, A_2 \subset \Theta$ satisfy

$$m(A_1) = \max\{m(A_i),\ A_i \subset \Theta\},\qquad m(A_2) = \max\{m(A_i),\ A_i \subset \Theta,\ A_i \neq A_1\}.$$

If

$$m(A_1) - m(A_2) > \varepsilon_1,\qquad m(\Theta) < \varepsilon_2,\qquad m(A_1) > m(\Theta),$$

then $A_1$ is the decision result, where $\varepsilon_1$ and $\varepsilon_2$ are predefined thresholds.
The multi-sensor multi-feature fusion recognition method for three-dimensional dynamic target recognition provided by the invention is mainly used for aircraft type recognition.
Compared with the prior art, the present invention effectively combines the BP neural network method with D-S evidence theory, uses the Hu invariant-moment features and Zernike invariant-moment features of the target simultaneously, fuses the recognition results of different sensors in the time domain and the spatial domain, and finally judges the fused recognition result against the decision rule to obtain the final recognition result. This effectively improves the correctness of classification and gives strong robustness against interference and errors.
Brief description of the drawings
Fig. 1 is a schematic diagram of the image coordinate system;
Fig. 2 is a diagram of the working principle of the present invention;
Fig. 3 shows the recognition results of sensor S1 for target F-18 at different times in the embodiment, where the dotted line represents the result using only Hu feature moments, the solid line marked with * represents the result using only Zernike feature moments, and the plain solid line represents the result using Hu and Zernike feature moments together;
Fig. 4 shows the recognition results of sensor S2 for target F-18 at different times, with the same line conventions as Fig. 3;
Fig. 5 shows the recognition results of sensor S3 for target F-18 at different times, with the same line conventions as Fig. 3;
Fig. 6 shows part of the AV-8B aircraft image database;
Fig. 7 shows part of the F-5 aircraft image database;
Fig. 8 shows part of the Su-27 aircraft image database;
Fig. 9 shows part of the F-18 aircraft image database;
Fig. 10 shows part of the F-22 aircraft image database;
Fig. 11 is the digital model built for the AV-8B aircraft;
Fig. 12 is the digital model built for the F-5 aircraft;
Fig. 13 is the digital model built for the Su-27 aircraft;
Fig. 14 is the digital model built for the F-18 aircraft;
Fig. 15 is the digital model built for the F-22 aircraft.
Embodiment
The invention is further described below with reference to the accompanying drawings. The embodiment is implemented on the premise of the technical solution of the invention, and a detailed implementation and concrete operating process are given, but the scope of protection of the invention is not limited to the following embodiment.
As shown in Fig. 2, the embodiment comprises the following steps:
Step 1: project the three-dimensional aircraft target onto two-dimensional planes according to fixed rules and establish the image database. The specific approach is as follows (a small sketch of the attitude quantization follows this paragraph).
A three-dimensional object can usually be represented by one or more significant two-dimensional views, which allows each view to be processed independently and reduces a three-dimensional problem to a two-dimensional one. Aircraft modeling can be carried out by several different means. One option is to use a physical aircraft model with a CCD image acquisition system; this preserves the shape information of the aircraft fairly completely, but acquiring every attitude is imprecise and inconvenient. This patent therefore uses software modeling, as shown in Figs. 11-15, to generate the two-dimensional projection image library of the aircraft under different attitudes; this method is fast and accurate. Because an aircraft has no fixed holding position and can appear in the air at any attitude, aircraft modeling is more complex than modeling other objects. To build the image library conveniently while keeping it as complete and diverse as possible, the following assumptions are made, with the image coordinate system as shown in Fig. 1. The X axis points to the nose, the Y axis to the aircraft underside, and the Z axis to one wing; the image plane is perpendicular to the Z axis, so the attitude of the aircraft is completely determined by (θ_x, θ_y, θ_z). When the aircraft rotates about the Z axis, this is equivalent, for projection plane A, to a pitch-angle change: the projection on the image plane only rotates while its shape and size are unchanged, so the image moment-invariant features are independent of θ_z. Rotation about the X axis is equivalent to a roll-angle change, and rotation about the Y axis to a yaw-angle change, so for projection plane A only the variations of θ_x and θ_y need to be considered. To guarantee the completeness of the image library, the rotation of the aircraft about the Z axis is considered for projection plane B. To ensure the completeness of the library, θ_x, θ_y, and θ_z are quantized with as small a base unit angle as practical (taken as 5° here).
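A minimal sketch of the attitude quantization described above, assuming full 0°–355° sweeps in each angle (the patent does not state the exact angular ranges):

```python
import numpy as np

STEP = 5  # base unit angle in degrees, as chosen above

# assumed full sweeps; the patent does not state the exact ranges
angles = np.arange(0, 360, STEP)

# for projection plane A only theta_x and theta_y matter (theta_z drops out);
# projection plane B additionally covers rotation about the Z axis
attitudes_plane_a = [(tx, ty) for tx in angles for ty in angles]
attitudes_plane_b = [(tz,) for tz in angles]

print(len(attitudes_plane_a), "plane-A attitudes,", len(attitudes_plane_b), "plane-B attitudes")
```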
Step 2: preprocess the images in the image database established in step 1 by grayscale conversion and binarization, extract Hu moment features and Zernike moment features from each preprocessed image, and establish the image feature information database.
(1) Image grayscale conversion:
The weighted-average method is used: the R, G, and B components are given different weights according to their importance or other criteria and averaged, i.e. $R = G = B = W_R R + W_G G + W_B B$, where $W_R$, $W_G$, $W_B$ are the weights of R, G, B respectively. Different values of $W_R$, $W_G$, $W_B$ produce different grayscale images. Because the human eye is most sensitive to green, less sensitive to red, and least sensitive to blue, setting $W_G > W_R > W_B$ yields a reasonable grayscale image. When $W_G = 0.59$, $W_R = 0.30$, $W_B = 0.11$, i.e. $R = G = B = 0.30R + 0.59G + 0.11B$, the most reasonable grayscale image is obtained.
(2) Grayscale image binarization:
Image binarization is a basic technique in digital image processing. Suppose the pixel values of a grayscale image are $f(i,j) \in \{r_1, r_2, \ldots, r_m\}$ and let the threshold be $T = r_i$, $1 \le i \le m$; then

$$g(i,j) = \begin{cases} 1, & f(i,j) \ge T \\ 0, & f(i,j) < T \end{cases}$$

Conventionally, 1 in the binary image represents the target sub-image and 0 the background sub-image.
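A minimal sketch of both preprocessing steps on an RGB image held as a NumPy array; the threshold value T is an assumption, since the patent does not specify how it is chosen:

```python
import numpy as np

def to_gray(rgb):
    """Weighted-average grayscale conversion: 0.30 R + 0.59 G + 0.11 B."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.30 * r + 0.59 * g + 0.11 * b

def binarize(gray, T=128):
    """g(i,j) = 1 where f(i,j) >= T, else 0; T=128 is an assumed threshold."""
    return (gray >= T).astype(np.uint8)

rgb = np.random.randint(0, 256, (64, 64, 3))  # placeholder image
binary = binarize(to_gray(rgb))
```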
(3) Compute the seven Hu invariant moments of the binary image.

The central moments and normalized central moments are

$$\mu_{pq} = \sum_x \sum_y (x - \bar{x})^p (y - \bar{y})^q f(x,y), \quad p, q = 0, 1, \ldots$$

$$\eta_{pq} = \mu_{pq} / \mu_{00}^{\gamma}, \quad \gamma = (p+q)/2 + 1.$$

From the normalized central moments with $p + q \le 3$, the seven invariant moments $\phi_1$–$\phi_7$ are:

$$\phi_1 = \eta_{20} + \eta_{02}$$
$$\phi_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2$$
$$\phi_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2$$
$$\phi_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2$$
$$\phi_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]$$
$$\phi_6 = (\eta_{20} - \eta_{02})[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})$$
$$\phi_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] + (3\eta_{12} - \eta_{30})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]$$
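A minimal sketch computing the seven Hu invariants with OpenCV's cv2.moments and cv2.HuMoments, plus a hand check of φ1 against the definitions above; the silhouette is a placeholder:

```python
import cv2
import numpy as np

binary = np.zeros((64, 64), np.uint8)
binary[16:48, 24:40] = 1  # placeholder silhouette

m = cv2.moments(binary, binaryImage=True)  # raw, central and normalized moments
hu = cv2.HuMoments(m).flatten()            # the seven invariants phi1..phi7

# hand check of phi1 = eta20 + eta02 from the formulas above
ys, xs = np.nonzero(binary)
xbar, ybar = xs.mean(), ys.mean()
mu = lambda p, q: (((xs - xbar) ** p) * ((ys - ybar) ** q)).sum()
eta = lambda p, q: mu(p, q) / mu(0, 0) ** ((p + q) / 2 + 1)
assert np.isclose(hu[0], eta(2, 0) + eta(0, 2))
```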
(4) Compute the first 16 Zernike invariant moments.

The $n$-th order Zernike moment is defined as

$$A_{nm} = \frac{n+1}{\pi} \iint_{x^2 + y^2 \le 1} V_{nm}^*(x,y)\, f(x,y)\, dx\, dy.$$

In effect, this transforms the image function into a set of projections onto orthogonal basis functions. Here $\{V_{nm}(x,y)\}$ is a set of orthogonal polynomials; the Zernike moments of the image $f(x,y)$ are the projections of the image onto these polynomials, and the polynomials $\{V_{nm}(x,y)\}$ are orthogonal within the unit circle $\{x^2 + y^2 \le 1\}$, that is:

$$\iint_{x^2 + y^2 \le 1} V_{nm}^*(x,y)\, V_{pq}(x,y)\, dx\, dy = \frac{\pi}{n+1}\, \delta_{np}\, \delta_{mq}.$$

In the above, $V_{nm}(x,y)$ has the form

$$V_{nm}(x,y) = R_{nm}(x,y)\, e^{jm \tan^{-1}(y/x)},$$

where $n$ is a non-negative integer, $m$ is a positive or negative integer satisfying $n - |m|$ even and $|m| \le n$, and $R_{nm}(x,y)$ is the radial polynomial

$$R_{nm}(x,y) = \sum_{s=0}^{(n-|m|)/2} \frac{(-1)^s\, (n-s)!\, (x^2 + y^2)^{(n-2s)/2}}{s!\, \left(\frac{n+|m|}{2} - s\right)!\, \left(\frac{n-|m|}{2} - s\right)!}.$$

For a digital image the integral is replaced by a summation, that is:

$$A_{nm} = \frac{n+1}{\pi} \sum_x \sum_y V_{nm}^*(x,y)\, f(x,y), \quad x^2 + y^2 \le 1.$$

To compute the Zernike moments of a given image, the centroid of the image must first be shifted to the origin and the image pixels mapped into the unit circle $\{x^2 + y^2 \le 1\}$; points falling outside the unit circle do not participate in the computation. Note also that $A_{nm}^* = A_{n,-m}$.

The first 16 Zernike moments used in the computation are: $Z_{00}$, $Z_{11}$, $Z_{20}$, $Z_{22}$, $Z_{31}$, $Z_{33}$, $Z_{40}$, $Z_{42}$, $Z_{44}$, $Z_{51}$, $Z_{53}$, $Z_{55}$, $Z_{60}$, $Z_{62}$, $Z_{64}$, $Z_{66}$.
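A minimal NumPy sketch of a single Zernike moment following the definitions above; the normalization radius (maximum pixel distance from the centroid) is an assumed choice, as the patent does not state it:

```python
import numpy as np
from math import factorial

def radial_poly(n, m, rho):
    """R_nm(rho) from the series above; requires n - |m| even and |m| <= n."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s) /
             (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    return R

def zernike_moment(binary, n, m):
    """A_nm of a binary image: centroid shifted to the origin, pixels mapped
    into the unit circle; points outside the circle are dropped."""
    ys, xs = np.nonzero(binary)
    x, y = xs - xs.mean(), ys - ys.mean()
    rmax = np.sqrt((x ** 2 + y ** 2).max())  # assumed normalization radius
    rho = np.sqrt(x ** 2 + y ** 2) / rmax
    theta = np.arctan2(y, x)
    keep = rho <= 1.0
    V_conj = radial_poly(n, m, rho[keep]) * np.exp(-1j * m * theta[keep])
    return (n + 1) / np.pi * V_conj.sum()  # f(x,y) = 1 on the silhouette

# magnitudes |A_nm| are the rotation-invariant features
binary = np.zeros((64, 64), np.uint8)
binary[16:48, 24:40] = 1
print(abs(zernike_moment(binary, 2, 0)), abs(zernike_moment(binary, 2, 2)))
```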
Step 3: the two kinds of image feature-moment information computed in step 2, the Hu moments and the Zernike moments, are input into two BP neural networks respectively; the BP neural networks are trained, and the two trained networks are saved. The multi-sensor multi-feature target recognition method used in this patent employs three homogeneous image sensors, each extracting images of a different dimension: sensor S1 corresponds to the images formed on plane A under rotation about the X coordinate axis, sensor S2 to the images formed on plane A under rotation about the Y coordinate axis, and sensor S3 to the images formed on plane B under rotation about the Z axis. The images extracted from the aircraft to be recognized are numbered: the images of each aircraft type extracted by each sensor are numbered consecutively from 1 to 72 from front to back. For each coordinate dimension of each aircraft type, 36 images are chosen at random, and Hu moments and Zernike moments are extracted from every image. The Hu moment features obtained from the randomly chosen images of the aircraft to be recognized are input into BP neural network 1, the Zernike moments are input into BP neural network 2, and BP networks 1 and 2 are saved after training.
Step 4: preprocess the aircraft image sequences to be recognized acquired by the different sensors and extract the two kinds of feature moments; input the two kinds of feature-moment information into the trained BP networks 1 and 2 from step 3 respectively; compute the basic probability assignment function; fuse the resulting basic probability assignments in the time domain and the spatial domain using D-S evidence theory to obtain recognition result information; make a decision on the recognition result information according to the decision rule; and finally obtain the target recognition result, i.e. the type of the aircraft to be recognized. The specific approach is as follows:
(1) Compute the basic probability assignment function with the BP neural networks:
Through the computation of a neural network, and exploiting its generalization ability, the basic probability assignment of each piece of evidence to the target to be recognized can be obtained. Define the frame of discernment $\Omega = (\omega_1, \omega_2, \ldots, \omega_N)$; the basic probability assignment function is the normalized output of the BP neural network, that is:

$$m(\omega_i) = \frac{|y_i|}{\sum_{j=1}^{N} |y_j|}, \quad i = 1, 2, \ldots, N.$$
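A minimal sketch of this normalization; the function name is illustrative, and y stands for the raw output vector of either BP network:

```python
import numpy as np

def bpa_from_outputs(y):
    """m(omega_i) = |y_i| / sum_j |y_j|: turn raw network outputs into a BPA."""
    y = np.abs(np.asarray(y, dtype=float))
    return y / y.sum()

print(bpa_from_outputs([0.7, 0.1, -0.05, 0.1, 0.05]))  # masses over five classes
```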
(2) Time-domain fusion with D-S evidence theory:
Suppose the accumulated information that sensor $i$ holds about target recognition up to time $k-1$ is determined by the accumulated basic probability assignments $m_j^i(k-1)$ and the accumulated uncertainty $\theta^i(k-1)$ assigned to the frame of discernment (i.e. the accumulated uncertainty of target recognition). At time $k$, sensor $i$ obtains a new measured basic probability assignment $m_{jk}^i$ about the target, with measurement uncertainty $\theta_k^i$.
Applying the Dempster combination rule, the accumulated basic probability assignment about the target at time $k$, $m_j^i(k)$, $i = 1, \ldots, M$, $j = 1, \ldots, N$, is computed as

$$m_j^i(k) = \frac{m_j^i(k-1)\, m_{jk}^i + m_j^i(k-1)\, \theta_k^i + \theta^i(k-1)\, m_{jk}^i}{1 - K_k^i},$$

where $K_k^i = \sum_{j \neq l} m_{lk}^i\, m_j^i(k-1)$.

The accumulated uncertainty about target recognition at time $k$ is

$$\theta^i(k) = \frac{\theta^i(k-1)\, \theta_k^i}{1 - K_k^i}.$$

Repeating this process yields the time-domain target recognition fusion result of each sensor.
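A minimal sketch of this recursion under the evidence structure used here (N singleton masses plus a single uncertainty mass θ on the whole frame); function and variable names are illustrative:

```python
import numpy as np

def ds_update(m_prev, theta_prev, m_new, theta_new):
    """One Dempster combination step: fold a new (m, theta) measurement into
    the accumulated (m, theta), following the recursion above."""
    m_prev = np.asarray(m_prev, dtype=float)
    m_new = np.asarray(m_new, dtype=float)
    # conflict K = sum over pairs of *different* singletons j != l
    K = m_prev.sum() * m_new.sum() - (m_prev * m_new).sum()
    m = (m_prev * m_new + m_prev * theta_new + theta_prev * m_new) / (1.0 - K)
    theta = theta_prev * theta_new / (1.0 - K)
    return m, theta

# time-domain fusion for one sensor: start from the first measurement and
# fold in each later measurement in turn (illustrative values summing to 1)
measurements = [([0.6, 0.1, 0.1, 0.05, 0.05], 0.1),
                ([0.5, 0.2, 0.1, 0.05, 0.05], 0.1)]
m_acc, theta_acc = measurements[0]
for m_k, theta_k in measurements[1:]:
    m_acc, theta_acc = ds_update(m_acc, theta_acc, m_k, theta_k)
```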
(3) Spatial-domain fusion with D-S evidence theory:
Once each sensor has obtained the accumulated basic probability assignment and accumulated uncertainty of target recognition recursively in the time domain, the time-domain accumulated information of the $M$ sensors can be fused in the spatial domain by the Dempster combination rule.
The final temporal/spatial accumulated target recognition fusion result of sensors $i$ and $l$ is

$$m_j^{il}(k) = \frac{m_j^i(k)\, m_j^l(k) + m_j^i(k)\, \theta^l(k) + \theta^i(k)\, m_j^l(k)}{1 - K_k^{il}},$$

where $K_k^{il} = \sum_{j \neq n} m_j^i(k)\, m_n^l(k)$.

The accumulated temporal/spatial uncertainty obtained from sensors $i$ and $l$ is

$$\theta^{il}(k) = \frac{\theta^i(k)\, \theta^l(k)}{1 - K_k^{il}}.$$

These formulas constitute a recursive temporal/spatial fusion model for target recognition based on D-S evidence theory under this mixed evidence structure; repeating the process yields the accumulated temporal/spatial target recognition fusion result over all $M$ sensors.
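Since the spatial combination has the same algebraic form as the temporal one, the ds_update sketch above can be reused to fold the per-sensor time-domain results together; the variable names below are illustrative:

```python
# per-sensor accumulated (m, theta) pairs from the time-domain stage
sensor_results = [(m_acc_s1, theta_acc_s1),
                  (m_acc_s2, theta_acc_s2),
                  (m_acc_s3, theta_acc_s3)]

m_fused, theta_fused = sensor_results[0]
for m_i, theta_i in sensor_results[1:]:
    m_fused, theta_fused = ds_update(m_fused, theta_fused, m_i, theta_i)
```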
(4) Recognition result decision:
Let $\Theta$ be the frame of discernment and $m$ the basic probability assignment function after the time-domain and spatial-domain fusion based on the Dempster combination rule; the following decision method is adopted.
Suppose $A_1, A_2 \subset \Theta$ satisfy

$$m(A_1) = \max\{m(A_i),\ A_i \subset \Theta\},\qquad m(A_2) = \max\{m(A_i),\ A_i \subset \Theta,\ A_i \neq A_1\}.$$

If

$$m(A_1) - m(A_2) > \varepsilon_1,\qquad m(\Theta) < \varepsilon_2,\qquad m(A_1) > m(\Theta),$$

then $A_1$ is the decision result, where $\varepsilon_1$ and $\varepsilon_2$ are predefined thresholds.
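A minimal sketch of the decision rule; the threshold values ε1 and ε2 are assumed, since the patent leaves them as predefined parameters:

```python
import numpy as np

def decide(m, theta, eps1=0.1, eps2=0.2):
    """Return the index of the accepted class, or None if the three
    conditions above are not all satisfied. eps1/eps2 are assumed values."""
    order = np.argsort(m)[::-1]
    A1, A2 = order[0], order[1]
    if m[A1] - m[A2] > eps1 and theta < eps2 and m[A1] > theta:
        return int(A1)
    return None

classes = ["AV-8B", "F-5", "Su-27", "F-18", "F-22"]
idx = decide(np.array([0.05, 0.04, 0.06, 0.75, 0.05]), theta=0.05)
print(classes[idx] if idx is not None else "no decision")
```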
Test case:
To verify the validity of the proposed multi-sensor multi-feature fusion recognition method for three-dimensional dynamic target recognition, a simulation study was carried out. The simulation scheme is as follows: five aircraft target types are established, comprising AV-8B, F-5, Su-27, F-18, and F-22, and the target under test is one of them. The established image databases and digital models are shown in Figs. 6-10 and Figs. 11-15. F-18 is chosen as the target under test, and the recognition test follows the method proposed by the invention.
There are three sensors (S1, S2, S3). The inputs to each sensor's classifiers are the Hu moments and Zernike moments extracted from the images of the five aircraft target classes; for these two kinds of data, each sensor has two BP neural networks as classifiers. The three sensors use the same training samples: 180 top views of the aircraft chosen at random, 36 per class; 180 x-direction views, 36 per class; and 180 y-direction views, 36 per class. The neural-network training data therefore total the Hu moments and Zernike moments extracted from 540 images. The Hu moments are input into BP network 1 and the Zernike moments into BP network 2. The test data are randomly chosen images that did not participate in training, 60 per aircraft class, comprising 20 top views as the test samples of sensor S1, 20 x-direction views as the test samples of sensor S2, and 20 y-direction views as the test samples of sensor S3. In the simulation, the 20 images of each sensor represent images acquired at 20 different times.
Sensors S1, S2, and S3 each correspond to two BP neural networks. BP network 1 has 7 inputs, 5 outputs, and a hidden layer of 15 units; its input is the Hu moment vector of the target, and its output is the degree of belief m ∈ [0, 1] that the target to be recognized belongs to each of the five aircraft types, which is plotted. BP network 2 has 16 inputs, 5 outputs, and a hidden layer of 25 units; its input is the Zernike moment vector of the target, and its output is likewise the degree of belief m ∈ [0, 1] for the five types, also plotted. The results obtained from the two features are further combined at the feature level using D-S evidence theory and plotted as well. The three curves are drawn in the same figure for comparison. The simulation results for F-18 are shown in Figs. 3-5.
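A minimal sketch of the two classifiers, using scikit-learn's MLPClassifier as a stand-in for the BP networks specified above (the patent does not name an implementation); the training arrays are random placeholders, predict_proba stands in for the raw network outputs, and bpa_from_outputs is the sketch from step (1):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# placeholder feature matrices: 540 training images, 7 Hu / 16 Zernike features
X_hu = np.random.rand(540, 7)
X_zk = np.random.rand(540, 16)
y = np.random.randint(0, 5, 540)  # labels for the five aircraft classes

bp1 = MLPClassifier(hidden_layer_sizes=(15,), max_iter=2000).fit(X_hu, y)  # Hu net
bp2 = MLPClassifier(hidden_layer_sizes=(25,), max_iter=2000).fit(X_zk, y)  # Zernike net

# per-class scores for one test image, normalized into BPAs as in step 4(1)
m_hu = bpa_from_outputs(bp1.predict_proba(X_hu[:1])[0])
m_zk = bpa_from_outputs(bp2.predict_proba(X_zk[:1])[0])
```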
Simulation results:
Table 1 lists the recognition results of the three sensors for the F-18 target at three times; Table 2 gives the results of the first time-domain fusion; Table 3 gives the results of the second time-domain fusion and of the spatial-domain fusion; Table 4 compares the recognition accuracy of the classification method used in this patent with that of other methods.
Table 1
Table 2
Table 3
Table 4

Claims (10)

1. A multi-sensor multi-feature fusion recognition method for three-dimensional dynamic target recognition, comprising the following steps:
Step 1: project the target to be recognized onto two-dimensional planes and establish a two-dimensional projection image library;
Step 2: preprocess the images in the two-dimensional projection image library by grayscale conversion and binarization, extract Hu moment features and Zernike moment features from each preprocessed image, and establish an image feature-moment information database;
Step 3: input the computed Hu moment and Zernike moment feature information into a first BP neural network and a second BP neural network respectively, train both networks, and save the two trained BP neural networks;
Step 4: preprocess the target image sequences to be recognized acquired by the different sensors and extract Hu moment features and Zernike moment features; input the two kinds of feature-moment information into the first and second BP neural networks trained in step 3 respectively; compute the basic probability assignment function; fuse the resulting basic probability assignments in the time domain and the spatial domain using D-S evidence theory to obtain recognition result information; make a decision on the recognition result information according to the decision rule; and finally obtain the target recognition result.
2. The multi-sensor multi-feature fusion recognition method according to claim 1, characterized in that step 1 uses software modeling to generate a two-dimensional projection image library of the target under different attitudes.
3. The multi-sensor multi-feature fusion recognition method according to claim 1, characterized in that the grayscale conversion in step 2 uses the weighted-average method: the R, G, and B components are given different weights according to their importance or other criteria and then averaged, that is:

$$R = G = B = W_R R + W_G G + W_B B,$$

where $W_R$, $W_G$, $W_B$ are the weights of R, G, B respectively, and $W_G > W_R > W_B$ yields a reasonable grayscale image.

4. The multi-sensor multi-feature fusion recognition method according to claim 3, characterized in that when $W_G = 0.59$, $W_R = 0.30$, $W_B = 0.11$, i.e. $R = G = B = 0.30R + 0.59G + 0.11B$, the most reasonable grayscale image is obtained.
5. The multi-sensor multi-feature fusion recognition method according to claim 1, characterized in that the concrete method of step 3 is:
First, three sensors are used to extract images of different dimensions respectively;
Then, for each coordinate dimension of each target type, a number of images are chosen at random, and Hu moments and Zernike moments are extracted from every image;
Finally, the Hu moment features obtained from the randomly chosen images of the target to be recognized are input into the first BP neural network, the Zernike moments are input into the second BP neural network, and the first and second BP neural networks are saved after training.
6. The multi-sensor multi-feature fusion recognition method according to claim 1, characterized in that the basic probability assignment function in step 4 is:

$$m(\omega_i) = \frac{|y_i|}{\sum_{j=1}^{N} |y_j|}, \quad i = 1, 2, \ldots, N.$$
7. The multi-sensor multi-feature fusion recognition method according to claim 1, characterized in that the concrete method of time-domain fusion in step 4 is:
First, the accumulated information that sensor $i$ holds about the target up to time $k-1$ is determined by the accumulated basic probability assignments $m_j^i(k-1)$ and the accumulated uncertainty $\theta^i(k-1)$ assigned to the frame of discernment;
Then, at time $k$, sensor $i$ obtains a new measured basic probability assignment $m_{jk}^i$ about the target, with measurement uncertainty $\theta_k^i$;
Applying the Dempster combination rule, the accumulated basic probability assignment about the target at time $k$, $m_j^i(k)$, $i = 1, \ldots, M$, $j = 1, \ldots, N$, is computed as:

$$m_j^i(k) = \frac{m_j^i(k-1)\, m_{jk}^i + m_j^i(k-1)\, \theta_k^i + \theta^i(k-1)\, m_{jk}^i}{1 - K_k^i},$$

where $K_k^i = \sum_{j \neq l} m_{lk}^i\, m_j^i(k-1)$;
The accumulated uncertainty about target recognition at time $k$ is:

$$\theta^i(k) = \frac{\theta^i(k-1)\, \theta_k^i}{1 - K_k^i};$$

Finally, repeating the above process yields the time-domain target recognition fusion result of each sensor.
8. The multi-sensor multi-feature fusion recognition method according to claim 1, characterized in that the concrete method of spatial-domain fusion in step 4 is:
First, each sensor obtains the accumulated basic probability assignment and the accumulated uncertainty of target recognition recursively in the time domain;
Then the time-domain accumulated information of the $M$ sensors is fused in the spatial domain by the Dempster combination rule; the final temporal/spatial accumulated target recognition fusion result of sensors $i$ and $l$ is:

$$m_j^{il}(k) = \frac{m_j^i(k)\, m_j^l(k) + m_j^i(k)\, \theta^l(k) + \theta^i(k)\, m_j^l(k)}{1 - K_k^{il}},$$

where $K_k^{il} = \sum_{j \neq n} m_j^i(k)\, m_n^l(k)$;
The accumulated temporal/spatial uncertainty obtained from sensors $i$ and $l$ is:

$$\theta^{il}(k) = \frac{\theta^i(k)\, \theta^l(k)}{1 - K_k^{il}};$$

Finally, repeating the above process yields the accumulated temporal/spatial target recognition fusion result over all $M$ sensors.
9. The multi-sensor multi-feature fusion recognition method according to claim 1, characterized in that the decision method in step 4 is:
Let $\Theta$ be the frame of discernment and $m$ the basic probability assignment function after the time-domain and spatial-domain fusion based on the Dempster combination rule, and suppose $A_1, A_2 \subset \Theta$ satisfy:

$$m(A_1) = \max\{m(A_i),\ A_i \subset \Theta\},\qquad m(A_2) = \max\{m(A_i),\ A_i \subset \Theta,\ A_i \neq A_1\};$$

If:

$$m(A_1) - m(A_2) > \varepsilon_1,\qquad m(\Theta) < \varepsilon_2,\qquad m(A_1) > m(\Theta),$$

then $A_1$ is the decision result, where $\varepsilon_1$ and $\varepsilon_2$ are predefined thresholds.
10. Use of the multi-sensor multi-feature fusion recognition method according to any one of claims 1 to 9 for aircraft type recognition.
CN201410231814.6A 2014-05-29 2014-05-29 Multi-sensor multi-feature fusion recognition method for three-dimensional dynamic target recognition Pending CN103984936A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410231814.6A CN103984936A (en) 2014-05-29 2014-05-29 Multi-sensor multi-feature fusion recognition method for three-dimensional dynamic target recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410231814.6A CN103984936A (en) 2014-05-29 2014-05-29 Multi-sensor multi-feature fusion recognition method for three-dimensional dynamic target recognition

Publications (1)

Publication Number Publication Date
CN103984936A true CN103984936A (en) 2014-08-13

Family

ID=51276898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410231814.6A Pending CN103984936A (en) 2014-05-29 2014-05-29 Multi-sensor multi-feature fusion recognition method for three-dimensional dynamic target recognition

Country Status (1)

Country Link
CN (1) CN103984936A (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104240432A (en) * 2014-10-10 2014-12-24 西安石油大学 System and method for manganese-rich slag production safety monitoring on basis of information fusion
CN105425828A (en) * 2015-11-11 2016-03-23 山东建筑大学 Robot anti-impact double-arm coordination control system based on sensor fusion technology
CN107255818A (en) * 2017-06-13 2017-10-17 厦门大学 A kind of submarine target quick determination method of bidimensional multiple features fusion
CN108052976A (en) * 2017-12-13 2018-05-18 中国兵器装备集团自动化研究所 A kind of multi-band image fusion identification method
CN108133238A (en) * 2017-12-29 2018-06-08 国信优易数据有限公司 A kind of human face recognition model training method and device and face identification method and device
CN108960083A (en) * 2018-06-15 2018-12-07 北京邮电大学 Based on automatic Pilot objective classification method combined of multi-sensor information and system
CN109409431A (en) * 2018-10-29 2019-03-01 吉林大学 Multisensor attitude data fusion method and system neural network based
CN110008843A (en) * 2019-03-11 2019-07-12 武汉环宇智行科技有限公司 Combine cognitive approach and system based on the vehicle target of cloud and image data
CN110021036A (en) * 2019-04-13 2019-07-16 北京环境特性研究所 Infrared target detection method, apparatus, computer equipment and storage medium
CN110291358A (en) * 2017-02-20 2019-09-27 欧姆龙株式会社 Shape estimation device
CN111902851A (en) * 2018-03-15 2020-11-06 日本音响工程株式会社 Learning data generation method, learning data generation device, and learning data generation program
CN112114303A (en) * 2020-08-17 2020-12-22 安徽捷纳森电子科技有限公司 Method for identifying sand stealer through unidirectional passive array control scanning
CN112668454A (en) * 2020-12-25 2021-04-16 南京华格信息技术有限公司 Bird micro-target identification method based on multi-sensor fusion
CN113239829A (en) * 2021-05-17 2021-08-10 哈尔滨工程大学 Cross-dimension remote sensing data target identification method based on space occupation probability characteristics
CN113538357A (en) * 2021-07-09 2021-10-22 同济大学 Shadow interference resistant road surface state online detection method
CN113642627A (en) * 2021-08-09 2021-11-12 中国人民解放军海军航空大学航空作战勤务学院 Image and decision multi-source heterogeneous information fusion identification method and device based on deep learning
CN114863556A (en) * 2022-04-13 2022-08-05 上海大学 Multi-neural-network fusion continuous action recognition method based on skeleton posture

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08129025A (en) * 1994-10-28 1996-05-21 Mitsubishi Space Software Kk Three-dimensional image processing flow velocity measuring method
CN102222240A (en) * 2011-06-29 2011-10-19 东南大学 DSmT (Dezert-Smarandache Theory)-based image target multi-characteristic fusion recognition method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08129025A (en) * 1994-10-28 1996-05-21 Mitsubishi Space Software Kk Three-dimensional image processing flow velocity measuring method
CN102222240A (en) * 2011-06-29 2011-10-19 东南大学 DSmT (Dezert-Smarandache Theory)-based image target multi-characteristic fusion recognition method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
赵丹丹 (Zhao Dandan): "Research on the Application of Multi-sensor Data Fusion in Target Recognition", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104240432A (en) * 2014-10-10 2014-12-24 西安石油大学 System and method for manganese-rich slag production safety monitoring on basis of information fusion
CN105425828A (en) * 2015-11-11 2016-03-23 山东建筑大学 Robot anti-impact double-arm coordination control system based on sensor fusion technology
CN110291358B (en) * 2017-02-20 2022-04-05 欧姆龙株式会社 Shape estimating device
CN110291358A (en) * 2017-02-20 2019-09-27 欧姆龙株式会社 Shape estimation device
US11036965B2 (en) 2017-02-20 2021-06-15 Omron Corporation Shape estimating apparatus
CN107255818A (en) * 2017-06-13 2017-10-17 厦门大学 A kind of submarine target quick determination method of bidimensional multiple features fusion
CN108052976A (en) * 2017-12-13 2018-05-18 中国兵器装备集团自动化研究所 A kind of multi-band image fusion identification method
CN108052976B (en) * 2017-12-13 2021-04-06 中国兵器装备集团自动化研究所 Multiband image fusion identification method
CN108133238A (en) * 2017-12-29 2018-06-08 国信优易数据有限公司 A kind of human face recognition model training method and device and face identification method and device
CN108133238B (en) * 2017-12-29 2020-05-19 国信优易数据有限公司 Face recognition model training method and device and face recognition method and device
CN111902851A (en) * 2018-03-15 2020-11-06 日本音响工程株式会社 Learning data generation method, learning data generation device, and learning data generation program
CN108960083A (en) * 2018-06-15 2018-12-07 北京邮电大学 Based on automatic Pilot objective classification method combined of multi-sensor information and system
CN109409431A (en) * 2018-10-29 2019-03-01 吉林大学 Multisensor attitude data fusion method and system neural network based
CN109409431B (en) * 2018-10-29 2020-10-09 吉林大学 Multi-sensor attitude data fusion method and system based on neural network
CN110008843B (en) * 2019-03-11 2021-01-05 武汉环宇智行科技有限公司 Vehicle target joint cognition method and system based on point cloud and image data
CN110008843A (en) * 2019-03-11 2019-07-12 武汉环宇智行科技有限公司 Combine cognitive approach and system based on the vehicle target of cloud and image data
CN110021036A (en) * 2019-04-13 2019-07-16 北京环境特性研究所 Infrared target detection method, apparatus, computer equipment and storage medium
CN110021036B (en) * 2019-04-13 2021-03-16 北京环境特性研究所 Infrared target detection method and device, computer equipment and storage medium
CN112114303A (en) * 2020-08-17 2020-12-22 安徽捷纳森电子科技有限公司 Method for identifying sand stealer through unidirectional passive array control scanning
CN112668454A (en) * 2020-12-25 2021-04-16 南京华格信息技术有限公司 Bird micro-target identification method based on multi-sensor fusion
CN113239829B (en) * 2021-05-17 2022-10-04 哈尔滨工程大学 Cross-dimension remote sensing data target identification method based on space occupation probability characteristics
CN113239829A (en) * 2021-05-17 2021-08-10 哈尔滨工程大学 Cross-dimension remote sensing data target identification method based on space occupation probability characteristics
CN113538357A (en) * 2021-07-09 2021-10-22 同济大学 Shadow interference resistant road surface state online detection method
CN113538357B (en) * 2021-07-09 2022-10-25 同济大学 Shadow interference resistant road surface state online detection method
CN113642627A (en) * 2021-08-09 2021-11-12 中国人民解放军海军航空大学航空作战勤务学院 Image and decision multi-source heterogeneous information fusion identification method and device based on deep learning
CN113642627B (en) * 2021-08-09 2024-03-08 中国人民解放军海军航空大学航空作战勤务学院 Deep learning-based image and decision multi-source heterogeneous information fusion identification method and device
CN114863556A (en) * 2022-04-13 2022-08-05 上海大学 Multi-neural-network fusion continuous action recognition method based on skeleton posture
CN114863556B (en) * 2022-04-13 2024-07-19 上海大学 Multi-neural network fusion continuous action recognition method based on skeleton gesture

Similar Documents

Publication Publication Date Title
CN103984936A (en) Multi-sensor multi-feature fusion recognition method for three-dimensional dynamic target recognition
CN111091105B (en) Remote sensing image target detection method based on new frame regression loss function
CN107392964B (en) The indoor SLAM method combined based on indoor characteristic point and structure lines
CN109559338B (en) Three-dimensional point cloud registration method based on weighted principal component analysis method and M estimation
CN106991368A (en) A kind of finger vein checking personal identification method based on depth convolutional neural networks
CN109543606A (en) A kind of face identification method that attention mechanism is added
CN104063702B (en) Three-dimensional gait recognition based on shielding recovery and partial similarity matching
CN105404894B (en) Unmanned plane target tracking method and its device
CN105205453B (en) Human eye detection and localization method based on depth self-encoding encoder
CN105512680A (en) Multi-view SAR image target recognition method based on depth neural network
CN107833249A (en) A kind of carrier-borne aircraft landing mission attitude prediction method of view-based access control model guiding
CN105046710A (en) Depth image partitioning and agent geometry based virtual and real collision interaction method and apparatus
CN104392228A (en) Unmanned aerial vehicle image target class detection method based on conditional random field model
CN104298974A (en) Human body behavior recognition method based on depth video sequence
CN108520203A (en) Multiple target feature extracting method based on fusion adaptive more external surrounding frames and cross pond feature
Liu et al. R2YOLOX: A lightweight refined anchor-free rotated detector for object detection in aerial images
CN104751111A (en) Method and system for recognizing human action in video
CN105488541A (en) Natural feature point identification method based on machine learning in augmented reality system
CN108320051A (en) A kind of mobile robot dynamic collision-free planning method based on GRU network models
CN110348310A (en) A kind of Hough ballot 3D colour point clouds recognition methods
CN112446253B (en) Skeleton behavior recognition method and device
CN103942786A (en) Self-adaptation block mass target detecting method of unmanned aerial vehicle visible light and infrared images
CN107247917A (en) A kind of airplane landing control method based on ELM and DSmT
CN104008374B (en) Miner&#39;s detection method based on condition random field in a kind of mine image
CN113313824A (en) Three-dimensional semantic map construction method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140813