CN109766905B - Target grouping method based on self-organizing feature mapping network - Google Patents

Target grouping method based on self-organizing feature mapping network

Info

Publication number
CN109766905B
Authority
CN
China
Prior art keywords: target, vector, distance, grouping, sensor data
Prior art date
Legal status
Active
Application number
CN201811200842.6A
Other languages
Chinese (zh)
Other versions
CN109766905A (en)
Inventor
黄震宇
白娟
张振兴
杨任农
王栋
Current Assignee
Air Force Engineering University of PLA
Original Assignee
Air Force Engineering University of PLA
Priority date
Filing date
Publication date
Application filed by Air Force Engineering University of PLA
Priority to CN201811200842.6A
Publication of CN109766905A
Application granted
Publication of CN109766905B
Legal status: Active
Anticipated expiration

Classifications

    • Y — General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02 — Technologies or applications for mitigation or adaptation against climate change
    • Y02D — Climate change mitigation technologies in information and communication technologies [ICT], i.e. information and communication technologies aiming at the reduction of their own energy use
    • Y02D30/00 — Reducing energy consumption in communication networks
    • Y02D30/70 — Reducing energy consumption in communication networks in wireless communication networks


Abstract

A target grouping method based on a self-organizing feature mapping (SOM) network is disclosed, comprising the following steps: read the data obtained by the sensors at the current moment; clean the read sensor data; introduce an SOM to group the processed data, calculating the distance between neurons and sensor data with a hybrid calculation method and checking grouping accuracy with a normalized confidence value (CV); evaluate the target grouping and correct it in time according to the actual situation; output the target grouping result and repeat the process. Because data cleaning is carried out before target grouping, noise interference is effectively filtered and grouping accuracy improves; the hybrid distance effectively reflects the differences between targets, further improving accuracy; introducing the SOM removes the key need to specify the number of groups in advance or to set a threshold, improving the accuracy and speed of target grouping to meet practical requirements; and introducing the CV check of the grouping improves the robustness of the algorithm.

Description

Target grouping method based on self-organizing feature mapping network
Technical Field
The invention relates to the field of situation estimation, in particular to a target grouping method based on a self-organizing feature mapping (SOM) network, which can be used in situation estimation, intention identification and command-and-control systems.
Background
Target grouping reliably and effectively groups target information of similar type and data coming from multiple sensors. This improves information recognizability, alleviates information overload, and allows the situation to be grasped quickly so that correct decisions can be made.
At present, typical target clustering methods include K-means, hierarchical clustering methods, genetic algorithms and the like. Wherein:
the K-means method is easy to implement, but the number of clusters must be given in advance and is often inconsistent with the actual situation; the grouping result also depends on the initial cluster centers, so robustness is poor;
the hierarchical clustering algorithm does not need a specified number of groups, but still requires a manually entered threshold; for grouping problems with different measurement scales the threshold must be set separately, and an effective threshold-selection method is lacking;
the genetic algorithm is a classic intelligent algorithm widely applied in engineering, but the number of groups must be set in advance, and its limited global optimization capability can make grouping results unstable.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a target grouping method based on a self-organizing feature mapping (SOM) network which meets real-time requirements, does not need the number of groups to be specified in advance, and can group targets quickly and accurately.
The key technology of the invention is as follows: in the target grouping process, the data are first processed to remove noise interference effectively; then a hybrid calculation method is used to compute the distances between targets, and a self-organizing feature mapping network is introduced to classify the processed data, improving grouping accuracy and speed. The implementation steps comprise:
the invention relates to a target clustering method based on a self-organizing feature mapping network, which comprises the following steps:
step 1, reading data
1.1) Let the initial time k = 1; read the type y_t^k, heading h_t^k, position p_t^k and velocity v_t^k of the t-th target at time k, t = 1, 2, …, N_k, where N_k is the total number of targets at time k;
1.2) To facilitate the description of the target grouping problem, the sensor data of the t-th target at time k are represented by the one-dimensional vector X_t^k = [y_t^k, h_t^k, p_t^k, v_t^k], wherein y_t^k denotes the type, h_t^k the heading, p_t^k the position and v_t^k the velocity of the t-th target at time k; all target sensor data at time k are collected into the set X^k = {X_1^k, X_2^k, …, X_{N_k}^k};
Step 2, data cleaning
2.1) Detect and remove abnormal values in the sensor data with the isolation forest algorithm;
2.2) To quantitatively analyze the relative situation between targets, convert the WGS-84 geodetic coordinates obtained from GPS into the national coordinate system;
2.3) To keep the data ranges uniform, normalize the sensor data:
x* = (x − x_min) / (x_max − x_min)
wherein x* is the normalized sensor data, x is the raw sensor data, x_max is the maximum value of the attribute's sensor data over all targets, and x_min is the minimum value of the attribute's sensor data over all targets;
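The cleaning of step 2 can be sketched in Python. This is a minimal sketch, not the patent's implementation: the isolation forest from scikit-learn stands in for the algorithm the text names, and the `contamination` parameter is an assumption the source does not specify.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def clean_sensor_data(raw, contamination=0.05, seed=0):
    """Step 2 sketch: drop isolation-forest outliers, then min-max normalize.

    `raw` is an (N_k, d) array of continuous target attributes
    (heading, position, velocity).  `contamination` is an assumed
    parameter; the patent does not give a value for it.
    """
    labels = IsolationForest(contamination=contamination,
                             random_state=seed).fit_predict(raw)
    kept = raw[labels == 1]                 # +1 = inlier, -1 = outlier
    lo, hi = kept.min(axis=0), kept.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard against constant columns
    return (kept - lo) / span               # x* = (x - x_min) / (x_max - x_min)
```

The coordinate conversion of step 2.2) is omitted here; it would precede the normalization.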
step 3, training the self-organizing feature mapping network
3.1) Set the number of input-layer neurons to 6 and the competition layer to a planar array of n × n neurons, where n is a non-zero natural number; the number of iterations is n_max. Randomly initialize the n^2 competition-layer weight vectors w_i, i = 1, 2, …, n^2;
3.2) Calculate the distances between the sensor data X^k of the N_k targets at time k and the n^2 competition-layer weight vectors w_i, and determine the best-matching-unit weight vector w_c, specifically:
3.2.1) Initialize t = 1, i = 1 and the distance record library U = ∅;
3.2.2) Calculate the distance L[O_i, O_j] between the t-th target's sensor data X_t^k at time k and the i-th weight vector w_i, wherein O_i, O_j denote vector i and vector j, i = 1, 2, …, n^2, j = 1, 2, …, n^2, specifically:
3.2.2.1) Calculate the discrete-attribute distance D[O_i, O_j] between X_t^k and the i-th weight vector w_i; the discrete attributes comprise only the target type y_t^k:
D[O_i, O_j] = β δ(y_i, y_j)
δ(y_i, y_j) = 0 if y_i = y_j, otherwise δ(y_i, y_j) = 1
wherein D[O_i, O_j] represents the discrete-attribute distance between the i-th vector O_i and the j-th vector O_j at time k, δ(y_i, y_j) represents the unweighted discrete-attribute distance between them, β represents the weight of the target type, and y_i represents the type of the i-th target;
3.2.2.2) Calculate the continuous-attribute distance C[O_i, O_j] between X_t^k and w_i; the continuous attributes comprise the target heading h_t^k, position p_t^k and velocity v_t^k:
C[O_i, O_j] = ( Σ_m ω_m (f_i^m − f_j^m)^2 )^{1/2}
wherein C[O_i, O_j] represents the continuous-attribute distance between the i-th vector O_i and the j-th vector O_j, ω_m is the weight of the m-th continuous attribute and f_i^m is the m-th continuous-attribute value of the i-th target; the attribute weight β and the continuous-attribute weights ω_m are determined with the analytic hierarchy process (AHP) combined with expert opinion;
3.2.2.3) Calculate the distance L[O_i, O_j] between X_t^k and w_i, and add L[O_i, O_j] to U:
L[O_i, O_j] = C[O_i, O_j] + D[O_i, O_j]
wherein L[O_i, O_j] represents the distance between the i-th vector and the j-th vector;
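The mixed distance of steps 3.2.2.1)–3.2.2.3) can be sketched as a small Python function. The weighted-Euclidean form of the continuous part is an assumption (the original formula is an image not reproduced in this text); the vector layout and attribute names are illustrative.

```python
import math

def mixed_distance(o_i, o_j, beta, omega):
    """Sketch of L[O_i, O_j] = D[O_i, O_j] + C[O_i, O_j].

    Each vector is (type, f_1, ..., f_m): one discrete attribute (the
    target type) followed by m continuous attributes (heading, position
    components, velocity).  `beta` is the AHP-derived type weight and
    `omega` the continuous-attribute weights; the weighted-Euclidean
    form of C is an assumption.
    """
    y_i, y_j = o_i[0], o_j[0]
    d = beta * (0.0 if y_i == y_j else 1.0)       # D = beta * delta(y_i, y_j)
    c = math.sqrt(sum(w * (a - b) ** 2            # C: weighted Euclidean distance
                      for w, a, b in zip(omega, o_i[1:], o_j[1:])))
    return d + c                                   # L = C + D
```

Identical vectors give distance 0; two targets differing only in type are exactly β apart.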
3.2.3 Let i = i +1, if i ≦ n 2 Returning to the step 3.2.2) for iteration if i > n 2 Stopping iteration and executing the step 3.2.4);
3.2.4 Let t = t +1, if t ≦ N k Returning to the step 3.2.2) for iteration if t is more than N k Stopping iteration and executing the step 3.2.5);
3.2.5 Based on the distance record library U, selecting the weight vector corresponding to the closest distance of each target sensor data as the best matching unit
Figure RE-GSB0000178598130000015
Figure BSA0000172083220000036
Wherein, B c Representing the weight vector of the c-th best matching unit;
3.2.6) Determine the number n_BMU of best matching units;
3.3) Determine the neuron vectors w_i near the c-th best matching unit B_c, specifically:
3.3.1) Initialize c = 1, i = 1 and the distance record library U = ∅;
3.3.2 C) according to step 3.2.2), the c-th best matching unit B is calculated c And the ith weight vector
Figure BSA0000172083220000039
A distance L [ O ] therebetween c ,O i ]Adding it to U;
3.3.3 Let c = c +1,n BMU Number of best matching units obtained in step 3.2), if c < n BMU Returning to step 3.3.2) for iteration if c = n BMU Determining the ith weight vector according to the distance record library U
Figure BSA00001720832200000310
Emptying U and executing step 3.2.4) by the nearest optimal matching unit;
3.3.4 Let i = i +1,o be the number of neurons except the best matching unit, if i is less than or equal to o, return to step 3.2.2) for iteration, if i > o, stop iteration, execute step 3.4);
3.4) Update the weight vectors w_i of the c-th best matching unit B_c and the neurons in its vicinity; the weight change Δw_i is
Δw_i = α(t)(X_t^k − w_i)
wherein α(t) represents the learning rate, 0 < α(t) < 1;
3.5) Determine whether the maximum number of iterations n_max has been reached; if yes, execute step 4; otherwise return to step 3.2);
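The training loop of step 3 can be sketched in NumPy. This is a simplification of the patent's scheme, stated as assumptions: plain Euclidean distance on normalized continuous attributes instead of the mixed distance, a Gaussian neighborhood over the grid, and a linearly decaying learning rate α(t) with 0 < α(t) < 1.

```python
import numpy as np

def train_som(data, n=4, n_max=10, alpha0=0.5, seed=0):
    """Step 3 sketch: train an n x n SOM on (N, d) normalized data.

    Assumptions vs. the patent: Euclidean distance (not the mixed
    distance), Gaussian neighborhood, linear alpha(t) decay.
    """
    rng = np.random.default_rng(seed)
    w = rng.random((n * n, data.shape[1]))                 # n^2 weight vectors
    grid = np.array([(i // n, i % n) for i in range(n * n)], dtype=float)
    for t in range(n_max):
        alpha = alpha0 * (1.0 - t / n_max)                 # learning rate alpha(t)
        sigma = max(n / 2.0 * (1.0 - t / n_max), 0.5)      # neighborhood radius
        for x in data:
            c = int(np.argmin(np.linalg.norm(w - x, axis=1)))   # best matching unit
            g = np.exp(-np.sum((grid - grid[c]) ** 2, axis=1) / (2 * sigma ** 2))
            w += alpha * g[:, None] * (x - w)              # dw_i = alpha(t)(x - w_i)
    return w

def bmu_index(w, x):
    """Index of the best matching unit for input vector x."""
    return int(np.argmin(np.linalg.norm(w - x, axis=1)))
```

After training, well-separated targets map to units whose weights sit near their respective clusters, which is what the grouping in step 3.2.5) relies on.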
step 4, using the standardized confidence value to check the target grouping result
4.1 By computing an input data vector X input And the c-th best matching unit weight vector B c The minimum quantization error MQE can be obtained from the distance between the input vector and the standard state, and the difference between the input vector and the standard state can be further measured
MQE=L[X input ,B c ]
Wherein, X input Representing an input data vector;
4.2 To be able to reflect the current training level in a compact manner, a standardized confidence value CV in the range of 0 to 1 is proposed on the basis of the minimum quantization error MQE
Figure BSA0000172083220000041
Wherein, c 0 =-MQE 0 1/2 /ln CV 0 ,MQE 0 Denotes MQE, CV under Standard conditions 0 Represents an initial CV value, the CV being between 0 and 1;
4.3 Web learning rules according to self-organizing feature mapping: the more similar the current state characteristic is to the standard state characteristic, the smaller the MQE value is, and the larger the CV value is; when the grouping precision is low or an error grouping occurs, the higher the corresponding MQE value is, the smaller the CV value is; setting a threshold value u, if CV is less than u, indicating that the grouping result is better, executing a step 3.7), if a certain CV is more than or equal to u, checking the grouping result by combining with the practical condition of zc, and correcting in time;
step 5, outputting the target grouping result
5.1 Output all target grouping results;
5.2 Examine sensor data at the next time
Figure BSA0000172083220000042
And if yes, enabling k = k +1, returning to the step 1 for iteration, and otherwise, ending the flow.
The invention has the following advantages:
1) By cleaning the data before target grouping, the invention effectively filters noise interference and improves grouping accuracy;
2) The invention obtains the distances between different targets with a hybrid calculation method that considers both discrete and continuous attributes, effectively reflecting the differences between targets and improving grouping accuracy;
3) By introducing the SOM, the invention solves the key problems of having to specify the number of groups in advance and having to set a threshold, improving the accuracy and speed of target grouping to meet practical requirements; it therefore has practical application value. Introducing the CV check of the grouping further improves the robustness of the algorithm.
Drawings
FIG. 1 is a flow chart of the target grouping method based on a self-organizing feature mapping network in accordance with the present invention;
FIG. 2 is a diagram of target formation;
FIG. 3 illustrates a self-organizing feature mapping network neuron vector visualization;
FIG. 4 illustrates grouping minimum quantization error values;
FIG. 5 illustrates a packet normalized confidence value;
FIG. 6 is a target grouping result for a mapping network based on self-organizing features;
fig. 7 is a three-dimensional situation diagram of the target grouping result.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Referring to fig. 1, the target clustering method based on the self-organizing feature mapping network of the present invention specifically includes the following steps:
step 1, reading data
1.1) Let the initial time k = 1; read the type y_t^k, heading h_t^k, position p_t^k and velocity v_t^k of the t-th target at time k, t = 1, 2, …, N_k, where N_k is the total number of targets at time k;
1.2) To facilitate the description of the target grouping problem, the sensor data of the t-th target at time k are represented by the one-dimensional vector X_t^k = [y_t^k, h_t^k, p_t^k, v_t^k], wherein y_t^k denotes the type, h_t^k the heading, p_t^k the position and v_t^k the velocity of the t-th target at time k; all target sensor data at time k are collected into the set X^k = {X_1^k, X_2^k, …, X_{N_k}^k};
Step 2, data cleaning
2.1) Detect and remove abnormal values in the sensor data with the isolation forest algorithm (Liu FT, Ting KM, Zhou ZH. Isolation forest. Proceedings of the 8th IEEE International Conference on Data Mining; 2008 Dec 15-19; Washington, DC, USA. IEEE Press; 2008. p. 413-22);
2.2 For quantitative analysis of relative situation between targets, the WGS-84 geodetic coordinate system obtained by GPS is converted into a national coordinate system of our country;
2.3 To maintain uniformity of data ranges, sensor data is normalized:
x* = (x − x_min) / (x_max − x_min)
wherein x* is the normalized sensor data, x is the raw sensor data, x_max is the maximum value of the attribute's sensor data over all targets, and x_min is the minimum value of the attribute's sensor data over all targets.
Step 3, training the self-organizing feature mapping network
3.1 Setting the number of neurons in the input layer to be 6, setting the competition layer to be a planar array consisting of n multiplied by n neurons (n is a non-zero natural number), and setting the iteration number to be n max Next, the process is carried out. Random initialization of n 2 Weight vector of each competition layer
Figure BSA0000172083220000061
Figure BSA0000172083220000062
3.2 Calculate k time N k Sensor data X of individual target k And n 2 Weight vector of each competition layer
Figure BSA0000172083220000063
The distance between the two units, and determining the weight vector w of the optimal matching unit c The method specifically comprises the following steps:
3.2.1 Initializing t =1,i =1, distance record base
Figure BSA0000172083220000064
3.2.2 To calculate the tth target sensor data at time k
Figure BSA0000172083220000065
And the ith weight vector
Figure BSA0000172083220000066
A distance L [ O ] therebetween i ,O j ]In which O is i 、O j Representing vector i and vector j, i =1,2, ·, n, respectively 2 ,j=1,2,...,n 2 The method specifically comprises the following steps:
3.2.2.1) Calculate the discrete-attribute distance D[O_i, O_j] between X_t^k and the i-th weight vector w_i; the discrete attributes comprise only the target type y_t^k:
D[O_i, O_j] = β δ(y_i, y_j)
δ(y_i, y_j) = 0 if y_i = y_j, otherwise δ(y_i, y_j) = 1
wherein D[O_i, O_j] represents the discrete-attribute distance between the i-th vector O_i and the j-th vector O_j at time k, δ(y_i, y_j) represents the unweighted discrete-attribute distance between them, β represents the weight of the target type, and y_i represents the type of the i-th target;
3.2.2.2) Calculate the continuous-attribute distance C[O_i, O_j] between X_t^k and w_i; the continuous attributes comprise the target heading h_t^k, position p_t^k and velocity v_t^k:
C[O_i, O_j] = ( Σ_m ω_m (f_i^m − f_j^m)^2 )^{1/2}
wherein C[O_i, O_j] represents the continuous-attribute distance between the i-th vector O_i and the j-th vector O_j, ω_m is the weight of the m-th continuous attribute and f_i^m is the m-th continuous-attribute value of the i-th target; the attribute weight β and the continuous-attribute weights ω_m are determined with the analytic hierarchy process (AHP) combined with expert opinion;
3.2.2.3) Calculate the distance L[O_i, O_j] between X_t^k and w_i, and add L[O_i, O_j] to U:
L[O_i, O_j] = C[O_i, O_j] + D[O_i, O_j]
wherein L[O_i, O_j] represents the distance between the i-th vector and the j-th vector.
3.2.3 Let i = i +1, if i ≦ n 2 Returning to the step 3.2.2) for iteration if i > n 2 Stopping iteration and executing the step 3.2.4);
3.2.4 Let t = t +1, if t ≦ N k Returning to the step 3.2.2) for iteration if t is more than N k Stopping iteration and executing the step 3.2.5);
3.2.5 According to the distance record library U, selecting the weight vector corresponding to the closest distance of each target sensor data as an optimal matching unit;
Figure BSA0000172083220000071
wherein, B c Representing the weight vector of the c-th best matching unit;
3.2.6) Determine the number n_BMU of best matching units;
3.3 Determine the c-th best matching unit B c Nearby neuron vector
Figure BSA0000172083220000072
The method specifically comprises the following steps:
3.3.1 Initialize c =1,i =1, distance record base
Figure BSA0000172083220000073
3.3.2 According to step 3.2.2)) the c-th best matching unit B is calculated c And the ith weight vector
Figure BSA0000172083220000074
A distance L [ O ] therebetween c ,O i ]Adding it to U;
3.3.3 Let c = c +1,n BMU The number of best matching units obtained in step 3.2) if c < n BMU And returning to the step 3.3.2) for iteration, and if c = n BMU Determining the ith weight vector according to the distance record library U
Figure BSA0000172083220000075
Emptying U from the nearest optimal matching unit, and executing step 3.2.4);
3.3.4 Let i = i +1,o be the number of neurons except the best matching unit, if t is less than or equal to o, return to step 3.2.2) for iteration, if t > o, stop iteration, execute step 3.4);
3.4 Update the c-th best matching unit B c Weight vector of neurons in the vicinity thereof
Figure BSA0000172083220000076
Weight variation Δ w i Is composed of
Figure BSA0000172083220000077
Wherein alpha (t) represents a learning rate, 0 < alpha (t) < 1;
3.5) Determine whether the maximum number of iterations n_max has been reached; if yes, execute step 4; otherwise return to step 3.2).
Step 4, using the standardized confidence value to check the target grouping result
4.1 By computing an input data vector X input And the c-th best matching unit weight vector B c The minimum quantization error MQE can be obtained from the distance between the input vector and the standard state, and the difference between the input vector and the standard state can be further measured
MQE=L[X input ,B c ]
Wherein, X input Representing an input data vector;
4.2 To be able to reflect the current training level in a compact manner, a normalized confidence value CV in the range of 0-1 is proposed on the basis of the minimum quantization error MQE
Figure BSA0000172083220000081
Wherein, c 0 =-MQE 0 1/2 /ln CV 0 ,MQE 0 Denotes MQE, CV under Standard State 0 Represents an initial CV value, the CV being between 0 and 1;
4.3 Web learning rules according to self-organizing feature mapping: the more similar the current state features are to the standard state features, the smaller the MQE value, the larger the CV value. When the grouping precision is low or error grouping occurs, the higher the corresponding MQE value is, the smaller the CV value is; setting a threshold value u, if CV is less than u, indicating that the grouping result is better, executing a step 3.7), and if a certain CV is more than or equal to u, checking the grouping result by combining with the practical condition of zc and correcting in time.
Step 5, outputting the target grouping result
5.1 Output all target grouping results;
5.2 Examine sensor data at the next time
Figure BSA0000172083220000082
And if yes, enabling k = k +1, returning to the step 1 for iteration, and otherwise, ending the flow.
The effect of the invention can be further illustrated by the following simulation experiment:
1. simulation conditions
Simulation environment: the computer uses an Intel Xeon(R) E5 CPU with 4 GB RAM; the software uses the PyCharm simulation experiment platform and the TensorFlow deep learning library.
Simulation parameters: the number of input-layer neurons is 6, the competition layer is a planar array of 4 × 4 neurons, and the number of iterations is 10. To improve operating efficiency, a single-layer SOM neural network is used for training.
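The simulation configuration above (6 input neurons, a 4 × 4 competition layer, 10 iterations) can be instantiated as a minimal sketch; the uniform random weight initialization is an assumption, as the text does not specify one.

```python
import numpy as np

# Simulation parameters from the text: 6 input-layer neurons,
# a 4 x 4 competition layer, 10 training iterations.
N_INPUT, N_GRID, N_MAX = 6, 4, 10

rng = np.random.default_rng(0)
weights = rng.random((N_GRID * N_GRID, N_INPUT))  # 16 competition-layer weight vectors

def bmu(x):
    """Index of the best matching unit for a 6-dimensional input vector x."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))
```

Each cleaned, normalized target vector is 6-dimensional, so it maps to one of the 16 competition-layer units.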
2. Simulation method
The method comprises the following steps: the method of the invention;
the method 2 comprises the following steps: the K-means method;
the method 3 comprises the following steps: a Chameleon hierarchical clustering algorithm;
the method 4 comprises the following steps: ant lion optimization algorithm;
3. emulated content and results
Simulation 1: grouping targets with method 1
To verify the validity of the self-organizing feature mapping network, experimental verification was performed on 10 data sets. One set of data is shown in Table 1.
TABLE 1
[Table 1: target sensor data (rendered as an image in the source)]
As can be seen from fig. 2 and table 1, there are 19 targets in total, which cooperate with one another with the aim of destroying our important resources. The targets fall roughly into 6 groups: group 1 mainly executes the early protection task in preparation for group 6 destroying the target resources; groups 2 and 4 set out from the two sides to ensure a safe return and to coordinate with group 1; and group 5 mainly provides information and guidance for all targets.
From fig. 3, the neuron vectors of the competitive layer can be observed, which is helpful for analyzing the target grouping situation.
From fig. 4 and fig. 5 it can be seen that, apart from the 13th point, whose MQE is higher and CV value lower, the MQE and CV values of the remaining points are normal. This indicates that the SOM neural network is well trained and that only one point may be an abnormal classification. Next, a specific analysis is performed on the classification results; the target grouping result is shown in fig. 6, where the ordinate corresponds to the serial number of the best matching unit.
As can be seen from fig. 6, the 19 targets are grouped into 6 classes, one of which contains only one target, namely the target with anomalous MQE and CV values. Analysis shows that this target belongs to the sixth group, stays at the rear providing information support for the other targets, and differs greatly from them in position and type; it should therefore be grouped into a class of its own. Compared with fig. 2, the classification result of the SOM neural network is completely consistent with the actual target situation, which shows that the SOM neural network can group targets quickly and accurately. Meanwhile, the MQE and CV values play a good auxiliary analysis role. A three-dimensional map of the target groupings is shown in fig. 7.
Simulation 2: multiple groups of experiments are performed with the four methods. Each method is run 20 times, and the grouping accuracy, average running time and peak memory occupation are counted; the grouping results are shown in Table 2:
TABLE 2
[Table 2: grouping accuracy, average running time and peak memory occupation of the four methods (rendered as an image in the source)]
As can be seen from the statistics in Table 2: the classification result of method 2 is influenced by the initial cluster centers, so its grouping result is unstable and its grouping accuracy low; method 3 requires a threshold to be set, different thresholds cannot be set accurately for different target conditions, and its accuracy is moderate; method 4 is a swarm intelligence algorithm with strong optimization capability and high accuracy, but the number of groups must be specified in advance and each iteration is slow, so its efficiency is low. The invention groups targets automatically through the self-organizing feature mapping network with high operating efficiency, uses a hybrid calculation method in place of the Euclidean distance, and checks the grouping result with CV values, further improving the accuracy and robustness of grouping; it therefore has practical application value.

Claims (1)

1. The target grouping method based on the self-organizing feature mapping network comprises the following steps:
step 1, reading data
1.1) Let the initial time k = 1; read the type y_t^k, heading h_t^k, position p_t^k and velocity v_t^k of the t-th target at time k, t = 1, 2, …, N_k, where N_k is the total number of targets at time k;
1.2) To facilitate the description of the target grouping problem, the sensor data of the t-th target at time k are represented by the one-dimensional vector X_t^k = [y_t^k, h_t^k, p_t^k, v_t^k], wherein y_t^k denotes the type, h_t^k the heading, p_t^k the position and v_t^k the velocity of the t-th target at time k; all target sensor data at time k are collected into the set X^k = {X_1^k, X_2^k, …, X_{N_k}^k};
Step 2, data cleaning
2.1 Selecting abnormal values in the sensor data detected by an isolated forest algorithm;
2.2 For quantitative analysis of relative situation between targets, the WGS-84 geodetic coordinate system obtained by GPS is converted into the national coordinate system of our country;
2.3 To maintain the unity of the data range, the sensor data is normalized:
x* = (x − x_min) / (x_max − x_min)
wherein x* is the normalized sensor data, x is the raw sensor data, x_max is the maximum value of the attribute's sensor data over all targets, and x_min is the minimum value of the attribute's sensor data over all targets;
step 3, training the self-organizing feature mapping network
3.1 Setting the number of neurons in the input layer to be 6, setting the competition layer to be a planar array consisting of n multiplied by n neurons, wherein n is a non-zero natural number, and the iteration number is n max Secondly; random initialization of n 2 Weight vector of each competition layer
Figure FSA00001720832100000112
3.2 Calculate k time N k Sensor data X of individual target k And n 2 Weight vector of each competition layer
Figure FSA00001720832100000120
The best matching unit weight vector w is determined c The method specifically comprises the following steps:
3.2.1 Initializing t =1,i =1, distance record base
Figure FSA00001720832100000113
3.2.2) Calculate the distance L[O_i, O_j] between the t-th target's sensor data X_t^k at time k and the i-th weight vector w_i, wherein O_i, O_j denote vector i and vector j, i = 1, 2, …, n^2, j = 1, 2, …, n^2, specifically:
3.2.2.1) calculate the discrete-attribute distance D[O_i, O_j] between X_t^k and the i-th weight vector w_i; the discrete attributes comprise only the object type y:

D[O_i, O_j] = β · δ(y_i, y_j)

δ(y_i, y_j) = 0 if y_i = y_j; δ(y_i, y_j) = 1 if y_i ≠ y_j

wherein D[O_i, O_j] represents the discrete-attribute distance between the i-th vector O_i and the j-th vector O_j at time k, δ(y_i, y_j) represents the unweighted discrete-attribute distance between the i-th vector O_i and the j-th vector O_j, β represents the weight of the object type y, and y_i represents the object type of the i-th object;
3.2.2.2) calculate the continuous-attribute distance C[O_i, O_j] between X_t^k and w_i; the continuous attributes comprise the target heading h, position p and velocity v:

C[O_i, O_j] = ( Σ_k ω_k (x_i^k − x_j^k)² )^(1/2)

wherein C[O_i, O_j] represents the continuous-attribute distance between the i-th vector O_i and the j-th vector O_j, ω_k represents the weight of the k-th continuous attribute, and x_i^k represents the k-th continuous-attribute value of the i-th target; the weight β of the discrete attribute and the weights ω_k of the continuous attributes are determined by the Analytic Hierarchy Process (AHP) in combination with expert opinions;
3.2.2.3) calculate the distance L[O_i, O_j] between X_t^k and w_i, and add L[O_i, O_j] to U:

L[O_i, O_j] = C[O_i, O_j] + D[O_i, O_j]

wherein L[O_i, O_j] represents the distance between the i-th vector and the j-th vector;
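A minimal sketch of the mixed discrete/continuous distance of steps 3.2.2.1)–3.2.2.3). The weights β and ω_k below are placeholder values (the patent derives them with AHP and expert opinion), the attribute names are hypothetical, and the weighted-Euclidean form of C is one common reading of the continuous-attribute distance whose exact expression is rendered as an image in the source:

```python
import math

def mixed_distance(a, b, beta=0.4, omega=(0.2, 0.2, 0.2)):
    """Distance L = C + D between two target vectors.

    a, b: dicts with a discrete 'type' plus continuous 'heading',
    'position' (scalar here for simplicity) and 'speed' attributes,
    all already normalized into [0, 1].
    beta: weight of the discrete type attribute (AHP-derived in the patent).
    omega: weights of the three continuous attributes.
    """
    # Discrete part: D = beta * delta(y_i, y_j)
    d = beta * (0.0 if a["type"] == b["type"] else 1.0)
    # Continuous part: weighted Euclidean distance over the continuous attributes
    keys = ("heading", "position", "speed")
    c = math.sqrt(sum(w * (a[k] - b[k]) ** 2 for w, k in zip(omega, keys)))
    return c + d

t1 = {"type": "fighter", "heading": 0.1, "position": 0.5, "speed": 0.8}
t2 = {"type": "fighter", "heading": 0.1, "position": 0.5, "speed": 0.8}
t3 = {"type": "bomber",  "heading": 0.9, "position": 0.2, "speed": 0.3}
print(mixed_distance(t1, t2))  # identical targets -> 0.0
print(mixed_distance(t1, t3))  # differing type adds the discrete penalty beta
```

Because the discrete and continuous parts are simply summed, β directly controls how strongly a type mismatch separates two targets relative to their kinematic differences.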
3.2.3) let i = i + 1; if i ≤ n², return to step 3.2.2) and iterate; if i > n², stop the iteration and execute step 3.2.4);
3.2.4) let t = t + 1; if t ≤ N_k, return to step 3.2.2) and iterate; if t > N_k, stop the iteration and execute step 3.2.5);
3.2.5) based on the distance record library U, select for each target's sensor data the weight vector at the smallest distance as the best matching unit:

B_c = argmin_i L[X_t^k, w_i]

wherein B_c represents the weight vector of the c-th best matching unit;
3.2.6) count the number n_BMU of best matching units (BMU);
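Best-matching-unit selection over the competition layer (steps 3.2.1)–3.2.6)) amounts to an argmin per target; a sketch, with a generic `distance` callable standing in for the patent's L and a toy 1-D layer:

```python
def find_bmus(targets, weights, distance):
    """For each target vector, return the index of its nearest weight vector
    (its best matching unit, BMU), plus the sorted set of distinct BMUs found."""
    bmu_index = []
    for x in targets:
        dists = [distance(x, w) for w in weights]      # distance record for this target
        bmu_index.append(min(range(len(dists)), key=dists.__getitem__))
    return bmu_index, sorted(set(bmu_index))           # n_BMU = len(distinct set)

# Toy 1-D example with absolute difference standing in for L
targets = [0.1, 0.9, 0.15]
weights = [0.0, 0.5, 1.0]
idx, bmus = find_bmus(targets, weights, lambda a, b: abs(a - b))
print(idx, bmus)  # -> [0, 2, 0] [0, 2]
```

Here n_BMU = 2: targets sharing a BMU fall into the same group, which is exactly what the later output step reports.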
3.3) determine the neuron vectors w_i near the c-th best matching unit B_c, specifically:
3.3.1) initialize c = 1, i = 1, and an empty distance record library U;
3.3.2) according to step 3.2.2), calculate the distance L[O_c, O_i] between the c-th best matching unit B_c and the i-th weight vector w_i, and add it to U;
3.3.3) let c = c + 1, where n_BMU is the number of best matching units obtained in step 3.2); if c < n_BMU, return to step 3.3.2) and iterate; if c = n_BMU, determine from the distance record library U the best matching unit closest to the i-th weight vector w_i, empty U, and execute step 3.3.4);
3.3.4) let i = i + 1, where o is the number of neurons other than the best matching units; if i ≤ o, return to step 3.3.2) and iterate; if i > o, stop the iteration and execute step 3.4);
3.4) update the weight vectors w_i of the c-th best matching unit B_c and of the neurons in its vicinity; the weight variation Δw_i is

Δw_i = α(t) (X_t^k − w_i)

wherein α(t) represents the learning rate, 0 < α(t) < 1;
3.5) determine whether the maximum number of iterations n_max has been reached; if yes, execute step 4; otherwise, return to step 3.2);
step 4, checking the target grouping result with the standardized confidence value
4.1) the minimum quantization error MQE is obtained by computing the distance between the input data vector X_input and the c-th best-matching-unit weight vector B_c, and measures the deviation of the input vector from the standard state:

MQE = L[X_input, B_c]

wherein X_input represents the input data vector;
4.2) to reflect the current training level in a compact manner, a standardized confidence value CV in the range 0–1 is proposed on the basis of the minimum quantization error MQE:

CV = exp(−MQE^(1/2) / c_0)

wherein c_0 = −MQE_0^(1/2) / ln CV_0, MQE_0 denotes the MQE under the standard condition, CV_0 denotes the initial CV value, and CV lies between 0 and 1;
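Assuming the form CV = exp(−√MQE / c_0) with c_0 = −√MQE_0 / ln CV_0 (the exact expression is rendered as an image in the source; this reading makes CV equal CV_0 at the standard-state MQE_0, consistent with the stated definition of c_0), the confidence check can be sketched as:

```python
import math

def confidence_value(mqe, mqe0=0.01, cv0=0.9):
    """Standardized confidence value in (0, 1]; equals cv0 when mqe == mqe0.
    mqe0 and cv0 here are illustrative calibration values, not from the patent."""
    c0 = -math.sqrt(mqe0) / math.log(cv0)   # calibration constant from the standard state
    return math.exp(-math.sqrt(mqe) / c0)

print(confidence_value(0.01))   # at the standard-state MQE the CV equals cv0 = 0.9
print(confidence_value(0.25))   # larger quantization error -> smaller CV
```

Larger MQE (poorer match between the input and its BMU) yields a smaller CV, which is what the threshold test in step 4.3) exploits.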
4.3) according to the learning rule of the self-organizing feature mapping network: the more similar the current state feature is to the standard state feature, the smaller the MQE value and the larger the CV value; when the grouping precision is low or an erroneous grouping occurs, the MQE value is higher and the CV value smaller; set a threshold u: if every CV ≥ u, the grouping result is good and step 5 is executed; if a certain CV < u, check the grouping result against the actual situation and correct it in time;
step 5, outputting the target grouping result
5.1 Output all target grouping results;
5.2) check whether sensor data X^{k+1} exist at the next time; if yes, let k = k + 1 and return to step 1 to iterate; otherwise, end the process.
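The overall flow of steps 1–5 reduces to a per-time-step loop: clean and normalize, train the competition layer, then report each target's BMU as its group. The sketch below is schematic (simple Euclidean distance stands in for the mixed distance L, and the neighbourhood update and CV check are omitted):

```python
def group_targets(stream, weights, distance, alpha=0.5, n_max=10):
    """Schematic grouping loop over time steps.

    stream: iterable of per-time-step lists of (cleaned, normalized) target vectors.
    weights: initial competition-layer weight vectors, updated in place.
    Returns, per time step, the BMU index assigned to each target (its group).
    """
    results = []
    for targets in stream:                       # step 5.2: advance to the next time k
        for _ in range(n_max):                   # step 3: iterate SOM training
            for x in targets:
                dists = [distance(x, w) for w in weights]
                bmu = min(range(len(dists)), key=dists.__getitem__)
                # step 3.4: pull the BMU toward the input (neighbours omitted here)
                weights[bmu] = [w + alpha * (xj - w)
                                for w, xj in zip(weights[bmu], x)]
        grouping = []
        for x in targets:                        # step 5.1: output one group per target
            dists = [distance(x, w) for w in weights]
            grouping.append(min(range(len(dists)), key=dists.__getitem__))
        results.append(grouping)
    return results

def euclid(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

stream = [[[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]]]   # one time step, three targets
groups = group_targets(stream, [[0.0, 0.1], [0.9, 0.9]], euclid)
print(groups)  # -> [[0, 0, 1]]: the two nearby targets share a group
```

Targets mapped to the same BMU form one group, so the number of distinct BMUs per time step is the number of groups reported.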
CN201811200842.6A 2018-09-28 2018-09-28 Target grouping method based on self-organizing feature mapping network Active CN109766905B (en)


Publications (2)

Publication Number Publication Date
CN109766905A CN109766905A (en) 2019-05-17
CN109766905B (en) 2023-02-28



Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000121586A (en) * 1998-10-15 2000-04-28 Eiji Uchino Metal pipe discrimination method using neural network
WO2005006249A1 (en) * 2003-07-09 2005-01-20 Raptor International Holdings Pty Ltd Method and system of data analysis using neural networks
CN106599927A (en) * 2016-12-20 2017-04-26 中国电子科技集团公司第五十四研究所 Target grouping method based on fuzzy ART division

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Multi-target tracking method based on SOFM neural network; Li Le et al.; Journal of Sichuan Ordnance; 2009-04-25 (No. 04); full text *
Application of self-organizing feature mapping network in target classification and recognition; Kou Yingxin et al.; Fire Control & Command Control; 2009-01-15 (No. 01); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant