CN116030358A — Remote sensing fine granularity classification method for star group distributed parameter feature fusion (Google Patents)

Publication number: CN116030358A (granted as CN116030358B)
Application number: CN202211642236.6A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: network, node, parameter, target
Inventors: 王智睿, 汪越雷, 赵良瑾, 陈凯强, 成培瑞, 牛子清, 王喆超
Assignee (original and current): Aerospace Information Research Institute of CAS
Legal status: Active (granted)

Classifications: Y02D 30/70 — Reducing energy consumption in wireless communication networks
Landscapes: Image Analysis
Abstract

The invention provides a remote sensing fine granularity classification method for star group distributed parameter feature fusion. First, based on the fine-granularity sample set distribution of each node, each node network is independently trained on its own samples in preparation for feature fusion of the node parameters. Then, the network weight of each node network is calculated based on its target sample quality evaluation table, and the network parameters of the K node networks with the highest network weights are fused to obtain global network parameters. Finally, the global network parameters are fused with the grouping network parameters corresponding to each major class, and the resulting fusion grouping parameters are sent to the corresponding node networks. This improves the generalization performance of the node networks while preserving their individualized performance, improves the classification precision of each node network, and provides robustness and universality.

Description

Remote sensing fine granularity classification method for star group distributed parameter feature fusion
Technical Field
The invention relates to the technical field of aerospace and computers, in particular to a remote sensing fine granularity classification method for star group distributed parameter feature fusion.
Background
Networked observation by remote sensing platforms is an important trend in the development of current remote sensing satellites. Space observation platforms are steadily increasing in number, and spacecraft with the same mission often operate densely in constellation form, such as the Gaofen series and the BeiDou satellites, so multi-platform cooperation for constellation missions needs to be studied. Existing ground-based systems for processing remote sensing data must wait for data to be downloaded from the satellite platform; their remote sensing image interpretation is slow and cannot satisfy the requirement of real-time processing of remote sensing information. As the on-board processing capability of remote sensing satellites increases, a lightweight network can be deployed on the satellite itself so that the latest imaging results can be detected and learned from in real time. However, a lightweight network also suffers from insufficient data, poor interpretation performance, and weak generalization. Therefore, end-cloud cooperation of remote sensing platforms is one of the important research subjects in the current remote sensing field and has great practical significance.
Disclosure of Invention
Aiming at the technical problems, the invention adopts the following technical scheme:
the embodiment of the invention provides a remote sensing fine granularity classification method for star-group-oriented distributed parameter feature fusion, wherein a distributed network comprises a central network deployed at a ground control end and n node networks deployed on n aircrafts, the central network is in communication connection with the n node networks, and the network structure of each node network is the same and comprises m network parameters; the n aircrafts are used for detecting targets belonging to M major classes, and each node network is used for detecting targets belonging to the same major class; the method comprises the following steps:
S100, for any node network N_i, train N_i with its corresponding sample image set IMG_i, and take the network parameters obtained when the iteration count reaches C0 as the initial network parameters of N_i; execute S400;
S200, for any node network N_i, if the fusion grouping parameters sent by the central network are received, set C=0 and update the current network parameters of N_i with the received fusion grouping parameters; execute S300; i takes values from 1 to n;
S300, train N_i with its corresponding sample image set IMG_i based on the current network parameters of N_i; if N_i converges, take the current network parameters of N_i as its target network parameters; otherwise, execute S400;
S400, set C=C+1; if C > C0, execute S500; otherwise, update the current network parameters of N_i and execute S300;
S500, send the target sample quality evaluation table Ti of N_i under the current network parameters and the number y(i) of categories contained in IMG_i to the central network; the k-th column of Ti comprises (C_ik, S_ik), where C_ik is the k-th subcategory contained in IMG_i and S_ik is the weight corresponding to C_ik; k takes values from 1 to f(i), where f(i) is the number of subcategories contained in IMG_i;
S600, based on the acquired Ti and y(i), acquire the network weight w_i = f(S_ik, y(i)) of node network N_i and the corresponding subcategories; obtain M1 network weights w_1, w_2, …, w_i, …, w_n and Q major classes; M1 is the number of node networks that currently send a target sample quality evaluation table to the central network; Q is the number of major classes obtained based on the M1 target sample quality evaluation tables;
S700, obtain K target network weights based on the M1 network weights, and acquire the current network parameters sent by the node networks corresponding to the K target network weights;
S800, based on the target network weights, the current network parameters sent by the corresponding node networks, and the Q major classes, obtain the fusion grouping parameters corresponding to each major class and send them to the corresponding node networks; execute S200.
The invention has at least the following beneficial effects:
According to the remote sensing fine granularity classification method for star group distributed parameter feature fusion, first, based on the fine-granularity sample set distribution of each node, each node network is independently trained on its own samples in preparation for feature fusion of the node parameters. Then, the network weight of each node network is calculated based on its target sample quality evaluation table, and the network parameters of the K node networks with the highest network weights are fused to obtain global network parameters. Finally, the global network parameters are fused with the grouping network parameters corresponding to each major class, and the resulting fusion grouping parameters are sent to the corresponding node networks. This improves the generalization performance of the node networks while preserving their individualized performance, improves the classification precision of each node network, and provides robustness and universality.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a remote sensing fine granularity classification method for star-group-oriented distributed parameter feature fusion provided by an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
Fig. 1 is a flowchart of a remote sensing fine granularity classification method for star-group-oriented distributed parameter feature fusion provided by an embodiment of the invention.
The embodiment of the invention provides a remote sensing fine granularity classification method for star-group-oriented distributed parameter feature fusion, which is used for acquiring proper network parameters for each node network of a distributed network.
In an embodiment of the invention, the distributed network comprises a central network which can be deployed on a ground control end and n node networks deployed on n aircrafts. The central network is in communication connection with the n node networks, and each node network has the same network structure and comprises m network parameters. The n aircrafts are used for detecting objects belonging to M major classes, each node network is used for detecting objects belonging to the same major class, namely, each node network is used for detecting objects of one major class.
In an embodiment of the invention, the aircraft may be, for example, a satellite system. The node network may be a lightweight neural network, such as CNN, VGGNet, and ResNet. The M broad categories may be set based on actual needs, and in one illustrative embodiment, may include aircraft, watercraft, buildings, and the like.
In an embodiment of the present invention, each node network may use a 1×1 convolution kernel.
As shown in fig. 1, the method may include the steps of:
S100, for any node network N_i, train N_i with its corresponding sample image set IMG_i, and take the network parameters obtained when the iteration count reaches C0 as the initial network parameters of N_i; S400 is then performed.
In the embodiment of the invention, the sample images in IMG_i are annotated with categories and may comprise sample images corresponding to the subcategories of the major class that N_i detects. The number of sample images may be set based on the actual situation. C0 may be set based on actual needs; in one exemplary embodiment, C0 < 10, preferably 2 ≤ C0 ≤ 5.
Those skilled in the art know that the method for obtaining the initial network parameters of N_i may be prior art; for example, the current network parameters are the network parameters adjusted according to the last training result, i.e., the feature map.
S200, for any node network N_i, if the fusion grouping parameters sent by the central network are received, set C=0 and update the current network parameters with the received fusion grouping parameters; S300 is then performed; i takes values from 1 to n.
S300, based on the current network parameters of node network N_i, train N_i with its corresponding sample image set IMG_i; if N_i converges, take the current network parameters of N_i as its target network parameters; otherwise, S400 is performed.
In the embodiment of the invention, the convergence condition of N_i may be set based on actual needs; for example, it may be that the network loss no longer changes over a set period, or the like.
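The "loss no longer changing for a set period" criterion mentioned above can be sketched as a small monitor; the `patience` and `tol` names and values are illustrative assumptions, not taken from the patent.

```python
from collections import deque

class ConvergenceMonitor:
    """One possible realisation of the convergence test described above: N_i is
    declared converged once the loss has changed by less than `tol` over the
    last `patience` iterations (illustrative parameters, not from the patent)."""

    def __init__(self, patience=5, tol=1e-4):
        self.patience, self.tol = patience, tol
        self.history = deque(maxlen=patience)

    def update(self, loss):
        """Record one loss value; return True once the window is full and flat."""
        self.history.append(loss)
        return (len(self.history) == self.patience
                and max(self.history) - min(self.history) < self.tol)
```

Each node network would call `update` once per training iteration and stop when it returns True.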
In the embodiment of the invention, if N_i does not converge, the target sample quality evaluation table Ti under the current network parameters and the number y(i) of categories contained in IMG_i are sent to the central network.
S400, set C=C+1; if C > C0, S500 is performed; otherwise, the current network parameters of N_i are updated and S300 is performed.
S500, send the target sample quality evaluation table Ti of N_i under the current network parameters and the number y(i) of categories contained in IMG_i to the central network; the k-th column of Ti comprises (C_ik, S_ik), where C_ik is the k-th subcategory contained in IMG_i and S_ik is the weight corresponding to C_ik; k takes values from 1 to f(i), where f(i) is the number of subcategories contained in IMG_i.
Further, in S500, S_ik may be obtained based on the following steps:
S501, randomly acquire d feature maps from the feature maps of C_ik output by N_i and splice them.
d can be set based on actual needs and can be an empirical value. Those skilled in the art will recognize that any method of stitching d feature images falls within the scope of the present invention.
S502, perform dimension-reduction calculation on the spliced feature map to obtain S_ik, i.e., the weight of the spliced feature map.
Those skilled in the art know that any method of performing dimension-reduction calculation on the spliced feature map to obtain S_ik falls within the protection scope of the present invention.
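A minimal sketch of S501–S502, assuming the feature maps are NumPy arrays; the mean-based reduction at the end is a placeholder, since the patent deliberately leaves the dimension-reduction method open.

```python
import numpy as np

def subcategory_score(feature_maps, d=4, seed=0):
    """Sketch of S501-S502: randomly pick d feature maps produced by N_i for
    subcategory C_ik, splice (concatenate) them, and reduce the spliced map
    to a scalar weight S_ik.  The mean-based reduction is an assumption."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(feature_maps), size=d, replace=False)
    stitched = np.concatenate([np.ravel(feature_maps[i]) for i in idx])
    return float(stitched.mean())
```

Any other splicing or reduction (pooling, a learned projection, etc.) would fit the same interface.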
In the embodiment of the present invention, the score in the target sample quality evaluation table initially sent to the central network may be randomly generated, or may be obtained based on S501 to S502.
S600, based on the acquired Ti and y(i), acquire the network weight w_i = f(S_ik, y(i)) of node network N_i and the corresponding subcategories; obtain M1 network weights w_1, w_2, …, w_i, …, w_n and Q major classes; M1 is the number of node networks that currently send a target sample quality evaluation table to the central network; Q is the number of major classes obtained based on the M1 target sample quality evaluation tables.
In one embodiment of the present invention, w_i is computed by a first formula (equation image not reproduced), where D_ik is the number of sample images corresponding to C_ik and h(i) is the number of sample images in IMG_i.
In another exemplary embodiment of the present invention, w_i is computed by a second formula (equation image not reproduced). Compared with the previous embodiment, the network weight here is nonlinearly correlated with the weight of each category, which can improve the calculation accuracy.
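Since both weight formulas survive only as unrendered equation images, the following sketch is an illustrative assumption: a linear form in which the category count y(i) scales a sum of subcategory scores S_ik weighted by each subcategory's sample share R_ik = D_ik/h(i) (the quantity defined in claim 3). It is not the patented formula.

```python
def network_weight(S, D, y_i, h_i):
    """Hedged sketch of a node-network weight w_i = f(S_ik, y(i)) (S600).
    S: subcategory scores S_ik; D: per-subcategory sample counts D_ik;
    y_i: number of categories y(i); h_i: number of sample images h(i).
    The linear combination below is an assumed stand-in for the unrendered
    formula."""
    return y_i * sum(s * d / h_i for s, d in zip(S, D))
```

A nonlinear variant (the second embodiment) could, for example, apply a convex function to each S_ik term before summing.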
In the embodiment of the invention, during the interaction between the node networks and the central network, some node networks may already have converged based on the received fusion grouping parameters and therefore no longer send target sample quality evaluation tables to the central network. The number of target sample quality evaluation tables received by the central network can therefore change from round to round, and likewise the number of major classes acquired each time can change; that is, M1 ≤ n and Q ≤ M.
In the embodiment of the invention, the major class corresponding to N_i may be obtained based on the category distribution in the corresponding target sample quality evaluation table.
S700, obtain K target network weights based on the M1 network weights, and acquire the current network parameters sent by the node networks corresponding to the K target network weights.
In S700, the K target network weights are the first K of the M1 network weights arranged in descending order. Specifically, the M1 network weights are sorted in descending order, and the K network weights in the first K positions are taken as the K target network weights. K may be set based on actual needs; one exemplary embodiment computes K from M1 by an upward-rounding formula (equation image not reproduced, where ⌈·⌉ represents rounding up).
Further, the current network parameters sent by the node network corresponding to each target network weight may be obtained as follows:
S701, for each target network weight, acquire the major class of the targets detected by the corresponding node network, based on that node network's target sample quality evaluation table.
S702, based on the acquired major class of the detected targets, select the corresponding sampling mode to sample the currently output feature map.
The sampling mode corresponding to each major class is set according to actual requirements. Specifically, if the detected target belongs to the aircraft major class, a downsampling mode is selected; if the detected target belongs to the ship major class, the original sampling mode is selected; if the detected target belongs to the building major class, a downsampling mode is selected.
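The class-to-sampling-mode rule above can be sketched as a simple dispatch table; the class names and the stride-2 downsampling are illustrative assumptions (the patent does not specify the downsampling factor).

```python
# Hypothetical mapping from major class to sampling mode, mirroring the rule
# stated above; the stride-2 downsampling is an assumed realisation.
SAMPLING_MODE = {"aircraft": "downsample", "ship": "original", "building": "downsample"}

def sample_feature_map(fmap, major_class):
    """S702 sketch: stride-2 downsampling for aircraft/buildings, pass-through
    ('original sampling') for ships.  fmap is a 2-D list of values."""
    if SAMPLING_MODE.get(major_class) == "downsample":
        return [row[::2] for row in fmap[::2]]
    return fmap
```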
S703, updating the current network parameters based on the sampling result, and taking the updated network parameters as the current network parameters sent to the central network.
Those skilled in the art will recognize that any method of updating current network parameters based on the sampling results is within the scope of the present invention.
The technical effect of S701 to S703 is that the corresponding network parameters can be adjusted based on the category detected by each node network, and the detection accuracy can be improved.
The technical effect of S700 is that only the network parameters of the node networks corresponding to the target network weights are acquired. On one hand, this ensures that the network parameters participating in the fusion calculation come from node networks with better category coverage and category training effect; on the other hand, it reduces the number of communications between the node networks and the central network.
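The top-K selection in S700 can be sketched directly:

```python
def top_k_indices(weights, k):
    """S700 sketch: indices of the K largest network weights, i.e. the node
    networks whose current parameters the central network will request."""
    order = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)
    return order[:k]
```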
S800, based on the target network weight, the current network parameters sent by the node network corresponding to the target network weight and Q major classes, obtaining the fusion packet parameters corresponding to each major class, and sending the fusion packet parameters to the corresponding node network, and executing S200.
Further, S800 may specifically include:
s802, carrying out normalization processing on the K target network weights to obtain K target network weights after normalization processing. The existing normalization mode can be adopted to normalize the weights of the K target networks.
S804, obtaining the global network parameter pw= (PW 1 ,PW 2 ,…,PW j ,…,PW m ) The method comprises the steps of carrying out a first treatment on the surface of the Jth global network parameter
Figure BDA0004008034360000061
Figure BDA0004008034360000062
P uj The current parameter value, w ', of the jth network parameter corresponding to the node network corresponding to the jth network weight in the target network weights' u And the normalized network weight corresponding to the node network corresponding to the u-th network weight in the target network weights. />
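S802 and S804 together amount to a convex combination of the K selected node-parameter vectors. A NumPy sketch (the weighted-sum form is inferred from the normalization constraint, since the original equation is an unrendered image):

```python
import numpy as np

def global_params(node_params, weights):
    """S802 + S804 sketch: normalise the K target network weights (S802) and
    form each global parameter as the weighted sum PW_j = sum_u w'_u * P_uj.
    node_params: K rows of m parameters; weights: K raw network weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                 # normalisation: sum(w') == 1
    return w @ np.asarray(node_params, dtype=float)  # global vector PW, shape (m,)
```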
S806, from the M1 node networks to which fusion grouping parameters currently need to be sent, acquire the node networks N_r = (N_r1, N_r2, …, N_rs, …, N_rg(r)) corresponding to the r-th of the Q major classes, and their corresponding current network weights w_r = (w_r1, w_r2, …, w_rs, …, w_rg(r)); N_rs is the s-th node network corresponding to the r-th major class, w_rs is the current network weight corresponding to N_rs, s takes values from 1 to g(r), and g(r) is the number of node networks corresponding to the r-th major class; r takes values from 1 to Q.
For a given major class, such as aircraft, if a subcategory belonging to that major class appears in the target sample quality evaluation table of a node network, that node network is taken as a node network corresponding to the major class.
S808, acquire the sorted network weights w^up_r = (w^up_r1, w^up_r2, …, w^up_rs, …, w^up_rg(r)) and, based on w^up_r, acquire the initial grouping parameters PGr = (PGr_1, PGr_2, …, PGr_j, …, PGr_m), where the j-th initial grouping parameter is PGr_j = w^up_r1×P^up_r1j + w^up_r2×P^up_r2j + … + w^up_rg(r)×P^up_rg(r)j; P^up_rsj is the current value of the j-th network parameter of the node network corresponding to w^up_rs; w^up_r1 + w^up_r2 + … + w^up_rg(r) = 1 and w^up_r1 ≥ w^up_r2 ≥ … ≥ w^up_rg(r), i.e., w^up_r is obtained by normalizing w_r and sorting the weights.
S810, acquire the fusion grouping parameters PGFr = (PGFr_1, PGFr_2, …, PGFr_j, …, PGFr_m) corresponding to the r-th major class and send them to the corresponding node networks; the j-th fusion grouping parameter is PGFr_j = (1-a)×PW_j + a×PGr_j, where a is a coefficient between 0 and 1; S200 is then performed. In the embodiment of the present invention, preferably, a = 0.5.
In the embodiment of the invention, each fusion grouping parameter is fused with the global network parameter and the initial grouping parameter, so that the existing personalized classification performance of the node network can be adaptively ensured as much as possible, and the generalization capability of the node network is improved.
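The S810 blend is a one-line interpolation between the global parameters PW and the per-class initial grouping parameters PGr:

```python
def fuse_group(pw, pgr, a=0.5):
    """S810: j-th fusion grouping parameter PGFr_j = (1-a)*PW_j + a*PGr_j.
    With the patent's preferred a = 0.5, the global and per-class parameters
    contribute equally."""
    return [(1.0 - a) * x + a * y for x, y in zip(pw, pgr)]
```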
In the embodiment of the present invention, the execution bodies of S100 to S500 are node networks, and the execution bodies of S600 to S800 are central networks.
Further, in the embodiment of the present invention, in S300 the loss of N_i is given by a formula (equation image not reproduced) in which P_t is the predicted class probability of the t-th sample in IMG_i, i.e., the probability of it being detected as belonging to a certain subcategory; TP_t is the true class probability of the t-th sample in IMG_i, i.e., the probability of it actually belonging to a subcategory; PG_i is the fusion grouping parameter corresponding to N_i; h(i) is the number of sample images in IMG_i; and λ is a hyper-parameter.
In the embodiment of the invention, by including in each node network's loss function the distance between its parameters and the fused parameters, the generalization performance of the final result can be improved.
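Since the patented loss formula is an unrendered equation image, the following is only a hedged sketch: a cross-entropy-style data term over the h(i) samples plus a λ-weighted squared distance between the node parameters and the fusion grouping parameters PG_i, which is one standard way to realise the "distance to the fused parameters" idea described above.

```python
import numpy as np

def node_loss(pred_probs, true_probs, params, group_params, lam=0.1):
    """Hedged sketch of the S300 loss for N_i (assumed proximal form, not the
    patent's exact formula): data term + lam * ||params - PG_i||^2."""
    p = np.clip(np.asarray(pred_probs, dtype=float), 1e-12, 1.0)
    data_term = -float(np.mean(np.asarray(true_probs, dtype=float) * np.log(p)))
    prox_term = lam * float(np.sum((np.asarray(params, dtype=float)
                                    - np.asarray(group_params, dtype=float)) ** 2))
    return data_term + prox_term
```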
Further, in another embodiment of the present invention, S808 is replaced with:
S809, acquire the sorted network weights w^up_r = (w^up_r1, w^up_r2, …, w^up_rs, …, w^up_rg(r)) and, based on w^up_r, acquire the initial grouping parameters PGr = (PGr_1, PGr_2, …, PGr_j, …, PGr_m). If g(r) ≤ E, the j-th initial grouping parameter is computed over all g(r) node networks as in S808 (equation image not reproduced), where P^up_rsj is the current value of the j-th network parameter of the node network corresponding to w^up_rs. If g(r) > E, the computation uses only H of the node networks (equation image not reproduced), where H is a set value with H < g(r); preferably, H is obtained from g(r) by an upward-rounding formula (equation image not reproduced). As before, w^up_r1 + w^up_r2 + … + w^up_rg(r) = 1 and w^up_r1 ≥ w^up_r2 ≥ … ≥ w^up_rg(r). E is a set number threshold, which may be an empirical value.
The technical effect of S809 is that, because g(r) is taken into account when computing the initial grouping parameters, not all node-network data need be used when g(r) is large (unlike S808), which reduces the amount of calculation.
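The truncation idea in S809 can be sketched as follows; fusing only the H highest-weighted node networks (and renormalising the retained weights) is our assumed reading of the unrendered formula.

```python
def truncated_group_params(weights_desc, params, E, H):
    """Hedged sketch of S809. weights_desc: normalised weights w^up_rs sorted
    largest-first; params: matching per-network parameter vectors.  If more
    than E node networks map to the class, keep only the H highest-weighted
    (assumed reading), renormalising by the retained weight mass."""
    if len(weights_desc) > E:
        weights_desc, params = weights_desc[:H], params[:H]
    total = sum(weights_desc)
    m = len(params[0])
    return [sum(w * p[j] for w, p in zip(weights_desc, params)) / total
            for j in range(m)]
```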
The remote sensing fine granularity classification method for the star group distributed parameter feature fusion provided by the embodiment of the invention has at least the following advantages:
(1) The network is divided into each node, and meanwhile, network parameters on the nodes have the capability of uploading to a central network, so that the generalization performance of the node network is further improved. The combination of the dispersibility of data acquisition, the lightweight of the node network and the generalization of the central network is fully considered, and the optimal balance point is found between rapid reasoning on the end, the lightweight of parameters and the fusion generalization of the center to the maximum extent.
(2) From the point of multi-terminal data fusion, the characteristic domain division of the star group network is realized. And extracting and summarizing the data distribution characteristics on the node network to form a central network fusion reference index. The method reduces the communication times of the intermediate network, improves the processing efficiency of the information between the characteristic domains, provides accurate quantitative reference for the generalization benefit brought by different node networks to the central network, and greatly improves the performance of the network domain division processing.
(3) From the optimization angle of the central network toward the multi-node networks, dimension-reduction fusion of the star group network is realized. The central parameter network and the distribution characteristics of different categories are adaptively optimized, so that the influence of the central network on the personalized performance of the node networks is reduced while the fusion capability of the node networks is improved. The grouping convolution network can better extract the network performance of different classifications, and dimension-reduction fusion can be effectively performed on categories with different distribution characteristics.
(4) The challenges that growth in node count and parameters brings to data fusion, and the computational complexity of multi-node optimization, are fully considered, and a series of improvements are made to the parameter selection scheme and robustness in the network optimization process. The adaptive loss function accelerates network convergence and effectively improves the extraction and generalization performance of the network for different node counts.
While certain specific embodiments of the invention have been described in detail by way of example, it will be appreciated by those skilled in the art that the above examples are for illustration only and are not intended to limit the scope of the invention. Those skilled in the art will also appreciate that many modifications may be made to the embodiments without departing from the scope and spirit of the invention. The scope of the present disclosure is defined by the appended claims.

Claims (10)

1. A remote sensing fine granularity classification method for star group distributed parameter feature fusion, characterized in that the distributed network comprises a central network deployed at a ground control end and n node networks deployed on n aircraft, the central network is in communication connection with the n node networks, and each node network has the same network structure comprising m network parameters; the n aircraft are used for detecting targets belonging to M major classes, and each node network is used for detecting targets belonging to the same major class; the method comprises the following steps:
S100, for any node network N_i, train N_i with its corresponding sample image set IMG_i, and take the network parameters obtained when the iteration count reaches C0 as the initial network parameters of N_i; execute S400;
S200, for any node network N_i, if the fusion grouping parameters sent by the central network are received, set C=0 and update the current network parameters of N_i with the received fusion grouping parameters; execute S300; i takes values from 1 to n;
S300, train N_i with its corresponding sample image set IMG_i based on the current network parameters of N_i; if N_i converges, take the current network parameters of N_i as its target network parameters; otherwise, execute S400;
S400, set C=C+1; if C > C0, execute S500; otherwise, update the current network parameters of N_i and execute S300;
S500, send the target sample quality evaluation table Ti of N_i under the current network parameters and the number y(i) of categories contained in IMG_i to the central network; the k-th column of Ti comprises (C_ik, S_ik), where C_ik is the k-th subcategory contained in IMG_i and S_ik is the weight corresponding to C_ik; k takes values from 1 to f(i), where f(i) is the number of subcategories contained in IMG_i;
S600, based on the acquired Ti and y(i), acquire the network weight w_i = f(S_ik, y(i)) of node network N_i and the corresponding subcategories; obtain M1 network weights w_1, w_2, …, w_i, …, w_n and Q major classes; M1 is the number of node networks that currently send a target sample quality evaluation table to the central network; Q is the number of major classes obtained based on the M1 target sample quality evaluation tables;
S700, obtain K target network weights based on the M1 network weights, and acquire the current network parameters sent by the node networks corresponding to the K target network weights;
S800, based on the target network weights, the current network parameters sent by the corresponding node networks, and the Q major classes, obtain the fusion grouping parameters corresponding to each major class and send them to the corresponding node networks; execute S200.
2. The method of claim 1, wherein the network weight w_i is given by a first formula (equation image not reproduced).
3. The method of claim 1, wherein the network weight w_i is given by a second formula (equation image not reproduced) in which R_ik = D_ik/h(i), D_ik is the number of sample images corresponding to C_ik, and h(i) is the number of sample images in IMG_i.
4. The method of claim 1, wherein in S700 the K target network weights are the first K of the network weights arranged in descending order.
5. The method according to claim 1, wherein S800 specifically comprises:
s802, carrying out normalization processing on the K target network weights to obtain K target network weights after normalization processing;
s804, obtaining the global network parameter pw= (PW 1 ,PW 2 ,…,PW j ,…,PW m ) The method comprises the steps of carrying out a first treatment on the surface of the Jth global network parameter
Figure FDA0004008034350000021
Figure FDA0004008034350000022
P uj The current parameter value, w ', of the jth network parameter corresponding to the node network corresponding to the jth network weight in the target network weights' u Normalized network weights corresponding to the node network corresponding to the u-th network weight in the target network weights;
S806, from the M1 node networks to which the fusion grouping parameters currently need to be sent, acquiring the node networks N_r = (N_r1, N_r2, …, N_rs, …, N_rg(r)) corresponding to the r-th of the Q major classes and their corresponding current network weights w_r = (w_r1, w_r2, …, w_rs, …, w_rg(r)); N_rs is the s-th node network corresponding to the r-th major class, w_rs is the current network weight corresponding to N_rs, s ranges from 1 to g(r), g(r) is the number of node networks corresponding to the r-th major class, and r ranges from 1 to Q;
S808, acquiring the sorted network weights w^up_r = (w^up_r1, w^up_r2, …, w^up_rs, …, w^up_rg(r)), and based on w^up_r, acquiring the initial grouping parameters PGr = (PGr_1, PGr_2, …, PGr_j, …, PGr_m), wherein the j-th initial grouping parameter
Figure FDA0004008034350000023
Figure FDA0004008034350000024
P^up_rsj is the current value of the j-th network parameter of the node network corresponding to w^up_rs; w^up_r1 + w^up_r2 + … + w^up_rs + … + w^up_rg(r) = 1, and w^up_r1 ≥ w^up_r2 ≥ … ≥ w^up_rs ≥ … ≥ w^up_rg(r);
S810, acquiring the fusion grouping parameters PGFr = (PGFr_1, PGFr_2, …, PGFr_j, …, PGFr_m) corresponding to the r-th major class, and sending them to the corresponding node networks; wherein the j-th fusion grouping parameter PGFr_j = (1 - a) * PW_j + a * PGr_j, a being a coefficient between 0 and 1; S200 is then executed.
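The S802–S810 pipeline can be sketched as below. The per-step formulas for PW and PGr appear only as equation images in the claim, so this sketch assumes both are weight-weighted averages of the nodes' current parameter vectors, which is consistent with the surrounding text but not confirmed by the source; all variable names are illustrative.

```python
import numpy as np

# Hypothetical sketch of the claim-5 fusion under stated assumptions.
def fuse(params_topk, w_topk, params_group, w_group, a):
    w_norm = w_topk / w_topk.sum()        # S802: normalize the K target weights
    PW = w_norm @ params_topk             # S804: global network parameters PW_j
    wg = w_group / w_group.sum()          # sorted group weights summing to 1
    PGr = wg @ params_group               # S808: initial grouping parameters PGr_j
    return (1.0 - a) * PW + a * PGr       # S810: PGFr_j = (1-a)*PW_j + a*PGr_j

params_topk = np.array([[1.0, 2.0], [3.0, 4.0]])  # K=2 nodes, m=2 parameters each
w_topk = np.array([1.0, 1.0])
params_group = np.array([[2.0, 2.0]])             # g(r)=1 node in the r-th class
w_group = np.array([1.0])
pgf = fuse(params_topk, w_topk, params_group, w_group, a=0.5)
print(pgf)
```

The coefficient a trades off the shared global estimate PW against the class-specific estimate PGr, so each major class receives parameters biased toward the nodes that observe that class.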
6. The method of claim 5, wherein S808 is replaced with:
S809, acquiring the sorted network weights w^up_r = (w^up_r1, w^up_r2, …, w^up_rs, …, w^up_rg(r)), and based on w^up_r, acquiring the initial grouping parameters PGr = (PGr_1, PGr_2, …, PGr_j, …, PGr_m); wherein, if g(r) ≤ E, the j-th initial grouping parameter
Figure FDA0004008034350000025
P^up_rsj is the current value of the j-th network parameter of the node network corresponding to w^up_rs; if
Figure FDA0004008034350000031
H is a set value with H < g(r); w^up_r1 + w^up_r2 + … + w^up_rs + … + w^up_rg(r) = 1, and w^up_r1 ≥ w^up_r2 ≥ … ≥ w^up_rs ≥ … ≥ w^up_rg(r); E is a set number threshold.
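Since the two case formulas of S809 are equation images, the following is only one plausible reading: when a major class has at most E member networks, all g(r) of them contribute; when it has more, only the H members with the largest weights do, with their weights renormalized. Names and values are assumptions.

```python
import numpy as np

# Hypothetical sketch of the S809 variant of claim 6 (interpretation, not
# the claimed formula): threshold E gates whether the whole group or only
# the top-H weighted members are averaged.
def grouped_params(params, w_sorted, E, H):
    if len(w_sorted) <= E:
        w = w_sorted / w_sorted.sum()              # use the whole group
        return w @ params
    w = w_sorted[:H] / w_sorted[:H].sum()          # renormalize the top-H weights
    return w @ params[:H]

params = np.array([[1.0], [2.0], [3.0]])           # g(r)=3 node parameter vectors
w_sorted = np.array([0.5, 0.3, 0.2])               # descending node weights
out = grouped_params(params, w_sorted, E=2, H=2)   # g(r) > E, so only top 2 used
print(out)
```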
7. The method according to claim 1, wherein in S700, the current network parameters transmitted by the node network corresponding to each target network weight are obtained by:
S701, acquiring the major class of the target detected by each target network based on the target sample quality evaluation table corresponding to the node network corresponding to each target network weight;
S702, selecting the corresponding sampling mode to sample the currently output feature map based on the acquired major class of the detected target;
S703, updating the current network parameters based on the sampling result, and taking the updated network parameters as the current network parameters sent to the central network.
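S701–S703 make the sampling mode depend on the detected major class, but the claim does not specify which mode goes with which class. The class-to-sampler mapping below is therefore entirely hypothetical, chosen only to illustrate the dispatch structure.

```python
import numpy as np

# Hypothetical class-dependent sampling of a C x H x W feature map (S702).
# The specific pooling choices per class are assumptions, not from the claim.
SAMPLERS = {
    "aircraft": lambda fm: fm.max(axis=(1, 2)),     # e.g. max-pool small targets
    "watercraft": lambda fm: fm.mean(axis=(1, 2)),  # e.g. average-pool large ones
    "building": lambda fm: fm[:, ::2, ::2].mean(axis=(1, 2)),  # strided sampling
}

def sample_features(feature_map, major_class):
    """Apply the sampling mode selected for the detected major class."""
    return SAMPLERS[major_class](feature_map)

fm = np.ones((4, 8, 8))                             # toy 4-channel feature map
v = sample_features(fm, "aircraft")
print(v.shape)                                      # one value per channel
```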
8. The method of claim 7, wherein the M major classes include aircraft, watercraft, and buildings.
9. The method according to claim 1, wherein in S500, S_ik is obtained by the following steps:
S501, randomly acquiring d feature maps from the feature maps of C_ik output by N_i, and splicing them;
S502, performing dimension-reduction calculation on the spliced feature maps to obtain S_ik.
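S501–S502 can be sketched as follows. The claim does not name the dimension-reduction operation, so global average pooling stands in for it here; the function name, map sizes, and d are assumptions.

```python
import numpy as np

# Hypothetical sketch of S501-S502: pick d feature maps for subclass C_ik,
# splice (concatenate) them along the channel axis, then reduce to a compact
# descriptor S_ik. Global average pooling is an assumed stand-in for the
# unspecified dimension-reduction step.
def subclass_descriptor(feature_maps, d, rng):
    picked = rng.choice(len(feature_maps), size=d, replace=False)
    spliced = np.concatenate([feature_maps[i] for i in picked], axis=0)
    return spliced.mean(axis=(1, 2))               # one value per spliced channel

rng = np.random.default_rng(0)
maps = [np.ones((2, 4, 4)) * i for i in range(5)]  # five 2-channel feature maps
s_ik = subclass_descriptor(maps, d=3, rng=rng)
print(s_ik.shape)                                  # d * 2 channels after splicing
```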
10. The method according to claim 1, wherein in S300, the loss of N_i is:
Figure FDA0004008034350000032
Figure FDA0004008034350000033
P_t is the class detection probability of the t-th sample in IMG_i, TP_t is the class true probability of the t-th sample in IMG_i, PG_i is the fusion grouping parameter corresponding to N_i, h(i) is the number of samples in IMG_i, and λ is a hyper-parameter.
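The loss formula itself is an equation image, so the sketch below is only one plausible reading consistent with the listed symbols: a cross-entropy term over the h(i) samples plus a λ-weighted proximal term pulling the node's parameters toward its fusion grouping parameters PG_i (a FedProx-style regularizer). This is an interpretation, not the claimed formula.

```python
import numpy as np

# Hypothetical sketch of the claim-10 node-network loss under stated
# assumptions: cross-entropy + lambda * proximal term toward PG_i.
def node_loss(P, TP, theta, PG, lam):
    ce = -np.mean(np.sum(TP * np.log(P + 1e-12), axis=1))  # cross-entropy term
    prox = lam * np.sum((theta - PG) ** 2)                 # pull toward PG_i
    return ce + prox

P = np.array([[0.9, 0.1], [0.2, 0.8]])   # detected class probabilities P_t
TP = np.array([[1.0, 0.0], [0.0, 1.0]])  # true class probabilities TP_t
theta = np.array([1.0, 2.0])             # current node-network parameters
PG = np.array([1.0, 2.0])                # fusion grouping parameters PG_i
L = node_loss(P, TP, theta, PG, lam=0.1)
print(round(L, 4))
```

Under this reading, λ controls how strongly each node is anchored to the parameters fused by the central network between communication rounds.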
CN202211642236.6A 2022-12-20 2022-12-20 Remote sensing fine granularity classification method for star group distributed parameter feature fusion Active CN116030358B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211642236.6A CN116030358B (en) 2022-12-20 2022-12-20 Remote sensing fine granularity classification method for star group distributed parameter feature fusion


Publications (2)

Publication Number Publication Date
CN116030358A true CN116030358A (en) 2023-04-28
CN116030358B CN116030358B (en) 2023-06-23

Family

ID=86075190

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211642236.6A Active CN116030358B (en) 2022-12-20 2022-12-20 Remote sensing fine granularity classification method for star group distributed parameter feature fusion

Country Status (1)

Country Link
CN (1) CN116030358B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110555446A (en) * 2019-08-19 2019-12-10 北京工业大学 Remote sensing image scene classification method based on multi-scale depth feature fusion and transfer learning
CN112101190A (en) * 2020-09-11 2020-12-18 西安电子科技大学 Remote sensing image classification method, storage medium and computing device
US20220058446A1 (en) * 2019-07-12 2022-02-24 Tencent Technology (Shenzhen) Company Limited Image processing method and apparatus, terminal, and storage medium
WO2022052367A1 (en) * 2020-09-10 2022-03-17 中国科学院深圳先进技术研究院 Neural network optimization method for remote sensing image classification, and terminal and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI YUFENG; GU MANXUAN; ZHAO LIANG: "Remote sensing image target detection method using improved Faster R-CNN", Signal Processing, no. 08, pages 181 - 191 *

Also Published As

Publication number Publication date
CN116030358B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
CN106295714B (en) Multi-source remote sensing image fusion method based on deep learning
CN109919108B (en) Remote sensing image rapid target detection method based on deep hash auxiliary network
CN108615071B (en) Model testing method and device
CN110132263B (en) Star map identification method based on representation learning
CN104268524A (en) Convolutional neural network image recognition method based on dynamic adjustment of training targets
CN111723780B (en) Directional migration method and system of cross-domain data based on high-resolution remote sensing image
CN113554156B (en) Multitask image processing method based on attention mechanism and deformable convolution
CN112001403B (en) Image contour detection method and system
CN112396097B (en) Unsupervised domain self-adaptive visual target detection method based on weighted optimal transmission
Kharkovskii et al. Nonmyopic Gaussian process optimization with macro-actions
CN110852440A (en) Ocean front detection method based on dynamic fuzzy neural network
CN115050022A (en) Crop pest and disease identification method based on multi-level self-adaptive attention
CN116030358B (en) Remote sensing fine granularity classification method for star group distributed parameter feature fusion
CN114694028A (en) Ship weld defect detection method based on convolution confidence generation countermeasure network model
Liu et al. Estimation and fusion for tracking over long-haul links using artificial neural networks
CN114494819B (en) Anti-interference infrared target identification method based on dynamic Bayesian network
CN110569871A (en) saddle point identification method based on deep convolutional neural network
Tabarisaadi et al. An optimized uncertainty-aware training framework for neural networks
CN115909027A (en) Situation estimation method and device
CN115131671A (en) Cross-domain high-resolution remote sensing image typical target fine-grained identification method
CN112015894B (en) Text single class classification method and system based on deep learning
CN114022516A (en) Bimodal visual tracking method based on high rank characteristics and position attention
CN113409351A (en) Unsupervised field self-adaptive remote sensing image segmentation method based on optimal transmission
Guan et al. A terrain matching navigation algorithm for UAV
CN113724325B (en) Multi-scene monocular camera pose regression method based on graph convolution network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant