CN109308524B - BPNN (back propagation neural network) feature identification method based on improved NBA (novel bat algorithm) - Google Patents

BPNN (back propagation neural network) feature identification method based on improved NBA (novel bat algorithm)

Info

Publication number
CN109308524B
CN109308524B (application number CN201811237688.XA)
Authority
CN
China
Prior art keywords
edge
algorithm
nba
characteristic
particle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811237688.XA
Other languages
Chinese (zh)
Other versions
CN109308524A (en)
Inventor
简琤峰
林崇
李苗
张美玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201811237688.XA priority Critical patent/CN109308524B/en
Publication of CN109308524A publication Critical patent/CN109308524A/en
Application granted granted Critical
Publication of CN109308524B publication Critical patent/CN109308524B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/086: Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Physiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a BPNN feature recognition method based on an improved NBA algorithm, which comprises the steps of preprocessing the face-edge adjacency graph, extracting the characteristic factor minimum subgraphs, aggregating characteristic factors belonging to the same feature into a composite feature, carrying out feature coding on each aggregated characteristic factor to obtain a feature coding sequence, improving the NBA algorithm by adopting a second-order oscillation mechanism and a differential algorithm, optimizing a BP neural network with the improved NBA algorithm, and carrying out feature recognition with the optimized network. The invention identifies features with engineering significance to the maximum extent; owing to the excellent learning performance of the neural network it greatly improves the accuracy and efficiency of feature identification; and by optimizing the BP neural network with the improved NBA algorithm it can control the switching between local search and global search, avoids falling into local optima, and has better convergence. The invention performs feature identification after training, and effectively improves the accuracy and efficiency of feature identification.

Description

BPNN (back propagation neural network) feature identification method based on improved NBA (novel bat algorithm)
Technical Field
The invention relates to the technical field of reading or recognizing printed or written characters or recognizing graphs, and in particular to a BPNN feature recognition method based on an improved NBA algorithm that adopts composite processing feature extraction and recognizes features with engineering significance to the maximum extent.
Background
With the development of science and technology, feature recognition technology is continuously being updated; through continuous research, improved algorithms, and years of effort, experts and scholars in the field have made great progress, and new methods emerge one after another.
However, the existing feature recognition technology still has many problems that are difficult to solve or whose results are unsatisfactory. Among the existing approaches, the graph-based method is the most widely used, but it requires a large amount of computation, and because several features may intersect on the graph, many possible combinations and reconstructions can be generated, so combined (intersecting) features cannot be effectively identified. Some methods combine graphs with a neural network to perform feature recognition; the BP Neural Network (BPNN) is the most basic neural network, in which the output is computed by forward propagation and the error is propagated backward.
The NBA (Novel Bat Algorithm) is an improvement of the Bat Algorithm (BA). Its main principle is a random search algorithm that simulates how bats use echolocation to detect prey and avoid obstacles. Compared with bionic algorithms such as the traditional particle swarm algorithm and the genetic algorithm, the NBA has the advantages of requiring fewer parameters, better global optimization capability and high computational efficiency, but it still has drawbacks such as low optimization precision and particles easily stagnating and converging prematurely in the later iterations.
Disclosure of Invention
The invention solves the problems in the prior art that feature recognition methods cannot effectively recognize combined features, or that the feature recognition error and the convergence error are large, or that the optimization precision is not high and particles easily stagnate and converge prematurely in the later iterations, and provides an optimized BPNN feature recognition method based on an improved NBA algorithm.
The invention adopts the technical scheme that a BPNN feature recognition method based on an improved NBA algorithm comprises the following steps:
step 1: preprocessing the face-edge adjacency graph, extracting a characteristic factor minimum subgraph, and aggregating characteristic factors belonging to the same feature into a composite feature;
step 2: performing feature coding on each characteristic factor after the aggregation is completed to obtain a feature coding sequence;
and step 3: a second-order oscillation mechanism and a differential algorithm are adopted to improve the NBA algorithm;
and 4, step 4: optimizing the BP neural network by using an improved NBA algorithm, and performing feature recognition by using the optimized BP neural network.
Preferably, the step 1 comprises the steps of:
step 1.1: traversing any surface in the surface edge adjacency graph, creating a vertex of an attribute adjacency graph corresponding to each surface, and extracting the attribute of each surface as the attribute corresponding to the vertex of the attribute adjacency graph;
step 1.2: for each two faces in the face edge adjacency graph, identifying the adjacency relation between the two faces, and taking the adjacency relation as the attribute of the corresponding edge of the two faces;
step 1.3: based on step 1.1 and step 1.2, an attribute adjacency graph AAG(V, E) is formed, where V = {V1, V2, …, Vi, …, Vn} is the vertex set of the attribute adjacency graph, each face in the face-edge adjacency graph corresponds to one element in V, and E is the set of edges between two intersecting faces in the face-edge adjacency graph;
step 1.4: traversing the attribute adjacency graph AAG (V, E), and extracting a characteristic factor minimum subgraph;
step 1.5: and judging whether any characteristic factor minimum subgraph has intersecting characteristics, if so, aggregating the characteristic factors belonging to the same characteristic into a composite characteristic, and otherwise, outputting all characteristic factor minimum subgraphs.
Preferably, in step 1.3, Vi carries the face attributes extracted from the face-edge adjacency graph, comprising the type of the face, the concavity or convexity of the face, the number of outer-loop edges, the type of the outer-loop edges and the concavity or convexity of the outer-loop edges, and E carries the edge attributes, comprising the concavity or convexity of the edge, whether the edge is a curve or a straight line, and the adjacency angle.
Preferably, said step 1.4 comprises the steps of:
step 1.4.1: storing the attribute adjacency graph AAG(V, E) in an adjacency matrix, wherein each element of the adjacency matrix is a[i][j], a[i][j] is the numerical value generated for the intersection of face i and face j, its units digit represents a concave edge or a convex edge by 0 or 1 respectively, and its tens digit represents different edge shapes by preset values;
step 1.4.2: searching for a row containing a concave edge starting from the initial row; if no element of the current row has a units digit of 0, the current row is a convex row, so mark it as ignored and continue searching the next row; if some element of the row has a units digit of 0, the row contains a concave edge, and the next step is performed;
step 1.4.3: determining a row intersected with the current row in the matrix according to the first concave edge of the current searched row, taking the intersected row as the next searched row, and marking the currently processed concave edge;
step 1.4.4: repeating the step 1.4.3 until all concave edges contained in the adjacent matrix are marked, wherein the marked rows are m, and forming a sub-matrix of m × m according to the sequence of rows and columns.
Preferably, said step 1.5 comprises the steps of:
step 1.5.1: comparing any two subgraph matrixes, if no common surface exists, comparing the next two subgraph matrixes, if no common surface exists, taking all subgraph matrixes as subgraphs with the minimum characteristic factor, and performing the step 2, otherwise, performing the next step;
step 1.5.2: comparing the characteristic base vectors of the two subgraph matrixes, if the characteristic base vectors are the same, extending the common surface of the two subgraph matrixes, and if the extension is not blocked by an entity, aggregating the two subgraph matrixes into a composite characteristic;
step 1.5.3: and updating the corresponding sub-graph matrix and returning to the step 1.5.1.
Preferably, the step 2 comprises the steps of:
step 2.1: converting the information of each characteristic factor after the aggregation into the weights of the surface, the edge and the ring;
the value is 0 if the abutment surface is a plane, 1 if the abutment surface is a convex curved surface, and-1 if the abutment surface is a concave curved surface;
if there is no inner ring in the adjacent surface, the value is 0, if there is a concave ring in the adjacent surface, the value is 5, if there is a convex ring, the value is-1;
if the adjacent edge is a straight edge, the value is 1 or-1 according to the concave-convex property of the adjacent edge, and if the adjacent edge is a curved edge, the value is 2 or-2 according to the concave-convex property of the adjacent edge;
the value is 0 when the adjoining included angle is a right angle, 0.5 when the adjoining included angle is an obtuse angle, and 0.8 when the adjoining included angle is an acute angle;
step 2.2: with Fi as the weight of the adjacent face, Eij as the weight of the ring in the adjacent face, Lik as the weight of the adjacent edge, and Vix as the weight of the adjacency angle, calculating the Ai value of each characteristic factor according to [formula];
step 2.3: taking the Ai value as the feature coding sequence of the current characteristic factor, and if the dimension of the feature code is less than 9 bits, padding it with 1 after coding.
Preferably, in step 2.1, the adjacency edge C adjoins face A and face B, (Ax, Ay, Az) is a point on face A, (Bx, By, Bz) is a point on face B, and (Cx, Cy, Cz) is a point on the adjacency edge; letting the straight line BC be perpendicular to face A and the normal vector of face A be (i, j, k), K is calculated according to [formula]; when K > 0 the adjacency edge is a concave edge, and when K < 0 it is a convex edge.
Preferably, the step 3 comprises the steps of:
step 3.1: in the process of the particles searching for the target, at the (t+1)-th iteration the pulse frequency of the i-th particle in the j-th dimension is f_ij = f_min + (f_max - f_min)·r, the velocity is v_ij^(t+1) = v_ij^t + (x_ij^t - x*_j^t)·f_ij, the position update formula is x_ij^(t+1) = x_ij^t + v_ij^(t+1), the pulse emission rate is r_i^(t+1) = r_i^0·[1 - exp(-γ·t)], and the pulse sound intensity update formula is A_i^(t+1) = α·A_i^t, wherein r ∈ [0,1], x*_j^t is the global optimum at the t-th iteration, α ∈ [0,1], and γ > 0;
Step 3.2: the second-order oscillation mechanism is used for improving the particle speed to obtain the improved particle speed
Figure BDA00018385827000000410
Figure BDA00018385827000000411
/1=c1r1,/2=c2r2Where ω is the inertial weight factor of the particle update, c1A learning factor for individual particles, c2A learning factor of a population of particles, r1、r2∈[0,1],piG represents the group optimal position of the whole particle group at the current moment;
step 3.3: further improving the NBA algorithm by using a differential evolution algorithm, wherein at the (t+1)-th iteration the velocity of the i-th particle in the j-th dimension is [formula], where i ≠ p1 ≠ p2 ≠ p3, p1, p2 and p3 respectively denote individuals in the population, and [formula] is a scaling factor given by [formula];
step 3.4: setting a crossover mechanism [formula], wherein cr is the crossover probability and jr is a random positive integer within the particle dimension;
step 3.5: finally obtaining, at the (t+1)-th iteration, the optimal updated position of the i-th particle as [formula], wherein the function f is the objective function.
Preferably, in step 3.2, ε1, ε2 and ω are given by [formulas], c1 = c1s + (c1e - c1s)·sin ω and c2 = c2s + (c2e - c2s)·sin ω, wherein r3, r4, r ∈ [0,1], Gmax is the maximum number of iterations, ωs and ωe are respectively the initial and final values of the inertia weight, t is the current number of iterations, c1s and c2s are the initial values of c1 and c2, and c1e and c2e are the iteration final values of c1 and c2.
Preferably, the step 4 comprises the steps of:
step 4.1: defining a target function particle fitness function f, wherein n is the number of individual particles, and the dimension of each particle is j; setting the maximum iteration number as G;
step 4.2: initializing the improved NBA algorithm, i.e. initializing for each particle i (i = 1, 2, 3, …, n) its position vector, velocity vector, particle frequency, pulse emission rate and sound intensity;
Step 4.3: calculating the fitness value of each particle during iteration at the current moment, finding out the global optimal position g, and updating the speed of the particle swarm
Figure BDA0001838582700000061
Figure BDA0001838582700000062
Position of
Figure BDA0001838582700000063
Step 4.4: generating a random number rand1 if
Figure BDA0001838582700000064
Then with Xnew=Xold+εAtRandomly generating a new position, XoldFor the most recently updated position, ε ∈ [0,1 ]];
If it is
Figure BDA0001838582700000065
And f (X)new)<f(Xold) Then the position of the particle is updated to XnewAnd updating the pulse frequency
Figure BDA0001838582700000066
Update the sound intensity of
Figure BDA0001838582700000067
If f (X)new) < f (g), then X is addednewSetting the current global optimal particle position;
step 4.5: generating a random number rand2; if rand2 < cr, generating a mutated velocity and a trial position according to the differential mutation of step 3.3 and the crossover mechanism of step 3.4 and substituting them into the greedy selection of step 3.5 to obtain the updated position; if the corresponding fitness function value is smaller than the fitness function value of the last global optimal position, updating the particle position or the global optimal particle position;
step 4.6: calculating an error, if the error does not reach a set value or the iteration frequency is less than G, returning to the step 4.3, otherwise, carrying out the next step;
step 4.7: outputting an adaptive value and an optimal position of the globally optimal individual;
step 4.8: and updating the weight and the threshold of the BP neural network according to the optimal position, establishing an optimal feature recognition network model, and outputting a prediction result.
The invention provides an optimized BPNN feature recognition method based on an improved NBA algorithm, which comprises preprocessing the face-edge adjacency graph, extracting the characteristic factor minimum subgraphs, aggregating the characteristic factors belonging to the same feature into a composite feature, performing feature coding on each aggregated characteristic factor to obtain a feature coding sequence, improving the NBA algorithm by adopting a second-order oscillation mechanism and a differential algorithm, optimizing the BP neural network with the improved NBA algorithm, and performing feature recognition with the optimized BP neural network. The composite processing feature extraction method based on graphs and characteristic-factor clustering provided by the invention can identify features with engineering significance to the maximum extent; meanwhile, because the neural network has excellent learning performance, using it for feature identification greatly improves the accuracy and efficiency of feature identification; and optimizing the BP neural network with the improved NBA algorithm makes it possible to control the switching between local search and global search, avoids the defect of falling into local optima, and gives better convergence. The invention performs feature identification after training, and effectively improves the accuracy and efficiency of feature identification.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of a face-edge adjacency graph according to the present invention;
FIG. 3 is an attribute adjacency graph AAG obtained after the face-edge adjacency graph of FIG. 2 is processed;
FIG. 4 is a schematic diagram of a face-edge adjacency graph with intersecting features according to the present invention;
FIG. 5 is an attribute adjacency graph AAG with intersecting features obtained after the face-edge adjacency graph of FIG. 4 is processed;
in fig. 2 to 5, the surfaces are represented by numbers, and the lines between the numbers represent the connection relationship between the surfaces where edges exist.
Detailed Description
The present invention is described in further detail with reference to the following examples, but the scope of the present invention is not limited thereto.
The invention relates to a BPNN feature recognition method based on an improved NBA algorithm, which comprises the following steps.
Step 1: and preprocessing the face adjacency graph, extracting a subgraph with the minimum characteristic factor, and aggregating the characteristic factors belonging to the same characteristic into a composite characteristic.
The step 1 includes the following steps.
Step 1.1: traversing any surface in the surface edge adjacency graph, creating a vertex of an attribute adjacency graph corresponding to each surface, and extracting the attribute of each surface as the attribute corresponding to the vertex of the attribute adjacency graph.
Step 1.2: and for each two faces in the face-edge adjacency graph, identifying the adjacency relation between the two faces, and taking the adjacency relation as the attribute of the corresponding edge of the two faces.
Step 1.3: based on step 1.1 and step 1.2, an attribute adjacency graph AAG(V, E) is formed, where V = {V1, V2, …, Vi, …, Vn} is the vertex set of the attribute adjacency graph, each face in the face-edge adjacency graph corresponds to one element in V, and E is the set of edges between two intersecting faces in the face-edge adjacency graph.
In step 1.3, Vi carries the face attributes extracted from the face-edge adjacency graph, comprising the type of the face, the concavity or convexity of the face, the number of outer-loop edges, the type of the outer-loop edges and the concavity or convexity of the outer-loop edges, and E carries the edge attributes, comprising the concavity or convexity of the edge, whether the edge is a curve or a straight line, and the adjacency angle.
In the invention, steps 1.1 to 1.3 mainly process the STEP file of the model; the file contains the three-dimensional model information in the form of a face-edge adjacency graph, which is topological data and cannot directly provide uniform vector data, so it must be converted before the BPNN algorithm can be applied to it.
Specifically, in the present invention, the face and edge geometric topology information in the STEP AP242 file is extracted to form the face-edge adjacency graph of the model, and attributes are added to the vertices and arcs on this basis to form the attribute adjacency graph AAG(V, E).
Step 1.4: and traversing the attribute adjacency graph AAG (V, E) and extracting a subgraph with the minimum characteristic factor.
Said step 1.4 comprises the following steps.
Step 1.4.1: the attribute adjacency graph AAG(V, E) is stored in an adjacency matrix, wherein each element of the adjacency matrix is a[i][j], a[i][j] is the numerical value generated for the intersection of face i and face j, its units digit represents a concave edge or a convex edge by 0 or 1 respectively, and its tens digit represents different edge shapes by preset values.
Step 1.4.2: a row containing a concave edge is searched for starting from the initial row; if no element of the current row has a units digit of 0, the current row is a convex row, so it is marked as ignored and the next row is searched; if some element of the row has a units digit of 0, the row contains a concave edge, and the next step is performed.
Step 1.4.3: and determining a row intersected with the current row in the matrix according to the first concave edge of the current searched row, taking the intersected row as the next searched row, and marking the currently processed concave edge.
Step 1.4.4: repeating the step 1.4.3 until all concave edges contained in the adjacent matrix are marked, wherein the marked rows are m, and forming a sub-matrix of m × m according to the sequence of rows and columns.
In the present invention, an AAG matrix containing n feature faces is decomposed. A convex row means that the corresponding face has no concave edge; because convex edges carry no feature information, such a row is ignored, and the ignore mark means that the face is not used in the subsequent steps. One embodiment is given by the AAG matrix of n feature faces shown in the following table.
Table 1: AAG matrix comprising n eigenplanes
V1 V2 V3 V4 V5 V6 V7 V8 V9 V10 V11
V1 0 11 -1 11 11 11 -1 -1 -1 -1 -1
V2 11 0 11 -1 11 11 -1 -1 -1 -1 -1
V3 -1 11 0 11 11 11 -1 -1 -1 -1 -1
V4 11 -1 11 0 11 11 -1 -1 -1 -1 -1
V5 11 11 11 11 0 -1 10 10 10 10 -1
V6 11 11 11 11 -1 0 -1 -1 -1 -1 -1
V7 -1 -1 -1 -1 10 -1 0 11 -1 11 11
V8 -1 -1 -1 -1 10 -1 11 0 11 -1 11
V9 -1 -1 -1 -1 10 -1 -1 11 0 11 11
V10 -1 -1 -1 -1 10 -1 11 -1 11 0 11
V11 -1 -1 -1 -1 -1 -1 11 11 11 11 0
In this embodiment, the vertices of the graph are listed in the rows and columns, and the number at the position where two vertices intersect is a[i][j], where the units digit represents a concave edge or a convex edge with 0 or 1 respectively and the tens digit represents different edge shapes with preset values; in the current embodiment, 1, 2 and 3 represent a straight line, an arc and an ellipse, respectively. The boundary relationship between V1 and itself is represented by "0"; the relationship between V1 and V2 is given by the value "11" of a[i][j], that is, the tens digit "1" indicates that the edge between V1 and V2 is a straight line and the units digit "1" indicates that the edge is convex; the relationship between V5 and V7 is "10", which means that the edge is a straight line and a concave edge; the relationship between V1 and V3 is "-1", indicating that there is no shared edge between V1 and V3.
In this embodiment, when the rows are traversed, the first concave edge "10" appears in the row of V5, so the next search starts from the row of V7, whose intersection with V5 has the value 10, i.e. a concave edge, and this row therefore needs to be saved. Since that row contains no concave edge other than the one shared with V5, the traversal continues directly past V5-V7; the next eligible value is the intersection of row V5 with row V8, which is 10, and the above operations are repeated, so that the 5 × 5 sub-graph matrix shown in Table 2 is finally obtained.
Table 2: TABLE 1 sub-graph matrix after completion of the matrix traversal
V5 V7 V8 V9 V10
V5 0 10 10 10 10
V7 10 0 11 -1 11
V8 10 11 0 11 -1
V9 10 -1 11 0 11
V10 10 11 -1 11 0
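The row traversal described above can be sketched in code. The sketch below is a minimal illustration in Python: it assumes the AAG is stored as a dense matrix with -1 for "no shared edge" and two-digit codes whose units digit marks concavity (0 = concave, 1 = convex), as in Table 1; the function names and the breadth-style expansion order are illustrative and not taken from the patent.

```python
def is_concave(value):
    """True for two-digit edge codes whose units digit is 0 (e.g. 10, 20)."""
    return value >= 10 and value % 10 == 0

def extract_concave_submatrix(aag, labels):
    """Collect the faces connected through concave edges and return the
    corresponding m x m sub-matrix, reproducing the Table 1 -> Table 2 step."""
    n = len(aag)
    marked, queue = [], []
    # find the first row containing a concave edge (skip convex rows)
    for i in range(n):
        if any(is_concave(v) for v in aag[i]):
            marked.append(i)
            queue.append(i)
            break
    # follow each concave edge to the row it intersects and mark that row
    while queue:
        row = queue.pop(0)
        for j, v in enumerate(aag[row]):
            if is_concave(v) and j not in marked:
                marked.append(j)
                queue.append(j)
    sub = [[aag[i][j] for j in marked] for i in marked]
    return [labels[i] for i in marked], sub
```

Applied to the matrix of Table 1 with labels V1 to V11, this returns the faces V5, V7, V8, V9, V10 and the 5 × 5 sub-matrix of Table 2.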
Step 1.5: and judging whether any characteristic factor minimum subgraph has intersecting characteristics, if so, aggregating the characteristic factors belonging to the same characteristic into a composite characteristic, and otherwise, outputting all characteristic factor minimum subgraphs.
Step 1.5 comprises the following steps:
step 1.5.1: comparing any two subgraph matrixes, if no common surface exists, comparing the next two subgraph matrixes, if no common surface exists, taking all subgraph matrixes as subgraphs with the minimum characteristic factor, and performing the step 2, otherwise, performing the next step;
step 1.5.2: comparing the characteristic base vectors of the two subgraph matrixes, if the characteristic base vectors are the same, extending the common surface of the two subgraph matrixes, and if the extension is not blocked by an entity, aggregating the two subgraph matrixes into a composite characteristic;
step 1.5.3: and updating the corresponding sub-graph matrix and returning to the step 1.5.1.
In the invention, after the above operation a plurality of independent sub-graph matrices may be obtained; for a graph with intersecting features, shared faces may still exist among the sub-graph matrices after the minimum characteristic factor decomposition is completed, that is, the obtained sub-graph matrices may still be connected through convex edges, so the sub-graph matrices of the AAG with convex-edge connections need to be decomposed a second time.
In the invention, the face with the most directed edges among the faces of a convexly connected sub-matrix is taken as the decomposition base face, and reverse extension is carried out along the direction of an outer edge. If no entity blocks the extension, aggregation is carried out; if the extension is blocked by an entity, the extension is invalid, and the extension is carried out for the remaining faces and that face. If the intersection line lies in the interior, the base face is divided into two faces. The process is repeated until all convexly connected sub-matrices are divided.
Step 2: feature coding is carried out on each characteristic factor after the aggregation is completed to obtain a feature coding sequence.
The step 2 includes the following steps.
Step 2.1: converting the information of each characteristic factor after aggregation into the weights of a surface, an edge and a ring:
the value is 0 if the abutment surface is a plane, 1 if the abutment surface is a convex curved surface, and-1 if the abutment surface is a concave curved surface;
if there is no inner ring in the adjacent surface, the value is 0, if there is a concave ring in the adjacent surface, the value is 5, if there is a convex ring, the value is-1;
if the adjacent edge is a straight edge, the value is 1 or-1 according to the concave-convex property of the adjacent edge, and if the adjacent edge is a curved edge, the value is 2 or-2 according to the concave-convex property of the adjacent edge;
the value is 0 when the adjoining angle is a right angle, 0.5 when the adjoining angle is an obtuse angle, and 0.8 when the adjoining angle is an acute angle.
In step 2.1, the adjacency edge C adjoins face A and face B, (Ax, Ay, Az) is a point on face A, (Bx, By, Bz) is a point on face B, and (Cx, Cy, Cz) is a point on the adjacency edge; letting the straight line BC be perpendicular to face A and the normal vector of face A be (i, j, k), K is calculated according to [formula]; when K > 0 the adjacency edge is a concave edge, and when K < 0 it is a convex edge.
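For illustration, the concavity test of step 2.1 can be sketched as follows. The patent only states that BC is perpendicular to face A and that the sign of K distinguishes concave from convex; the concrete expression below, a dot product of the normal of face A with the vector from the edge point to the point on face B, is an assumed form chosen to reproduce that sign behaviour, not the patent's exact formula.

```python
def edge_concavity(normal_a, point_b, point_c):
    """Sign test for an adjacency edge C between faces A and B.

    normal_a: normal vector (i, j, k) of face A
    point_b:  a point (Bx, By, Bz) on face B, chosen so that BC is perpendicular to face A
    point_c:  a point (Cx, Cy, Cz) on the adjacency edge

    Returns 'concave' if K > 0 and 'convex' if K < 0 (assumed form of K).
    """
    i, j, k = normal_a
    bx, by, bz = point_b
    cx, cy, cz = point_c
    K = i * (bx - cx) + j * (by - cy) + k * (bz - cz)  # assumed: n_A . (B - C)
    if K > 0:
        return "concave"
    if K < 0:
        return "convex"
    return "degenerate"  # K == 0: point B lies on face A
```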
Step 2.2: with Fi as the weight of the adjacent face, Eij as the weight of the ring in the adjacent face, Lik as the weight of the adjacent edge, and Vix as the weight of the adjacency angle, the Ai value of each characteristic factor is calculated according to [formula].
Step 2.3: the Ai value is taken as the feature coding sequence of the current characteristic factor, and if the dimension of the feature code is less than 9 bits, it is padded with 1 after coding.
In the invention, the feature coding is to convert the features into input vectors of the neural network, the feature coding should contain information of various features as much as possible to fully express the types of the features, and different features should have different feature coding combinations.
In the invention, the characteristic coding sequence can be used as a training set of a subsequent BP neural network on one hand, and can be used as a characteristic for a new STEP file (a face-edge adjacency graph) to be identified by using the trained BP neural network on the other hand.
In the invention, if the dimension of the feature code exceeds 9 bits, the extra dimensions are ignored, because faces beyond the ninth are far from the feature base face and have little effect in the feature identification process.
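As an illustration of the encoding in steps 2.1 to 2.3, the sketch below maps the face, ring, edge and angle attributes of one characteristic factor to the numeric weights listed above and assembles a fixed-length 9-element code. The aggregation rule (a simple sum of the four weights per adjacent face) and the helper names are assumptions, since the patent gives the exact combination formula only in a drawing.

```python
FACE_W  = {"plane": 0, "convex_surface": 1, "concave_surface": -1}
RING_W  = {"none": 0, "concave_ring": 5, "convex_ring": -1}
EDGE_W  = {("straight", "convex"): 1, ("straight", "concave"): -1,
           ("curved", "convex"): 2, ("curved", "concave"): -2}
ANGLE_W = {"right": 0, "obtuse": 0.5, "acute": 0.8}

def encode_feature_factor(adjacent_faces, code_len=9):
    """Build the feature coding sequence of one characteristic factor.

    adjacent_faces: list of dicts, one per adjacent face, e.g.
        {"face": "plane", "ring": "none",
         "edge": ("straight", "concave"), "angle": "right"}
    Faces beyond code_len are ignored; short codes are padded with 1.
    """
    code = []
    for f in adjacent_faces[:code_len]:
        a = (FACE_W[f["face"]] + RING_W[f["ring"]]
             + EDGE_W[f["edge"]] + ANGLE_W[f["angle"]])  # assumed sum rule
        code.append(a)
    code += [1] * (code_len - len(code))   # pad to 9 dimensions with 1
    return code

# example: a slot-like factor bounded by two planes over concave straight edges
print(encode_feature_factor([
    {"face": "plane", "ring": "none", "edge": ("straight", "concave"), "angle": "right"},
    {"face": "plane", "ring": "none", "edge": ("straight", "concave"), "angle": "right"},
]))
```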
And step 3: the NBA algorithm is improved by adopting a second-order oscillation mechanism and a differential algorithm.
In the invention, because the NBA algorithm has the defects of low optimization precision, particles easily stagnating in the later iterations, premature convergence and the like, a second-order oscillation mechanism and a differential algorithm are adopted to optimize the NBA algorithm. The second-order oscillation mechanism can adjust the global and local search capabilities of the swarm algorithm, prevent the algorithm from falling into local optima, enhance the self-learning capability of the swarm, and enrich the particle diversity of the swarm in the later search period; the differential algorithm, based on its mutation, crossover and selection mechanisms, can improve individual diversity during optimization, works well for improving the local search capability of the particles and preventing premature convergence, and better ensures that the optimal solution is found.
The step 3 includes the following steps.
Step 3.1: in the process of the particles searching for the target, at the (t+1)-th iteration the pulse frequency of the i-th particle in the j-th dimension is f_ij = f_min + (f_max - f_min)·r, the velocity is v_ij^(t+1) = v_ij^t + (x_ij^t - x*_j^t)·f_ij, the position update formula is x_ij^(t+1) = x_ij^t + v_ij^(t+1), the pulse emission rate is r_i^(t+1) = r_i^0·[1 - exp(-γ·t)], and the pulse sound intensity update formula is A_i^(t+1) = α·A_i^t, wherein r ∈ [0,1], x*_j^t is the global optimum at the t-th iteration, α ∈ [0,1], and γ > 0.
In the invention, the frequency of the particles needs to be updated in the process of searching for the target so as to better approach the target, and the frequency always falls within the range [f_min, f_max]; f_ij stays in this range each time it is randomly updated. v_ij^(t+1) and v_ij^t denote the velocity of the particle at the (t+1)-th and t-th iterations respectively, and x_ij^(t+1) and x_ij^t denote the positions of the particle at the (t+1)-th and t-th iterations, respectively.
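A minimal sketch of the per-particle update of step 3.1 is given below, assuming the standard bat-algorithm forms reconstructed above (frequency draw, velocity and position updates, pulse-rate and loudness schedules); the parameter values are illustrative.

```python
import math
import random

def bat_update(x, v, x_best, r0, t, f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9):
    """One iteration of the basic bat-style update of step 3.1 (assumed standard forms).

    x, v:    current position and velocity vectors of one particle
    x_best:  current global best position
    r0:      initial pulse emission rate of the particle
    t:       current iteration index
    Returns the new position, velocity, pulse emission rate and the loudness decay factor.
    """
    f = f_min + (f_max - f_min) * random.random()            # pulse frequency
    v_new = [vj + (xj - bj) * f for vj, xj, bj in zip(v, x, x_best)]
    x_new = [xj + vj for xj, vj in zip(x, v_new)]
    r_new = r0 * (1.0 - math.exp(-gamma * t))                # pulse emission rate
    loudness_scale = alpha                                    # A^(t+1) = alpha * A^t
    return x_new, v_new, r_new, loudness_scale
```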
Step 3.2: the second-order oscillation mechanism is used to improve the particle velocity, giving the improved particle velocity v_ij^(t+1) = ω·v_ij^t + β1·[p_i - (1 + ε1)·x_ij^t + ε1·x_ij^(t-1)] + β2·[g - (1 + ε2)·x_ij^t + ε2·x_ij^(t-1)], with β1 = c1·r1 and β2 = c2·r2, where ω is the inertia weight factor of the particle update, c1 is the learning factor of the individual particle, c2 is the learning factor of the particle population, r1, r2 ∈ [0,1], p_i is the individual optimal position of the i-th particle at the t-th iteration, and g represents the population optimal position of the whole particle swarm at the current moment.
In step 3.2, ε1, ε2 and ω are given by [formulas], c1 = c1s + (c1e - c1s)·sin ω and c2 = c2s + (c2e - c2s)·sin ω, where r3, r4, r ∈ [0,1], Gmax is the maximum number of iterations, ωs and ωe are respectively the initial and final values of the inertia weight, t is the current number of iterations, c1s and c2s are the initial values of c1 and c2, and c1e and c2e are the iteration final values of c1 and c2.
In the present invention, ε1 and ε2 are used to ensure the global optimization capability of the algorithm in the early stage and the convergence of the algorithm in the later stage.
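For illustration, the sketch below implements a second-order oscillation velocity update of the canonical form, which the formulas above are assumed to follow, together with the sin ω schedules for c1 and c2 quoted in the text; the concrete choices for ω, ε1 and ε2 are assumptions, since their defining formulas appear only in the drawings.

```python
import math
import random

def oscillation_velocity(v, x, x_prev, p_i, g, t, g_max,
                         w_s=0.9, w_e=0.4, c1s=2.5, c1e=0.5, c2s=0.5, c2e=2.5):
    """Second-order oscillation velocity update (assumed canonical form).

    v, x, x_prev: velocity, current position and previous position of one particle
    p_i, g:       individual best and population best positions
    t, g_max:     current iteration and maximum number of iterations Gmax
    """
    w = w_e + (w_s - w_e) * (g_max - t) / g_max        # assumed linear inertia decay
    c1 = c1s + (c1e - c1s) * math.sin(w)               # c1, c2 schedules quoted in the text
    c2 = c2s + (c2e - c2s) * math.sin(w)
    r1 = random.uniform(1e-3, 1.0)                     # r1, r2 kept away from 0 for safety
    r2 = random.uniform(1e-3, 1.0)
    b1, b2 = c1 * r1, c2 * r2                          # beta1 = c1*r1, beta2 = c2*r2
    th1 = (2.0 * math.sqrt(b1) - 1.0) / b1             # assumed oscillation thresholds
    th2 = (2.0 * math.sqrt(b2) - 1.0) / b2
    # early iterations: oscillatory behaviour (global search); later: asymptotic convergence
    e1, e2 = (random.random() * th1, random.random() * th2) if t < g_max / 2 else (th1, th2)
    return [w * vj
            + b1 * (pj - (1.0 + e1) * xj + e1 * xpj)
            + b2 * (gj - (1.0 + e2) * xj + e2 * xpj)
            for vj, xj, xpj, pj, gj in zip(v, x, x_prev, p_i, g)]
```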
Step 3.3: the NBA algorithm is further improved by using a differential evolution algorithm, wherein at the (t+1)-th iteration the velocity of the i-th particle in the j-th dimension is [formula], where i ≠ p1 ≠ p2 ≠ p3, p1, p2 and p3 respectively denote individuals in the population, and [formula] is a scaling factor given by [formula].
in the invention, the velocity formula of step 3.3 is to make the velocity of the particle i generate variation, and random 3 individual positions except i are used in the variation process
Figure BDA0001838582700000136
To help the particles i to achieve a variation in velocity.
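A sketch of the differential mutation of step 3.3 follows; it uses a DE/rand/1-style combination of three distinct individuals with a scaling factor F. Because the exact formula appears only in a drawing, this concrete form is an assumption.

```python
import random

def differential_mutation(positions, i, scale=0.5):
    """DE/rand/1-style mutation used to perturb the velocity of particle i (assumed form).

    positions: list of position vectors of the whole population (at least 4 particles)
    i:         index of the particle being updated
    scale:     scaling factor F
    """
    candidates = [k for k in range(len(positions)) if k != i]
    p1, p2, p3 = random.sample(candidates, 3)      # three distinct individuals, all != i
    return [positions[p1][j] + scale * (positions[p2][j] - positions[p3][j])
            for j in range(len(positions[i]))]
```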
Step 3.4: setting a crossover mechanism [formula], where cr is the crossover probability and jr is a random positive integer within the particle dimension.
In the invention, the particle dimension is D, and jr is a random positive integer within the particle dimension, i.e. jr ∈ [1, D].
Step 3.5: finally, at the (t+1)-th iteration, the optimal updated position of the i-th particle is obtained as [formula], where the function f is the objective function.
In the invention, the selection of velocity and position is embodied in the second-order oscillation and differential algorithms used to improve the NBA algorithm; the candidate velocity and position are selected so as to avoid falling into local optima.
In the invention, the objective function is the particle fitness function f(x_i) of the BP neural network.
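The crossover and greedy selection of steps 3.4 and 3.5 can be sketched as below, using the usual DE binomial crossover (keep the mutated component with probability cr, and always at the random index jr) followed by selection on the objective function f; as above, the concrete formulas are assumptions.

```python
import random

def binomial_crossover(x_old, v_mutated, cr=0.9):
    """DE-style binomial crossover between the old position and the mutated vector."""
    d = len(x_old)
    jr = random.randrange(d)                      # random dimension that always crosses
    return [v_mutated[j] if (random.random() < cr or j == jr) else x_old[j]
            for j in range(d)]

def greedy_select(x_old, x_trial, f):
    """Keep the trial position only if it improves the objective function f."""
    return x_trial if f(x_trial) < f(x_old) else x_old
```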
And 4, step 4: optimizing the BP neural network by using an improved NBA algorithm, and performing feature recognition by using the optimized BP neural network.
In the invention, after a vector matrix approximating to a real weight and a threshold value is obtained through optimization, the BP neural network (BPNN) is trained by using optimized parameters to obtain a final characteristic-identified BP neural network structure.
According to the overall principle, a feature coding sequence is first extracted to serve as the training set of the BP neural network. If an unimproved BP neural network is used directly, the search process easily falls into a local optimum, the global optimal solution cannot be found, the feature recognition error and the convergence error are large, that is, the weights and thresholds cannot be adjusted well and the recognition effect is poor; therefore the BP neural network must be improved. The improved NBA algorithm is then used to obtain the optimal weights and thresholds of the BP neural network, updating them continuously if they are not yet adjusted in place, and the BP neural network is finally established.
In the invention, if an improved NBA algorithm is used to obtain the optimal weight and threshold of the BP neural network, the weight and threshold of the BP neural network need to be particlized; the particlization means that the weight and the threshold of the BP network correspond to the position vector of the particle, that is, the position vector of each particle corresponds to a network structure, each component of the position vector represents a weight or a threshold, and the dimension of the position vector is equal to the sum of the number of the weights and the thresholds in the network; in brief, the optimal weight and threshold in the BP network can be considered to be found as long as the NBA algorithm is used to find the globally optimal solution.
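The particlization described above can be illustrated with the following sketch, which unpacks one particle's position vector into the weight matrices and thresholds of a single-hidden-layer BP network and evaluates a forward pass; the layer sizes, activation choice and helper names are assumptions used only for illustration.

```python
import numpy as np

def unpack_particle(position, n_in, n_hidden, n_out):
    """Map a particle position vector onto BP-network weights and thresholds.

    Dimension of the particle = n_in*n_hidden + n_hidden + n_hidden*n_out + n_out,
    i.e. the total number of weights and thresholds in the network.
    """
    p = np.asarray(position, dtype=float)
    i = 0
    w1 = p[i:i + n_in * n_hidden].reshape(n_in, n_hidden);   i += n_in * n_hidden
    b1 = p[i:i + n_hidden];                                  i += n_hidden
    w2 = p[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = p[i:i + n_out]
    return w1, b1, w2, b2

def bp_forward(x, w1, b1, w2, b2):
    """Forward pass of the BP network defined by one particle."""
    h = np.tanh(x @ w1 + b1)          # hidden layer (activation choice is illustrative)
    return h @ w2 + b2                # output layer
```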
The step 4 comprises the following steps:
step 4.1: defining a target function particle fitness function f, wherein n is the number of individual particles, and the dimension of each particle is j; setting the maximum iteration number as G;
step 4.2: the improved NBA algorithm is initialized, i.e. for each particle i (i = 1, 2, 3, …, n) its position vector, velocity vector, particle frequency, pulse emission rate and sound intensity are initialized;
Step 4.3: the fitness value of each particle at the current iteration is calculated, the global optimal position g is found, and the velocity and position of the particle swarm are updated according to the improved update formulas of step 3;
Step 4.4: a random number rand1 is generated; if rand1 > r_i^t, a new position is randomly generated with X_new = X_old + ε·A^t, where X_old is the most recently updated position and ε ∈ [0,1];
if rand1 < A_i^t and f(X_new) < f(X_old), the position of the particle is updated to X_new and the pulse emission rate and the sound intensity are updated according to the formulas of step 3.1; if f(X_new) < f(g), X_new is set as the current global optimal particle position;
step 4.5: generating a random number rand2If rand2< cr, then
Figure BDA0001838582700000155
Figure BDA0001838582700000156
Substitution into
Figure BDA0001838582700000157
Selecting to obtain
Figure BDA0001838582700000158
If it is
Figure BDA0001838582700000159
If the fitness function value is smaller than the fitness function value of the last global optimal position, updating the particle position or the global optimal particle position;
step 4.6: calculating an error, if the error does not reach a set value or the iteration frequency is less than G, returning to the step 4.3, otherwise, carrying out the next step;
step 4.7: outputting an adaptive value and an optimal position of the globally optimal individual;
step 4.8: and updating the weight and the threshold of the BP neural network according to the optimal position, establishing an optimal feature recognition network model, and outputting a prediction result.
In the present invention, the weight and threshold values in step 4.3 are obtained according to the improved NBA algorithm.
In the present invention, the fitness function is f(x_i) = (1/q)·Σ_{k=1}^{q} Σ_p (y_pk - o_pk)^2, wherein x_i denotes the position vector of particle i, q is the number of learning samples, and y_pk and o_pk respectively denote the network output and the ideal output at output node p for the k-th learning sample under the network structure determined by particle i; the fitness function may be set by one skilled in the art as desired.
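A sketch of such a fitness evaluation is given below, using the mean squared error over the q learning samples as assumed above; it reuses the hypothetical unpack_particle and bp_forward helpers from the earlier sketch.

```python
import numpy as np

def particle_fitness(position, samples, targets, n_in, n_hidden, n_out):
    """Mean squared error of the BP network encoded by one particle (assumed fitness)."""
    w1, b1, w2, b2 = unpack_particle(position, n_in, n_hidden, n_out)
    X = np.asarray(samples, dtype=float)      # shape (q, n_in)
    O = np.asarray(targets, dtype=float)      # shape (q, n_out), ideal outputs o_pk
    Y = bp_forward(X, w1, b1, w2, b2)         # network outputs y_pk
    return float(np.mean(np.sum((Y - O) ** 2, axis=1)))
```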
In the invention, the principle of obtaining the optimal weights and thresholds of the BP neural network with the improved NBA algorithm is as follows: first, the samples are normalized and the BP network topology is established; the improved NBA algorithm is initialized, the fitness value of each particle is calculated, and after the global optimal position is found the velocities and positions are updated through comparison. New particle fitness values are then calculated and compared; when the random number satisfies the set condition, the differential algorithm is executed, the particle fitness at that moment is calculated and the corresponding data are recorded. Whether the error and the number of iterations meet the requirements is then checked; if not, the iteration continues, otherwise the optimal position is output, the weights and thresholds of the BP neural network are updated, the optimal feature recognition network model is established, and the prediction result is output.
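Tying the pieces together, the optimization loop described above can be outlined as follows. It strings together the hypothetical helpers from the previous sketches and a simplified acceptance step, so it is an outline of the described procedure under stated assumptions rather than the patent's exact implementation.

```python
import random

def optimize_bpnn(samples, targets, n_in, n_hidden, n_out,
                  n_particles=30, g_max=200, cr=0.9, err_goal=1e-3):
    """Outline of the improved-NBA optimization of the BP network (steps 4.1-4.8)."""
    dim = n_in * n_hidden + n_hidden + n_hidden * n_out + n_out
    fit = lambda p: particle_fitness(p, samples, targets, n_in, n_hidden, n_out)
    swarm = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    best = min(swarm, key=fit)
    for t in range(1, g_max + 1):
        for i, x in enumerate(swarm):
            if random.random() < cr:                          # differential branch (step 4.5)
                v = differential_mutation(swarm, i)
                trial = binomial_crossover(x, v, cr)
            else:                                             # simplified local walk (step 4.4)
                trial = [xj + 0.1 * (2 * random.random() - 1) for xj in x]
            swarm[i] = greedy_select(x, trial, fit)           # step 3.5 selection
            if fit(swarm[i]) < fit(best):
                best = list(swarm[i])                         # update global optimum
        if fit(best) < err_goal:                              # step 4.6 stopping test
            break
    return unpack_particle(best, n_in, n_hidden, n_out)       # step 4.8: final weights/thresholds
```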
The invention preprocesses the face-edge adjacency graph, extracts the characteristic factor minimum subgraphs, aggregates the characteristic factors belonging to the same feature into a composite feature, performs feature coding on each aggregated characteristic factor to obtain a feature coding sequence, improves the NBA algorithm by adopting a second-order oscillation mechanism and a differential algorithm, optimizes the BP neural network with the improved NBA algorithm, and performs feature identification with the optimized BP neural network. The composite processing feature extraction method based on graphs and characteristic-factor clustering provided by the invention can identify features with engineering significance to the maximum extent; meanwhile, because the neural network has excellent learning performance, using it for feature identification greatly improves the accuracy and efficiency of feature identification; and optimizing the BP neural network with the improved NBA algorithm makes it possible to control the switching between local search and global search, avoids the defect of falling into local optima, and gives better convergence. The invention performs feature identification after training, and effectively improves the accuracy and efficiency of feature identification.

Claims (8)

1. A BPNN feature recognition method based on an improved NBA algorithm is characterized in that: the method comprises the following steps:
step 1: preprocessing the face-edge adjacency graph, extracting a characteristic factor minimum subgraph, and aggregating characteristic factors belonging to the same feature into a composite feature;
step 2: performing feature coding on each characteristic factor after the aggregation is completed to obtain a feature coding sequence;
and step 3: a second-order oscillation mechanism and a differential algorithm are adopted to improve the NBA algorithm;
the step 3 comprises the following steps:
step 3.1: in the process of the particles searching for the target, at the (t+1)-th iteration the pulse frequency of the i-th particle in the j-th dimension is f_ij = f_min + (f_max - f_min)·r, the velocity is v_ij^(t+1) = v_ij^t + (x_ij^t - x*_j^t)·f_ij, the position update formula is x_ij^(t+1) = x_ij^t + v_ij^(t+1), the pulse emission rate is r_i^(t+1) = r_i^0·[1 - exp(-γ·t)], and the pulse sound intensity update formula is A_i^(t+1) = α·A_i^t, wherein r ∈ [0,1], x*_j^t is the global optimum at the t-th iteration, α ∈ [0,1], and γ > 0;
Step 3.2: the second-order oscillation mechanism is used for improving the particle speed to obtain the improved particle speed
Figure FDA0002826699250000016
Figure FDA0002826699250000017
β1=c1r1,β2=c2r2Where ω is the inertial weight factor of the particle update, c1A learning factor for individual particles, c2A learning factor of a population of particles, r1、r2∈[0,1],piG represents the group optimal position of the whole particle group at the current moment;
in step 3.2, ε1, ε2 and ω are given by [formulas], c1 = c1s + (c1e - c1s)·sin ω and c2 = c2s + (c2e - c2s)·sin ω, wherein r3, r4, r ∈ [0,1], Gmax is the maximum number of iterations, ωs and ωe are respectively the initial and final values of the inertia weight, t is the current number of iterations, c1s and c2s are the initial values of c1 and c2, and c1e and c2e are the iteration final values of c1 and c2;
step 3.3: further improving the NBA algorithm by using a differential evolution algorithm, wherein at the (t+1)-th iteration the velocity of the i-th particle in the j-th dimension is [formula], where i ≠ p1 ≠ p2 ≠ p3, p1, p2 and p3 respectively denote individuals in the population, and [formula] is a scaling factor given by [formula];
step 3.4: setting a crossover mechanism [formula], wherein cr is the crossover probability and jr is a random positive integer within the particle dimension;
step 3.5: finally obtaining, at the (t+1)-th iteration, the optimal updated position of the i-th particle as [formula], wherein the function f is an objective function;
and 4, step 4: optimizing the BP neural network by using an improved NBA algorithm, and performing feature recognition by using the optimized BP neural network.
2. The BPNN feature recognition method based on the improved NBA algorithm according to claim 1, wherein: the step 1 comprises the following steps:
step 1.1: traversing any surface in the surface edge adjacency graph, creating a vertex of an attribute adjacency graph corresponding to each surface, and extracting the attribute of each surface as the attribute corresponding to the vertex of the attribute adjacency graph;
step 1.2: for each two faces in the face edge adjacency graph, identifying the adjacency relation between the two faces, and taking the adjacency relation as the attribute of the corresponding edge of the two faces;
step 1.3: based on step 1.1 and step 1.2, an attribute adjacency graph AAG(V, E) is formed, where V = {V1, V2, ..., Vi, ..., Vn} is the vertex set of the attribute adjacency graph, each face in the face-edge adjacency graph corresponds to one element in V, and E is the set of edges between two intersecting faces in the face-edge adjacency graph;
step 1.4: traversing the attribute adjacency graph AAG (V, E), and extracting a characteristic factor minimum subgraph;
step 1.5: and judging whether any characteristic factor minimum subgraph has intersecting characteristics, if so, aggregating the characteristic factors belonging to the same characteristic into a composite characteristic, and otherwise, outputting all characteristic factor minimum subgraphs.
3. The BPNN feature recognition method based on the improved NBA algorithm according to claim 2, wherein: in step 1.3, Vi carries the face attributes extracted from the face-edge adjacency graph, comprising the type of the face, the concavity or convexity of the face, the number of outer-loop edges, the type of the outer-loop edges and the concavity or convexity of the outer-loop edges, and E carries the edge attributes, comprising the concavity or convexity of the edge, whether the edge is a curve or a straight line, and the adjacency angle.
4. The BPNN feature recognition method based on the improved NBA algorithm according to claim 2, wherein: the step 1.4 comprises the following steps:
step 1.4.1: storing the attribute adjacency graph AAG(V, E) in an adjacency matrix, wherein each element of the adjacency matrix is a[i][j], a[i][j] is the numerical value generated for the intersection of face i and face j, its units digit represents a concave edge or a convex edge by 0 or 1 respectively, and its tens digit represents different edge shapes by preset values;
step 1.4.2: searching for a row containing a concave edge starting from the initial row; if no element of the current row has a units digit of 0, the current row is a convex row, so mark it as ignored and continue searching the next row; if some element of the row has a units digit of 0, the row contains a concave edge, and the next step is performed;
step 1.4.3: determining a row intersected with the current row in the matrix according to the first concave edge of the current searched row, taking the intersected row as the next searched row, and marking the currently processed concave edge;
step 1.4.4: repeating the step 1.4.3 until all concave edges contained in the adjacent matrix are marked, wherein the marked rows are m, and forming a sub-matrix of m × m according to the sequence of rows and columns.
5. The BPNN feature recognition method based on the improved NBA algorithm as claimed in claim 4, wherein: step 1.5 comprises the following steps:
step 1.5.1: comparing any two subgraph matrixes, if no common surface exists, comparing the next two subgraph matrixes, if no common surface exists, taking all subgraph matrixes as subgraphs with the minimum characteristic factor, and performing the step 2, otherwise, performing the next step;
step 1.5.2: comparing the characteristic base vectors of the two subgraph matrixes, if the characteristic base vectors are the same, extending the common surface of the two subgraph matrixes, and if the extension is not blocked by an entity, aggregating the two subgraph matrixes into a composite characteristic;
step 1.5.3: and updating the corresponding sub-graph matrix and returning to the step 1.5.1.
6. The BPNN feature recognition method based on the improved NBA algorithm according to claim 1, wherein: the step 2 comprises the following steps:
step 2.1: converting the information of each characteristic factor after the aggregation into the weights of the surface, the edge and the ring;
the value is 0 if the abutment surface is a plane, 1 if the abutment surface is a convex curved surface, and-1 if the abutment surface is a concave curved surface;
if there is no inner ring in the adjacent surface, the value is 0, if there is a concave ring in the adjacent surface, the value is 5, if there is a convex ring, the value is-1;
if the adjacent edge is a straight edge, the value is 1 or-1 according to the concave-convex property of the adjacent edge, and if the adjacent edge is a curved edge, the value is 2 or-2 according to the concave-convex property of the adjacent edge;
the value is 0 when the adjoining included angle is a right angle, 0.5 when the adjoining included angle is an obtuse angle, and 0.8 when the adjoining included angle is an acute angle;
step 2.2: with Fi as the weight of the adjacent face, Eij as the weight of the ring in the adjacent face, Lik as the weight of the adjacent edge, and Vix as the weight of the adjacency angle, calculating the Ai value of each characteristic factor according to [formula];
step 2.3: taking the Ai value as the feature coding sequence of the current characteristic factor, and if the dimension of the feature code is less than 9 bits, padding it with 1 after coding.
7. The BPNN feature recognition method based on the improved NBA algorithm as claimed in claim 6, wherein: in step 2.1, the adjacency edge C adjoins face A and face B, (Ax, Ay, Az) is a point on face A, (Bx, By, Bz) is a point on face B, and (Cx, Cy, Cz) is a point on the adjacency edge; letting the straight line BC be perpendicular to face A and the normal vector of face A be (i, j, k), K is calculated according to [formula];
When K is more than 0, the adjacent edge is a concave edge, and when K is less than 0, the adjacent edge is a convex edge.
8. The BPNN feature recognition method based on the improved NBA algorithm according to claim 1, wherein: the step 4 comprises the following steps:
step 4.1: defining a target function particle fitness function f, wherein n is the number of individual particles, and the dimension of each particle is j; setting the maximum iteration number as G;
step 4.2: initializing the improved NBA algorithm, i.e. initializing, for the i-th particle, its position vector, velocity vector, particle frequency, pulse emission rate and sound intensity;
Step 4.3: calculating the fitness value of each particle during iteration at the current moment, finding out the global optimal position g, and updating the speed of the particle swarm
Figure FDA0002826699250000066
Position of
Figure FDA0002826699250000067
Step 4.4: generating a random number rand1If, if
Figure FDA0002826699250000068
Then with Xnew=Xold+εAtRandomly generating a new position, XoldFor the most recently updated position, ε ∈ [0,1 ]](ii) a If it is
Figure FDA0002826699250000069
And f (X)new)<f(Xold) Then the position of the particle is updated to XnewAnd updating the pulse frequency
Figure FDA00028266992500000610
Update the sound intensity of
Figure FDA00028266992500000611
If f (X)new) < f (g), then X is addednewSetting the current global optimal particle position;
step 4.5: generating a random number rand2; if rand2 < cr, generating a mutated velocity and a trial position according to the differential mutation of step 3.3 and the crossover mechanism of step 3.4 and substituting them into the greedy selection of step 3.5 to obtain the updated position; if the corresponding fitness function value is smaller than the fitness function value of the last global optimal position, updating the particle position or the global optimal particle position;
step 4.6: calculating an error, if the error does not reach a set value or the iteration frequency is less than G, returning to the step 4.3, otherwise, carrying out the next step;
step 4.7: outputting an adaptive value and an optimal position of the globally optimal individual;
step 4.8: and updating the weight and the threshold of the BP neural network according to the optimal position, establishing an optimal feature recognition network model, and outputting a prediction result.
CN201811237688.XA 2018-10-23 2018-10-23 BPNN (back propagation neural network) feature identification method based on improved NBA (novel bat algorithm) Active CN109308524B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811237688.XA CN109308524B (en) 2018-10-23 2018-10-23 BPNN (back propagation neural network) feature identification method based on improved NBA (novel bat algorithm)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811237688.XA CN109308524B (en) BPNN (back propagation neural network) feature identification method based on improved NBA (novel bat algorithm)

Publications (2)

Publication Number Publication Date
CN109308524A CN109308524A (en) 2019-02-05
CN109308524B true CN109308524B (en) 2021-04-09

Family

ID=65225676

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811237688.XA Active CN109308524B (en) BPNN (back propagation neural network) feature identification method based on improved NBA (novel bat algorithm)

Country Status (1)

Country Link
CN (1) CN109308524B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102629287A (en) * 2012-02-29 2012-08-08 沈阳理工大学 Automatic identification method based on standard for the exchange of product model data-compliant numerical control data interface (STEP-NC) intersection features
CN106453293A (en) * 2016-09-30 2017-02-22 重庆邮电大学 Network security situation prediction method based on improved BPNN (back propagation neural network)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108009527A (en) * 2017-12-26 2018-05-08 东北大学 A kind of intelligent characteristic recognition methods towards STEP-NC2.5D manufacturing features

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102629287A (en) * 2012-02-29 2012-08-08 沈阳理工大学 Automatic identification method based on standard for the exchange of product model data-compliant numerical control data interface (STEP-NC) intersection features
CN106453293A (en) * 2016-09-30 2017-02-22 重庆邮电大学 Network security situation prediction method based on improved BPNN (back propagation neural network)

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
NC machining feature recognition based on graphs and neural networks; Gao Yuan et al.; Mechatronics; 2016-01-07; pp. 7-12 *
Research on the improvement of BP neural networks by a chaotic genetic algorithm; Meng Dong et al.; Mathematical Theory and Applications; 2014-03-15; Vol. 34, No. 1; pp. 102-110 *

Also Published As

Publication number Publication date
CN109308524A (en) 2019-02-05

Similar Documents

Publication Publication Date Title
CN109948029B (en) Neural network self-adaptive depth Hash image searching method
CN105512289B (en) Image search method based on deep learning and Hash
Torshizi et al. On type-reduction of type-2 fuzzy sets: A review
CN107679562B (en) Analysis processing method and device for three-dimensional model
CN106846425A (en) A kind of dispersion point cloud compression method based on Octree
CN109960738B (en) Large-scale remote sensing image content retrieval method based on depth countermeasure hash learning
CN111625276B (en) Code abstract generation method and system based on semantic and grammar information fusion
CN109743196B (en) Network characterization method based on cross-double-layer network random walk
CN110673840A (en) Automatic code generation method and system based on tag graph embedding technology
Xu et al. GenExp: Multi-objective pruning for deep neural network based on genetic algorithm
US20220383127A1 (en) Methods and systems for training a graph neural network using supervised contrastive learning
CN103914527B (en) Graphic image recognition and matching method based on genetic programming algorithms of novel coding modes
CN116681104B (en) Model building and realizing method of distributed space diagram neural network
CN108009527A (en) A kind of intelligent characteristic recognition methods towards STEP-NC2.5D manufacturing features
Lata et al. Data augmentation using generative adversarial network
CN103824285B (en) Image segmentation method based on bat optimal fuzzy clustering
CN114241267A (en) Structural entropy sampling-based multi-target architecture search osteoporosis image identification method
CN109308524B (en) BPNN (back propagation neural network) feature identification method based on improved NBA (novel bat algorithm)
Wang et al. A self-adaptive mixed distribution based uni-variate estimation of distribution algorithm for large scale global optimization
CN109116300A (en) A kind of limit learning position method based on non-abundant finger print information
CN116976405A (en) Variable component shadow quantum neural network based on immune optimization algorithm
CN107133348A (en) Extensive picture concentrates the proximity search method based on semantic consistency
CN116451859A (en) Bayesian optimization-based stock prediction method for generating countermeasure network
CN113191486B (en) Graph data and parameter data mixed dividing method based on parameter server architecture
CN113628104B (en) Initial image pair selection method for disordered image incremental SfM

Legal Events

Date Code Title Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant