CN107194404A - Submarine target feature extracting method based on convolutional neural networks - Google Patents
Submarine target feature extracting method based on convolutional neural networks
- Publication number
- CN107194404A (application CN201710237910.5A)
- Authority
- CN
- China
- Prior art keywords
- dimensional
- network
- characteristic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
The present invention provides an underwater target feature extraction method based on convolutional neural networks. 1. Divide the sample sequence of the raw radiated-noise signal into 25 consecutive segments, each containing 25 sampling points. 2. Normalize and center the samples of the j-th data segment. 3. Apply the short-time Fourier transform to obtain the LoFAR image. 4. Assign the weight vectors into the existing 3-D tensor. 5. Feed the resulting feature vector into the fully connected layer for classification, compute the error against the label data, and check whether the loss is below the error threshold; if so, stop training, otherwise go to step 6. 6. Adjust the network parameters layer by layer from back to front by gradient descent and return to step 2. Compared with the traditional convolutional neural network algorithm, the method applies multidimensional spatial-information weighting to the feature-map layer to compensate for the spatial information lost in the one-dimensional vectorization before the fully connected layer.
Description
Technical field
The present invention relates to a method for extracting features of underwater targets.
Background technology
At present, underwater target feature extraction relies mainly on two approaches: time domain and frequency domain. Time-domain methods extract features from the waveform structure, based on the theory that differences in structure, material, or shape between targets are reflected in their echo waveforms: the more distinct the targets, the more distinct the waveform structures. In addition, the receiving angle of the echo and the attitude of the target also strongly affect the time-domain waveform, and these differences likewise conceal distinguishing characteristics of the targets, so classification features can be extracted from the waveform structure. Frequency-domain features refer to the spectral features obtained after signal processing; targets are distinguished by spectral estimation, from which target feature parameters are extracted. The methods include non-parametric spectral estimation, parametric spectral estimation, and higher-order spectral estimation. In practice, however, underwater signals are generally non-stationary, with rapidly varying frequency content mixed with heavy ambient noise. Transforming to the frequency domain may lose the temporal structure of the signal, so the long-term temporal structure of the target signal cannot be fully characterized, and the effectiveness of purely frequency-domain underwater target feature extraction methods is therefore deeply questioned. How to effectively fuse the time-domain and frequency-domain signals and realize feature extraction based on signal fusion is thus the key problem this invention needs to solve.
A traditional CNN is a deep neural network with many hidden layers. The network can be described as follows:
Each layer of a convolutional neural network consists of multiple two-dimensional feature maps, and each pixel of a feature map represents one neuron node. Neurons in the network are divided into convolutional neurons and pooling neurons. Pooling neurons form two-dimensional pooling feature maps, whose activation values correspond to feature pixels; the pooling feature maps together form a pooling layer. An analogous relation holds between convolutional neurons, convolution feature maps, and convolutional layers. A CNN is built by alternately stacking convolutional and pooling layers and takes two-dimensional image data as input. Unlike traditional pattern recognition, the data processing, feature extraction, and classification are all implicitly embedded in this deeply interconnected convolutional structure. The convolutional layer is also called the feature extraction layer: a local receptive field of the previous layer is fed, with an appropriate size, into the corresponding neuron of the convolutional layer. This process is called local feature extraction, meaning that the positional relationships among local features are preserved relative to the previous layer's input. The pooling layer, also called the feature mapping or down-sampling layer, maps each feature map to a plane. To keep the features shift- and rotation-invariant during feature mapping, the activation function of the convolutional layer is usually a sigmoid, whose activation values are unlikely to diverge. Moreover, because the neurons of each feature mapping layer share weights, the number of network parameters is greatly reduced, and the over-fitting caused by too many free parameters is avoided. This structure, in which every feature extraction layer (convolutional layer) is followed by a feature mapping layer (pooling layer), gives the model strong denoising and anti-interference capability on raw data.
Among the neurons of a pooling layer within a given region, only the neuron with the largest activation value strengthens its weights, which conforms to the "maximum detection hypothesis". While continually strengthening itself, this neuron also suppresses the output of the surrounding neurons; that is, the features extracted in the feature map are those of each local region.
The convolutional network structure in the figure uses a 4-layer architecture with the original image as network input. In each feature map of every layer, adjacent neurons pass local information to the next layer in units of the convolution kernel size, and the next layer performs a convolution operation on the transmitted information, which constitutes feature extraction, e.g. of edge or orientation features. Training the network is the process of continually modifying the parameters of the convolution kernels. Since a feature map shares the same convolution kernel, the kernel can be viewed as a sliding filter, and scanning the whole feature map is the process of extracting a particular feature. The pooling layer, which performs secondary feature extraction, is more like a blurring filter: the numerous feature components mixed in the input data are finally dispersed, through the network's filtering, onto separate low-resolution feature maps.
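As a hedged illustration of the local feature extraction and maximum pooling described above (not the patent's actual network), the following sketch convolves a small image with a crude edge kernel and then keeps only the strongest activation in each region; the names `conv2d_valid`, `max_pool`, and `edge_kernel` are illustrative.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Slide one kernel over the image ('valid' mode): each output pixel is a
    weighted sum of a local receptive field, i.e. one extracted local feature."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: only the strongest activation in each
    region survives, matching the 'maximum detection hypothesis'."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)
edge_kernel = np.array([[1., -1.], [1., -1.]])  # crude horizontal-edge filter
fmap = conv2d_valid(img, edge_kernel)           # 5x5 convolution feature map
pooled = max_pool(fmap, 2)                      # 2x2 pooled feature map
```

Because the kernel weights are shared across every position of the map, the number of free parameters is the kernel size, not the image size.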
Feature information in a two-dimensional image is often well hidden, because changes of observation position and viewing angle cause the observed target to deform, displace, or even distort. Explicitly extracting features invariant to displacement, scaling, and distortion is extremely difficult, and such methods, where they exist, generalize poorly.
For such problems, the feature detection mechanism of CNNs provides a good answer. Every convolution operation in a CNN extracts local features, so changes in target position or scale do not affect the feature extraction process, and this implicit extraction gives the model broad applicability. With large amounts of training data, invariant structural information is extracted layer by layer into the network's feature space. Moreover, owing to its structure, a CNN can be seamlessly fused with a logistic regression classifier, realizing end-to-end image processing: an image is fed directly into the network and classification information is obtained. Data reconstruction between feature extraction and classification is avoided, the feature extraction process being hidden inside the network structure.
Because the weights between feature maps are locally connected and shared, a network can, on the basis of this property, be distributed across multiple machines and trained simultaneously in parallel; in computation and training speed it substantially outperforms fully connected neural networks.
The design of CNN models is inspired by bionics, and their structure is close to biological neural networks, so they have unique advantages when processing natural raw signals such as image or acoustic information.
In summary, compared with other network models, convolutional neural networks offer the following advantages for feature extraction:
1. The measured data can be used directly as network input, without any preprocessing.
2. End-to-end data processing simplifies data reconstruction.
3. The weight-sharing strategy eases the training burden while making parallelized training possible.
Since the deep features that a deep network obtains from the original input are all represented as feature maps, and the final three-dimensional features are fed into the fully connected layer as a one-dimensional vector for classification, the main difference between the fully connected layer and the convolutional layer is that the convolutional layer embeds a large amount of spatial information while the fully connected layer does not. The spatial structure of a convolutional layer can be represented as a 3-D tensor H × W × D, where H and W are the numbers of neurons along the height and width of one convolution feature map in the layer and D is the number of feature maps in the layer. This 3-D tensor can be understood as the convolutional layer decomposing the two-dimensional input data into multiple H × W local regions; the set of such regions along the D dimension jointly describes a visual pattern. The fully connected layer takes the convolutional layer's output as input but vectorizes the 3-D feature maps; the resulting 1-D vector is the classifier's feature vector. In this process the spatial information is lost and cannot be restored at the SoftMax layer, which affects classification accuracy and, through the network's continual feedback adjustment, indirectly affects the quality of the extracted features.
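The vectorization criticized above can be made concrete with a minimal sketch; the sizes `H`, `W`, `D` are hypothetical, not taken from the patent's network.

```python
import numpy as np

H, W, D = 4, 4, 8                   # hypothetical conv-layer tensor dimensions
chi = np.random.rand(H, W, D)       # 3-D tensor output of the last conv layer

# Vectorization for the fully connected layer: the spatial layout is lost —
# after flattening, neighbouring and distant pixels become indistinguishable
# coordinates of one 1-D classifier feature vector.
feature_vector = chi.reshape(-1)    # 1-D vector of length H*W*D = 128
```

Once flattened, nothing in the vector records which coordinates were spatial neighbours, which is exactly the information the invention seeks to preserve.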
The content of the invention
The object of the invention is to provide a simple and direct underwater target feature extraction method based on convolutional neural networks that compensates for the spatial information lost through the one-dimensional vectorization before the fully connected layer.
The object of the present invention is achieved as follows:
1. Divide the sample sequence S(n) of the raw radiated-noise signal into 25 consecutive segments, each containing 25 sampling points, allowing partial data overlap between consecutive segments, with the overlap set to 50%.
2. Normalize and center the sample M_j(n) of the j-th data segment.
Normalization: the value of L is set to an integer power of 2.
Centering:
3. Carry out the short-time Fourier transform; the LoFAR image of the j-th data segment is obtained after the transform:
The above operations yield the complete LoFAR image, which is taken as the input of the convolutional network, and the following substeps are performed:
(1) In the initialization phase, assign the weights of the convolutional network by random parameter initialization, choose gradient descent as the network parameter adjustment method, and set the training stop condition: the deviation threshold between output data and label data is 0.07.
(2) Take the prepared LoFAR image as the input matrix of the network. In the forward propagation phase, the input matrix passes through two alternating rounds of convolution and pooling, and the feature maps are delivered to the weighting layer, giving preliminary contour information.
(3) Weight the feature maps in the weighting layer; the weighting consists of two parts: weighting in the spatial dimension and weighting in the channel dimension.
4. After the two kinds of weight vectors are obtained, assign them into the existing 3-D tensor. Specifically, let χ be the 3-D feature tensor obtained by pooling the last convolutional layer, where K is the number of feature maps in the whole pooling layer and W, H are the plane dimensions of each feature map. Let x_kij be the (i, j)-th pixel value of the k-th feature map of χ, and let χ' be the weighted 3-D feature tensor, with x'_kij the (i, j)-th feature pixel of the k-th feature map after weighting, obtained by multiplying the computed spatial weight factor α_ij, the channel weight factor β_k, and the feature pixel x_kij:
x'_kij = α_ij · β_k · x_kij
Finally, a sum pooling operation over each feature map of χ' aggregates the weighted 3-D feature tensor into a one-dimensional feature vector F = {f_1, f_2, ..., f_K}, where f_k = Σ_i Σ_j x'_kij.
Through the above processing, the weighted 3-D feature tensor is aggregated into a one-dimensional feature vector.
5. Feed the obtained feature vector into the fully connected layer for classification and compute the error against the label data; check whether the loss is below the error threshold, and if so stop training, otherwise go to step 6.
6. Adjust the network parameters layer by layer from back to front by gradient descent and return to step 2.
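The weighting and aggregation of steps 4 and 5 can be sketched as follows. The weight factors here are stand-ins with the right shapes and normalization (the patent derives α from accumulated activations and β from two-dimensional image entropy); all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
K, H, W = 8, 6, 6                        # channels and plane size of pooled tensor
chi = rng.random((K, H, W))              # 3-D feature tensor chi after local pooling

# Illustrative weight factors (stand-ins, not the patent's exact computations):
S = chi.sum(axis=0)
alpha = S / S.sum()                      # spatial weight factor alpha_ij
beta = np.full(K, 1.0 / K)               # channel weight factor beta_k

# Step 4: x'_kij = alpha_ij * beta_k * x_kij
chi_w = alpha[None, :, :] * beta[:, None, None] * chi

# Step 5: sum-pool each weighted map into f_k, then normalize the vector F
F = chi_w.sum(axis=(1, 2))
F = F / np.linalg.norm(F)
```

Note that F has one entry per channel (length K), so the fully connected layer's input size shrinks from K·H·W to K, which is the over-fitting benefit the description claims.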
The present invention proposes a simple and direct method to compensate for the spatial information loss, described above, caused by the one-dimensional vectorization before the fully connected layer.
Before the final feature-map layer is vectorized, the invention strengthens its spatial information so that this information can be passed into the final fully connected layer. The strengthening is carried out along two dimensions, channel and spatial, after which the result is aggregated into a one-dimensional vector and fed to the fully connected layer.
The idea behind strengthening the spatial information of the feature-map layer is as follows. First assume that different feature maps differ in importance: some contain relatively simple information, while others, because the data of some feature components are nonlinear and not linearly separable, still contain much information even after deep filtering. Likewise, individual feature pixels also differ in importance. Based on this idea, the invention assigns weights to different feature maps and feature pixels to strengthen the spatial structure information and thereby extract high-quality data features.
The steps and flow of the multidimensional weighting algorithm are first explained. In the concrete weighting process, suppose the final feature-map layer of one forward propagation has been obtained; multidimensional weighting is applied to this layer as follows:
1. Local pooling: apply local pooling to each feature map of the last convolutional layer, with pooling window size w × h and window sliding step s, yielding a 3-D tensor.
2. Compute the spatial weight factor: assign a weight α_ij to each feature pixel of the pooled feature maps, corresponding to the (i, j)-th pixel of the feature map.
3. Compute the channel weight factor: assign a weight β_k to each pooled feature map, i.e. to each channel k. (The channel weighting process is described in detail below.)
4. Weighted computation: apply the spatial and channel weights computed in the previous two steps to their corresponding positions, finally obtaining a weighted 3-D tensor.
5. Vector normalization: normalize the above data to obtain the aggregated multidimensional weighted feature vector.
The vector produced by the above five steps is called the multidimensional weighted feature vector. In this flow, the purpose of pooling the last convolutional layer is secondary feature extraction; at the same time, the pooling effectively reduces the scale of the data to be processed.
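Step 1 above (local pooling with window w × h and stride s) can be sketched per channel; the window, stride, and tensor sizes here are illustrative, not the patent's settings.

```python
import numpy as np

def local_pool(fmap, w=2, h=2, s=2):
    """Local max pooling of one feature map with window w x h and stride s —
    the secondary feature extraction applied to the last convolutional layer."""
    H, W = fmap.shape
    out = np.empty(((H - h) // s + 1, (W - w) // s + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = fmap[i*s:i*s+h, j*s:j*s+w].max()
    return out

# Pool every channel of an illustrative 3-D feature-map layer.
layer = np.arange(64, dtype=float).reshape(4, 4, 4)   # K=4 maps of 4x4
pooled = np.stack([local_pool(m) for m in layer])     # 3-D tensor, 4 x 2 x 2
```

With s smaller than the window size the pooling regions overlap, which the flow permits; with s equal to the window size, as here, the data scale shrinks by the window area.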
Next, the weighting process. After the above operations produce the two kinds of weight vectors, they must be assigned into the existing 3-D tensor. Concretely, define χ as the 3-D feature tensor obtained by pooling the last convolutional layer, where K is the number of feature maps in the whole pooling layer and W, H are the plane dimensions of each feature map. Define x_kij as the (i, j)-th pixel value of the k-th feature map of χ, and χ' as the weighted 3-D feature tensor, with x'_kij the (i, j)-th feature pixel of the k-th feature map of the weighted tensor, obtained by multiplying the computed spatial weight factor α_ij, the channel weight factor β_k, and the feature pixel x_kij, as in the following formula:
x'_kij = α_ij · β_k · x_kij
Finally, a sum pooling operation over each feature map of χ' aggregates the weighted 3-D feature tensor into a one-dimensional feature vector F = {f_1, f_2, ..., f_K}, where f_k = Σ_i Σ_j x'_kij.
Through the above processing, the weighted 3-D tensor becomes a one-dimensional feature vector. The obtained feature vector F is then normalized before being connected to the fully connected layer. The main purpose of the aggregation is to reduce the input size of the fully connected layer and hence the number of network connection weights to be trained, avoiding over-fitting; normalizing the feature vector then yields the final multidimensional weighted feature vector.
Next, the computation of the two kinds of weights is introduced in detail.
For the spatial and channel weights in the multidimensional weighting algorithm, the invention adopts two parameter-free computation methods for the spatial weight factor and the channel weight factor, where "parameter-free" means they introduce no additional parameters into the convolutional network that would affect its training efficiency or risk over-fitting.
According to the neocognitron's hypothesis on the formation of modifiable synapses: if near neuron y there exists a neuron y' with stronger activation than y, then the synaptic connection from x to y is not strengthened. That is, the strengthening of a synaptic connection should satisfy the "maximum detection hypothesis": among the neurons in a small region (called a neighborhood), only the neuron with the maximum output has its input synapses strengthened.
From the above theory it can be understood that a neuron with a larger activation value has a larger influence on the connection weights near it, and its importance is correspondingly greater. Define C_k as the k-th feature map of the 3-D feature tensor χ, and S as the accumulation of all feature maps of the tensor, i.e. S = Σ_k C_k.
The above formula yields the preliminary spatial weight matrix. Its meaning is that the activation values x_kij at the same position of different feature maps are superimposed, reflecting the accumulated activation intensity at each position of the spatial plane: the greater the intensity, the more important the position, and the larger the corresponding α_ij should be. Then S is normalized to obtain the final weight matrix A, where the normalization chosen here is a normalization function containing two hyperparameters:
where s_ij denotes the (i, j)-th pixel value of S, and the adjustable parameters a and b are chosen according to the network training situation.
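The accumulation S = Σ_k C_k can be sketched as below. The patent's exact two-hyperparameter normalization function is not reproduced here (it is not given in this text); the stand-in uses `a` as a sharpening exponent and `b` as a scale, preserving the stated property that larger accumulated activation yields larger α_ij.

```python
import numpy as np

def spatial_weights(chi, a=1.0, b=1.0):
    """Hedged sketch of the spatial weight factor alpha_ij: accumulate the
    activation maps C_k into S, then normalize. The normalization form here
    is an assumption, not the patent's exact two-hyperparameter function."""
    S = chi.sum(axis=0)                 # S = sum_k C_k
    P = S ** a                          # a sharpens the contrast between positions
    return b * P / P.sum()              # b scales the final weight matrix A

rng = np.random.default_rng(3)
chi = rng.random((8, 5, 5))             # illustrative pooled feature tensor
A = spatial_weights(chi, a=2.0, b=1.0)  # final spatial weight matrix A
```

Because the weights are pure functions of the activations, no trainable parameters are added, matching the parameter-free requirement stated above.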
For the weight vector over channels, the invention proposes a weighting algorithm based on image entropy.
Because the data of some feature components are nonlinear and not linearly separable, the feature maps still contain much information after deep filtering; the importance of each feature map can therefore be analyzed through the amount of information it contains.
Information entropy, which measures the amount of information, is obtained by analyzing the statistical properties of the whole information source; it reflects, on average, the overall character of the source. For a given source the entropy is unique, while sources with different statistical properties have correspondingly different entropies: the greater the uncertainty of a variable, the larger its entropy.
Given these properties of entropy, the invention introduces the idea of image entropy to measure the amount of information in a feature map. Image entropy denotes the average information content of a whole image, but one-dimensional image entropy only reflects the aggregation of the gray-level distribution in the image and cannot show the spatial distribution of the information. To adapt to feature maps while also reflecting the spatial distribution of the information in them, the invention adopts the method of two-dimensional image entropy. In the concrete operation, the neighborhood activation mean of the feature map is chosen as the spatial feature of the activation distribution, and together with the activation value of a given pixel it forms a feature pair (binary tuple).
However, the gray value in image entropy is a discretized quantity, while the activation values of a feature map come from a continuous activation function, so before computing the image entropy of a feature map, the invention rediscretizes it with equal-width binning, realizing the discretization of the continuous data; each pixel's activation value is processed as shown in the formula, where x is the activation value of a pixel, X_min and X_max are the effective lower and upper bounds of the activation function, and m is the interval length after discretization. Define the feature pair after discretization as (I, J), where I (0 ≤ I ≤ m) is the discretized activation value of the pixel and J (0 ≤ J ≤ m) is the discretized neighborhood activation mean of the feature map. The entropy of the k-th feature map is H_k, computed as
H_k = −Σ_i Σ_j p(i, j) · log p(i, j), with p(i, j) = f(i, j) / (HW),
where f(i, j) is the frequency with which the feature pair (i, j) occurs and HW is the size of the feature map. The feature-map entropy given here both reflects the amount of information contained in the feature map and highlights the distribution of the activation values relative to their neighborhoods. Finally, the entropies of all feature maps are normalized, i.e. β_k = H_k / Σ_k H_k.
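A hedged reconstruction of the two-dimensional image-entropy channel weight follows. The bin count `m`, activation bounds, and the 3×3 neighborhood size are assumptions (the text does not fix the neighborhood), and the final normalization to β_k is the simple sum-to-one form.

```python
import numpy as np

def channel_entropy(fmap, m=16, x_min=0.0, x_max=1.0):
    """Two-dimensional image entropy of one feature map (hedged sketch).
    Each pixel's activation is discretized into one of m bins (I) and paired
    with its discretized 3x3-neighborhood mean (J); then
    H_k = -sum p(I, J) * log p(I, J) with p = f / (H*W)."""
    H, W = fmap.shape
    step = (x_max - x_min) / m
    I = np.clip(((fmap - x_min) / step).astype(int), 0, m - 1)
    # neighborhood activation mean via an edge-padded 3x3 box filter
    padded = np.pad(fmap, 1, mode='edge')
    nb = sum(padded[di:di+H, dj:dj+W] for di in range(3) for dj in range(3)) / 9.0
    J = np.clip(((nb - x_min) / step).astype(int), 0, m - 1)
    counts = np.zeros((m, m))
    np.add.at(counts, (I, J), 1.0)      # frequency f(i, j) of each feature pair
    p = counts / (H * W)
    nz = p[p > 0]
    return float(-(nz * np.log(nz)).sum())

rng = np.random.default_rng(1)
fmaps = rng.random((4, 8, 8))           # K=4 illustrative pooled feature maps
H_k = np.array([channel_entropy(f) for f in fmaps])
beta = H_k / H_k.sum()                  # normalized channel weights beta_k
```

Maps whose activations spread over many (I, J) pairs score higher entropy and thus receive larger channel weights, which is the intended "more information, more important" behavior.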
The above has described the newly introduced multidimensional feature weighting algorithm, designed to strengthen the spatial information in convolution feature maps, and its computation.
The invention realizes the effective fusion of the two kinds of signals through the LoFAR spectrogram, which retains the signal's information in both the time and frequency dimensions: the signal of a continuous time interval is divided into multiple frames, the power spectrum of the signal in each frame is computed, and the power spectra are finally unrolled in time order. Deep learning is then used to extract target features implicitly, reducing manual work. Accordingly, the invention applies a convolutional neural network (Convolutional Neural Network, CNN) to the LoFAR spectrogram to perform time-frequency analysis of the target and to extract features that reflect both the temporal structure and the frequency information.
The main work of the invention is the study and improvement of the traditional CNN. The convolution process in a CNN can extract local feature information from raw images or speech signals, works well, and is widely used.
The beneficial effects of the invention are:
Compared with the traditional convolutional neural network algorithm, the method applies multidimensional spatial-information weighting to the feature-map layer to compensate for the above-mentioned defect of spatial information loss caused by the one-dimensional vectorization before the fully connected layer.
When target recognition is performed on LoFAR spectra, the convolutional network combined with the multidimensional feature-map weighting algorithm recognizes more accurately, raising the recognition rate to about 75%.
Brief description of the drawings
Fig. 1 is the weighting flow diagram of the invention;
Fig. 2 is the flow chart of the invention;
Fig. 3 is the convolutional network structure diagram;
Fig. 4a and Fig. 4b are two groups of experimental schemes for receptive field size and filter count;
Fig. 5a and Fig. 5b compare the effects of the experimental parameters;
Fig. 6 shows the convergence of the comparison experiments.
Embodiment
The present invention is described in more detail below.
The conversion generation of time-frequency domain is carried out to original noise can represent the LoFAR spectrograms of time-frequency domain information.Specifically
Processing procedure is:
1st, the sample sequence that S (n) is primary radiation noise signal is defined, 25 continuous parts, each part are divided into
25 sampled points are set again.Allow the part for having data cross overlapping between wherein 25 continuous parts, juxtaposition degree is set
For 50%.
2nd, M is definedj(n) it is the sample of jth segment signal, and it is normalized and centralization processing, the purpose is to
The amplitude direct current that is evenly distributed and reaches in time of radiated noise signals is allowed to make the average of sample be zero.
Normalized:
In order to just carry out the calculating of Fourier transformation, L value is set to 2 integral number power.
Centralization processing:
3. definitionFor Short Time Fourier Transform, the LoFAR figures of jth segment data signal are obtained after conversion:
The power spectrum of each segment data achieved above is deployed successively in chronological order, that is, obtains complete LoFAR figures.
The above are the general steps for obtaining a LoFAR figure. Although a LoFAR spectrogram is a two-dimensional image whose horizontal axis represents time and whose vertical axis represents frequency, it reflects three-dimensional information: the gray value represents the magnitude of the energy at the given time and frequency.
The obtained LoFAR spectrogram serves as the input of the convolutional network; the concrete structure of the network with feature-map weighting is shown in Fig. 3.
(1) In the initialization phase, the weights in the convolutional network are assigned by random initialization, and gradient descent is chosen as the parameter-adjustment method, i.e., gradient descent converges the whole network in the back-propagation training stage. The network training termination condition is also set: the deviation threshold between the output data and the label data is 0.07.
(2) The processed LoFAR figure is taken as the input matrix of the network. In the forward-propagation stage, the input matrix passes through two rounds of alternating convolution and pooling operations and the feature maps are delivered to the weighting layer; preliminary contour information is obtained at this point.
(3) To enhance the spatial information of the feature maps, the feature maps are weighted in the weighting layer. The weighting procedure is roughly divided into two parts: a weighting over the spatial dimension and a weighting over the channel dimension.
According to the neocognitron hypothesis on the formation of modifiable synapses: if near neuron y there exists a neuron y' with stronger activation than y, then the synaptic connection from x to y is not reinforced. In other words, the reinforcement of a synaptic connection should satisfy the "maximum detection hypothesis": among the set of neurons existing in a certain small region (called a neighborhood), only the neuron with the maximum output has its input synapses reinforced.
From the above theory it can be understood that a neuron with a larger activation value has a greater influence on the connection weights near it, and its importance is correspondingly greater. Therefore define C_k as the k-th feature map in the 3-dimensional feature tensor χ, and the accumulation of all feature maps in the tensor as S, i.e.:
S = sum_{k=1}^{K} C_k
The above formula gives the preliminary spatial weight matrix S. Its meaning is that the activation values x_kij at the same position of different feature maps are superimposed, so as to reflect the accumulated activation intensity at each position of the spatial plane: the greater the intensity, the more important the position, and the larger the corresponding α_ij value should be. Afterwards S is normalized to obtain the final weight matrix A. The normalization chosen here is a normalization function containing two hyperparameters, where s_ij denotes the (i, j)-th pixel value in S and the adjustable parameters are set to a = 0.5, b = 1.
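A minimal sketch of the spatial weighting follows. The patent's exact two-hyperparameter normalization function is not reproduced in the text, so an illustrative power-law normalization A = b * (S / ΣS)^a with a = 0.5, b = 1 is assumed here purely to make the example concrete:

```python
import numpy as np

def spatial_weights(feature_maps, a=0.5, b=1.0):
    """feature_maps: tensor of shape (K, W, H).
    S accumulates the activation at each spatial position across all
    K feature maps; S is then normalized into the final weight matrix
    A. The exact two-hyperparameter normalization is not given in the
    text; a power-law form is assumed here for illustration."""
    S = feature_maps.sum(axis=0)            # accumulated activation per position
    A = b * (S / S.sum()) ** a              # assumed normalization (illustrative)
    return A

rng = np.random.default_rng(1)
chi = rng.random((8, 4, 4))                 # K=8 feature maps of size 4x4
A = spatial_weights(chi)
print(A.shape)   # (4, 4)
```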
For the selection of the channel-wise weight vector, the present invention proposes a weighting algorithm based on image entropy. Because the data of some feature information are nonlinear and linearly inseparable, the feature maps after deep filtering still contain much information. Therefore the importance of each feature map can be analyzed through the amount of information it contains.
Information entropy, which measures the amount of information, is obtained by analyzing the statistical properties of the whole information source; it reflects the average overall characteristics of the source. A specific information source has exactly one information entropy, and sources with different statistical properties have correspondingly different entropies. For variables with greater uncertainty, the information entropy is also relatively larger.
In view of the above properties of information entropy, the present invention introduces the idea of image entropy to measure the amount of information in a feature map. Although image entropy denotes the average information content of a whole image, one-dimensional image entropy can only reflect the aggregation of the gray-level distribution in the image and cannot show the spatial distribution of the information. Therefore, in order to adapt to the feature maps while simultaneously reflecting the spatial distribution of the information in them, the present invention adopts the method of two-dimensional image entropy. In the concrete operation, the neighborhood activation mean of the feature map is selected as the spatial feature quantity of the activation-value distribution, and together with the activation value of a given pixel in the feature map it forms a feature two-tuple.
However, the gray value in image entropy is a discretized value, whereas the activation values in a feature map are produced by a continuous activation function. Before computing the image entropy of a feature map, the present invention therefore re-processes the feature map with an equal-width discretization method to make the continuous data discrete. The activation value of each pixel is processed as:
I = floor(m * (x - X_min) / (X_max - X_min))
where x is the activation value of the pixel, X_min and X_max are the effective lower and upper bounds of the activation function, and m = 255 determines the interval length after discretization. Define (I, J) as the feature two-tuple after discretization, where I (0 <= I <= m) is the discretized activation value of the pixel and J (0 <= J <= m) is the discretized neighborhood activation mean of the feature map. The entropy of the k-th feature map is H_k, calculated as shown in formula (3-6):
H_k = - sum_{i=0}^{m} sum_{j=0}^{m} p_ij * log2(p_ij), where p_ij = f(i, j) / (W * H)
and f(i, j) represents the frequency with which the feature two-tuple (i, j) occurs, and W * H is the feature-map size. The feature-map entropy formula given here both reflects the amount of information contained in the feature map and highlights the information distribution between each activation value and its neighborhood. Finally the entropies of all feature maps are normalized, e.g. β_k = H_k / sum_k H_k.
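A minimal sketch of this channel weighting follows. Several specifics are assumptions not fixed by the text: a 3x3 neighborhood for the activation mean, global activation bounds taken from the tensor itself, and a base-2 logarithm:

```python
import numpy as np

def channel_entropy_weights(feature_maps, m=255):
    """Sketch of the channel weighting: discretize each activation to
    0..m (equal width), pair it with its discretized 3x3 neighborhood
    mean, estimate the joint frequency f(i, j), compute the 2-D image
    entropy H_k of each feature map, and normalize the entropies into
    channel weights beta_k."""
    K, W, H = feature_maps.shape
    xmin, xmax = feature_maps.min(), feature_maps.max()   # assumed effective bounds
    entropies = []
    for k in range(K):
        fm = feature_maps[k]
        # equal-width discretization of activations to 0..m
        I = np.floor((fm - xmin) / (xmax - xmin + 1e-12) * m).astype(int)
        # 3x3 neighborhood activation mean (edge-padded), also discretized
        p = np.pad(fm, 1, mode="edge")
        nb = sum(p[di:di+W, dj:dj+H] for di in range(3) for dj in range(3)) / 9.0
        J = np.floor((nb - xmin) / (xmax - xmin + 1e-12) * m).astype(int)
        # joint frequency of the feature two-tuple (I, J)
        hist = np.zeros((m + 1, m + 1))
        np.add.at(hist, (I.ravel(), J.ravel()), 1.0)
        pij = hist / (W * H)
        nz = pij[pij > 0]
        entropies.append(-(nz * np.log2(nz)).sum())       # H_k
    beta = np.array(entropies)
    return beta / beta.sum()                              # normalized channel weights

rng = np.random.default_rng(2)
chi = rng.random((8, 4, 4))
beta = channel_entropy_weights(chi)
print(beta.shape)   # (8,)
```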
The above describes the new multi-dimensional feature weighting algorithm and its calculation.
4. After the above operations have produced the two kinds of weight vectors, the vectors must be assigned into the existing 3-dimensional tensor. Concretely, first define χ as the 3-dimensional feature tensor after the last convolutional layer and its pooling, where K represents the number of feature maps in the final pooling layer and W, H are the planar dimensions of each feature map. Define x_kij as the (i, j)-th pixel value of the k-th feature map in the 3-dimensional feature tensor, and χ' as the 3-dimensional feature tensor after weighting, with x'_kij denoting the (i, j)-th feature pixel of the k-th feature map in the weighted 3-dimensional tensor. It is obtained by multiplying the computed spatial weight factor α_ij, the channel weight factor β_k and the feature pixel x_kij:
x'_kij = α_ij β_k x_kij
Finally a sum pooling operation is applied to each feature map of χ', aggregating the 3-dimensional weighted feature tensor into a one-dimensional feature vector F = {f_1, f_2, ..., f_K}, where f_k = sum_{i=1}^{W} sum_{j=1}^{H} x'_kij.
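The weighting and sum-pooling step can be sketched as follows (uniform α and β values are used here only to make the example checkable; in the method they come from the spatial and entropy weighting):

```python
import numpy as np

def weighted_aggregate(chi, alpha, beta):
    """chi: (K, W, H) feature tensor; alpha: (W, H) spatial weights;
    beta: (K,) channel weights. Each pixel is scaled by both factors,
    x'_kij = alpha_ij * beta_k * x_kij, then sum pooling over each
    weighted map yields the 1-D feature vector F = {f_1, ..., f_K}."""
    chi_w = beta[:, None, None] * alpha[None, :, :] * chi   # x'_kij
    F = chi_w.sum(axis=(1, 2))                              # f_k = sum_ij x'_kij
    return F

rng = np.random.default_rng(3)
chi = rng.random((8, 4, 4))
alpha = np.full((4, 4), 1.0 / 16)      # illustrative uniform spatial weights
beta = np.full(8, 1.0 / 8)             # illustrative uniform channel weights
F = weighted_aggregate(chi, alpha, beta)
print(F.shape)   # (8,)
```

With these uniform weights each f_k is simply the mean-scaled sum of map k, which makes the broadcasted multiplication easy to verify by hand.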
5. The obtained feature vector is input to the fully connected layer for classification and the error with respect to the label data is computed. Whether the loss error is less than the error threshold is checked; if so, network training stops. Otherwise go to step 6.
6. The network parameters are adjusted layer by layer from back to front using gradient descent. Because the weighting layer introduces no extra parameters, the whole adjustment process is no different from that of a traditional convolutional network. Then return to step 2.
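The training control flow of steps 5 and 6 (stop when the deviation falls below the 0.07 threshold, otherwise another gradient-descent pass) can be illustrated on a toy least-squares model; the model itself is purely illustrative, not the patent's network:

```python
import numpy as np

# Toy illustration of the training control flow only: gradient-descent
# updates repeat until the deviation between output and label drops
# below the 0.07 threshold used in the patent.
rng = np.random.default_rng(4)
X = rng.standard_normal((32, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                           # "label data"

w = np.zeros(3)                          # initialization
threshold, lr = 0.07, 0.05
for step in range(10_000):
    err = X @ w - y                      # output vs. label deviation
    loss = np.mean(err ** 2)
    if loss < threshold:                 # training termination condition
        break
    w -= lr * (2 / len(y)) * X.T @ err   # gradient-descent update
print(loss < threshold)   # True
```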
Experimental results and analysis:
(1) Selection of the data set
This section uses simulation. It is assumed that the target noise consists of line spectra and a continuous spectrum and satisfies a stationary random process, where the line spectra are typically distributed at the low-frequency end below 1 kHz, and multiple groups of sine waves with random phases are taken as the line-spectrum component of the target signal, expressed as:
s(t) = sum_{k=1}^{K} A_k sin(2π f_k t + φ_k)
where K is the number of line spectra, A_k is the amplitude of the k-th line spectrum, f_k is the line-spectrum frequency, and φ_k is a random phase. When simulating different signals, f_k is kept within 1 kHz. To simulate the noise conditions of a real environment, white Gaussian noise of different amplitudes is mixed into the simulated timing signal.
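The simulated signal can be sketched as follows (the amplitude and frequency ranges are illustrative assumptions; the text fixes only that f_k stays below 1 kHz and that the phases are random):

```python
import numpy as np

def simulate_target_noise(K=5, fs=8000, duration=1.0, noise_amp=0.5, seed=0):
    """Line-spectrum component: a sum of K sine waves with random
    phases, s(t) = sum_k A_k * sin(2*pi*f_k*t + phi_k), with all f_k
    kept below 1 kHz, plus white Gaussian noise of adjustable
    amplitude. Amplitude/frequency ranges here are illustrative."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * duration)) / fs
    A = rng.uniform(0.5, 1.0, K)               # line amplitudes A_k
    f = rng.uniform(50.0, 1000.0, K)           # f_k constrained under 1 kHz
    phi = rng.uniform(0, 2 * np.pi, K)         # random phases
    lines = sum(A[k] * np.sin(2 * np.pi * f[k] * t + phi[k]) for k in range(K))
    noise = noise_amp * rng.standard_normal(len(t))   # white Gaussian noise
    return lines + noise

sig = simulate_target_noise()
print(sig.shape)   # (8000,)
```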
Results and analysis
(1) Influence of the receptive field size and the filter quantity on the algorithm
In a convolutional network, the receptive field size and the number of filters affect the quality and efficiency of feature extraction, so their values have a great impact on the algorithm. In the experiments, multiple groups of different experimental schemes were set up, as in Fig. 5a and Fig. 5b. From the effect-comparison figures it can be seen that a smaller receptive field gives a better experimental effect, and more filters give a better effect.
(2) Comparison of the weighted CNN algorithm with the traditional CNN algorithm
The weighted CNN proposed by the present invention is compared with the traditional convolutional network and RICNN; the network convergence is shown in Fig. 6. From the figure it can be seen that the convergence rate of the weighted-CNN network model is better than that of the other two network models.
Conclusion:
The improved algorithm of the present invention makes the following main improvement on the basis of the traditional convolutional neural network:
In a convolutional network, the feature maps of the convolution-pooling part lose spatial information when they enter the fully connected layer. To compensate for the spatial feature information, the present invention proposes a multi-dimensional feature weighting algorithm, which assigns certain weights to the channel-dimension and spatial-dimension information so as to spatially strengthen the feature map layers; the strengthening idea for the channel dimension is to use image-entropy theory to calculate the entropy of the feature map on each channel.
Claims (3)
1. An underwater target feature extraction method based on a convolutional neural network, characterized in that:
(1) the sample sequence S(n) of the original radiated noise signal is divided into 25 continuous segments, each segment again containing 25 sampling points;
(2) the sample M_j(n) of the j-th segment of the data signal is normalized and centralized;
Normalization:
u_j(n) = M_j(n) / max[M_j(i)], 1 <= i <= L
the value of L is set to an integral power of 2;
Centralization:
x_j(n) = u_j(n) - (1/L) * sum_{i=1}^{L} u_j(i);
(3) a short-time Fourier transform is carried out, and the LoFAR figure of the j-th segment of the data signal is obtained after the transform;
the above operations give the complete LoFAR figure; taking the obtained LoFAR figure as the input of the convolutional network, the following sub-steps are executed:
1) in the initialization phase, the weights in the convolutional network are assigned by random initialization; gradient descent is determined as the parameter-adjustment method; the network training termination condition is set, with the deviation threshold between output data and label data set to 0.07;
2) the processed LoFAR figure is taken as the input matrix of the network; in the forward-propagation stage, the input matrix passes through two rounds of alternating convolution and pooling operations and the feature maps are delivered to the weighting layer, obtaining preliminary contour information;
3) the feature maps are weighted in the weighting layer; the weighting procedure is divided into two parts: one is the spatial-dimension weighting and the other is the channel-dimension weighting;
(4) after the two kinds of weight vectors are obtained, the vectors are assigned into the existing 3-dimensional tensor;
(5) the obtained feature vector is input to the fully connected layer for classification and the error with respect to the label data is computed; whether the loss error is less than the error threshold is checked; if so, network training stops, otherwise go to step (6);
(6) parameters are adjusted layer by layer from back to front using gradient descent, and the procedure returns to step (2).
2. The underwater target feature extraction method based on a convolutional neural network according to claim 1, characterized in that: the 25 continuous segments are allowed to partially overlap, and the overlap ratio is set to 50%.
3. The underwater target feature extraction method based on a convolutional neural network according to claim 1, characterized in that said assigning the vectors into the existing 3-dimensional tensor specifically includes: for the 3-dimensional feature tensor after the last convolutional layer and pooling, K represents the number of feature maps in the final pooling layer, W, H are the planar dimensions of each feature map, x_kij is the (i, j)-th pixel value of the k-th feature map in the 3-dimensional feature tensor, and χ' is the 3-dimensional feature tensor after weighting; x'_kij denotes the (i, j)-th feature pixel of the k-th feature map in the weighted 3-dimensional tensor, obtained by multiplying the computed spatial weight factor α_ij, the channel weight factor β_k and the feature pixel x_kij:
x'_kij = α_ij β_k x_kij
finally a sum pooling operation is applied to each feature map of χ', aggregating the 3-dimensional weighted feature tensor into a one-dimensional feature vector F = {f_1, f_2, ..., f_K}, where f_k is calculated as:
f_k = sum_{i=1}^{W} sum_{j=1}^{H} x'_kij
the above processing aggregates the 3-dimensional weighted feature tensor into a one-dimensional feature vector.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710237910.5A CN107194404B (en) | 2017-04-13 | 2017-04-13 | Underwater target feature extraction method based on convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107194404A true CN107194404A (en) | 2017-09-22 |
CN107194404B CN107194404B (en) | 2021-04-20 |
Family
ID=59871107
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710237910.5A Active CN107194404B (en) | 2017-04-13 | 2017-04-13 | Underwater target feature extraction method based on convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107194404B (en) |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108009594A (en) * | 2017-12-25 | 2018-05-08 | 北京航空航天大学 | A kind of image-recognizing method based on change packet convolution |
CN108388904A (en) * | 2018-03-13 | 2018-08-10 | 中国海洋大学 | A kind of dimension reduction method based on convolutional neural networks and covariance tensor matrix |
CN108549832A (en) * | 2018-01-21 | 2018-09-18 | 西安电子科技大学 | LPI radar signal sorting technique based on full Connection Neural Network |
CN108596865A (en) * | 2018-03-13 | 2018-09-28 | 中山大学 | A kind of characteristic pattern for convolutional neural networks enhances system and method |
CN109269547A (en) * | 2018-07-12 | 2019-01-25 | 哈尔滨工程大学 | Submarine target Ship Detection based on line spectrum |
CN109299185A (en) * | 2018-10-18 | 2019-02-01 | 上海船舶工艺研究所(中国船舶工业集团公司第十研究所) | A kind of convolutional neural networks for timing flow data extract the analysis method of feature |
CN109590805A (en) * | 2018-12-17 | 2019-04-09 | 杭州国彪超声设备有限公司 | A kind of determination method and system of turning cutting tool working condition |
CN109858496A (en) * | 2019-01-17 | 2019-06-07 | 广东工业大学 | A kind of image characteristic extracting method based on weighting depth characteristic |
CN110221266A (en) * | 2019-06-11 | 2019-09-10 | 哈尔滨工程大学 | A kind of marine radar target rapid detection method based on support vector machines |
CN110245608A (en) * | 2019-06-14 | 2019-09-17 | 西北工业大学 | A kind of Underwater targets recognition based on semi-tensor product neural network |
CN110319818A (en) * | 2019-07-17 | 2019-10-11 | 伟志股份公司 | A kind of Finish Construction Survey of Underground Pipeline system and method |
CN111178507A (en) * | 2019-12-26 | 2020-05-19 | 集奥聚合(北京)人工智能科技有限公司 | Atlas convolution neural network data processing method and device |
CN111291771A (en) * | 2018-12-06 | 2020-06-16 | 西安宇视信息科技有限公司 | Method and device for optimizing characteristics of pooling layer |
CN111358451A (en) * | 2020-03-17 | 2020-07-03 | 乐普(北京)医疗器械股份有限公司 | Blood pressure prediction method and device |
CN111401548A (en) * | 2020-03-03 | 2020-07-10 | 西北工业大学 | L off line spectrum detection method based on deep learning |
CN111626341A (en) * | 2020-05-12 | 2020-09-04 | 哈尔滨工程大学 | Feature level information fusion method for underwater target identification |
CN111898614A (en) * | 2019-05-05 | 2020-11-06 | 阿里巴巴集团控股有限公司 | Neural network system, image signal and data processing method |
CN112329819A (en) * | 2020-10-20 | 2021-02-05 | 中国海洋大学 | Underwater target identification method based on multi-network fusion |
CN113191208A (en) * | 2021-04-09 | 2021-07-30 | 湖北工业大学 | Feature extraction method and computer equipment for remote sensing image instance segmentation |
CN113542171A (en) * | 2021-07-12 | 2021-10-22 | 湖南大学 | Modulation pattern recognition method and system based on CNN and combined high-order spectral image |
CN116910710A (en) * | 2023-07-19 | 2023-10-20 | 问久软件科技(山东)有限公司 | Anti-addiction management method and system based on group supervision |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101794440A (en) * | 2010-03-12 | 2010-08-04 | 东南大学 | Weighted adaptive super-resolution reconstructing method for image sequence |
CN105279556A (en) * | 2015-11-05 | 2016-01-27 | 国家卫星海洋应用中心 | Enteromorpha detection method and enteromorpha detection device |
CN105868797A (en) * | 2015-01-22 | 2016-08-17 | 深圳市腾讯计算机系统有限公司 | Network parameter training method, scene type identification method and devices |
CN106096605A (en) * | 2016-06-02 | 2016-11-09 | 史方 | A kind of image obscuring area detection method based on degree of depth study and device |
Non-Patent Citations (7)
Title |
---|
YANNIS KALANTIDIS et al.: "Cross-dimensional Weighting for Aggregated Deep Convolutional Features", European Conference on Computer Vision * |
AN Yu et al.: "Target detection and recognition in the naval battlefield", Journal of Huazhong University of Science and Technology (Natural Science Edition) * |
ZONG Zhenyu et al.: "Underwater target recognition method based on LOFAR spectrograms", Journal of Naval Aeronautical and Astronautical University * |
LI Fusheng: "Research on visual saliency detection and its application in video coding", China Master's Theses Full-text Database, Information Science and Technology * |
JIANG Cheng: "Research on objective image quality assessment methods based on structural similarity", China Master's Theses Full-text Database, Information Science and Technology * |
WANG Changlong et al.: "Defect Visualization Technology for Magnetic Flux Leakage Testing", 28 February 2014, Beijing: National Defense Industry Press * |
LIN Suzhen: "Research on difference feature analysis and fusion methods for dual-color mid-wave infrared images", Wanfang Dissertation Database * |
Also Published As
Publication number | Publication date |
---|---|
CN107194404B (en) | 2021-04-20 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||