CN104714237A - Fish identification method with multi-feature and multidirectional data fused

Info

Publication number
CN104714237A
CN104714237A
Authority
CN
China
Prior art keywords
feature
probability
vector
orientation
fish
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510054151.XA
Other languages
Chinese (zh)
Inventor
杜伟东
李海森
李若
马丞浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN201510054151.XA priority Critical patent/CN104714237A/en
Publication of CN104714237A publication Critical patent/CN104714237A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/88 Sonar systems specially adapted for specific applications
    • G01S15/96 Sonar systems specially adapted for specific applications for locating fish
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/52 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
    • G01S7/539 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to the field of acoustic fish identification, and in particular to a fish identification method fusing multi-feature and multidirectional data. The method comprises the following steps: acoustic signals are transmitted underwater and multidirectional acoustic scattering signals of the fish body are acquired; the acquired multidirectional acoustic scattering signals are normalized and filtered; multi-feature extraction is performed on the preprocessed signals: the preprocessed multidirectional acoustic scattering data are subjected to an orthogonal transformation, envelopes are extracted, the wavelet packet coefficient singular value features, time-domain centroid features and frequency-domain centroid features of the envelope information are extracted, and feature fusion and dimension reduction are carried out. The multidirectional data acquisition method is simple and easy to implement. Based on the extracted multiple features, the multidirectional acoustic scattering features are cooperatively fused; the fusion is thorough and compact, and effectively solves the unclear, or even incorrect, identification that occurs when acoustic scattering information from only a single orientation is classified.

Description

A fish identification method fusing multiple features and multi-orientation data
Technical field
What the present invention relates to is a kind of acoustics fish identification field, is specifically related to a kind of multiple features and multi-faceted data fusion fish identification method.
Background art
The invention patent "A fish identification method and system based on wavelet packet multi-scale information entropy" (under substantive examination, publication number CN103308919A) describes a fish identification method whose technical characteristic is that fish identification is performed with a single feature extracted from data of a single orientation. A fish, however, is a complicated acoustic scatterer: the orientation, size and shape of the fish body and swim bladder strongly affect its acoustic scattering characteristics, so the acoustic scattering signal of a fish is very complex, and extracting features that reflect the essential scattering characteristics of the fish from such complex signals is particularly important for fish identification. The scattering properties a fish exhibits differ between acoustic scattering orientations, so detection from a single orientation inevitably loses part of the scattering information. Moreover, fish swim freely in the water and their exact orientation within the beam cannot be predicted during detection, so the acoustic scattering characteristics obtained under real conditions show a certain randomness in scattering orientation. As a result, fish identification suffers from poor accuracy, or even fails, because of differences in fish orientation.
Summary of the invention
The object of the invention is to solve the problem of high-precision fish identification based on acoustic data by providing a fish identification method that fuses multiple features and multi-orientation data.
The object of the present invention is achieved as follows:
The multi-feature and multi-orientation data fusion fish identification method comprises the following steps:
(1) Transmit an acoustic signal into the water and acquire multi-orientation acoustic scattering signals of the fish body;
(2) Normalize and filter the acquired multi-orientation acoustic scattering signals;
(3) Perform multi-feature extraction on the preprocessed signals: apply an orthogonal transformation to the preprocessed multi-orientation acoustic scattering data, extract the envelope, extract wavelet packet coefficient singular value features, time-domain centroid features and frequency-domain centroid features from the envelope information, and perform feature fusion and dimension reduction;
(4) Input the feature quantities into a support vector machine classifier for fish identification: the classifier expresses the class decision as a posterior probability, the decision probability of each orientation is used to weight the decision probabilities of the remaining orientations, and all decision units cooperate with one another to complete the fish identification.
The concrete method of extracting wavelet packet coefficient singular value features from the envelope information is: obtain the wavelet packet coefficients by wavelet packet decomposition and reconstruction, form the wavelet packet coefficient matrix, and compute the wavelet packet coefficient singular values:
$SD_{m,n} = \sqrt{\lambda_{m,n}}$
where D_m is the matrix formed by the wavelet packet reconstruction coefficients, λ_{m,n} is a nonzero eigenvalue of D_m^H D_m, SD_{m,n} is the feature quantity, m denotes the receiving orientation, and n the feature index.
The concrete method of extracting the time-domain centroid feature from the envelope information is:
(3.1.1) Calculate the time-domain centroid TC_11 of the whole signal to obtain the two first-layer subbands [0, TC_11] and [TC_11, T_max], the signal duration being 0 to T_max;
(3.1.2) On the basis of the first-layer segmentation, calculate the time-domain centroid of each subband, TC_21 and TC_22 respectively, to obtain the three second-layer subbands [0, TC_21], [TC_21, TC_22] and [TC_22, T_max];
(3.1.3) Calculate the time-domain centroid in each subband of the i-th layer and use it as the basis for dividing the next layer; 3 to 6 layers are generally advisable, giving 3 to 6 corresponding features.
Extracting the frequency-domain centroid feature from the envelope information comprises:
(3.2.1) Calculate the frequency-domain centroid SC_11 of the analysis band to obtain the two first-layer subbands [0, SC_11] and [SC_11, f_max]; the range where the signal energy is concentrated may be taken as the initial analysis band, the signal band being 0 to f_max;
(3.2.2) On the basis of the first-layer segmentation, calculate the frequency-domain centroid of each subband, SC_21 and SC_22 respectively, to obtain the three second-layer subbands [0, SC_21], [SC_21, SC_22] and [SC_22, f_max];
(3.2.3) Proceeding by analogy, calculate the frequency-domain centroid in each subband of the j-th layer and use it as the basis for dividing the next layer; 3 to 6 layers are generally advisable, giving 3 to 6 corresponding features.
Feature fusion and dimension reduction mean combining the three kinds of features into a fused feature vector and performing feature dimension reduction with the Fisher discriminant method.
In step (4), the support vector machine output is:
$f(T_m) = w\varphi(T_m) + b$
where f(·) is the output hyperplane, w is the weight vector, b is the bias, φ is the nonlinear mapping, T_m is the feature vector, and m is the orientation index;
The posterior probability estimate is:
$p(c \mid T_m) = \dfrac{\exp(w\varphi(T_m)+b)}{\sum_{c=1}^{C_l}\exp(w\varphi(T_m)+b)}$
where c is the class label and C_l is the total number of classes;
The posterior probability vector output by each orientation is:
$p_m = \big(p(c_1 \mid T_m), \ldots, p(c_{C_l} \mid T_m)\big)$
The probability vector p_m output by orientation m is sent to the other decision units, and the probability vectors p_1, ..., p_{m-1}, p_{m+1}, ..., p_M output by the other decision units are received at the same time; assuming these are independently distributed, the weighted probability of orientation m is:
$p_{\mathrm{weig}}(c_k \mid t_m) = \prod_{i \neq m} p_i(c_k \mid T_i)$
where k denotes the label of a class and t_m denotes the recombined feature vector space;
After the decision probability vector and the weighted probability vector are obtained, they are combined, without introducing a new classifier, by defining a slope K_SL expressed as:
$K_{SL} = \dfrac{p(c_1 \mid T) - p(c_2 \mid T)}{p(c_1 \mid T)}$
Wherein
$p(c_1 \mid T) \geq p(c_2 \mid T) \geq \cdots \geq p(c_{C_l} \mid T)$
Then the weights of the decision probability vector and of the weighted probability vector are respectively:
$W_{\mathrm{init}} = \dfrac{K_{SL}^{\mathrm{init}}}{K_{SL}^{\mathrm{init}} + K_{SL}^{\mathrm{weig}}}, \qquad W_{\mathrm{weig}} = \dfrac{K_{SL}^{\mathrm{weig}}}{K_{SL}^{\mathrm{init}} + K_{SL}^{\mathrm{weig}}}$
According to these weights, the final decision probability of each orientation is calculated as:
$p_m^f(c \mid T) = p_m(c \mid T_m)\, W_{\mathrm{init}} + p_{\mathrm{weig}}(c \mid T_m)\, W_{\mathrm{weig}}$
where T is the total feature vector space;
The final classification result for each class is then:
$P(c \mid T) = \prod_{m=1}^{M} p_m^f(c \mid T)$
where M is the number of orientations;
For a feature input to the support vector machine classifier, the class with the largest probability is the class to which the classifier assigns the current input.
The beneficial effects of the present invention are:
(1) From the multi-orientation acoustic scattering data of fish, three kinds of features are extracted: wavelet packet coefficient singular values, time-domain centroids and frequency-domain centroids. These features reflect the essential acoustic characteristics of fish, describe the differences in time-domain and frequency-domain distribution properties between fish species, and complement one another effectively, so fusing them improves the comprehensiveness of the feature information. At the same time, to improve feature stability and reduce the complexity of the identification process, the Fisher discriminant method is used for feature dimension reduction, retaining the features most effective for fish identification. Using multiple features effectively solves the problem of the low recognition rate obtained with a single feature.
(2) The multi-orientation data acquisition method is simple and easy to implement. Based on the multiple features extracted above, the invention cooperatively fuses the multi-orientation acoustic scattering features; the fusion is thorough and compact, and effectively solves the unclear, or even incorrect, identification that occurs when acoustic scattering information from only a single orientation is classified.
Brief description of the drawings
Fig. 1 is a structural block diagram of the multi-feature and multi-orientation data fusion fish identification method;
Fig. 2 is a schematic diagram of the multi-orientation acoustic scattering data acquisition method;
Fig. 3 is a structural block diagram of the multi-feature extraction method;
Fig. 4 is a structural block diagram of the multi-orientation acoustic scattering data cooperative fusion fish identification method.
Embodiments
The invention is described further below with reference to the accompanying drawings and embodiments:
Step (01): a single-beam transmitting transducer is used to transmit an acoustic signal toward the fish body, and M hydrophones simultaneously receive the fish-body scattered acoustic signal from M different orientations, yielding multi-orientation fish scattering data, with M = 6 to 9.
Step (02): the acquired multi-orientation acoustic scattering signals are normalized, filtered and otherwise preprocessed;
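A minimal Python sketch of the preprocessing in steps (01)-(02) follows; it assumes the M orientation channels have already been digitized into a NumPy array of shape (M, N), and the Butterworth band-pass filter, its order and the pass-band are illustrative choices rather than values given by the patent.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(echoes, fs, band, order=4):
    """Normalize and band-pass filter the multi-orientation scattering signals.

    echoes: array of shape (M, N), one row per receiving orientation.
    fs:     sampling rate in Hz; band: (low, high) pass-band in Hz (application dependent).
    """
    # Peak-normalize each channel so amplitudes are comparable across orientations.
    echoes = echoes / np.max(np.abs(echoes), axis=1, keepdims=True)
    # Zero-phase band-pass filtering around the transmit band.
    b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return filtfilt(b, a, echoes, axis=1)
```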
Step (03): the wavelet packet coefficient singular value, time-domain centroid and frequency-domain centroid features are extracted and fused; the concrete method is:
The wavelet packet coefficient singular values are obtained by formula (1):
$SD_{m,n} = \sqrt{\lambda_{m,n}}$    (1)
where D_m is the matrix formed by the wavelet packet reconstruction coefficients, λ_{m,n} is a nonzero eigenvalue of D_m^H D_m, SD_{m,n} is the feature quantity, m denotes the receiving orientation, and n the feature index.
The wavelet packet coefficient singular value feature is:
$T_{sd,m} = (SD_{m,1}, SD_{m,2}, \ldots, SD_{m,n})$    (2)
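A minimal sketch of the wavelet packet coefficient singular value feature of formulas (1)-(2), using PyWavelets and NumPy, is given below; the wavelet family ('db4') and the decomposition level are assumptions, since the patent does not specify them.

```python
import numpy as np
import pywt

def wavelet_packet_singular_values(envelope, wavelet="db4", level=3):
    """Singular values SD_{m,n} of the wavelet packet coefficient matrix of one envelope."""
    wp = pywt.WaveletPacket(data=envelope, wavelet=wavelet, maxlevel=level)
    # Stack the terminal-node coefficients into the matrix D_m of the text.
    nodes = wp.get_level(level, order="natural")
    D = np.array([node.data for node in nodes])
    # The singular values of D_m are the square roots of the nonzero
    # eigenvalues of D_m^H D_m, i.e. the SD_{m,n} of formula (1).
    return np.linalg.svd(D, compute_uv=False)
```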
The time-domain centroid is obtained as follows:
(1) Calculate the time-domain centroid TC_11 of the acoustic scattering signal to obtain the two first-layer subbands [0, TC_11] and [TC_11, T_max];
(2) On the basis of the first-layer segmentation, calculate the time-domain centroid of each subband, TC_21 and TC_22 respectively, to obtain the three second-layer subbands [0, TC_21], [TC_21, TC_22] and [TC_22, T_max];
(3) Proceeding by analogy, calculate the time-domain centroid in each subband of the i-th layer and use it as the basis for dividing the next layer, with i = 3 to 6.
Following the strategy of (1)-(3), each time-domain centroid is computed as:
$TC_m = \dfrac{\sum_{l=0}^{L-1} l\, x_m^2(l)}{\sum_{l=0}^{L-1} x_m^2(l)}$    (3)
where L is the signal length of each subband and x(l) is the signal amplitude.
The time-domain centroid feature is:
$T_{tc,m} = (TC_{m,1}, TC_{m,2}, \ldots, TC_{m,n})$    (4)
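One possible reading of the recursive centroid splitting of steps (1)-(3) and formula (3) is sketched below; taking only the deepest layer's centroids as the feature vector of formula (4) is an interpretation of "3 to 6 layers, 3 to 6 features", and the helper name recursive_centroids is ours.

```python
import numpy as np

def recursive_centroids(weights, n_layers=3):
    """Recursive centroid split; weights is x^2(l) in the time domain or E(f) in the frequency domain."""
    idx = np.arange(len(weights))
    boundaries = [0, len(weights)]                 # layer 0: the whole analysis range
    centroids = []
    for _ in range(n_layers):
        centroids = []
        for lo, hi in zip(boundaries[:-1], boundaries[1:]):
            w = weights[lo:hi]
            if w.sum() <= 0:
                c = (lo + hi) // 2                 # degenerate subband: fall back to the midpoint
            else:
                c = int(round(float(np.sum(idx[lo:hi] * w) / np.sum(w))))
            centroids.append(c)
        # Per steps (1)-(3), the next layer is bounded by this layer's centroids only.
        boundaries = sorted(set([0] + centroids + [len(weights)]))
    return np.array(centroids)                     # deepest layer: one centroid per subband

# Time-domain centroid features of formula (4) for one orientation's envelope x:
# T_tc = recursive_centroids(x ** 2, n_layers=3)
```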
The frequency-domain centroid is obtained as follows:
(1) Calculate the frequency-domain centroid SC_11 of the analysis band to obtain the two first-layer subbands [0, SC_11] and [SC_11, f_max]; the range where the signal energy is concentrated may be taken as the initial analysis band;
(2) On the basis of the first-layer segmentation, calculate the frequency-domain centroid of each subband, SC_21 and SC_22 respectively, to obtain the three second-layer subbands [0, SC_21], [SC_21, SC_22] and [SC_22, f_max];
(3) Proceeding by analogy, calculate the frequency-domain centroid in each subband of the j-th layer and use it as the basis for dividing the next layer, with j = 3 to 6.
Following the strategy of (1)-(3), each frequency-domain centroid is computed as:
$SC_m = \dfrac{\int_0^{f_{\max}} f\, E_m(f)\, \mathrm{d}f}{\int_0^{f_{\max}} E_m(f)\, \mathrm{d}f}$    (5)
where f is the signal frequency and E(f) is the spectral energy at the corresponding frequency after Fourier transformation of the continuous time-domain signal x(t).
The frequency-domain centroid feature is:
$T_{sc,m} = (SC_{m,1}, SC_{m,2}, \ldots, SC_{m,n})$    (6)
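The frequency-domain centroid features of formulas (5)-(6) can reuse the recursive_centroids helper sketched above, with the weights replaced by the spectral energy E(f); the 99%-energy rule used here to pick the analysis band is only a stand-in for "the range where the signal energy is concentrated".

```python
import numpy as np

def spectral_centroid_features(envelope, fs, n_layers=3):
    """Frequency-domain centroid features for one orientation's envelope, in Hz."""
    E = np.abs(np.fft.rfft(envelope)) ** 2                 # spectral energy E(f)
    cumulative = np.cumsum(E) / np.sum(E)
    n_band = int(np.searchsorted(cumulative, 0.99)) + 1    # assumed 99%-energy analysis band
    bins = recursive_centroids(E[:n_band], n_layers)       # centroid bin indices, deepest layer
    return bins * fs / len(envelope)                       # convert rfft bin indices to Hz
```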
The fused feature is:
$T_m = (T_{sd,m}, T_{tc,m}, T_{sc,m})$    (7)
On the basis of the fusion, feature dimension reduction is carried out using formula (8):
$D_T = \dfrac{|\mu_x - \mu_y|^2}{\sigma_x^2 + \sigma_y^2}$    (8)
where x and y denote classes, and μ_x, μ_y and σ_x^2, σ_y^2 are respectively the means and variances of the corresponding feature quantities of classes x and y.
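A minimal sketch of the Fisher-criterion dimension reduction of formula (8) follows; scoring each fused feature individually and keeping the top-ranked ones (rather than computing a full Fisher projection) is an assumption of this sketch, as is the number of retained features.

```python
import numpy as np

def fisher_select(X, y, n_keep=10):
    """X: (n_samples, n_features) fused features; y: binary class labels (0 or 1)."""
    Xa, Xb = X[y == 0], X[y == 1]
    # Formula (8): squared mean difference over summed variances, per feature.
    d = (Xa.mean(axis=0) - Xb.mean(axis=0)) ** 2 / (Xa.var(axis=0) + Xb.var(axis=0) + 1e-12)
    keep = np.argsort(d)[::-1][:n_keep]            # indices of the most discriminative features
    return X[:, keep], keep
```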
Step (04): a support vector machine classifier is used for fish identification; the support vector machine output is:
$f(T_m) = w\varphi(T_m) + b$    (9)
where f(·) is the output hyperplane, w is the weight vector, b is the bias, φ is the nonlinear mapping, T_m is the feature vector, and m is the orientation index.
In step (04) the posterior probability estimate is obtained by formula (10):
$p(c \mid T_m) = \dfrac{\exp(w\varphi(T_m)+b)}{\sum_{c=1}^{C_l}\exp(w\varphi(T_m)+b)}$    (10)
where c is the class label and C_l is the total number of classes.
The posterior probability vector output by each orientation is:
$p_m = \big(p(c_1 \mid T_m), \ldots, p(c_{C_l} \mid T_m)\big)$    (11)
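A sketch of formulas (9)-(11) using scikit-learn: one SVM is trained for each orientation and its decision values are mapped to a posterior vector with the softmax of formula (10). scikit-learn's built-in predict_proba uses Platt scaling instead, so the softmax is written out explicitly here, and the RBF kernel is an assumption.

```python
import numpy as np
from sklearn.svm import SVC

def orientation_posteriors(X_train, y_train, X_test):
    """Posterior probability vectors p_m (formula (11)) for one orientation's test features."""
    clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X_train, y_train)
    scores = clf.decision_function(X_test)          # per-class values of w*phi(T_m) + b
    if scores.ndim == 1:                            # binary case: one margin -> two pseudo-scores
        scores = np.column_stack([-scores, scores])
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)         # softmax of formula (10), one row per sample
```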
The probability vector p_m output by orientation m is sent to the other decision units, and the probability vectors p_1, ..., p_{m-1}, p_{m+1}, ..., p_M output by the other decision units are received at the same time; assuming these probabilities are independently distributed, the weighted probability of orientation m is:
$p_{\mathrm{weig}}(c_k \mid t_m) = \prod_{i \neq m} p_i(c_k \mid T_i)$    (12)
where k denotes the label of a class and t_m denotes the recombined feature space.
After the decision probability vector and the weighted probability vector are obtained, they are combined by the method of formula (13), without introducing a new classifier, by defining a slope K_SL expressed as:
$K_{SL} = \dfrac{p(c_1 \mid T) - p(c_2 \mid T)}{p(c_1 \mid T)}$    (13)
Wherein
$p(c_1 \mid T) \geq p(c_2 \mid T) \geq \cdots \geq p(c_{C_l} \mid T)$    (14)
Then the weights of the decision probability vector and of the weighted probability vector are respectively:
$W_{\mathrm{init}} = \dfrac{K_{SL}^{\mathrm{init}}}{K_{SL}^{\mathrm{init}} + K_{SL}^{\mathrm{weig}}}, \qquad W_{\mathrm{weig}} = \dfrac{K_{SL}^{\mathrm{weig}}}{K_{SL}^{\mathrm{init}} + K_{SL}^{\mathrm{weig}}}$    (15)
According to these weights, the final decision probability of each orientation is calculated as:
$p_m^f(c \mid T) = p_m(c \mid T_m)\, W_{\mathrm{init}} + p_{\mathrm{weig}}(c \mid T_m)\, W_{\mathrm{weig}}$    (16)
The final classification result for each class is then:
$P(c \mid T) = \prod_{m=1}^{M} p_m^f(c \mid T)$    (17)
For a feature input to the support vector machine classifier, the class with the largest probability is the class to which the classifier assigns the current input.
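The cooperative fusion of formulas (12)-(17) can be sketched as follows, operating on the M per-orientation posterior vectors; renormalizing the weighted product of formula (12) back to a probability vector is an added assumption, since the patent only states the product form.

```python
import numpy as np

def slope(p):
    """K_SL of formula (13): relative gap between the two largest class probabilities."""
    s = np.sort(p)[::-1]
    return (s[0] - s[1]) / (s[0] + 1e-12)

def fuse_orientations(P):
    """P: (M, C) array, row m = posterior vector of orientation m. Returns (best class, fused scores)."""
    M, C = P.shape
    fused = np.ones(C)
    for m in range(M):
        p_init = P[m]
        p_weig = np.prod(np.delete(P, m, axis=0), axis=0)    # formula (12): product over other orientations
        p_weig = p_weig / (p_weig.sum() + 1e-12)              # assumed renormalization
        k_init, k_weig = slope(p_init), slope(p_weig)
        w_init = k_init / (k_init + k_weig + 1e-12)           # formula (15)
        p_final = w_init * p_init + (1.0 - w_init) * p_weig   # formula (16)
        fused *= p_final                                       # formula (17)
    return int(np.argmax(fused)), fused
```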
Fig. 2 is a schematic diagram of the multi-orientation acoustic scattering data acquisition method, in which the filled circle represents the transmitting transducer and the open circles represent the receiving transducers; in the present invention the transmitting transducer is a phased-array transducer and the receiving transducers are standard hydrophones.
In the present invention the wavelet packet coefficient singular value, time-domain centroid and frequency-domain centroid features are extracted; the multi-feature extraction flow is shown in Fig. 3. Feature extraction mainly comprises envelope information extraction, feature extraction, feature fusion and feature dimension reduction. The wavelet packet coefficient singular values characterize the joint time-frequency distribution of the fish, the time-domain centroid describes the power distribution of the fish in the time domain, and the frequency-domain centroid describes the power distribution of the fish in the frequency domain.
The specific implementation is:
Step (01): a single-beam phased transducer transmits an acoustic signal toward the fish body, and M hydrophones simultaneously receive the fish-body acoustic scattering signal from M different orientations, yielding multi-orientation fish scattering data, with M = 6 to 9;
Step (02): the acquired multi-orientation scattering signals are normalized, filtered and otherwise preprocessed;
Step (03): the wavelet packet coefficient singular value features, time-domain centroid features and frequency-domain centroid features are extracted, the three are fused, and feature dimension reduction is carried out;
As shown in Fig. 4, in step (04) the feature vector of each orientation corresponds to one decision unit; a decision unit mainly comprises the processing of the acoustic scattering data by the classifier, probability weighting and so on. For orientation m, features are first extracted from the acoustic scattering data and the class decision is expressed as a posterior probability by the support vector machine classifier; at the same time the decision probability of orientation m is used to weight the decision probabilities of the remaining orientations, and all decision units cooperate with one another to complete the prediction of the unknown fish species.
Compared with the prior art, the advantages of the present invention are:
(1) From the multi-orientation acoustic scattering data of fish, three kinds of features are extracted: wavelet packet coefficient singular values, time-domain centroids and frequency-domain centroids. These features reflect the essential acoustic characteristics of fish, describe the differences in time-domain and frequency-domain distribution properties between fish species, and complement one another effectively, so fusing them improves the comprehensiveness of the feature information. At the same time, to improve feature stability and reduce the complexity of the identification process, the Fisher discriminant method is used for feature dimension reduction, retaining the features most effective for fish identification. Using multiple features effectively solves the problem of the low recognition rate obtained with a single feature.
(2) The multi-orientation data acquisition method is simple and easy to implement. Based on the multiple features extracted above, the invention cooperatively fuses the multi-orientation acoustic scattering features; the fusion is thorough and compact, and effectively solves the unclear, or even incorrect, identification that occurs when acoustic scattering information from only a single orientation is classified.

Claims (6)

1. A multi-feature and multi-orientation data fusion fish identification method, characterized by comprising the following steps:
(1) transmit an acoustic signal into the water and acquire multi-orientation acoustic scattering signals of the fish body;
(2) normalize and filter the acquired multi-orientation acoustic scattering signals;
(3) perform multi-feature extraction on the preprocessed signals: apply an orthogonal transformation to the preprocessed multi-orientation acoustic scattering data, extract the envelope, extract wavelet packet coefficient singular value features, time-domain centroid features and frequency-domain centroid features from the envelope information, and perform feature fusion and dimension reduction;
(4) input the feature quantities into a support vector machine classifier for fish identification: the classifier expresses the class decision as a posterior probability, the decision probability of each orientation is used to weight the decision probabilities of the remaining orientations, and all decision units cooperate with one another to complete the fish identification.
2. The multi-feature and multi-orientation data fusion fish identification method according to claim 1, characterized in that the concrete method of extracting wavelet packet coefficient singular value features from the envelope information is: obtain the wavelet packet coefficients by wavelet packet decomposition and reconstruction, form the wavelet packet coefficient matrix, and compute the wavelet packet coefficient singular values:
$SD_{m,n} = \sqrt{\lambda_{m,n}}$
where D_m is the matrix formed by the wavelet packet reconstruction coefficients, λ_{m,n} is a nonzero eigenvalue of D_m^H D_m, SD_{m,n} is the feature quantity, m denotes the receiving orientation, and n the feature index.
3. The multi-feature and multi-orientation data fusion fish identification method according to claim 1, characterized in that the concrete method of extracting the time-domain centroid feature from the envelope information is:
(3.1.1) calculate the time-domain centroid TC_11 of the whole signal to obtain the two first-layer subbands [0, TC_11] and [TC_11, T_max], the signal duration being 0 to T_max;
(3.1.2) on the basis of the first-layer segmentation, calculate the time-domain centroid of each subband, TC_21 and TC_22 respectively, to obtain the three second-layer subbands [0, TC_21], [TC_21, TC_22] and [TC_22, T_max];
(3.1.3) calculate the time-domain centroid in each subband of the i-th layer and use it as the basis for dividing the next layer; 3 to 6 layers are generally advisable, giving 3 to 6 corresponding features.
4. The multi-feature and multi-orientation data fusion fish identification method according to claim 1, characterized in that extracting the frequency-domain centroid feature from the envelope information comprises:
(3.2.1) calculate the frequency-domain centroid SC_11 of the analysis band to obtain the two first-layer subbands [0, SC_11] and [SC_11, f_max]; the range where the signal energy is concentrated may be taken as the initial analysis band, the signal band being 0 to f_max;
(3.2.2) on the basis of the first-layer segmentation, calculate the frequency-domain centroid of each subband, SC_21 and SC_22 respectively, to obtain the three second-layer subbands [0, SC_21], [SC_21, SC_22] and [SC_22, f_max];
(3.2.3) proceeding by analogy, calculate the frequency-domain centroid in each subband of the j-th layer and use it as the basis for dividing the next layer; 3 to 6 layers are generally advisable, giving 3 to 6 corresponding features.
5. The multi-feature and multi-orientation data fusion fish identification method according to claim 1, characterized in that the feature fusion and dimension reduction mean combining the three kinds of features into a fused feature vector and performing feature dimension reduction with the Fisher discriminant method.
6. The multi-feature and multi-orientation data fusion fish identification method according to claim 1, characterized in that in step (4) the support vector machine output is:
$f(T_m) = w\varphi(T_m) + b$
where f(·) is the output hyperplane, w is the weight vector, b is the bias, φ is the nonlinear mapping, T_m is the feature vector, and m is the orientation index;
The posterior probability estimate is:
$p(c \mid T_m) = \dfrac{\exp(w\varphi(T_m)+b)}{\sum_{c=1}^{C_l}\exp(w\varphi(T_m)+b)}$
where c is the class label and C_l is the total number of classes;
The posterior probability vector output by each orientation is:
$p_m = \big(p(c_1 \mid T_m), \ldots, p(c_{C_l} \mid T_m)\big)$
The probability vector p_m output by orientation m is sent to the other decision units, and the probability vectors p_1, ..., p_{m-1}, p_{m+1}, ..., p_M output by the other decision units are received at the same time; assuming these are independently distributed, the weighted probability of orientation m is:
$p_{\mathrm{weig}}(c_k \mid t_m) = \prod_{i \neq m} p_i(c_k \mid T_i)$
where k denotes the label of a class and t_m denotes the recombined feature vector space;
After the decision probability vector and the weighted probability vector are obtained, they are combined, without introducing a new classifier, by defining a slope K_SL expressed as:
$K_{SL} = \dfrac{p(c_1 \mid T) - p(c_2 \mid T)}{p(c_1 \mid T)}$
Wherein
$p(c_1 \mid T) \geq p(c_2 \mid T) \geq \cdots \geq p(c_{C_l} \mid T)$
Then the weights of the decision probability vector and of the weighted probability vector are respectively:
$W_{\mathrm{init}} = \dfrac{K_{SL}^{\mathrm{init}}}{K_{SL}^{\mathrm{init}} + K_{SL}^{\mathrm{weig}}}, \qquad W_{\mathrm{weig}} = \dfrac{K_{SL}^{\mathrm{weig}}}{K_{SL}^{\mathrm{init}} + K_{SL}^{\mathrm{weig}}}$
According to these weights, the final decision probability of each orientation is calculated as:
$p_m^f(c \mid T) = p_m(c \mid T_m)\, W_{\mathrm{init}} + p_{\mathrm{weig}}(c \mid T_m)\, W_{\mathrm{weig}}$
where T is the total feature vector space;
The final classification result for each class is then:
$P(c \mid T) = \prod_{m=1}^{M} p_m^f(c \mid T)$
where M is the number of orientations;
For a feature input to the support vector machine classifier, the class with the largest probability is the class to which the classifier assigns the current input.
CN201510054151.XA 2015-01-30 2015-01-30 Fish identification method with multi-feature and multidirectional data fused Pending CN104714237A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510054151.XA CN104714237A (en) 2015-01-30 2015-01-30 Fish identification method with multi-feature and multidirectional data fused

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510054151.XA CN104714237A (en) 2015-01-30 2015-01-30 Fish identification method with multi-feature and multidirectional data fused

Publications (1)

Publication Number Publication Date
CN104714237A true CN104714237A (en) 2015-06-17

Family

ID=53413722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510054151.XA Pending CN104714237A (en) 2015-01-30 2015-01-30 Fish identification method with multi-feature and multidirectional data fused

Country Status (1)

Country Link
CN (1) CN104714237A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106417143A (en) * 2016-09-05 2017-02-22 华中农业大学 Freshwater fish variety identifying device and method based on passive acoustic information
CN107305248A (en) * 2016-04-18 2017-10-31 中国科学院声学研究所 A kind of ultrabroad band target identification method and device based on HMM
CN107609395A (en) * 2017-08-31 2018-01-19 中国长江三峡集团公司 A kind of numerical value Fusion Model construction method and device
CN107727749A (en) * 2017-08-30 2018-02-23 南京航空航天大学 A kind of ultrasonic quantitative detection method based on wavelet packet fusion feature extraction algorithm
CN110414554A (en) * 2019-06-18 2019-11-05 浙江大学 One kind being based on the improved Stacking integrated study fish identification method of multi-model
CN112244819A (en) * 2020-11-10 2021-01-22 浙大宁波理工学院 System and method for identifying abnormal gait of child based on plantar pressure array detection
CN114155879A (en) * 2021-12-06 2022-03-08 哈尔滨工程大学 Abnormal sound detection method for compensating abnormal perception and stability by using time-frequency fusion

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101907712A (en) * 2009-06-05 2010-12-08 古野电气株式会社 Fish finder
CN103308919A (en) * 2012-03-12 2013-09-18 中国科学院声学研究所 Fish identification method and system based on wavelet packet multi-scale information entropy
CN103969639A (en) * 2014-05-09 2014-08-06 哈尔滨工程大学 Signal processing system and method of five-wave-beam fish finder

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101907712A (en) * 2009-06-05 2010-12-08 古野电气株式会社 Fish finder
CN103308919A (en) * 2012-03-12 2013-09-18 中国科学院声学研究所 Fish identification method and system based on wavelet packet multi-scale information entropy
CN103969639A (en) * 2014-05-09 2014-08-06 哈尔滨工程大学 Signal processing system and method of five-wave-beam fish finder

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
LIU Yin et al.: "Fish identification method based on wavelet packet multi-scale information entropy", Network New Media Technology *
ZHOU Chao et al.: "Application and prospect of underwater acoustic technology in aquaculture", Technical Acoustics *
ZHANG Xinyu et al.: "Aircraft type recognition using support vector machines and higher-order cumulants", Journal of Harbin Engineering University *
DU Weidong et al.: "SVM-based cooperative fusion fish classification method", 31 December 2014 *
MAO Hanying: "Comparative study of crack source localization by support vector machine and neural network", Journal of Guangxi University (Natural Science Edition) *
WANG Xianghong et al.: "Localization of crack sources in water turbine blades based on wavelet neural networks", Journal of Shanghai Jiaotong University *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107305248A (en) * 2016-04-18 2017-10-31 中国科学院声学研究所 A kind of ultrabroad band target identification method and device based on HMM
CN106417143A (en) * 2016-09-05 2017-02-22 华中农业大学 Freshwater fish variety identifying device and method based on passive acoustic information
CN106417143B (en) * 2016-09-05 2019-03-19 华中农业大学 A kind of fresh-water fishes variety ecotype device and method based on passive acoustic information
CN107727749A (en) * 2017-08-30 2018-02-23 南京航空航天大学 A kind of ultrasonic quantitative detection method based on wavelet packet fusion feature extraction algorithm
CN107727749B (en) * 2017-08-30 2020-08-07 南京航空航天大学 Ultrasonic quantitative detection method based on wavelet packet fusion feature extraction algorithm
CN107609395A (en) * 2017-08-31 2018-01-19 中国长江三峡集团公司 A kind of numerical value Fusion Model construction method and device
CN107609395B (en) * 2017-08-31 2020-10-13 中国长江三峡集团公司 Numerical fusion model construction method and device
CN110414554A (en) * 2019-06-18 2019-11-05 浙江大学 One kind being based on the improved Stacking integrated study fish identification method of multi-model
CN112244819A (en) * 2020-11-10 2021-01-22 浙大宁波理工学院 System and method for identifying abnormal gait of child based on plantar pressure array detection
CN114155879A (en) * 2021-12-06 2022-03-08 哈尔滨工程大学 Abnormal sound detection method for compensating abnormal perception and stability by using time-frequency fusion

Similar Documents

Publication Publication Date Title
CN104714237A (en) Fish identification method with multi-feature and multidirectional data fused
CN101470194B (en) Torpedo target recognition method
CN103308919B (en) Fish identification method and system based on wavelet packet multi-scale information entropy
CN107728142B (en) Radar high-resolution range profile target identification method based on two-dimensional convolutional network
CN108229404A (en) A kind of radar echo signal target identification method based on deep learning
CN103323532B (en) Fish identification method and system based on psychoacoustics parameters
CN103730121B (en) A kind of recognition methods pretending sound and device
CN103854660B (en) A kind of four Mike's sound enhancement methods based on independent component analysis
Peso Parada et al. Using Gaussian mixture models to detect and classify dolphin whistles and pulses
CN110109058A (en) A kind of planar array deconvolution identification of sound source method
CN103969634B (en) Objective attribute target attribute feature extracting method based on complete polarization attribute scattering center model
Sun et al. Underwater single-channel acoustic signal multitarget recognition using convolutional neural networks
CN103824093A (en) SAR (Synthetic Aperture Radar) image target characteristic extraction and identification method based on KFDA (Kernel Fisher Discriminant Analysis) and SVM (Support Vector Machine)
CN104751183A (en) Polarimetric SAR image classification method based on tensor MPCA
CN105741844A (en) DWT-SVD-ICA-based digital audio watermarking algorithm
CN104732970A (en) Ship radiation noise recognition method based on comprehensive features
CN107330457A (en) A kind of Classification of Polarimetric SAR Image method based on multi-feature fusion
CN104536007B (en) Fish identification method based on multi-perspective acoustic data
Li et al. Automated classification of Tursiops aduncus whistles based on a depth-wise separable convolutional neural network and data augmentation
CN101644768B (en) Torpedo target recognition method based on cepstrum analysis
CN105589073A (en) Walsh-conversion-based fish active acoustic identification method
CN106990392A (en) A kind of extraterrestrial target fine motion information acquisition method based on random stepped frequency signal
CN103308918B (en) Fish identification method and system based on segmented time-domain centroid features
Starkhammar et al. Separating overlapping click trains originating from multiple individuals in echolocation recordings
CN103323853A (en) Fish identification method and system based on wavelet packets and bispectrum

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150617

RJ01 Rejection of invention patent application after publication