CN106057196A - Vehicular voice data analysis identification method - Google Patents

Vehicular voice data analysis identification method

Info

Publication number
CN106057196A
CN106057196A (application CN201610534783.0A)
Authority
CN
China
Prior art keywords
model
state
parameter
sequence
observation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610534783.0A
Other languages
Chinese (zh)
Other versions
CN106057196B (en)
Inventor
谢欣霖
陈波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Zhida Science And Technology Co Ltd
Original Assignee
Chengdu Zhida Science And Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Zhida Science And Technology Co Ltd filed Critical Chengdu Zhida Science And Technology Co Ltd
Priority to CN201610534783.0A priority Critical patent/CN106057196B/en
Publication of CN106057196A publication Critical patent/CN106057196A/en
Application granted granted Critical
Publication of CN106057196B publication Critical patent/CN106057196B/en
Expired - Fee Related


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/14 Speech classification or search using statistical models, e.g. Hidden Markov Models [HMMs]
    • G10L15/142 Hidden Markov Models [HMMs]
    • G10L15/144 Training of HMMs

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Machine Translation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a vehicular voice data analysis and identification method. The method comprises the following steps: reading the speech to be identified; obtaining an observation sequence after front-end processing; computing the conditional probability between the observation sequence and the model corresponding to each entry; and determining the entry recognition result according to those conditional probabilities. The method requires no labelled training sample set in an offline dictionary, depends little on rules, and improves recognition precision, thereby adapting to the continuously changing requirements of a vehicle-mounted system.

Description

Vehicle voice data analysis and recognition method
Technical field
The present invention relates to speech recognition, and in particular to a vehicle voice data analysis and recognition method.
Background technology
Technologies such as cloud computing, big data and data mining are driving faster and better development of the information-services industry. Information services that incorporate natural-language-understanding technology help people obtain the information and services they need more accurately and efficiently. Voice, as an ideal mode of human-computer interaction, is gradually becoming one of the most important of the many interaction modes. In the automotive field in particular, natural-language-understanding technology can be used to build highly practical intelligent information-service systems that provide accurate voice commands and navigation through a more convenient and more humane interaction mode, promising a broad improvement in the driving experience. However, existing in-vehicle speech recognition relies on learning from a large labelled training sample set in a large offline dictionary to perform semantic inference; it depends heavily on rules, is inflexible, cannot adapt to the continuously changing requirements of the on-board system, and its precision and accuracy are low.
Summary of the invention
To solve the above problems of the prior art, the present invention proposes a vehicle voice data analysis and recognition method, comprising:
reading the speech to be identified, obtaining an observation sequence after front-end processing, computing the conditional probability between the observation sequence and the model corresponding to each entry, and determining the entry recognition result according to said conditional probabilities.
Preferably, before computing the conditional probability between the observation sequence and each entry's model, the method further comprises:
estimating the speech primitives corresponding to the characteristic parameter sequence to be identified, said primitives including words, syllables, initials and finals, and converting the characteristic parameter sequence into recognition units. The process of building a model comprises:
(1) randomly choosing initial parameter values and initializing the HMM λ;
(2) segmenting the observation sequence by state, the result of the segmentation being the set of observation frames corresponding to each state;
(3) using a segmental clustering algorithm to divide the set of observation vectors belonging to each state into M clusters, M being the number of Gaussian mixture components; each cluster corresponds to one single-Gaussian component of the Gaussian mixture probability density, after which the following parameters are estimated:
c_jk = (number of vectors in the k-th cluster of state j) / (total number of vectors belonging to state j);
μ_jk = sample mean of the vectors in the k-th cluster of state j;
U_jk = sample covariance matrix of the vectors in the k-th cluster of state j;
an updated HMM λ′ being obtained from these parameters;
(4) comparing the updated model λ′ with the initial model λ: if the difference between the models exceeds a preset threshold, λ is replaced with λ′ and steps (2) and (3) are repeated; if the difference is below the threshold, the model is judged to have converged and is saved.
Through this iteration, the initial parameter values are continually corrected over the whole model-training process.
Compared with the prior art, the present invention has the following advantage:
the proposed vehicle voice data analysis and recognition method requires no labelled training sample set in an offline dictionary, depends little on rules, improves recognition accuracy, and adapts to the continuously changing requirements of the on-board system.
Brief description of the drawings
Fig. 1 is a flow chart of the vehicle voice data analysis and recognition method according to an embodiment of the present invention.
Detailed description
A detailed description of one or more embodiments of the invention is given below, together with the accompanying drawing illustrating the principle of the invention. The invention is described in connection with such embodiments, but is not limited to any embodiment. The scope of the invention is limited only by the claims, and the invention covers many alternatives, modifications and equivalents. Many specific details are set forth in the following description to provide a thorough understanding of the invention. These details are provided for exemplary purposes; the invention may also be practised according to the claims without some or all of these details.
One aspect of the present invention provides a vehicle voice data analysis and recognition method. Fig. 1 is a flow chart of the method according to an embodiment of the present invention.
The on-board system of the present invention is composed of a recognition module and a semantic-inference classification module. A machine-learning method is used to learn effectively from a corpus and thus build a word segmenter. A CRF is used for part-of-speech (POS) tagging, after which semantic inference is carried out. Proper nouns are classified, which simplifies the storage and organization of proper nouns and voice commands.
The recognition process of the recognition module can be described as follows: the speech to be identified is read in; after front-end processing, the resulting observation sequence X is matched against all entries, that is, the conditional probabilities are computed, and the entry corresponding to the model with the maximum probability is the recognition result. To complete this recognition, model training must be completed first.
The speech primitives corresponding to the characteristic parameter sequence to be identified are estimated from a probabilistic viewpoint; these primitives include words, syllables, initials and finals, so that the characteristic parameter sequence is converted into recognition units. The process of building a model comprises:
1. Randomly choose initial parameter values and initialize the HMM λ.
2. Segment the observation sequence by state. The result of the segmentation is the set of observation frames corresponding to each state.
3. Use the segmental K-means algorithm to divide the set of observation vectors belonging to each state into M clusters, where M is the number of Gaussian mixture components. Each cluster corresponds to one single-Gaussian component of the Gaussian mixture probability density. Then estimate the following parameters:
c_jk = (number of vectors in the k-th cluster of state j) / (total number of vectors belonging to state j)
μ_jk = sample mean of the vectors in the k-th cluster of state j
U_jk = sample covariance matrix of the vectors in the k-th cluster of state j
An updated HMM λ′ is obtained from these parameters.
4. Compare the updated model λ′ with the initial model λ. If the difference between the models exceeds a preset threshold, replace λ with λ′ and repeat steps 2 and 3; if the difference is below the threshold, the model is judged to have converged and is saved.
Through this iteration, the initial parameter values are continually corrected over the whole model-training process.
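The four training steps above can be sketched as follows. This is a minimal illustration under stated assumptions (NumPy arrays of MFCC frames, a plain k-means in place of the unspecified segmental clustering variant, and toy values for the state count, mixture degree and threshold), not the patented implementation.

```python
import numpy as np

def init_hmm_segmental(frames, n_states=3, n_mix=2, tol=1e-3, max_iter=20):
    """Sketch of the segmental initialization above: cut the frame sequence
    evenly across states, cluster each state's frames into n_mix clusters,
    estimate c_jk / mu_jk / U_jk per cluster, and iterate until stable."""
    T, D = frames.shape
    # Step 2: uniform cut of the observation sequence into states.
    bounds = np.linspace(0, T, n_states + 1).astype(int)
    prev = None
    for _ in range(max_iter):
        params = []
        for j in range(n_states):
            seg = frames[bounds[j]:bounds[j + 1]]
            # Step 3: simple k-means clustering into n_mix clusters,
            # seeded deterministically from evenly spaced frames.
            centers = seg[np.linspace(0, len(seg) - 1, n_mix).astype(int)]
            for _ in range(5):
                labels = ((seg[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
                centers = np.array(
                    [seg[labels == k].mean(0) if np.any(labels == k)
                     else centers[k] for k in range(n_mix)])
            # c_jk: fraction of state-j vectors falling in cluster k.
            c = np.array([(labels == k).mean() for k in range(n_mix)])
            # U_jk: per-cluster sample covariance (identity as a fallback
            # for degenerate clusters).
            U = [np.cov(seg[labels == k].T) if (labels == k).sum() > 1
                 else np.eye(D) for k in range(n_mix)]
            params.append({"c": c, "mu": centers, "U": U})
        flat = np.concatenate([p["mu"].ravel() for p in params])
        # Step 4: converged once the means stop moving by more than tol.
        if prev is not None and np.abs(flat - prev).max() < tol:
            break
        prev = flat
    return params
```

A usage example would pass a (T, D) array of MFCC frames and read off the per-state mixture weights, means and covariances.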
In the training stage of the model, the MFCC feature parameters are used directly as observations: one MFCC vector is one observation value. Next, the parameters of the output probability density function b_j(o) are computed:

b_j(o) = Σ_{m=1}^{M} c_jm · N(o; μ_jm, U_jm),  1 ≤ j ≤ N

where o is the observation, c_jm is the m-th mixture coefficient of state j, and N(·; μ_jm, U_jm) is an elliptically symmetric (Gaussian) density function determined by the mean vector μ_jm and the covariance matrix U_jm.
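For illustration, the mixture density b_j(o) above can be evaluated as below. The sketch assumes the elliptically symmetric density is the standard multivariate Gaussian, which is the usual choice; any parameter values supplied to it are toy examples, not values from the patent.

```python
import numpy as np

def gaussian_pdf(o, mu, U):
    """Multivariate normal density N(o; mu, U) with full covariance U."""
    d = len(mu)
    diff = o - mu
    inv = np.linalg.inv(U)
    norm = np.sqrt(((2 * np.pi) ** d) * np.linalg.det(U))
    return np.exp(-0.5 * diff @ inv @ diff) / norm

def b_j(o, c, mus, Us):
    """b_j(o) = sum_m c_jm * N(o; mu_jm, U_jm) for one state j,
    with mixture weights c, component means mus and covariances Us."""
    return sum(c_m * gaussian_pdf(o, mu_m, U_m)
               for c_m, mu_m, U_m in zip(c, mus, Us))
```

For a single zero-mean unit-covariance component in two dimensions, b_j evaluated at the mean equals 1/(2π), which is a quick sanity check on the implementation.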
The model training process is as follows:
(1) All training feature parameters are segmented into the individual states.
(2) The feature parameters belonging to each state are assigned to the single-Gaussian components of the Gaussian mixture model, and the model is revised as follows:
c′_jk = Σ_{l=1}^{L} γ_t(j, k) / ( Σ_{l=1}^{L} Σ_{m=1}^{M} γ_t(j, m) )

μ′_jk = Σ_{l=1}^{L} γ_t(j, k) · o / ( Σ_{l=1}^{L} γ_t(j, k) )

U′_jk = Σ_{l=1}^{L} γ_t(j, k) · (o − μ_jk)(o − μ_jk)^T / ( Σ_{l=1}^{L} γ_t(j, k) )
where γ_t(j, k) is the probability of occupying state j with mixture component k at time t, L is the sample size, and C_t is the scale factor at time t.
(3) Convergence is tested. If the model has converged, training terminates; otherwise the iteration continues.
For proper-noun classification, a training sample set and a test set are first obtained from the database. The training sample set is preprocessed and converted into a text representation, and a classifier is trained on it; in the evaluation stage the classifier is tested on the test set. After preprocessing, the training samples are segmented, and each proper noun is converted into a vector composed of morphemes. The training samples are used to count the term frequency and inverse frequency of each morpheme, from which the regularized term frequency and inverse-frequency ratio of each morpheme with respect to the predefined categories are computed as the weight w_i(d) of word d for category i:
w_i(d) = log₂(N/n_i + 0.1) / Σ_{i=1}^{N} log₂(N/n_i + 0.1)
where N is the total number of proper nouns and n_i is the number of proper nouns containing entry i. During testing, the sum of weights of the proper noun to be processed for each category is computed, and the final classification result is given:
n(ZY) = max_{i ∈ [1, M]} ( Σ_{j=1}^{N} w_i(ZY_j) )
where ZY is the proper noun to be classified, M is the number of categories, and ZY_j is the j-th morpheme in ZY.
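The weighted-sum category decision above can be sketched as follows. The weight table is a hypothetical toy example introduced for illustration, not data from the patent.

```python
def classify_proper_noun(zy_morphemes, weights, n_classes):
    """Pick the category with the largest summed morpheme weight,
    mirroring n(ZY) = max_i sum_j w_i(ZY_j). `weights` maps each
    morpheme to its list of per-category weights; unseen morphemes
    contribute zero to every category."""
    scores = [sum(weights.get(m, [0.0] * n_classes)[i] for m in zy_morphemes)
              for i in range(n_classes)]
    best = max(range(n_classes), key=lambda i: scores[i])
    return best, scores

# Toy weight table: morpheme -> [weight for class 0, weight for class 1].
weights = {
    "路": [0.9, 0.1],   # road-like morpheme favours class 0 (places)
    "店": [0.2, 0.8],   # shop-like morpheme favours class 1 (businesses)
}
```

For example, a proper noun consisting of the single morpheme "路" scores 0.9 for class 0 and 0.1 for class 1, so class 0 is chosen.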
In the database, four items are stored for each voice command: the purpose, the original text of the voice command, the segmentation information of the voice command, and the POS-tagging information of the voice command. From these, a segmentation training file, a segmentation test file, a POS-tagging training file and a POS-tagging test file are generated. The present invention first performs word segmentation and POS tagging on the voice commands. For repeated errors, the discovered errors can be added in batch to the program's user-defined dictionary for correction.
Before segmentation, the segmentation problem is converted into a sequence-labelling problem: each character is tagged as the beginning of a word, the middle of a word, the end of a word, or a single-character word. The features to be learned are then specified with preset templates: each line of the template file defines one template, and in each template's macro [row, col], row denotes the relative row offset and col denotes the column. According to the templates defined in the template file, feature words are generated from the training file.
The segmentation training file is used for conditional random field learning, yielding a word-segmentation system. The POS-tagging training file is likewise used for conditional random field learning, yielding a POS-tagging system. Testing with the segmentation test file gives the segmentation precision; testing with the POS-tagging test file gives the precision of the POS-tagging system.
After segmentation, the segmentation results must be suitably converted so that the POS-tagging system can subsequently tag them. For each voice command, the segmentation system outputs B1 segmentation results. The trained POS-tagging system then tags each of them; since this step yields B2 POS-tagging outputs per input, each voice command finally yields B1*B2 recognition results, from which the best B are filtered out, and the standard segmentation and POS-tagging information is written back to the database. A recognition result is judged correct when the segmentation and POS-tagging results are both completely consistent.
When generating the B1 segmentation results and the B2 POS-tagging results, the probability of each segmentation result is additionally extracted and saved in array p1, and the probability of each POS-tagging result is extracted and saved in array p2. For the B1*B2 recognition results of each voice command, the generating probability can be computed from the two arrays p1 and p2 as:
p[i] = p2[i mod B2] · p1[⌊i/B2⌋],  i = 0, 1, 2, …, B1·B2 − 1
The generating probabilities p[i] of these B1*B2 recognition results are sorted, and the B recognition results with the highest probabilities are output.
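The combination and ranking step can be sketched as below. Decomposing the joint index i into a segmentation index ⌊i/B2⌋ into p1 and a tagging index i mod B2 into p2 is one reading of the formula above.

```python
def top_b_results(p1, p2, B):
    """Combine B1 segmentation probabilities (p1) with B2 POS-tagging
    probabilities (p2) into B1*B2 joint scores and keep the best B.
    Joint index i decomposes as i = (segmentation index)*B2 + (tag index)."""
    B1, B2 = len(p1), len(p2)
    p = [p2[i % B2] * p1[i // B2] for i in range(B1 * B2)]
    ranked = sorted(range(B1 * B2), key=lambda i: p[i], reverse=True)
    return [(i // B2, i % B2, p[i]) for i in ranked[:B]]
```

With toy probabilities p1 = [0.6, 0.4] and p2 = [0.5, 0.3, 0.2], the two best joint results pair the best segmentation and the second-best segmentation each with the best tagging.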
After training is complete, the output module outputs the segmentation and POS-tagging results of a given voice command. For an input voice command, the dictionary is searched for proper nouns; if one is found, it is replaced with the corresponding special symbol and written to file 1; at the same time, the found proper noun is marked with brackets and written to file 2. File 1 is converted into the input format of the conditional-random-field segmentation test module and written to file 3. File 3 is segmented, and the segmentation result is saved in file 4. The segmentation result is converted into the input format of the POS-tagging system and written to file 5. For the B1 segmentation results produced for each voice command, the probabilities of the segmentation results are saved in p1. The voice commands in file 5 are POS-tagged, and the result is saved in file 6. In total, B1*B2 recognition results are obtained for each voice command, from which the best B recognition results must be output. The B final results are written to file 7, and file 7 is converted into the finally specified output format.
In the present invention, semantic understanding uses a method based on statistical learning. The semantic classes in the on-board system include navigation routes, road conditions, answering and making calls, air-conditioning adjustment, weather voice commands, radio and other functions. Some of the semantics also carry parameters: a call, for example, requires knowing the concrete telephone number to dial. The semantic-inference problem of the invention can thus be converted into assigning the purpose of the input text to a predefined purpose class. The type of the voice command's purpose is inferred first; if the purpose needs further parameters, the corresponding parameters are then found in the voice command.
The POS-tagging problem models the following conditional probability:
p(s1…sm | x1…xm), where x1…xm denote the individual words of the input voice command and s1…sm ∈ S denotes a possible combination of parts of speech. For a voice command composed of x1…xm there are k^m possible POS-tagging combinations, with k = |S|; a probability distribution over these k^m POS-tagging results is therefore established:

p(s1…sm | x1…xm) = Π_{i=1}^{m} p(s_i | s_{i−1}, x1…xm)
Feature vectors are then defined over the finite predefined word set X and tag set Y; through extensive training on s1…sm and x1…xm, a parameter vector w is obtained, which finally gives p(s1…sm | x1…xm).
After training, the states s1…sm for an input x1…xm are solved for, i.e.:

argmax_{s1…sm} p(s1…sm | x1…xm)
The features used can be the words obtained after segmenting the voice command, their corresponding POS tags, or a combination of the two. After a series of features has been chosen, features are added according to the training samples and their weights are adjusted.
For the text-classification problem of semantic recognition, given training samples (x_i, y_i), i = 1, …, n, over word set X and tag set Y, where n is the total number of samples, the following optimization problem is set up:

min_w  wᵀw/2 + C Σ_{i=1}^{n} [ max(1 − y_i wᵀx_i, 0) + log(1 + e^{−y_i wᵀx_i}) ]
During testing, the classification result for an input x is given according to whether wᵀx > 0.
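As a sketch, the decision rule wᵀx > 0 and one plausible reading of the combined hinge-plus-logistic objective above can be written as follows; the sample data and the value of C in any usage are toy assumptions, and in practice the minimization would be handled by a standard solver.

```python
import math

def objective(w, X, y, C):
    """Regularized objective: w.w/2 + C * sum(hinge + logistic loss),
    one plausible reading of the optimization problem in the text."""
    reg = 0.5 * sum(wi * wi for wi in w)
    loss = 0.0
    for xi, yi in zip(X, y):
        margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
        loss += max(1.0 - margin, 0.0) + math.log(1.0 + math.exp(-margin))
    return reg + C * loss

def predict(w, x):
    """Test-time rule: class +1 if w.x > 0, else -1."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) > 0 else -1
```

At w = 0 every margin is zero, so each sample contributes 1 + log 2 to the loss, which gives a quick check of the objective.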
In summary, the present invention proposes a vehicle voice data analysis and recognition method that requires no labelled training sample set in an offline dictionary, depends little on rules, improves recognition accuracy, and adapts to the continuously changing requirements of the on-board system.
Obviously, those skilled in the art should understand that the modules or steps of the invention described above can be implemented with a general-purpose computing system; they can be concentrated in a single computing system or distributed over a network formed by multiple computing systems; alternatively, they can be implemented as program code executable by a computing system, and thus stored in a storage system to be executed by the computing system. The invention is therefore not restricted to any specific combination of hardware and software.
It should be understood that the above specific embodiments of the invention are used only to exemplify or explain the principle of the invention and are not to be construed as limiting it. Any modification, equivalent substitution, improvement and the like made without departing from the spirit and scope of the invention shall be included within the protection scope of the invention. Furthermore, the appended claims are intended to cover all changes and modifications that fall within the scope and boundary of the claims, or the equivalents of such scope and boundary.

Claims (2)

1. A vehicle voice data analysis and recognition method, characterised by comprising:
reading the speech to be identified, obtaining an observation sequence after front-end processing, computing the conditional probability between the observation sequence and the model corresponding to each entry, and determining the entry recognition result according to said conditional probabilities.
2. The method according to claim 1, characterised in that before computing the conditional probability between the observation sequence and each entry's model, the method further comprises:
estimating the speech primitives corresponding to the characteristic parameter sequence to be identified, said primitives including words, syllables, initials and finals, and converting the characteristic parameter sequence into recognition units; the process of building a model comprising:
(1) randomly choosing initial parameter values and initializing the HMM λ;
(2) segmenting the observation sequence by state, the result of the segmentation being the set of observation frames corresponding to each state;
(3) using a segmental clustering algorithm to divide the set of observation vectors belonging to each state into M clusters, M being the number of Gaussian mixture components; each cluster corresponds to one single-Gaussian component of the Gaussian mixture probability density, after which the following parameters are estimated:
c_jk = (number of vectors in the k-th cluster of state j) / (total number of vectors belonging to state j);
μ_jk = sample mean of the vectors in the k-th cluster of state j;
U_jk = sample covariance matrix of the vectors in the k-th cluster of state j;
an updated HMM λ′ being obtained from these parameters;
(4) comparing the updated model λ′ with the initial model λ: if the difference between the models exceeds a preset threshold, λ is replaced with λ′ and steps (2) and (3) are repeated; if the difference is below the threshold, the model is judged to have converged and is saved;
through this iteration, the initial parameter values are continually corrected over the whole model-training process.
CN201610534783.0A 2016-07-08 2016-07-08 Vehicle voice data analysis and recognition method Expired - Fee Related CN106057196B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610534783.0A CN106057196B (en) 2016-07-08 2016-07-08 Vehicle voice data analysis and recognition method


Publications (2)

Publication Number Publication Date
CN106057196A (en) 2016-10-26
CN106057196B CN106057196B (en) 2019-06-11

Family

ID=57184974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610534783.0A Expired - Fee Related CN106057196B (en) 2016-07-08 2016-07-08 Vehicle voice data analysis and recognition method

Country Status (1)

Country Link
CN (1) CN106057196B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1490786A (en) * 2002-10-17 2004-04-21 中国科学院声学研究所 Phonetic recognition confidence evaluating method, system and dictation device therewith
CN101930735A (en) * 2009-06-23 2010-12-29 富士通株式会社 Speech emotion recognition equipment and speech emotion recognition method
CN101980336B (en) * 2010-10-18 2012-01-11 福州星网视易信息系统有限公司 Hidden Markov model-based vehicle sound identification method
CN103065626A (en) * 2012-12-20 2013-04-24 中国科学院声学研究所 Automatic grading method and automatic grading equipment for read questions in test of spoken English
CN103810998A (en) * 2013-12-05 2014-05-21 中国农业大学 Method for off-line speech recognition based on mobile terminal device and achieving method
CN105390133A (en) * 2015-10-09 2016-03-09 西北师范大学 Tibetan TTVS system realization method


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106971721A (en) * 2017-03-29 2017-07-21 沃航(武汉)科技有限公司 A kind of accent speech recognition system based on embedded mobile device
CN108986811A (en) * 2018-08-31 2018-12-11 北京新能源汽车股份有限公司 Voice recognition detection method, device and equipment
CN111353292A (en) * 2020-02-26 2020-06-30 支付宝(杭州)信息技术有限公司 Analysis method and device for user operation instruction
CN111353292B (en) * 2020-02-26 2023-06-16 支付宝(杭州)信息技术有限公司 Analysis method and device for user operation instruction

Also Published As

Publication number Publication date
CN106057196B (en) 2019-06-11

Similar Documents

Publication Publication Date Title
CN108711422B (en) Speech recognition method, speech recognition device, computer-readable storage medium and computer equipment
EP2727103B1 (en) Speech recognition using variable-length context
US6188976B1 (en) Apparatus and method for building domain-specific language models
CN107818164A (en) A kind of intelligent answer method and its system
CN110517693B (en) Speech recognition method, speech recognition device, electronic equipment and computer-readable storage medium
CN104978587B (en) A kind of Entity recognition cooperative learning algorithm based on Doctype
CN105205124B (en) A kind of semi-supervised text sentiment classification method based on random character subspace
CN107330011A (en) The recognition methods of the name entity of many strategy fusions and device
CN101562012B (en) Method and system for graded measurement of voice
CN111046670B (en) Entity and relationship combined extraction method based on drug case legal documents
CN103678271B (en) A kind of text correction method and subscriber equipment
CN113495900A (en) Method and device for acquiring structured query language sentences based on natural language
CN106340297A (en) Speech recognition method and system based on cloud computing and confidence calculation
CN106294344A (en) Video retrieval method and device
CN110992988B (en) Speech emotion recognition method and device based on domain confrontation
CN110019741B (en) Question-answering system answer matching method, device, equipment and readable storage medium
CN112347780B (en) Judicial fact finding generation method, device and medium based on deep neural network
CN106057196A (en) Vehicular voice data analysis identification method
CN106202045B (en) Special audio recognition method based on car networking
CN112949288B (en) Text error detection method based on character sequence
CN108681532A (en) A kind of sentiment analysis method towards Chinese microblogging
CN113408301A (en) Sample processing method, device, equipment and medium
CN106203520B (en) SAR image classification method based on depth Method Using Relevance Vector Machine
CN112489689B (en) Cross-database voice emotion recognition method and device based on multi-scale difference countermeasure
CN113780006A (en) Training method of medical semantic matching model, medical knowledge matching method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190611

Termination date: 20210708