CN101178895A - Model adaptation method based on minimizing the perceptual error of generated parameters - Google Patents

Model adaptation method based on minimizing the perceptual error of generated parameters

Info

Publication number
CN101178895A
CN101178895A, CNA2007101910771A, CN200710191077A
Authority
CN
China
Prior art keywords
model
acoustic
adaptation
perceptual
parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2007101910771A
Other languages
Chinese (zh)
Inventor
秦龙
凌震华
胡郁
胡国平
吴晓如
刘庆峰
王仁华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
iFlytek Co Ltd
Original Assignee
iFlytek Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by iFlytek Co Ltd
Priority to CNA2007101910771A
Publication of CN101178895A
Legal status: Pending

Abstract

The invention relates to a model adaptation method based on minimizing the perceptual error of generated parameters. A perceptually reasonable distance measure between acoustic parameters is first determined with the help of listening experiments. Acoustic parameters are then estimated from the original speaker's acoustic model and the transformation matrix mapping the original speaker's model to the target speaker's model, and the perceptual distance between these estimated parameters and the acoustic parameters of the adaptation data is computed. The transformation matrix from the original speaker's model to the target speaker's model is adjusted iteratively under the criterion of minimizing this perceptual error until the best conversion effect is reached. Addressing the shortcomings of the prior art, the invention provides this model adaptation method for voice conversion in order to reduce the perceptual error and improve the quality of the converted speech.

Description

Model adaptation method based on minimizing the perceptual error of generated parameters
Technical field
The present invention relates to speaker conversion in speech synthesis, and more specifically to a method that uses the perceptual error of the generated parameters on the adaptation data as the training criterion during model adaptation, so as to meet the practical requirements of speech synthesis and improve the effect of speaker conversion.
Background art
With the rapid development of speech synthesis technology, both the voice quality and the naturalness of synthetic speech have improved greatly, and users are no longer satisfied with systems that can only synthesize speech in a single timbre and a single style. To synthesize speech in multiple timbres and styles with traditional speech synthesis techniques, one would have to record separate corpora covering the different speakers and speaking styles, yet recording such corpora is costly and takes a long time. Speaker conversion has therefore been proposed and widely studied as a way to build expressive speech synthesis systems without recording many large corpora.
The speaker conversion methods in wide use today are based on codebook mapping and on Gaussian mixture models (Gaussian Mixture Model). Both approaches generally require speech data from the target speaker whose text matches the original speaker's recordings, so that the correspondence between the two speakers' acoustic parameters can be exploited to build a one-to-one mapping in parameter space from the original speaker to the target speaker. Requiring text-parallel target-speaker speech, however, makes a practical system harder to use. In addition, the target-speaker speech converted by these two methods often suffers from spectral discontinuities, which degrade the quality of the synthetic speech. Methods based on hidden Markov models (Hidden Markov Model) are also widely used for speaker conversion. Because they take the dynamic features into account during parameter generation, HMM-based methods can generate a comparatively smooth spectral trajectory and thus largely avoid discontinuities in the synthetic speech. However, conventional HMM-based speaker conversion estimates the transformation matrix from the original speaker's model to the target speaker's model under the maximum likelihood criterion during adaptation, which does not match the actual requirement of speech synthesis: for synthesis it is more desirable to generate acoustic parameters that are as close as possible to natural speech, so as to guarantee the naturalness and quality of the synthetic speech.
Contents of the invention
Addressing the defects of the prior art, the purpose of the present invention is to provide a model adaptation method for speaker conversion based on minimizing the perceptual error of the generated parameters, so as to improve the effect of speaker conversion.
The technical solution of the present invention is as follows:
A model adaptation method based on minimizing the perceptual error of generated parameters, comprising the following steps:
(1) Use the original speaker's acoustic model and the transformation matrix from the original speaker's model to the target speaker's model, together with the text of the adaptation data, to generate and synthesize acoustic parameters.
(2) Estimate the acoustic parameters of the adaptation data using the original speaker's acoustic model and the transformation matrix from the original speaker's model to the target speaker's model: from the text of the adaptation data and its context information, together with the original speaker's acoustic model and the transformation matrix, estimate for every frame of the adaptation data's acoustic parameters the corresponding estimated acoustic parameters used for synthesis.
(3) Formulate the perceptual error measure between the acoustic parameters of the adaptation data and the estimated acoustic parameters.
(4) From the adaptation-data text, the original speaker's acoustic model and the transformation matrix from the original speaker's model to the target speaker's model, generate the estimated acoustic parameters and compute, with the perceptual error measure, their perceptual error against the acoustic parameters of the adaptation data.
(5) Starting from the transformation matrix obtained by maximum likelihood linear regression as the initial value, update the transformation matrix from the original speaker's model to the target speaker's model by gradient descent, iterating so that the perceptual error of the generated parameters on the adaptation data decreases after each iteration until a minimum perceptual error is obtained.
(6) Finally, apply the resulting transformation matrix to the original speaker's acoustic model, thereby realizing the model adaptation based on minimizing the perceptual error of the generated parameters.
In the model adaptation method based on minimizing the perceptual error of generated parameters, step (1) above uses the adaptation data to compute the transformation matrix from the original speaker to the target speaker.
In this method, the transformation matrix from the original speaker to the target speaker is computed by the maximum likelihood linear regression model adaptation algorithm.
In this method, the acoustic parameters of the adaptation data in step (2) above are written as

$$C = [c_1, c_2, \ldots, c_T]$$

where $C$ denotes the acoustic parameters of the adaptation data and $T$ is the total number of frames. The estimated acoustic parameters in step (2) are written as

$$\tilde{C}(\lambda, M) = [\tilde{c}_1, \tilde{c}_2, \ldots, \tilde{c}_T]$$

where $\tilde{C}(\lambda, M)$ denotes the estimated acoustic parameters and $T$ is again the total number of frames.
In this method, the acoustic parameters used for modelling are line spectral frequency (LSF) parameters, that is,

$$c_t = [\mathrm{lsf}_{t,1}, \ldots, \mathrm{lsf}_{t,N}], \qquad \tilde{c}_t = [\tilde{\mathrm{lsf}}_{t,1}, \ldots, \tilde{\mathrm{lsf}}_{t,N}]$$
In this method, the perceptual error between the acoustic parameters $C$ of the adaptation data and the estimated acoustic parameters $\tilde{C}(\lambda, M)$ is computed as

$$D\bigl(C, \tilde{C}(\lambda, M)\bigr) = \sum_{t=1}^{T} \sum_{p=1}^{N} \frac{\bigl(\mathrm{lsf}_{t,p} - \tilde{\mathrm{lsf}}_{t,p}\bigr)^2}{\min\bigl(\mathrm{lsf}_{t,p} - \mathrm{lsf}_{t,p-1},\; \mathrm{lsf}_{t,p+1} - \mathrm{lsf}_{t,p}\bigr)}$$
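As a concrete illustration, the following NumPy function is a minimal sketch of this distance measure, not code from the patent. It assumes the natural and estimated LSFs are stored as T-by-N arrays in the same unit (for example radians), and it has to assume boundary neighbours $\mathrm{lsf}_{t,0} = 0$ and $\mathrm{lsf}_{t,N+1} = \pi$ for the first and last order, since the formula above leaves them unspecified.

```python
import numpy as np

def perceptual_lsf_distance(lsf, lsf_est, lower=0.0, upper=np.pi):
    """Weighted squared LSF distance D(C, C~) between natural and generated frames.

    lsf, lsf_est : (T, N) arrays of natural and generated line spectral
                   frequencies for the same utterance, in the same unit.
    lower, upper : assumed boundary neighbours lsf_{t,0} and lsf_{t,N+1}
                   used to weight the first and last order.
    """
    lsf = np.asarray(lsf, dtype=float)
    lsf_est = np.asarray(lsf_est, dtype=float)
    T, _ = lsf.shape

    # Pad each natural frame with the assumed boundaries to obtain both neighbours.
    padded = np.hstack([np.full((T, 1), lower), lsf, np.full((T, 1), upper)])
    gap_below = padded[:, 1:-1] - padded[:, :-2]     # lsf_{t,p} - lsf_{t,p-1}
    gap_above = padded[:, 2:] - padded[:, 1:-1]      # lsf_{t,p+1} - lsf_{t,p}
    weight = 1.0 / np.minimum(gap_below, gap_above)  # reciprocal of the smaller gap

    # Sum of weighted squared differences over all frames t and orders p.
    return float(np.sum(weight * (lsf - lsf_est) ** 2))

# Example: two random, sorted 3-frame LSF sequences of order 5.
rng = np.random.default_rng(0)
a = np.sort(rng.uniform(0.1, 3.0, (3, 5)), axis=1)
b = np.sort(rng.uniform(0.1, 3.0, (3, 5)), axis=1)
print(perceptual_lsf_distance(a, b))
```

Because neighbouring LSFs that crowd together mark a spectral peak, the inverse-gap weighting emphasises errors near formants, which is the perceptual motivation for this distance.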
In this method, the gradient descent in step (5) above uses the update

$$M^{(n+1)} = M^{(n)} - \epsilon_n \left.\frac{\partial D\bigl(C, \tilde{C}(\lambda, M)\bigr)}{\partial M}\right|_{M = M^{(n)}}$$

where $n$ is the iteration index, $\epsilon_n$ is the step size of the $n$-th iteration, and $M^{(n)}$ denotes the transformation matrix after the $n$-th iteration.
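A minimal sketch of one such update step follows, assuming the whole parameter-generation-plus-error computation is wrapped in a single callable `error_fn` (a hypothetical stand-in, not an API from the patent); a central finite difference replaces the analytic derivative purely for illustration.

```python
import numpy as np

def gradient_descent_step(M, error_fn, step=1e-3, delta=1e-5):
    """One update M(n+1) = M(n) - eps_n * dD/dM, using a numerical gradient.

    M        : current transformation matrix (a NumPy array of any shape).
    error_fn : callable mapping a matrix of M's shape to the scalar error D.
    step     : step size eps_n for this iteration.
    """
    grad = np.zeros_like(M, dtype=float)
    it = np.nditer(M, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        plus, minus = M.copy(), M.copy()
        plus[idx] += delta
        minus[idx] -= delta
        # Central finite difference approximating dD/dM at this entry.
        grad[idx] = (error_fn(plus) - error_fn(minus)) / (2.0 * delta)
    return M - step * grad

# Toy usage: drive M towards an arbitrary 2x2 target under a quadratic error.
target = np.array([[1.0, 0.5], [0.0, 1.0]])
error_fn = lambda M: float(np.sum((M - target) ** 2))
M = np.zeros((2, 2))
for n in range(200):
    M = gradient_descent_step(M, error_fn, step=0.05)
print(np.round(M, 3))   # close to the target, i.e. the error has been minimized
```

In practice the analytic gradient derived from the perceptual error formula would be used instead of finite differences, which become expensive for large transformation matrices.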
In this method, hidden Markov models are used as the acoustic model.
The above algorithm was used for model adaptation training and speech synthesis experiments. The spectral parameters were 40th-order line spectral frequency parameters. To measure the perceptual error effectively, the distance between two sets of line spectral frequencies was computed as the Euclidean distance of each LSF order weighted by the reciprocal of the gap between adjacent orders. Hidden Markov models were used as the acoustic parameter models. In the adaptation process, the transformation matrix from the original speaker to the target speaker estimated by the maximum likelihood linear regression adaptation algorithm served as the initial value, and the transformation matrix parameters were then adjusted iteratively by minimizing the perceptual error of the generated parameters. In terms of the synthetic speech, this algorithm improved both voice quality and the similarity to the target speaker; in a preference listening test, about 60% of the judgements rated the speech converted with this algorithm higher in quality than the result of maximum-likelihood-based model adaptation.
Experimental results show that, with the above algorithm, the perceptual error of the generated parameters converges after 10 to 20 iterations. Tests on held-out data show that model adaptation based on minimizing the perceptual error of the generated parameters reduces the perceptual error by about 10% relative to the maximum-likelihood-based model adaptation algorithm.
Explanation of terms:
Speech synthesis (Text-To-Speech), also called text-to-speech conversion, involves acoustics, linguistics, digital signal processing, multimedia and other disciplines, and is a cutting-edge technology in the field of Chinese information processing. The main problem it solves is how to convert the textual information of an electronic document into acoustic information that can be played back. Modern speech synthesis technology has grown up with the development of computer technology and digital signal processing, and its goal is to let computers produce continuous speech of high intelligibility and high naturalness.
Speaker conversion (Voice Conversion): a research focus in the speech synthesis field in recent years, which mainly processes one person's (the original speaker's) speech so that it sounds as if it were spoken by another person (the target speaker). It can be applied in many fields such as commerce, the military and entertainment.
Maximum likelihood estimation (Maximum Likelihood Estimation): suppose a random variable $X$ has distribution function $F(X, \theta)$ and density function $p(X, \theta)$, where $\theta = (\theta_1, \ldots, \theta_m) \in \Theta$ is a parameter, and $X_1, \ldots, X_n$ are drawn from the family of distributions $\{F(X, \theta) : \theta \in \Theta\}$. Define the likelihood function $L(\theta) = \prod_{i=1}^{n} p(x_i, \theta)$ as a function of $\theta = (\theta_1, \ldots, \theta_m)$. If $\hat{\theta}$ is the unique maximum point of $L(\theta)$, then $\hat{\theta}$ is called the maximum likelihood estimate of $\theta$.
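As a small numerical illustration of this definition (not part of the patent), the snippet below evaluates the log-likelihood of the mean of a unit-variance Gaussian on a grid and confirms that it peaks at the sample mean, the textbook maximum likelihood estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=500)          # samples x_1, ..., x_n

# log L(theta) = sum_i log p(x_i, theta) for a unit-variance Gaussian mean theta
# (the additive constant -n/2 * log(2*pi) is dropped; it does not move the maximum).
thetas = np.linspace(0.0, 4.0, 4001)
log_lik = np.array([-0.5 * np.sum((x - th) ** 2) for th in thetas])

theta_hat = thetas[np.argmax(log_lik)]
print(theta_hat, x.mean())    # both are close to the true mean 2.0
```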
Gradient descent algorithm (Gradient Descent Algorithm): a basic algorithm for solving unconstrained extremum problems of a function; at each iteration it takes the negative gradient direction (the direction of steepest descent) as the search direction.
Description of drawings
The accompanying drawing is a block diagram of the model adaptation flow.
Embodiment
As shown in the accompanying drawing.
The model adaptation method based on minimizing the perceptual error of generated parameters comprises the following steps:
(1) Using the adaptation data, compute the transformation matrix M from the original speaker to the target speaker with the maximum likelihood linear regression model adaptation algorithm.
(2) Compute the perceptual error of the parameters generated for the text of the adaptation data.
A) Estimate the acoustic parameters corresponding to the adaptation data using the original speaker's acoustic model λ and the transformation matrix M from the original speaker's model to the target speaker's model. From the text of the adaptation data and its context information, together with λ and M, estimate for every frame of the adaptation-data acoustic parameters $C$ the corresponding acoustic parameters $\tilde{C}(\lambda, M)$ used for synthesis, where

$$C = [c_1, c_2, \ldots, c_T], \qquad \tilde{C}(\lambda, M) = [\tilde{c}_1, \tilde{c}_2, \ldots, \tilde{c}_T]$$
$T$ is the total number of frames. Hidden Markov models are used as the acoustic model together with maximum-likelihood parameter generation, and the acoustic parameters used for modelling are line spectral frequency parameters, that is,

$$c_t = [\mathrm{lsf}_{t,1}, \ldots, \mathrm{lsf}_{t,N}], \qquad \tilde{c}_t = [\tilde{\mathrm{lsf}}_{t,1}, \ldots, \tilde{\mathrm{lsf}}_{t,N}]$$

where $N$ is the order of the line spectral frequency parameters, here $N = 40$.
B) Compute the perceptual error between the acoustic parameters. Since line spectral frequency parameters are used for acoustic modelling, the perceptual error between the adaptation-data acoustic parameters $C$ and the generated parameters $\tilde{C}(\lambda, M)$ is computed as

$$D\bigl(C, \tilde{C}(\lambda, M)\bigr) = \sum_{t=1}^{T} \sum_{p=1}^{N} \frac{\bigl(\mathrm{lsf}_{t,p} - \tilde{\mathrm{lsf}}_{t,p}\bigr)^2}{\min\bigl(\mathrm{lsf}_{t,p} - \mathrm{lsf}_{t,p-1},\; \mathrm{lsf}_{t,p+1} - \mathrm{lsf}_{t,p}\bigr)}$$
(3) Adjust the transformation matrix M from the original speaker's model to the target speaker's model with the goal of minimizing the perceptual error. To find the transformation matrix M for which the perceptual error is smallest, each parameter of the matrix is adjusted step by step with gradient descent, that is,

$$M^{(n+1)} = M^{(n)} - \epsilon_n \left.\frac{\partial D\bigl(C, \tilde{C}(\lambda, M)\bigr)}{\partial M}\right|_{M = M^{(n)}}$$

where $n$ is the iteration index, $\epsilon_n$ is the step size of the $n$-th iteration, and $M^{(n)}$ denotes the transformation matrix after the $n$-th iteration. The concrete parameter update rule is obtained by combining this update with the perceptual error formula above; the note immediately below makes the starting point of that derivation explicit.
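As a hedged sketch that follows directly from the error formula above (not a formula quoted from the patent): the weighting term depends only on the natural LSFs and is therefore constant with respect to M, so

$$\frac{\partial D}{\partial \tilde{\mathrm{lsf}}_{t,p}} = \frac{-2\,\bigl(\mathrm{lsf}_{t,p} - \tilde{\mathrm{lsf}}_{t,p}\bigr)}{\min\bigl(\mathrm{lsf}_{t,p} - \mathrm{lsf}_{t,p-1},\; \mathrm{lsf}_{t,p+1} - \mathrm{lsf}_{t,p}\bigr)}, \qquad \frac{\partial D}{\partial M} = \sum_{t=1}^{T}\sum_{p=1}^{N} \frac{\partial D}{\partial \tilde{\mathrm{lsf}}_{t,p}}\,\frac{\partial \tilde{\mathrm{lsf}}_{t,p}}{\partial M}$$

where the remaining factor $\partial \tilde{\mathrm{lsf}}_{t,p} / \partial M$ comes from differentiating the HMM parameter-generation step through the transformed model means and depends on the particular generation algorithm used.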
(4) Iterate steps (2) and (3) until the perceptual error of the generated parameters converges; the update of the transformation matrix from the original speaker's model to the target speaker's model is then complete.
(5) Apply the iteratively updated transformation matrix M to the original speaker's acoustic model λ to compute the target speaker's acoustic model λ′, which finally completes the model adaptation based on minimizing the perceptual error of the generated parameters (an illustrative sketch of the complete loop is given below).
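To tie steps (1) through (5) together, the following self-contained toy sketch runs the whole loop under strong simplifying assumptions, and none of it is code from the patent: HMM parameter generation is replaced by an MLLR-style mean mapping c̃_t = M·ξ_t with ξ_t = [μ_tᵀ, 1]ᵀ the extended mean of the frame's aligned state (no dynamic-feature smoothing), an identity transform stands in for the MLLR-estimated initial value, the boundary neighbours 0 and π are assumed for the weighting, and the gradient is the analytic one for this simplified mapping only.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 200, 8                       # frames and (toy) LSF order

# Toy stand-ins for the aligned source-model state means and the adaptation data.
mu = 0.2 + np.cumsum(rng.uniform(0.15, 0.35, size=(T, N)), axis=1)  # well-separated "LSF" means
true_M = np.hstack([0.9 * np.eye(N), 0.1 * np.ones((N, 1))])        # unknown "true" transform
xi = np.hstack([mu, np.ones((T, 1))])                               # extended means [mu, 1]
C = xi @ true_M.T + 0.01 * rng.normal(size=(T, N))                  # natural adaptation-data LSFs

# Inverse-gap weights from the natural LSFs (boundaries 0 and pi assumed, as above).
padded = np.hstack([np.zeros((T, 1)), C, np.full((T, 1), np.pi)])
W = 1.0 / np.minimum(padded[:, 1:-1] - padded[:, :-2], padded[:, 2:] - padded[:, 1:-1])

def generate(M):
    """Step (2), heavily simplified: generated LSFs are the transformed state means."""
    return xi @ M.T

def error(M):
    """Step (2)B): weighted squared LSF distance between natural and generated frames."""
    return float(np.sum(W * (C - generate(M)) ** 2))

def gradient(M):
    """Analytic dD/dM for this simplified mapping: sum over t of -2 W_t (c_t - c~_t) xi_t^T."""
    return (-2.0 * W * (C - generate(M))).T @ xi

# Step (1): an identity transform stands in here for the MLLR-estimated initial value.
M = np.hstack([np.eye(N), np.zeros((N, 1))])
print("initial error:", round(error(M), 2))

# Steps (3)-(4): iterative gradient-descent refinement of the transformation matrix.
for _ in range(500):
    M = M - 1e-6 * gradient(M)

print("final error:  ", round(error(M), 2))   # the perceptual error has decreased
# Step (5) would now apply M to the source model's means to obtain the target-speaker model.
```

Running the sketch prints a clearly smaller final error than initial error, mirroring the behaviour reported for the real system, where the error is said to converge within 10 to 20 iterations.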
The above algorithm was used for model adaptation training and speech synthesis experiments. The spectral parameters were 40th-order line spectral frequency parameters. To measure the perceptual error effectively, the distance between two sets of line spectral frequencies was computed as the Euclidean distance of each LSF order weighted by the reciprocal of the gap between adjacent orders. Hidden Markov models were used as the acoustic parameter models. In the adaptation process, the transformation matrix from the original speaker to the target speaker estimated by the maximum likelihood linear regression adaptation algorithm served as the initial value, and the transformation matrix parameters were then adjusted iteratively by minimizing the perceptual error of the generated parameters.
Experimental results show that, with the above algorithm, the perceptual error of the generated parameters converges after 10 to 20 iterations. Tests on held-out data show that model adaptation based on minimizing the perceptual error of the generated parameters reduces the perceptual error by about 10% relative to the maximum-likelihood-based model adaptation algorithm.

Claims (8)

1. A model adaptation method based on minimizing the perceptual error of generated parameters, characterized in that the method comprises the following steps:
(1) using the original speaker's acoustic model and the transformation matrix from the original speaker's model to the target speaker's model, together with the text of the adaptation data, to generate and synthesize acoustic parameters;
(2) estimating the acoustic parameters of the adaptation data using the original speaker's acoustic model and the transformation matrix from the original speaker's model to the target speaker's model: from the text of the adaptation data and its context information, together with the original speaker's acoustic model and the transformation matrix, estimating for every frame of the adaptation data's acoustic parameters the corresponding estimated acoustic parameters used for synthesis;
(3) formulating the perceptual error measure between the acoustic parameters of the adaptation data and the estimated acoustic parameters;
(4) from the adaptation-data text, the original speaker's acoustic model and the transformation matrix from the original speaker's model to the target speaker's model, generating the estimated acoustic parameters and computing, with the perceptual error measure, their perceptual error against the acoustic parameters of the adaptation data;
(5) starting from the transformation matrix obtained by maximum likelihood linear regression as the initial value, updating the transformation matrix from the original speaker's model to the target speaker's model by gradient descent, iterating so that the perceptual error of the generated parameters on the adaptation data decreases after each iteration until a minimum perceptual error is obtained;
(6) finally, applying the resulting transformation matrix to the original speaker's acoustic model, thereby realizing the model adaptation based on minimizing the perceptual error of the generated parameters.
2. The model adaptation method based on minimizing the perceptual error of generated parameters according to claim 1, characterized in that in step (1) the adaptation data are used to compute the transformation matrix from the original speaker to the target speaker.
3. The model adaptation method based on minimizing the perceptual error of generated parameters according to claim 2, characterized in that the transformation matrix from the original speaker to the target speaker is computed by the maximum likelihood linear regression model adaptation algorithm.
4. The model adaptation method based on minimizing the perceptual error of generated parameters according to claim 1, characterized in that the acoustic parameters of the adaptation data in step (2) are written as

$$C = [c_1, c_2, \ldots, c_T]$$

where $C$ denotes the acoustic parameters of the adaptation data and $T$ is the total number of frames, and that the estimated acoustic parameters in step (2) are written as

$$\tilde{C}(\lambda, M) = [\tilde{c}_1, \tilde{c}_2, \ldots, \tilde{c}_T]$$

where $\tilde{C}(\lambda, M)$ denotes the estimated acoustic parameters and $T$ is again the total number of frames.
5. The model adaptation method based on minimizing the perceptual error of generated parameters according to claim 4, characterized in that the acoustic parameters used for modelling are line spectral frequency parameters, that is,

$$c_t = [\mathrm{lsf}_{t,1}, \ldots, \mathrm{lsf}_{t,N}], \qquad \tilde{c}_t = [\tilde{\mathrm{lsf}}_{t,1}, \ldots, \tilde{\mathrm{lsf}}_{t,N}]$$
6. The model adaptation method based on minimizing the perceptual error of generated parameters according to claim 4, characterized in that the perceptual error between the adaptation-data acoustic parameters $C$ and the estimated acoustic parameters $\tilde{C}(\lambda, M)$ is computed as

$$D\bigl(C, \tilde{C}(\lambda, M)\bigr) = \sum_{t=1}^{T} \sum_{p=1}^{N} \frac{\bigl(\mathrm{lsf}_{t,p} - \tilde{\mathrm{lsf}}_{t,p}\bigr)^2}{\min\bigl(\mathrm{lsf}_{t,p} - \mathrm{lsf}_{t,p-1},\; \mathrm{lsf}_{t,p+1} - \mathrm{lsf}_{t,p}\bigr)}$$
7. The model adaptation method based on minimizing the perceptual error of generated parameters according to claim 1, characterized in that the gradient descent in step (5) uses the update

$$M^{(n+1)} = M^{(n)} - \epsilon_n \left.\frac{\partial D\bigl(C, \tilde{C}(\lambda, M)\bigr)}{\partial M}\right|_{M = M^{(n)}}$$

where $n$ is the iteration index, $\epsilon_n$ is the step size of the $n$-th iteration, and $M^{(n)}$ denotes the transformation matrix after the $n$-th iteration.
8. The model adaptation method based on minimizing the perceptual error of generated parameters according to claim 1, characterized in that hidden Markov models are used as the acoustic model.
CNA2007101910771A 2007-12-06 2007-12-06 Model adaptation method based on minimizing the perceptual error of generated parameters Pending CN101178895A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2007101910771A CN101178895A (en) 2007-12-06 2007-12-06 Model adaptation method based on minimizing the perceptual error of generated parameters

Publications (1)

Publication Number Publication Date
CN101178895A true CN101178895A (en) 2008-05-14

Family

ID=39405118

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2007101910771A Pending CN101178895A (en) Model adaptation method based on minimizing the perceptual error of generated parameters

Country Status (1)

Country Link
CN (1) CN101178895A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101751922B (en) * 2009-07-22 2011-12-07 中国科学院自动化研究所 Text-independent speech conversion system based on HMM model state mapping
CN102243870A (en) * 2010-05-14 2011-11-16 通用汽车有限责任公司 Speech adaptation in speech synthesis
US9564120B2 (en) 2010-05-14 2017-02-07 General Motors Llc Speech adaptation in speech synthesis
CN110992935A (en) * 2014-09-12 2020-04-10 微软技术许可有限责任公司 Computing system for training neural networks
CN105185372A (en) * 2015-10-20 2015-12-23 百度在线网络技术(北京)有限公司 Training method for multiple personalized acoustic models, and voice synthesis method and voice synthesis device
CN105185372B (en) * 2015-10-20 2017-03-22 百度在线网络技术(北京)有限公司 Training method for multiple personalized acoustic models, and voice synthesis method and voice synthesis device
CN111179905A (en) * 2020-01-10 2020-05-19 北京中科深智科技有限公司 Rapid dubbing generation method and device
CN111862933A (en) * 2020-07-20 2020-10-30 北京字节跳动网络技术有限公司 Method, apparatus, device and medium for generating synthesized speech
WO2022253061A1 (en) * 2021-06-03 2022-12-08 华为技术有限公司 Voice processing method and related device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Open date: 20080514