CN103531196B - A unit selection method for waveform concatenation speech synthesis - Google Patents

A unit selection method for waveform concatenation speech synthesis

Info

Publication number
CN103531196B
CN103531196B CN201310481306.9A
Authority
CN
China
Prior art keywords
obtains
syllable
similarity
unit
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310481306.9A
Other languages
Chinese (zh)
Other versions
CN103531196A (en)
Inventor
陶建华
张冉
温正棋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongke Extreme Element Hangzhou Intelligent Technology Co Ltd
Original Assignee
Institute of Automation, Chinese Academy of Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation, Chinese Academy of Sciences
Priority to CN201310481306.9A
Publication of CN103531196A
Application granted
Publication of CN103531196B
Legal status: Active

Abstract

The invention discloses a unit selection method for waveform concatenation speech synthesis. The method comprises the following steps: perform hidden-Markov-model training on the original audio, obtaining an acoustic model set and the corresponding feature decision trees; input a number of training texts, search the feature decision trees for the relevant acoustic models, and then obtain the corresponding target speech and target syllables; train a similarity classifier from the similarity between the target speech and its corresponding candidate units, together with the likelihood of each acoustic parameter of the candidate units under the current acoustic model; input any text to be synthesized, reject dissimilar candidate units with the similarity classifier, select the best units from the remaining candidates under the minimum-concatenation-cost principle, and splice them into synthetic speech. The method of the invention can synthesize speech of comparatively high quality.

Description

A unit selection method for waveform concatenation speech synthesis
Technical field
The present invention relates to the field of intelligent information processing, and in particular to a unit selection method for waveform concatenation speech synthesis.
Background art
Speech is one of the main means by which humans communicate, and speech synthesis technology aims to enable computers to produce continuous speech of high clarity and naturalness. In the evolution of speech synthesis technology, early research mainly adopted parametric synthesis methods; later, with the development of computer technology, waveform concatenation synthesis methods appeared. As corpora keep growing, the number of candidate units keeps increasing as well, and how to select the best units for a given input text and splice them together has attracted more and more attention.
The parametric speech synthesis system based on hidden Markov models and the concatenative system based on unit selection have been the mainstream speech synthesis technologies of the past decade or so. Hybrid speech synthesis systems combine the advantages of the two: the acoustic models trained for the former are used to guide unit selection, so that more suitable units are selected and spliced. The unit selection of such hybrid systems is more stable than traditional concatenation methods and requires less manual intervention, but it still has many shortcomings, mainly the following:
1. The selection criterion does not reflect human auditory perception: a high score under existing selection methods does not mean that the selected speech suits the human ear better;
2. Existing methods select units by weighted superposition of factors: a sub-cost is computed for each feature of a unit, the sub-costs are each given a weight, and their superposition forms a single total selection cost. This assumes that the influence of all factors on the acceptability of a unit superposes linearly, which obviously does not match the facts.
Summary of the invention
To solve one or more of the above problems, the invention provides a unit selection method for waveform concatenation speech synthesis. The method incorporates subjective human auditory perception, so it can select the units best suited to the human ear and finally splice them into good speech.
The unit selection method for waveform concatenation speech synthesis provided by the invention comprises the following steps:
Extract parameters from the original sound corpus and, combined with the corresponding text annotation, perform hidden-Markov-model training. Input a number of training texts, perform text analysis, search the decision trees for the relevant models, synthesize the corresponding target speech with a parameter generation algorithm, and cut it into target syllables. Use the manually judged similarity between each synthesized syllable and the audio of its candidate units as the class attribute, and the likelihood of each acoustic parameter of the candidate units under the current model as the input feature vector, and train a similarity classifier from them. For any given text to be synthesized, reject dissimilar candidate units with the classifier, select the best units from the remaining candidates under the minimum-concatenation-cost principle, and finally splice them into synthetic speech.
As can be seen from the above technical scheme, the unit selection method for waveform concatenation speech synthesis of the invention has the following beneficial effects:
(1) Candidate units similar to the parametrically synthesized syllables share their stress and intonation, so speech selected under this criterion and spliced together is both stable and consistent;
(2) Candidate units similar to the parametrically synthesized syllables are also easier to splice, because their features at the boundaries tend to agree, so little or no smoothing is needed, which preserves the smoothness and naturalness of the original speech;
(3) Introducing the subjective auditory factor into unit selection makes the selection result better match human subjective preference.
Brief description of the drawings
Fig. 1 is a flowchart of the unit selection method for waveform concatenation speech synthesis according to an embodiment of the invention;
Fig. 2 is the acoustic model training flow according to an embodiment of the invention;
Fig. 3 is the hidden Markov training flowchart according to an embodiment of the invention;
Fig. 4 is the flowchart for generating target syllables according to an embodiment of the invention;
Fig. 5 is the classifier training flowchart according to an embodiment of the invention;
Fig. 6 is the flowchart of unit selection with the classifier according to an embodiment of the invention.
Detailed description of the embodiments
To make the object, technical scheme and advantages of the invention clearer, the invention is described in more detail below with reference to specific embodiments and the accompanying drawings.
It should be noted that similar or identical parts in the drawings and the description use the same reference numbers. Implementations not shown or described in the drawings are of forms known to those of ordinary skill in the art. In addition, although examples of parameters with particular values may be provided herein, it should be appreciated that a parameter need not exactly equal the corresponding value, but may approximate it within an acceptable error margin or design constraint.
Fig. 1 is a flowchart of the unit selection method for waveform concatenation speech synthesis according to an embodiment of the invention. As shown in Fig. 1, the method comprises the following steps:
Step S1, perform hidden-Markov-model training on the original audio extracted from the audio database, obtaining an acoustic model set and the corresponding feature decision trees;
As shown in Fig. 2, step S1 further comprises the following steps:
Step S11, obtain the original audio from the audio database;
Step S12, extract spectral parameters and fundamental frequency (F0) parameters from the original audio frame by frame;
Step S12 further comprises the following steps:
Step S121, apply framing and windowing to the original audio;
Framing and windowing are conventional audio signal processing techniques and are not described further here.
Step S122, extract the mel cepstral coefficients of each resulting frame, for example with the STRAIGHT algorithm;
In an embodiment of the invention, the 25th-order static mel cepstral coefficients are extracted first, and then their first-order and second-order differences are computed, so that the final mel cepstral feature is 75-dimensional.
Step S123, compute the F0 parameters of each frame;
In an embodiment of the invention, the F0 of each frame is computed first, and its first-order and second-order differences are then computed likewise, so that the final F0 feature is 3-dimensional. The sketch below illustrates this delta-feature construction.
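The following is a minimal sketch, not from the patent: np.gradient is a simple finite-difference stand-in for the regression-window deltas normally used in HTS-style systems, and the input array is a random placeholder for extracted mel cepstra.

    import numpy as np

    def add_deltas(static: np.ndarray) -> np.ndarray:
        """static: (frames, dims) matrix of static features.
        Returns (frames, 3*dims): [static, delta, delta-delta]."""
        delta = np.gradient(static, axis=0)    # first-order differences
        delta2 = np.gradient(delta, axis=0)    # second-order differences
        return np.hstack([static, delta, delta2])

    mcep = np.random.randn(200, 25)            # placeholder for 25th-order mel cepstra
    features = add_deltas(mcep)                # shape (200, 75), as in the embodiment

The same function applied to a (frames, 1) F0 track yields the 3-dimensional F0 feature.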
Step S13, annotate the text corresponding to the original audio synchronously, marking the contextual feature information of each syllable in the original audio, and at the same time produce segment-boundary annotations of the original audio;
In an embodiment of the invention, contextual feature annotation is carried out in units of syllables, using 66-dimensional prosodic structure features plus 24-dimensional pronunciation features; the annotation is done mainly by hand.
The exact boundaries in the segment annotation are not critical; the invention uses the result of automatic segmentation.
Step S14, based on the spectral and F0 parameters of the original audio, the contextual feature annotation, and the segment-boundary annotation, perform conventional hidden Markov model training, obtaining model sets for duration, F0 and spectrum, together with the respective feature decision trees.
In this step, multi-space probability distribution modeling is adopted. In an embodiment of the invention, 10-state hidden Markov models are trained on the given parameter and feature sequences. The concrete training flow is shown in Fig. 3; a structural sketch of the 10-state model follows.
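Purely as an illustration of the 10-state structure, the sketch below fits a Gaussian HMM with hmmlearn on placeholder data. It is an assumption-laden stand-in, not the patent's training procedure: hmmlearn provides neither the multi-space F0 distributions nor the context-dependent left-to-right topology of HTS-style training.

    import numpy as np
    from hmmlearn import hmm

    # Placeholder parameter sequences for two utterances (frames x 78 dims,
    # e.g. 75 mel-cepstral + 3 F0 dimensions as in the embodiment above):
    utt1, utt2 = np.random.randn(180, 78), np.random.randn(220, 78)
    X = np.vstack([utt1, utt2])

    # 10 emitting states with diagonal Gaussian outputs (ergodic here, whereas
    # HTS-style systems use a left-to-right, no-skip topology):
    model = hmm.GaussianHMM(n_components=10, covariance_type="diag", n_iter=20)
    model.fit(X, lengths=[len(utt1), len(utt2)])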
Step S2, input a number of training texts, search the feature decision trees for the relevant acoustic models, and then obtain the corresponding target speech and target syllables;
As shown in Fig. 4, step S2 further comprises the following steps:
Step S21, input training texts balanced over multiple syllables and pass them through front-end text analysis, i.e. extract the features in the text with methods such as maximum entropy, obtaining the corresponding contextual feature sequences;
Maximum-entropy-based text analysis is a conventional text analysis technique and is not described further here.
Chinese has more than 1300 commonly used syllables; therefore, in an embodiment of the invention, texts balanced over 500 syllables are input and passed through the front-end text analysis to obtain the corresponding context attributes; a sketch of such a maximum-entropy front end follows.
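A maximum-entropy classifier is equivalent to multinomial logistic regression, so sklearn's LogisticRegression can stand in for it. The features and labels below are hypothetical placeholders for whatever contextual attributes the front end actually predicts (for example, a prosodic boundary label per syllable); the patent does not specify them.

    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical lexical features and prosodic-boundary labels per syllable:
    train_feats = [{"cur_syl": "zhong", "next_syl": "guo", "pos": "NN"},
                   {"cur_syl": "guo", "next_syl": "ren", "pos": "NN"}]
    train_labels = ["B1", "B0"]

    vec = DictVectorizer()
    X = vec.fit_transform(train_feats)
    clf = LogisticRegression(max_iter=1000)   # multinomial maximum-entropy model
    clf.fit(X, train_labels)
    print(clf.predict(vec.transform([{"cur_syl": "zhong", "next_syl": "guo", "pos": "NN"}])))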
Step S22, input the contextual feature sequence into the feature decision trees, obtaining the acoustic model sequence that matches the current context;
In this step, according to the contextual features in the contextual feature sequence, decisions are made on the clustering trees of the duration, F0 and spectral parameters respectively, obtaining the corresponding acoustic model sequence and duration models; the sketch below shows the tree descent.
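A minimal sketch of descending one such clustering tree, with an assumed node layout (the patent does not specify a data format): each internal node asks a yes/no question about the context, and each leaf holds an acoustic model.

    def search_tree(node, context: dict):
        """node is {'question': (key, value), 'yes': subtree, 'no': subtree}
        for internal nodes and {'model': ...} for leaves."""
        while "model" not in node:
            key, value = node["question"]
            node = node["yes"] if context.get(key) == value else node["no"]
        return node["model"]

    # Hypothetical two-leaf tree asking a single tone question:
    tree = {"question": ("tone", 4),
            "yes": {"model": "leaf_A"},
            "no": {"model": "leaf_B"}}
    print(search_tree(tree, {"tone": 4, "position": "initial"}))   # -> leaf_A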
Step S23, based on the acoustic model sequence, obtain the target speech parameters with a parameter generation algorithm;
The target speech parameters comprise F0 and spectral parameters;
Step S24, based on the target speech parameters, synthesize the target sentence speech with a vocoder, and cut the target sentence speech into target syllables.
In this step, the target syllables obtained by cutting serve as the target speech in the similarity comparison.
Step S3, according to the similarity between the target speech and its corresponding candidate units, together with the likelihood of each acoustic parameter of the candidate units under the current acoustic model, train a similarity classifier;
As shown in Fig. 5, step S3 further comprises the following steps:
Step S31, cut the sentences in the audio database at syllable boundaries; the resulting syllable-sized segments are the candidate units. Group identical syllables into one class, thereby building the candidate unit inventory, and assign the spectral and F0 parameters extracted frame by frame in step S12 to each candidate unit in the inventory; a sketch of this grouping follows.
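A sketch of the inventory construction under an assumed data layout, where each corpus sentence is already a list of (syllable label, frame features) segments:

    from collections import defaultdict

    def build_inventory(sentences):
        """sentences: list of sentences, each a list of (syllable, frames)."""
        inventory = defaultdict(list)
        for segments in sentences:
            for syllable, frames in segments:
                inventory[syllable].append(frames)   # all instances of one syllable
        return inventory

    inv = build_inventory([[("ba1", "frames_a")],
                           [("ba1", "frames_b"), ("ma3", "frames_c")]])
    # inv["ba1"] -> ["frames_a", "frames_b"]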
Step S32, feed the acoustic parameters of each candidate unit corresponding to each target syllable into the context-dependent acoustic models obtained in step S22 in turn, computing the probability of each unit's duration, F0 and spectrum under its corresponding acoustic model, and take the set of all these probabilities as the feature set; one such likelihood computation is sketched below.
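Assuming diagonal-Gaussian state output distributions (common in HMM synthesis, though the patent does not spell this out), one entry of this feature set is a log-likelihood of the form:

    import numpy as np

    def diag_gauss_loglik(x, mean, var):
        """Log N(x; mean, diag(var)), summed over dimensions."""
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

    # Hypothetical target-model statistics and one candidate frame (75 dims):
    mean, var = np.zeros(75), np.ones(75)
    frame = np.random.randn(75)
    feature = diag_gauss_loglik(frame, mean, var)   # one likelihood feature

Summing such frame log-likelihoods per state, and treating duration, F0 and spectrum separately, yields the probability set used as classifier input.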
Step S33, recruit a number of native Chinese speakers to label the similarity between the target syllables and the candidate units in binary fashion, i.e. similar or dissimilar, and take this result as the class attribute;
The number of instances per syllable class varies; to reduce manual effort, in an embodiment of the invention at most 30 instances of each syllable class are taken for the similarity comparison.
Step S34, train the similarity classifier from the class attribute and the feature set.
In an embodiment of the invention, the similarity classifier can adopt a CART classifier or an SVM classifier; experiments show that an SVM with a second-order polynomial kernel gives the better classification performance.
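A minimal sklearn sketch of such an SVM on placeholder data; the three-dimensional feature vector (duration, F0 and spectrum likelihoods) and the random labels are assumptions for illustration only:

    import numpy as np
    from sklearn.svm import SVC

    X = np.random.randn(60, 3)            # likelihood features per candidate unit
    y = np.random.randint(0, 2, 60)       # 1 = similar, 0 = dissimilar (manual labels)
    clf = SVC(kernel="poly", degree=2)    # second-order polynomial kernel
    clf.fit(X, y)
    print(clf.predict(X[:5]))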
Step S4, input any text to be synthesized and reject dissimilar candidate units with the similarity classifier, i.e. perform unit selection; for the remaining candidate units, select the best units under the minimum-concatenation-cost principle and splice them into synthetic speech.
As shown in Fig. 6, step S4 further comprises the following steps:
Step S41, input the text to be synthesized and obtain the corresponding acoustic models according to step S22;
Step S42, compute the set of likelihoods of each acoustic parameter of each candidate unit under the current acoustic model according to step S32, and take it as the feature set;
Step S43, input the feature set into the similarity classifier, which predicts whether each unit belongs to the similar class or the dissimilar class;
Step S44, remove all units in the dissimilar class and select among the remaining units under the minimum-concatenation-cost principle; a dynamic-programming sketch of this selection follows.
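Minimum-concatenation-cost selection is naturally a Viterbi-style dynamic program over the surviving candidates of each target syllable. The join cost below, a distance between boundary frames, is an assumed simple form; the patent does not fix its exact definition.

    import numpy as np

    def join_cost(a, b):
        """Distance between the last frame of unit a and the first frame of unit b."""
        return float(np.linalg.norm(a[-1] - b[0]))

    def select_units(candidates):
        """candidates: one list of (frames, dims) arrays per target syllable.
        Returns the index of the chosen unit at each position."""
        n = len(candidates)
        cost = [dict() for _ in range(n)]
        back = [dict() for _ in range(n)]
        for j in range(len(candidates[0])):
            cost[0][j] = 0.0
        for t in range(1, n):
            for j, unit in enumerate(candidates[t]):
                prev = min(cost[t - 1],
                           key=lambda k: cost[t - 1][k] + join_cost(candidates[t - 1][k], unit))
                cost[t][j] = cost[t - 1][prev] + join_cost(candidates[t - 1][prev], unit)
                back[t][j] = prev
        j = min(cost[-1], key=cost[-1].get)       # cheapest final unit
        path = [j]
        for t in range(n - 1, 0, -1):             # trace the best path back
            j = back[t][j]
            path.append(j)
        return list(reversed(path))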
Step S45, apply windowed smoothing to the selected units, obtaining the final synthetic speech.
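One common realization of such smoothing, sketched here under assumptions the patent leaves open (overlap length, window shape), is a Hann-ramp cross-fade at each joint:

    import numpy as np

    def crossfade_concat(units, overlap=80):
        """units: 1-D waveform arrays, each longer than `overlap` samples."""
        fade = 0.5 * (1 - np.cos(np.pi * np.arange(overlap) / overlap))  # rising ramp
        out = units[0]
        for nxt in units[1:]:
            joint = out[-overlap:] * (1 - fade) + nxt[:overlap] * fade
            out = np.concatenate([out[:-overlap], joint, nxt[overlap:]])
        return out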
In summary, the invention proposes a unit selection method for waveform concatenation speech synthesis that can synthesize speech of comparatively high quality.
It should be noted that the above implementations of the individual parts are not limited to the various implementations mentioned in the embodiments; those of ordinary skill in the art can simply replace them with others they are familiar with, for example:
(1) The spectral parameters adopted in training are mel cepstral coefficients; other parameters can be substituted, such as line spectral pair parameters of a different order.
(2) The number of input sentences for classifier training can be increased or decreased as appropriate according to the desired accuracy.
The specific embodiments described above further explain the object, technical scheme and beneficial effects of the invention in detail. It should be understood that the above are only specific embodiments of the invention and are not intended to limit the invention; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the invention shall be included within the protection scope of the invention.

Claims (7)

1. A unit selection method for waveform concatenation speech synthesis, characterized in that the method comprises the following steps:
Step S1, perform hidden-Markov-model training on the original audio extracted from an audio database, obtaining an acoustic model set and the corresponding feature decision trees;
Step S2, input a number of training texts, search said feature decision trees for the relevant acoustic models, and then obtain the corresponding target speech and target syllables;
Step S3, according to the similarity between said target speech and its corresponding candidate units, together with the likelihood of each acoustic parameter of said candidate units under the current acoustic model, train a similarity classifier;
Step S4, input any text to be synthesized, reject dissimilar candidate units with said similarity classifier, select the best units from the remaining candidate units under the minimum-concatenation-cost principle, and splice them into synthetic speech;
wherein said step S2 further comprises the following steps:
Step S21, input training texts balanced over multiple syllables and obtain the corresponding contextual feature sequences through text analysis;
Step S22, input said contextual feature sequence into said feature decision trees, obtaining the acoustic model sequence that matches the current context;
Step S23, based on said acoustic model sequence, obtain target speech parameters with a parameter generation algorithm;
Step S24, based on said target speech parameters, synthesize target sentence speech with a vocoder, and cut said target sentence speech into target syllables;
and said step S3 further comprises the following steps:
Step S31, cut the sentences in said audio database at syllable boundaries, the resulting syllable-sized segments being the candidate units; group identical syllables into one class, thereby building the candidate unit inventory, and assign the spectral and F0 parameters extracted frame by frame in said step S12 to each candidate unit in the inventory;
Step S32, feed the acoustic parameters of each candidate unit corresponding to each said target syllable into the context-dependent acoustic models obtained in said step S22 in turn, computing the probability of each unit's duration, F0 and spectrum under its corresponding acoustic model, and take the set of all these probabilities as the feature set;
Step S33, recruit a number of native Chinese speakers to label the similarity between said target syllables and the candidate units in binary fashion, i.e. similar or dissimilar, and take this result as the class attribute;
Step S34, train the similarity classifier from said class attribute and feature set.
2. The method according to claim 1, characterized in that said step S1 further comprises the following steps:
Step S11, obtain the original audio from the audio database;
Step S12, extract spectral parameters and F0 parameters from said original audio frame by frame;
Step S13, annotate the text corresponding to said original audio synchronously, marking the contextual feature information of each syllable in said original audio, and at the same time produce segment-boundary annotations of said original audio;
Step S14, based on the spectral and F0 parameters of said original audio, the contextual feature annotation, and the segment-boundary annotation, perform conventional hidden Markov model training, obtaining model sets for duration, F0 and spectrum, together with the respective feature decision trees.
3. The method according to claim 2, characterized in that said step S12 further comprises the following steps:
Step S121, apply framing and windowing to said original audio;
Step S122, extract the mel cepstral coefficients of each resulting frame;
Step S123, compute the F0 parameters of each frame.
4. The method according to claim 1, characterized in that said text analysis is the extraction of the features in the text.
5. The method according to claim 1, characterized in that in said step S22, according to the contextual features in said contextual feature sequence, decisions are made on the clustering trees of the duration, F0 and spectral parameters respectively, obtaining the corresponding acoustic model sequence and duration models.
6. The method according to claim 1, characterized in that said target speech parameters comprise F0 and spectral parameters.
7. The method according to claim 1, characterized in that said step S4 further comprises the following steps:
Step S41, input the text to be synthesized and obtain the corresponding acoustic models according to said step S22;
Step S42, compute the set of likelihoods of each acoustic parameter of each unit under the current acoustic model according to said step S32, and take it as the feature set;
Step S43, input said feature set into said similarity classifier, which predicts whether each unit belongs to the similar class or the dissimilar class;
Step S44, remove all units in the dissimilar class and select among the remaining units under the minimum-concatenation-cost principle;
Step S45, apply windowed smoothing to the selected units, obtaining the final synthetic speech.
CN201310481306.9A 2013-10-15 2013-10-15 A unit selection method for waveform concatenation speech synthesis Active CN103531196B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310481306.9A CN103531196B (en) A unit selection method for waveform concatenation speech synthesis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310481306.9A CN103531196B (en) A unit selection method for waveform concatenation speech synthesis

Publications (2)

Publication Number Publication Date
CN103531196A CN103531196A (en) 2014-01-22
CN103531196B (en) 2016-04-13

Family

ID=49933149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310481306.9A Active CN103531196B (en) A unit selection method for waveform concatenation speech synthesis

Country Status (1)

Country Link
CN (1) CN103531196B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104575488A (en) * 2014-12-25 2015-04-29 北京时代瑞朗科技有限公司 Text information-based waveform concatenation voice synthesizing method
WO2017028003A1 (en) * 2015-08-14 2017-02-23 华侃如 Hidden markov model-based voice unit concatenation method
CN105304081A (en) * 2015-11-09 2016-02-03 上海语知义信息技术有限公司 Smart household voice broadcasting system and voice broadcasting method
CN105719641B * 2016-01-19 2019-07-30 百度在线网络技术(北京)有限公司 Unit selection method and apparatus for waveform concatenation speech synthesis
CN105654940B (en) * 2016-01-26 2019-12-24 百度在线网络技术(北京)有限公司 Speech synthesis method and device
CN106356052B * 2016-10-17 2019-03-15 腾讯科技(深圳)有限公司 Speech synthesis method and device
CN106652986B (en) * 2016-12-08 2020-03-20 腾讯音乐娱乐(深圳)有限公司 Song audio splicing method and equipment
CN106970950B (en) * 2017-03-07 2021-08-24 腾讯音乐娱乐(深圳)有限公司 Similar audio data searching method and device
CN107492371A * 2017-07-17 2017-12-19 广东讯飞启明科技发展有限公司 Large-corpus voice database pruning method
CN107507619B (en) * 2017-09-11 2021-08-20 厦门美图之家科技有限公司 Voice conversion method and device, electronic equipment and readable storage medium
CN109147799A * 2018-10-18 2019-01-04 广州势必可赢网络科技有限公司 Speech recognition method, apparatus, device and computer storage medium
CN109686358B (en) * 2018-12-24 2021-11-09 广州九四智能科技有限公司 High-fidelity intelligent customer service voice synthesis method
CN111899715B (en) * 2020-07-14 2024-03-29 升智信息科技(南京)有限公司 Speech synthesis method
CN113011127A (en) * 2021-02-08 2021-06-22 杭州网易云音乐科技有限公司 Text phonetic notation method and device, storage medium and electronic equipment
CN113096650B (en) * 2021-03-03 2023-12-08 河海大学 Acoustic decoding method based on prior probability

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04313034A (en) * 1990-10-16 1992-11-05 Internatl Business Mach Corp <Ibm> Synthesized-speech generating method
CN101178896A * 2007-12-06 2008-05-14 安徽科大讯飞信息科技股份有限公司 Unit selection speech synthesis method based on acoustic statistical models
CN101471071A (en) * 2007-12-26 2009-07-01 中国科学院自动化研究所 Speech synthesis system based on mixed hidden Markov model
CN102496363A (en) * 2011-11-11 2012-06-13 北京宇音天下科技有限公司 Correction method for Chinese speech synthesis tone

Also Published As

Publication number Publication date
CN103531196A (en) 2014-01-22

Similar Documents

Publication Publication Date Title
CN103531196B A unit selection method for waveform concatenation speech synthesis
CN102779508B Voice database generation apparatus and method therefor, and speech synthesis system and method thereof
CN101751922B (en) Text-independent speech conversion system based on HMM model state mapping
CN101178896B Unit selection speech synthesis method based on acoustic statistical models
CN101000765B Speech synthesis method based on prosodic features
CN101064104B Emotional speech generation method based on voice conversion
JP6523893B2 (en) Learning apparatus, speech synthesis apparatus, learning method, speech synthesis method, learning program and speech synthesis program
CN104112444B Waveform concatenation speech synthesis method based on text information
CN108510976A Multilingual mixed speech recognition method
CN107452379B (en) Dialect language identification method and virtual reality teaching method and system
CN101064103B Chinese speech synthesis method and system based on syllable prosodic constraint relations
CN105551071A Method and system for face animation generation driven by text and speech
CN1835075B Speech synthesis method combining natural sample selection and acoustic parameter modeling
CN103632663B HMM-based front-end processing method for Mongolian speech synthesis
CN109346056A Speech synthesis method and device based on deep metric networks
CN106653002A Text live broadcast method and platform
CN109036376A Speech synthesis method for the Minnan (Southern Min) dialect
CN104575488A (en) Text information-based waveform concatenation voice synthesizing method
CN108172211A (en) Adjustable waveform concatenation system and method
CN106297766B Speech synthesis method and system
Mukherjee et al. A bengali hmm based speech synthesis system
Shah et al. Nonparallel emotional voice conversion for unseen speaker-emotion pairs using dual domain adversarial network & virtual domain pairing
Kayte et al. A Marathi Hidden-Markov Model Based Speech Synthesis System
Panda et al. Text-to-speech synthesis with an Indian language perspective
CN104376850A (en) Estimation method for fundamental frequency of Chinese whispered speech

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20170602

Address after: 100094, No. 405-346, 4th floor, Building A, No. 1, Courtyard 2, Yongcheng North Road, Haidian District, Beijing

Patentee after: Beijing Rui Heng Heng Xun Technology Co., Ltd.

Address before: 100190, No. 95 Zhongguancun East Road, Beijing

Patentee before: Institute of Automation, Chinese Academy of Sciences

TR01 Transfer of patent right

Effective date of registration: 20181224

Address after: 100190 Zhongguancun East Road, Haidian District, Beijing

Patentee after: Institute of Automation, Chinese Academy of Sciences

Address before: 100094 No. 405-346, 4th floor, Building A, No. 1, Courtyard 2, Yongcheng North Road, Haidian District, Beijing

Patentee before: Beijing Rui Heng Heng Xun Technology Co., Ltd.

TR01 Transfer of patent right

Effective date of registration: 20190528

Address after: 310019, Room 1105, 11th floor, Building 4, No. 9 Jiuhuan Road, Jianggan District, Hangzhou, Zhejiang

Patentee after: Extreme Element (Hangzhou) Intelligent Technology Co., Ltd.

Address before: 100190 Zhongguancun East Road, Haidian District, Beijing

Patentee before: Institute of Automation, Chinese Academy of Sciences

CP01 Change in the name or title of a patent holder
CP01 Change in the name or title of a patent holder

Address after: 310019, Room 1105, 11th floor, Building 4, No. 9 Jiuhuan Road, Jianggan District, Hangzhou, Zhejiang

Patentee after: Zhongke Extreme Element (Hangzhou) Intelligent Technology Co., Ltd.

Address before: 310019, Room 1105, 11th floor, Building 4, No. 9 Jiuhuan Road, Jianggan District, Hangzhou, Zhejiang

Patentee before: Extreme Element (Hangzhou) Intelligent Technology Co., Ltd.