CN110211582A - Real-time interactive intelligent digital virtual character facial expression driving method and system - Google Patents


Info

Publication number
CN110211582A
CN110211582A (application CN201910467244.3A)
Authority
CN
China
Prior art keywords
voice
voice information
real-time
response
facial expression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910467244.3A
Other languages
Chinese (zh)
Inventor
王全伟 (Wang Quanwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Quantum Power (Shenzhen) Computer Technology Co., Ltd.
Original Assignee
Quantum Power (Shenzhen) Computer Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Quantum Power (Shenzhen) Computer Technology Co., Ltd.
Priority to CN201910467244.3A priority Critical patent/CN110211582A/en
Publication of CN110211582A publication Critical patent/CN110211582A/en
Pending legal-status Critical Current

Classifications

    • G06T 13/00: Animation
    • G10L 15/1822: Parsing for meaning understanding
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 15/26: Speech to text systems
    • G10L 15/30: Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • G10L 21/10: Transforming speech into visible information
    • G10L 25/30: Speech or voice analysis techniques characterised by the use of neural networks
    • G10L 2015/223: Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a real-time interactive intelligent digital virtual character facial expression driving method, in the field of expression capture for virtual characters, comprising the following steps: S1, collecting the user's voice information and outputting it; S2, parsing the collected voice information to obtain the corresponding text information, and performing semantic computation on the text information to obtain a response; S3, converting the obtained response into speech, and converting that speech into expression animation data for driving a virtual character to make the corresponding expression. The beneficial effects of the invention are that it greatly simplifies the generation of expression animation, and can be widely applied in scenarios such as smart speakers, intelligent robots, and chatbots, making such products more human-like, giving them an emotional interaction experience, and allowing the user to interact face-to-face with a virtual character.

Description

Real-time interactive intelligent digital virtual character facial expression driving method and system
Technical field
The present invention relates to the field of expression capture for virtual characters, and specifically to a real-time interactive intelligent digital virtual character facial expression driving method and system.
Background technique
When designing virtual characters in industries such as film and television, services, or games, the expressions of a performer need to be captured and used as the basis for generating the virtual character's facial expressions.
Traditional expression capture systems require the performer to wear dedicated hardware. The hardware captures the performer's expressions through sensors, digitizes the expressions by means of algorithms, and finally transmits the expression parameters to the virtual character, driving its expressions and generating expression animation. This process is extremely complex, which makes it difficult to apply in broader scenarios; in particular, under the current rapid development of smart speakers, intelligent robots, chatbots, and the like, the traditional expression capture approach is difficult to apply.
Based on this, the applicant proposes a real-time interactive intelligent digital virtual character facial expression driving method and system.
Summary of the invention
The purpose of the present invention is to provide a real-time interactive intelligent digital virtual character facial expression driving method, to solve the problems raised in the background section above.
To achieve the above object, the invention provides the following technical scheme:
A real-time interactive intelligent digital virtual character facial expression driving method, comprising the following steps:
S1: collect the user's voice information and output the obtained voice information;
S2: parse the collected voice information to obtain the corresponding text information, and perform semantic computation on the text information to obtain a response;
S3: convert the obtained response into speech, and convert that speech into expression animation data for driving a virtual character to make the corresponding expression.
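The three steps above can be sketched as a minimal pipeline. The following Python stubs are purely illustrative and not part of the claimed invention: the function names, the stand-in ASR/NLP/TTS outputs, and the 52-weight animation frame are all hypothetical placeholders for real services.

```python
# Illustrative sketch of the three-step driving method (S1-S3).
# All names and data shapes are hypothetical stand-ins.

def collect_voice(raw_audio: bytes) -> bytes:
    """S1: collect the user's voice and pass it on (e.g. to the cloud)."""
    return raw_audio

def parse_and_respond(audio: bytes) -> str:
    """S2: ASR -> text, then NLP semantic computation -> response string."""
    text = "hello"              # stand-in for ASR output
    response = f"echo: {text}"  # stand-in for the NLP-computed response
    return response

def synthesize_and_animate(response: str) -> tuple[bytes, list[float]]:
    """S3: TTS -> response speech, then speech -> expression animation data."""
    speech = response.encode("utf-8")  # stand-in for TTS audio
    weights = [0.0] * 52               # stand-in blendshape weights for one frame
    return speech, weights

audio = collect_voice(b"\x00\x01")
reply = parse_and_respond(audio)
speech, frame = synthesize_and_animate(reply)
```

In a real system each stub would be replaced by a call to the corresponding ASR, NLP, or TTS module described below.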
As a further solution of the present invention: in step S1, the collected voice information is output in a wireless or wired manner.
As a further solution of the present invention: in step S2, the voice information is parsed into text information by an ASR module, and that text information undergoes semantic computation in an NLP module to obtain the response.
As a further solution of the present invention: in step S2, the result of the response is output in the form of a character string.
As a further solution of the present invention: in step S3, the result of the response is converted into speech by a TTS module.
As a further solution of the present invention: the parsing of the voice information, the semantic computation, the response, the conversion of the response result, and the acquisition of the expression animation data are all carried out in the cloud.
As a further solution of the present invention: the acquired expression animation data are returned wirelessly or by wire.
A real-time interactive intelligent digital virtual character facial expression driving system comprises a voice collection terminal, a cloud, an ASR module, an NLP module, and a TTS module. The voice collection terminal communicates with the cloud and collects the user's voice information; the ASR module, NLP module, and TTS module are arranged in the cloud, where: the ASR module parses the collected voice information to obtain the text information corresponding to the user's voice information; the NLP module communicates with the ASR module, performs semantic computation on the text information, and obtains the response; and the TTS module communicates with the NLP module and converts the result of the response into speech, which is then converted into expression animation data.
Compared with the prior art, the beneficial effects of the present invention are: the conversion of the user's speech to text is realized by the ASR module; after semantic computation and understanding by the NLP module, the response is converted into speech, and the expression animation data are obtained directly using a pre-trained convolutional neural network model. This greatly simplifies the generation of expression animation, and can be widely applied in scenarios such as smart speakers, intelligent robots, and chatbots, making such products more human-like, giving them an emotional interaction experience, and allowing the user to interact face-to-face with a virtual character.
Detailed description of the invention
Fig. 1 is a structural schematic diagram of a real-time interactive intelligent digital virtual character facial expression driving system.
In the figure: 100 - voice collection terminal; 200 - cloud; 201 - ASR module; 202 - NLP module; 203 - TTS module.
Specific embodiment
Example embodiments are described in detail here, with examples illustrated in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings indicate the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this disclosure; rather, they are merely examples of devices and methods, consistent with some aspects of the disclosure as detailed in the appended claims.
Embodiment 1
Referring to Fig. 1, in an embodiment of the present invention, a real-time interactive intelligent digital virtual character facial expression driving method comprises the following steps:
S1: collect the user's voice information and output the obtained voice information in a wireless or wired manner. Here, the voice information may be collected by devices such as a smart speaker, an intelligent robot, a chatbot, or a microphone;
S2: parse the collected voice information. Preferably, the voice information is parsed by an ASR module to obtain the text information corresponding to the voice information, and semantic computation is performed on the text information to obtain a response. Here, the semantic computation is carried out by an NLP module; once the NLP module completes the semantic computation, the user's semantic information is known, and since existing voice interaction devices all have automatic answering functions, the response corresponding to that semantic information can be obtained from it. Here, for ease of subsequent processing, the response is output in the form of a character string;
S3: convert the obtained response into speech. Preferably, the result of the response is converted into speech by a TTS module; this speech amounts to the response speech delivered to the user. The response speech can then be converted into expression animation data for driving the virtual character to make the corresponding expression; that is, after the user speaks, the virtual character can make the corresponding expression in response to the user.
Specifically, the conversion of the response speech into expression animation data can be realized by means of a convolutional neural network: the response speech is fed into a pre-trained convolutional neural network, and from the resulting expression animation weights the expression animation data can be obtained directly.
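As a toy illustration of this idea (not the patented model): a hand-rolled 1-D convolution over a hypothetical audio-feature envelope, with one kernel per blendshape, global max pooling, and a sigmoid to squash each output into a [0, 1] expression weight. The kernel values, the choice of feature, and the single-layer depth are all stand-ins for whatever the pre-trained network actually learns.

```python
import math

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (really cross-correlation, as in CNNs)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def speech_to_expression_weights(frames, kernels):
    """Map a window of audio features to per-blendshape activation weights.

    frames  : per-frame audio features (e.g. energies) -- hypothetical
    kernels : one learned 1-D kernel per blendshape    -- hypothetical
    """
    weights = []
    for kernel in kernels:
        feature_map = conv1d(frames, kernel)
        pooled = max(feature_map)        # global max pooling
        weights.append(sigmoid(pooled))  # squash to a [0, 1] blendshape weight
    return weights

frames = [0.1, 0.9, 0.4, 0.7, 0.2]   # toy audio energy envelope
kernels = [[1.0, -1.0], [0.5, 0.5]]  # two toy "trained" kernels
w = speech_to_expression_weights(frames, kernels)
```

A production model would operate on spectral features (e.g. mel filterbanks) and emit one weight vector per animation frame rather than a single pooled vector.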
Preferably, since collecting the user's voice information is easy to achieve in practical applications, but the subsequent processing of the voice information to obtain the corresponding expression animation data requires a large amount of computation, in this embodiment the parsing of the voice information, the semantic computation, the response, the conversion of the response result, and the acquisition of the expression animation data are all carried out in the cloud. The obtained voice information is output to the cloud in a wireless or wired manner; after the computation and other processing are completed in the cloud, the obtained expression animation data are passed back wirelessly or by wire.
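The cloud round trip just described can be sketched as a pair of functions exchanging JSON messages. The message fields, the stub ASR/NLP/CNN outputs, and the direct function call standing in for the network hop are all hypothetical.

```python
import json

def cloud_process(request_json: str) -> str:
    """Cloud side: ASR parse -> NLP response -> TTS -> expression data.
    The heavy computation lives here; stubs stand in for the real modules."""
    request = json.loads(request_json)
    text = "hello"                        # ASR stub applied to request["audio"]
    response = "hi there"                 # NLP semantic-computation stub
    animation = [[0.2, 0.8], [0.4, 0.6]]  # CNN stub: frames of blendshape weights
    return json.dumps({"reply": response, "animation": animation})

def device_round_trip(audio_b64: str) -> dict:
    """Device side: send the captured audio, receive expression animation data."""
    request = json.dumps({"audio": audio_b64})
    return json.loads(cloud_process(request))  # stand-in for a network call

result = device_round_trip("AAEC")
```

Splitting the work this way keeps the device-side logic to capture and playback only, which matches the embodiment's motivation of offloading the expensive computation.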
Embodiment 2
Referring to Fig. 1, in an embodiment of the present invention, a real-time interactive intelligent digital virtual character facial expression driving system comprises a voice collection terminal 100, a cloud 200, an ASR module 201, an NLP module 202, and a TTS module 203. In this embodiment, the voice collection terminal 100 communicates with the cloud 200 and collects the user's voice information;
The ASR module 201, the NLP module 202, and the TTS module 203 are arranged in the cloud 200, where:
The ASR module 201 parses the collected voice information to obtain the text information corresponding to the user's voice information;
The NLP module 202 communicates with the ASR module 201, performs semantic computation on the text information, and obtains the response;
The TTS module 203 communicates with the NLP module 202 and converts the result of the response into speech, which is then converted into expression animation data.
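The module wiring of this embodiment can be sketched as plain Python classes mirroring modules 201 to 203 inside the cloud 200. The interfaces and stub outputs below are hypothetical illustrations, not the patented implementation.

```python
class ASRModule:                       # cf. ASR module 201
    def parse(self, audio: bytes) -> str:
        return "hello"                 # stub: speech -> text

class NLPModule:                       # cf. NLP module 202
    def respond(self, text: str) -> str:
        return f"reply to: {text}"     # stub: semantic computation -> response

class TTSModule:                       # cf. TTS module 203
    def synthesize(self, response: str) -> bytes:
        return response.encode()       # stub: response text -> speech

class Cloud:                           # cf. cloud 200
    def __init__(self):
        self.asr, self.nlp, self.tts = ASRModule(), NLPModule(), TTSModule()

    def handle(self, audio: bytes) -> bytes:
        """ASR -> NLP -> TTS, in the order the modules communicate."""
        text = self.asr.parse(audio)
        response = self.nlp.respond(text)
        return self.tts.synthesize(response)

speech = Cloud().handle(b"...")
```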
It should be particularly noted that in this technical scheme the conversion of the user's speech to text is realized by the ASR module 201; after semantic computation and understanding by the NLP module 202, the response is converted into speech, and the expression animation data are obtained directly using a pre-trained convolutional neural network model. This greatly simplifies the generation of expression animation, and can be widely applied in scenarios such as smart speakers, intelligent robots, and chatbots, making such products more human-like, giving them an emotional interaction experience, and allowing the user to interact face-to-face with a virtual character.
Those skilled in the art will readily conceive of other embodiments of the disclosure after considering the specification and practicing the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be considered illustrative only, with the true scope and spirit of the disclosure indicated by the claims.
It should be understood that the present disclosure is not limited to the precise structures that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (8)

1. A real-time interactive intelligent digital virtual character facial expression driving method, characterized by comprising the following steps:
S1: collect the user's voice information and output the obtained voice information;
S2: parse the collected voice information to obtain the corresponding text information, and perform semantic computation on the text information to obtain a response;
S3: convert the obtained response into speech, and convert that speech into expression animation data for driving a virtual character to make the corresponding expression.
2. The real-time interactive intelligent digital virtual character facial expression driving method according to claim 1, characterized in that in step S1, the collected voice information is output in a wireless or wired manner.
3. The real-time interactive intelligent digital virtual character facial expression driving method according to claim 1, characterized in that in step S2, the voice information is parsed into text information by an ASR module, and that text information undergoes semantic computation in an NLP module to obtain the response.
4. The real-time interactive intelligent digital virtual character facial expression driving method according to claim 3, characterized in that in step S2, the result of the response is output in the form of a character string.
5. The real-time interactive intelligent digital virtual character facial expression driving method according to claim 1, characterized in that in step S3, the result of the response is converted into speech by a TTS module.
6. The real-time interactive intelligent digital virtual character facial expression driving method according to claim 1, characterized in that the parsing of the voice information, the semantic computation, the response, the conversion of the response result, and the acquisition of the expression animation data are carried out in the cloud.
7. The real-time interactive intelligent digital virtual character facial expression driving method according to claim 1, characterized in that the acquired expression animation data are returned wirelessly or by wire.
8. A real-time interactive intelligent digital virtual character facial expression driving system, characterized by comprising:
a voice collection terminal (100), communicating with a cloud (200), for collecting the user's voice information;
the cloud (200), comprising:
an ASR module (201), for parsing the collected voice information to obtain the text information corresponding to the user's voice information;
an NLP module (202), communicating with the ASR module (201), for performing semantic computation on the text information and obtaining the response;
a TTS module (203), communicating with the NLP module (202), for converting the result of the response into speech, which is then converted into expression animation data.
CN201910467244.3A 2019-05-31 2019-05-31 Real-time interactive intelligent digital virtual character facial expression driving method and system Pending CN110211582A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910467244.3A CN110211582A (en) 2019-05-31 2019-05-31 Real-time interactive intelligent digital virtual character facial expression driving method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910467244.3A CN110211582A (en) 2019-05-31 2019-05-31 Real-time interactive intelligent digital virtual character facial expression driving method and system

Publications (1)

Publication Number Publication Date
CN110211582A 2019-09-06

Family

ID=67789832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910467244.3A Pending CN110211582A (en) Real-time interactive intelligent digital virtual character facial expression driving method and system

Country Status (1)

Country Link
CN (1) CN110211582A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111063339A (en) * 2019-11-11 2020-04-24 珠海格力电器股份有限公司 Intelligent interaction method, device, equipment and computer readable medium
CN111063024A (en) * 2019-12-11 2020-04-24 腾讯科技(深圳)有限公司 Three-dimensional virtual human driving method and device, electronic equipment and storage medium
CN111292743A (en) * 2020-01-22 2020-06-16 北京松果电子有限公司 Voice interaction method and device and electronic equipment
CN112182173A (en) * 2020-09-23 2021-01-05 支付宝(杭州)信息技术有限公司 Human-computer interaction method and device based on virtual life and electronic equipment
CN112215927A (en) * 2020-09-18 2021-01-12 腾讯科技(深圳)有限公司 Method, device, equipment and medium for synthesizing face video
CN113506360A (en) * 2021-07-12 2021-10-15 北京顺天立安科技有限公司 Virtual character expression driving method and system
CN114035678A (en) * 2021-10-26 2022-02-11 山东浪潮科学研究院有限公司 Auxiliary judgment method based on deep learning and virtual reality

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1988493A1 (en) * 2007-04-30 2008-11-05 National Taiwan University of Science and Technology Robotic system and method for controlling the same
CN106485774A (en) * 2016-12-30 2017-03-08 当家移动绿色互联网技术集团有限公司 Expression based on voice Real Time Drive person model and the method for attitude
CN108230438A (en) * 2017-12-28 2018-06-29 清华大学 The facial reconstruction method and device of sound driver secondary side face image
CN108877797A (en) * 2018-06-26 2018-11-23 上海早糯网络科技有限公司 Actively interactive intelligent voice system
CN109240564A (en) * 2018-10-12 2019-01-18 武汉辽疆科技有限公司 Artificial intelligence realizes the device and method of interactive more plot animations branch
CN109712627A (en) * 2019-03-07 2019-05-03 深圳欧博思智能科技有限公司 It is a kind of using speech trigger virtual actor's facial expression and the voice system of mouth shape cartoon


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHIGEO MORISHIMA, "Face Analysis and Synthesis", IEEE Signal Processing Magazine *
CHEN Yiqiang et al., "Speech-driven face animation method based on machine learning" (in Chinese), Journal of Software *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111063339A (en) * 2019-11-11 2020-04-24 珠海格力电器股份有限公司 Intelligent interaction method, device, equipment and computer readable medium
CN111063024A (en) * 2019-12-11 2020-04-24 腾讯科技(深圳)有限公司 Three-dimensional virtual human driving method and device, electronic equipment and storage medium
CN111292743A (en) * 2020-01-22 2020-06-16 北京松果电子有限公司 Voice interaction method and device and electronic equipment
CN111292743B (en) * 2020-01-22 2023-09-26 北京小米松果电子有限公司 Voice interaction method and device and electronic equipment
CN112215927A (en) * 2020-09-18 2021-01-12 腾讯科技(深圳)有限公司 Method, device, equipment and medium for synthesizing face video
CN112215927B (en) * 2020-09-18 2023-06-23 腾讯科技(深圳)有限公司 Face video synthesis method, device, equipment and medium
CN112182173A (en) * 2020-09-23 2021-01-05 支付宝(杭州)信息技术有限公司 Human-computer interaction method and device based on virtual life and electronic equipment
CN113506360A (en) * 2021-07-12 2021-10-15 北京顺天立安科技有限公司 Virtual character expression driving method and system
CN114035678A (en) * 2021-10-26 2022-02-11 山东浪潮科学研究院有限公司 Auxiliary judgment method based on deep learning and virtual reality

Similar Documents

Publication Publication Date Title
CN110211582A (en) Real-time interactive intelligent digital virtual character facial expression driving method and system
CN106294854B (en) Man-machine interaction method and device for intelligent robot
CN110413841A (en) Polymorphic exchange method, device, system, electronic equipment and storage medium
CN105141587B (en) A kind of virtual puppet interactive approach and device
CN107728780A (en) A kind of man-machine interaction method and device based on virtual robot
CN110070065A (en) The sign language systems and the means of communication of view-based access control model and speech-sound intelligent
CN107181818A (en) Robot remote control and management system and method based on cloud platform
CN107784355A (en) The multi-modal interaction data processing method of visual human and system
CN106547884A (en) A kind of behavior pattern learning system of augmentor
CN101808047A (en) Instant messaging partner robot and instant messaging method with messaging partner
CN107038241A (en) Intelligent dialogue device and method with scenario analysis function
WO2024011903A1 (en) Video generation method and apparatus, and computer-readable storage medium
CN106997243A (en) Speech scene monitoring method and device based on intelligent robot
JP2011186521A (en) Emotion estimation device and emotion estimation method
CN107645523A (en) A kind of method and system of mood interaction
CN116009748B (en) Picture information interaction method and device in children interaction story
CN105957129A (en) Television animation manufacturing method based on speech driving and image recognition
CN108052250A (en) Virtual idol deductive data processing method and system based on multi-modal interaction
CN109917917A (en) A kind of visual human's interactive software bus system and its implementation
CN106557165B (en) The action simulation exchange method and device and smart machine of smart machine
CN109343695A (en) Exchange method and system based on visual human's behavioral standard
CN106653020A (en) Multi-business control method and system for smart sound and video equipment based on deep learning
CN114882861A (en) Voice generation method, device, equipment, medium and product
US20200412773A1 (en) Method and apparatus for generating information
CN106056503A (en) Intelligent music teaching platform and application method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20190906)