CN117636840A - Audio data acquisition and personalized voice training and reasoning method

Audio data acquisition and personalized voice training and reasoning method

Info

Publication number
CN117636840A
Application number
CN202311613575.6A
Authority
CN
Other languages
Chinese (zh)
Prior art keywords
text, user, audio, model, training
Inventor
南晓杰
Assignee
Beiyin Financial Technology Co ltd
Filing date
2023-11-29
Publication date
2024-03-01
Legal status
Pending


Abstract

The invention provides an audio data acquisition and personalized voice training and reasoning method, which comprises the following steps: deploying a personalized speech synthesis environment; constructing a user organization architecture; deploying data acquisition and algorithm service; audio data acquisition and audio text alignment; text preprocessing and audio preprocessing; model training and evaluation; and model reasoning and management. The method avoids data leakage, improves the user's recording efficiency, and further improves data quality.

Description

Audio data acquisition and personalized voice training and reasoning method
Technical Field
The invention relates to the technical field of speech synthesis, and in particular to an audio data acquisition and personalized voice training and reasoning method.
Background
With the development of artificial intelligence technology, speech synthesis, and in particular personalized speech synthesis, has made great progress. Personalized speech synthesis is now widely applied in fields such as digital humans, intelligent customer service, and personalized voice navigation, bringing great economic benefit to society. However, personalized speech synthesis is currently offered mainly as paid commercial services; if hundreds or thousands of people in a company need large amounts of personalized speech generation, this tends to place a heavy economic burden on the enterprise. To enable enterprise staff to generate personalized speech quickly, the invention provides a scheme that can be privately deployed, is simple to operate, saves GPU server resources, and supports personalized speech synthesis at scale.
Personalized speech synthesis requires processes such as personal speech data acquisition, text alignment, model training, and model reasoning. Personal speech data can be acquired with recording equipment such as a mobile phone or computer and then manually packaged and handed to modeling personnel; alternatively, with an app developed by a vendor, users only need to read given text, and when finished the data is automatically delivered to the vendor, which raises the problem of data leakage. Text alignment currently has two main schemes. In the first, a text is given, the user reads it aloud, and the text is assumed to correspond to the speech; this ignores the possibility that the user misreads, misses, or adds words, which degrades data quality. In the second, the user directly provides about 30 minutes of audio and a speech recognition algorithm transcribes the text; but recognition is not 100% accurate, which introduces data errors, and in addition the long audio must be sliced into segments of at most 20 seconds for model training, and slicing can lose some syllables at the cut points. Model training can be for a single speaker or for multiple speakers; most commercial services train single-speaker models, and if the user group is large this training mode consumes more resources. Model reasoning is mostly single-speaker; the invention fuses single-speaker and multi-speaker reasoning and improves the degree of freedom of personalized speech synthesis reasoning.
1. Personalized speech synthesis data acquisition
The defects of the prior schemes include: 1) manual recording is time-consuming and labor-intensive: after recording, the audio is packaged and sent to a model developer for quality checking, and if unqualified it must be re-recorded; 2) recording with vendor software guarantees recording quality but carries the risk of data leakage: personal voice data is exposed, and even though the user made the recordings, the user cannot export them.
By deploying the data acquisition system and an environmental noise detection system inside the enterprise, the invention can conveniently build an in-house voice database and organize the voice data according to the organization structure, making it convenient for a department or project group to carry out personalized voice training and generation simultaneously.
2. Recording and text alignment
The prior schemes have the following defects: 1) Fixed templates: if a pre-prepared text template is given, the user reads it aloud and misreading or missed reading can occur; without correction techniques, the speech easily fails to align with the text, while overly rigid correction techniques cost the user a long time to correct. 2) Inaccurate transcription: when the user only uploads a recording, if the recording quality is low, existing speech recognition algorithms cannot transcribe it with 100% accuracy, introducing text errors. 3) Syllable loss: when the user only uploads a recording, audio longer than 20 s must be sliced, and at the cut points some syllables are clipped and lost.
3. Model training and model management
The prior schemes have the following defects: 1) Training mode: a singly purchased model service often supports only single-speaker training, multiple people cannot train simultaneously, and a large amount of GPU resources is occupied for a long time along with large disk space. 2) Model management: a user may be involved in several business lines, each requiring a different voice, so the user records sounds in different styles to meet different needs; existing schemes cannot manage these personal speech synthesis models well, and the synthesized timbre is monotonous.
Disclosure of Invention
In view of the above, the present invention provides an audio data acquisition and personalized speech training and reasoning method that overcomes, or at least partially solves, the above-mentioned problems.
According to one aspect of the present invention, there is provided an audio data acquisition and personalized speech training and reasoning method, the reasoning method comprising:
deploying a personalized speech synthesis environment;
constructing a user organization architecture;
deploying data acquisition and algorithm service;
audio data acquisition and audio text alignment;
text preprocessing and audio preprocessing;
model training and evaluation;
model reasoning and management.
Optionally, the deployment of the personalized speech synthesis environment specifically includes:
according to the requirements, configuring different servers for model training and reasoning;
the data acquisition and algorithm services need a software environment: if the algorithms are built on a Python environment, specific dependency packages are installed, and ffmpeg is installed for speech so that audio can be conveniently resampled and format-converted.
Optionally, the constructing of the user organization architecture specifically includes:
an administrator can conveniently manage member users' rights, including whether a user is allowed to invite others for multi-person personalized speech synthesis;
a user can record different voice data to meet different service requirements and share the voice data for others to use.
Optionally, the deployment data acquisition and algorithm service specifically includes:
data acquisition is performed on the PC end and the mobile end in web form and is quickly implemented with Python's gradio or streamlit front-end application packages;
the algorithm services include environmental noise detection, sound intensity measurement algorithms, speech recognition, text comparison, and speech synthesis techniques; the algorithm services are deployed and integration-tested.
Optionally, the audio data collection and the audio text alignment specifically include:
entering the data acquisition page, operable on both the PC end and the mobile end;
detecting environmental noise: after the user enters the collection page, environmental noise is detected first, and if it exceeds the noise threshold, the user is asked to find a quiet environment to record;
selecting an acquisition mode: the user only needs to read aloud the given text template, which is simple and convenient, and audio acquisition in this mode is recommended;
text comparison and alignment: acquisition is performed according to the text template, and each time the user finishes reading a passage, the sound intensity measurement algorithm and the speech recognition algorithm are called;
the sound intensity measurement algorithm is used to judge whether the user's recording meets the requirement, and if the sound is too low, the user is prompted to increase the volume;
the speech recognition algorithm is used to check whether the user's pronunciation is aligned with the text and whether misreading, missed reading, or extra reading occurred, and to point this out to the user;
the user adjusts according to the fed-back text content and may also choose to read the passage again to correct the previous pronunciation.
Optionally, the text preprocessing and the audio preprocessing specifically include:
before being fed into the model, the text undergoes regularization and phoneme, tone, and pause conversion;
audio preprocessing.
Optionally, the model training and evaluating specifically includes:
the user may select an open-source or self-developed model for model training;
note that the training modes are single-speaker and multi-speaker, which facilitates subsequent model management;
the speech synthesis model is evaluated using the subjective MOS metric;
pronunciation of polyphonic characters, numbers, and English is also evaluated.
The invention provides an audio data acquisition and personalized voice training and reasoning method, which comprises the following steps: deploying a personalized speech synthesis environment; constructing a user organization architecture; deploying data acquisition and algorithm service; audio data acquisition and audio text alignment; text preprocessing and audio preprocessing; model training and evaluation; and model reasoning and management. The method avoids data leakage, improves the user's recording efficiency, and further improves data quality.
The foregoing is merely an overview of the technical solution of the present invention. To make the technical means of the invention clearer and implementable in accordance with the contents of the description, and to make the above and other objects, features, and advantages of the invention more apparent, specific embodiments of the invention are set forth below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; for a person skilled in the art, other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a flow chart of an audio data collection and personalized speech training and reasoning method provided by an embodiment of the invention;
FIG. 2 is a logic diagram of a hardware configuration according to an embodiment of the present invention;
FIG. 3 is a flow chart of audio data collection and audio text alignment provided by an embodiment of the present invention;
fig. 4 is a flowchart of text comparison and alignment provided in an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terms "comprising" and "having" and any variations thereof in the description embodiments of the invention and in the claims and drawings are intended to cover a non-exclusive inclusion, such as a series of steps or elements.
The technical scheme of the invention is further described in detail below with reference to the accompanying drawings and the examples.
As shown in fig. 1, the audio data acquisition and personalized voice training and reasoning method includes: deploying a personalized speech synthesis environment; constructing a user organization architecture; deploying data acquisition and algorithm service; audio data acquisition and audio text alignment; text preprocessing and audio preprocessing; model training and evaluation; and model reasoning and management.
Personalized speech synthesis environment deployment
Hardware configuration
Different servers are configured for model training and reasoning, as required. If personalized voice generation is only used offline and the number of users is small, one GPU server (for example a graphics card with 20 GB or more of memory, such as an RTX 3090, or an A100-40G) can meet the requirement. If the real-time requirement is high, for example personalized voice customer service that must respond quickly to customer feedback, a highly configured inference server is required. The following is a sample personalized-speech hardware configuration that can support 20 customer service agents, each running 3 intelligent outbound calls concurrently; the user can adjust the servers according to actual conditions.
the hardware configuration logic diagram is shown in fig. 2.
Software environment configuration
The data acquisition and algorithm services need a specific software environment: if the algorithms are built on a Python environment, specific dependency packages need to be installed, and ffmpeg needs to be installed for speech processing so that audio can be conveniently resampled, format-converted, and so on.
Building user organization architecture
This mainly makes it convenient for administrators to manage member users' rights, including whether a user is allowed to invite others for multi-person personalized speech synthesis; this operation can save a large amount of GPU service resources. A user can record different voice data to meet different service requirements and can share the voice data for others to use.
Deploying data acquisition and algorithm services
Data acquisition can be performed on the PC end and the mobile end in web form, so the user does not need to download an app or other software; it can be quickly implemented with Python's gradio or streamlit front-end application packages. The algorithm services include environmental noise detection, sound intensity measurement, speech recognition, text comparison, speech synthesis, and so on. This step requires deploying and integration-testing the algorithm services used. A sketch of such a collection page follows.
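As one illustration, here is a minimal sketch of such a web collection page, assuming the gradio package is chosen; the function body and labels are hypothetical placeholders for the noise, intensity, recognition, and comparison checks described below:

    import gradio as gr

    def collect(audio_path, template_text):
        # Placeholder: the real service would run noise detection, sound
        # intensity measurement, speech recognition, and text comparison
        # on the uploaded recording before accepting it.
        if audio_path is None:
            return "no audio received"
        return f"received recording for template: {template_text[:20]}..."

    demo = gr.Interface(
        fn=collect,
        inputs=[gr.Audio(type="filepath", label="recording"),
                gr.Textbox(label="template text")],
        outputs=gr.Textbox(label="feedback"),
    )
    # launch() serves a web page reachable from PC and mobile browsers,
    # so users do not need to download an app.
    demo.launch(server_name="0.0.0.0")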
Environmental noise detection can use Python's pyaudio package for audio stream acquisition and decibel calculation.
The sound intensity measurement algorithm is similar to the environmental noise detection technique and can be computed with the same audio package; it is used to judge whether the volume of the user's recording meets the requirement.
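A minimal sketch of the decibel computation shared by the noise and intensity checks, assuming pyaudio with a 16-bit mono stream; the threshold value is a hypothetical example:

    import math

    import numpy as np
    import pyaudio

    RATE, CHUNK = 16000, 1024
    NOISE_THRESHOLD_DB = -30.0  # hypothetical limit, relative to full scale

    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                     input=True, frames_per_buffer=CHUNK)

    # Read one chunk and compute its level in dBFS (0 dBFS is the maximum
    # 16-bit amplitude; quieter signals give more negative values).
    data = stream.read(CHUNK)
    samples = np.frombuffer(data, dtype=np.int16).astype(np.float64)
    rms = math.sqrt(np.mean(samples ** 2))
    db = 20 * math.log10(max(rms, 1e-9) / 32768.0)

    if db > NOISE_THRESHOLD_DB:
        print("environment too noisy, please record in a quieter place")

    stream.stop_stream()
    stream.close()
    pa.terminate()

For the intensity check, the same level is compared against a lower bound instead, prompting the user to increase the volume if the recording is too quiet.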
Speech recognition may be performed using a whisper algorithm.
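A minimal sketch using the open-source openai-whisper package, one possible implementation of the whisper algorithm named above; the model size and file name are example choices:

    import whisper

    model = whisper.load_model("base")  # larger models trade speed for accuracy
    result = model.transcribe("recording.wav", language="zh")
    recognized_text = result["text"]
    print(recognized_text)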
Text comparison means comparing the text recognized from the speech with the template text to see whether the user's recording contains misreading, missed reading, or extra reading, and feeding the result back to the user.
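One possible implementation of this comparison, an assumption rather than the patent's specified algorithm, uses Python's standard difflib to classify differences into the three phenomena:

    import difflib

    def compare_texts(template: str, recognized: str):
        """Report misread, missed, and extra characters relative to the template."""
        issues = []
        matcher = difflib.SequenceMatcher(None, template, recognized)
        for tag, i1, i2, j1, j2 in matcher.get_opcodes():
            if tag == "replace":   # misread: template characters read as something else
                issues.append(("misread", template[i1:i2], recognized[j1:j2]))
            elif tag == "delete":  # missed: template characters not read at all
                issues.append(("missed", template[i1:i2], ""))
            elif tag == "insert":  # extra: characters read beyond the template
                issues.append(("extra", "", recognized[j1:j2]))
        return issues

An empty result means the recognized text matches the template and the text-recording pair can be accepted.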
Speech synthesis means training on the user's recorded and processed text-audio pairs to obtain a speech synthesis model, after which speech can be synthesized from any text the user inputs. Timbre replacement is also possible with prior-art techniques.
Audio data acquisition and audio text alignment
This step is illustrated in flowchart form in fig. 3 and proceeds as follows:
entering a data acquisition page
In web form, the data acquisition page is simple; it can be operated on both the PC end and the mobile end, and no app download is needed.
Ambient noise detection
After the user enters the acquisition page, environmental noise is detected first; if it exceeds the noise threshold, the user is asked to find a quiet environment to record. Although noise reduction algorithms can reduce noise, it is preferable to control the quality of the recorded data at the source.
Selecting an acquisition mode
In template mode, the user only needs to read aloud the given text template, which is simple and convenient; audio acquisition in this mode is recommended.
In upload mode, the user uploads recordings of at most 20 s each and chooses the text themselves, which is more flexible.
Text comparison and alignment
Acquisition is performed according to the text template; each time the user finishes reading a passage, the sound intensity measurement algorithm and the speech recognition algorithm are called. The sound intensity measurement algorithm judges whether the user's recording meets the requirement; if the sound is too low, the user is prompted to increase the volume. The speech recognition algorithm checks whether the user's pronunciation is aligned with the text and whether misreading, missed reading, or extra reading occurred, and points this out to the user. The user can adjust according to the fed-back text content, or choose to read the passage again to correct the previous pronunciation, as shown in fig. 4.
In this example, the given text is "Then I won't disturb you; thank you for taking the call, wish you a pleasant life, goodbye." After the user records it, the speech recognition algorithm recognizes "won't disturb you, thank you for taking the call, wish you a pleasant life". By comparing the two texts, it can be judged which words the user misread, missed, or read extra, and the aligned comparison result is fed back to the user. The user may choose to edit the text to correct the speech recognition result, in which case the text corresponding to the recording in the user's backend is replaced accordingly, completing the one-to-one pairing of text and recording. If the user instead chooses to re-record, the recording is redone, and the acquisition of this text-recording pair is complete once the speech recognition result matches the template text.
The acquisition mode in which the user uploads audio clips of at most 20 s one by one is more flexible. After upload, the recording's noise and sound intensity are checked first; once these meet the requirements, a speech recognition algorithm transcribes the text, and when the recognized text is inconsistent with the recording, the user can edit and modify it before uploading, completing the acquisition of one text-recording pair.
Completion of data acquisition
After the user finishes data acquisition, corresponding files are generated in the user's directory, with recordings and texts in one-to-one correspondence, which facilitates model training.
Text pre-processing and audio pre-processing
Text preprocessing
Before being fed into the model, the text undergoes regularization, phoneme conversion, tone conversion, pause conversion, and so on. Text regularization includes replacing special punctuation marks, special symbols, and numbers (e.g., converting Arabic numerals to their spoken form) in the text, facilitating the extraction of phonemes, tones, and pauses.
Phoneme extraction refers to splitting words into phonemes. For example, "一条彩虹挂在天空" ("a rainbow hangs in the sky") decomposes into the phonemes "y i t iao c ai h ong g ua z ai t ian k ong".
Tone refers to the tone corresponding to each extracted phoneme; for the example above, the tone coding is "4 4 2 2 3 3 2 2 4 4 4 4 1 1 1 1".
Pause refers to coding the positions where the text pauses, so that the model can learn when to pause and for how long. In the example above, only the head and tail positions need pause marks; in the model, different punctuation marks carry corresponding pause-duration marks.
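A minimal sketch of this phoneme and tone decomposition, assuming the pypinyin package (the patent does not name a specific tool); note that tone sandhi, such as "一" being coded with tone 4 in the example above, is not handled by this basic sketch:

    from pypinyin import lazy_pinyin, Style

    text = "一条彩虹挂在天空"

    # Initials (consonant onsets) and finals with a tone digit appended.
    initials = lazy_pinyin(text, style=Style.INITIALS, strict=False)
    finals = lazy_pinyin(text, style=Style.FINALS_TONE3, strict=False)

    phonemes, tones = [], []
    for ini, fin in zip(initials, finals):
        tone = fin[-1] if fin and fin[-1].isdigit() else "5"  # 5 = neutral tone
        fin = fin.rstrip("12345")
        for unit in (ini, fin):
            if unit:  # some syllables have no initial
                phonemes.append(unit)
                tones.append(tone)

    print(" ".join(phonemes))  # y i t iao c ai h ong g ua z ai t ian k ong
    print(" ".join(tones))     # 1 1 2 2 3 3 2 2 4 4 4 4 1 1 1 1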
Audio pre-processing
Some model training and fine-tuning requires a specific sampling rate, such as 44100 Hz, in which case the recorded audio must be kept at a consistent sampling rate. Some recordings are relatively noisy and may also require noise reduction. If a recording exceeds 20 s, audio segmentation is needed to prevent memory overflow caused by feeding overly large files into the model.
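A minimal sketch of the resampling and segmentation steps, invoking ffmpeg from Python; the file names are hypothetical, and the 20 s limit follows the text above:

    import subprocess

    def resample(in_path: str, out_path: str, rate: int = 44100) -> None:
        """Convert a recording to mono WAV at the target sampling rate."""
        subprocess.run(
            ["ffmpeg", "-y", "-i", in_path, "-ar", str(rate), "-ac", "1", out_path],
            check=True,
        )

    def segment(in_path: str, out_pattern: str = "chunk_%03d.wav",
                max_seconds: int = 20) -> None:
        """Split audio longer than max_seconds into fixed-length chunks."""
        subprocess.run(
            ["ffmpeg", "-y", "-i", in_path, "-f", "segment",
             "-segment_time", str(max_seconds), "-c", "copy", out_pattern],
            check=True,
        )

    resample("raw_recording.wav", "clean_44100.wav")
    segment("clean_44100.wav")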
Model training and evaluation
The currently widely used speech synthesis models include FastSpeech2, Tacotron2, VITS, and the like; the user can select an open-source or self-developed model for training. Note that the training modes are single-speaker and multi-speaker, which facilitates subsequent model management. The speech synthesis model is typically evaluated with the subjective MOS metric. In addition, the pronunciation of polyphonic characters, numbers, and English is evaluated, as these are scenarios actually encountered in real business.
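As a small illustration of the MOS metric mentioned above, listeners rate synthesized clips on a 1 to 5 scale and the ratings are averaged; the scores below are hypothetical:

    import statistics

    # Hypothetical 1-5 ratings from five listeners for one synthesized clip.
    ratings = [4, 5, 4, 3, 4]

    mos = statistics.mean(ratings)
    spread = statistics.stdev(ratings)
    print(f"MOS = {mos:.2f} (stdev {spread:.2f})")  # MOS = 4.00 (stdev 0.71)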
Model reasoning and management
After model training is completed and the model is evaluated as usable, model reasoning and management work begins. If the trained model is a multi-speaker model, other speakers' id information must be shielded from the user; only after a request is authorized can the user use another speaker's voice. If model reasoning needs to be real-time, the speech synthesis service generally needs streaming processing so that it can respond promptly. The user can organize and manage the synthesis models according to service requirements, for example with an access check like the sketch below.
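A minimal sketch of shielding multi-speaker ids behind an authorization check; the model interface and the ACL structure here are assumptions, not the patent's API:

    from typing import Dict, Set

    def synthesize(user_id: str, speaker_id: str, text: str,
                   model, acl: Dict[str, Set[str]]) -> bytes:
        """Synthesize text with a multi-speaker model; a user may use another
        speaker's voice only after authorization has been granted."""
        if speaker_id != user_id and speaker_id not in acl.get(user_id, set()):
            raise PermissionError(
                f"user {user_id} is not authorized for speaker {speaker_id}")
        # model.tts is a placeholder for whatever inference call the deployed
        # single- or multi-speaker model exposes.
        return model.tts(text, speaker=speaker_id)

    # Example ACL: user_b has granted user_a the use of their voice.
    acl = {"user_a": {"user_b"}}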
Beneficial effects: with the personalized speech data acquisition and speech-text alignment techniques of this method, data leakage can be avoided, the user's recording efficiency can be improved, and data quality can be further improved.
Combining the model service with the user organization architecture satisfies one-time training and management of multi-speaker models and saves GPU service resources, with an integrated flow from data acquisition to model reasoning.
Personalized voice data acquisition tends to be of uneven quality and easily leads to personal voice data leakage; the privately deployed scheme and environmental noise detection algorithm provided by the invention help users reduce the number of repeated recordings and avoid data leakage.
The invention integrates template reading and user audio upload, and, combined with the speech recognition algorithm, greatly improves the alignment effect.
Existing personalized speech synthesis models have a single training mode and rarely support simultaneous training of multi-speaker models. In addition, models trained on one person's multiple-timbre data are not well managed, so the synthesized timbre available to the user is monotonous. The invention brings in the organization architecture, which makes it convenient both to initiate multi-user speech synthesis requests and to manage users' timbres.
The foregoing detailed description of the invention has been presented for purposes of illustration and description, and it should be understood that the invention is not limited to the particular embodiments disclosed, but is intended to cover all modifications, equivalents, alternatives, and improvements within the spirit and principles of the invention.

Claims (7)

1. An audio data acquisition and personalized speech training and reasoning method, characterized in that the reasoning method comprises the following steps:
deploying a personalized speech synthesis environment;
constructing a user organization architecture;
deploying data acquisition and algorithm service;
audio data acquisition and audio text alignment;
text preprocessing and audio preprocessing;
model training and evaluation;
model reasoning and management.
2. The method for audio data collection and personalized speech training and reasoning according to claim 1, wherein the personalized speech synthesis environment deployment specifically comprises:
according to the requirements, configuring different servers for model training and reasoning;
the data acquisition and algorithm services need a software environment: if the algorithms are built on a Python environment, specific dependency packages are installed, and ffmpeg is installed for speech so that audio can be conveniently resampled and format-converted.
3. The method for audio data collection and personalized speech training and reasoning according to claim 1, wherein the constructing of the user organization architecture specifically comprises:
an administrator can conveniently manage member users' rights, including whether a user is allowed to invite others for multi-person personalized speech synthesis;
a user can record different voice data to meet different service requirements and share the voice data for others to use.
4. The method for audio data collection and personalized speech training and reasoning according to claim 1, wherein the deploying data collection and algorithm service specifically comprises:
data acquisition is performed on the PC end and the mobile end in web form and is quickly implemented with Python's gradio or streamlit front-end application packages;
the algorithm services include environmental noise detection, sound intensity measurement algorithms, speech recognition, text comparison, and speech synthesis techniques; the algorithm services are deployed and integration-tested.
5. The method for audio data collection and personalized speech training and reasoning according to claim 1, wherein the audio data collection and audio text alignment specifically comprises:
entering the data acquisition page, operable on both the PC end and the mobile end;
detecting environmental noise: after the user enters the collection page, environmental noise is detected first, and if it exceeds the noise threshold, the user is asked to find a quiet environment to record;
selecting an acquisition mode: the user only needs to read aloud the given text template, which is simple and convenient, and audio acquisition in this mode is recommended;
text comparison and alignment: acquisition is performed according to the text template, and each time the user finishes reading a passage, the sound intensity measurement algorithm and the speech recognition algorithm are called;
the sound intensity measurement algorithm is used to judge whether the user's recording meets the requirement, and if the sound is too low, the user is prompted to increase the volume;
the speech recognition algorithm is used to check whether the user's pronunciation is aligned with the text and whether misreading, missed reading, or extra reading occurred, and to point this out to the user;
the user adjusts according to the fed-back text content and may also choose to read the passage again to correct the previous pronunciation.
6. The method for audio data collection and personalized speech training and reasoning according to claim 1, wherein the text preprocessing and audio preprocessing specifically comprises:
before being fed into the model, the text undergoes regularization and phoneme, tone, and pause conversion;
audio preprocessing.
7. The method for audio data collection and personalized speech training and reasoning according to claim 1, wherein the model training and evaluation specifically comprises:
the user may select an open-source or self-developed model for model training;
note that the training modes are single-speaker and multi-speaker, which facilitates subsequent model management;
the speech synthesis model is evaluated using the subjective MOS metric;
pronunciation of polyphonic characters, numbers, and English is also evaluated.
Application CN202311613575.6A, filed 2023-11-29, priority date 2023-11-29: Audio data acquisition and personalized voice training and reasoning method. Status: Pending. Publication: CN117636840A.

Priority Applications (1)

Application Number: CN202311613575.6A; Priority Date: 2023-11-29; Filing Date: 2023-11-29; Title: Audio data acquisition and personalized voice training and reasoning method

Publications (1)

Publication Number: CN117636840A; Publication Date: 2024-03-01

Family ID: 90035230

Family Applications (1)

Application Number: CN202311613575.6A; Title: Audio data acquisition and personalized voice training and reasoning method; Status: Pending; Filing Date: 2023-11-29

Country Status (1)

CN: CN117636840A (pending)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination