CN111341295A - Offline real-time multilingual broadcast sensitive word monitoring method - Google Patents

Offline real-time multilingual broadcast sensitive word monitoring method

Info

Publication number
CN111341295A
Authority
CN
China
Prior art keywords
voice
keyword
layer
output
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010162340.XA
Other languages
Chinese (zh)
Inventor
吕志良
陈曾
莫舸舸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Huari Communication Technology Co ltd
Original Assignee
Chengdu Huari Communication Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Huari Communication Technology Co ltd
Priority to CN202010162340.XA
Publication of CN111341295A
Legal status: Pending

Classifications

    • G PHYSICS — G10 MUSICAL INSTRUMENTS; ACOUSTICS — G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING — G10L 15/00 Speech recognition
    • G10L 15/005 Language recognition
    • G10L 15/02 Feature extraction for speech recognition; Selection of recognition unit
    • G10L 15/08 Speech classification or search
    • G10L 15/16 Speech classification or search using artificial neural networks
    • G10L 15/26 Speech to text systems
    • G10L 2015/088 Word spotting

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses an offline real-time multilingual broadcast sensitive word monitoring method, which comprises the following steps: normalizing the received speech signal; extracting the speech filterbanks matrix as the acoustic feature; learning the acoustic feature matrices of different languages with a convolutional neural network and performing classification and identification; collecting the keyword speech corpus to be recognized and greatly expanding the sample set; extracting an acoustic feature matrix from each keyword speech sample to obtain the keyword acoustic feature matrix; learning the keyword acoustic features with a GRU neural network; and extracting acoustic features from the received broadcast speech in real time, matching them with the trained GRU neural network, and raising an alarm once a target keyword is matched. The invention uses an offline technique that requires no Internet connection; the structure is simple, the code runs efficiently, and speech detection is efficient.

Description

Offline real-time multilingual broadcast sensitive word monitoring method
Technical Field
The invention relates to the technical field of broadcast monitoring, and in particular to an offline real-time multilingual broadcast sensitive word monitoring method.
Background
With major breakthroughs in artificial intelligence for speech, images and related fields, transferring these achievements to radio monitoring has become increasingly practical. When monitoring broadcast speech, recognizing the spoken content in real time is of great significance for promptly detecting and alarming on illegal or sensitive information in that content. In border radio monitoring environments, foreign-language speech and its content must also be recognized and monitored. The current common schemes for keyword speech recognition are: 1. convert the speech content into text through a speech-to-text service and search the text for keywords; 2. upload the speech data and the sensitive word list to a network speech service, which analyzes them and returns the sensitive word recognition result. These keyword recognition schemes have the following disadvantages: 1) speech-to-text services are costly; 2) speech-to-text conversion places high demands on hardware; 3) converting speech into a complete text and then searching that text for keywords is inefficient; 4) calling online network services requires a stable network connection and cannot run offline.
Disclosure of Invention
The invention aims to provide an offline real-time multilingual broadcast sensitive word monitoring method, so as to solve the problems in the prior art that keyword detection by first converting speech into text is inefficient and cannot run offline.
The invention solves the problems through the following technical scheme:
an off-line real-time multilingual broadcast sensitive word monitoring method comprises the following steps:
step S1: the speech recognition module carries out language discrimination on the received broadcast speech:
step S11: normalizing the received speech, converting it into the standard audio format used for speech recognition, and extracting the speech filterbanks matrix as the acoustic feature matrix;
step S12: learning different language acoustic feature matrixes by adopting a convolutional neural network, performing language identification, and outputting language identification classification results;
step S2: calling a keyword voice recognition module corresponding to the language to perform keyword detection and alarm:
step S21: collecting keyword voice corpora to be recognized, and expanding a sample set;
step S22: extracting an acoustic feature matrix from each keyword speech sample and performing a discrete cosine transform on it to obtain a decorrelated feature matrix, retaining only the first 13-dimensional data as the new keyword acoustic feature matrix; this decorrelation and dimensionality reduction of the original feature matrix keeps the real-time detection fast;
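As an illustrative, non-authoritative sketch of step S22, the following Python snippet applies a type-II DCT along the filter-bank axis and keeps the first 13 coefficients. The function name and the shape convention (frames × filter banks) are assumptions for illustration only.

```python
import numpy as np
from scipy.fftpack import dct

def decorrelate_features(fbank_matrix, num_coeffs=13):
    """Apply a DCT along the filter-bank axis and keep the first
    `num_coeffs` coefficients, as described in step S22.

    fbank_matrix: array of shape (num_frames, num_filters),
                  e.g. the normalized FilterBank matrix from step S11.
    Returns an array of shape (num_frames, num_coeffs).
    """
    # A type-II DCT with orthonormal scaling decorrelates the highly
    # correlated filter-bank dimensions.
    coeffs = dct(fbank_matrix, type=2, axis=1, norm="ortho")
    return coeffs[:, :num_coeffs]
```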
step S23: learning keyword acoustic features using a GRU neural network;
A GRU neural network is used as the classifier, with the number of output classes equal to the number of target keywords to be detected. The more target keywords there are, the more classes are needed and the more GRU units must be configured. In general, the number of units starts at 20 and is increased in steps of 10 as the number of target words grows.
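A minimal sketch of such a GRU keyword classifier, written here with the Keras API purely for illustration: the 13-dimensional input, the GRU unit count starting at 20, and the softmax output over the keyword classes follow the description above, while the loss, optimizer and function name are assumptions.

```python
import tensorflow as tf

def build_keyword_classifier(num_keywords, num_frames, num_units=20):
    """GRU classifier over sequences of 13-dimensional acoustic features.

    num_keywords: number of target keyword classes (step S23).
    num_frames:   number of feature frames in one detection window.
    num_units:    GRU units; start around 20 and grow in steps of 10
                  as the keyword list grows (see the text above).
    """
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(num_frames, 13)),
        tf.keras.layers.GRU(num_units),
        tf.keras.layers.Dense(num_keywords, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```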
Step S24: extracting the 13-dimensional acoustic features of the received broadcast speech in real time, matching them with the trained GRU neural network, and raising an alarm once a target keyword is matched.
After the language of the speech has been identified, the 13-dimensional acoustic features for the corresponding language are extracted and stored in a double-ended queue. In the real-time monitoring scenario, every time a preset amount of speech data (for example 0.05 s) is received, it is converted into a 13-dimensional acoustic feature matrix and appended to the tail of the queue, while the oldest 0.05 s of acoustic features at the head of the queue are removed, keeping the data length in the queue constant. Each time the queue content is updated, the GRU neural network is invoked once to match the queue content and check whether a preset keyword is present. In this way, real-time keyword detection is performed on the received speech.
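The following Python sketch illustrates this sliding-window detection loop with `collections.deque`. The helper names (`extract_13dim_features`, `model`, `keywords`), the window length, and the alarm threshold are illustrative assumptions; only the 0.05 s chunk size and the queue behaviour come from the description above.

```python
from collections import deque
import numpy as np

WINDOW_SECONDS = 1.0   # audio context kept in the queue (assumed)
CHUNK_SECONDS = 0.05   # preset receive interval from the description

max_chunks = int(WINDOW_SECONDS / CHUNK_SECONDS)
window = deque(maxlen=max_chunks)   # old chunks fall off the head automatically

def on_audio_chunk(chunk_samples, model, extract_13dim_features, keywords):
    """Called for every 0.05 s of received broadcast speech."""
    # Convert the new chunk to 13-dimensional feature frames and append
    # it to the tail of the queue; maxlen drops the head automatically.
    window.append(extract_13dim_features(chunk_samples))
    if len(window) < max_chunks:
        return None                        # not enough context yet
    features = np.concatenate(window, axis=0)[np.newaxis, ...]
    probs = model.predict(features, verbose=0)[0]
    best = int(np.argmax(probs))
    if probs[best] > 0.9:                  # assumed alarm threshold
        return keywords[best]              # raise an alarm on this keyword
    return None
```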
Further, extracting the acoustic feature matrix in step S11 comprises the following sub-steps (a sketch follows this list):
step S111: pre-emphasizing the speech signal sequence x(t) to obtain y(t) = x(t) - 0.97·x(t-1), which improves the signal-to-noise ratio;
step S112: cutting the pre-emphasized speech into temporally overlapping frames (e.g., each frame 0.025 s long with a 0.01 s frame shift), and applying a Hamming window function to each frame;
step S113: performing a 256-point FFT on each frame to convert the time-domain frame into the frequency domain;
step S114: filtering each spectrum with a bank of triangular filters (e.g., 20) whose spacing follows the mel scale, and merging the filtered spectra;
step S115: subtracting the mean of each coefficient from the merged spectra to obtain the normalized FilterBank matrix.
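A minimal Python sketch of sub-steps S111-S115, using numpy and librosa's mel filter bank. The parameter values (0.025 s frames, 0.01 s shift, 256-point FFT, 20 mel filters) come from the examples above; the function name, the librosa usage, and the log compression step are assumptions for illustration.

```python
import numpy as np
import librosa

def filterbank_features(signal, sr=16000, frame_len=0.025, frame_shift=0.01,
                        n_fft=256, n_mels=20):
    # S111: pre-emphasis y(t) = x(t) - 0.97 * x(t-1)
    y = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # S112: overlapping frames with a Hamming window
    frame_size = int(frame_len * sr)
    hop = int(frame_shift * sr)
    frames = librosa.util.frame(y, frame_length=frame_size, hop_length=hop).T
    frames = frames * np.hamming(frame_size)
    # S113: 256-point FFT -> power spectrum
    # (the 400-sample frame is cropped to 256 points, per the text)
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2
    # S114: mel-spaced triangular filters, then merge (matrix product)
    mel_fb = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels)
    fbank = np.log(power @ mel_fb.T + 1e-10)   # log compression (assumed)
    # S115: subtract the mean of each coefficient
    return fbank - fbank.mean(axis=0, keepdims=True)
```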
Further, the data normalization in step S11 is: performing channel conversion and resampling on the received speech, normalizing it into a single-channel (mono) standard speech format with a 16 kHz sampling rate and 16-bit samples.
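One possible way to perform this normalization, sketched with the librosa and soundfile libraries; the library choice and function name are assumptions, and any resampler that yields 16 kHz, 16-bit mono PCM would do.

```python
import librosa
import soundfile as sf

def normalize_audio(in_path, out_path):
    """Convert a received recording to 16 kHz, 16-bit, mono PCM WAV."""
    # librosa resamples to sr=16000 and downmixes to mono on load.
    samples, sr = librosa.load(in_path, sr=16000, mono=True)
    # subtype="PCM_16" writes 16-bit samples.
    sf.write(out_path, samples, sr, subtype="PCM_16")
```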
Further, the method for expanding the sample set in step S21 is: randomly adding noise to, speeding up, and changing the pitch of the collected real keyword speech corpus, thereby expanding the samples.
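An illustrative augmentation sketch using librosa's time-stretch and pitch-shift effects plus additive Gaussian noise; the specific parameter ranges and the noise model are assumptions, not taken from the patent text.

```python
import numpy as np
import librosa

def augment(samples, sr=16000, rng=np.random.default_rng()):
    """Produce one randomly perturbed copy of a keyword utterance."""
    out = samples.copy()
    # Random additive noise (assumed Gaussian, small amplitude).
    out = out + rng.normal(0.0, 0.005, size=out.shape)
    # Random speed change of about +/- 10 %.
    out = librosa.effects.time_stretch(out, rate=rng.uniform(0.9, 1.1))
    # Random pitch shift of up to +/- 2 semitones.
    out = librosa.effects.pitch_shift(out, sr=sr,
                                      n_steps=rng.uniform(-2.0, 2.0))
    return out
```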
Further, step S12 uses a convolutional neural network with a sequential structure of 5 convolutional layers, wherein:
The 1st convolutional layer uses 3×3 convolution kernels and 16 feature maps to extract input features, and its output is ReLU-activated and then passed through 3×3 pooling; the 2nd convolutional layer uses 3×3 kernels and 32 feature maps, takes the output of the 1st layer as input, and its output is ReLU-activated and then 3×3-pooled; the 3rd convolutional layer uses 3×3 kernels and 64 feature maps, takes the output of the 2nd layer as input, and its output is ReLU-activated and then 3×3-pooled; the 4th convolutional layer uses 3×3 kernels and 128 feature maps, takes the output of the 3rd layer as input, and its output is ReLU-activated and then 3×3-pooled; the 5th convolutional layer uses 3×3 kernels and 256 feature maps, takes the output of the 4th layer as input, and its output is ReLU-activated and then 3×3-pooled. The output of the 5th convolutional layer is flattened and, after a Dropout layer, connected to the fully connected layers. Finally, the recognition result is output through a fully connected layer of 32 neurons and a Softmax layer of 8 neurons: the Softmax layer outputs a vector of length 8, each element of which represents the probability that the input belongs to the corresponding class, and the class with the largest of the 8 probabilities is the final recognition result of the neural network.
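A sketch of this 5-layer CNN in Keras, under stated assumptions: the input shape (here roughly 3 s of frames × 20 filter banks × 1 channel), max pooling, the dropout rate, and the training configuration are illustrative choices; the kernel sizes, feature-map counts, 32-neuron dense layer, and 8-class softmax follow the description above.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_language_cnn(input_shape=(300, 20, 1), num_languages=8):
    """5-convolution-layer language classifier described in step S12."""
    model = tf.keras.Sequential([layers.Input(shape=input_shape)])
    for filters in (16, 32, 64, 128, 256):
        model.add(layers.Conv2D(filters, (3, 3), padding="same",
                                activation="relu"))
        model.add(layers.MaxPooling2D((3, 3), padding="same"))
    model.add(layers.Flatten())
    model.add(layers.Dropout(0.5))            # assumed dropout rate
    model.add(layers.Dense(32, activation="relu"))
    model.add(layers.Dense(num_languages, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```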
Compared with the prior art, the invention has the following advantages and beneficial effects:
the invention adopts an off-line technology without internet connection; the structure is simple, the code running efficiency is high, and the voice detection efficiency is high; the neural network and the extracted voice acoustic characteristics are utilized, and the two modes of language identification and keyword identification are combined, so that the multi-channel voice signals can be monitored in real time; the recognized language of the voice provides convenience for converting the voice into a text at a later stage.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples, but the embodiments of the present invention are not limited thereto.
Example 1:
referring to fig. 1, a method for monitoring offline real-time multilingual broadcast sensitive words includes the following steps:
1. Standardizing the speech signal received by the receiver and converting it into the standard audio format for speech recognition;
specifically, performing channel conversion and resampling on the received speech and normalizing it to a 16 kHz sampling rate, 16-bit, single-channel (Mono) format;
2. extracting a voice filterbanks matrix as an acoustic feature;
3. Learning the acoustic feature matrices of different languages with a convolutional neural network and performing classification and identification, specifically using a convolutional neural network with 5 convolutional layers in a sequential structure. The 1st convolutional layer uses 3×3 convolution kernels and 16 feature maps to extract input features, and its output is ReLU-activated and then 3×3-pooled; the 2nd convolutional layer uses 3×3 kernels and 32 feature maps, takes the output of the 1st layer as input, and its output is ReLU-activated and then 3×3-pooled; the 3rd convolutional layer uses 3×3 kernels and 64 feature maps, takes the output of the 2nd layer as input, and its output is ReLU-activated and then 3×3-pooled; the 4th convolutional layer uses 3×3 kernels and 128 feature maps, takes the output of the 3rd layer as input, and its output is ReLU-activated and then 3×3-pooled; the 5th convolutional layer uses 3×3 kernels and 256 feature maps, takes the output of the 4th layer as input, and its output is ReLU-activated and then 3×3-pooled. The output of the 5th convolutional layer is flattened and, after a Dropout layer, connected to the fully connected layers. Finally, the recognition result is output through a fully connected layer of 32 neurons and a Softmax layer of 8 neurons: the Softmax layer outputs a vector of length 8, each element of which represents the probability that the input belongs to the corresponding class, and the class with the largest of the 8 probabilities is the final recognition result of the neural network. In an actual project, further optimizations such as replacing the Dropout layer with a BatchNormalization layer or replacing the fully connected layer with an AveragePooling layer can be applied (a sketch of this variant follows), and the exact network structure can be adjusted to the computing capacity of the actual hardware. During detection, the acoustic matrix is extracted from the input data and fed into the neural network, which outputs the language identification result.
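A sketch of the optimized variant mentioned above, again under illustrative assumptions: BatchNormalization replaces Dropout and a GlobalAveragePooling2D layer replaces the flatten-plus-dense stage; whether the 32-neuron dense layer is kept is itself an assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_language_cnn_light(input_shape=(300, 20, 1), num_languages=8):
    """Lighter CNN variant for constrained hardware (illustrative)."""
    model = tf.keras.Sequential([layers.Input(shape=input_shape)])
    for filters in (16, 32, 64, 128, 256):
        model.add(layers.Conv2D(filters, (3, 3), padding="same",
                                activation="relu"))
        model.add(layers.BatchNormalization())   # replaces Dropout
        model.add(layers.MaxPooling2D((3, 3), padding="same"))
    model.add(layers.GlobalAveragePooling2D())   # replaces Flatten + Dense(32)
    model.add(layers.Dense(num_languages, activation="softmax"))
    return model
```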
4. Collecting the keyword speech corpus to be recognized, randomly adding noise to and changing the pitch of the data, and greatly expanding the sample set;
5. Extracting the acoustic feature matrix from each keyword speech sample: after extracting from each keyword speech sample segment an acoustic feature matrix of the same kind as that used for the language data, performing a discrete cosine transform on it to obtain a decorrelated feature matrix, and keeping only the first 13-dimensional data as the new keyword acoustic feature matrix.
6. Learning the keyword acoustic features with a GRU neural network: the GRU neural network is used as a classifier whose number of output classes equals the number of target keywords to be detected. The more target keywords there are, the more classes are needed and the more GRU units must be set; generally the number of units starts at 20 and is increased in steps of 10 as the number of target words grows;
7. Extracting the 13-dimensional acoustic features of the received broadcast speech in real time, matching them with the trained GRU neural network, and raising an alarm once a target keyword is matched. Specifically, after the language of the speech has been identified, the 13-dimensional acoustic features are extracted from the speech acoustic matrix of the corresponding language and stored in a double-ended queue; in the real-time monitoring scenario, every time 0.05 s of speech data is received, it is converted into a 13-dimensional acoustic feature matrix and appended to the tail of the queue, while the 0.05 s of acoustic features at the head of the queue are removed to keep the data length in the queue constant. Each time the queue content is updated, the GRU neural network is invoked once to match the queue content and check whether a preset keyword is present. In this way, real-time keyword detection is performed on the received speech.
Although the present invention has been described herein with reference to the illustrated embodiments thereof, which are intended to be preferred embodiments of the present invention, it is to be understood that the invention is not limited thereto, and that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure.

Claims (5)

1. An off-line real-time multilingual broadcast sensitive word monitoring method is characterized by comprising the following steps:
step S1: the speech recognition module carries out language discrimination on the received broadcast speech:
step S11: normalizing the received voice, converting it into the standard audio format used for speech recognition, and extracting the voice filterbanks matrix as the acoustic feature matrix;
step S12: learning different language acoustic feature matrixes by adopting a convolutional neural network, performing language identification, and outputting language identification classification results;
step S2: calling a keyword voice recognition module corresponding to the language to perform keyword detection and alarm:
step S21: collecting keyword voice corpora to be recognized, and expanding a sample set;
step S22: extracting an acoustic feature matrix from the keyword voice sample and performing a discrete cosine transform to obtain a decorrelated feature matrix, retaining only the first 13-dimensional data as the new keyword acoustic feature matrix;
step S23: learning keyword acoustic features using a GRU neural network;
step S24: extracting 13-dimensional acoustic features of the received broadcast voice in real time, performing matching with the trained GRU neural network, and giving an alarm once a target keyword is matched.
2. The offline real-time multilingual broadcast sensitive word monitoring method of claim 1, wherein extracting the acoustic feature matrix in step S11 comprises:
step S111: pre-emphasizing the speech signal sequence x(t) to obtain y(t) = x(t) - 0.97·x(t-1);
step S112: cutting the pre-emphasized speech into temporally overlapping frames, and applying a Hamming window function to each frame;
step S113: performing 256-point FFT on each frame, and converting a time domain frame into a frequency domain;
step S114: filtering each frequency spectrum by adopting a plurality of triangular filters, wherein the interval between the triangular filters accords with the Mel scale, and merging the filtered frequency spectrums;
step S115: and normalizing the merged frequency spectrum to obtain a FilterBanks matrix.
3. The offline real-time multilingual broadcast sensitive word monitoring method according to claim 1 or 2, wherein the data normalization in step S11 is performed by: performing channel conversion and resampling on the received voice, and standardizing it into a single-channel standard voice format with a 16 kHz sampling rate and 16 bits.
4. The offline real-time multilingual broadcast sensitive word monitoring method of claim 1, wherein the method of expanding the sample set in step S21 is: randomly adding noise to, speeding up, and changing the pitch of the collected real keyword voice corpus to expand the samples.
5. The offline real-time multilingual broadcast sensitive word monitoring method according to claim 1, wherein a convolutional neural network with 5 convolutional layers in a sequential structure is adopted in step S12, wherein:
the 1st convolutional layer uses 3×3 convolution kernels and 16 feature maps to extract input features, and its output is ReLU-activated and then 3×3-pooled; the 2nd convolutional layer uses 3×3 kernels and 32 feature maps, takes the output of the 1st layer as input, and its output is ReLU-activated and then 3×3-pooled; the 3rd convolutional layer uses 3×3 kernels and 64 feature maps, takes the output of the 2nd layer as input, and its output is ReLU-activated and then 3×3-pooled; the 4th convolutional layer uses 3×3 kernels and 128 feature maps, takes the output of the 3rd layer as input, and its output is ReLU-activated and then 3×3-pooled; the 5th convolutional layer uses 3×3 kernels and 256 feature maps, takes the output of the 4th layer as input, and its output is ReLU-activated and then 3×3-pooled; the output of the 5th convolutional layer is flattened and, after a Dropout layer, connected to the fully connected layers; finally, the recognition result is output through a fully connected layer of 32 neurons and a Softmax layer of 8 neurons, the Softmax layer outputs a vector of length 8, each element of which represents the probability that the input is recognized as the corresponding class, and the class with the largest of the 8 probabilities is the final recognition result of the neural network.
CN202010162340.XA 2020-03-10 2020-03-10 Offline real-time multilingual broadcast sensitive word monitoring method Pending CN111341295A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010162340.XA CN111341295A (en) 2020-03-10 2020-03-10 Offline real-time multilingual broadcast sensitive word monitoring method


Publications (1)

Publication Number Publication Date
CN111341295A true CN111341295A (en) 2020-06-26

Family

ID=71182263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010162340.XA Pending CN111341295A (en) 2020-03-10 2020-03-10 Offline real-time multilingual broadcast sensitive word monitoring method

Country Status (1)

Country Link
CN (1) CN111341295A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6332120B1 (en) * 1999-04-20 2001-12-18 Solana Technology Development Corporation Broadcast speech recognition system for keyword monitoring
CN108172238A (en) * 2018-01-06 2018-06-15 广州音书科技有限公司 A kind of voice enhancement algorithm based on multiple convolutional neural networks in speech recognition system
CN108711420A (en) * 2017-04-10 2018-10-26 北京猎户星空科技有限公司 Multilingual hybrid model foundation, data capture method and device, electronic equipment
CN109523993A (en) * 2018-11-02 2019-03-26 成都三零凯天通信实业有限公司 A kind of voice languages classification method merging deep neural network with GRU based on CNN
CN109599129A (en) * 2018-11-13 2019-04-09 杭州电子科技大学 Voice depression recognition methods based on attention mechanism and convolutional neural networks


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DONG ZHU et al.: "Identification of Spoken Language from Webcast Using Deep Convolutional Recurrent Neural Networks" *
王诗佳: "基于深度学习的声音事件识别研究" (Research on sound event recognition based on deep learning) *

Similar Documents

Publication Publication Date Title
CN103700370B (en) A kind of radio and television speech recognition system method and system
EP3701528B1 (en) Segmentation-based feature extraction for acoustic scene classification
CN111816218A (en) Voice endpoint detection method, device, equipment and storage medium
CN111461173B (en) Multi-speaker clustering system and method based on attention mechanism
CN112735383A (en) Voice signal processing method, device, equipment and storage medium
CN111986699B (en) Sound event detection method based on full convolution network
CN111625649A (en) Text processing method and device, electronic equipment and medium
Tripathi et al. Focal loss based residual convolutional neural network for speech emotion recognition
CN113707175B (en) Acoustic event detection system based on feature decomposition classifier and adaptive post-processing
CN113611286B (en) Cross-language speech emotion recognition method and system based on common feature extraction
CN114822578A (en) Voice noise reduction method, device, equipment and storage medium
US11776532B2 (en) Audio processing apparatus and method for audio scene classification
CN113793624A (en) Acoustic scene classification method
CN116741159A (en) Audio classification and model training method and device, electronic equipment and storage medium
Hajihashemi et al. Novel time-frequency based scheme for detecting sound events from sound background in audio segments
CN106228984A (en) Voice recognition information acquisition methods
CN111145761A (en) Model training method, voiceprint confirmation method, system, device and medium
CN111341295A (en) Offline real-time multilingual broadcast sensitive word monitoring method
CN116230020A (en) Speech emotion recognition and classification method
Zhou et al. Environmental sound classification of western black-crowned gibbon habitat based on spectral subtraction and VGG16
CN111048110A (en) Musical instrument identification method, medium, device and computing equipment
Martín-Gutiérrez et al. An End-to-End Speaker Diarization Service for improving Multimedia Content Access
Shome et al. A robust DNN model for text-independent speaker identification using non-speaker embeddings in diverse data conditions
Ashurov et al. Classification of Environmental Sounds Through Spectrogram-Like Images Using Dilation-Based CNN
Cruz et al. Novel Time-Frequency Based Scheme for Detecting Sound Events from Sound Background in Audio Segments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination