CN114266735B - Chest X-ray image lesion abnormality detection method - Google Patents
Abstract
The invention discloses a method for detecting lesion abnormalities in chest X-ray images, comprising the steps of: inputting an image to be detected into a feature extraction module to obtain a first feature map; inputting the first feature map into a context information extraction module to obtain a second feature map rich in context information; expanding the second feature map into a one-dimensional sequence, mapping the one-dimensional sequence into an embedded sequence of set dimension, and adding position coding information to the embedded sequence; and inputting the position-coded sequence into a Transformer network model, which outputs a target frame for each lesion together with the lesion type, completing lesion abnormality detection for the X-ray image. The method copes effectively with the complexity and diversity of chest X-ray lesions, achieves high detection accuracy, and can effectively complete lesion detection in chest X-rays.
Description
Technical Field
The invention relates to a method for detecting lesion abnormalities in chest X-ray images, and belongs to the technical field of image processing.
Background
In recent years, X-ray images have played a very important role in medical diagnosis. To enable rapid and accurate automated diagnosis, many researchers have worked on intelligent computer-aided detection systems that assist doctors with diagnosing chest X-ray lesions. With the growing demand for X-ray diagnosis of chest diseases, effective computer-aided tools are urgently needed to relieve the burden on doctors. Because organ structures overlap along the projection direction, and because chest X-ray diseases are diverse, interpreting X-ray images is very difficult and often requires a highly experienced physician. Existing X-ray image detection methods struggle with complex scenes and suffer from poor accuracy.
Disclosure of Invention
The invention aims to provide a chest X-ray image lesion abnormality detection method that solves the problems that existing X-ray image detection methods struggle with complex scenes and suffer from poor accuracy.
In order to achieve the above purpose, the invention adopts the following technical scheme:
The invention provides a chest X-ray image lesion anomaly detection method implemented on an anomaly detection model, wherein the anomaly detection model comprises a feature extraction module, a context information extraction module, a position encoder and a Transformer network model connected in sequence, and the method comprises the following steps:
inputting the image to be detected into a feature extraction module to obtain a first feature map;
inputting the first feature map into a context information extraction module to obtain a second feature map rich in context information;
expanding the second feature map into a one-dimensional sequence, mapping the one-dimensional sequence into an embedded sequence with set dimension, and adding position coding information into the embedded sequence by using a position encoder;
and inputting the position-coded sequence into the Transformer network model, and outputting a target frame of the lesion and the type of the lesion.
Further, the feature extraction module employs a ResNet network.
Further, the inputting the first feature map into a context information extraction module to obtain a second feature map rich in context information, including:
Inputting the first feature map into a context information extraction module, and adding the result output by the context information extraction module with the feature map before input to obtain a new feature map;
the new feature map is subjected to pooling downsampling and then is used as input of a context information extraction module;
repeating the above steps multiple times until the feature map output by the context information extraction module fuses information from multiple layers.
Further, the context information extraction module comprises 2 standard convolution layers, a plurality of bottleneck structures containing jump connections, and 1 standard convolution layer connected in sequence; each bottleneck structure contains 1 standard convolution layer, 1 dilated convolution layer and 1 standard convolution layer connected in sequence.
Further, the convolution kernel sizes of the 3 standard convolution layers in the context information extraction module are 1x1, 3x3 and 3x3 respectively, with channel numbers of 128, 128 and 512 respectively; the 2 standard convolution layers in each bottleneck structure both have 1x1 kernels and 128 channels, and the dilated convolution layer has a 3x3 kernel, a dilation rate of 2 and 256 channels.
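The layer configuration above can be sketched as follows. This is an illustrative PyTorch sketch, not the patented implementation: the framework, the 512-channel input, the ReLU placement, and the use of three bottleneck blocks are all assumptions (the text does not fix the bottleneck count).

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """Jump-connected bottleneck: 1x1 (128 ch) -> dilated 3x3 (rate 2, 256 ch) -> 1x1 (128 ch)."""
    def __init__(self, channels=128, mid=256, dilation=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 1),   # 1x1 standard conv, 128 channels
            nn.ReLU(inplace=True),              # activation placement is an assumption
            nn.Conv2d(channels, mid, 3, padding=dilation, dilation=dilation),  # dilated 3x3
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1),        # 1x1 standard conv back to 128 channels
        )

    def forward(self, x):
        return x + self.body(x)                 # the jump (skip) connection

class DCEModule(nn.Module):
    """Context information extraction module: 1x1 and 3x3 convs for dimension
    reduction, several bottlenecks, then a 3x3 conv (512 ch) for dimension lifting."""
    def __init__(self, in_channels=512, num_bottlenecks=3):
        super().__init__()
        self.reduce = nn.Sequential(
            nn.Conv2d(in_channels, 128, 1),     # 1x1, 128 channels
            nn.Conv2d(128, 128, 3, padding=1),  # 3x3, 128 channels
        )
        self.bottlenecks = nn.Sequential(*[Bottleneck() for _ in range(num_bottlenecks)])
        self.expand = nn.Conv2d(128, 512, 3, padding=1)  # 3x3, 512 channels

    def forward(self, x):
        return self.expand(self.bottlenecks(self.reduce(x)))
```

Keeping the output channel count equal to the (assumed) input channel count lets the module's output be added back onto its input, as the iterative fusion step described later requires.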
Further, the adding position coding information in the embedded sequence includes:
the position-coding information is added using sine and cosine functions of different frequencies.
Further, the Transformer network model includes a Transformer encoder and a Transformer decoder that employ a multi-head attention mechanism, and a multi-layer feedforward neural network.
Further, the anomaly detection model is obtained through training by the following method:
acquiring an X-ray image chest lesion dataset, wherein the dataset is divided into a plurality of lesion category sub-datasets, each sub-dataset comprises a plurality of sample images under the same lesion category, and each sample image comprises coordinates of a lesion and is marked with the lesion category;
Inputting each sample image of each sub-data set into the abnormality detection model, and outputting a prediction result;
And (3) optimally matching the prediction result and the true value by adopting a Hungary algorithm to obtain a loss function, carrying out gradient descent according to the back propagation of the loss function, and training to obtain the anomaly detection model.
Further, the loss function comprises a classification loss function and a positioning regression loss function, wherein the classification loss function adopts cross entropy loss.
The positioning regression loss function includes an IoU loss and a regression loss:
L_loc = λ_iou·L_iou + λ_reg·L_reg
wherein L_reg is a smooth L1 function of the form:
L_reg(x) = 0.5x², if |x| < 1; |x| − 0.5, otherwise
and L_iou is a GIoU function of the form:
L_iou = 1 − ( |A∩B| / |A∪B| − |C\(A∪B)| / |C| )
wherein A and B represent the rectangles involved in the calculation, C represents the smallest rectangle containing both A and B, and |·| represents the area of the rectangle.
Compared with the prior art, the invention has the beneficial effects that:
The method for detecting abnormal lesions in chest X-ray images adopts a Transformer structure fused with context information as the feature extractor; it copes effectively with the complexity and diversity of chest X-ray lesions, achieves high detection accuracy, can distinguish multiple different types of lesion areas, and can accurately mark out the lesion areas.
Drawings
FIG. 1 is a network configuration diagram of a method for detecting lesions in chest X-ray images according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a context information extraction module;
FIG. 3 is a chest radiograph to be examined;
fig. 4 is a graph of the detection result.
Detailed Description
The invention is further described below in connection with specific embodiments. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
An embodiment of the invention provides a method for detecting lesion abnormalities in chest X-ray images, implemented on an anomaly detection model as shown in FIG. 1. The anomaly detection model comprises a feature extraction module, a context information extraction module, a position encoder and a Transformer network model connected in sequence.
Referring to fig. 1, a method for detecting abnormal lesions in chest X-ray images specifically includes the following steps:
step 1, inputting an image to be detected into a feature extraction module to obtain a first feature map;
the input image is converted into a feature map using ResNet as a feature extraction module using convolution, pooling, and jump connections.
Step 2, inputting the first feature map into a context information extraction module to obtain a second feature map rich in context information;
As shown in fig. 2, the context information extraction module (DCE module) includes a 1x1 standard convolution layer, a 3x3 standard convolution layer, a plurality of bottleneck structures containing jump connections, and a 3x3 standard convolution layer connected in sequence, wherein the channel numbers of the 3 standard convolution layers are 128, 128 and 512, respectively.
Each bottleneck structure containing a jump connection contains a 1x1 standard convolution layer, a 3x3 dilated convolution layer, and a 1x1 standard convolution layer connected in sequence. The channel numbers of the 2 standard convolution layers are 128 and 128 respectively; the dilated convolution layer has a dilation rate of 2 and 256 channels.
The feature map input to the DCE module first passes through the 1x1 and 3x3 standard convolutions for dimension reduction, then through the several jump-connected bottleneck structures, and finally through the 3x3 standard convolution for dimension lifting.
And extracting the context information by adopting an iterative fusion mode.
In an embodiment, the extracting the contextual features of the first feature map by using the contextual information extracting module specifically includes:
Step 21, sending the feature map to a DCE module, and adding the obtained result with the original feature map;
wherein the feature map output in step 1 is denoted F_1.
Step 22, reducing the size of the feature map to be half of the original size by using pooling operation;
Wherein the pooling layer size is 2 x 2.
Step 23, repeating steps 21 and 22 for a plurality of times until the feature map merges the information of the plurality of layers.
As shown in FIG. 2, the l-th layer feature F_l passes through a DCE module and a pooled downsampling operation to yield the smaller (l+1)-th layer feature F_{l+1}, which can be formulated as:
F_l = F_{l-1} + f_DCE(F_{l-1})
F_{l+1} = f_down(F_l)
where F_l denotes the l-th layer feature map, f_DCE is the context information extraction module, and f_down is the downsampling operation, here implemented by pooling.
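The iterative fusion recursion F_l = F_{l-1} + f_DCE(F_{l-1}), F_{l+1} = f_down(F_l) can be sketched as below. This is a NumPy toy on single-channel 2D maps with a caller-supplied stand-in for the DCE module; the real module is convolutional and multi-channel.

```python
import numpy as np

def avg_pool2x2(f):
    """2x2 average-pool downsampling: halves each spatial dimension."""
    h, w = f.shape[0] // 2 * 2, f.shape[1] // 2 * 2
    f = f[:h, :w]
    return f.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def iterative_fusion(feature, f_dce, num_layers=4):
    """Apply F_l = F_{l-1} + f_DCE(F_{l-1}) then F_{l+1} = f_down(F_l),
    collecting one feature map per layer (4 layers in the embodiment)."""
    fmaps = [feature]
    for _ in range(num_layers - 1):
        fused = fmaps[-1] + f_dce(fmaps[-1])   # residual fusion with the DCE output
        fmaps.append(avg_pool2x2(fused))       # pooled downsampling, half the size
    return fmaps
```

With a 32x32 input and 4 layers, the final map F_4 is 4x4, each downsampling halving the spatial size as described above.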
In this embodiment, there are 4 layers of feature maps in total, finally yielding the feature map F_4.
Step 3, expanding the second feature map into a one-dimensional sequence, mapping the one-dimensional sequence into an embedded sequence with set dimension, and adding position coding information into the embedded sequence by using a position encoder;
wherein the one-dimensional sequence is mapped into an embedded sequence of dimension d_model, and position coding information is then added using sine and cosine functions of different frequencies:
PE(pos, 2i) = sin(pos / 10000^(2i/d_model))
PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))
where pos denotes the position, i denotes the dimension, and d_model denotes the total dimension of the embedded sequence; the dimension of the position-coded sequence is still d_model.
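The sine/cosine position coding can be sketched as follows, assuming the standard Transformer scheme (sines on even dimensions, cosines on odd ones, geometrically decreasing frequencies):

```python
import numpy as np

def positional_encoding(seq_len, d_model=256):
    """Sinusoidal position codes of shape (seq_len, d_model):
    PE(pos, 2i) = sin(pos / 10000^(2i/d_model)), PE(pos, 2i+1) = cos(...)."""
    pos = np.arange(seq_len)[:, None]          # position index
    i = np.arange(d_model // 2)[None, :]       # dimension-pair index
    angle = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angle)                # even dimensions: sine
    pe[:, 1::2] = np.cos(angle)                # odd dimensions: cosine
    return pe
```

The codes are simply added element-wise to the embedded sequence, so its dimension stays d_model.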
And step 4, inputting the position-coded sequence into the Transformer network model, and outputting a target frame of the lesion and the type of the lesion.
The Transformer network model comprises a Transformer encoder and a Transformer decoder that apply a multi-head attention mechanism, and a multi-layer feedforward neural network.
The multi-head self-attention mechanism can be expressed as:
MultiHead(Q, K, V) = Concat(head_1, …, head_h)·W^O
where Concat denotes splicing the feature tensors and head_i denotes the i-th single-head attention:
head_i = Attention(X·W_i^Q, X·W_i^K, X·W_i^V)
wherein W_i^Q, W_i^K, W_i^V and W^O are learned projection matrices, h denotes the number of attention heads, and the dimension of X is d_model.
Multi-head attention consists of h single-head attention mechanisms, where a single-head self-attention mechanism is expressed as:
Attention(Q, K, V) = A·V
wherein Q, K and V are obtained from the input sequence X through a series of matrix multiplication transformations and represent the query, key and value respectively. The dimension of Q is N_q, the dimensions of K and V are N_kv, and the Softmax function computes the attention weight A from the query and the key:
A_ij = exp(q_i·k_j / √N_kv) / Σ_j exp(q_i·k_j / √N_kv)
where i is the index of the query and j is the index of the key. The final result is the sum of the values weighted by the attention weights, i.e. the i-th row of the single-head attention output is Σ_j A_ij·v_j.
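A minimal NumPy sketch of the attention computation above; the array shapes and the way heads are split from a shared projection are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))   # shift for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

def single_head_attention(Q, K, V):
    """A = softmax(Q K^T / sqrt(d_k)); output row i is the V rows weighted by A_i."""
    d_k = K.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d_k))               # attention weights from queries and keys
    return A @ V

def multi_head_attention(X, W_q, W_k, W_v, W_o, h=8):
    """Concat h single heads, then project with W_o, as in MultiHead(Q, K, V)."""
    d_model = X.shape[-1]
    d_k = d_model // h                                # per-head dimension (N_q = N_kv = d_model/h)
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    heads = [single_head_attention(Q[:, i*d_k:(i+1)*d_k],
                                   K[:, i*d_k:(i+1)*d_k],
                                   V[:, i*d_k:(i+1)*d_k]) for i in range(h)]
    return np.concatenate(heads, axis=-1) @ W_o
```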
The output sequence is obtained after passing through several Transformer encoder and decoder layers.
The output sequence then passes through a two-layer feedforward neural network, which outputs detection frames of dimension N_obj × 4 together with the corresponding lesion types, completing the detection. Here N_obj denotes the number of detection frames, and 4 is the dimension of the rectangular detection frame coordinates. Using ReLU as the activation function, the network can be expressed as:
FFN(x) = max(0, x·W_1 + b_1)·W_2 + b_2
where W_1 and W_2 are parameter matrices, b_1 and b_2 are biases, and the dimension of the input x is d_model. In the invention, d_model is set to 256, the number of attention heads h is set to 8, and N_q and N_kv are set to d_model/h, i.e. 32.
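The two-layer feedforward network is a one-liner in NumPy; the parameter shapes in the test are illustrative assumptions (the text fixes only d_model = 256):

```python
import numpy as np

def ffn(x, W1, b1, W2, b2):
    """FFN(x) = max(0, x W1 + b1) W2 + b2, with ReLU as the activation."""
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2
```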
In the embodiment of the invention, the abnormality detection model also needs to be trained in advance, and the method specifically comprises the following steps:
Step a, acquiring an X-ray image chest lesion dataset, wherein the dataset is divided into a plurality of lesion category sub-datasets, each sub-dataset comprises a plurality of sample images under the same lesion category, and each sample image comprises coordinates of a lesion and is marked with the lesion category;
B, inputting each sample image of each sub-data set into the abnormality detection model, and outputting a prediction result;
And c, optimally matching the predicted result and the true value by adopting a Hungary algorithm to obtain a loss function, carrying out gradient descent according to the back propagation of the loss function, and training to obtain a model.
The model outputs N_obj target frames as the prediction result; denote the prediction result set by ŷ = {ŷ_i}, with N = N_obj for short.
The best match σ̂ found with the Hungarian algorithm is used for training:
σ̂ = argmin_{σ ∈ Ω_N} Σ_i L_match(y_i, ŷ_{σ(i)})
where Ω_N denotes all permutations of the set of ground-truth values y, and L_match is a function measuring the difference between a prediction and a ground truth. Each ground truth is defined as y_i = (c_i, b_i), and the format of each prediction ŷ_i includes the coordinates and confidence of a target frame; p̂_{σ(i)}(c_i) denotes the probability that the matched prediction belongs to class c_i. σ̂ is the assignment found by the Hungarian algorithm that minimizes the overall matching loss:
L_match(y_i, ŷ_{σ(i)}) = −p̂_{σ(i)}(c_i) + L_box(b_i, b̂_{σ(i)})
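As a small illustration of the optimal matching step, the sketch below finds the same optimum as the Hungarian algorithm by exhaustive search (practical only for small N; the Hungarian algorithm reaches it in O(N³) instead of N!). A square cost matrix, with match_cost[i][j] the matching loss between ground truth i and prediction j, is assumed for simplicity.

```python
from itertools import permutations

def best_match(match_cost):
    """Return the permutation sigma minimizing sum_i match_cost[i][sigma(i)],
    i.e. the optimal assignment of predictions to ground truths, with its cost."""
    n = len(match_cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        cost = sum(match_cost[i][perm[i]] for i in range(n))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost
```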
During training, the loss function includes classification loss and positioning loss.
The classification loss uses cross entropy.
The positioning regression loss function includes an IoU loss and a regression loss:
L_loc = λ_iou·L_iou + λ_reg·L_reg
where L_reg is a smooth L1 function of the form:
L_reg(x) = 0.5x², if |x| < 1; |x| − 0.5, otherwise
and L_iou is a GIoU function of the form:
L_iou = 1 − ( |A∩B| / |A∪B| − |C\(A∪B)| / |C| )
wherein A and B denote the rectangular boxes participating in the calculation, C denotes the smallest rectangular box containing both A and B, and |·| denotes the area of the rectangular box.
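The smooth L1 and GIoU terms can be written directly in plain Python; boxes are assumed to be (x1, y1, x2, y2) tuples:

```python
def smooth_l1(x):
    """Smooth L1: 0.5*x^2 for |x| < 1, |x| - 0.5 otherwise."""
    return 0.5 * x * x if abs(x) < 1 else abs(x) - 0.5

def giou(a, b):
    """GIoU of two boxes: IoU minus the fraction of the smallest enclosing
    box C that is not covered by the union of A and B."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    c_area = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return inter / union - (c_area - union) / c_area

def giou_loss(a, b):
    """L_iou = 1 - GIoU(A, B): zero for identical boxes, up to 2 for disjoint ones."""
    return 1.0 - giou(a, b)
```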
Training uses the Adam optimizer with β_1 = 0.9, β_2 = 0.98 and ε = 10^-9. During training, the learning rate is continuously changed according to the following formula:
lrate = d_model^(-0.5) · min(step_num^(-0.5), step_num · warmup_steps^(-1.5))
where step_num denotes the current training step and warmup_steps denotes the number of warm-up steps, here set to 4000.
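A sketch of this schedule, assuming the warm-up formula from the original Transformer paper (a reasonable assumption, since the Adam hyper-parameters given above match that paper): the rate rises linearly for warmup_steps steps, then decays as the inverse square root of the step number.

```python
def learning_rate(step_num, d_model=256, warmup_steps=4000):
    """lrate = d_model^-0.5 * min(step_num^-0.5, step_num * warmup_steps^-1.5)."""
    return d_model ** -0.5 * min(step_num ** -0.5, step_num * warmup_steps ** -1.5)
```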
A chest X-ray image to be detected is shown in fig. 3. The X-ray image is input into the anomaly detection model, which outputs the lesion areas and categories of the image; as shown in fig. 4, the method successfully detects consolidation, effusion and fibrosis of the lung in the X-ray image.
As the embodiment shows, the method for detecting abnormal lesions in chest X-ray images adopts a Transformer structure integrating context information as the feature extractor; it copes effectively with the complexity and diversity of chest X-ray lesions, achieves high detection accuracy, can distinguish multiple different types of lesion areas, and can accurately mark out the lesion areas.
The present invention has been disclosed in the preferred embodiments, but the invention is not limited thereto, and the technical solutions obtained by adopting equivalent substitution or equivalent transformation fall within the protection scope of the present invention.
Claims (7)
1. A method for detecting lesion abnormalities in chest X-ray images, characterized in that: the method is implemented on an anomaly detection model, the anomaly detection model comprises a feature extraction module, a context information extraction module, a position encoder and a Transformer network model connected in sequence, and the method comprises the following steps:
inputting the image to be detected into a feature extraction module to obtain a first feature map;
inputting the first feature map into a context information extraction module to obtain a second feature map rich in context information;
expanding the second feature map into a one-dimensional sequence, mapping the one-dimensional sequence into an embedded sequence with set dimension, and adding position coding information into the embedded sequence by using a position encoder;
inputting the position-coded sequence into the Transformer network model, and outputting a target frame of the lesion and the type of the lesion;
The anomaly detection model is obtained through training by the following method:
Acquiring an X-ray image chest lesion dataset, wherein the dataset is divided into a plurality of lesion category sub-datasets, each sub-dataset comprises a plurality of sample images under the same lesion category, and each sample image comprises coordinates of a lesion and is marked with a lesion category;
Inputting each sample image of each sub-data set into the anomaly detection model, and outputting a prediction result;
The predicted result and the true value are optimally matched by adopting a Hungary algorithm to obtain a loss function, and gradient descent is carried out according to the back propagation of the loss function, so that the anomaly detection model is obtained through training;
the loss function comprises a classification loss function and a positioning regression loss function, wherein the classification loss function adopts cross entropy loss;
the positioning regression loss function includes an IoU loss and a regression loss:
L_loc = λ_iou·L_iou + λ_reg·L_reg
wherein L_reg is a smooth L1 function of the form:
L_reg(x) = 0.5x², if |x| < 1; |x| − 0.5, otherwise
L_iou is a GIoU function of the form:
L_iou = 1 − ( |A∩B| / |A∪B| − |C\(A∪B)| / |C| )
wherein A and B represent the rectangles involved in the calculation, C represents the smallest rectangle containing both A and B, and |·| represents the area of the rectangle.
2. The method of claim 1, wherein the feature extraction module employs a ResNet network.
3. The method of claim 1, wherein inputting the first feature map into a context information extraction module to obtain a second feature map enriched in context information comprises:
Inputting the first feature map into a context information extraction module, and adding the result output by the context information extraction module with the feature map before input to obtain a new feature map;
the new feature map is subjected to pooling downsampling and then is used as input of a context information extraction module;
repeating the above steps multiple times until the feature map output by the context information extraction module fuses information from multiple layers.
4. The method of claim 1, wherein the context information extraction module comprises 2 standard convolution layers, a plurality of bottleneck structures comprising jump connections, and 1 standard convolution layer connected in sequence, each bottleneck structure comprising 1 standard convolution layer, 1 dilated convolution layer, and 1 standard convolution layer connected in sequence.
5. The method of claim 1, wherein the convolution kernel sizes of the 3 standard convolution layers in the context information extraction module are 1x1, 3x3 and 3x3 respectively, with channel numbers of 128, 128 and 512 respectively; the 2 standard convolution layers in each bottleneck structure both have 1x1 kernels and 128 channels, and the dilated convolution layer has a 3x3 kernel, a dilation rate of 2 and 256 channels.
6. The method of claim 1, wherein adding position-coding information to the embedded sequence comprises:
the position-coding information is added using sine and cosine functions of different frequencies.
7. The method of claim 1, wherein the Transformer network model comprises a Transformer encoder and a Transformer decoder that include a multi-head attention mechanism, and a multi-layer feedforward neural network.
Priority application: CN202111484958.9A, filed 2021-12-07.
Publications: CN114266735A, published 2022-04-01; CN114266735B, granted 2024-06-07.