CN112101449B - Method for inverting radar data by satellite cloud picture based on semantic loss - Google Patents
- Publication number: CN112101449B (application CN202010951615.8A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
Abstract
The invention discloses a method for inverting radar data from a satellite cloud picture based on semantic loss, comprising the following steps: S1, establishing a data set; S2, establishing a network structure, extracting features of the data set, and calculating the loss from the features of the corresponding convolutional layers; S3, iteratively training the deep convolutional neural network with the training set, verifying the trained model with the validation set after training, and saving the model with the minimum anti-blur loss on the validation set; S4, testing the model with the minimum anti-blur loss on the test set and judging whether it meets the requirement; if not, returning to step S3 for further iterative training until the test result meets the requirement, and outputting the final model; S5, inverting the radar data from the satellite cloud picture with the final model and outputting the result. The anti-blur loss function proposed by the invention solves the blurring problem that occurs when conventional networks using pixel-level loss functions are trained on such images.
Description
Technical Field
The invention relates to the field of meteorological detection, and in particular to a method for inverting radar data from a satellite cloud picture based on semantic loss.
Background
A satellite cloud picture is an image of cloud conditions received on the ground from a meteorological satellite. Although a meteorological satellite can monitor the earth in all directions, its monitoring scale is too large and its coverage too wide to effectively monitor local systems of small scale and short duration. For example, in local strong convection, a tornado that is forming and developing cannot be observed on a satellite cloud picture, but a radar can detect whether hook-shaped echoes are generated. In other words, radar data make meteorological detection more accurate. However, the observation range of radar is not all-encompassing: in western regions where radar deployment is sparse, large gaps exist between radar networks, and at sea the radar observation range is limited to offshore areas. To compensate for radar network gaps and insufficient marine observation, we attempt to invert radar data from satellite observation data.
At present this technology is not widely used. Because satellite cloud pictures and radar data differ to some extent and do not correspond exactly, a blurring phenomenon occurs when networks using pixel-level loss functions are trained on such images. To this end, we introduce a special anti-blur loss function (Contextual Loss) into the multi-layer convolutional network, which is robust to slight data misalignment. The loss function consists of two parts, L_CX(G(s), t, l_t) and L_CX(G(s), s, l_s), where the former measures the loss between the generated image and the label image, and the latter measures the loss between the generated image and the input image. The Contextual Loss plays a key role in optimizing the performance of the CNN network.
Disclosure of Invention
In view of this, the present invention aims to provide a method for inverting radar data from a satellite cloud picture based on semantic loss, mainly solving the following problem: in the process of inverting radar data from a satellite cloud picture, a blurring phenomenon occurs when images are trained with a network using a pixel-level loss function.
In order to achieve the above purpose, the invention provides the following technical scheme:
The method for inverting radar data from a satellite cloud picture based on semantic loss is characterized by comprising the following steps:
S1, establishing a data set, matching the data set, and dividing the data set into a training set, a validation set and a test set;
S2, establishing a network structure, extracting features of the data set, and calculating the loss from the features of the corresponding convolutional layers; the network structure comprises three parts: a feature extraction network, an anti-blur loss function and a deep convolutional neural network;
S3, iteratively training the deep convolutional neural network with the training set, verifying the trained model with the validation set after training, and then saving the model with the minimum anti-blur loss on the validation set;
S4, testing the model with the minimum anti-blur loss on the test set and judging whether the model meets the requirement; if not, returning to step S3 for iterative training until the test result meets the requirement, and outputting the final model;
and S5, inverting the radar data from the satellite cloud picture with the final model, and outputting the inversion result.
Preferably, the data set comprises first data and second data; the first data are satellite cloud pictures captured by the Fengyun-4 satellite at 4-minute intervals, with a picture size of 1500 × 1750;
the second data are images acquired by radar at 6-minute intervals, with a picture size of 700 × 800.
Preferably, the matching process comprises the following steps:
s101, temporally matching the first data with the second data;
s102, performing equal-longitude-latitude projection on the first data by referring to the longitude and latitude of the second data;
s103, cutting out 350 × 400 images from the first data as input images, and correspondingly cutting out 350 × 400 images from the second data as label images.
Preferably, the first data are a superposition of data acquired from a plurality of channels of the Fengyun-4 satellite, the plurality of channels including the NOMChannel09, NOMChannel10, NOMChannel11, NOMChannel12, NOMChannel13 and NOMChannel14 channels;
the training set is 5000 pairs of the first data and the second data, the validation set is 744 pairs of the first data and the second data, and the test set is 744 pairs of the first data and the second data.
Preferably, the step S2 specifically includes the following steps:
S201, expanding the first data into three dimensions, and inputting them into the deep convolutional neural network to obtain a preliminary image;
s202, copying the first data and the second data into RGB channels;
s203, sending the first data, the preliminary image and the second data to a VGG19 network for feature extraction;
The VGG19 network is a model pre-trained on ImageNet; specifically, the 16 convolutional layers of VGG19 are retained as the feature extraction part, and the 3 fully connected layers of VGG19 are removed; the convolutional layers alternate 3 × 3 convolutions with 2 × 2 pooling.
Preferably, in step S2, the expression of the anti-blur loss function is:
L(G) = L_CX(G(s), s, l_s) + λ·L_CX(G(s), t, l_t)
where G is a 17-layer CNN network, s is the first data, G(s) is the preliminary image, and t is the label of the first data; l_s is the layer from which the content features of the first data are taken, l_t is the layer from which the style features of the second data are taken, and λ = 5. The loss L_CX(G(s), t, l_t) measures the similarity between the generated image and the target image, and the loss L_CX(G(s), s, l_s) measures the similarity between the generated image and the source image.
Preferably, in step S3, the learning rate is set to 1e-4 (scientific notation for 1 × 10⁻⁴), the number of iterations is 300, and the step size is 2.
The invention has the beneficial effects that:
the present invention is robust to slight data skew by introducing a special anti-fuzzy Loss function (context Loss) in the multi-layer convolutional network. The loss function provided by the invention is based on semantics, can ignore the spatial position of the picture, and can solve the problem of incomplete alignment of data.
Drawings
FIG. 1 is a flow chart of a method for inverting radar data by satellite cloud images.
FIG. 2 is a graph showing the results of the experiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
To state the technical scheme more clearly: the key point is the design of the loss function. Its main idea is to regard a picture as a set of features and to measure the similarity between pictures by the similarity between those features. If most features of one image can find similar features in another image, the two pictures are considered similar. Because the loss function is designed on a semantic basis, it can ignore spatial position within the picture and thus cope with data that are not perfectly aligned.
Suppose the source image s and the target image t are the two images to be compared, and s_i and t_j are the features obtained after s and t, respectively, pass through VGG19. We can represent each image as a set of features, i.e., S = {s_i} and T = {t_j}. Note that we assume |S| = |T| = N here; if |S| ≠ |T|, N points are sampled from the larger set. N denotes the number of high-dimensional points (features). If, in S, most s_i can find a most similar corresponding point t_j in T, the two images are considered similar.
We first discuss the similarity between features s_i and t_j. Contextual loss is a loss function based on cosine similarity. Let d_ij denote the cosine distance between feature s_i and feature t_j:

d_ij = 1 − (s_i · t_j) / (‖s_i‖ ‖t_j‖) (1)

If d_ij ≪ d_ik for all k ≠ j, we consider feature s_i and feature t_j to be semantically similar. For ease of calculation, the cosine distance is normalized by:

d̃_ij = d_ij / (min_k d_ik + ε) (2)

where we fix ε = 1e-5. By exponentiation, we translate the cosine distances into similarities, defined as follows:

w_ij = exp((1 − d̃_ij) / h) (3)

where h > 0 is a bandwidth parameter, fixed here to h = 0.5. Finally, we use the scale invariance of the normalized similarity to define the semantic similarity between features:

CX_ij = w_ij / Σ_k w_ik (4)
image similarity is determined by the similarity between features in the imagesTo be measured. A pair of images is considered similar when most features of one image have similar features in the other image. We are the feature tjFinding the features s most similar theretoiForming a match CX between featuresijWhile contextual loss can be considered as CXijIs calculated as a weighted sum of. The above-defined method is robust to the scale of distances, i.e., CX if two features in two images are dissimilar, even if they are in corresponding locationsijWill also be low. If two features in two images are similar, even if they are not in corresponding positions, CXijIt will also be high.
Formally, the contextual similarity between images is defined as:

CX(S, T) = (1/N) Σ_j max_i CX_ij (5)

where CX_ij denotes the similarity of features s_i and t_j. When an image is compared with itself, the feature similarity values are CX_ii ≈ 1, so that CX(S, S) ≈ 1. Conversely, when the set of features of one image is completely different from that of the other, the feature similarity values approach CX_ij ≈ 1/N, so that CX(S, T) ≈ 1/N.
In summary, contextual loss is defined as:
L_CX(s, t, l) = −log(CX(φ^l(s), φ^l(t))) (6)
where φ denotes the VGG19 network, and φ^l(s) and φ^l(t) denote the feature maps of images s and t, respectively, extracted from layer l of the network φ.
We train a network G to map a given source image s into an output image G(s). Here, G is a simple 17-layer CNN. The loss L_CX(G(s), t, l_t) measures the similarity between the generated image and the target image, while the loss L_CX(G(s), s, l_s) measures the similarity between the generated image and the source image.
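As an illustrative aside (not part of the patent text), the contextual similarity and contextual loss described above can be sketched in NumPy; the feature sets are assumed to have already been extracted by VGG19 and flattened to shape (N, D):

```python
import numpy as np

def contextual_similarity(S, T, h=0.5, eps=1e-5):
    """CX(S, T) between two feature sets of shape (N, D): cosine
    distance, min-normalization with eps, exponentiation with
    bandwidth h, row normalization, then the mean of the best
    match per target feature."""
    Sn = S / (np.linalg.norm(S, axis=1, keepdims=True) + eps)
    Tn = T / (np.linalg.norm(T, axis=1, keepdims=True) + eps)
    d = 1.0 - Sn @ Tn.T                                 # cosine distances d_ij
    d_tilde = d / (d.min(axis=1, keepdims=True) + eps)  # normalized distances
    w = np.exp((1.0 - d_tilde) / h)                     # distances -> similarities
    cx_ij = w / w.sum(axis=1, keepdims=True)            # normalized similarity CX_ij
    return float(cx_ij.max(axis=0).mean())              # image-level CX(S, T)

def contextual_loss(S, T, h=0.5):
    """L_CX = -log CX(S, T), as in equation (6)."""
    return -np.log(contextual_similarity(S, T, h=h))
```

Comparing a feature set with itself yields a similarity near 1 (loss near 0), while disjoint sets yield a much lower similarity, which is exactly the behavior the text describes.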
Referring to FIG. 1, the invention provides a method for inverting radar data from a satellite cloud picture based on semantic loss, comprising the following steps:
s1, establishing a data set, matching the data set, and dividing the data set into a training set, a verification set and a test set; the data set includes first data and second data; the training of convolutional networks requires a large number of data sets as support, so the production of data sets is concerned with whether the trained model is sufficient to represent all the sample space. We used the satellite cloud maps and radar data to train the proposed method; the method comprises the following specific steps: the first data is a satellite cloud picture shot by a wind and cloud satellite IV, the shooting time interval is 4 minutes, and the picture size is 1500 × 1750; the second data is an image acquired by radar, the time interval for acquisition is 6 minutes, and the picture size is 700 × 800.
The matching process comprises the following steps:
s101, temporally matching the first data with the second data;
s102, performing equal-longitude-latitude projection on the first data by referring to the longitude and latitude of the second data;
and S103, cutting 350 × 400 images from the first data to be used as input images, and correspondingly cutting 350 × 400 images from the second data to be used as label images.
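The matching procedure of steps S101–S103 might be sketched as follows; `match_pairs` and `crop_window` are hypothetical helper names, and step S102 is assumed to have already placed both images on a shared equal-latitude-longitude grid:

```python
from datetime import timedelta

def match_pairs(sat_times, radar_times, tolerance=timedelta(minutes=2)):
    """S101: pair each radar frame (6-minute cadence) with the nearest
    satellite frame (4-minute cadence) within `tolerance`."""
    pairs = []
    for rt in radar_times:
        nearest = min(sat_times, key=lambda st: abs(st - rt))
        if abs(nearest - rt) <= tolerance:
            pairs.append((nearest, rt))
    return pairs

def crop_window(sat_img, radar_img, top, left, h=350, w=400):
    """S103: cut matching 350 x 400 windows -- the satellite crop as
    the input image, the radar crop as the label image. `top`/`left`
    locate the window on the shared projected grid (assumed given)."""
    return (sat_img[top:top + h, left:left + w],
            radar_img[top:top + h, left:left + w])
```

With a 4-minute and a 6-minute cadence, every radar frame falls within 2 minutes of some satellite frame, so every radar frame finds a partner.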
The first data are a superposition of data acquired from a plurality of channels of the Fengyun-4 satellite, the plurality of channels including the NOMChannel09, NOMChannel10, NOMChannel11, NOMChannel12, NOMChannel13 and NOMChannel14 channels;
the training set is 5000 pairs of first data and second data, the validation set is 744 pairs of first data and second data, and the test set is 744 pairs of first data and second data.
S2, establishing a network structure, extracting features of the data set, and calculating the loss from the features of the corresponding convolutional layers; the network structure comprises three parts: a feature extraction network, an anti-blur loss function and a deep convolutional neural network.
the method specifically comprises the following steps:
s201, expanding the first data into three dimensions, and inputting the three dimensions into a deep convolutional neural network to obtain a preliminary image, wherein the deep convolutional neural network is a 17-layer CNN network;
s202, copying the first data and the second data into an RGB channel;
s203, sending the first data, the preliminary image and the second data to a VGG19 network for feature extraction;
The VGG19 network is a model pre-trained on ImageNet; specifically, the 16 convolutional layers of VGG19 are retained as the feature extraction part, and the 3 fully connected layers of VGG19 are removed; the convolutional layers alternate 3 × 3 convolutions with 2 × 2 pooling.
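As a toy illustration (not the pretrained VGG19 itself), the channel duplication of step S202 and the 3 × 3 convolution / 2 × 2 pooling pattern of the feature extractor can be sketched in plain NumPy:

```python
import numpy as np

def to_rgb(gray):
    """S202: duplicate a single-channel image into three identical
    channels so it matches VGG19's RGB input."""
    return np.repeat(gray[..., np.newaxis], 3, axis=-1)

def conv3x3(x, k):
    """Valid 3 x 3 convolution of a 2-D map with kernel k -- the
    building block VGG19 alternates with pooling."""
    h, w = x.shape[0] - 2, x.shape[1] - 2
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * k)
    return out

def maxpool2x2(x):
    """2 x 2 max pooling with stride 2 (trailing odd row/col dropped)."""
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:2 * h, :2 * w].reshape(h, 2, w, 2).max(axis=(1, 3))
```

In practice the pretrained network (e.g. via a deep-learning framework) would replace these loops; the sketch only shows the shape transformations involved.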
In step S2, the expression of the anti-blur loss function is:
L(G) = L_CX(G(s), s, l_s) + λ·L_CX(G(s), t, l_t)
where G is a 17-layer CNN network, s is the first data, G(s) is the preliminary image, and t is the label of the first data; l_s is the layer from which the content features of the first data are taken, l_t is the layer from which the style features of the second data are taken, and λ = 5. The loss L_CX(G(s), t, l_t) measures the similarity between the generated image and the target image, and the loss L_CX(G(s), s, l_s) measures the similarity between the generated image and the source image.
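Given the two contextual similarity values, the anti-blur loss combines them as a weighted sum with λ = 5; a minimal sketch, where the function name and similarity inputs are illustrative:

```python
import numpy as np

def anti_blur_loss(cx_content, cx_style, lam=5.0):
    """L(G) = L_CX(G(s), s, l_s) + lam * L_CX(G(s), t, l_t), written in
    terms of the two contextual similarities CX in (0, 1]:
    cx_content compares G(s) with the source s at layer l_s,
    cx_style compares G(s) with the label t at layer l_t."""
    return -np.log(cx_content) + lam * (-np.log(cx_style))
```

Because each term is −log of a similarity in (0, 1], the loss is zero only when both similarities are perfect, and the λ factor weights agreement with the radar label five times more heavily.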
S3, iteratively training the deep convolutional neural network with the training set, verifying the trained model with the validation set after training, and then saving the model with the minimum anti-blur loss on the validation set. The learning rate is set to 1e-4 (scientific notation for 1 × 10⁻⁴), the number of iterations is 300, and the step size is 2.
The specific training environment is a Python environment configured with the TensorFlow library; the GPU is a GeForce GTX 1080 Ti; the optimizer is Adam; the activation function is ReLU.
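Steps S3 and S4 amount to a checkpoint-selection loop; a framework-agnostic skeleton is sketched below, where `train_one_iteration` and `validate` are hypothetical stand-ins for the actual TensorFlow/Adam training step (learning rate 1e-4) and validation pass:

```python
def train_and_select(train_one_iteration, validate, iterations=300):
    """S3: run the training iterations and keep the model state with
    the smallest anti-blur (contextual) loss on the validation set.
    `train_one_iteration()` returns the current model state;
    `validate(state)` returns its validation loss. Both are assumed
    stand-ins for the real TensorFlow training and evaluation."""
    best_loss, best_state = float("inf"), None
    for _ in range(iterations):
        state = train_one_iteration()
        loss = validate(state)
        if loss < best_loss:
            best_loss, best_state = loss, state
    return best_state, best_loss
```

The state returned here is the candidate that step S4 then evaluates on the test set before it is accepted as the final model.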
And S4, testing the model with the minimum anti-blur loss on the test set and judging whether the model meets the requirement; if not, returning to step S3 for iterative training until the test result meets the requirement, and outputting the final model.
And S5, inverting the radar data from the satellite cloud picture with the final model, and outputting the inversion result.
Referring to FIG. 2, column a represents the input images, column b shows the output images obtained after training the CNN network, and column c shows the label images obtained from the radar data.
Details of the present invention not described herein are well known to those skilled in the art.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions that can be obtained by a person skilled in the art through logical analysis, reasoning or limited experiments based on the prior art according to the concepts of the present invention should be within the scope of protection determined by the claims.
Claims (4)
1. A method for inverting radar data from a satellite cloud picture based on semantic loss, characterized by comprising the following steps:
S1, establishing a data set, matching the data set, and dividing the data set into a training set, a validation set and a test set;
the data set comprises first data and second data, the first data being satellite cloud pictures captured by the Fengyun-4 satellite at 4-minute intervals with a picture size of 1500 × 1750;
the second data being images acquired by radar at 6-minute intervals with a picture size of 700 × 800;
the matching process comprises the following steps:
s101, temporally matching the first data with the second data;
s102, performing equal-longitude-latitude projection on the first data by referring to the longitude and latitude of the second data;
s103, cutting 350 x 400 images from the first data to serve as input images, and correspondingly cutting 350 x 400 images from the second data to serve as label images;
the first data are a superposition of data acquired from a plurality of channels of the Fengyun-4 satellite, the plurality of channels including the NOMChannel09, NOMChannel10, NOMChannel11, NOMChannel12, NOMChannel13 and NOMChannel14 channels;
the training set is 5000 pairs of the first data and the second data, the validation set is 744 pairs of the first data and the second data, and the test set is 744 pairs of the first data and the second data;
S2, establishing a network structure, extracting features of the data set, and calculating the loss from the features of the corresponding convolutional layers; the network structure comprises three parts: a feature extraction network, an anti-blur loss function and a deep convolutional neural network;
S3, iteratively training the deep convolutional neural network with the training set, verifying the trained model with the validation set after training, and then saving the model with the minimum anti-blur loss on the validation set;
S4, testing the model with the minimum anti-blur loss on the test set and judging whether the model meets the requirement; if not, returning to step S3 for iterative training until the test result meets the requirement, and outputting the final model;
and S5, inverting the radar data from the satellite cloud picture with the final model, and outputting the inversion result.
2. The method for inverting radar data from a satellite cloud picture based on semantic loss according to claim 1, wherein the step S2 specifically comprises the following steps:
S201, expanding the first data into three dimensions, and inputting them into the deep convolutional neural network to obtain a preliminary image;
s202, copying the first data and the second data into an RGB channel;
s203, sending the first data, the preliminary image and the second data to a VGG19 network for feature extraction;
The VGG19 network is a model pre-trained on ImageNet; specifically, the 16 convolutional layers of VGG19 are retained as the feature extraction part, and the 3 fully connected layers of VGG19 are removed; the convolutional layers alternate 3 × 3 convolutions with 2 × 2 pooling.
3. The method for inverting radar data from a satellite cloud picture based on semantic loss according to claim 2, wherein in the step S2, the expression of the anti-blur loss function is:
L(G) = L_CX(G(s), s, l_s) + λ·L_CX(G(s), t, l_t)
where G is a 17-layer CNN network, s is the first data, G(s) is the preliminary image, and t is the label of the first data; l_s is the layer from which the content features of the first data are taken, l_t is the layer from which the style features of the second data are taken, and λ = 5; the loss L_CX(G(s), t, l_t) measures the similarity between the generated image and the target image, and the loss L_CX(G(s), s, l_s) measures the similarity between the generated image and the source image.
4. The method for inverting radar data from a satellite cloud picture based on semantic loss according to claim 3, wherein in the step S3, the learning rate is set to 1e-4, the number of iterations is 300, and the step size is 2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010951615.8A CN112101449B (en) | 2020-09-11 | 2020-09-11 | Method for inverting radar data by satellite cloud picture based on semantic loss |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010951615.8A CN112101449B (en) | 2020-09-11 | 2020-09-11 | Method for inverting radar data by satellite cloud picture based on semantic loss |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112101449A CN112101449A (en) | 2020-12-18 |
CN112101449B true CN112101449B (en) | 2022-07-15 |
Family
ID=73750931
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010951615.8A Active CN112101449B (en) | 2020-09-11 | 2020-09-11 | Method for inverting radar data by satellite cloud picture based on semantic loss |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112101449B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108510456A (en) * | 2018-03-27 | 2018-09-07 | 华南理工大学 | The sketch of depth convolutional neural networks based on perception loss simplifies method |
CN109543502A (en) * | 2018-09-27 | 2019-03-29 | 天津大学 | A kind of semantic segmentation method based on the multiple dimensioned neural network of depth |
CN110188720A (en) * | 2019-06-05 | 2019-08-30 | 上海云绅智能科技有限公司 | A kind of object detection method and system based on convolutional neural networks |
-
2020
- 2020-09-11 CN CN202010951615.8A patent/CN112101449B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108510456A (en) * | 2018-03-27 | 2018-09-07 | 华南理工大学 | The sketch of depth convolutional neural networks based on perception loss simplifies method |
CN109543502A (en) * | 2018-09-27 | 2019-03-29 | 天津大学 | A kind of semantic segmentation method based on the multiple dimensioned neural network of depth |
CN110188720A (en) * | 2019-06-05 | 2019-08-30 | 上海云绅智能科技有限公司 | A kind of object detection method and system based on convolutional neural networks |
Non-Patent Citations (1)
Title |
---|
Multi-scale adversarial network image semantic segmentation algorithm based on weighted loss function; Zhang Hongzhao et al.; Computer Applications and Software (《计算机应用与软件》); 2020-01-12 (Issue 01); full text *
Also Published As
Publication number | Publication date |
---|---|
CN112101449A (en) | 2020-12-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108288067B (en) | Training method of image text matching model, bidirectional search method and related device | |
CN111783419B (en) | Address similarity calculation method, device, equipment and storage medium | |
EP3819790A2 (en) | Method and apparatus for visual question answering, computer device and medium | |
CN106557563B (en) | Query statement recommendation method and device based on artificial intelligence | |
CN110781413B (en) | Method and device for determining interest points, storage medium and electronic equipment | |
CN109145085B (en) | Semantic similarity calculation method and system | |
CN115457531A (en) | Method and device for recognizing text | |
WO2023115790A1 (en) | Chemical structure image extraction method and apparatus, storage medium, and electronic device | |
JP2017199149A (en) | Learning device, learning method, and learning program | |
CN113407814A (en) | Text search method and device, readable medium and electronic equipment | |
CN113610097A (en) | SAR ship target segmentation method based on multi-scale similarity guide network | |
CN113065409A (en) | Unsupervised pedestrian re-identification method based on camera distribution difference alignment constraint | |
CN115658934A (en) | Image-text cross-modal retrieval method based on multi-class attention mechanism | |
CN109635810B (en) | Method, device and equipment for determining text information and storage medium | |
CN113408663B (en) | Fusion model construction method, fusion model using device and electronic equipment | |
CN111523586A (en) | Noise-aware-based full-network supervision target detection method | |
CN109033318B (en) | Intelligent question and answer method and device | |
Ke et al. | Haze removal from a single remote sensing image based on a fully convolutional neural network | |
CN114328952A (en) | Knowledge graph alignment method, device and equipment based on knowledge distillation | |
CN112101449B (en) | Method for inverting radar data by satellite cloud picture based on semantic loss | |
Zhao et al. | Mosaic method of side‐scan sonar strip images using corresponding features | |
CN116612500B (en) | Pedestrian re-recognition model training method and device | |
Ma et al. | Retrieval term prediction using deep belief networks | |
Li et al. | Contrastive Deep Nonnegative Matrix Factorization for Community Detection | |
CN114610938A (en) | Remote sensing image retrieval method and device, electronic equipment and computer readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||