CN110097110B - Semantic image restoration method based on target optimization - Google Patents


Info

Publication number
CN110097110B
CN110097110B (application CN201910341570.XA)
Authority
CN
China
Prior art keywords
image
repairing
result
semantic
target
Prior art date
Legal status
Expired - Fee Related
Application number
CN201910341570.XA
Other languages
Chinese (zh)
Other versions
CN110097110A (en)
Inventor
郭炜强
徐绍栋
张宇
郑波
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201910341570.XA priority Critical patent/CN110097110B/en
Publication of CN110097110A publication Critical patent/CN110097110A/en
Application granted granted Critical
Publication of CN110097110B publication Critical patent/CN110097110B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Abstract

The invention discloses a semantic image restoration method based on target optimization, which takes network structure optimization and restoration process optimization as its two main targets. For network structure optimization, the channel connection layer in the Context-Encoder is removed, parallel dilated convolution layers are added, and a loss function consistent with human perception is set, so that the whole network captures as many image semantic features as possible while retaining their spatial relationships. For restoration process optimization, specific targets in the image to be restored are captured by an image semantic segmentation network, and a restoration optimization operation is applied to each captured target while the whole image is restored with a generalized model, giving the restoration result higher reliability and accuracy. The method both preserves the spatial information of the image and performs restoration optimization for specific targets, thereby effectively alleviating the semantic confusion present in conventional restoration methods.

Description

Semantic image restoration method based on target optimization
Technical Field
The invention relates to the technical field of digital image processing, in particular to a semantic image restoration method based on target optimization.
Background
Convolutional neural networks are built on artificial neural networks. An artificial neural network simulates the human nervous system and consists of a number of neurons. In a supervised learning problem there is a set of training data (x_i, y_i), where x_i is a sample and y_i is its class label; feeding the samples and labels into an artificial neural network yields a nonlinear classification hyperplane h_{w,b}(x), by which all input image data can be classified.
A neuron is the arithmetic unit of a neural network; in essence it is a function. Fig. 1 is a schematic diagram of a neuron with 3 inputs x1, x2, x3 and a bias input +1. Its output is

h_{w,b}(x) = f(w1·x1 + w2·x2 + w3·x3 + b)

where f is the activation function, wi is the weight of input xi, and b is the bias value. The activation function used here is the sigmoid function:

f(z) = 1 / (1 + e^(−z))
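As an illustration (not part of the patent), the neuron computation above can be sketched in Python, using the sigmoid activation just defined:

```python
import math

def sigmoid(z):
    # activation function: f(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + math.exp(-z))

def neuron(x, w, b):
    # h_{w,b}(x) = f(w1*x1 + w2*x2 + w3*x3 + b)
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# a neuron with 3 inputs, illustrative weights and bias
out = neuron([1.0, 2.0, 3.0], [0.1, 0.2, -0.1], 0.5)
```

Because the sigmoid squashes its argument into (0, 1), the output is always a value in that interval, which is what makes it usable as a soft classification score.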
the artificial neural network is composed of a plurality of the above neurons, as shown in fig. 2, which is a schematic diagram of a small artificial neural network:
in the convolutional neural network in the figure, the input is an image, the weight w is a convolutional template,
Figure BDA0002040878870000013
Figure BDA0002040878870000014
the weights of different neurons are generally convolution layers and downsampling layers which are alternated, and finally a fully connected neural network, namely the classical artificial neural network. As shown in fig. 3, a simple convolutional neural network diagram is shown:
in the figure, the convolutional layer extracts information from an image, the pooling layer is used to increase the field of view of the image, and the fully-connected layer maps the intermediate layer to an output of a specific dimension.
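As a toy illustration of how a pooling layer enlarges the effective field of view (not part of the patent), a 2×2 max pooling can be written in a few lines of NumPy:

```python
import numpy as np

def max_pool2x2(x):
    # 2x2 max pooling with stride 2: each output pixel summarises a 2x2 input
    # block, so later layers see a larger area of the original image
    H, W = x.shape
    return x[:H - H % 2, :W - W % 2].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

x = np.array([[1., 2., 5., 0.],
              [3., 4., 1., 2.],
              [0., 1., 7., 8.],
              [2., 6., 3., 9.]])
pooled = max_pool2x2(x)
# pooled == [[4., 5.], [6., 9.]]
```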
The Context-Encoder is a neural network and was the earliest deep learning network for image restoration; its main contribution was introducing the adversarial idea of generative adversarial networks into the image restoration field. Its main structure comprises an encoder and a decoder with largely symmetric architectures, using convolutional layers throughout. To date, Context-Encoders are still widely used in image inpainting research.
The channel connection layer in the Context-Encoder network structure loses the spatial information of the image, and the network cannot repair a specific target in the image independently; these two factors cause local blurring in the repair result. In view of these problems, the present invention provides a solution better suited to image restoration.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a semantic image restoration method based on target optimization, which performs optimized restoration for specific targets in an image and effectively alleviates the semantic confusion that common restoration algorithms exhibit under certain conditions.
In order to achieve the above purpose, the technical scheme provided by the invention is as follows: a semantic image restoration method based on target optimization, which takes network structure optimization and restoration process optimization as its main targets. For network structure optimization, the channel connection layer in the Context-Encoder is removed, parallel dilated convolution layers are added, and a loss function consistent with human perception is set, so that the whole network captures as many image semantic features as possible while retaining their spatial relationships. For restoration process optimization, specific targets in the image to be restored are captured by an image semantic segmentation network, and a restoration optimization operation is applied to each captured target while the whole image is restored with a generalized model, so that the restoration result has higher reliability and accuracy. The method comprises the following steps:
1) preprocessing the input image and mask by linear interpolation so that all images meet the network input requirements;
2) combining the input image with the mask to obtain a missing image;
3) performing a first repair operation on the missing image to obtain a first repair result;
4) segmenting the first repair result with a semantic segmentation network to obtain the specific targets in the image;
5) separating each specific target from the image and repairing it with a dedicated network to obtain a second repair result;
6) fusing the first repair result and the second repair result at the target region to obtain the final repair result.
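The six steps above can be sketched as a minimal Python pipeline. The helper names and the toy stand-in models below are illustrative assumptions, not the patent's actual networks:

```python
import numpy as np

def combine(image, mask):
    # step 2): keep pixels where the mask is black (0); fill masked (1) pixels
    # with the image's average pixel value
    out = image.astype(float).copy()
    out[mask == 1] = image.mean()
    return out

def run_pipeline(image, mask, first_repair, segment, target_repair):
    # steps 3)-6): first repair, target segmentation, per-target repair, fusion
    first = first_repair(combine(image, mask))
    result = first.copy()
    for (r0, r1, c0, c1) in segment(first):         # each detected target region
        patch = target_repair(first[r0:r1, c0:c1])  # step 5): crop + repair
        result[r0:r1, c0:c1] = patch                # step 6): fuse by replacement
    return result

# toy stand-ins for the three networks
img = np.arange(16.0).reshape(4, 4)
msk = np.zeros((4, 4), dtype=int)
msk[1:3, 1:3] = 1
out = run_pipeline(img, msk,
                   first_repair=lambda x: x,
                   segment=lambda x: [(0, 2, 0, 2)],
                   target_repair=lambda x: x)
```

In the real method the three lambdas would be the improved Context-Encoder, DeepLabV2, and the target-specific repair network respectively.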
In step 1), an RGB image of arbitrary size is resized to 256 × 256 using bilinear interpolation, whose core idea is to interpolate linearly along two directions. The operation proceeds as follows:

Suppose four points (x0, y0), (x0, y1), (x1, y0), (x1, y1) exist on the image matrix, with corresponding values f(x0, y0), f(x0, y1), f(x1, y0), f(x1, y1).

Linear interpolation along the y-axis at abscissa x0:

Z1 = f(x0, y0) + v · (f(x0, y1) − f(x0, y0))

where Z1 is the interpolation result and v is the normalized distance from the result point to (x0, y0) along the y-axis;

Linear interpolation along the y-axis at abscissa x1:

Z2 = f(x1, y0) + v · (f(x1, y1) − f(x1, y0))

where Z2 is the interpolation result and v is the normalized distance from the result point to (x1, y0) along the y-axis;

Linear interpolation along the x-axis:

Z = Z1 + u · (Z2 − Z1)

where Z is the final interpolation result and u is the normalized distance from the result point to x0 along the x-axis.
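As an illustration only, the interpolation formulas above translate directly into code; `bilinear` below assumes u and v are normalized to [0, 1]:

```python
def bilinear(f00, f01, f10, f11, u, v):
    # z1, z2: interpolate along the y-axis at x0 and x1; u, v in [0, 1] are the
    # normalized distances along the x- and y-axes
    z1 = f00 + v * (f01 - f00)   # Z1 at abscissa x0
    z2 = f10 + v * (f11 - f10)   # Z2 at abscissa x1
    return z1 + u * (z2 - z1)    # final interpolation along the x-axis

# interpolating at the centre of a cell with corner values 10, 20, 30, 40
centre = bilinear(10.0, 20.0, 30.0, 40.0, 0.5, 0.5)
# centre == 25.0, the average of the four corners
```

Resizing an image then amounts to evaluating `bilinear` once per output pixel with the four nearest source pixels as corners.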
In step 2), combining the image with the mask is a pixel-level operation: pixel values at image positions corresponding to black areas of the mask are retained, while positions corresponding to white areas of the mask are filled with the average pixel value of the image.
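A minimal sketch of this combining operation, assuming a single-channel image and a 0/255 mask (names are illustrative):

```python
import numpy as np

def combine_with_mask(image, mask):
    # black (0) mask pixels keep the original value; white (255) mask pixels
    # are filled with the image's average pixel value, as described in step 2)
    out = image.astype(float).copy()
    out[mask == 255] = image.mean()
    return out

img = np.array([[10.0, 20.0], [30.0, 40.0]])
msk = np.array([[0, 255], [0, 0]])
missing = combine_with_mask(img, msk)
# missing[0, 1] is replaced by the image mean, 25.0
```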
In step 3), the first repair operation is performed with an improved Context-Encoder structure. The specific improvement is as follows: the ordinary convolutions of the middle three layers of the encoder are replaced with parallel dilated convolutions, and different dilation rates are set for the parallel branches so that semantic features at different levels can be captured.
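For illustration, a plain NumPy sketch of a single-channel dilated convolution and three parallel branches; this is a toy stand-in, not the patent's encoder:

```python
import numpy as np

def dilated_conv2d(x, k, rate):
    # "valid" 2-D dilated convolution: kernel taps are spaced `rate` pixels
    # apart, so a 3x3 kernel covers a (2*rate+1)-pixel-wide receptive field
    kh, kw = k.shape
    eh, ew = (kh - 1) * rate + 1, (kw - 1) * rate + 1  # effective kernel size
    H, W = x.shape
    out = np.empty((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + eh:rate, j:j + ew:rate] * k).sum()
    return out

x = np.ones((8, 8))
k = np.ones((3, 3))
# parallel branches with different dilation rates, applied to the same input
branches = [dilated_conv2d(x, k, r) for r in (1, 2, 3)]
```

Each branch uses the same 3 × 3 kernel but sees a progressively larger neighbourhood, which is what lets the parallel structure capture features at different scales.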
In step 4), the specific-target acquisition operation is performed with the mature semantic segmentation network DeepLabV2. The segmentation result contains the positions and areas of the different targets in the image; the target categories must be specified during model training.
In step 5), the separation operation is performed by pixel-level cropping, and the repair operation uses the improved Context-Encoder structure; the data set used for model training is a data set of the same class as the specific target. The improvement to the Context-Encoder is the same as in step 3): the ordinary convolutions of the middle three layers of the encoder are replaced with parallel dilated convolutions with different dilation rates, so that semantic features at different levels can be captured.
In step 6), the fusion operation is implemented by replacing the content at the corresponding position of the whole-image repair result with the specific-target repair result.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. Through dilated convolution, the network can perceive image information over a larger range without information loss.
2. Different dilation rates in the parallel structure capture semantic features of different dimensions of the image, which benefits the repair process.
3. The channel connection layer is removed, preserving the image spatial information in the feature maps.
4. A perceptual loss function is added, making the feature extraction process more reliable.
5. Specific targets in the image to be repaired are optimized, alleviating the semantic confusion of conventional repair algorithms.
Drawings
FIG. 1 is a diagram of a neuron in the background art.
FIG. 2 is a diagram of a small artificial neural network in the background art.
Fig. 3 is a schematic diagram of a simple convolutional neural network in the background art.
Fig. 4 is a schematic diagram of a network structure used in the method of the present invention.
FIG. 5 is a flow chart of the method of the present invention.
FIG. 6 is a schematic diagram of a target optimization process.
FIG. 7 shows input and output cases of the method of the present invention.
Detailed Description
The present invention will be further described with reference to the following specific examples.
Before introducing the present invention, the Context-Encoder network structure needs to be introduced. It includes convolutional layers, a channel connection layer, and deconvolution layers. The convolution kernels are mainly 3 × 3, which preserves a local receptive field while effectively reducing the number of network parameters.
Fig. 4 shows the network structure used in the method of the present invention. It differs from the Context-Encoder in that the channel connection layer is removed and parallel dilated convolution layers are added. The channel connection layer is removed because it disturbs the spatial information of the image; dilated convolution is added because it lets the network perceive image information over a larger range without causing information loss; and a parallel structure is used because setting different dilation rates in parallel captures features of different dimensions of the image.
The semantic image restoration method based on target optimization provided by this embodiment takes optimization of the network structure and of the restoration process as its main targets. For network structure optimization, the channel connection layer in the Context-Encoder is removed, parallel dilated convolution layers are added, and a loss function consistent with human perception is set, so that the whole network captures as many image semantic features as possible while retaining their spatial relationships. For restoration process optimization, specific targets in the image to be repaired are captured by the image semantic segmentation network DeepLabV2, and a repair optimization operation is applied to each captured target while the whole image is repaired with a generalized model, giving the repair result higher reliability and accuracy. The method comprises the following steps:
1) The input image is preprocessed by linear interpolation so that all images meet the network input requirements: an RGB image of arbitrary size is resized to 256 × 256 by bilinear interpolation, whose core idea is to interpolate linearly along two directions. The operation proceeds as follows:

Suppose four points (x0, y0), (x0, y1), (x1, y0), (x1, y1) exist on the image matrix, with corresponding values f(x0, y0), f(x0, y1), f(x1, y0), f(x1, y1).

Linear interpolation along the y-axis at abscissa x0:

Z1 = f(x0, y0) + v · (f(x0, y1) − f(x0, y0))

where Z1 is the interpolation result and v is the normalized distance from the result point to (x0, y0) along the y-axis.

Linear interpolation along the y-axis at abscissa x1:

Z2 = f(x1, y0) + v · (f(x1, y1) − f(x1, y0))

where Z2 is the interpolation result and v is the normalized distance from the result point to (x1, y0) along the y-axis.

Linear interpolation along the x-axis:

Z = Z1 + u · (Z2 − Z1)

where Z is the final interpolation result, Z1 and Z2 are the results above, and u is the normalized distance from the result point to x0 along the x-axis.
2) A pixel-level combining operation is performed on the image and the mask: pixel values at image positions corresponding to black areas of the mask are retained, while positions corresponding to white areas are filled with the average pixel value of the image.
3) The missing image obtained in the previous step is repaired with the improved Context-Encoder structure. The improvement consists of replacing the ordinary convolutions of the middle three layers of the encoder with parallel dilated convolutions, with different dilation rates set for the parallel branches so that semantic features at different levels can be captured.
4) The repair result obtained in the previous step is segmented with the mature semantic segmentation network DeepLabV2. The segmentation result contains the positions and areas of the different targets in the image; the target categories must be specified during model training.
5) The targets obtained in the previous step are separated by pixel-level cropping, and each separated target image is repaired. The repair operation uses the improved Context-Encoder structure, and the data set used for model training is a data set of the same class as the specific target.
6) The specific-target repair result is fused with the whole-image repair result by replacing the content at the corresponding position of the whole-image repair result with the specific-target repair result.
The specific operation flow is shown in fig. 5. First, a pixel-level operation is performed on the input image and the mask image to obtain the missing image; the missing image is then repaired for the first time, and semantic segmentation of the repair result yields the position of the specific target in the missing image. Next, a mask of the corresponding area is extracted according to the target position, the target is repaired with the dedicated repair network, and the target repair result is spliced with the first repair result to obtain the final repair result. The branches and networks used in the repair process are shown in fig. 6.
The input and output image effects are shown in fig. 7.
In summary, the essence of the invention is a new network for image restoration obtained in the following five ways:
1) through dilated convolution, the network perceives image information over a larger range without causing information loss;
2) different dilation rates in the parallel structure capture semantic features of different dimensions of the image, which benefits the repair process;
3) the channel connection layer is removed, preserving the image spatial information in the feature maps;
4) a perceptual loss function is added, making the feature extraction process more reliable;
5) specific targets in the image to be repaired are optimized, alleviating the semantic confusion of conventional repair algorithms.
The above-mentioned embodiments are merely preferred embodiments of the present invention, and the scope of the present invention is not limited thereto; any variation made according to the shapes and principles of the present invention shall be covered within the scope of protection of the present invention.

Claims (3)

1. A semantic image restoration method based on target optimization, characterized in that the method takes network structure optimization and restoration process optimization as its main targets; for network structure optimization, the channel connection layer in the Context-Encoder is removed, parallel dilated convolution layers are added, and a loss function consistent with human perception is set, so that the whole network captures as many image semantic features as possible while retaining their spatial relationships; for restoration process optimization, specific targets in the image to be repaired are captured by an image semantic segmentation network, and a repair optimization operation is applied to each captured target while the whole image is repaired with a generalized model, so that the repair result has higher reliability and accuracy; the method comprises the following steps:
1) preprocessing the input image and mask by linear interpolation so that all images meet the network input requirements;
2) combining the input image with the mask to obtain a missing image;
3) performing a first repair operation on the missing image to obtain a first repair result, wherein the first repair operation is performed with an improved Context-Encoder structure, the specific improvement being that the ordinary convolutions of the middle three layers of the encoder are replaced with parallel dilated convolutions, with different dilation rates set for the parallel branches so that semantic features at different levels can be captured;
4) segmenting the first repair result with a semantic segmentation network to obtain the specific targets in the image, wherein the specific-target acquisition operation is performed with the semantic segmentation network DeepLabV2, the segmentation result contains the positions and areas of the different targets in the image, and the target categories must be specified during model training;
5) separating each specific target from the image and repairing it with a dedicated network to obtain a second repair result, wherein the separation operation is performed by pixel-level cropping, the repair operation uses the improved Context-Encoder structure with the same improvement as in step 3), and the data set used for model training is a data set of the same class as the specific target;
6) fusing the first repair result and the second repair result at the target region to obtain the final repair result, wherein the fusion operation is implemented by replacing the content at the corresponding position of the whole-image repair result with the specific-target repair result.
2. The semantic image restoration method based on target optimization according to claim 1, characterized in that: in step 1), an RGB image of arbitrary size is resized to 256 × 256 using bilinear interpolation, whose core idea is to interpolate linearly along two directions; the operation proceeds as follows:

suppose four points (x0, y0), (x0, y1), (x1, y0), (x1, y1) exist on the image matrix, with corresponding values f(x0, y0), f(x0, y1), f(x1, y0), f(x1, y1);

linear interpolation along the y-axis at abscissa x0:

Z1 = f(x0, y0) + v · (f(x0, y1) − f(x0, y0))

where Z1 is the interpolation result and v is the normalized distance from the result point to (x0, y0) along the y-axis;

linear interpolation along the y-axis at abscissa x1:

Z2 = f(x1, y0) + v · (f(x1, y1) − f(x1, y0))

where Z2 is the interpolation result and v is the normalized distance from the result point to (x1, y0) along the y-axis;

linear interpolation along the x-axis:

Z = Z1 + u · (Z2 − Z1)

where Z is the final interpolation result and u is the normalized distance from the result point to x0 along the x-axis.
3. The semantic image restoration method based on target optimization according to claim 1, characterized in that: in step 2), combining the image with the mask is a pixel-level operation in which pixel values at image positions corresponding to black areas of the mask are retained, while positions corresponding to white areas of the mask are filled with the average pixel value of the image.
CN201910341570.XA 2019-04-26 2019-04-26 Semantic image restoration method based on target optimization Expired - Fee Related CN110097110B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910341570.XA CN110097110B (en) 2019-04-26 2019-04-26 Semantic image restoration method based on target optimization


Publications (2)

Publication Number Publication Date
CN110097110A CN110097110A (en) 2019-08-06
CN110097110B true CN110097110B (en) 2021-07-20

Family

ID=67445970

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910341570.XA Expired - Fee Related CN110097110B (en) 2019-04-26 2019-04-26 Semantic image restoration method based on target optimization

Country Status (1)

Country Link
CN (1) CN110097110B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111062877A (en) * 2019-08-23 2020-04-24 平安科技(深圳)有限公司 Image filling method and device for edge learning, terminal and readable storage medium
CN113344832A (en) * 2021-05-28 2021-09-03 杭州睿胜软件有限公司 Image processing method and device, electronic equipment and storage medium
CN113298734B (en) * 2021-06-22 2022-05-06 云南大学 Image restoration method and system based on mixed hole convolution
CN113538273B (en) * 2021-07-13 2023-09-19 荣耀终端有限公司 Image processing method and image processing apparatus

Citations (3)

Publication number Priority date Publication date Assignee Title
WO2018078213A1 (en) * 2016-10-27 2018-05-03 Nokia Technologies Oy A method for analysing media content
CN108629789A (en) * 2018-05-14 2018-10-09 华南理工大学 A kind of well-marked target detection method based on VggNet
CN108985269A (en) * 2018-08-16 2018-12-11 东南大学 Converged network driving environment sensor model based on convolution sum cavity convolutional coding structure

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN109598754B (en) * 2018-09-29 2020-03-17 天津大学 Binocular depth estimation method based on depth convolution network


Non-Patent Citations (2)

Title
"Globally and Lolly Consistent Image Completion";SATOSHI IIZUKA etc.;《ACM Transactions on Graphics》;20170730;第36卷(第4期);论文第3节 *
"基于空洞卷积的快速背景自动更换";张浩等;《计算机应用》;20180210;第38卷(第2期);论文摘要,第2.2.2节 *

Also Published As

Publication number Publication date
CN110097110A (en) 2019-08-06

Similar Documents

Publication Publication Date Title
CN110097110B (en) Semantic image restoration method based on target optimization
CN109299274B (en) Natural scene text detection method based on full convolution neural network
CN111612807B (en) Small target image segmentation method based on scale and edge information
CN111340122B (en) Multi-modal feature fusion text-guided image restoration method
CN108510451B (en) Method for reconstructing license plate based on double-layer convolutional neural network
CN112036260B (en) Expression recognition method and system for multi-scale sub-block aggregation in natural environment
CN110276389B (en) Mine mobile inspection image reconstruction method based on edge correction
CN110969171A (en) Image classification model, method and application based on improved convolutional neural network
CN112950477A (en) High-resolution saliency target detection method based on dual-path processing
CN113255837A (en) Improved CenterNet network-based target detection method in industrial environment
CN114820579A (en) Semantic segmentation based image composite defect detection method and system
CN113780132A (en) Lane line detection method based on convolutional neural network
CN113160062A (en) Infrared image target detection method, device, equipment and storage medium
CN111652231B (en) Casting defect semantic segmentation method based on feature self-adaptive selection
CN111178121A (en) Pest image positioning and identifying method based on spatial feature and depth feature enhancement technology
CN111754507A (en) Light-weight industrial defect image classification method based on strong attention machine mechanism
CN115019340A (en) Night pedestrian detection algorithm based on deep learning
CN111199255A (en) Small target detection network model and detection method based on dark net53 network
CN114049343A (en) Deep learning-based tracing method for complex missing texture of crack propagation process
CN113066025A (en) Image defogging method based on incremental learning and feature and attention transfer
CN112634168A (en) Image restoration method combined with edge information
CN116993737A (en) Lightweight fracture segmentation method based on convolutional neural network
CN112232221A (en) Method, system and program carrier for processing human image
CN108764287B (en) Target detection method and system based on deep learning and packet convolution
CN106934344B (en) quick pedestrian detection method based on neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210720