CN111161271A - Ultrasonic image segmentation method - Google Patents

Ultrasonic image segmentation method

Info

Publication number
CN111161271A
Authority
CN
China
Prior art keywords
layer
convolution
data
input
dense
Prior art date
Legal status
Pending
Application number
CN201911409153.0A
Other languages
Chinese (zh)
Inventor
陈俊江
刘宇
贾树开
陈智
方俊
梁羽
Current Assignee
University of Electronic Science and Technology of China
Sichuan Provincial People's Hospital
Original Assignee
University of Electronic Science and Technology of China
Sichuan Provincial People's Hospital
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China and Sichuan Provincial People's Hospital
Priority to CN201911409153.0A
Publication of CN111161271A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10132 Ultrasound image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Abstract

The invention belongs to the technical field of medical image processing, and particularly relates to an ultrasonic image segmentation method. On the basis of a U-Net baseline, the disclosed method integrates a multi-scale framework, a dense convolutional network, an attention mechanism and small-sample data enhancement. This combination facilitates the extraction of multi-scale features, suppresses responses in irrelevant regions and improves performance on small ROIs, addressing the pain points of ultrasound images, such as scarce samples, low resolution and blurred boundaries, and achieving an optimal segmentation effect.

Description

Ultrasonic image segmentation method
Technical Field
The invention belongs to the technical field of medical image processing, and particularly relates to an ultrasonic image segmentation method.
Background
With the development of technology, doctors have come to rely heavily on medical image data as a basis for diagnosis and treatment, driving the development and progress of many new techniques. Correctly segmenting medical images is an important bottleneck restricting this development, making accurate image segmentation the most pressing problem in the medical imaging field.
In recent years, with improved computing performance and growing data volumes, deep learning has made remarkable progress in the field of medical imaging. Convolutional neural networks (CNNs) capture the nonlinear mapping between input and output, automatically learning local features and high-level abstract features through a multi-layer network structure, and outperform manual feature extraction and prediction. However, conventional CNNs cannot reasonably propagate lower-layer features to higher layers. The U-Net algorithm was therefore proposed, fusing low-dimensional and high-dimensional features through skip connections and achieving good segmentation results.
Most existing medical image segmentation algorithms are based on the U-Net baseline. However, medical images suffer from large variation between training samples and small regions of interest (ROIs), which makes it difficult to balance precision and recall, wastes computing resources and model parameters, and limits segmentation quality. Accurate segmentation of ultrasound images therefore remains an urgent open problem.
Disclosure of Invention
The purpose of the invention is to provide, based on deep learning, an ultrasonic image segmentation method capable of accurately segmenting medical tissues or lesions.
The technical scheme adopted by the invention is as follows:
an ultrasound image segmentation method comprising the steps of:
step 1, preprocessing an original ultrasonic image to obtain training-set and validation-set data;
step 2, performing data enhancement on the training-set and validation-set data, including:
1) increasing the data volume of the training and validation sets by offline enhancement: applying rotation and horizontal-flip transformations to enlarge the data 10-fold;
2) enhancing the generalization of the network model by online enhancement: applying rotation, scale, zoom, translation and color-contrast transformations, with an online iterator used to enhance data diversity while reducing memory pressure;
step 3, constructing a multi-scale dense-convolution attention U-shaped network, comprising:
1) a multi-input dense convolution encoder module: the input layer takes samples in N×N×1 format, N being a positive integer, and the multi-input module scales the input into four groups of data at an 8:4:2:1 size ratio; the first group passes through a 3×3 convolution to form input 1, then through the 1st three-layer dense convolution module, followed by the 1st down-sampling; the second group passes through a 3×3 convolution to form input 2, which is fused with the data after the 1st down-sampling, passed through the 2nd three-layer dense convolution module and then down-sampled for the 2nd time; the third and fourth layers are constructed in the same way; each dense convolution module consists of 3 densely connected convolution layers, the input of each layer being the fused feature maps of all previous layer outputs of the dense block; the encoder module performs feature extraction with dense convolution layers and pooling layers over 4 levels in total, with the number of feature-map channels increasing and the size decreasing as the level deepens; the convolution kernel channel counts from level 1 to level 4 are 32, 64, 128 and 256, and the kernel size at every level is 3×3;
2) a dense convolution center module: after the 4th down-sampling, the data passes through a dense convolution center module consisting of 3 densely connected convolution layers, the input of each layer being the fused feature maps of all previous layer outputs of the dense block;
3) a multi-output attention mechanism decoder module: deconvolution is used for up-sampling, and the attention feature map and up-sampled feature map of each layer undergo channel feature fusion; the attention mechanism is as follows: the high-dimensional features are convolved by 1×1 to obtain a gating signal $g_i$; the low-dimensional feature $x^l$ is down-sampled 2×, added to the gating signal $g_i$, and passed through global average pooling, a 1×1 convolution, a nonlinear transformation and up-sampling to obtain the linear attention coefficient $q_{att}^l$; finally, the linear attention coefficient $q_{att}^l$ is multiplied element-wise with the low-dimensional feature $x^l$, retaining the relevant activations, to obtain the attention coefficient $\alpha^l$:

$$q_{att}^l = \psi^T \delta_1\left(W_x^T x^l + W_g^T g_i + b_g\right) + b_\psi$$

$$\alpha^l = \delta_2\left(q_{att}^l\left(x^l, g_i; \Theta_{att}\right)\right)$$

where $x^l$ denotes the pixel vector, $g_i$ the gating vector, $q_{att}^l$ the linear attention coefficient, $\alpha^l$ the attention coefficient, $\delta_1$ the ReLU activation function and $\delta_2$ the Sigmoid activation function; $\Theta_{att}$ comprises the linear transformations $W_x \in \mathbb{R}^{F_l \times F_{int}}$, $W_g \in \mathbb{R}^{F_g \times F_{int}}$, $\psi \in \mathbb{R}^{F_{int} \times 1}$ and the bias terms $b_g \in \mathbb{R}^{F_{int}}$, $b_\psi \in \mathbb{R}$;
The constructed U-shaped network adopts Tversky Loss and Focal Loss as the multi-output loss functions (a hedged code sketch is given after step 5 below);
step 4, inputting training-set data into the constructed U-shaped network for training to obtain a learned convolutional neural network model, and tuning parameters on the validation set until the optimal model and its corresponding parameters are obtained, yielding the trained U-shaped network;
and step 5, inputting the preprocessed original ultrasonic image to be processed into the trained U-shaped network to obtain the segmentation result.
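As a hedged illustration of the multi-output loss named in step 3, the sketch below (Python with TensorFlow/Keras, an assumed framework; the patent discloses no code) implements a focal Tversky loss in the spirit of Abraham et al., cited among the non-patent references. The parameter values alpha, beta and gamma are illustrative assumptions, not values from the patent.

```python
import tensorflow as tf

def tversky_index(y_true, y_pred, alpha=0.7, beta=0.3, eps=1e-7):
    # Tversky index generalizes Dice: alpha weights false negatives and
    # beta weights false positives, which helps small-ROI segmentation.
    y_true = tf.reshape(y_true, [-1])
    y_pred = tf.reshape(y_pred, [-1])
    tp = tf.reduce_sum(y_true * y_pred)
    fn = tf.reduce_sum(y_true * (1.0 - y_pred))
    fp = tf.reduce_sum((1.0 - y_true) * y_pred)
    return (tp + eps) / (tp + alpha * fn + beta * fp + eps)

def focal_tversky_loss(y_true, y_pred, gamma=0.75):
    # The focal exponent gamma < 1 increases the gradient on hard examples.
    return tf.pow(1.0 - tversky_index(y_true, y_pred), gamma)
```

In a multi-output (deep-supervision) setting this loss would be applied to each output head, for example via Keras's per-output loss mapping.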
The invention has the following beneficial effects: on the basis of the U-Net baseline, it integrates a multi-scale framework, a dense convolutional network, an attention mechanism and small-sample data enhancement, facilitating the extraction of multi-scale features, suppressing responses in irrelevant regions and improving performance on small ROIs; it thereby addresses the pain points of ultrasound images, such as scarce samples, low resolution and blurred boundaries, and achieves an optimal segmentation effect.
Drawings
FIG. 1 is a schematic diagram of a medical image segmentation process of the present invention;
FIG. 2 is a schematic diagram of the overall structure of the MDA-UNet network of the present invention;
FIG. 3 is a schematic diagram of a dense convolutional network module of the present invention;
FIG. 4 is a schematic view of an attention mechanism module of the present invention;
FIG. 5 is a schematic of the loss and DSC for the training and validation sets: (a) is the loss-function curve of the training and validation sets, and (b) is the accuracy curve of the training and validation sets;
FIG. 6 is a schematic diagram of an original label and the segmentation result on the test set of the present invention: (a) is the test label image and (b) is the segmentation result image.
Detailed Description
The invention is described in detail below with reference to the following figures and simulations:
the invention provides a thyroid ultrasound image segmentation method based on deep learning, which mainly comprises 5 major modules of data acquisition, data preprocessing, network model construction, data training and parameter adjustment, data testing and evaluation and the like, as shown in figure 1. The specific implementation steps are as follows:
1. Preprocess the original ultrasonic images and divide them into training, validation and test sets
1) Remove patient privacy information and instrument markings from the ultrasound images;
2) Have a team of professional ultrasound imaging physicians produce the data labels;
3) Divide the original data into training, validation and test sets at a 6:2:2 ratio, splitting the labels in the same way;
4) Unify the image resolution to 256 × 256; binarize the labels and normalize them to the [0,1] interval.
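A minimal preprocessing sketch consistent with step 1 (assuming OpenCV and NumPy; the file-path arguments and the 0.5 binarization threshold are illustrative, not from the patent):

```python
import cv2
import numpy as np

def preprocess(image_path, label_path, size=256):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    lbl = cv2.imread(label_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (size, size)).astype(np.float32) / 255.0   # unify resolution, scale to [0,1]
    lbl = cv2.resize(lbl, (size, size), interpolation=cv2.INTER_NEAREST)
    lbl = (lbl.astype(np.float32) / 255.0 > 0.5).astype(np.float32)  # binarize label to {0,1}
    return img[..., None], lbl[..., None]                            # N x N x 1 sample format
```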
2. Perform data enhancement on the small-sample training and validation sets
1) Offline enhancement: expand the data sets to 10 times their original size.
2) Online enhancement: use a DataGenerator online iterator to apply scale, zoom, translation and color-contrast transformations, among others, reducing memory pressure while enhancing data diversity and the generalization of the network model.
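The patent names a "DataGenerator" online iterator without further detail; Keras's ImageDataGenerator is one plausible realization, sketched below. All parameter values are illustrative assumptions, and train_images/train_masks stand for the offline-enhanced arrays.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

aug = dict(rotation_range=15, zoom_range=0.1, width_shift_range=0.05,
           height_shift_range=0.05, horizontal_flip=True)
image_gen = ImageDataGenerator(**aug, brightness_range=(0.9, 1.1))
mask_gen = ImageDataGenerator(**aug)  # no intensity jitter on the masks

seed = 42  # an identical seed keeps image and mask transforms in lockstep
image_iter = image_gen.flow(train_images, batch_size=8, seed=seed)
mask_iter = mask_gen.flow(train_masks, batch_size=8, seed=seed)
train_iter = zip(image_iter, mask_iter)  # yields batches lazily, sparing memory
```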
3. Design the multi-scale dense convolution attention U-Net algorithm (MDA-UNet), as shown in FIG. 2
The input layer of the MDA-UNet network takes samples in N×N×1 format (N a positive integer), which the multi-input module divides into four groups of input data. The first group passes through a 3×3 convolution to form input 1, then through the 1st three-layer dense convolution module, followed by the 1st down-sampling. The second group passes through a 3×3 convolution to form input 2, which is fused (concat) with the data after the 1st down-sampling, passed through the 2nd three-layer dense convolution module and then down-sampled for the 2nd time. The third and fourth layers are constructed in the same way. After the 4th down-sampling comes a center module built from dense convolutions. The center module forms gated attention with the data after the fourth-layer dense convolution, and the result is fused (concat) with the up-sampled center-module output; two convolutions (3×3 convolution, BN operation, ReLU activation function) are then applied in succession. The third, second and first layers proceed in the same manner. Finally, one convolution (1×1 convolution, sigmoid activation function) yields the pixel-level classification, i.e., the segmentation of the image.
1) Multi-input dense convolution encoder module (shown in the left half of FIG. 2)
1.1 Multi-input module: the input data is scaled into four groups at an 8:4:2:1 size ratio, which are fused into the first through fourth sampling layers of the encoder network, respectively.
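A sketch of the multi-input pyramid under stated assumptions (Keras functional API; average pooling is one reasonable way to realize the 8:4:2:1 scaling, i.e., full, 1/2, 1/4 and 1/8 resolution; the channel counts follow paragraph 1.3 below):

```python
from tensorflow.keras import layers, Input

inp = Input(shape=(256, 256, 1))                       # N x N x 1 sample
scaled = [inp] + [layers.AveragePooling2D(2**k)(inp) for k in (1, 2, 3)]
branches = [layers.Conv2D(32 * 2**k, 3, padding="same",
                          activation="relu")(x)        # "input 1" .. "input 4"
            for k, x in enumerate(scaled)]
```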
1.2 Dense convolution module (as shown in FIG. 3): each dense block contains 3 densely connected convolution layers, the input of each layer being the fused feature maps of all previous layer outputs of the dense block. The pooled feature map at each encoder level passes through a dense block (BN operation, ReLU activation function and 3×3 convolution).
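A minimal sketch of one 3-layer dense block as described (assumed Keras; the growth rate of 32 is an illustrative choice):

```python
from tensorflow.keras import layers

def dense_block(x, growth=32, n_layers=3):
    feats = [x]
    for _ in range(n_layers):
        # each layer sees the fusion of the block input and all prior outputs
        h = layers.Concatenate()(feats) if len(feats) > 1 else feats[0]
        h = layers.BatchNormalization()(h)      # BN
        h = layers.Activation("relu")(h)        # ReLU
        h = layers.Conv2D(growth, 3, padding="same")(h)  # 3x3 convolution
        feats.append(h)
    return layers.Concatenate()(feats)
```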
1.3 The encoder module mainly uses dense convolution layers and pooling layers to perform feature extraction over 4 levels in total; as the level deepens, the number of feature-map channels increases and the size decreases. The convolution kernel channel counts from level 1 to level 4 are 32, 64, 128 and 256, and each kernel is 3 × 3.
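Composing the two sketches above, one encoder level might look like the following (a sketch under the same assumptions, not the patent's exact code):

```python
from tensorflow.keras import layers

def encoder_level(branch, prev_pooled=None):
    # Level 1 takes only input 1; deeper levels fuse input k with the
    # previous level's pooled features before the dense block.
    x = branch if prev_pooled is None else layers.Concatenate()([branch, prev_pooled])
    x = dense_block(x)                    # from the sketch above
    return layers.MaxPooling2D(2)(x), x   # (pooled output, skip for the decoder)
```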
2) Dense convolution center module (as shown in FIG. 3)
2.1 The dense convolution center module uses the same dense convolution structure as the encoder module.
3) Multi-output attention mechanism decoder module (as shown in the right half of FIG. 2)
3.1 The decoder module likewise comprises 4 levels; deconvolution is used for up-sampling, and the attention feature map and up-sampled feature map of each layer undergo channel feature fusion. As the decoder level increases, the number of channels decreases and the size increases. The convolution kernel channel counts from layer 6 to layer 9 are 256, 128, 64 and 32, and each layer's kernel is 3 × 3.
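One decoder level per paragraph 3.1 might be sketched as follows (assumed Keras; the two successive 3×3 conv + BN + ReLU blocks follow the overall description of FIG. 2):

```python
from tensorflow.keras import layers

def decoder_level(x, attended_skip, channels):
    up = layers.Conv2DTranspose(channels, 3, strides=2, padding="same")(x)  # deconvolution upsampling
    x = layers.Concatenate()([attended_skip, up])        # channel feature fusion
    for _ in range(2):                                   # two 3x3 conv + BN + ReLU blocks
        x = layers.Conv2D(channels, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
    return x
```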
3.2 Attention mechanism module (as shown in FIG. 4): the high-dimensional features are convolved by 1×1 to obtain a gating signal $g_i$; the low-dimensional feature $x^l$ is down-sampled 2×, added to the gating signal $g_i$, and passed through global average pooling, a 1×1 convolution, a nonlinear transformation and up-sampling to obtain the linear attention coefficient $q_{att}^l$; finally, the linear attention coefficient $q_{att}^l$ is multiplied element-wise with the low-dimensional feature $x^l$, retaining the relevant activations, to obtain the attention coefficient $\alpha^l$. The formulas are:

$$q_{att}^l = \psi^T \delta_1\left(W_x^T x^l + W_g^T g_i + b_g\right) + b_\psi$$

$$\alpha^l = \delta_2\left(q_{att}^l\left(x^l, g_i; \Theta_{att}\right)\right)$$
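A hedged sketch of the attention gate following the formulas above (assumed Keras; f_int = 64 is an illustrative intermediate width). The global-average-pooling step mentioned in the text is omitted here, since collapsing the spatial map would prevent the per-pixel multiplication and its exact placement is ambiguous in the patent.

```python
from tensorflow.keras import layers

def attention_gate(x_l, g_i, f_int=64):
    theta_x = layers.Conv2D(f_int, 1, strides=2)(x_l)              # x^l down-sampled 2x (W_x)
    phi_g = layers.Conv2D(f_int, 1)(g_i)                           # gating signal term (W_g)
    q = layers.Activation("relu")(layers.Add()([theta_x, phi_g]))  # delta_1 (ReLU)
    q = layers.Conv2D(1, 1)(q)                                     # psi^T(.) + b_psi
    alpha = layers.Activation("sigmoid")(q)                        # delta_2 -> attention coefficient
    alpha = layers.UpSampling2D(2)(alpha)                          # back to x^l's resolution
    return layers.Multiply()([x_l, alpha])                         # retain the relevant activations
```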
4. Input the training-set and validation-set data into the MDA-UNet network for training to obtain the optimal parameter model
The loss and DSC of each training run are recorded. Parameters are adjusted according to the loss and DSC on the validation set, the network is retrained, and the best model and parameters are saved.
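A hedged sketch of this training loop (assumed Keras; model stands for the assembled MDA-UNet, train_iter and focal_tversky_loss come from the earlier sketches, and the optimizer, epoch count and batch size are illustrative):

```python
import tensorflow as tf

def dsc(y_true, y_pred, eps=1e-7):
    # Dice similarity coefficient, tracked as a Keras metric on the validation set
    inter = tf.reduce_sum(y_true * y_pred)
    return (2.0 * inter + eps) / (tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + eps)

model.compile(optimizer="adam", loss=focal_tversky_loss, metrics=[dsc])
ckpt = tf.keras.callbacks.ModelCheckpoint(
    "mda_unet_best.h5", monitor="val_dsc", mode="max",
    save_best_only=True)  # keep only the best model and parameters
history = model.fit(train_iter, validation_data=(val_images, val_masks),
                    steps_per_epoch=len(train_images) // 8, epochs=100,
                    callbacks=[ckpt])
```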
5. Input the data to be segmented into the optimal parameter model to obtain the segmentation results (as shown in FIG. 5 and FIG. 6)
FIG. 5 shows the loss and DSC of the training and validation sets in an embodiment of the present invention: (a) is the loss-function curve of the training and validation sets, and (b) is the accuracy curve of the training and validation sets.
FIG. 6 shows an original label and the segmentation result on the test set according to an embodiment of the present invention: (a) is the original label image and (b) is the segmentation result image.

Claims (1)

1. An ultrasound image segmentation method, comprising the steps of:
step 1, preprocessing an original ultrasonic image to obtain training-set and validation-set data;
step 2, performing data enhancement on the training-set and validation-set data, including:
1) increasing the data volume of the training and validation sets by offline enhancement: applying rotation and horizontal-flip transformations to enlarge the data 10-fold;
2) enhancing the generalization of the network model by online enhancement: applying rotation, scale, zoom, translation and color-contrast transformations, with an online iterator used to enhance data diversity while reducing memory pressure;
step 3, constructing a multi-scale dense-convolution attention U-shaped network, comprising:
1) a multi-input dense convolution encoder module: the input layer takes samples in N×N×1 format, N being a positive integer, and the multi-input module scales the input into four groups of data at an 8:4:2:1 size ratio; the first group passes through a 3×3 convolution to form input 1, then through the 1st three-layer dense convolution module, followed by the 1st down-sampling; the second group passes through a 3×3 convolution to form input 2, which is fused with the data after the 1st down-sampling, passed through the 2nd three-layer dense convolution module and then down-sampled for the 2nd time; the third and fourth layers are constructed in the same way; each dense convolution module consists of 3 densely connected convolution layers, the input of each layer being the fused feature maps of all previous layer outputs of the dense block; the encoder module performs feature extraction with dense convolution layers and pooling layers over 4 levels in total, with the number of feature-map channels increasing and the size decreasing as the level deepens; the convolution kernel channel counts from level 1 to level 4 are 32, 64, 128 and 256, and the kernel size at every level is 3×3;
2) a dense convolution center module: after the 4th down-sampling, the data passes through a dense convolution center module consisting of 3 densely connected convolution layers, the input of each layer being the fused feature maps of all previous layer outputs of the dense block;
3) a multi-output attention mechanism decoder module: deconvolution is used for up-sampling, and the attention feature map and up-sampled feature map of each layer undergo channel feature fusion; the attention mechanism is as follows: the high-dimensional features are convolved by 1×1 to obtain a gating signal $g_i$; the low-dimensional feature $x^l$ is down-sampled 2×, added to the gating signal $g_i$, and passed through global average pooling, a 1×1 convolution, a nonlinear transformation and up-sampling to obtain the linear attention coefficient $q_{att}^l$; finally, the linear attention coefficient $q_{att}^l$ is multiplied element-wise with the low-dimensional feature $x^l$, retaining the relevant activations, to obtain the attention coefficient $\alpha^l$:

$$q_{att}^l = \psi^T \delta_1\left(W_x^T x^l + W_g^T g_i + b_g\right) + b_\psi$$

$$\alpha^l = \delta_2\left(q_{att}^l\left(x^l, g_i; \Theta_{att}\right)\right)$$

where $x^l$ denotes the pixel vector, $g_i$ the gating vector, $q_{att}^l$ the linear attention coefficient, $\alpha^l$ the attention coefficient, $\delta_1$ the ReLU activation function and $\delta_2$ the Sigmoid activation function; $\Theta_{att}$ comprises the linear transformations $W_x \in \mathbb{R}^{F_l \times F_{int}}$, $W_g \in \mathbb{R}^{F_g \times F_{int}}$, $\psi \in \mathbb{R}^{F_{int} \times 1}$ and the bias terms $b_g \in \mathbb{R}^{F_{int}}$, $b_\psi \in \mathbb{R}$;
Step 4, inputting training-set data into the constructed U-shaped network for training to obtain a learned convolutional neural network model, and tuning parameters on the validation set until the optimal model and its corresponding parameters are obtained, yielding the trained U-shaped network;
and step 5, inputting the preprocessed original ultrasonic image to be processed into the trained U-shaped network to obtain the segmentation result.
CN201911409153.0A 2019-12-31 2019-12-31 Ultrasonic image segmentation method Pending CN111161271A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911409153.0A CN111161271A (en) 2019-12-31 2019-12-31 Ultrasonic image segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911409153.0A CN111161271A (en) 2019-12-31 2019-12-31 Ultrasonic image segmentation method

Publications (1)

Publication Number Publication Date
CN111161271A true CN111161271A (en) 2020-05-15

Family

ID=70559973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911409153.0A Pending CN111161271A (en) 2019-12-31 2019-12-31 Ultrasonic image segmentation method

Country Status (1)

Country Link
CN (1) CN111161271A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110059538A (en) * 2019-02-27 2019-07-26 成都数之联科技有限公司 A kind of identifying water boy method based on the intensive neural network of depth
CN110189334A (en) * 2019-05-28 2019-08-30 南京邮电大学 The medical image cutting method of the full convolutional neural networks of residual error type based on attention mechanism
CN110544264A (en) * 2019-08-28 2019-12-06 北京工业大学 Temporal bone key anatomical structure small target segmentation method based on 3D deep supervision mechanism

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
NABILA ABRAHAM et al.: "A NOVEL FOCAL TVERSKY LOSS FUNCTION WITH IMPROVED ATTENTION U-NET FOR LESION SEGMENTATION" *
梁礼明 (Liang Liming) et al.: "U-shaped retinal vessel segmentation algorithm with adaptive scale information" *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112241766A (en) * 2020-10-27 2021-01-19 西安电子科技大学 Liver CT image multi-lesion classification method based on sample generation and transfer learning
CN112862774A (en) * 2021-02-02 2021-05-28 重庆市地理信息和遥感应用中心 Accurate segmentation method for remote sensing image building
CN113012177A (en) * 2021-04-02 2021-06-22 上海交通大学 Three-dimensional point cloud segmentation method based on geometric feature extraction and edge perception coding
CN113240698A (en) * 2021-05-18 2021-08-10 长春理工大学 Multi-class segmentation loss function and construction method and application thereof
CN113240698B (en) * 2021-05-18 2022-07-05 长春理工大学 Application method of multi-class segmentation loss function in implementation of multi-class segmentation of vertebral tissue image
CN114049339A (en) * 2021-11-22 2022-02-15 江苏科技大学 Fetal cerebellum ultrasonic image segmentation method based on convolutional neural network


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200515