CN113344936A - Soil nematode image segmentation and width measurement method based on deep learning - Google Patents
Soil nematode image segmentation and width measurement method based on deep learning
- Publication number
- CN113344936A (application number CN202110748905.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- nematode
- soil
- width
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10056—Microscopic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a deep-learning-based method for segmenting soil nematode images and measuring nematode body width. The method effectively removes impurities such as air bubbles and soil particles from the image, avoids their interference with the measurement, and achieves more reliable soil nematode identification and width measurement. It improves the accuracy of body-width calculation, reduces the errors of manual measurement, greatly shortens manual working time, and provides a basis for calculating soil nitrogen turnover. The method can also analyze nematode morphology in the acquired images according to the specific needs of the user, extending the usefulness of the optical microscope in soil biology research.
Description
Technical Field
The invention relates to the field of nematode image segmentation and digital image processing, in particular to a soil nematode microscopic image segmentation and width measurement method based on deep learning.
In recent years, deep learning has made great progress in image recognition and segmentation. These methods extract the essential features of an image by applying operations such as convolution and pooling through a multi-layer network, and they have clear advantages in recognition and segmentation tasks. In biological image processing, not all of the information contained in an image is needed; the information required by the user is screened according to the task, and the structures of interest have characteristic properties such as shape and color. Soil microscopic images often contain granular impurities which, depending on their severity, inevitably interfere with a researcher's judgment.
Because parts of the soil nematode often overlap with parts of the impurities in the image, segmentation becomes more difficult. Occlusion is a well-known obstacle to feature extraction and segmentation. The present method therefore applies image preprocessing and manual labeling to the soil nematode images, excludes redundant occluded regions from the nematode image, and treats the nematode contour specially so that the model loss caused by occlusion is reduced as much as possible.
Soil nematodes are the most abundant multicellular animals in soil. Because of their wide range of feeding habits, they occupy multiple trophic levels in the soil food web and regulate key ecosystem processes and functions such as nutrient mineralization, turnover, and supply. Traditional analysis of soil nematode density and community composition is time-consuming, and biomass measurement further increases labor and time costs. Current methods of estimating nematode biomass require both body length and body width, and body length is the more time-consuming measurement. Reducing the time spent on body-length measurement is therefore probably the most effective way to improve the efficiency of biomass estimation. Researchers have proposed a new biomass estimation method that uses body width as the only measured variable; it saves measurement and calculation time, promotes wider use of nematode biomass, and improves the accuracy of research on soil nematode functions and on the key ecosystem processes they drive.
Disclosure of Invention
The invention discloses a soil nematode image segmentation and width measurement method based on deep learning.
The invention aims to provide a deep-learning-based soil nematode image segmentation and width measurement method that addresses two problems: noise in microscopic soil nematode images currently interferes with researchers' identification of nematodes, and manual width calculation is inefficient and error-prone. The method helps researchers calculate nematode width with a computer.
The invention adopts the following technical scheme:
a soil nematode image segmentation and width measurement method based on deep learning is characterized by comprising the following steps:
1) fixing microscope distance, recording proportion, acquiring soil nematode images, preprocessing and labeling the images, and constructing a training set, a verification set and a test set for deep learning, wherein the training set comprises an original image and a corresponding labeled image, and the verification set and the test set respectively only comprise the original image;
2) a U-net model is constructed, consisting mainly of two parts: a trunk model and a decoding part. The trunk model performs feature extraction and the decoding part restores and decodes the features; during training the image features are decomposed and recombined. The training set is input into the U-net model for training, and the verification set is input into the trained model for verification, giving a trained model;
3) inputting the test set into a model and outputting a binary segmentation image;
4) the nematode image is analyzed with the OpenCV computer vision library. Because the nematode body is not a perfectly regular cylinder, the body-width measurement site is selected manually; after the site is selected, the computer draws a line segment across the width of the current site, calculates the pixel length of the segment, and converts it to the body width of that site according to the recorded scale.
Preferably, the image preprocessing in step 1) includes adjusting the brightness and contrast of the images with the OpenCV computer vision library and then resizing the images in batch to reduce the number of model training parameters.
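The preprocessing described above (brightness/contrast adjustment followed by batch resizing) can be sketched without OpenCV. This is a minimal NumPy stand-in for the library calls the patent uses (`cv2.convertScaleAbs` and `cv2.resize` would be the real equivalents); the `alpha`/`beta` values are chosen purely for illustration:

```python
import numpy as np

def adjust_brightness_contrast(img, alpha=1.2, beta=10):
    """Linear contrast (alpha) and brightness (beta) adjustment,
    clipped back to the valid 8-bit range."""
    out = img.astype(np.float32) * alpha + beta
    return np.clip(out, 0, 255).astype(np.uint8)

def resize_nearest(img, size=(256, 256)):
    """Nearest-neighbour resize to a fixed training size."""
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return img[rows[:, None], cols]

# A uniform grey dummy image standing in for a microscope frame.
img = np.full((512, 640), 100, dtype=np.uint8)
small = resize_nearest(adjust_brightness_contrast(img), (256, 256))
```

In practice OpenCV's interpolated resize gives smoother results; the nearest-neighbour version is only meant to show the index mapping.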
Preferably, in step 1), the nematodes in the soil nematode pictures are manually labeled with Photoshop software to obtain the labeled images, ensuring that each original image corresponds to its labeled image.
Preferably, in step 1), Python code is used to flip, translate, rotate, and add salt-and-pepper noise to the images in the training, verification, and test sets, augmenting the data set.
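A minimal sketch of the augmentation described above (flip, rotation, salt-and-pepper noise), assuming NumPy arrays for image and label. The noise probability and the omission of translation are illustrative choices, not taken from the patent; note that geometric transforms must be applied identically to image and label, while noise corrupts only the image:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, mask, noise_p=0.01):
    """Apply the same geometric transform to image and label so they
    stay aligned, then add salt-and-pepper noise to the image only."""
    if rng.random() < 0.5:                      # horizontal flip
        img, mask = np.fliplr(img), np.fliplr(mask)
    k = int(rng.integers(0, 4))                 # rotate by k * 90 degrees
    img, mask = np.rot90(img, k), np.rot90(mask, k)
    noisy = img.copy()
    coords = rng.random(img.shape)
    noisy[coords < noise_p] = 0                 # pepper
    noisy[coords > 1.0 - noise_p] = 255         # salt
    return noisy, mask

img = np.zeros((64, 64), dtype=np.uint8)
mask = np.zeros((64, 64), dtype=np.uint8)
aug_img, aug_mask = augment(img, mask)
```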
Preferably, the trunk model in step 2) includes five encoder modules of the same structure, each comprising one convolution operation and one pooling operation. The convolution operation uses a 3 × 3 kernel with the ReLU activation function, followed by Dropout; the convolutional layers are initialized with the Xavier method. The pooling operation uses max pooling with a 2 × 2 kernel.
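The 2 × 2 max-pooling step used after each encoder convolution can be illustrated in a few lines of NumPy. This is a didactic sketch, not the patent's implementation: it halves the spatial resolution while keeping the strongest response in each block.

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on a single-channel feature map."""
    h, w = x.shape
    x = x[:h - h % 2, :w - w % 2]               # drop odd edge rows/cols
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [0, 0, 1, 0],
              [0, 9, 0, 2]], dtype=np.float32)
pooled = max_pool_2x2(x)   # each 2x2 block collapses to its maximum
```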
Preferably, the decoding part in step 2) includes a plurality of decoder sub-modules of the same structure, each comprising one deconvolution operation, one connection (parameter fusion) operation, and one convolution operation. The deconvolution uses a 3 × 3 kernel with the ReLU activation function and a stride of 2. The fusion operation merges feature maps of the same level that share the same spatial size, so only the channel depth increases after fusion. The convolution uses a 3 × 3 kernel with the ReLU activation function, and the weights are initialized with the Xavier method.
Preferably, the model in step 2) uses Adam as the optimizer, Dropout is set to 0.5, and the loss function is binary cross-entropy (binary_crossentropy), with accuracy as the evaluation metric. The training process also uses learning-rate decay and early stopping to make training more efficient.
Preferably, in step 4), the OpenCV computer vision library is used to analyze the nematode image: the image type is determined, a drawable copy of the given image is created, the image is read, and then binarization, edge detection, and grayscale conversion are applied.
Preferably, in step 4) the measurement site is selected manually by dragging a rectangular region in the image with the mouse, which avoids calculation errors caused by measuring at the wrong site.
Preferably, in step 4) the line segments are drawn as follows: based on the nematode contour, the two contour edges within the boxed region are assumed parallel after edge refinement; the contour coordinates along each edge are extracted and fitted with straight lines of the form y = Ax + B, giving two parallel lines y = Ax + B0 and y = Ax + B1; the distance between them is |B1 − B0| / sqrt(A² + 1). This pixel distance is converted according to the recorded scale into a value in μm, which is the body width of the nematode at that site.
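The line-fitting and distance computation can be sketched with NumPy's least-squares polynomial fit. The function name, the shared-slope averaging, and the calibration value are illustrative assumptions; the perpendicular-distance formula |B1 − B0| / sqrt(A² + 1) for two parallel lines y = Ax + B0 and y = Ax + B1 is the one stated above:

```python
import numpy as np

def width_from_contours(top_pts, bot_pts, um_per_pixel):
    """Fit y = A*x + B to each contour edge (Nx2 arrays of (x, y)),
    average the slopes since the edges are assumed locally parallel,
    and return the perpendicular distance scaled to micrometres."""
    A0, B0 = np.polyfit(top_pts[:, 0], top_pts[:, 1], 1)
    A1, B1 = np.polyfit(bot_pts[:, 0], bot_pts[:, 1], 1)
    A = (A0 + A1) / 2.0                          # shared slope
    dist_px = abs(B1 - B0) / np.sqrt(A * A + 1.0)
    return dist_px * um_per_pixel

# Two synthetic parallel edges: y = 0.5x + 2 and y = 0.5x + 7.
xs = np.arange(10, dtype=float)
top = np.stack([xs, 0.5 * xs + 2.0], axis=1)
bot = np.stack([xs, 0.5 * xs + 7.0], axis=1)
width_um = width_from_contours(top, bot, um_per_pixel=0.8)
```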
The invention has the following beneficial effects: using a computer to process and analyze soil nematode images allows more accurate analysis than manual methods, reduces subjective interference, improves working efficiency, relieves the burden on researchers, saves time, and helps promote the wide application of nematode biomass calculation.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is an original image in a training set.
Fig. 3 is a label image.
FIG. 4 is an overall framework diagram of the U-Net model of the present invention.
Fig. 5 shows, for a test-set sample, the original image, the labeled image, the model's predicted segmentation, and the binarized predicted segmentation.
FIG. 6 shows the manual selection of nematode sites.
FIG. 7 plots boxed area line segments and measures distance.
The invention is described in further detail below with reference to the figures and specific examples.
Detailed Description
The invention is further described below by means of specific embodiments.
Referring to fig. 1, a soil nematode image segmentation and width measurement method based on deep learning includes the following steps:
1) Select a suitable microscope magnification that gives clear imaging, fix the microscope distance, record the scale, and acquire soil nematode images. Image preprocessing is performed first: because nematode images captured under a microscope are generally large and of uneven quality, their brightness and contrast are adjusted and their size is changed before deep learning is carried out;
Photoshop software is used to manually label the nematodes in the soil nematode pictures to obtain labeled images, ensuring that each original image corresponds to its labeled image, as shown in Fig. 2 and Fig. 3. Training, verification, and test sets with no mutual intersection are constructed; the training set contains original images and the corresponding label images, while the verification and test sets contain only original images. Python code is used to flip, translate, and rotate the images in the training, verification, and test sets to augment the data set;
Further, Python code with the OpenCV software library is used to preprocess the images in batch, and all images are finally scaled to 256 × 256. The quick selection tool of Photoshop is then used to trace the nematode contours; after the nematode body surface is labeled successfully, the image is converted to a single-channel 8-bit image and saved. The data set is divided 7:2:1 into training, verification, and test sets using Python and then expanded.
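The 7:2:1 split mentioned above might be implemented as follows; the random seed and the file-naming scheme are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def split_dataset(paths, seed=42):
    """Shuffle file paths and split them 7:2:1 into disjoint
    train / verification / test subsets."""
    rng = np.random.default_rng(seed)
    paths = list(paths)
    rng.shuffle(paths)
    n = len(paths)
    n_train, n_val = int(n * 0.7), int(n * 0.2)
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])

train, val, test = split_dataset([f"img_{i:03d}.png" for i in range(100)])
```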
2) A U-Net model is constructed, comprising two parts: a trunk model and a decoding part. The trunk model, also called the encoding part (down-sampling), extracts features; the decoding part (up-sampling) restores and decodes the features to the size of the original image. The training set is input into the U-Net model for training, and the verification set is input into the trained U-Net model for verification, giving a trained U-Net model;
The main function of the encoding part is feature extraction. It comprises several encoder modules of the same structure, each containing one convolution operation and one pooling operation. Each pass through an encoder module extracts information contained in the image, such as boundaries and colors; as the number of convolutions grows, more abstract features are captured, which improves robustness to small disturbances of the input image such as translation and rotation, reduces the risk of overfitting, lowers the amount of computation, and increases the receptive field;
In the trunk model, the convolution parameters are: a 3 × 3 kernel with the ReLU activation function, followed by Dropout; the convolutional layers are initialized with the Xavier method. Pooling uses max pooling with a 2 × 2 kernel;
The decoding part mainly performs feature restoration: it restores and decodes the abstract features to the size of the original image and finally produces the segmentation result. It comprises several decoder modules of the same structure, each containing one deconvolution operation, one connection (parameter fusion) operation, and one convolution operation;
The deconvolution uses a 3 × 3 kernel with the ReLU activation function and a stride of 2; the convolution uses a 3 × 3 kernel with the ReLU activation function and Xavier weight initialization. The fusion operation merges feature maps of the same level that share the same spatial size, which helps reduce the information loss caused by down-sampling;
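The fusion (skip-connection) operation described above is, in U-Net, a channel-wise concatenation. This toy NumPy example shows that spatial size is preserved while depth doubles; the H × W × C layout and the tiny shapes are assumptions made for illustration only:

```python
import numpy as np

# Encoder feature map and the upsampled decoder map at the same level:
# both 2x2 spatially with 4 channels each (H, W, C layout).
enc_feat = np.ones((2, 2, 4), dtype=np.float32)
dec_feat = np.zeros((2, 2, 4), dtype=np.float32)

# The skip connection concatenates along the channel axis only:
# spatial size is unchanged, depth doubles, so fine-grained detail
# lost during down-sampling is reinjected into the decoder path.
fused = np.concatenate([enc_feat, dec_feat], axis=-1)
```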
the model structure of the invention is shown in FIG. 4;
Preferably, the network depth of the invention is five layers; any other value may be used, and tasks with higher segmentation difficulty usually call for a deeper network. After the model is constructed, Adam (adaptive moment estimation) is adopted as the optimizer with an initial learning rate of 1 × 10⁻⁵; Dropout is set to 0.5; the loss function is binary cross-entropy (binary_crossentropy), with accuracy as the evaluation metric;
meanwhile, the training process adopts the learning rate reduction and early stopping means, so that the model training is more efficient;
the model parameters such as the size and the number of convolution kernels, the learning rate, the optimizer and the like can be set to be suitable parameters according to specific conditions.
3) And inputting the test set into a trained U-Net model, and outputting a binary segmentation image. The test set image data is segmented using the model, as shown in fig. 5.
4) The nematode image is analyzed with the OpenCV computer vision library. Because the nematode body is not a perfectly regular cylinder, the body-width measurement site is selected manually; after the site is selected, the computer draws a line segment across the width of the current site, calculates the pixel length of the segment, and finally converts it to the body width of that site according to the recorded scale;
Further, the OpenCV computer vision library is used to analyze the nematode image: the image type is determined, a drawable copy of the given image is created, the image is read, and then binarization, edge detection, and grayscale conversion are applied. The measurement site is selected manually by dragging a rectangular region in the image with the mouse, as shown in Fig. 6, which avoids calculation errors caused by measuring at the wrong site; the smaller the selected region, the more accurate the width calculation;
Further, the line segments are drawn: based on the nematode contours in the boxed region, the two contour edges are assumed parallel after edge refinement; the contour coordinates along each edge are extracted and fitted with straight lines y = Ax + B0 and y = Ax + B1; the distance between the two lines, |B1 − B0| / sqrt(A² + 1), is calculated and displayed on the picture, as shown in Fig. 7. The pixel distance is converted according to the recorded scale into a value in μm, which is the body width of the nematode at that site.
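The final scale conversion uses the proportion recorded when the microscope distance was fixed in step 1). A minimal sketch, where the 0.8 μm/pixel calibration is a made-up example value, not one stated in the patent:

```python
def pixels_to_um(length_px, um_per_pixel):
    """Convert a measured pixel length to micrometres using the
    calibration recorded at image-acquisition time."""
    return length_px * um_per_pixel

# e.g. a 45-pixel width line at an assumed 0.8 um/pixel calibration:
width_um = pixels_to_um(45, 0.8)
```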
The above description is only one embodiment of the present invention, but the design concept of the invention is not limited thereto; any insubstantial modification made using this design concept falls within the scope of protection of the invention.
Claims (6)
1. A soil nematode microscopic image segmentation and width measurement method based on deep learning is characterized by comprising the following steps:
(1) acquiring images containing soil nematodes; manually labeling each image, dividing it into a soil nematode part and other parts; converting the labeled image into a corresponding 8-bit-depth image; completing the labeling to construct a data set; and performing image enhancement on the images;
(2) constructing a U-net model, using the obtained soil nematode image and the corresponding labeled image as a training set, using other original images as a verification set and a test set, inputting the training set into the model for training, and then using the verification set to input the model for verification to obtain a trained U-net nematode image segmentation model;
(3) inputting a nematode image with a body width to be measured into a trained model, and outputting to obtain a segmented binary image;
(4) processing the output binary segmentation image with the OpenCV computer vision library: manually selecting a measurement region, drawing a straight line after the measurement contour is obtained, and measuring its length to obtain the body width of the nematode.
2. The soil nematode microscopic image segmentation and width measurement method based on deep learning as claimed in claim 1, wherein the image enhancement in step 1) comprises flipping, translation, rotation, and adding salt-and-pepper noise, creating more data so that the neural network generalizes better.
3. The soil nematode microscopic image segmentation and width measurement method based on deep learning as claimed in claim 1, wherein the labeling operation in step 1) labels the contour of the nematode in the soil nematode picture using Photoshop software and stores it as a binary image with a bit depth of 8.
4. The soil nematode microscopic image segmentation and width measurement method based on deep learning as claimed in claim 1, wherein the U-net neural network model in step 2) is mainly divided into two parts, one is a trunk model, the other is a decoding part, and the model mainly comprises a convolutional layer, a max pooling layer and a Dropout layer.
5. The soil nematode microscopic image segmentation and width measurement method based on deep learning as claimed in claim 1, wherein the nematode body-width calculation in step 4) first inputs a picture, then selects the target region to be measured, detects the edges of the nematode image, converts the image to grayscale, and finally draws the width line segment and measures the width.
6. The soil nematode microscopic image segmentation and width measurement method based on deep learning as claimed in claim 5, wherein the obtained width is converted into real units according to the recorded scale, finally giving the true width of that part of the nematode.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110748905.7A CN113344936A (en) | 2021-07-02 | 2021-07-02 | Soil nematode image segmentation and width measurement method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113344936A true CN113344936A (en) | 2021-09-03 |
Family
ID=77482283
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110748905.7A Pending CN113344936A (en) | 2021-07-02 | 2021-07-02 | Soil nematode image segmentation and width measurement method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113344936A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106017328A (en) * | 2015-12-17 | 2016-10-12 | 广东正业科技股份有限公司 | Method and device for measuring various types of line widths |
CN107808396A (en) * | 2017-11-01 | 2018-03-16 | 齐鲁工业大学 | It is easy to nematode recognition methods and the system of image segmentation |
CN112132884A (en) * | 2020-09-29 | 2020-12-25 | 中国海洋大学 | Sea cucumber length measuring method and system based on parallel laser and semantic segmentation |
CN112949378A (en) * | 2020-12-30 | 2021-06-11 | 至微生物智能科技(厦门)有限公司 | Bacterial microscopic image segmentation method based on deep learning network |
Non-Patent Citations (1)
Title |
---|
LI SHIJUN et al.: "A lightweight model of VGG-16 for remote sensing image classification", IEEE JSTARS, vol. 14, 23 June 2021 (2021-06-23), XP011868960, DOI: 10.1109/JSTARS.2021.3090085 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116087036A (en) * | 2023-02-14 | 2023-05-09 | 中国海洋大学 | Device for identifying images of sediment plume of deep sea mining and image analysis method |
CN116087036B (en) * | 2023-02-14 | 2023-09-22 | 中国海洋大学 | Device for identifying images of sediment plume of deep sea mining and image analysis method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111862064B (en) | Silver wire surface flaw identification method based on deep learning | |
CN115345885B (en) | Appearance quality detection method for metal fitness equipment | |
CN113592845A (en) | Defect detection method and device for battery coating and storage medium | |
CN111582294B (en) | Method for constructing convolutional neural network model for surface defect detection and application thereof | |
CN110853015A (en) | Aluminum profile defect detection method based on improved Faster-RCNN | |
CN110276402B (en) | Salt body identification method based on deep learning semantic boundary enhancement | |
CN112085024A (en) | Tank surface character recognition method | |
CN110648310B (en) | Weak supervision casting defect identification method based on attention mechanism | |
CN111833306A (en) | Defect detection method and model training method for defect detection | |
CN117253024B (en) | Industrial salt quality inspection control method and system based on machine vision | |
CN112907519A (en) | Metal curved surface defect analysis system and method based on deep learning | |
CN114694038A (en) | High-resolution remote sensing image classification method and system based on deep learning | |
CN112862744A (en) | Intelligent detection method for internal defects of capacitor based on ultrasonic image | |
CN112949378A (en) | Bacterial microscopic image segmentation method based on deep learning network | |
CN115731282A (en) | Underwater fish weight estimation method and system based on deep learning and electronic equipment | |
CN113344936A (en) | Soil nematode image segmentation and width measurement method based on deep learning | |
CN117495851B (en) | Image contour processing-based water environment microorganism detection method | |
Li et al. | DDR-Unet: A High Accuracy and Efficient Ore Image Segmentation Method | |
CN112784922A (en) | Extraction and classification method of intelligent cloud medical images | |
CN115294151A (en) | Lung CT interested region automatic detection method based on multitask convolution model | |
CN113870328A (en) | Liquid foreign matter visual detection method and system | |
CN117911409B (en) | Mobile phone screen bad line defect diagnosis method based on machine vision | |
CN117037049B (en) | Image content detection method and system based on YOLOv5 deep learning | |
CN116797602A (en) | Surface defect identification method and device for industrial product detection | |
CN116071597A (en) | Workpiece classification and identification method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20210903 |