CN111242134A - Remote sensing image ground object segmentation method based on feature adaptive learning - Google Patents

Remote sensing image ground object segmentation method based on feature adaptive learning

Info

Publication number
CN111242134A
Authority
CN
China
Prior art keywords
network
segmentation
remote sensing
image
generator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010043580.8A
Other languages
Chinese (zh)
Inventor
朱磊
王畅
吴谨
邹才刚
王文武
向森
邓慧萍
刘劲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Science and Engineering WUSE
Wuhan University of Science and Technology WHUST
Original Assignee
Wuhan University of Science and Engineering WUSE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Science and Engineering WUSE filed Critical Wuhan University of Science and Engineering WUSE
Priority to CN202010043580.8A priority Critical patent/CN111242134A/en
Publication of CN111242134A publication Critical patent/CN111242134A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/13 Satellite images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a remote sensing image ground object segmentation method based on feature adaptive learning. The invention applies a semantic segmentation domain adaptation algorithm based on a generative adversarial network (GAN) to address segmentation across different feature domains and to handle pixel-level domain conversion in the segmentation space. The method introduces the feature domain adaptation technique into ground object segmentation of remote sensing images and can effectively segment planting greenhouses in remote sensing images.

Description

Remote sensing image ground object segmentation method based on feature adaptive learning
Technical Field
The invention belongs to the technical field of remote sensing image processing, and particularly relates to a method for segmenting greenhouse ground objects in remote sensing images based on feature domain adaptive learning.
Background
With the continuous development of human society and the continuous progress of science and technology, resource problems have become a serious issue worldwide. Faced with the questions of how global resources can continue to support the survival and development of human society and how people can master and utilize these resources as soon as possible, the rational use of remote sensing image resources is currently one of the most effective technical means of addressing them.
A remote sensing image is a film or photograph that records the electromagnetic waves of various ground objects, and is mainly divided into aerial photographs and satellite photographs. Remote sensing images have been widely applied in forestry, agriculture, geology, mineral resources, hydrology and water resources, oceanography, environmental monitoring and other fields, and have made great contributions to global economic and social development and to the sustainable development of resources.
The temporal and spatial resolution of current remote sensing images is continuously improving, producing large numbers of multi-scale remote sensing images. This big-data character of remote sensing imagery matches the large data requirements of deep learning, and processing remote sensing images with convolutional neural networks (CNN) has become an important means of analyzing remote sensing data. Establishing CNN-based analysis of multi-scale remote sensing images for forest fire prevention, military strategy, traffic management and similar applications will be a future development direction.
Recently, convolutional neural network based methods have made significant progress in semantic segmentation and have been applied to automated driving and image editing. The key to CNN-based methods is annotating a large number of images that cover the possible scene variations. However, such trained models may not generalize well to unseen images, especially when there is a domain difference between the training (source) images and the test (target) images. In remote sensing images, even a single type of ground object may show varied visual appearances, for example: 1) due to sensor differences, the same ground object acquired by different satellites has different visual characteristics in the image; 2) the geographic environments of different regions differ, so the visual characteristics of the same ground object, as well as the types, textures, colors and other characteristics of the surrounding ground objects, also differ markedly. Meanwhile, since remote sensing images are usually of large scale, manually annotating them for the various environments requires a great deal of labor cost.
Disclosure of Invention
The invention provides a remote sensing image ground object segmentation method based on feature adaptive learning, which aims to overcome the problems of existing methods. Semantic segmentation is regarded as structured output that contains the spatial similarity between the source domain and the target domain, and adversarial learning is adopted in the output space. To further enhance the adapted model, a multi-level adversarial network is constructed, which effectively completes domain adaptation of the output space at different feature layers. The results show that the method performs better in segmentation accuracy and quality, and can greatly reduce the labor cost required for manual annotation.
The remote sensing image ground object segmentation method based on feature adaptive learning disclosed by the invention comprises the following steps:
step 1, acquiring remote sensing pictures of northern and southern greenhouses of the same size from imaging equipment;
step 2, cutting the obtained remote sensing picture, taking the cut northern greenhouse remote sensing picture as a source set, taking the cut southern greenhouse remote sensing picture as a target set, and performing data cleaning on the two data sets to remove wrong data;
step 3, constructing a network framework, wherein the network framework comprises a generator network and a discriminator network, a Unet is selected as a segmentation network in the generator network, and a discriminator network D is composed of 5 convolutional layers;
step 4, training the generator network G: let the source domain image with manual annotation be I_s, its manual label be Y_s, and P_s = G(I_s) be the segmentation output of the source domain image; the segmentation loss between the output of the generator network G and the manual label is expressed in cross-entropy form as:
L_seg(I_s) = -Σ_{h,w} Σ_{c∈C} Y_s^{(h,w,c)} · log( P_s^{(h,w,c)} )   (1)
where C is the set of ground object categories and (h, w) indexes the pixels of the H × W segmentation output;
setting the target domain image without manual annotation as I_t, and P_t = G(I_t) as the segmentation output of the target domain image, the adversarial loss of the target domain image is expressed in Logistic form as:
L_adv(I_t) = -Σ_{h,w} log( D(P_t)^{(h,w,1)} )   (2)
where D(P_t)^{(h,w,1)} is the probability, estimated by the discriminator network D at pixel (h, w), that the target domain segmentation output comes from the source domain;
the goal of the generator network is to generate segmentation results for the source and target domains that are as accurate as possible, so as to fool the discriminator network D into being unable to distinguish which domain a segmentation result came from; the goal is therefore to minimize the total learning loss L(I_s, I_t);
step 5, training the discriminator network D: the source domain segmentation output P_s and the target domain segmentation output P_t are normalized by the sigmoid function, the two segmentation results are input into the discriminator network D, and the cross-entropy objective L_d is calculated as follows:
L_d(P) = Σ_{h,w} [ (1-z) · log( D(P)^{(h,w,0)} ) + z · log( D(P)^{(h,w,1)} ) ]   (3)
where z = 1 when the segmentation output P comes from the source domain and z = 0 when it comes from the target domain;
the segmentation output of the source domain image and the segmentation output of the target domain image are fed into the discriminator network, and the discriminator network is supervised by computing the discriminator loss on the segmentation results of the source and target domains; in training the discriminator network, the goal is to maximize L_d, so that the discriminator can distinguish the source domain from the target domain as well as possible and thereby assist the training of the generator;
step 6, combining the generator network G and the discriminator network D, and training the generator network G and the discriminator network D independently and alternately in each iteration;
in conjunction with equation (1) and equation (2), the total learning loss during the generator training process is as follows:
L(I_s, I_t) = L_seg(I_s) + λ_adv · L_adv(I_t)   (4)
where λ_adv is the weight used to balance the two losses;
the generator network G and the discriminator network D form a dynamic game during training, and the parameters of the generator network G and the discriminator network D are optimized according to the min-max objective
max_D min_G L(I_s, I_t)   (5);
and step 7, selecting the prepared target set pictures and inputting them into the trained network framework for testing.
Further, in step 2, the acquired remote sensing image is cropped into 512 × 512 patches using a sliding-window operation with an overlap ratio of 10%.
Further, the convolution kernels of the 5 convolutional layers in step 3 are all 4 × 4, the stride is 2, and the numbers of channels are 64, 128, 256, 512 and 1, respectively.
Compared with the prior art, the invention has the following advantages and beneficial effects: for data sets from different domains, the invention applies a semantic segmentation domain adaptation algorithm based on a generative adversarial network (GAN) to solve the problem of segmenting different feature domains and the problem of pixel-level domain conversion in the segmentation space. The method introduces the feature domain adaptation technique into ground object segmentation of remote sensing images and can effectively segment planting greenhouses in remote sensing images.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a remote sensing picture of a southern greenhouse taken;
FIG. 3 is a remote sensing picture of a northern greenhouse;
FIG. 4 shows a source set (northern greenhouse) picture;
FIG. 5 illustrates source set tags;
FIG. 6 shows a target set (southern greenhouse) picture;
FIG. 7 illustrates an object set tag;
FIG. 8 is a southern greenhouse picture segmented using Unet alone;
FIG. 9 is a southern greenhouse picture segmented using the method of the present invention.
Detailed Description
The technical solution of the present invention is further explained with reference to the drawings and the embodiments.
As shown in fig. 1, the method for segmenting a ground object in a remote sensing image based on feature adaptive learning provided by the present invention specifically includes the following steps:
(1) Image acquisition step: remote sensing pictures of northern and southern greenhouses of the same size are acquired from the imaging equipment. FIG. 2 shows a remote sensing picture of a southern greenhouse; FIG. 3 shows a remote sensing picture of a northern greenhouse;
(2) Image preprocessing step: the acquired remote sensing images are cropped into 512 × 512 patches using a sliding-window operation with 10% overlap; the cropped northern greenhouse remote sensing pictures are taken as the source set and the cropped southern greenhouse remote sensing pictures as the target set, and data cleaning is performed on the two data sets to remove erroneous data, including wrong manual annotations and mismatched image sizes. FIG. 4 shows a source set (northern greenhouse) picture; FIG. 5 shows the source set labels; FIG. 6 shows a target set (southern greenhouse) picture; FIG. 7 shows the target set labels;
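The sliding-window cropping of this step can be sketched as follows; this is a minimal illustrative example in Python/NumPy, assuming the scene is held as an H × W × C array and realizing the 10% overlap as a stride of int(512 × 0.9) = 460 pixels. The function name and the omission of padding for edge remainders are assumptions, not part of the original disclosure:

import numpy as np

def sliding_window_crop(image: np.ndarray, patch: int = 512, overlap: float = 0.10):
    """Cut an H x W x C remote sensing scene into patch x patch tiles with the given overlap."""
    stride = int(patch * (1.0 - overlap))            # 10% overlap -> stride of 460 pixels
    h, w = image.shape[:2]
    tiles = []
    for top in range(0, max(h - patch, 0) + 1, stride):
        for left in range(0, max(w - patch, 0) + 1, stride):
            tiles.append(image[top:top + patch, left:left + patch])
    return tiles

# Example: a 2000 x 3000 RGB scene yields a grid of overlapping 512 x 512 patches.
patches = sliding_window_crop(np.zeros((2000, 3000, 3), dtype=np.uint8))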
(3) Network framework construction step: the main framework of the method comprises a generator network and a discriminator network. After multiple groups of experiments, Unet is selected as the segmentation network in the generator network G; the discriminator network D consists of 5 convolutional layers whose kernels are all 4 × 4, with a stride of 2 and channel numbers of 64, 128, 256, 512 and 1, respectively;
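For illustration, a discriminator matching this description (5 convolutional layers, 4 × 4 kernels, stride 2, channel numbers 64, 128, 256, 512 and 1) might be sketched in PyTorch as below; the LeakyReLU activations and the assumption of a 2-channel segmentation input are conventional choices not specified in the original text:

import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Fully convolutional discriminator: 5 conv layers, 4x4 kernels, stride 2."""
    def __init__(self, in_channels: int = 2):        # assumed number of segmentation classes
        super().__init__()
        channels = [64, 128, 256, 512, 1]
        layers, prev = [], in_channels
        for i, ch in enumerate(channels):
            layers.append(nn.Conv2d(prev, ch, kernel_size=4, stride=2, padding=1))
            if i < len(channels) - 1:                 # no activation after the final 1-channel map
                layers.append(nn.LeakyReLU(0.2, inplace=True))
            prev = ch
        self.model = nn.Sequential(*layers)

    def forward(self, seg_map: torch.Tensor) -> torch.Tensor:
        # Returns a spatial map of source/target logits, downsampled by a factor of 32.
        return self.model(seg_map)

# A 512 x 512 segmentation output gives a 16 x 16 map of domain logits.
logits = Discriminator()(torch.randn(1, 2, 512, 512))
print(logits.shape)   # torch.Size([1, 1, 16, 16])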
(4) Generator network G training step. Let the source domain image with manual annotation be I_s, its manual label be Y_s, and P_s = G(I_s) be the segmentation output of the source domain image (the segmentation output has height H and width W). The segmentation loss between the output of the segmentation network (i.e., the Unet) and the manual label is expressed in cross-entropy form as:
L_seg(I_s) = -Σ_{h,w} Σ_{c∈C} Y_s^{(h,w,c)} · log( P_s^{(h,w,c)} )   (1)
where C is the set of ground object categories.
setting the target domain image without artificial mark as It,Pt=G(It) Is the segmentation output of the target domain image. Considering that the discriminator is a simple classification network, its resistance loss is expressed in Logistic form as:
Figure BDA0002368595720000042
The goal of the generator network is to generate segmentation results for the source and target domains that are as accurate as possible, so as to fool the discriminator network into being unable to distinguish which domain a segmentation result came from; the goal is therefore to minimize the total learning loss L(I_s, I_t);
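A hedged sketch of how the generator-side objective could be computed in PyTorch follows, combining the segmentation loss of equation (1) and the adversarial loss of equation (2) into the weighted sum that later appears as equation (4). The use of binary cross-entropy with a "source" label of 1, the sigmoid normalization of the prediction before the discriminator, and all variable names are illustrative assumptions:

import torch
import torch.nn.functional as F

def generator_losses(pred_source, label_source, pred_target, discriminator, lambda_adv=0.001):
    """pred_*: raw segmentation logits (N, C, H, W); label_source: class indices (N, H, W)."""
    # Equation (1): cross-entropy segmentation loss on the labelled source domain.
    l_seg = F.cross_entropy(pred_source, label_source)

    # Equation (2): adversarial loss -- push the target prediction to look like a source one.
    p_target = torch.sigmoid(pred_target)              # sigmoid normalization before D
    d_out = discriminator(p_target)
    source_label = torch.ones_like(d_out)               # pretend the target output is from the source
    l_adv = F.binary_cross_entropy_with_logits(d_out, source_label)

    # Weighted sum, corresponding to the total loss L(I_s, I_t) of equation (4) below.
    return l_seg + lambda_adv * l_adv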
(5) Discriminator network D training step.
The source domain segmentation output P_s and the target domain segmentation output P_t are normalized by the sigmoid function, the two segmentation results are input into the discriminator network D, and the cross-entropy objective L_d is calculated as follows:
L_d(P) = Σ_{h,w} [ (1-z) · log( D(P)^{(h,w,0)} ) + z · log( D(P)^{(h,w,1)} ) ]   (3)
where z = 1 when the segmentation output P comes from the source domain and z = 0 when it comes from the target domain.
The segmentation output of the source domain image and the segmentation output of the target domain image are fed into the discriminator network, and the discriminator network is supervised by computing the discriminator loss on the segmentation results of the source and target domains. In training the discriminator network, the goal is to maximize L_d, so that the discriminator can distinguish the source domain from the target domain as well as possible and thereby assist the training of the generator;
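Correspondingly, the discriminator update of equation (3) could be sketched as follows; minimizing this binary cross-entropy, with source label 1 and target label 0, is equivalent to maximizing the log-likelihood objective L_d, and the labelling convention and function name are illustrative assumptions:

import torch
import torch.nn.functional as F

def discriminator_loss(discriminator, pred_source, pred_target):
    """Binary cross-entropy on the sigmoid-normalized source and target segmentation outputs."""
    p_s = torch.sigmoid(pred_source.detach())   # detach: only the discriminator is updated here
    p_t = torch.sigmoid(pred_target.detach())
    d_s, d_t = discriminator(p_s), discriminator(p_t)
    loss_source = F.binary_cross_entropy_with_logits(d_s, torch.ones_like(d_s))    # z = 1 (source)
    loss_target = F.binary_cross_entropy_with_logits(d_t, torch.zeros_like(d_t))   # z = 0 (target)
    return loss_source + loss_target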
(6) Adversarial network learning step: the generator network and the discriminator network are combined, and the generator network G and the discriminator network D are trained independently and alternately in each iteration.
In conjunction with equation (1) and equation (2), the total learning loss during the generator training process is as follows:
L(I_s, I_t) = L_seg(I_s) + λ_adv · L_adv(I_t)   (4)
where λ_adv is the weight used to balance the two losses. After a number of tests, λ_adv is set to 0.001 in this invention.
The ultimate goals of the adversarial learning are:
1. to make the source domain images produce segmentation results that are as accurate as possible, i.e., to minimize the segmentation loss L_seg(I_s) of the source domain images in the generator network G;
2. to bring the target domain output as close as possible to the source domain output, i.e., to maximize the probability that a target domain prediction is regarded as a source domain prediction, which corresponds to minimizing the adversarial loss L_adv(I_t).
The generator network and the discriminator network form a dynamic game during training, and the parameters of the generator network and the discriminator network are optimized according to the min-max objective
max_D min_G L(I_s, I_t)   (5)
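Putting the pieces together, a minimal alternating-training loop in the spirit of step (6) is sketched below, reusing the generator_losses and discriminator_loss helpers from the sketches above. The optimizer types, learning rates, data-loader interface and epoch handling are illustrative assumptions, while λ_adv = 0.001 follows the value given above:

import torch

def train(generator, discriminator, source_loader, target_loader, epochs=1, lambda_adv=0.001):
    opt_g = torch.optim.Adam(generator.parameters(), lr=2.5e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
    for _ in range(epochs):
        for (img_s, label_s), (img_t, _) in zip(source_loader, target_loader):
            # Generator step: minimize L(I_s, I_t) = L_seg + lambda_adv * L_adv (equation (4)).
            opt_g.zero_grad()
            loss_g = generator_losses(generator(img_s), label_s, generator(img_t),
                                      discriminator, lambda_adv)
            loss_g.backward()
            opt_g.step()
            # Discriminator step: learn to tell source outputs from target outputs (equation (3)).
            opt_d.zero_grad()
            loss_d = discriminator_loss(discriminator, generator(img_s), generator(img_t))
            loss_d.backward()
            opt_d.step()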
(7) Target set image testing step: the prepared target set (southern greenhouse) pictures are selected as the test set. FIG. 8 shows a southern greenhouse picture segmented using the Unet alone, and FIG. 9 shows the experimental result obtained with the proposed remote sensing image ground object segmentation method based on feature adaptive learning.
The segmentation results of the experiment are first evaluated qualitatively. The pictures of the northern and southern greenhouse data sets differ greatly in appearance; when an ordinary segmentation network is trained on and used to segment the two data sets, the segmentation result obtained is very poor and hardly yields a usable segmentation map. Using the proposed remote sensing image ground object segmentation method based on feature adaptive learning, the approximate segmentation contours of the test images can be obtained, and the segmentation result is far better than that of FIG. 8. Meanwhile, the segmentation results are evaluated quantitatively using the mean absolute error (MAE); MAE is the mean of the pixel-level absolute errors and better reflects the actual deviation of the predictions, with a smaller MAE indicating a better segmentation result. The experimental results are shown in Table 1. Both the qualitative and quantitative evaluations show that, compared with an ordinary segmentation network, the method achieves better segmentation accuracy and quality.
TABLE 1 Experimental results

Reference standard    Segmentation using Unet only    Using the patented method presented herein
MAE                   0.838575940548                  0.543762722675
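For reference, the pixel-level mean absolute error used in Table 1 can be computed as in the small sketch below, assuming the predicted mask and the reference mask are arrays with values in [0, 1]; the exact preprocessing used in the experiments is not specified in the original text:

import numpy as np

def mean_absolute_error(pred: np.ndarray, reference: np.ndarray) -> float:
    """Pixel-level MAE between a predicted mask and its reference mask; smaller is better."""
    return float(np.mean(np.abs(pred.astype(np.float64) - reference.astype(np.float64))))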
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.

Claims (3)

1. A remote sensing image ground object segmentation method based on feature adaptive learning is characterized by comprising the following steps:
step 1, respectively acquiring remote sensing pictures of the south-north greenhouses with the same size from imaging equipment;
step 2, cutting the obtained remote sensing picture, taking the cut northern greenhouse remote sensing picture as a source set, taking the cut southern greenhouse remote sensing picture as a target set, and performing data cleaning on the two data sets to remove wrong data;
step 3, constructing a network framework, wherein the network framework comprises a generator network and a discriminator network, a Unet is selected as a segmentation network in a generator network G, and a discriminator network D is composed of 5 convolutional layers;
step 4, training the generator network G: let the source domain image with manual annotation be I_s, its manual label be Y_s, and P_s = G(I_s) be the segmentation output of the source domain image; the segmentation loss between the output of the generator network G and the manual label is expressed in cross-entropy form as:
L_seg(I_s) = -Σ_{h,w} Σ_{c∈C} Y_s^{(h,w,c)} · log( P_s^{(h,w,c)} )   (1)
where C is the set of ground object categories and (h, w) indexes the pixels of the H × W segmentation output;
setting the target domain image without manual annotation as I_t, and P_t = G(I_t) as the segmentation output of the target domain image, the adversarial loss of the target domain image is expressed in Logistic form as:
L_adv(I_t) = -Σ_{h,w} log( D(P_t)^{(h,w,1)} )   (2)
where D(P_t)^{(h,w,1)} is the probability, estimated by the discriminator network D at pixel (h, w), that the target domain segmentation output comes from the source domain;
the goal of the generator network is to generate segmentation results for the source and target domains that are as accurate as possible, so as to fool the discriminator network D into being unable to distinguish which domain a segmentation result came from; the goal is therefore to minimize the total learning loss L(I_s, I_t);
step 5, training the discriminator network D: the source domain segmentation output P_s and the target domain segmentation output P_t are normalized by the sigmoid function, the two segmentation results are input into the discriminator network D, and the cross-entropy objective L_d is calculated as follows:
L_d(P) = Σ_{h,w} [ (1-z) · log( D(P)^{(h,w,0)} ) + z · log( D(P)^{(h,w,1)} ) ]   (3)
where z = 1 when the segmentation output P comes from the source domain and z = 0 when it comes from the target domain;
adding the segmentation output of the source domain image and the segmentation output of the target domain image into the discriminator network, and supervising the discriminator network by computing the discriminator loss on the segmentation results of the source and target domains; in training the discriminator network, the goal is to maximize L_d, so that the discriminator can distinguish the source domain from the target domain as well as possible and thereby assist the training of the generator;
step 6, combining the generator network G and the discriminator network D, and training the generator network G and the discriminator network D independently and alternately in each iteration;
in conjunction with equation (1) and equation (2), the total learning loss during the generator training process is as follows:
L(I_s, I_t) = L_seg(I_s) + λ_adv · L_adv(I_t)   (4)
where λ_adv is the weight used to balance the two losses;
the generator network G and the discriminator network D form a dynamic game during training, and the parameters of the generator network G and the discriminator network D are optimized according to the min-max objective
max_D min_G L(I_s, I_t)   (5);
and step 7, selecting the prepared target set pictures and inputting them into the trained network framework for testing.
2. The method for segmenting remote sensing image ground objects based on feature adaptive learning as claimed in claim 1, characterized in that: in step 2, the acquired remote sensing image is cropped into 512 × 512 patches using a sliding-window operation with an overlap ratio of 10%.
3. The method for segmenting remote sensing image ground objects based on feature adaptive learning as claimed in claim 1, characterized in that: the convolution kernels of the 5 convolutional layers in step 3 are all 4 × 4, the stride is 2, and the numbers of channels are 64, 128, 256, 512 and 1, respectively.
CN202010043580.8A 2020-01-15 2020-01-15 Remote sensing image ground object segmentation method based on feature adaptive learning Pending CN111242134A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010043580.8A CN111242134A (en) 2020-01-15 2020-01-15 Remote sensing image ground object segmentation method based on feature adaptive learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010043580.8A CN111242134A (en) 2020-01-15 2020-01-15 Remote sensing image ground object segmentation method based on feature adaptive learning

Publications (1)

Publication Number Publication Date
CN111242134A true CN111242134A (en) 2020-06-05

Family

ID=70876150

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010043580.8A Pending CN111242134A (en) 2020-01-15 2020-01-15 Remote sensing image ground object segmentation method based on feature adaptive learning

Country Status (1)

Country Link
CN (1) CN111242134A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109190707A (en) * 2018-09-12 2019-01-11 深圳市唯特视科技有限公司 A kind of domain adapting to image semantic segmentation method based on confrontation study
CN110516539A (en) * 2019-07-17 2019-11-29 苏州中科天启遥感科技有限公司 Remote sensing image building extracting method, system, storage medium and equipment based on confrontation network
GB201910720D0 (en) * 2019-07-26 2019-09-11 Tomtom Global Content Bv Generative adversarial Networks for image segmentation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YI-HSUAN TSAI et al.: "Learning to Adapt Structured Output Space for Semantic Segmentation", arXiv:1802.10349v1 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112926585A (en) * 2021-01-11 2021-06-08 深圳大学 Cross-domain semantic segmentation method based on regenerative kernel Hilbert space
CN112926585B (en) * 2021-01-11 2023-07-28 深圳大学 Cross-domain semantic segmentation method based on regeneration kernel Hilbert space
CN114022762A (en) * 2021-10-26 2022-02-08 三峡大学 Unsupervised domain self-adaption method for extracting area of crop planting area
CN115830597A (en) * 2023-01-05 2023-03-21 安徽大学 Domain self-adaptive remote sensing image semantic segmentation method from local to global based on pseudo label generation
CN115830597B (en) * 2023-01-05 2023-07-07 安徽大学 Domain self-adaptive remote sensing image semantic segmentation method from local to global based on pseudo tag generation


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200605