CN111985376A - Remote sensing image ship contour extraction method based on deep learning - Google Patents
Remote sensing image ship contour extraction method based on deep learning
- Publication number: CN111985376A
- Application number: CN202010812673.2A
- Authority
- CN
- China
- Prior art keywords
- remote sensing
- sensing image
- target detection
- target
- contour extraction
- Prior art date: 2020-08-13
- Legal status: Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
Abstract
The invention provides a remote sensing image ship contour extraction method based on deep learning, which comprises the following steps: S1: performing target recognition and localization on the acquired remote sensing image with a target detection method based on a convolutional neural network to obtain a target detection result map; S2: performing semantic segmentation on the acquired remote sensing image with a fully convolutional network to obtain a segmentation map corresponding to the target detection result map; S3: fusing the target detection result map with its corresponding segmentation map to obtain the contour extraction result. On one hand, a target detection model based on a convolutional neural network accurately localizes the ship target; on the other hand, a semantic segmentation model based on a fully convolutional network accurately extracts the contour of the ship target within the localized region. The method reduces the interference introduced by manual operation, realizes automatic and intelligent ship contour extraction from remote sensing images, and effectively improves interpretation accuracy and efficiency.
Description
Technical Field
The invention relates to the technical field of remote sensing image processing and information extraction, and in particular to a remote sensing image ship contour extraction method based on deep learning.
Background
Remote sensing (RS) refers to a non-contact technique for sensing targets from a distance [1]. Remote sensing technology is widely applied in agriculture, national defense, oceanography and other fields, and is of great significance for advancing national infrastructure construction, national defense construction and the development of the national economy.
Remote sensing images are the main storage form of imaging remote sensing data and are essential for later remote sensing information extraction and application, which makes remote sensing image processing particularly important. The processing of remote sensing images can be roughly divided into two parts: first, pixel-level processing, such as radiometric correction, geometric correction, projection transformation and mosaicking; second, feature- and semantic-level processing, such as feature extraction, image classification, image recognition and extraction of ground-object contours. Ground-object target detection and contour extraction in remote sensing images have important military significance and civil value: they provide important information and technical support for military reconnaissance, military surveying and mapping, and terminal image guidance [2]; at the same time, they offer important guidance for resource investigation, rural planning, urban planning, natural disaster monitoring, resource surveys and the like. Although corresponding methods exist for remote sensing ground-object detection and contour extraction, automatic processing algorithms struggle to meet the requirements of actual production because of the high complexity of the targets and the limits of current theory and technology [2], so manual extraction or interactive processing is mostly used in practice. However, ground-object target detection and contour extraction based mainly on manual extraction consume large amounts of manpower and time, and the working efficiency is low. In addition, interactive ground-object target detection and contour extraction are strongly affected by human factors, and their accuracy is generally not high.
In recent years, deep learning techniques have developed rapidly in the field of artificial intelligence (AI); deep learning algorithms have been successfully applied to computer vision, speech recognition, machine translation and other fields, and have even exceeded human performance on some tasks. Compared with traditional remote sensing ground-object detection and contour extraction methods, deep-learning-based methods are built on deep neural network models of various types: the neural network is trained on large-scale data and then used to complete the prediction task, so that the deep neural network model acquires strong generalization and feature expression capabilities. Among deep learning methods, the convolutional neural network (CNN) model, thanks to its weight sharing and pooling strategies, has become the mainstream way of solving tasks such as two-dimensional image classification and target detection; a convolutional neural network is composed of an input layer, convolutional layers, pooling layers, fully connected layers and an output layer, and can quickly and efficiently solve scene-level tasks such as deep feature extraction, image classification and target detection. The fully convolutional network (FCN) [4], derived from the convolutional neural network, overcomes the obstacle that the pooling layers in a CNN reduce the spatial resolution of the two-dimensional feature maps and make the CNN hard to apply directly to pixel-level image classification: the FCN upsamples the two-dimensional feature maps with trainable transposed convolution layers, thereby better solving pixel-level semantic segmentation, and has become an important method for tasks such as two-dimensional image pixel-level classification and contour extraction.
Traditional remote sensing ground-object target detection and contour extraction methods and manual interpretation offer neither high prediction accuracy nor high prediction efficiency, and cannot meet the requirements of the big data era. To address this situation, the invention provides an integrated deep-learning-based model for ground-object target detection and contour extraction in remote sensing images, applying deep learning techniques to remote sensing imagery with the aim of improving working efficiency and accuracy, reducing manual operation, and realizing fully automatic ground-object target detection and contour extraction. The proposed model has the advantage of full automation; the whole extraction process is free from interference by human factors, making it more accurate, reliable, rapid, efficient, automated and intelligent.
Disclosure of Invention
The invention aims to provide a remote sensing image ship contour extraction method based on deep learning that reduces the interference introduced by manual operation, realizes automatic and intelligent ship contour extraction from remote sensing images, and effectively improves interpretation accuracy and efficiency.
The invention provides a remote sensing image ship contour extraction method based on deep learning, which comprises the following steps:
S1: performing target recognition and localization on the acquired remote sensing image with a target detection method based on a convolutional neural network to obtain a target detection result map;
S2: performing semantic segmentation on the acquired remote sensing image with a fully convolutional network to obtain a segmentation map corresponding to the target detection result map;
S3: fusing the target detection result map with its corresponding segmentation map to obtain the contour extraction result.
Further, step S1 includes:
S1.1: processing the remote sensing image with a convolutional-neural-network region generation algorithm to produce candidate regions, extracting features of the candidate regions with a convolutional neural network, and classifying the features to recognize and localize the targets within the regions;
S1.2: correcting the initial coordinates of the extracted targets with a bounding-box regression algorithm, and deleting redundant target boxes with a non-maximum suppression algorithm to obtain the final detection result map.
Further, the target detection method is the Faster R-CNN target detection algorithm, and step S1.1 includes:
S1.1.1: inputting the remote sensing image to be detected into the Faster R-CNN target detection algorithm and extracting the corresponding feature map of the image with VGG16;
S1.1.2: generating a series of candidate boxes on the feature map extracted by VGG16 with the RPN region proposal network;
S1.1.3: after RoI Pooling, feeding the processed result into the R-CNN detection head to perform coordinate regression and class detection of the candidate boxes, obtaining the coordinates of the target boxes, the target classes and the class confidences.
Further, a loss function of the Faster R-CNN target detection algorithm is determined, and the Faster R-CNN target detection algorithm is trained with a back propagation algorithm so that the loss function falls to an appropriate value.
Further, the loss function of the Faster R-CNN target detection algorithm is:

$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^*) + \gamma\frac{1}{N_{reg}}\sum_i p_i^* L_{reg}(t_i, t_i^*)$$

where $i$ denotes the index of an anchor box in the remote sensing image, $p_i$ the predicted probability that anchor $i$ is a target, and $t_i = (t_x, t_y, t_w, t_h)$ a vector of the 4 parameterized coordinates of the predicted bounding box; $p_i^*$ denotes the ground-truth label of the anchor box and $t_i^*$ the parameterized coordinates of the ground-truth bounding box;

IoU denotes the ratio of the intersection area to the union area of the anchor box and the ground-truth box; $p_i^* = 0$ when IoU $\in [0, 0.3)$ and $p_i^* = 1$ when IoU $\in (0.7, 1]$. $(x, y, w, h)$ denote the center coordinates and the width and height of the predicted box, $(x^*, y^*, w^*, h^*)$ those of the ground-truth box, and $(x_a, y_a, w_a, h_a)$ those of the anchor box, with

$$t_x = (x - x_a)/w_a,\quad t_y = (y - y_a)/h_a,\quad t_w = \log(w/w_a),\quad t_h = \log(h/h_a)$$
$$t_x^* = (x^* - x_a)/w_a,\quad t_y^* = (y^* - y_a)/h_a,\quad t_w^* = \log(w^*/w_a),\quad t_h^* = \log(h^*/h_a)$$
Further, the back propagation algorithm includes the chain rule of differentiation and/or gradient descent.
Further, the loss function of the fully convolutional network is:

$$L = -\frac{1}{wh}\sum_{i=1}^{w}\sum_{j=1}^{h} p_{i,j}^{*}\cdot\log p_{i,j}$$

where $w$ and $h$ denote the width and height of the prediction map; $p_{i,j}^{*}$ is the ground-truth probability distribution vector over the class channels at position $(i, j)$, in which the probability is 1 on exactly one channel and 0 on all others; $p_{i,j}$ is the predicted probability distribution vector over the class channels at $(i, j)$, whose channel probabilities sum to 1;
The fully convolutional network is trained with a back propagation algorithm until its loss function falls to an appropriate value, yielding the trained fully convolutional network, which is then used to perform semantic segmentation on the acquired remote sensing image.
Further, step S2 includes:
S2.1: extracting the feature map of the input remote sensing image with the base network of the trained fully convolutional network;
S2.2: upsampling the feature map to the size of the input image with the transposed convolution layers of the trained fully convolutional network;
S2.3: classifying the channels of the feature map of the same size as the input image to obtain the segmentation map of the remote sensing image.
Further, before the remote sensing image is input into the convolutional neural network and the fully convolutional network, its pixel values are first normalized.
Further, the coordinates of the target boxes, the target classes with their class confidences, and the corresponding segmentation map are fused to obtain the contour extraction result.
The invention has the following beneficial effects:
(1) The invention applies artificial intelligence and deep learning algorithms to intelligent interpretation tasks such as remote sensing image target detection and semantic segmentation; compared with traditional algorithms, deep network models based on deep learning can extract deep features from the imagery, effectively improving interpretation accuracy and efficiency;
(2) The invention improves the level of remote sensing image ship contour extraction by combining multiple networks: on one hand, a target detection model based on a convolutional neural network accurately localizes the ship target; on the other hand, a semantic segmentation model based on a fully convolutional network accurately extracts the contour of the ship target within the localized region;
(3) The method integrates remote sensing image target detection and semantic segmentation into a single processing pipeline, reduces the interference introduced by manual operation, and realizes automatic and intelligent ship contour extraction from remote sensing images.
Drawings
FIG. 1 is a schematic flow chart of a remote sensing image ship contour extraction method based on deep learning according to the invention;
FIG. 2 is a diagram of the basic network structure employed by the Faster R-CNN target detection algorithm;
FIG. 3 is a diagram of the basic network architecture employed by the FCN;
FIG. 4 is an exemplary diagram of the HRSC2016 dataset: FIG. 4(a) is an example of the ship-docking subset, and FIG. 4(b) is an example of the ships-at-sea subset;
FIG. 5 is a vertical and rotated rectangular box labeled view;
FIG. 6 is a diagram of the Faster R-CNN detection results on the second-level ship categories;
FIG. 7 is a diagram of the Faster R-CNN detection results on the first-level ship category;
fig. 8 is a diagram of remote sensing images and corresponding semantic segmentation labels in the Vaihingen dataset: FIG. 8(a) is a video image, and FIG. 8(b) is a category label map;
FIG. 9 is a pixel level ship mask map;
FIG. 10 is a ship mask map with three levels of class information converted into two classes of mask map;
FIG. 11 is an exemplary illustration of vessel segmentation by FCN on HRSC 2016;
FIG. 12 is a schematic view of an integrated model process for vessel inspection and contour extraction;
fig. 13 is a diagram of ship detection and contour extraction results.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be further described with reference to the accompanying drawings.
Referring to fig. 1, the invention provides a remote sensing image ship contour extraction method based on deep learning that integrates the two processes of target detection and semantic segmentation into a single model, achieving rapid localization and contour extraction of ships in high-resolution remote sensing images; it is a novel high-resolution remote sensing image ship contour extraction method with a high degree of automation and intelligence.
The method comprises three parts: target recognition and localization based on a convolutional neural network, semantic segmentation based on a fully convolutional network, and the integrated processing of target detection and semantic segmentation. Specifically, the method comprises the following steps:
S1: performing target recognition and localization on the acquired remote sensing image with a target detection method based on a convolutional neural network to obtain a target detection result map;
S2: performing semantic segmentation on the acquired remote sensing image with a fully convolutional network to obtain a segmentation map corresponding to the target detection result map;
S3: fusing the target detection result map with its corresponding segmentation map to obtain the contour extraction result.
For step S1, it can be specifically subdivided into:
S1.1: processing the remote sensing image with a convolutional-neural-network region generation algorithm to produce candidate regions, extracting features of the candidate regions with a convolutional neural network, and classifying the features to recognize and localize the targets within the regions;
S1.2: correcting the initial coordinates of the extracted targets with a bounding-box regression algorithm, and deleting redundant target boxes with a non-maximum suppression algorithm to obtain the final detection result map, as sketched below.
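To make the suppression step concrete, the following is a minimal NumPy sketch of IoU-based non-maximum suppression (an illustrative sketch, not the patent's implementation):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression.

    boxes:  (N, 4) corner coordinates [x1, y1, x2, y2]
    scores: (N,)   confidence scores
    Returns the indices of the boxes that survive suppression.
    """
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]            # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU of the winning box with every remaining box
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou < iou_thresh]   # drop heavily overlapping boxes
    return keep
```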
Preferably, the target detection method is the Faster R-CNN target detection algorithm; the base network adopted by the algorithm is ResNet-152v1, and the specific network structure is shown in FIG. 2. On this basis, step S1.1 can be further refined as:
S1.1.1: inputting the remote sensing image to be detected into the Faster R-CNN target detection algorithm and extracting the corresponding feature map of the image with VGG16;
S1.1.2: generating a series of candidate boxes on the feature map extracted by VGG16 with the RPN region proposal network;
S1.1.3: after RoI Pooling, feeding the processed result into the Fast R-CNN detection head to perform coordinate regression and class detection of the candidate boxes, obtaining the coordinates of the target boxes, the target classes and the class confidences, as sketched below.
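For illustration, a minimal inference sketch with torchvision's off-the-shelf Faster R-CNN follows; the ResNet-50 FPN backbone here is a stand-in for the VGG16/ResNet-152 configuration described above, and the placeholder input image is hypothetical:

```python
import torch
import torchvision

# Off-the-shelf Faster R-CNN (ResNet-50 FPN backbone) as a stand-in for
# the configuration described in the patent; weights download on first use.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 600, 800)     # placeholder image, values in [0, 1]
with torch.no_grad():
    pred = model([image])[0]        # backbone -> RPN -> RoI head

# Coordinates of the target boxes, target classes and class confidences
boxes, labels, scores = pred["boxes"], pred["labels"], pred["scores"]
```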
To improve accuracy, the Faster R-CNN target detection algorithm model must be trained before use; the aim of training is to make the parameters approximate the true model as closely as possible. Before training, a function relating the true targets to the network predictions, called the loss function, is defined first. The loss function reflects how well the model fits the data: the better the fit, the smaller the loss value; conversely, the worse the fit, the larger the loss value.
The loss function of the Faster R-CNN target detection algorithm mainly comprises a classification loss term and a box-coordinate regression loss term:

$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^*) + \gamma\frac{1}{N_{reg}}\sum_i p_i^* L_{reg}(t_i, t_i^*)$$

where $i$ denotes the index of an anchor box in the remote sensing image, $p_i$ the predicted probability that anchor $i$ is a target, and $t_i = (t_x, t_y, t_w, t_h)$ a vector of the 4 parameterized coordinates of the predicted bounding box; $p_i^*$ denotes the ground-truth label of the anchor box and $t_i^*$ the parameterized coordinates of the ground-truth bounding box. In general, $N_{cls}$ is set to 256, $N_{reg}$ to 2000, and $\gamma$ to 10.

IoU denotes the ratio of the intersection area to the union area of the anchor box and the ground-truth box; $p_i^* = 0$ when IoU $\in [0, 0.3)$ and $p_i^* = 1$ when IoU $\in (0.7, 1]$. $(x, y, w, h)$ denote the center coordinates and the width and height of the predicted box, $(x^*, y^*, w^*, h^*)$ those of the ground-truth box, and $(x_a, y_a, w_a, h_a)$ those of the anchor box, with

$$t_x = (x - x_a)/w_a,\quad t_y = (y - y_a)/h_a,\quad t_w = \log(w/w_a),\quad t_h = \log(h/h_a)$$
$$t_x^* = (x^* - x_a)/w_a,\quad t_y^* = (y^* - y_a)/h_a,\quad t_w^* = \log(w^*/w_a),\quad t_h^* = \log(h^*/h_a)$$
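A minimal NumPy sketch of the parameterized coordinate encoding defined above (illustrative only):

```python
import numpy as np

def encode_box(box, anchor):
    """Regression targets (t_x, t_y, t_w, t_h) for one box/anchor pair;
    box and anchor are (x_center, y_center, w, h)."""
    x, y, w, h = box
    xa, ya, wa, ha = anchor
    return np.array([(x - xa) / wa,      # t_x
                     (y - ya) / ha,      # t_y
                     np.log(w / wa),     # t_w
                     np.log(h / ha)])    # t_h

# A predicted box slightly offset from its anchor
t = encode_box(box=(105.0, 98.0, 64.0, 30.0), anchor=(100.0, 100.0, 60.0, 32.0))
```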
After the loss function of the Faster R-CNN target detection algorithm is defined, the network model is trained with the back propagation algorithm. Back propagation is the key parameter-update algorithm for training neural networks; its core is the chain rule of differentiation and gradient descent. The chain rule is the method for differentiating composite functions, and gradient descent is an algorithm for seeking the minimum of an objective function. In deep learning the objective function is generally the loss function: the parameters are updated continuously through back propagation so that the loss function keeps decreasing, approaches the minimum, and finally fluctuates near it. The fitting error of the model is thereby reduced to a small value, completing the training of the Faster R-CNN target detection algorithm model.
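As a toy illustration of gradient descent (not part of the patent), consider minimizing the one-parameter squared-error loss $L(\theta) = (\theta - 3)^2$:

```python
# Minimize L(theta) = (theta - 3)^2, whose minimum is at theta = 3.
theta, lr = 0.0, 0.1
for step in range(100):
    grad = 2 * (theta - 3)   # dL/dtheta by the chain rule
    theta -= lr * grad       # gradient-descent update; the loss shrinks
# theta is now close to 3, i.e. the loss fluctuates near its minimum
```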
A convolutional neural network (CNN) can solve tasks such as deep feature extraction, image scene classification and image target detection, but its pooling layers reduce the spatial resolution of the two-dimensional feature maps, and directly interpolating the extracted deep features back to the original resolution severely degrades the accuracy of the output classification; the CNN model is therefore difficult to apply directly to high-accuracy semantic segmentation and contour extraction tasks.
The method adopts the classic fully convolutional network (FCN) algorithm as one of the constituent models for extracting ship target contours from remote sensing images; the FCN adopts ResNet-18v2 as its base network, and the specific network structure is shown in FIG. 3.
The loss function of the fully convolutional network (FCN) is a cross-entropy loss, whose expression is:

$$L = -\frac{1}{wh}\sum_{i=1}^{w}\sum_{j=1}^{h} p_{i,j}^{*}\cdot\log p_{i,j}$$

where $w$ and $h$ denote the width and height of the prediction map; $p_{i,j}^{*}$ is the ground-truth probability distribution vector over the class channels at position $(i, j)$, in which the probability is 1 on exactly one channel and 0 on all others; and $p_{i,j}$ is the predicted probability distribution vector over the class channels at $(i, j)$, whose channel probabilities sum to 1. After the loss function of the FCN is defined, a back propagation algorithm consistent with the one used for the Faster R-CNN-based target detection method is used to train the FCN model and to update and optimize the model weight parameters, completing the training of the fully convolutional network model.
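A minimal NumPy sketch of the pixel-level cross-entropy loss defined above (illustrative only; the class channels are assumed to be the last axis):

```python
import numpy as np

def pixel_cross_entropy(pred, label, eps=1e-12):
    """Mean cross-entropy over a w x h prediction map.

    pred:  (w, h, C) per-pixel class probabilities (sum to 1 over C)
    label: (w, h)    integer ground-truth class of each pixel
    """
    w, h, _ = pred.shape
    # probability predicted on the true class channel at each (i, j)
    p_true = pred[np.arange(w)[:, None], np.arange(h)[None, :], label]
    return -np.log(p_true + eps).mean()

# A near-perfect prediction of class 0 everywhere gives a small loss
pred = np.stack([np.full((4, 4), 0.9), np.full((4, 4), 0.1)], axis=-1)
loss = pixel_cross_entropy(pred, np.zeros((4, 4), dtype=int))
```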
Step S2 may be subdivided into:
S2.1: extracting the feature map of the input remote sensing image with the base network of the trained fully convolutional network;
S2.2: upsampling the feature map to the size of the input image with the transposed convolution layers of the trained fully convolutional network;
S2.3: classifying the channels of the feature map of the same size as the input image to obtain the segmentation map of the remote sensing image, as sketched below.
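The three steps can be sketched with a toy PyTorch FCN (a minimal stand-in for the ResNet-18-based FCN described above, not the patent's implementation):

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Toy FCN mirroring steps S2.1-S2.3: a small convolutional base
    network, a transposed-convolution upsampling layer, and per-pixel
    classification."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.base = nn.Sequential(                      # S2.1: feature map
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        # S2.2: transposed convolution upsamples the 1/4-resolution
        # feature map back to the input size
        self.up = nn.ConvTranspose2d(32, num_classes, kernel_size=4, stride=4)

    def forward(self, x):
        logits = self.up(self.base(x))      # (N, num_classes, H, W)
        return logits.argmax(dim=1)         # S2.3: per-pixel class map

seg_map = TinyFCN()(torch.rand(1, 3, 128, 128))   # -> shape (1, 128, 128)
```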
Before the remote sensing images are input into the convolutional neural network and the fully convolutional network, their pixel values are first normalized: because the remote sensing images differ in resolution, the shortest side of each image is first scaled to 600 pixels (with a corresponding limit on the longest side), and the pixel values are normalized at the same time.
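A minimal preprocessing sketch, assuming OpenCV is available for resizing (the 600-pixel shortest side follows the setting above):

```python
import numpy as np
import cv2  # OpenCV, assumed available for resizing

def preprocess(img, short_side=600):
    """Scale the shortest image side to 600 px and normalize pixel
    values to [0, 1], mirroring the preprocessing described above."""
    h, w = img.shape[:2]
    scale = short_side / min(h, w)
    img = cv2.resize(img, (int(round(w * scale)), int(round(h * scale))))
    return img.astype(np.float32) / 255.0
```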
Step S3 specifically includes: fusing the coordinates of the target boxes, the target classes with their class confidences, and the corresponding segmentation map to obtain the contour extraction result.
The following description will be given by way of specific examples.
(1) Target recognition and localization based on the Faster R-CNN network model
HRSC2016 target detection dataset
Faster R-CNN is used as the base network model for target recognition and localization, and is trained and tested on the High Resolution Ship Collection 2016 (HRSC2016) dataset. The HRSC2016 dataset is derived from Google Earth and has a relatively simple background (typically sea or harbour; as shown in FIG. 4, FIG. 4(a) is an example of the ship-docking subset and FIG. 4(b) of the ships-at-sea subset). The resolution of the images used to construct the dataset lies between 0.4 and 2 meters, and the image sizes range from 300 × 300 to 1500 × 900.
The images in the HRSC2016 dataset were captured at six harbours in Russia and the USA, and the ships in the dataset are categorized in a three-level hierarchy. The first level is the ship class; the second level comprises 4 categories: aircraft carriers, warships, merchant ships and submarines; the third level further subdivides the four second-level categories by specific ship model. The dataset is annotated in several ways: vertical rectangular box annotations, rotated rectangular box annotations, and pixel-level ship mask annotations. The vertical and rotated rectangular box annotations are shown in FIG. 5.
The training of the target detection network uses the vertical-box annotations; the annotation files are XML files with one file per image, recording mainly the image name, the image depth, and the vertical-box and rotated-box annotations of the targets in the image.
Faster R-CNN network model training setup
The HRSC2016 target detection dataset comprises 1055 images in total, divided into two parts: a training dataset and a validation dataset. The training dataset contains 792 images, about 75% of the data; the validation dataset contains 263 images, about 25%. With the Faster R-CNN network structure established (the base network borrows ResNet-152v1 with pre-trained weights), the parameters in the network are updated iteratively with the back propagation algorithm according to the defined loss function; the parameter optimization method is the stochastic gradient descent (SGD) algorithm, and random horizontal flipping and multi-scale scaling are used for data augmentation during training.
The initial learning rate in training is 0.001 and the batch size is set to 1. Because Faster R-CNN is a two-stage target detection network (candidate boxes are generated first, and the box coordinates and in-box object categories are then judged), the quality of the candidate boxes generated by the RPN has a direct influence on the final detection result. Some hyper-parameters of the RPN therefore need to be set according to the actual situation, such as the aspect ratios and the scales of the generated candidate boxes: the default aspect ratios are [0.5, 1.0, 2.0] and the default scales are [8, 16, 32]. The experiment computes the aspect ratios and areas corresponding to the boxes annotated in the HRSC2016 dataset and performs K-means clustering on the resulting series of aspect ratios and areas, obtaining aspect ratios of [0.45, 0.7, 0.95, 1.9] and scales of [10, 17.5, 27.5], as sketched below. During training, Faster R-CNN is trained for 80 epochs in total; the first 50 epochs use the initial learning rate, and in each of the last 30 epochs the learning rate decays to $10^{-0.1}$ times that of the previous epoch. Training takes about 38 hours.
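The anchor clustering can be sketched as plain K-means over the annotated box aspect ratios (a minimal illustration on synthetic box sizes; the ratios [0.45, 0.7, 0.95, 1.9] quoted above come from the real HRSC2016 annotations):

```python
import numpy as np

def kmeans_1d(values, k=4, iters=50, seed=0):
    """Plain 1-D k-means, here applied to box aspect ratios to replace
    the default RPN ratios [0.5, 1.0, 2.0]; the same procedure applies
    to box areas for the scales."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        # assign each value to its nearest center, then recompute centers
        assign = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        centers = np.array([values[assign == j].mean() if (assign == j).any()
                            else centers[j] for j in range(k)])
    return np.sort(centers)

# Synthetic (w, h) box sizes stand in for the HRSC2016 annotations here
boxes = np.abs(np.random.default_rng(1).normal([60, 80], 20, size=(200, 2))) + 1.0
ratios = kmeans_1d(boxes[:, 0] / boxes[:, 1], k=4)
```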
Prediction of Faster R-CNN on the HRSC2016 target detection dataset
Based on the experimental parameters and strategy settings introduced above, the invention tests and validates on the first-level category (i.e., the ship target) and the second-level categories (i.e., the four target types: aircraft carrier, warship, merchant ship and submarine) of the HRSC2016 dataset. The detection results on the second-level categories are shown in FIG. 6; the overall second-level prediction results are good: no targets are missed, the category judgments are highly accurate, and the box coordinates are precise. The detection results on the first-level category are shown in FIG. 7.
As can be seen from FIG. 7, the detection results on the first-level category are also good: not only are the annotated ships correctly classified and accurately boxed, but ships without annotations are detected as well, showing good generalization. The accuracy of the first- and second-level detection results is evaluated, and the evaluation results are shown in Table 1:
TABLE 1. Accuracy evaluation of the first- and second-level detection results
(2) Semantic segmentation based on FCN network model
The FCN is used as the base network model for pixel-level semantic segmentation: it is first pre-trained on the ISPRS Vaihingen semantic segmentation dataset, and model fine-tuning is then performed on the HRSC2016 dataset starting from the pre-trained weights.
a. Introduction of the ISPRS Vaihingen semantic segmentation dataset and training setup
The images in the ISPRS Vaihingen semantic segmentation dataset are near-infrared remote sensing images containing 6 ground-object classes: impervious surfaces (RGB: 255, 255, 255), buildings (RGB: 0, 0, 255), low vegetation (RGB: 0, 255, 255), trees (RGB: 0, 255, 0), cars (RGB: 255, 255, 0) and background (RGB: 255, 0, 0). An image and its corresponding label map are shown in FIG. 8 (FIG. 8(a) is the image, and FIG. 8(b) is the category label map):
the Vaihingen data set contains 33 remote sensing images of the area, corresponding to a total of 33 images in TIF format. The FCN pre-training process adopts 16 remote sensing images, each image is large and needs to be cut, the original image is cut in a step size of 48 to obtain a series of small images with the size of 128 x 128, corresponding cutting is carried out on corresponding label images, and finally 23449 data sets of images with the size of 128 x 128 are obtained.
During training, the Vaihingen dataset is first split into two parts: a training dataset and a validation dataset. The training dataset contains 21105 patches, about 90% of the data; the validation dataset contains 2344 patches, about 10%. The FCN network structure is then built (the base network uses pre-trained ResNet-18v2) with a cross-entropy loss function; the parameters in the network are updated iteratively with the back propagation algorithm, the parameter optimization method is stochastic gradient descent (SGD), and random horizontal flipping is used for data augmentation in the experiment.
The specific experiment adopts mini-batch stochastic gradient descent; the learning rate (learning step size) is initially set to 0.001 for 100 training epochs in total, kept constant for the first 50 epochs and decayed to 0.5 times that of the previous epoch in each of the last 50 epochs. The number of samples input per parameter update, called the batch size, is set to 128 in the experiment. After training is completed, the per-pixel prediction accuracy on the Vaihingen validation data is 92.53%.
b. Fine-tuning training and prediction results on the HRSC2016 dataset
After the FCN has been pre-trained on the Vaihingen dataset, the pixel-level ship mask annotations of the HRSC2016 dataset are used as training samples for model fine-tuning. The pixel-level ship mask training data are illustrated in FIG. 9.
The ship contour label maps carry third-level category information, as shown in FIG. 9; since fine-grained category information is not required for ship contour extraction, the ship mask label maps with third-level category information are converted programmatically into two-class (ship / non-ship) mask maps, as shown in FIG. 10.
As shown in FIG. 10, the mask of each ship region carrying third-level category information is turned white, and all other regions are black; the data are thereby converted into two-class (ship / non-ship) contour segmentation training data suitable for this work, as in the sketch below.
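A minimal sketch of the mask conversion, assuming hypothetical RGB codes for the fine-grained ship classes (the actual colors are those used by the HRSC2016 mask annotations):

```python
import numpy as np

def binarize_mask(mask_rgb, ship_colors):
    """Collapse a fine-grained ship mask into a two-class mask: pixels
    matching any entry of ship_colors become white (255), all other
    pixels black (0)."""
    out = np.zeros(mask_rgb.shape[:2], dtype=np.uint8)
    for color in ship_colors:
        out[np.all(mask_rgb == np.asarray(color), axis=-1)] = 255
    return out

# ship_colors below are hypothetical RGB codes for two ship sub-classes
binary = binarize_mask(np.zeros((8, 8, 3), dtype=np.uint8),
                       ship_colors=[(0, 100, 0), (0, 0, 100)])
```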
Fine-tuning on HRSC2016 is performed with the FCN model trained on the Vaihingen dataset; the other settings remain the same as for training without pre-trained weights. The final validation accuracy is 97.40%. The segmentation results on HRSC2016 are shown in FIG. 11.
(3) Integrated processing of object detection and semantic segmentation
An integrated processing pipeline for ship contour extraction is constructed from the fully trained and tested Faster R-CNN and FCN networks. First, ship detection is performed with Faster R-CNN on a remote sensing image containing ships, yielding the coordinates of the ship detection boxes in the image together with the category and the confidence of the ship in each box. The same remote sensing image is likewise fed into the FCN network for ship semantic segmentation. Then the corresponding regions are located on the ship segmentation map using the coordinate information of the detection boxes and overlaid on the ship detection result map, yielding the combined ship detection and contour extraction results. The flow of this process and an example result are shown in FIG. 12.
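A minimal sketch of this fusion step (step S3), assuming the detections and the segmentation mask have already been produced by the two networks:

```python
def fuse(detections, seg_mask):
    """Step S3: for each detection box, crop the corresponding region of
    the FCN segmentation mask so the final result carries the box, the
    class, the confidence and the in-box ship contour mask.

    detections: list of (x1, y1, x2, y2, label, score) tuples
    seg_mask:   (H, W) binary ship mask array from the FCN
    """
    results = []
    for x1, y1, x2, y2, label, score in detections:
        region = seg_mask[int(y1):int(y2), int(x1):int(x2)]
        results.append({"box": (x1, y1, x2, y2), "class": label,
                        "score": score, "contour_mask": region.copy()})
    return results
```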
The invention carries out a ship contour extraction experiment with Google Earth digital orthophoto imagery (about 1 meter resolution) as the data source; the specific experimental results are shown in FIG. 13.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the present invention are intended to be included within its scope.
Claims (10)
1. A remote sensing image ship contour extraction method based on deep learning, characterized in that the method comprises the following steps:
S1: performing target recognition and localization on the acquired remote sensing image with a target detection method based on a convolutional neural network to obtain a target detection result map;
S2: performing semantic segmentation on the acquired remote sensing image with a fully convolutional network to obtain a segmentation map corresponding to the target detection result map;
S3: fusing the target detection result map with its corresponding segmentation map to obtain the contour extraction result.
2. The remote sensing image ship contour extraction method based on deep learning of claim 1, characterized in that: step S1 includes:
S1.1: processing the remote sensing image with a convolutional-neural-network region generation algorithm to produce candidate regions, extracting features of the candidate regions with a convolutional neural network, and classifying the features to recognize and localize the targets within the regions;
S1.2: correcting the initial coordinates of the extracted targets with a bounding-box regression algorithm, and deleting redundant target boxes with a non-maximum suppression algorithm to obtain the final detection result map.
3. The remote sensing image ship contour extraction method based on deep learning of claim 2, characterized in that: the target detection method is a Faster R-CNN target detection algorithm, and the step S1.1 comprises the following steps:
S1.1.1: inputting the remote sensing image to be detected into the Faster R-CNN target detection algorithm and extracting the corresponding feature map of the image with VGG16;
S1.1.2: generating a series of candidate boxes on the feature map extracted by VGG16 with the RPN region proposal network;
S1.1.3: after RoI Pooling, feeding the processed result into the R-CNN detection head to perform coordinate regression and class detection of the candidate boxes, obtaining the coordinates of the target boxes, the target classes and the class confidences.
4. The remote sensing image ship contour extraction method based on deep learning of claim 3, characterized in that: a loss function of the Faster R-CNN target detection algorithm is determined, and the Faster R-CNN target detection algorithm is trained with a back propagation algorithm so that the loss function falls to an appropriate value.
5. The remote sensing image ship contour extraction method based on deep learning of claim 4, characterized in that: the loss function of the Faster R-CNN target detection algorithm is:

$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^*) + \gamma\frac{1}{N_{reg}}\sum_i p_i^* L_{reg}(t_i, t_i^*)$$

where $i$ denotes the index of an anchor box in the remote sensing image, $p_i$ the predicted probability that anchor $i$ is a target, and $t_i = (t_x, t_y, t_w, t_h)$ a vector of the 4 parameterized coordinates of the predicted bounding box; $p_i^*$ denotes the ground-truth label of the anchor box and $t_i^*$ the parameterized coordinates of the ground-truth bounding box;

IoU denotes the ratio of the intersection area to the union area of the anchor box and the ground-truth box; $p_i^* = 0$ when IoU $\in [0, 0.3)$ and $p_i^* = 1$ when IoU $\in (0.7, 1]$. $(x, y, w, h)$ denote the center coordinates and the width and height of the predicted box, $(x^*, y^*, w^*, h^*)$ those of the ground-truth box, and $(x_a, y_a, w_a, h_a)$ those of the anchor box, with

$$t_x = (x - x_a)/w_a,\quad t_y = (y - y_a)/h_a,\quad t_w = \log(w/w_a),\quad t_h = \log(h/h_a)$$
$$t_x^* = (x^* - x_a)/w_a,\quad t_y^* = (y^* - y_a)/h_a,\quad t_w^* = \log(w^*/w_a),\quad t_h^* = \log(h^*/h_a)$$
6. The remote sensing image ship contour extraction method based on deep learning of claim 4, characterized in that: the back propagation algorithm includes the chain rule of differentiation and/or gradient descent.
7. The remote sensing image ship contour extraction method based on deep learning of claim 4, characterized in that: the loss function of the fully convolutional network is:

$$L = -\frac{1}{wh}\sum_{i=1}^{w}\sum_{j=1}^{h} p_{i,j}^{*}\cdot\log p_{i,j}$$

where $w$ and $h$ denote the width and height of the prediction map; $p_{i,j}^{*}$ is the ground-truth probability distribution vector over the class channels at position $(i, j)$, in which the probability is 1 on exactly one channel and 0 on all others; $p_{i,j}$ is the predicted probability distribution vector over the class channels at $(i, j)$, whose channel probabilities sum to 1;

the fully convolutional network is trained with a back propagation algorithm until its loss function falls to an appropriate value, yielding the trained fully convolutional network, which is then used to perform semantic segmentation on the acquired remote sensing image.
8. The remote sensing image ship contour extraction method based on deep learning of claim 7, characterized in that step S2 includes:
S2.1: extracting the feature map of the input remote sensing image with the base network of the trained fully convolutional network;
S2.2: upsampling the feature map to the size of the input image with the transposed convolution layers of the trained fully convolutional network;
S2.3: classifying the channels of the feature map of the same size as the input image to obtain the segmentation map of the remote sensing image.
9. The remote sensing image ship contour extraction method based on deep learning of claim 1, characterized in that: before the remote sensing image is input into the convolutional neural network and the fully convolutional network, its pixel values are first normalized.
10. The remote sensing image ship contour extraction method based on deep learning of claim 1, characterized in that: the coordinates of the target boxes, the target classes with their class confidences, and the corresponding segmentation map are fused to obtain the contour extraction result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010812673.2A CN111985376A (en) | 2020-08-13 | 2020-08-13 | Remote sensing image ship contour extraction method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111985376A true CN111985376A (en) | 2020-11-24 |
Family
ID=73434326
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010812673.2A Pending CN111985376A (en) | 2020-08-13 | 2020-08-13 | Remote sensing image ship contour extraction method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111985376A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170076438A1 (en) * | 2015-08-31 | 2017-03-16 | Cape Analytics, Inc. | Systems and methods for analyzing remote sensing imagery |
CN106295503A (en) * | 2016-07-25 | 2017-01-04 | 武汉大学 | The high-resolution remote sensing image Ship Target extracting method of region convolutional neural networks |
CN107527352A (en) * | 2017-08-09 | 2017-12-29 | 中国电子科技集团公司第五十四研究所 | Remote sensing Ship Target contours segmentation and detection method based on deep learning FCN networks |
CN109711288A (en) * | 2018-12-13 | 2019-05-03 | 西安电子科技大学 | Remote sensing ship detecting method based on feature pyramid and distance restraint FCN |
CN109711295A (en) * | 2018-12-14 | 2019-05-03 | 北京航空航天大学 | A kind of remote sensing image offshore Ship Detection |
CN110647802A (en) * | 2019-08-07 | 2020-01-03 | 北京建筑大学 | Remote sensing image ship target detection method based on deep learning |
Non-Patent Citations (3)
Title |
---|
REN S 等: "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS & MACHINE INTELLIGENCE》, vol. 39, no. 06, pages 1137 - 1149, XP055705510, DOI: 10.1109/TPAMI.2016.2577031 * |
ZHANG XIAODONG 等: "Change detection based on Faster R-CNN for high-resolution remote sensing images", 《REMOTE SENSING LETTERS》, vol. 09, no. 10, pages 923 - 932 * |
张晓东 等: "基于深度学习的遥感影像地物目标检测和轮廓提取一体化模型", 《测绘地理信息》, vol. 44, no. 06, pages 1 - 2 * |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112597815A (en) * | 2020-12-07 | 2021-04-02 | 西北工业大学 | Synthetic aperture radar image ship detection method based on Group-G0 model |
CN112528862A (en) * | 2020-12-10 | 2021-03-19 | 西安电子科技大学 | Remote sensing image target detection method based on improved cross entropy loss function |
CN112528862B (en) * | 2020-12-10 | 2023-02-10 | 西安电子科技大学 | Remote sensing image target detection method based on improved cross entropy loss function |
CN113159042A (en) * | 2021-03-30 | 2021-07-23 | 苏州市卫航智能技术有限公司 | Laser vision fusion unmanned ship bridge opening passing method and system |
CN113177947A (en) * | 2021-04-06 | 2021-07-27 | 广东省科学院智能制造研究所 | Complex environment target segmentation method and device based on multi-module convolutional neural network |
CN113177947B (en) * | 2021-04-06 | 2024-04-26 | 广东省科学院智能制造研究所 | Multi-module convolutional neural network-based complex environment target segmentation method and device |
CN113344148A (en) * | 2021-08-06 | 2021-09-03 | 北京航空航天大学 | Marine ship target identification method based on deep learning |
CN113657551A (en) * | 2021-09-01 | 2021-11-16 | 陕西工业职业技术学院 | Robot grabbing posture task planning method for sorting and stacking multiple targets |
CN113657551B (en) * | 2021-09-01 | 2023-10-20 | 陕西工业职业技术学院 | Robot grabbing gesture task planning method for sorting and stacking multiple targets |
CN113989662A (en) * | 2021-10-18 | 2022-01-28 | 中国电子科技集团公司第五十二研究所 | Remote sensing image fine-grained target identification method based on self-supervision mechanism |
CN114037686B (en) * | 2021-11-09 | 2022-05-17 | 浙江大学 | Children intussusception automatic check out system based on degree of depth learning |
CN114037686A (en) * | 2021-11-09 | 2022-02-11 | 浙江大学 | Children intussusception automatic check out system based on degree of depth learning |
CN113989305A (en) * | 2021-12-27 | 2022-01-28 | 城云科技(中国)有限公司 | Target semantic segmentation method and street target abnormity detection method applying same |
CN114549972A (en) * | 2022-01-17 | 2022-05-27 | 中国矿业大学(北京) | Strip mine stope extraction method, apparatus, device, medium, and program product |
CN114565764A (en) * | 2022-03-01 | 2022-05-31 | 北京航空航天大学 | Port panorama sensing system based on ship instance segmentation |
CN114898204A (en) * | 2022-03-03 | 2022-08-12 | 中国铁路设计集团有限公司 | Rail transit peripheral hazard source detection method based on deep learning |
CN114898204B (en) * | 2022-03-03 | 2023-09-05 | 中国铁路设计集团有限公司 | Rail transit peripheral dangerous source detection method based on deep learning |
CN114708513A (en) * | 2022-03-04 | 2022-07-05 | 深圳市规划和自然资源数据管理中心 | Edge building extraction method and system considering corner features |
CN116486265A (en) * | 2023-04-26 | 2023-07-25 | 北京卫星信息工程研究所 | Airplane fine granularity identification method based on target segmentation and graph classification |
CN116486265B (en) * | 2023-04-26 | 2023-12-19 | 北京卫星信息工程研究所 | Airplane fine granularity identification method based on target segmentation and graph classification |
CN117612031A (en) * | 2024-01-22 | 2024-02-27 | 环天智慧科技股份有限公司 | Remote sensing identification method for abandoned land based on semantic segmentation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
2023-12-01 | TA01 | Transfer of patent application right | Effective date of registration: 20231201. Applicant after: WUHAN University, Luojiashan, Wuchang, Wuhan, Hubei 430000. Applicant before: Hubei furuier Technology Co.,Ltd., Building 5 (1-3), Building 3S Geospatial Information Industry Base, No. 7 Wudayuan Road, Donghu Development Zone, Wuhan, Hubei 430000. |