CN112132207A - Target detection neural network construction method based on multi-branch feature mapping - Google Patents

Target detection neural network construction method based on multi-branch feature mapping

Info

Publication number
CN112132207A
CN112132207A (application CN202010988619.3A)
Authority
CN
China
Prior art keywords
branch
network
prediction
branches
target detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010988619.3A
Other languages
Chinese (zh)
Inventor
Liu Jin (刘晋)
Li Yiyao (李怡瑶)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Maritime University
Original Assignee
Shanghai Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Maritime University filed Critical Shanghai Maritime University
Priority to CN202010988619.3A priority Critical patent/CN112132207A/en
Publication of CN112132207A publication Critical patent/CN112132207A/en
Withdrawn

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a target detection method based on multi-branch feature mapping. Detected targets are divided into three classes according to their size in the image sample data set and fed into three branches of a network; each branch has the same network structure but a different dilation rate and a different range of valid prediction-box sizes. According to the prediction-box range, each branch is limited to a scale range matched to its receptive field, and each branch generates a number of prediction boxes. To improve the screening precision of the prediction boxes, the invention provides a scale-aware training scheme that effectively eliminates boxes of mismatched size, keeping the network lightweight. Finally, the results of all branches are integrated and the network prediction is output on the main branch. The target detection neural network based on multi-branch feature mapping flexibly adjusts its branches so that prediction remains accurate when the size span of detected targets is large against a complex background; in particular, it can still effectively detect very large or very small targets.

Description

Target detection neural network construction method based on multi-branch feature mapping
Technical Field
The invention belongs to the fields of digital image processing and deep learning, and relates to a method for recognizing targets in complex scenes with a large range of target sizes, in particular to a method for constructing a target detection neural network based on multi-branch feature mapping.
Background
At present, artificial intelligence technology is increasingly widely applied in industry, science and daily life. Object detection, as a precursor to object recognition, plays a crucial role in applications such as robot monitoring and autonomous driving. Since a scene image often contains more than one object against a complex background, and thus a huge amount of data, a computer program is required to automate the task of extracting objects from the scene image.
Convolutional neural networks have gained widespread use in the field of target detection. Conventional convolutional-neural-network-based approaches can be broadly divided into two categories: one-stage methods, such as YOLO or SSD, which use a convolutional neural network to obtain the bounding boxes of interest directly; and two-stage methods, such as Faster R-CNN or R-FCN, which first generate region proposals and then extract features to produce refined bounding boxes. The core problem of both kinds of method is how to handle scale variation. Since AlexNet was proposed, the error rate of classification tasks performed with deep neural networks has dropped from 15% to 2%; however, the average detection accuracy of the currently best-performing detector on the COCO data set is only about 62%. The reason for this gap is a central open problem in target detection: the large span of target scales. Depending on the actual scenario, the size of the objects to be detected may vary over a large range, which forms a significant obstacle for detection, especially for very small or very large targets.
To address this multi-scale problem, an intuitive approach is to use a multi-scale image pyramid, which is popular both in methods based on hand-crafted features and in current deep-convolutional-network-based methods. Recent studies have shown that deep detectors can be optimized well when multi-scale training is applied; the basic idea of such training is to avoid objects of extreme scale. Accordingly, SNIP proposes a scale-normalization method that generates an image pyramid for each image and selectively trains objects of appropriate size at each image scale. However, expanding a single image into multiple sizes effectively multiplies the amount of data, resulting in a huge computational load; this increases detection time and is unfavourable for practical use. Another approach is to use a feature pyramid inside the network instead of an image pyramid, thereby reducing the computational cost. Some researchers have attempted to construct a fast feature pyramid for target detection by interpolating the feature channels of nearby scale layers. SSD then proposed multi-scale feature maps from different layers, using not only the last feature map for prediction but the last six layers for multiple predictions; because both low-level and high-level features are exploited, the detection effect is enhanced. On this basis, to make up for the lack of semantics in low-level features, FPN adds a top-down path with lateral connections to incorporate the strong semantic information of high-level features. However, since the regional features of objects at different scales are extracted from different levels of the FPN, each level is generated by a different set of parameters; this prevents the feature pyramid from matching the performance of an image pyramid while retaining the reduced computation.
In summary, the problem of target detection with large target scale span still remains to be solved efficiently.
Disclosure of Invention
In order to solve the above problems, a method for constructing a target detection neural network based on multi-branch feature mapping is provided. The resulting network remains flexible under complex backgrounds and, in particular, adapts well to very large or very small detection targets.
According to the invention, a multi-branch network structure lets the network adapt to multi-scale target detection: the branches share the same network structure and run in parallel, performing convolution, pooling and other operations, which reduces running time and extracts feature information of the target at different scales; at the same time, the different receptive fields produced by the dilation-rate parameters act like an increase in the number of samples, reducing over-fitting. Each branch adopts a weight-sharing mechanism, which effectively reduces parameter redundancy in the network, and improving single-scale detection precision optimizes the final detection result. A region-set feature unit introduced into each branch effectively combines positional information from low-level feature maps with classification information from high-level feature maps, meeting the dual localization-and-classification requirement of the detection task. A scale-aware training scheme improves the scale awareness of each branch and avoids training objects of extreme scale on mismatched branches. Finally, a non-maximum suppression (NMS) algorithm retains the optimal prediction boxes within a defined number.
The multi-branch feature-mapping-based target detection neural network construction method provided by the invention offers multi-scale target detection, a scale-aware training strategy, the combination of feature extraction at different levels, a highly adaptable model structure and a lightweight model.
In order to achieve the purpose, the method for constructing the target detection neural network based on the multi-branch feature mapping is realized by the following technical scheme:
a construction method of a target detection neural network based on multi-branch feature mapping is characterized by comprising the following steps:
step 1: making an image data set COCO2017-plus, dividing it into a training set COCO-train and a test set COCO-test, wherein the COCO2017-plus data set is constructed by applying random horizontal flipping as data augmentation to the standard data set COCO2017;
step 2: constructing a multi-branch characteristic mapping target detection neural network, wherein the network comprises a multi-branch network structure based on weight sharing, a region set characteristic unit combining characteristics of each level and a scale perception training module improving the network operation performance;
step 3: defining a scale-aware training scheme, and using it to screen the prediction boxes generated by each branch and remove those outside the valid range;
step 4: performing preprocessing operations such as size unification and graying on the data to be input;
step 5: inputting the preprocessed training set COCO2017-train into the multi-branch target detection neural network for training, and removing unqualified prediction boxes with the defined scale-aware training scheme;
step 6: feeding the optimal prediction boxes obtained on each branch to the NMS module; after NMS processing, the optimal 500 prediction boxes are retained.
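The NMS step above can be sketched as a minimal pure-Python greedy routine; the (x1, y1, x2, y2) box format and the 0.5 IoU threshold are illustrative assumptions, not values fixed by the text:

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5, keep_top=500):
    # Greedy NMS: repeatedly keep the highest-scoring box and drop
    # every remaining box that overlaps it above the threshold.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    while order and len(kept) < keep_top:
        i = order.pop(0)
        kept.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return kept
```

With `keep_top=500` this mirrors the retention of the 500 optimal boxes described above.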
Further, the multi-branch feature mapping target detection neural network building method in step 2 is as follows:
firstly, the multi-branch feature-mapping network architecture is composed of several branch blocks; this example defines it with three branches, and the number of branches can be chosen according to the variation range of the target scales in actual operation. The three branches share the same ResNet-50-based basic backbone structure; the branch blocks operate in parallel but with different dilation rates, d1 = 1, d2 = 2, d3 = 3, and a 1 x 1 convolution operation is applied before entering each branch. Each branch block comprises several parallel region-set feature units, with the same dilation rate within a branch, and each unit consists of three parts: a BN (Batch Normalization) layer, a ReLU (Rectified Linear Units) layer and a 3 x 3 convolutional layer. A scale-aware training scheme is attached to the tail of each branch: unqualified prediction boxes are discarded and do not participate in back-propagation. Finally, the results of the three branches are passed through NMS (non-maximum suppression), the optimal 500 prediction boxes are kept, and the output is produced uniformly on the main network;
further, the scale-aware training scheme in step 3 is defined as follows:
define a valid prediction-box range [l_i, u_i] for each branch. Let an RoI (Region of Interest) on the input image have width w and height h; the RoI is valid for branch i if and only if:
l_i ≤ √(w·h) ≤ u_i
In actual operation, the valid prediction range of each branch can be defined according to the variation range of the target scale. The prediction boxes obtained by each branch are screened against the valid range defined by the scale-aware training scheme; boxes outside the range are discarded and do not participate in back-propagation.
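The screening rule above can be sketched in a few lines; assuming (x1, y1, x2, y2) boxes and the √(w·h) validity criterion:

```python
import math

def in_valid_range(w, h, lo, hi):
    # A box takes part in a branch's training iff lo <= sqrt(w*h) <= hi.
    return lo <= math.sqrt(w * h) <= hi

def filter_proposals(boxes, lo, hi):
    # Boxes as (x1, y1, x2, y2); out-of-range boxes are dropped and
    # do not contribute to back-propagation.
    return [b for b in boxes
            if in_valid_range(b[2] - b[0], b[3] - b[1], lo, hi)]
```

For example, with the branch ranges [0, 90] and [90, ∞) a 50 x 50 box trains only the small-scale branch, while a 200 x 200 box trains only the large-scale branch.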
Further, the specific training strategy in step 5 is as follows:
input the preprocessed training set COCO2017-train into the multi-branch target detection neural network. Training is divided into 12 rounds in total; the initial learning rate is defined as 0.02 and is multiplied by 0.1 after the 8th round and again after the 9th round. Once the network converges and the accuracy meets the requirement, the trained model is saved. The total number of training rounds can be doubled or tripled according to the number of branches of the actual network.
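The step schedule described above can be sketched as follows; treating "after the 8th and 9th rounds" as milestone epochs 8 and 9 is an assumption about the exact indexing:

```python
def learning_rate(epoch, base_lr=0.02, milestones=(8, 9), gamma=0.1):
    # Step schedule: multiply the learning rate by gamma once for
    # every milestone the current epoch has passed.
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr
```

In a PyTorch setting the same schedule would typically be expressed with `torch.optim.lr_scheduler.MultiStepLR`.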
Drawings
FIG. 1 is a flow chart of an implementation of the neural network construction method for target detection based on multi-branch feature mapping according to the present invention
FIG. 2 is a network architecture diagram of the multi-branch eigen-mapping-based target detection neural network construction method of the present invention
FIG. 3 is a convolution operation of the region set feature unit based on the multi-branch feature mapping target detection neural network construction method of the present invention
FIG. 4 is the internal structure of the region set feature block of the multi-branch feature mapping-based target detection neural network construction method of the present invention
FIG. 5 is an application of the multi-scale training scheme of the multi-branch feature mapping based target detection neural network construction method of the present invention
FIG. 6 is a detection effect diagram of the multi-branch feature mapping-based target detection neural network construction method of the present invention
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments. The scope of the above subject matter is not limited to the following examples; any technique realized based on the teachings of the present invention falls within the scope of the present invention.
The overall implementation flow of the method for constructing the target detection neural network based on the multi-branch feature mapping is shown in fig. 1, and is specifically described as follows:
The data set in this embodiment is a new image data set, COCO2017-plus, constructed from COCO2017 by random horizontal flipping as data augmentation, and divided into a training set COCO-train and a test set COCO-test. Image sizes are unified to 800 pixels. The COCO data set is a large, rich object detection, segmentation and captioning data set; it consists mainly of complex everyday scenes, in which the targets are calibrated by accurate segmentation. The images cover 91 object categories, with 328,000 images and 2,500,000 labels.
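The random-horizontal-flip augmentation used to build COCO2017-plus can be sketched as below; representing the image as nested lists and mirroring boxes against the image width are illustrative assumptions:

```python
def hflip_image(img):
    # img as rows of pixel values; a horizontal flip reverses each row.
    return [list(reversed(row)) for row in img]

def hflip_box(box, img_width):
    # Mirror an (x1, y1, x2, y2) box about the vertical centre line,
    # so annotations stay aligned with the flipped image.
    x1, y1, x2, y2 = box
    return (img_width - x2, y1, img_width - x1, y2)
```

Flipping both the image and its boxes keeps the augmented sample consistent, effectively doubling the usable training data.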
In this embodiment, the implementation platform of the method for constructing the target detection neural network based on the multi-branch feature mapping is a computer, the operating system is Windows 10, the deep learning architecture is Pytorch and Detectron2, the graphics processing library uses opencv 4.1.0, and the image acceleration unit uses GeForce GTX 1060 GPU.
The overall network architecture of the multi-branch feature-mapping-based target detection neural network construction method provided by the invention is shown in fig. 2. In this example the input image G_in is formalized as:
[formula given only as an image in the original]
The input image is converted to grayscale. Let the gray value of the original image be f(m, n) ∈ [a, b]; after graying it becomes g(m, n) ∈ [c, d], with k = (d - c)/(b - a), and the operation is:
g(m, n) = c + k·[f(m, n) - a].
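The linear gray-level stretch above can be sketched directly; computing k as (d - c)/(b - a) is the standard choice that maps the interval [a, b] onto [c, d]:

```python
def linear_stretch(f, a, b, c, d):
    # g = c + k * (f - a), with k = (d - c) / (b - a),
    # so that f = a maps to c and f = b maps to d.
    k = (d - c) / (b - a)
    return c + k * (f - a)
```

Applied per pixel, this normalizes images with different intensity ranges to a common range before they enter the network.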
The processed image is input to the network. In this example the output of the neurons of layer l is denoted y^l. For the i-th neuron in layer l + 1, let w_i^{l+1} denote its corresponding weights and b_i^{l+1} its corresponding bias; the convolution operation is then defined as:
y_i^{l+1} = f( Σ_j w_{i,j}^{l+1} · y_j^l + b_i^{l+1} )
where f denotes the activation function.
The branches in the multi-branch network have different dilation rates; in this example the dilation rates of the 3 branches are set to d1 = 1, d2 = 2, d3 = 3. A dilated convolution fills the convolution kernel with zeros: assuming the original kernel has size k and the dilation rate is d, the size K of the equivalent kernel of the dilated convolution is:
K = k + (k - 1)·(d - 1).
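A quick numerical check of the equivalent kernel sizes for the three dilation rates, using the standard dilated-convolution relation K = k + (k - 1)·(d - 1):

```python
def equivalent_kernel_size(k, d):
    # Dilation inserts d - 1 zeros between adjacent kernel elements,
    # so the equivalent dense kernel has size k + (k - 1) * (d - 1).
    return k + (k - 1) * (d - 1)
```

For the 3 x 3 kernels used here, d = 1, 2, 3 yield equivalent kernels of 3, 5 and 7, which is what gives the three branches their different receptive fields at no extra parameter cost.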
The feature-map size output after convolution is:
W_out = (W_in + 2·padding - F)/stride + 1
wherein W_in denotes the input size, W_out the output size, padding the number of zero-padded pixels, stride the step size, and F the size of the (equivalent) convolution kernel. A dilation rate of 1 gives an ordinary convolution; a dilation rate greater than 1 means a dilated convolution is performed.
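The output-size formula can be verified numerically; the sketch below folds the dilation rate into an equivalent kernel first, with integer division modelling the floor:

```python
def conv_output_size(w_in, kernel, padding=0, stride=1, dilation=1):
    # W_out = (W_in + 2*padding - K) // stride + 1,
    # where K is the dilation-equivalent kernel size.
    k_eff = kernel + (kernel - 1) * (dilation - 1)
    return (w_in + 2 * padding - k_eff) // stride + 1
```

Note that a 3 x 3 convolution with dilation 2 and padding 2 preserves the spatial size just as dilation 1 with padding 1 does, which is what lets the three branches share one architecture.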
The region set feature unit is adopted among the branches to extract features, the convolution operation with the region set feature unit is shown in fig. 3, and the internal structure of each region set feature block is shown in fig. 4. The image extracted region features are defined as a one-dimensional vector:
r=[r1,r2,...,rn]。
the region set feature extraction network establishes connection between each layer and all the following layers, and the extraction operation is as follows:
xl=Hl([x0,x1,...,xl-1])。
wherein Hl([x0,x1,...,xl-1]) Representing the collection of signatures obtained from each layer, from layer 0 to layer l-1. The output image after neural network processing is defined as:
Figure BDA0002690071540000061
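The dense aggregation x_l = H_l([x_0, ..., x_{l-1}]) can be sketched with plain lists standing in for feature maps and list concatenation standing in for channel-wise concatenation; the toy H_l functions here are purely illustrative:

```python
def dense_forward(x0, layers):
    # Each layer H_l receives the concatenation of the outputs of
    # all preceding layers: x_l = H_l([x_0, x_1, ..., x_{l-1}]).
    feats = [x0]
    for h in layers:
        concatenated = [v for f in feats for v in f]
        feats.append(h(concatenated))
    return feats
```

This is the same connectivity pattern DenseNet uses: every layer sees all earlier features, combining low-level positional and high-level semantic information.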
Because each branch adopts a different dilation rate but all branches share the same data set, a scale-aware training scheme sets a different valid range for each branch, avoiding the degradation of network performance caused by size mismatch and improving the perception of each size. The application of the multi-scale training scheme in the network is shown in fig. 5. With i branches, the dilation rates are d1, d2, ..., di; the range of each branch is defined as [l_i, u_i] and satisfies
l_i ≤ √(w·h) ≤ u_i
where w is the width and h the height of the RoI. The valid prediction ranges of the 3 branches in this example are [0, 90], [30, 160] and [90, ∞). Each branch thus limits itself to a scale range matched to its receptive field; among the many prediction boxes generated during training on each branch, those outside the range are screened out and do not participate in back-propagation, which reduces the amount of computation and keeps the network lightweight.
In this example, the structure of the multi-branch unit is shown in Table 1, where MB_Layer denotes the multi-branch unit and MBconv consists of 3 dilated convolutions. Training runs for 12 rounds in total; the learning rate is initialized to 0.02 and multiplied by 0.1 after the 8th and 9th rounds, and in actual operation the total number of training rounds can be doubled or tripled according to the number of branches of the network. During training, the optimal 1200 proposals obtained on each branch are sent to the NMS module; after NMS processing, the optimal 500 proposals are retained, and finally the main branch presents the detection result of the network.
TABLE 1 network Multi-Branch Unit architecture
[table contents provided only as images in the original]
The target detection results predicted by the present invention are shown in fig. 6, where A, B and C are input pictures and D, E and F show the targets recognized by the network with their corresponding bounding boxes. Experiments show that the proposed multi-branch feature-mapping-based target detection neural network has stronger size awareness on each branch, adapts effectively to targets of different sizes, and detects more accurately.
It will be appreciated by those of ordinary skill in the art that the foregoing description provides numerous implementation details. Of course, embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.

Claims (1)

1. A target detection neural network construction method based on multi-branch feature mapping is characterized by comprising the following steps:
step 1: making an image data set COCO2017-plus, dividing it into a training set COCO-train and a test set COCO-test, wherein the COCO2017-plus data set is constructed by applying random horizontal flipping as data augmentation to the standard data set COCO2017;
step 2: constructing a multi-branch feature-mapping target detection neural network, wherein the network comprises a weight-sharing-based multi-branch network structure, region-set feature units combining features of each level, and a scale-aware training module improving the network's running performance, the network being constructed by the following steps:
step 2.1: the multi-branch feature mapping network architecture is composed of a plurality of branch networks, generally defined as three branches, and the number of the branches can be defined according to the variation range of specific target scales in actual operation;
step 2.2: the three branches have the same ResNet-50-based basic backbone network structure and the branch networks operate in parallel but with different dilation rates, the dilation rates of the three branches being d1 = 1, d2 = 2 and d3 = 3, with a 1 x 1 convolution operation before entering each branch;
step 2.3: each branch network comprises a plurality of parallel region-set feature units, the dilation rate within the same branch being the same, and each unit consists of three parts: a BN layer, a ReLU layer and a 3 x 3 convolutional layer;
step 2.4: adding a scale perception training scheme to the tail part of each branch;
step 2.5: finally, the three-branch results are transformed and unified in the main network through NMS and output;
step 3: defining a scale-aware training scheme, screening the prediction boxes generated by each branch with the scale-aware training scheme, and removing the prediction boxes outside the range:
step 3.1: defining a prediction-box range [l_i, u_i] for each branch; let an RoI on the input image have width w and height h; its valid range is:
l_i ≤ √(w·h) ≤ u_i
step 3.2: the valid prediction ranges of the three branches are set to [l1, u1], [l2, u2] and [l3, u3]; in actual operation, the valid prediction range of each branch can be defined according to the variation range of the target scale;
step 3.3: screening the prediction boxes obtained by each branch with the scale-aware training scheme, removing the boxes outside the range, which do not participate in back-propagation;
step 4: preprocessing the data to be input, adjusting the short edge of every image to 800 pixels, and converting the images to grayscale;
step 5: inputting the preprocessed training set COCO2017-train into the multi-branch target detection neural network, wherein training is divided into 12 rounds in total, the initial learning rate is defined as 0.02 and is multiplied by 0.1 after the 8th and 9th rounds until the network converges, the trained model is saved, and the total number of training rounds can be adjusted to two or three times according to the number of branches of the actual network;
step 6: sending the optimal prediction boxes obtained on each branch to the NMS module, and retaining the 500 optimal prediction boxes after NMS processing.
CN202010988619.3A 2020-09-18 2020-09-18 Target detection neural network construction method based on multi-branch feature mapping Withdrawn CN112132207A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010988619.3A CN112132207A (en) 2020-09-18 2020-09-18 Target detection neural network construction method based on multi-branch feature mapping

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010988619.3A CN112132207A (en) 2020-09-18 2020-09-18 Target detection neural network construction method based on multi-branch feature mapping

Publications (1)

Publication Number Publication Date
CN112132207A true CN112132207A (en) 2020-12-25

Family

ID=73842980

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010988619.3A Withdrawn CN112132207A (en) 2020-09-18 2020-09-18 Target detection neural network construction method based on multi-branch feature mapping

Country Status (1)

Country Link
CN (1) CN112132207A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112975969A (en) * 2021-02-26 2021-06-18 清华大学 Robot control and visual perception integrated controller system and method
CN113204010A (en) * 2021-03-15 2021-08-03 锋睿领创(珠海)科技有限公司 Non-visual field object detection method, device and storage medium


Similar Documents

Publication Publication Date Title
CN110472483B (en) SAR image-oriented small sample semantic feature enhancement method and device
CN110175671B (en) Neural network construction method, image processing method and device
CN109584248B (en) Infrared target instance segmentation method based on feature fusion and dense connection network
CN107609601B (en) Ship target identification method based on multilayer convolutional neural network
CN110135267B (en) Large-scene SAR image fine target detection method
US20220215227A1 (en) Neural Architecture Search Method, Image Processing Method And Apparatus, And Storage Medium
CN114202672A (en) Small target detection method based on attention mechanism
CN112446476A (en) Neural network model compression method, device, storage medium and chip
CN111461083A (en) Rapid vehicle detection method based on deep learning
CN110032925B (en) Gesture image segmentation and recognition method based on improved capsule network and algorithm
CN112163628A (en) Method for improving target real-time identification network structure suitable for embedded equipment
CN108764298B (en) Electric power image environment influence identification method based on single classifier
CN113160062B (en) Infrared image target detection method, device, equipment and storage medium
CN112529146B (en) Neural network model training method and device
CN110222718B (en) Image processing method and device
CN110532959B (en) Real-time violent behavior detection system based on two-channel three-dimensional convolutional neural network
CN111126278A (en) Target detection model optimization and acceleration method for few-category scene
CN113420794B (en) Binaryzation Faster R-CNN citrus disease and pest identification method based on deep learning
CN112464930A (en) Target detection network construction method, target detection method, device and storage medium
CN113743505A (en) Improved SSD target detection method based on self-attention and feature fusion
CN113420651A (en) Lightweight method and system of deep convolutional neural network and target detection method
CN112132207A (en) Target detection neural network construction method based on multi-branch feature mapping
CN115393690A (en) Light neural network air-to-ground observation multi-target identification method
CN113627240B (en) Unmanned aerial vehicle tree species identification method based on improved SSD learning model
CN114373104A (en) Three-dimensional point cloud semantic segmentation method and system based on dynamic aggregation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Liu Jin

Inventor after: Li Yiyao

Inventor after: Gao Zhenyu

Inventor before: Liu Jin

Inventor before: Li Yiyao

WW01 Invention patent application withdrawn after publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20201225