CN111160109A - Road segmentation method and system based on deep neural network - Google Patents

Road segmentation method and system based on deep neural network

Info

Publication number
CN111160109A
CN111160109A
Authority
CN
China
Prior art keywords
network
deep neural
neural network
road
road segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911243197.0A
Other languages
Chinese (zh)
Other versions
CN111160109B (en)
Inventor
张洋
姚登峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Union University
Original Assignee
Beijing Union University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Union University filed Critical Beijing Union University
Priority to CN201911243197.0A priority Critical patent/CN111160109B/en
Publication of CN111160109A publication Critical patent/CN111160109A/en
Application granted granted Critical
Publication of CN111160109B publication Critical patent/CN111160109B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention provides a road segmentation method and system based on a deep neural network. The method comprises preparing a data set and the following steps: constructing an algorithm model; training the algorithm model until the parameters of the model stabilize; and inputting the image to be segmented into the algorithm model for road segmentation. The invention provides a novel deep neural network model for road segmentation and solves the problem that an unmanned intelligent vehicle cannot segment roads effectively and in real time on actual roads.

Description

Road segmentation method and system based on deep neural network
Technical Field
The invention relates to the technical field of unmanned driving, in particular to a road segmentation method and a road segmentation system based on a deep neural network.
Background
At present, unmanned vehicles face many problems, and road segmentation is one of them. Road region segmentation is in fact a kind of image segmentation. Image segmentation is a key technology in the fields of image processing and computer vision: it is an important precondition and basis for image processing, and one of the most difficult problems in the field. With the continuous introduction of deep learning algorithms, image segmentation has also developed rapidly.
Image segmentation methods can be divided into conventional methods and deep-learning-based algorithms. Conventional image segmentation methods apply hand-crafted features to each new data set, which requires expert knowledge and considerable time to adjust the features. Learning feature representations directly from different data sets with deep learning algorithms has therefore become a research focus in recent years. The FCN (Fully Convolutional Network) proposed by Long J et al. in 2014 was the earliest image segmentation method in deep learning. Subsequently, algorithms such as Mask R-CNN, BiSeNet, SegNet and DeepLab were derived from the FCN framework. These algorithms can classify and label images at the pixel level, but they are time-consuming and can hardly meet real-time requirements. With the continuous development of deep neural networks, more and more road segmentation algorithms based on them have been proposed, but few combine accuracy with real-time performance.
A master's thesis by Shinyong at Southwest Jiaotong University (2018), entitled "Road scene semantic segmentation research based on a deep neural network", first summarizes the state of research on image semantic segmentation and analyzes the leading deep neural network results in the semantic segmentation field. Second, it designs ERFNet-Efficient, a network structure based on a deep convolutional neural network that improves on the encoder-decoder ERFNet by applying recent techniques such as asymmetric convolution, grouped convolution, dilated convolution, transposed convolution and batch normalization to complete the road scene semantic segmentation task accurately and efficiently. The ERFNet-Efficient model is then implemented with the PyTorch deep learning framework and trained and tested on the Cityscapes data set, achieving good results, and its segmentation accuracy is further improved through related techniques such as transfer learning and ensemble learning. Finally, ERFNet-Efficient is compared with ERFNet and other leading algorithms on model accuracy, running speed and parameter count, and the results show that ERFNet-Efficient is among the best algorithms in overall performance. However, the algorithm in that thesis targets the whole road scene rather than a specific part of it, and it fails to extract spatial features, so its error rate would be high if used for road segmentation.
Disclosure of Invention
In order to solve the technical problems, the invention provides a novel deep neural road segmentation network model and a road segmentation system based on a deep neural network, and solves the problem that an unmanned intelligent automobile cannot effectively segment roads in real time on actual roads.
The first purpose of the present invention is to provide a road segmentation method based on a deep neural network, which includes the preparation of data sets, and further includes the following steps:
step 1: constructing an algorithm model;
step 2: training the algorithm model until the parameters of the model stabilize;
and step 3: and inputting the image to be segmented into the algorithm model for road segmentation.
Preferably, the preparation of the data set comprises the sub-steps of:
step 01: collecting a road surface image;
step 02: and marking the collected road surface image.
In any of the above schemes, preferably, the collection method captures the actual conditions of each road surface, covering varied road types and/or a variety of road surface conditions.
In any of the foregoing schemes, preferably, the step 1 includes constructing a SUG-Net network, where the SUG-Net network includes a backbone network and a feature fusion network.
In any of the above schemes, preferably, the backbone network is composed of five 3×3 convolutional layers and four max-pooling layers, with the convolutional and max-pooling layers nested alternately.
In any of the above schemes, preferably, spatial pyramid pooling with scales of 5×5, 9×9 and 13×13 is added at the bottom of the backbone network.
In any of the above schemes, preferably, the pyramid pooling is followed by the feature fusion network, which is formed by four 3×3 convolutional layers and four upsampling layers paired one-to-one.
In any of the above schemes, preferably, a 1×1 convolutional layer is added at the end of the feature fusion network to form a plurality of feature maps.
In any of the above schemes, preferably, the step 1 further includes placing the GIoU loss at the end of the SUG-Net network and outputting the result.
In any of the above schemes, preferably, the step 2 includes dividing the processed data set into a training set and a test set, where the training set occupies N% of the data set and the test set occupies (100-N)%, with 50 < N < 100.
In any of the above schemes, preferably, the step 2 further includes sending the training set to the SUG-Net network for learning until the parameters of the algorithm model stabilize.
Preferably, in any of the above schemes, the step 3 further includes inputting the test set into the algorithm model and checking the accuracy and speed of the algorithm model.
The invention also provides a road segmentation system based on the deep neural network, which comprises a data set and comprises the following modules:
a construction module: for constructing the algorithm model;
a training module: for training the algorithm model until the parameters of the model stabilize;
an input module: for inputting the image to be segmented into the algorithm model for road segmentation.
Preferably, the preparation of the data set comprises the sub-steps of:
step 01: collecting a road surface image;
step 02: and marking the collected road surface image.
In any of the above schemes, preferably, the collection method captures the actual conditions of each road surface, covering varied road types and/or a variety of road surface conditions.
In any of the foregoing schemes, preferably, the construction module is configured to build a SUG-Net network, where the SUG-Net network includes a backbone network and a feature fusion network.
In any of the above schemes, preferably, the backbone network is composed of five 3×3 convolutional layers and four max-pooling layers, with the convolutional and max-pooling layers nested alternately.
In any of the above schemes, preferably, spatial pyramid pooling with scales of 5×5, 9×9 and 13×13 is added at the bottom of the backbone network.
In any of the above schemes, preferably, the pyramid pooling is followed by the feature fusion network, which is formed by four 3×3 convolutional layers and four upsampling layers paired one-to-one.
In any of the above schemes, preferably, a 1×1 convolutional layer is added at the end of the feature fusion network to form a plurality of feature maps.
In any of the above schemes, preferably, the construction module is further configured to place the GIoU loss at the end of the SUG-Net network and output the result.
In any of the above schemes, preferably, the training module is configured to divide the processed data set into a training set and a test set, where the training set occupies N% of the data set and the test set occupies (100-N)%, with 50 < N < 100.
In any of the above schemes, preferably, the training module is further configured to send the training set to the SUG-Net network for learning until the parameters of the algorithm model stabilize.
In any of the above schemes, preferably, the input module is configured to input the test set into the algorithm model and check the accuracy and speed of the algorithm model.
The invention provides a road segmentation method and system based on a deep neural network that, compared with conventional methods, overcomes the insufficient real-time performance of existing road segmentation algorithms.
SUG-Net is a deep learning segmentation network.
A feature map is the output generated by convolving the input image with a convolution kernel.
GIoU (Generalized Intersection over Union) is an evaluation index proposed as a bounding-box loss function to replace IoU (the ratio of the intersection to the union of the detection result and the ground truth). For any two boxes A and B, first find the smallest box C that encloses both; then compute the ratio of the area of C \ (A ∪ B), that is, the area of C minus the area of A ∪ B, to the area of C; subtracting this ratio from the IoU of A and B gives the GIoU.
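The GIoU computation above can be sketched as follows. This is a minimal plain-Python illustration for axis-aligned boxes given as (x1, y1, x2, y2); the example boxes are illustrative, not values from the patent.

```python
# Minimal GIoU sketch for axis-aligned boxes (x1, y1, x2, y2).
# Example values below are hypothetical, for illustration only.

def area(box):
    x1, y1, x2, y2 = box
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def giou(a, b):
    # Intersection of A and B.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = area((ix1, iy1, ix2, iy2))
    union = area(a) + area(b) - inter
    iou = inter / union if union > 0 else 0.0
    # Smallest enclosing box C; GIoU = IoU - (area(C) - area(A∪B)) / area(C).
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    c = area((cx1, cy1, cx2, cy2))
    return iou - (c - union) / c if c > 0 else iou

print(round(giou((0, 0, 2, 2), (1, 1, 3, 3)), 4))  # -0.0794
```

Note that GIoU, unlike IoU, stays informative for non-overlapping boxes: it becomes negative as the boxes move apart, which is why it works better as a loss.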
Drawings
Fig. 1 is a flowchart of a deep neural network-based road segmentation method according to a preferred embodiment of the present invention.
Fig. 1A is a flowchart of a data set preparation method according to the embodiment shown in fig. 1 of the deep neural network-based road segmentation method according to the present invention.
Fig. 2 is a block diagram of a preferred embodiment of a deep neural network-based road segmentation system according to the present invention.
Fig. 3 is a block diagram of another preferred embodiment of a deep neural network-based road segmentation method according to the present invention.
Fig. 4 is a comparison diagram of an embodiment of a segmentation effect of a deep neural network-based road segmentation method according to the present invention.
Fig. 5 is a comparison diagram of a segmentation effect of another embodiment of the deep neural network-based road segmentation method according to the present invention.
Detailed Description
The invention is further illustrated with reference to the figures and the specific examples.
Example one
As shown in fig. 1, step 100 is performed to prepare a data set. The preparation of the data set comprises the following sub-steps: as shown in fig. 1A, step 101 is executed to acquire road surface images; the acquisition captures the actual conditions of each road surface, covering varied road types and/or a variety of road surface conditions. Step 102 is then executed to annotate the acquired road surface images.
Step 110 is executed to construct the algorithm model. A network, SUG-Net, is constructed, comprising a backbone network and a feature fusion network. The backbone network is composed of five 3×3 convolutional layers and four max-pooling layers, nested alternately. Spatial pyramid pooling with scales of 5×5, 9×9 and 13×13 is added at the bottom of the backbone network. The pyramid pooling is followed by the feature fusion network, formed by four 3×3 convolutional layers and four upsampling layers paired one-to-one. A 1×1 convolutional layer is added at the end of the feature fusion network to form a plurality of feature maps. Finally, the GIoU loss is placed at the end of the SUG-Net network and the result is output.
Step 120 is performed to train the algorithm model until the parameters of the model stabilize. The processed data set is divided into a training set and a test set, where the training set accounts for N% of the data set and the test set accounts for (100-N)%, with 50 < N < 100. The training set is then fed to the SUG-Net network for learning until the parameters of the algorithm model stabilize.
Step 130 is executed to input the test set into the algorithm model, check the model's accuracy and speed, and perform road segmentation.
Example two
As shown in fig. 2, a deep neural network-based road segmentation system includes a data set 200, a construction module 210, a training module 220, and an input module 230.
The preparation of the data set 200 comprises the following sub-steps: step 01: acquire road surface images, capturing the actual conditions of each road surface and covering varied road types and/or a variety of road surface conditions; step 02: annotate the acquired road surface images.
The construction module 210: for constructing the algorithm model. A network, SUG-Net, is constructed, comprising a backbone network and a feature fusion network. The backbone network is composed of five 3×3 convolutional layers and four max-pooling layers, nested alternately. Spatial pyramid pooling with scales of 5×5, 9×9 and 13×13 is added at the bottom of the backbone network. The pyramid pooling is followed by the feature fusion network, formed by four 3×3 convolutional layers and four upsampling layers paired one-to-one. A 1×1 convolutional layer is added at the end of the feature fusion network to form a plurality of feature maps. Finally, the GIoU loss is placed at the end of the SUG-Net network and the result is output.
The training module 220: for training the algorithm model until the parameters of the model stabilize. The processed data set is divided into a training set and a test set, where the training set accounts for N% of the data set and the test set accounts for (100-N)%, with 50 < N < 100. The training set is then fed to the SUG-Net network for learning until the parameters of the algorithm model stabilize.
The input module 230: for inputting the test set into the algorithm model, checking the model's accuracy and speed, and performing road segmentation.
Example three
Existing deep neural network road segmentation algorithms on the market face a contradiction between accuracy and real-time performance. Detailed, highly accurate algorithms usually take a long time because their networks are large, with many convolutional layers and many generated features; networks with better real-time performance gain speed from having few layers, but their accuracy is extremely low. Neither kind is ready for practical deployment. The invention therefore provides a deep neural network road segmentation algorithm with both real-time performance and accuracy.
Step 1: preparation of data sets
Step 1-1: acquire road images, capturing the actual conditions of each road surface, with road types as varied and road surface conditions as diverse as possible; then denoise, stretch and crop the images of the data set to obtain images that meet the requirements.
Step 1-2: and marking the acquired road surface image.
Step 2: construction of the algorithm model. Since 2012, Convolutional Neural Networks (CNNs) have achieved great success and wide application in image classification, image detection and related tasks. The power of a CNN lies in its multi-layer structure, which automatically learns features at multiple levels: shallower convolutional layers have smaller receptive fields and learn features of local regions, while deeper layers have larger receptive fields and learn more abstract features. These abstract features are less sensitive to an object's size, position and orientation, which helps recognition performance. They are helpful for classification and can judge well what kind of object an image contains, but because some object details are lost they cannot give the object's concrete outline or indicate which object each pixel belongs to, so accurate segmentation is difficult. Patch-based per-pixel classification also has obvious drawbacks. First, the storage overhead is large: using, for example, a 15×15 image patch for each pixel requires 225 times the storage of the original image. Second, computation is inefficient: adjacent patches are largely repetitive, and computing the convolution patch by patch repeats much of that work. Third, the patch size limits the receptive field: a patch is usually much smaller than the whole image, so only local features can be extracted, which limits classification performance. The Fully Convolutional Network (FCN), in contrast, converts the fully-connected layers of a conventional CNN into convolutional layers.
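The "convolutionalization" idea mentioned above, replacing a fully-connected layer with a convolution whose kernel spans the whole feature map, can be illustrated with a toy example. This is a minimal plain-Python sketch with hypothetical 2×2 sizes and weights, not the patent's implementation.

```python
# Toy demonstration: a fully-connected layer over a flattened 2x2 feature map
# computes exactly the same values as a convolution whose kernel covers the
# whole 2x2 map. All sizes and weights here are hypothetical.

def fc_layer(flat_input, weights):
    """Fully-connected layer: one dot product per output unit."""
    return [sum(w * x for w, x in zip(row, flat_input)) for row in weights]

def conv_full_kernel(feature_map, kernels):
    """Convolution with a kernel as large as the input: one output per kernel."""
    flat = [v for row in feature_map for v in row]
    outputs = []
    for kernel in kernels:
        kflat = [v for row in kernel for v in row]
        outputs.append(sum(k * x for k, x in zip(kflat, flat)))
    return outputs

feature_map = [[1.0, 2.0], [3.0, 4.0]]
flat = [1.0, 2.0, 3.0, 4.0]

# The same weights, viewed either as FC rows or as 2x2 conv kernels.
fc_weights = [[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8]]
conv_kernels = [[[0.1, 0.2], [0.3, 0.4]], [[0.5, 0.6], [0.7, 0.8]]]

assert fc_layer(flat, fc_weights) == conv_full_kernel(feature_map, conv_kernels)
```

Because the convolutional view makes no assumption about the input being exactly 2×2, the same kernels can slide over a larger input, which is what lets an FCN accept images of any size.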
FCN has two distinct advantages: first, it accepts input images of any size, without requiring all training and test images to be the same size; second, it is more efficient, because it avoids the repeated storage and convolution computation caused by using pixel patches. At the same time, FCN's defects are also obvious: the upsampled results are blurred and smooth, and insensitive to details in the image; and because each pixel is classified independently, the relationships between pixels are not fully considered, the spatial regularization step used in ordinary pixel-classification-based segmentation is omitted, and spatial consistency is lacking.
Inspired by the FCN structure, which is well suited to image segmentation, the road segmentation problem can also be solved with a fully convolutional structure, and a network, SUG-Net, is constructed. As shown in fig. 3, to improve the real-time performance of the network and reduce the number of convolutional layers, SUG-Net is divided into two networks. After an image is input, it first passes through the backbone (feature extraction) network, which nests convolutional and max-pooling layers and is composed of five 3×3 convolutional layers and four max-pooling layers. To address the two shortcomings of a fully convolutional structure, namely insensitivity to image details (which makes fine results hard to obtain) and poor spatial consistency, spatial pyramid pooling with scales of 5×5, 9×9 and 13×13 is added at the bottom of the backbone network, making the whole feature extraction network better at extracting spatial features and mitigating the weaknesses of the fully convolutional structure. The pyramid pooling is then followed by the feature fusion network, formed by four 3×3 convolutional layers and four upsampling layers paired one-to-one, and a 1×1 convolutional layer is added at the end of the network to form a plurality of feature maps.
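The layer arrangement described above can be traced with a small shape calculation. This is a sketch under stated assumptions: 3×3 convolutions with stride 1 and padding 1 (size-preserving), 2×2 max pooling with stride 2, a 512×512 input, and SPP windows applied with stride 1 and "same" padding so the three pooled maps can be concatenated with the input. None of these specifics are stated in the patent; they are common choices used here for illustration.

```python
# Shape-tracing sketch of the SUG-Net backbone described above.
# Assumptions (not specified in the patent): 3x3 conv, stride 1, padding 1;
# 2x2 max pool, stride 2; 512x512 input; SPP pooling at stride 1 with
# "same" padding so outputs concatenate channel-wise with the input.

def conv3x3(h, w):          # stride 1, padding 1: spatial size unchanged
    return h, w

def maxpool2x2(h, w):       # stride 2: spatial size halved
    return h // 2, w // 2

def backbone(h, w):
    """Five 3x3 convolutions nested with four max-pooling layers."""
    sizes = []
    for _ in range(4):      # four conv+pool stages
        h, w = conv3x3(h, w)
        h, w = maxpool2x2(h, w)
        sizes.append((h, w))
    h, w = conv3x3(h, w)    # fifth convolution at the bottom
    sizes.append((h, w))
    return sizes

def spp_channels(c):
    """SPP with stride-1 'same' pooling at 3 scales, concatenated with input."""
    return c * 4            # 5x5 + 9x9 + 13x13 pooled maps + original map

print(backbone(512, 512))   # [(256, 256), (128, 128), (64, 64), (32, 32), (32, 32)]
print(spp_channels(512))    # 2048
```

The four pooling stages explain why the feature fusion network needs exactly four upsampling layers to restore the input resolution.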
Step 3: train the model. The processed data set is divided into a training set and a test set, with the training set accounting for 80% and the test set for 20%. The training set is then fed to the SUG-Net network for learning until the parameters of the model stabilize at essentially unchanged values.
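The 80/20 split can be sketched as follows. The shuffling, seed and file names are illustrative assumptions; the patent only specifies the 8:2 ratio.

```python
# Minimal sketch of the 80/20 train/test split described above.
# The seed and file-name list are hypothetical, for illustration only.
import random

def split_dataset(samples, train_ratio=0.8, seed=0):
    """Shuffle, then split into training and test sets."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * train_ratio)
    return items[:cut], items[cut:]

samples = [f"road_{i:04d}.png" for i in range(1000)]
train_set, test_set = split_dataset(samples)
print(len(train_set), len(test_set))  # 800 200
```

Shuffling before splitting keeps road types and surface conditions distributed across both sets, which matters given the variety the data collection step aims for.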
Step 4: test and use the model. Input the test set into the model, check the model's accuracy and speed, and then input the image to be segmented into the model to segment the road.
This scheme provides a novel network model for deep neural road segmentation and overcomes the insufficient real-time performance of existing road segmentation algorithms; modules can be added or adjusted to meet practical application requirements, giving the algorithm strong adaptability.
Example four
As shown in fig. 4, (a) is the original image, (b) is the U-Net segmentation result, and (c) is the SUG-Net segmentation result. The U-Net of (b) is currently one of the most excellent road segmentation networks, but as the figure shows, its convergence in the left and right edge regions is very poor, most of the gaps there are left unlearned, and its segmentation of the road contour is not as clean and clear as the SUG-Net segmentation in (c). SUG-Net therefore has better road segmentation accuracy.
Example five
As shown in fig. 5, (a) is the SPP-UNet segmentation result and (b) is the SUG-Net segmentation result. Comparing SPP-UNet and SUG-Net with time as the variable, the figure shows that within the same time SPP-UNet fails to segment even a rough contour and performs very poorly, while SUG-Net segments the whole road well, with only a few shadows appearing at the top of some images. SUG-Net therefore has good real-time performance.
For a better understanding of the present invention, the foregoing detailed description has been given in conjunction with specific embodiments, but without limiting the invention thereto. Any simple modification of the above embodiments according to the technical essence of the present invention still falls within the scope of the technical solution of the invention. In this specification, each embodiment is described with emphasis on its differences from the others, and the same or similar parts of the embodiments may be referred to each other. Since the system embodiments substantially correspond to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments.

Claims (10)

1. A road segmentation method based on a deep neural network comprises the preparation of a data set, and is characterized by further comprising the following steps:
step 1: constructing an algorithm model;
step 2: training the algorithm model until the parameters of the model stabilize;
and step 3: and inputting the image to be segmented into the algorithm model for road segmentation.
2. The deep neural network-based road segmentation method of claim 1, wherein the preparation of the data set comprises the sub-steps of:
step 01: collecting a road surface image;
step 02: and marking the collected road surface image.
3. The deep neural network-based road segmentation method according to claim 2, wherein the collection method captures the actual conditions of each road surface, covering varied road types and/or a variety of road surface conditions.
4. The deep neural network-based road segmentation method according to claim 3, wherein the step 1 comprises constructing a SUG-Net network, and the SUG-Net network comprises a backbone network and a feature fusion network.
5. The deep neural network-based road segmentation method according to claim 4, wherein the backbone network is composed of five 3×3 convolutional layers and four max-pooling layers, with the convolutional and max-pooling layers nested alternately.
6. The deep neural network-based road segmentation method according to claim 5, wherein spatial pyramid pooling with scales of 5×5, 9×9 and 13×13 is added at the bottom of the backbone network.
7. The deep neural network-based road segmentation method according to claim 6, wherein the pyramid pooling is followed by the feature fusion network, which is formed by four 3×3 convolutional layers and four upsampling layers paired one-to-one.
8. The deep neural network-based road segmentation method according to claim 7, wherein a 1×1 convolutional layer is added at the end of the feature fusion network to form a plurality of feature maps.
9. The deep neural network-based road segmentation method according to claim 8, wherein the step 1 further comprises placing the GIoU loss at the end of the SUG-Net network and outputting the result.
10. A road segmentation system based on a deep neural network comprises a data set, and is characterized by comprising the following modules:
a construction module: for constructing the algorithm model;
a training module: for training the algorithm model until the parameters of the model stabilize;
an input module: for inputting the image to be segmented into the algorithm model for road segmentation.
CN201911243197.0A 2019-12-06 2019-12-06 Road segmentation method and system based on deep neural network Active CN111160109B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911243197.0A CN111160109B (en) 2019-12-06 2019-12-06 Road segmentation method and system based on deep neural network

Publications (2)

Publication Number Publication Date
CN111160109A (en) 2020-05-15
CN111160109B (en) 2023-08-18

Family

ID=70556538

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911243197.0A Active CN111160109B (en) 2019-12-06 2019-12-06 Road segmentation method and system based on deep neural network

Country Status (1)

Country Link
CN (1) CN111160109B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107480726A (en) * 2017-08-25 2017-12-15 University of Electronic Science and Technology of China A scene semantic segmentation method based on fully convolutional networks and long short-term memory units
CN108062756A (en) * 2018-01-29 2018-05-22 Chongqing University of Technology Image semantic segmentation method based on deep fully convolutional networks and conditional random fields
CN108416783A (en) * 2018-02-01 2018-08-17 Hubei University of Technology Road scene segmentation method based on fully convolutional neural networks
CN109145920A (en) * 2018-08-21 2019-01-04 University of Electronic Science and Technology of China An image semantic segmentation method based on deep neural networks
CN109325534A (en) * 2018-09-22 2019-02-12 Tianjin University A semantic segmentation method based on a bidirectional multi-scale pyramid
CN109410219A (en) * 2018-10-09 2019-03-01 Shandong University An image segmentation method, device, and computer-readable storage medium based on pyramid fusion learning
CN109711413A (en) * 2018-12-30 2019-05-03 Shaanxi Normal University Image semantic segmentation method based on deep learning
US20190205758A1 (en) * 2016-12-30 2019-07-04 Konica Minolta Laboratory U.S.A., Inc. Gland segmentation with deeply-supervised multi-level deconvolution networks
CN109993082A (en) * 2019-03-20 2019-07-09 University of Shanghai for Science and Technology Convolutional neural network road scene classification and road segmentation method
CN110111335A (en) * 2019-05-08 2019-08-09 Nanchang Hangkong University An urban traffic scene semantic segmentation method and system based on adaptive adversarial learning
CN110188817A (en) * 2019-05-28 2019-08-30 Xiamen University A real-time high-performance street-view image semantic segmentation method based on deep learning
CN110298841A (en) * 2019-05-17 2019-10-01 Tongji University A multi-scale image semantic segmentation method and device based on a fusion network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
OLAF RONNEBERGER et al.: "U-Net: Convolutional Networks for Biomedical Image Segmentation", 《ARXIV》 *
景唯ACR: "A Detailed Explanation of GIoU", 《HTTPS://BLOG.CSDN.NET/WEIXIN_41735859/ARTICLE/DETAILS/89288493》 *


Similar Documents

Publication Publication Date Title
CN107563381B (en) Multi-feature fusion target detection method based on full convolution network
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN110363182B (en) Deep learning-based lane line detection method
CN111126202B (en) Optical remote sensing image target detection method based on void feature pyramid network
CN108319972B (en) End-to-end difference network learning method for image semantic segmentation
CN108985181B (en) End-to-end face labeling method based on detection segmentation
CN109344736B (en) Static image crowd counting method based on joint learning
CN110428432B (en) Deep neural network algorithm for automatically segmenting colon gland image
CN109493346B (en) Stomach cancer pathological section image segmentation method and device based on multiple losses
CN111832655B (en) Multi-scale three-dimensional target detection method based on characteristic pyramid network
CN109840521B (en) Integrated license plate recognition method based on deep learning
CN108629367B (en) Method for enhancing garment attribute identification precision based on deep network
CN111612008B (en) Image segmentation method based on convolution network
CN109583483A (en) A kind of object detection method and system based on convolutional neural networks
CN109840483B (en) Landslide crack detection and identification method and device
CN111626993A (en) Image automatic detection counting method and system based on embedded FEFnet network
CN110569782A (en) Target detection method based on deep learning
CN106295613A (en) A kind of unmanned plane target localization method and system
CN111652273B (en) Deep learning-based RGB-D image classification method
CN114821069B (en) Construction semantic segmentation method for remote sensing image of double-branch network fused with rich-scale features
CN110738132B (en) Target detection quality blind evaluation method with discriminant perception capability
CN111860587A (en) Method for detecting small target of picture
CN110991374B (en) Fingerprint singular point detection method based on RCNN
CN116630971A (en) Wheat scab spore segmentation method based on CRF_Resunate++ network
CN113487610B (en) Herpes image recognition method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant