CN110634142B - Complex vehicle road image boundary optimization method - Google Patents
Complex vehicle road image boundary optimization method
- Publication number
- CN110634142B CN110634142B CN201910767799.XA CN201910767799A CN110634142B CN 110634142 B CN110634142 B CN 110634142B CN 201910767799 A CN201910767799 A CN 201910767799A CN 110634142 B CN110634142 B CN 110634142B
- Authority
- CN
- China
- Prior art keywords
- pixel
- image
- segmentation
- boundary
- feature map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000005457 optimization Methods 0.000 title claims abstract description 31
- 238000000034 method Methods 0.000 title claims abstract description 27
- 238000004422 calculation algorithm Methods 0.000 claims abstract description 41
- 230000011218 segmentation Effects 0.000 claims abstract description 37
- 238000012549 training Methods 0.000 claims abstract description 6
- 238000005192 partition Methods 0.000 claims description 2
- 238000013528 artificial neural network Methods 0.000 abstract description 5
- 238000011084 recovery Methods 0.000 abstract description 5
- 238000004088 simulation Methods 0.000 abstract description 2
- 238000013527 convolutional neural network Methods 0.000 description 5
- 230000006870 function Effects 0.000 description 5
- 238000011176 pooling Methods 0.000 description 5
- 238000012545 processing Methods 0.000 description 5
- 238000003709 image segmentation Methods 0.000 description 4
- 238000005070 sampling Methods 0.000 description 4
- 238000010586 diagram Methods 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 239000000284 extract Substances 0.000 description 2
- 238000000605 extraction Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 238000007635 classification algorithm Methods 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 238000012790 confirmation Methods 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 238000003708 edge detection Methods 0.000 description 1
- 230000005484 gravity Effects 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 210000002569 neuron Anatomy 0.000 description 1
- 238000010606 normalization Methods 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 238000000926 separation method Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20192—Edge enhancement; Edge preservation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a boundary optimization method for complex vehicle-road images, which classifies image targets in a complex vehicle-road environment. First, a SegNet algorithm model is trained on a large amount of data to obtain rough vehicle-road target classification features. A simple linear iterative clustering algorithm is then used to obtain the over-segmentation regions of the image, and the neural network obtained from the SegNet algorithm is combined with them to determine the class of each pixel in each superpixel region. Finally, the semantic segmentation result is optimized through the accurate boundary recovery capability of a conditional random field, correcting boundary errors and small-region mis-segmentation in the vehicle-road image. The results show that the method improves the segmentation accuracy of object boundaries.
Description
Technical Field
The invention belongs to the technical field of image processing and relates to image boundary optimization, in particular to a boundary optimization method for complex vehicle-road images.
Background
Current vehicle-road image boundary optimization methods first segment the image and then process the segmented blocks locally. Conventional image segmentation algorithms include threshold-based methods, edge-detection-based methods, region-based methods, and the like. However, because road scenes are complex and contain many categories, the results of these traditional segmentation methods still leave room for improvement. Since 2012, deep learning algorithms such as fully convolutional networks (FCNs) have been rapidly applied to tasks such as target recognition and target detection with significant success. FCNs, however, produce results that are not fine enough and lack spatial consistency, so other techniques must be combined with them to obtain finer segmentation results. Meanwhile, current local processing of image blocks mainly performs local noise reduction over all local pixels, which leaves small regions mis-segmented and pays little attention to the boundaries of the block images.
Disclosure of Invention
The invention aims to provide a boundary optimization algorithm for complex vehicle-road images that corrects boundary errors and small-region target mis-segmentation in such images.
In order to achieve the above purpose, the invention adopts the following technical scheme:
The boundary optimization method for complex vehicle-road images performs image boundary optimization on a complex vehicle-road image to be optimized to obtain a complex vehicle-road boundary-optimized image, and comprises the following steps:
step 1: converting the complex vehicle-road image to be optimized into an initial vehicle-road RGB image, and performing feature training on the initial vehicle-road RGB image with a SegNet algorithm model to obtain a rough feature map of the initial vehicle-road RGB image;
step 2: performing region segmentation on the rough feature map with a simple linear iterative clustering algorithm to obtain a plurality of over-segmentation regions, each of which contains a plurality of similar pixels, and determining the pixel class of all similar pixels in each over-segmentation region from the rough feature map obtained in step 1;
step 3: performing image boundary optimization on the complex vehicle-road image to be optimized by using the boundary recovery capability of a conditional random field, combined with the pixel class of each over-segmentation region and the rough feature map obtained in step 1, to obtain the complex vehicle-road boundary-optimized image.
Specifically, in step 2, each pixel in the rough feature map corresponds to one (L, a, b) color value, and two-dimensional xy image coordinates of the rough feature map are established, so that each pixel can be represented by a 5-dimensional vector [L, a, b, x, y]. The similarity of any two pixels is measured by the distance between their corresponding vectors; if the vector distance is smaller than a given threshold δ, the two pixels are similar pixels. Here L represents the color lightness of the pixel, a represents its green-red color value, and b represents its blue-yellow color value.
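As an illustration of this similarity test, the following is a minimal Python sketch, not the patent's own code; the function name, the use of scikit-image for the RGB-to-CIE-Lab conversion, and the coordinate convention are assumptions, while the threshold δ follows the text above.

```python
import numpy as np
from skimage import color

def similar_pixels(rgb_image, p, q, delta):
    """Return True if pixels p = (x1, y1) and q = (x2, y2) are 'similar',
    i.e. the distance between their 5-D vectors [L, a, b, x, y] is below delta."""
    lab = color.rgb2lab(rgb_image)          # convert the RGB image to CIE-Lab
    (x1, y1), (x2, y2) = p, q
    v1 = np.array([*lab[y1, x1], x1, y1], dtype=float)
    v2 = np.array([*lab[y2, x2], x2, y2], dtype=float)
    return np.linalg.norm(v1 - v2) < delta  # smaller distance means higher similarity
```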
Specifically, in step 2, the i-th over-segmentation region is S_i, S_i = {C_1, C_2, ..., C_N}, where C_N is the N-th similar pixel contained in the over-segmentation region. The pixel class of each similar pixel in S_i is computed, and n_Ii denotes the number of similar pixels of pixel class I in the i-th over-segmentation region. The proportion of n_Ii to the number of all similar pixels is calculated for each class, and the maximum and second-maximum values of this proportion are found; if the difference between the maximum and the second maximum is greater than a given threshold ζ, pixel class I is taken as the pixel class to which all similar pixels of the i-th over-segmentation region belong.
Preferably, ζ has a value of 0.1 to 0.2.
The beneficial effects of the invention are as follows:
the method can optimize the boundary of the complex vehicle road image, can optimize the spot of the wrong segmentation of the small area in the vehicle road image, and is reflected as more small spots in the image. By the SLIC algorithm processing, these "blobs" can be eliminated to some extent by their block label similarity to surrounding superpixels, i.e., the speckle reduction processing, thereby improving the segmentation accuracy of the object boundary.
Drawings
Fig. 1 is a schematic block diagram of a boundary optimization algorithm for a complex road image.
Fig. 2 is a diagram of a SegNet network architecture model.
FIG. 3 shows the difference between SegNet and FCN upsampling; wherein: (a) SegNet upsampling diagram; (b) FCN upsampling diagram.
FIG. 4 is a graph of semantic segmentation of an image; wherein: (a) an original image; (b) SegNet semantic segmentation images.
FIG. 5 is a flowchart of an image boundary optimization method according to the present invention.
Fig. 6 shows superpixel segmentation results for different parameters.
Detailed Description
The invention will now be described in detail with reference to the drawings and examples.
The invention uses Simple Linear Iterative Clustering (SLIC), an image superpixel segmentation algorithm. The whole algorithm has only one input, the number of superpixels K: the original N pixels of the picture are grouped into K superpixels, each of size N/K. The spacing between superpixels (i.e., the side length of a superpixel in the regular case) is S = sqrt(N/K). The goal of the algorithm is to minimize a cost function; specifically, the sum of the distances from each pixel to its corresponding center point is minimized, at which point the iteration converges. The detailed flow of the SLIC algorithm is as follows:
step 1: the color image in the CIELAB color space sets the number of superpixels K. The initial clustering center is initialization C i =(l i ,a i ,b i ,x i ,b i ) T Step length between clustering centers of superpixels
Step 2: in order to avoid that the super pixel falls to the edge position and reduce noise pixels to be close to the super pixel, lab color gradients of pixel points in 8 adjacent domains of the seed point are calculated in 3*3 adjacent domains of each cluster center, the Lab color gradients are respectively compared with the initial seed point gradients, the point with the minimum gradient value (the most flat) is taken, and LABXY information of the point is recorded as a new seed point.
Step 3:
1) A class label is assigned to each pixel within a neighborhood around each seed point. This is similar to, but different from, the standard k-means algorithm: k-means searches for each pixel over the whole image, whereas SLIC limits the search range of each seed to a 2S×2S region, which accelerates convergence.
2) The distance metric combines a color distance and a spatial distance. The distances dist_lab and dist_xy of each pixel are initialized to infinity.
dc represents the Lab color-space distance between a point j and a seed point i, calculated as follows:
dc = sqrt((l_j - l_i)^2 + (a_j - a_i)^2 + (b_j - b_i)^2)
ds represents the spatial coordinate distance between the point and the seed point, calculated as follows:
ds = sqrt((x_j - x_i)^2 + (y_j - y_i)^2)
d represents the combined distance (normalized color distance plus spatial distance) between the point and the seed point. Ns is the maximum spatial distance within a class, defined as Ns = S for each cluster. The maximum color distance Nc varies from picture to picture and from cluster to cluster, so a fixed constant m (value range [1, 40], typically 10) is used instead. The final distance measure D' is as follows:
D' = sqrt((dc / m)^2 + (ds / S)^2)
Since each pixel is searched by several seed points, it has a distance to each of those seeds; the seed corresponding to the minimum distance is taken as the cluster center of that pixel. The labxy mean and coordinate centroid of all pixels in each new superpixel are then computed, and the coordinate centroid is taken as the new seed position of the superpixel.
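A minimal sketch of the combined distance D' described above, assuming 5-D [l, a, b, x, y] vectors for a pixel and a seed; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def slic_distance(pixel_labxy, seed_labxy, m, S):
    """Combined SLIC distance D' = sqrt((dc/m)^2 + (ds/S)^2)."""
    dc = np.linalg.norm(pixel_labxy[:3] - seed_labxy[:3])  # Lab color distance
    ds = np.linalg.norm(pixel_labxy[3:] - seed_labxy[3:])  # spatial distance
    return np.sqrt((dc / m) ** 2 + (ds / S) ** 2)
```

During the assignment step, this distance would be evaluated only for seeds whose 2S×2S window contains the pixel, and the pixel is attached to the seed giving the minimum value.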
Step 4: and the isolated points are removed, so that connectivity is enhanced. This function has mainly several roles: ensuring that the same super pixel is a single communication area; removing undersized superpixels; avoiding the situation where a single superpixel is cut.
Different choices of the input number of superpixels K and of the color-distance weight m produce different segmentation effects. Replacing massive pixel data with hundreds or thousands of superpixels not only yields image regions with clear edge contours but also greatly reduces the number of pixel samples, improving computational efficiency.
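To reproduce the effect of different K and m values (cf. Fig. 6), an off-the-shelf SLIC implementation such as the one in scikit-image can be used; the sketch below is a stand-in for the algorithm described above, and the file names are placeholders.

```python
import numpy as np
from skimage import io, segmentation

img = io.imread("road_scene.png")  # placeholder input image
for K, m in [(200, 10), (500, 10), (500, 30)]:
    labels = segmentation.slic(img, n_segments=K, compactness=m, start_label=0)
    overlay = segmentation.mark_boundaries(img, labels)        # draw superpixel edges
    io.imsave(f"slic_K{K}_m{m}.png", (overlay * 255).astype(np.uint8))
```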
The boundary pixels and misclassified pixels in each region are then reclassified, as follows:
For all pixels C = {C_1, C_2, ..., C_s} in each superpixel Sp, count the number of pixels carrying each class label:
n_p = {n_1, n_2, n_3, ..., n_K}   (9)
where n_i denotes the number of pixels labeled with class i in region p. The maximum value of n_p is found and the pixels in region p are assigned to that class. However, when the largest count n_i and the second-largest count n_j in n_p are close, it is not certain whether region p should belong to class i or class j; simply assigning the class to which the neural network gives the highest probability can then cause mis-segmentation. A quantity T is therefore defined as the difference between the largest and second-largest label proportions:
T = (n_i - n_j) / (n_1 + n_2 + ... + n_K)
Taking the threshold as 0.2: if T is larger than 0.2, the pixels in region p are classified into the i-th class; otherwise they are classified according to the semantic segmentation result of the convolutional neural network.
The Conditional Random Field (CRF) used in the invention, an algorithm model from the machine learning field, has a boundary optimization function. Boundary optimization removes the small-region mis-segmentation that remains after the superpixel over-segmentation of the previous step; such errors appear as small spots in the image, and these spots can be eliminated through their similarity to the block labels of the surrounding superpixels. The boundary optimization algorithm reclassifies the boundary pixels and misclassified pixels in each region, and combines superpixels with the convolutional neural network to realize image boundary optimization. The block-label similarity is computed as a similarity probability: for each point to be denoised, a similar locally homogeneous region is searched for using the similarity probability, and the point is then processed within that region. Let X and Y be random variables and P(Y|X) be the conditional probability distribution of Y given X. If the random variable Y forms a Markov random field represented by an undirected graph G = (V, E), and this holds for every node V, the conditional probability distribution P(Y|X) is called a conditional random field. Many CRF tools exist today, for example CRF++, which is available for both Windows and Linux (the tar.gz package is the Linux version and the zip package is the Windows version) and is not described further here.
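The patent does not prescribe a particular CRF implementation beyond mentioning CRF++; the sketch below uses the pydensecrf package as one possible, assumed way to apply a dense CRF to refine the per-pixel class probabilities produced by the network.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(rgb_image, probs, n_iters=5):
    """rgb_image: HxWx3 uint8 array; probs: (n_labels, H, W) softmax output."""
    n_labels, H, W = probs.shape
    d = dcrf.DenseCRF2D(W, H, n_labels)
    unary = unary_from_softmax(probs.astype(np.float32))   # unary term = -log(p)
    d.setUnaryEnergy(np.ascontiguousarray(unary))
    d.addPairwiseGaussian(sxy=3, compat=3)                 # smoothness kernel
    d.addPairwiseBilateral(sxy=80, srgb=13,                # appearance (color) kernel
                           rgbim=np.ascontiguousarray(rgb_image), compat=10)
    Q = d.inference(n_iters)                               # mean-field inference
    return np.argmax(Q, axis=0).reshape(H, W)              # refined label map
```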
According to the boundary optimization method for complex vehicle-road images, a SegNet algorithm model is trained on a large amount of data to obtain a rough feature map of the initial vehicle-road RGB image, which contains rough vehicle-road target classification features. The over-segmentation regions of the image are obtained with a Simple Linear Iterative Clustering (SLIC) algorithm, and the pixel class in each over-segmentation region is determined by combining it with the rough feature map produced by the SegNet network. The semantic segmentation result of the over-segmented regions is then optimized using the accurate boundary recovery capability of a Conditional Random Field (CRF), correcting boundary errors and small-region target mis-segmentation in the vehicle-road image.
The SegNet algorithm model mainly consists of an encoder and a decoder. The encoding process extracts target features based on the VGG-16 network model; the decoding process uses the recorded max-pooling indices to upsample the convolved and pooled features and obtain the feature maps of the input image. A SegNet image semantic segmentation algorithm based on the VGG-16 network model is used to realize a rough segmentation of the image: rough features are extracted and each pixel is classified and identified. Pixels with similar features in the complex vehicle-road image are merged to form a number of representative regions, giving the over-segmentation regions of the image; the class of the pixels in each region is determined by combining these regions with the neural network, and finally the pixels of the whole image are reclassified according to the per-region pixel classification rule. Superpixels are combined with the convolutional neural network to realize image boundary optimization. Determining the class of the pixels in each region comprises: obtaining the class of each target through convolution layers, pooling layers, deconvolution layers and the like, classifying each pixel through a softmax function, and constructing a cross-entropy loss function from the error between the predicted value and the true value. Determining similar pixels in the boundary optimization algorithm comprises: converting the image from the RGB color space to the CIE-Lab color space, so that the (L, a, b) color value and the (x, y) coordinates of each pixel form a 5-dimensional vector V = [L, a, b, x, y]; the similarity of two pixels can then be measured by the distance between their vectors, with a larger distance meaning a smaller similarity.
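The per-pixel softmax classification and cross-entropy loss mentioned above can be sketched as follows in PyTorch (an assumed framework; `model` stands for the SegNet-style encoder-decoder and is not defined here).

```python
import torch
import torch.nn as nn

def training_step(model, images, labels, optimizer):
    """images: (B, 3, H, W) float tensor; labels: (B, H, W) long tensor of class ids."""
    logits = model(images)                        # (B, num_classes, H, W)
    loss = nn.CrossEntropyLoss()(logits, labels)  # per-pixel softmax + cross-entropy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```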
Example 1:
the embodiment provides a boundary optimization method for a complex vehicle road image, which comprises the following steps:
step 1: converting the complex vehicle-road image to be optimized into an initial vehicle-road RGB image, and performing feature training on the initial vehicle-road RGB image with a SegNet algorithm model to obtain a rough feature map of the initial vehicle-road RGB image;
specifically, an original image I and a rough feature map L extracted by a SegNet algorithm are input.
Step 2: performing region segmentation on the rough feature map by adopting a simple linear iterative clustering algorithm to obtain a plurality of segmented regions, wherein each segmented region comprises a plurality of similar pixels, and determining the pixel category of all the similar pixels of each segmented region through the rough feature map obtained in the step 1;
specifically, the SLIC algorithm is applied to segment the original image into K superpixels SP = {S_1, S_2, S_3, ..., S_K}, and the region or pixel class of each superpixel is marked with a label i;
more specifically, let all pixels of superpixel S_i be S_i = {C_1, C_2, ..., C_N}, where C_j corresponds to a pixel of class j in the feature map. For each superpixel S_i, the class of each pixel is computed and the number of pixels carrying each label is counted, n = {n_1, n_2, ..., n_m}, where n_i denotes the number of pixels in S_i belonging to class i. The proportion of pixels with the same label L_Cj is then computed, W_Cj = {W_1, W_2, ..., W_m}. If the difference between the maximum and the second maximum of W_Cj is greater than 0.2, the superpixel is marked with the label L_Cmax corresponding to the maximum of W_Cj; otherwise the class given by the SegNet segmentation result is kept for this superpixel. Finally the whole image is traversed.
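A minimal NumPy sketch of this relabeling rule (illustrative names; the 0.2 margin follows the text): for each superpixel, the dominant SegNet label overwrites the whole superpixel only when its proportion exceeds the runner-up by more than the margin.

```python
import numpy as np

def relabel_superpixels(segnet_labels, slic_labels, margin=0.2):
    """segnet_labels, slic_labels: HxW integer arrays (class ids / superpixel ids)."""
    refined = segnet_labels.copy()
    for sp_id in np.unique(slic_labels):
        mask = slic_labels == sp_id
        counts = np.bincount(segnet_labels[mask])   # label histogram n = {n_1, ..., n_m}
        props = counts / counts.sum()               # label proportions W_Cj
        order = np.sort(props)[::-1]
        if len(order) > 1 and order[0] - order[1] > margin:
            refined[mask] = np.argmax(props)        # mark the superpixel with L_Cmax
        # otherwise keep SegNet's per-pixel result for this superpixel
    return refined
```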
Step 3: and (2) performing image boundary optimization on the complex vehicle road image to be optimized by utilizing the boundary recovery capability of the conditional random field and combining the pixel type of each over-segmentation area and the rough feature map obtained in the step (1) to obtain a complex vehicle road boundary optimized image.
The boundary optimization of the invention feeds the boundary and contour information extracted from the original vehicle-road image with superpixels back into the pixel-level segmentation image, enhancing and improving the preliminary model and increasing the segmentation accuracy for complex vehicle-road targets. The SegNet algorithm extracts pixel-level features, the SLIC algorithm extracts superpixel-level image features, and the two kinds of pixel features are then combined into a boundary-optimized semantic segmentation algorithm. That is, L_Cmax reassigns the classification of the current superpixel, and the image I' is output.
Referring to fig. 1, rough vehicle-road target classification features are obtained by training a SegNet algorithm model on a large amount of data; the over-segmentation regions of the image are then obtained with the Simple Linear Iterative Clustering (SLIC) algorithm, and the class of each pixel in each superpixel region is determined by combining them with the neural network obtained from the SegNet algorithm; finally, the semantic segmentation result is optimized through the accurate boundary recovery capability of a Conditional Random Field (CRF), correcting boundary errors and small-region target mis-segmentation in the vehicle-road image.
Referring to fig. 2, the SegNet network mainly consists of an encoder and a decoder. The encoding process extracts target features based on the VGG-16 network model but discards its last three fully connected layers, which greatly reduces the number of learnable parameters. Each conv layer comprises a convolution, a batch normalization and a ReLU operation; each pool layer uses a 2×2 window with stride 2, so each pooling halves the image resolution. During each max-pooling operation, the location of the maximum within each pooling window of the feature maps is recorded. The decoding process uses these recorded max-pooling indices to upsample the convolved and pooled features and obtain the feature maps of the input image.
Referring to fig. 3 (a), in the SegNet upsampling process the feature-map values 1, 2, 3, 4 are placed into the new feature map at the previously saved max-pooling coordinates, and the remaining positions are filled with zeros.
Referring to fig. 3 (b), in the FCN upsampling process the feature-map values 1, 2, 3, 4 are deconvolved and added to the corresponding feature map from the earlier convolution stage.
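The difference between the two upsampling schemes can be illustrated with the following PyTorch sketch (an assumed framework): SegNet-style unpooling reuses the stored max-pooling indices and fills every other position with zeros, whereas FCN would instead apply a learned deconvolution and add the result to an encoder feature map.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 4, 4)                       # toy encoder feature map
pooled, indices = F.max_pool2d(x, kernel_size=2, stride=2, return_indices=True)
# ... decoder processing at the coarse resolution ...
unpooled = F.max_unpool2d(pooled, indices, kernel_size=2, stride=2)
print(unpooled.shape)  # torch.Size([1, 1, 4, 4]); zeros except at the stored max positions
```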
Referring to fig. 4 (b), semantic segmentation of the image with a convolutional neural network already achieves good results: repeated pooling reduces overfitting, enlarges the receptive field and provides invariance. However, the loss of target structure information caused by the repeated downsampling blurs the semantic features at the edges of target objects and reduces segmentation precision. As can be seen from the semantic segmentation result, the edge segmentation of the left lane is rough, the boundary localization is not accurate enough, and parts of the region are mis-segmented.
Referring to fig. 5, the invention mainly studies a boundary optimization algorithm for complex vehicle-road images. Boundary optimization feeds the boundary and contour information extracted from the original vehicle-road image with superpixels back into the pixel-level segmentation image, enhancing and improving the preliminary model and increasing the segmentation accuracy for complex vehicle-road targets. The SegNet algorithm extracts pixel-level features, the SLIC algorithm extracts superpixel-level image features, and the two kinds of pixel features are then combined into a boundary-optimized semantic segmentation algorithm.
Referring to fig. 6, the principle of the SLIC algorithm is to convert the image from the RGB color space to the CIE-Lab color space, where the (L, a, b) color values and the (x, y) coordinates of each pixel form a 5-dimensional vector V = [L, a, b, x, y]; the similarity of two pixels can then be measured by the distance between their vectors, with a larger distance meaning a smaller similarity.
Claims (2)
1. The boundary optimization method for the complex road image is characterized by comprising the following steps of:
step 1: converting the complex vehicle road image to be optimized into an initial vehicle road RGB image, and performing feature training on the initial vehicle road RGB image by adopting a SegNet algorithm model to obtain a rough feature map of the initial vehicle road RGB image;
step 2: performing region segmentation on the rough feature map with a simple linear iterative clustering algorithm: finding all pixel points similar to any given pixel point in the rough feature map to form an over-segmentation region, thereby obtaining a plurality of over-segmentation regions, each of which contains a plurality of similar pixels; and determining the pixel class of all similar pixels in each over-segmentation region from the rough feature map obtained in step 1; wherein:
the specific step of finding all pixel points similar to any given pixel point in the rough feature map comprises: each pixel point in the rough feature map corresponds to one (L, a, b) color value, and two-dimensional xy image coordinates of the rough feature map are established, so that the pixel point is represented by a 5-dimensional vector [L, a, b, x, y]; the similarity of the pixel point and any other pixel point is measured by the vector distance between them, and if the vector distance is smaller than a given threshold δ, the two pixel points are similar; wherein L, a and b respectively represent the color lightness, the green-red color value and the blue-yellow color value of the pixel point;
the determining, from the rough feature map obtained in step 1, of the pixel class to which all similar pixels of each over-segmentation region belong specifically comprises: all similar pixels of S_i belong to pixel class I, where S_i denotes the i-th over-segmentation region, S_i = {C_1, C_2, ..., C_N}, and C_N is the N-th similar pixel contained in the over-segmentation region; the proportion of n_Ii to the number of all similar pixels is calculated, where n_Ii denotes the number of similar pixels of pixel class I in the i-th over-segmentation region; the maximum value and the second-maximum value of the proportion are found, and if the difference between the maximum value and the second-maximum value is greater than a given threshold ζ, pixel class I is taken as the pixel class to which all similar pixels of the i-th over-segmentation region belong;
step 3: applying a conditional random field method, combined with the pixel class of each over-segmentation region and the rough feature map obtained in step 1, to perform image boundary speckle reduction on the complex vehicle-road image to be optimized and obtain the complex vehicle-road boundary-optimized image.
2. The boundary optimization method for a complex road image according to claim 1, wherein ζ takes a value of 0.1-0.2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910767799.XA CN110634142B (en) | 2019-08-20 | 2019-08-20 | Complex vehicle road image boundary optimization method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910767799.XA CN110634142B (en) | 2019-08-20 | 2019-08-20 | Complex vehicle road image boundary optimization method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110634142A CN110634142A (en) | 2019-12-31 |
CN110634142B true CN110634142B (en) | 2024-02-02 |
Family
ID=68970363
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910767799.XA Active CN110634142B (en) | 2019-08-20 | 2019-08-20 | Complex vehicle road image boundary optimization method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110634142B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111881744B (en) * | 2020-06-23 | 2024-06-21 | 安徽清新互联信息科技有限公司 | Face feature point positioning method and system based on spatial position information |
CN112580424B (en) * | 2020-09-29 | 2023-08-11 | 长安大学 | Polarization characteristic multi-scale pooling classification algorithm for complex vehicle-road environment |
CN113657415B (en) * | 2021-10-21 | 2022-01-25 | 西安交通大学城市学院 | Object detection method oriented to schematic diagram |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106446914A (en) * | 2016-09-28 | 2017-02-22 | 天津工业大学 | Road detection based on superpixels and convolution neural network |
WO2018119807A1 (en) * | 2016-12-29 | 2018-07-05 | 浙江工商大学 | Depth image sequence generation method based on convolutional neural network and spatiotemporal coherence |
CN109522908A (en) * | 2018-11-16 | 2019-03-26 | 董静 | Image significance detection method based on area label fusion |
WO2019062092A1 (en) * | 2017-09-30 | 2019-04-04 | 深圳市颐通科技有限公司 | Superpixel- and multivariate color space-based body outline extraction method |
CN110096961A (en) * | 2019-04-04 | 2019-08-06 | 北京工业大学 | A kind of indoor scene semanteme marking method of super-pixel rank |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9558268B2 (en) * | 2014-08-20 | 2017-01-31 | Mitsubishi Electric Research Laboratories, Inc. | Method for semantically labeling an image of a scene using recursive context propagation |
-
2019
- 2019-08-20 CN CN201910767799.XA patent/CN110634142B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106446914A (en) * | 2016-09-28 | 2017-02-22 | 天津工业大学 | Road detection based on superpixels and convolution neural network |
WO2018119807A1 (en) * | 2016-12-29 | 2018-07-05 | 浙江工商大学 | Depth image sequence generation method based on convolutional neural network and spatiotemporal coherence |
WO2019062092A1 (en) * | 2017-09-30 | 2019-04-04 | 深圳市颐通科技有限公司 | Superpixel- and multivariate color space-based body outline extraction method |
CN109522908A (en) * | 2018-11-16 | 2019-03-26 | 董静 | Image significance detection method based on area label fusion |
CN110096961A (en) * | 2019-04-04 | 2019-08-06 | 北京工业大学 | A kind of indoor scene semanteme marking method of super-pixel rank |
Non-Patent Citations (4)
Title |
---|
Liu Bushi; Lü Yongbo; Lü Wanjun; Li Jing; Ouyang Qi. Research on road detection based on the MS-RG hybrid image segmentation model. Journal of Transportation Systems Engineering and Information Technology, 2019, No. 2 (full text). *
Li Zongmin; Xu Xiyun; Liu Yujie; Li Hua. Target region segmentation algorithm combining conditional random field pixel modeling and deep feature fusion. Journal of Computer-Aided Design & Computer Graphics, 2018, No. 6 (full text). *
Yang Fei; Wang Huan; Jin Zhong. Road segmentation model based on fused hierarchical conditional random fields. Robot, 2018, No. 6 (full text). *
Jiang Yingfeng; Zhang Hua; Xue Yanbing; Zhou Mian; Xu Guangping; Gao Zan. Research on a new multi-scale deep-learning method for image semantic understanding. Journal of Optoelectronics·Laser, 2016, No. 2 (full text). *
Also Published As
Publication number | Publication date |
---|---|
CN110634142A (en) | 2019-12-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108537239B (en) | Method for detecting image saliency target | |
Dornaika et al. | Building detection from orthophotos using a machine learning approach: An empirical study on image segmentation and descriptors | |
CN110866896B (en) | Image saliency target detection method based on k-means and level set super-pixel segmentation | |
CN109829449B (en) | RGB-D indoor scene labeling method based on super-pixel space-time context | |
CN111986125B (en) | Method for multi-target task instance segmentation | |
CN112036231B (en) | Vehicle-mounted video-based lane line and pavement indication mark detection and identification method | |
CN111191583A (en) | Space target identification system and method based on convolutional neural network | |
CN110399840B (en) | Rapid lawn semantic segmentation and boundary detection method | |
CN110634142B (en) | Complex vehicle road image boundary optimization method | |
CN112052783A (en) | High-resolution image weak supervision building extraction method combining pixel semantic association and boundary attention | |
CN114283162B (en) | Real scene image segmentation method based on contrast self-supervision learning | |
CN112733614B (en) | Pest image detection method with similar size enhanced identification | |
CN110188763B (en) | Image significance detection method based on improved graph model | |
CN112818905B (en) | Finite pixel vehicle target detection method based on attention and spatio-temporal information | |
CN111583279A (en) | Super-pixel image segmentation method based on PCBA | |
CN114627052A (en) | Infrared image air leakage and liquid leakage detection method and system based on deep learning | |
CN112949338A (en) | Two-dimensional bar code accurate positioning method combining deep learning and Hough transformation | |
CN113435240A (en) | End-to-end table detection and structure identification method and system | |
CN104657980A (en) | Improved multi-channel image partitioning algorithm based on Meanshift | |
CN113486894B (en) | Semantic segmentation method for satellite image feature parts | |
CN108230330B (en) | Method for quickly segmenting highway pavement and positioning camera | |
CN115100652A (en) | Electronic map automatic generation method based on high-resolution remote sensing image | |
CN109741358B (en) | Superpixel segmentation method based on adaptive hypergraph learning | |
CN116630971A (en) | Wheat scab spore segmentation method based on CRF_Resunate++ network | |
CN113657225B (en) | Target detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||