CN108256455B - Road image segmentation method based on vanishing points - Google Patents

Road image segmentation method based on vanishing points

Info

Publication number
CN108256455B
Authority
CN
China
Prior art keywords
road
image
color image
point
vanishing point
Prior art date
Legal status
Active
Application number
CN201810015224.8A
Other languages
Chinese (zh)
Other versions
CN108256455A (en)
Inventor
付方发
王瑶
徐伟哲
王宇哲
牛娜
蔡祎炜
王进祥
王永生
来逢昌
谭紫阳
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology
Priority to CN201810015224.8A
Publication of CN108256455A
Application granted
Publication of CN108256455B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A road image segmentation method based on vanishing points relates to an image processing method. The method addresses the problems that the traditional road recognition algorithm based on binocular matching has low accuracy and long running time, and that road recognition algorithms based on deep learning are easily influenced by sample data. The invention uses the characteristics of the vanishing point to separate the road part of an image from the road overhead part, thereby removing the redundant information of the road overhead part. The road color image with the road overhead part removed is then put into a deep learning algorithm for training, or processed directly with a road recognition algorithm based on binocular matching. Because removing the redundant information of the road overhead part reduces the overall area of the image, the method shortens the running time of the algorithm and at the same time improves the accuracy of road recognition, so that a travelable road area image is finally obtained rapidly and with high accuracy. The invention is used in the technical field of image processing.

Description

Road image segmentation method based on vanishing points
Technical Field
The invention relates to a road image segmentation method based on vanishing points, and belongs to the technical field of image processing.
Background
A road recognition algorithm is expected to deliver a high-accuracy road recognition result with a fast processing speed and little running time. Currently, mainstream road recognition algorithms can be divided into two types: those based on binocular matching and those based on deep learning. The traditional road recognition algorithm based on binocular matching suffers from low accuracy of the road recognition result and long running time, which greatly limits its application. As for deep learning, the patent "Segmentation model training method, road segmentation method, vehicle control method and apparatus" (CN106558058A) uses an unsupervised free-region segmentation method to segment the free region of each training sample image; the training sample image is used as the input image and the free-region segmentation image as the annotation image, an initial segmentation model is trained to obtain a target segmentation model, and the travelable road region is then obtained. However, road recognition algorithms based on deep learning are easily affected by the number of samples and produce wrong segmentations, while the traditional algorithm based on binocular matching has the drawbacks of long running time and lower precision than deep learning algorithms. Moreover, a road color picture collected by a front-facing camera mainly consists of the road and the road overhead part; depending on the shooting angle, the road overhead part accounts for 20%-60% of the whole image, and this part is redundant information for road recognition that is difficult to remove effectively. Both deep-learning-based road recognition algorithms and traditional binocular-matching-based algorithms therefore lack flexibility.
Disclosure of Invention
The invention aims to provide a road image segmentation method based on vanishing points, in order to solve the problems that the traditional road recognition algorithm based on binocular matching has a low road recognition accuracy and a long running time, and that road recognition algorithms based on deep learning are easily influenced by sample data, producing wrong segmentations that reduce the road recognition accuracy.
The technical scheme adopted by the invention for solving the technical problems is as follows:
step one, processing the input road color image with the Canny edge detection algorithm to obtain the horizontal and vertical edge information of a grayscale image I1(x, y); wherein x and y are respectively the abscissa and ordinate of each point in the grayscale image;
step two, converting the input road color image into a grayscale image I(x, y), and extracting the texture features of the whole grayscale image I(x, y) with a Gabor filter;
step three, performing an intersection operation on the horizontal and vertical edge information of the grayscale image I1(x, y) from step one and the texture features of the grayscale image I(x, y) from step two to obtain a texture feature image with horizontal and vertical directions, and obtaining the valid voting points by introducing a confidence function;
step four, using the valid voting points obtained in step three to vote for the candidate vanishing points above them, and taking the point with the largest number of votes as the vanishing point;
step five, segmenting the road color image into the road part and the road overhead part using the vanishing point obtained in step four: the ordinate y of the road color image increases from top to bottom and the abscissa x increases from left to right; cutting off all pixels of the whole road color image whose ordinate is smaller than the ordinate of the vanishing point to obtain a road color image with the road overhead part removed; the width of the road color image after removal of the road overhead part is equal to the width of the whole road color image;
step six, repeating steps one to five to obtain N road color images with the road overhead part removed; putting the N road color images with the road overhead part removed into a deep learning algorithm for training, or processing them directly with a binocular matching algorithm; a travelable road area image is thus obtained rapidly and with high accuracy.
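As an illustration of steps one to five, a minimal Python (OpenCV/NumPy) sketch follows; the kernel size, the filter parameters and the helper estimate_vanishing_row are illustrative assumptions, not part of the claimed method:

```python
# Minimal sketch of steps one to five; parameter values and the helper
# estimate_vanishing_row() are illustrative assumptions.
import cv2
import numpy as np

def crop_road_part(bgr, estimate_vanishing_row):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Step one: Canny edge detection for horizontal/vertical edge information.
    edges = cv2.Canny(gray, 100, 200)
    # Step two: Gabor texture energy in the four directions 0/45/90/135 deg.
    energy = []
    for phi in (0.0, 45.0, 90.0, 135.0):
        kern = cv2.getGaborKernel((21, 21), sigma=4.0,
                                  theta=np.deg2rad(phi),
                                  lambd=10.0, gamma=0.5)
        energy.append(np.abs(cv2.filter2D(gray.astype(np.float32), -1, kern)))
    energy = np.stack(energy)                    # shape: 4 x H x W
    # Steps three and four: valid voting points and vanishing point voting
    # (see the later sketches); here delegated to a caller-supplied helper.
    y0, x0 = estimate_vanishing_row(edges, energy)
    # Step five: cut off every row above the vanishing point ordinate y0.
    return bgr[y0:, :]
```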
The invention has the beneficial effects that:
1. The road image segmentation method based on vanishing points provided by the invention uses the vanishing point to separate the road in the road color image from its overhead part; that is, the characteristics of the vanishing point are used to remove the redundant information of the road overhead part of the road color image. The resulting road color image with the road overhead part removed is then put into a deep learning algorithm for training, or processed directly with a binocular matching algorithm, so that a travelable road area image with high accuracy is obtained. The method does not need to process the whole road edge; it only uses the vanishing point as a preprocessing step for the image. Meanwhile, removing the redundant information of the upper part of the road color image reduces the overall area of the image put into the deep learning algorithm, so the method shortens the running time.
2. The road image segmentation method based on vanishing points can process a picture of size 1248 × 384 within 0.08 s, reducing the running time of deep-learning-based or binocular-matching-based algorithms by 40%-60%.
3. The accuracy of the travelable road area image obtained by the road image segmentation method based on vanishing points can be improved by 1%-2%.
The road image segmentation method based on vanishing points can quickly obtain a travelable road area image with high accuracy, so that unmanned vehicles can advance according to the road recognition result.
Drawings
FIG. 1 is a flowchart of the operation of the road image segmentation method based on vanishing points of the present invention;
FIG. 2 is a road color image input by the present invention;
FIG. 3 is an edge image after Canny edge detection in accordance with the present invention;
FIG. 4 is the grayscale image texture map of the present invention;
FIG. 5 is a road texture map obtained by the confidence function method of the present invention;
FIG. 6 is a schematic diagram of valid voting points voting for candidate vanishing point pixels according to the present invention; wherein P1, P2, P3 and P4 represent valid voting points; V1 and V2 represent candidate vanishing points; O1, O2, O3 and O4 represent the main direction points of the valid voting points P1, P2, P3 and P4, respectively; Z1, Z2 and Z3 represent the three areas into which the candidate vanishing points V1 and V2 divide the image; L represents the boundary between the road on the right side and the off-road area in the image;
FIG. 7 is a diagram illustrating valid voting points voting for candidate vanishing point pixels according to the present invention;
FIG. 8 is a schematic illustration of a road vanishing point determined by the present invention;
FIG. 9 is a schematic diagram of the road color image obtained by removing the road overhead part after segmentation using the vanishing point according to the present invention;
FIG. 10 is a curve of the travelable road region recognition accuracy versus the number of training iterations for the road image segmentation method based on vanishing points of the present invention;
FIG. 11 is a curve of the training time versus the number of training iterations for the road image segmentation method based on vanishing points of the present invention.
Detailed Description
The first embodiment is as follows: as shown in fig. 1 to 3, the road image segmentation method based on vanishing points according to the present embodiment is specifically performed according to the following steps:
step one, processing the input road color image with the Canny edge detection algorithm to obtain the horizontal and vertical edge information of a grayscale image I1(x, y); wherein x and y are respectively the abscissa and ordinate of each point in the grayscale image;
step two, converting the input road color image into a grayscale image I(x, y), and extracting the texture features of the whole grayscale image I(x, y) with a Gabor filter;
step three, performing an intersection operation on the horizontal and vertical edge information of the grayscale image I1(x, y) from step one and the texture features of the grayscale image I(x, y) from step two to obtain a texture feature image with horizontal and vertical directions, and obtaining the valid voting points by introducing a confidence function;
step four, using the valid voting points obtained in step three to vote for the candidate vanishing points above them, and taking the point with the largest number of votes as the vanishing point;
step five, segmenting the road color image into the road part and the road overhead part using the vanishing point obtained in step four: the ordinate y of the road color image increases from top to bottom and the abscissa x increases from left to right; cutting off all pixels of the whole road color image whose ordinate is smaller than the ordinate of the vanishing point to obtain a road color image with the road overhead part removed; the width of the road color image after removal of the road overhead part is equal to the width of the whole road color image;
step six, repeating steps one to five to obtain N road color images with the road overhead part removed; putting the N road color images with the road overhead part removed into a deep learning algorithm for training, or processing them directly with a binocular matching algorithm; a travelable road area image is thus obtained rapidly and with high accuracy.
The second embodiment: this embodiment differs from the first embodiment in that: the size of the road color image input in step one is W × H × 3; where W represents the width of the road color image, H represents the height of the road color image, and 3 represents the three channels of the road color image.
The third embodiment: as shown in fig. 4, this embodiment differs from the first or second embodiment in that: the specific process of extracting the texture features of the whole grayscale image I(x, y) with a Gabor filter in step two is as follows:
the Gabor filter filters the gray image I (x, y) through a Gaussian window to extract the texture features of the whole gray image I (x, y), and the formula of the Gabor filter
Figure BDA0001541714320000041
The following were used:
Figure BDA0001541714320000042
formulation of grayscale image I (x, y) with Gabor filter
Figure BDA0001541714320000043
Convolution is carried out to obtain the energy response of each point in the gray-scale image I (x, y)
Figure BDA0001541714320000044
Figure BDA0001541714320000045
Figure BDA0001541714320000046
Wherein a is xcos phi + ysin phi, b is xsin phi + ycos phi;
Figure BDA0001541714320000047
in the form of a radial frequency, the frequency,
Figure BDA0001541714320000048
c is the frequency multiplication constant, where
Figure BDA0001541714320000049
e is the base of the natural logarithm function; i represents the unit of an imaginary number; φ is the texture direction, where φ belongs to {0 °,45 °,90 °, 135 ° }; the magnitude of the energy response of each point in the grayscale image I (x, y) is
Figure BDA00015417143200000410
Figure BDA00015417143200000411
And
Figure BDA00015417143200000412
respectively, the real and imaginary parts of the energy response of each point in the gray-scale image I (x, y).
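A sketch of this energy-response computation, building the complex kernel above directly; the values ω = 0.6, c = 2.2 and the 21 × 21 kernel support are assumptions:

```python
# Energy response |E(x, y)| for one texture direction phi; omega, c and the
# kernel support are assumed values.
import numpy as np
from scipy.signal import fftconvolve

def gabor_energy(gray, phi_deg, omega=0.6, c=2.2, half=10):
    phi = np.deg2rad(phi_deg)
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    a = xs * np.cos(phi) + ys * np.sin(phi)        # rotated coordinates
    b = -xs * np.sin(phi) + ys * np.cos(phi)
    g = (omega / (np.sqrt(2.0 * np.pi) * c)
         * np.exp(-omega**2 * (4.0 * a**2 + b**2) / (8.0 * c**2))
         * (np.exp(1j * omega * a) - np.exp(-c**2 / 2.0)))
    resp = fftconvolve(gray.astype(np.float64), g, mode="same")
    return np.hypot(resp.real, resp.imag)          # magnitude of the response
```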
The fourth embodiment: as shown in fig. 5 and 6, this embodiment differs from the first to third embodiments in that: the specific process of obtaining the valid voting points by introducing the confidence function in step three is as follows:
According to the four different texture directions φ, four different energy response amplitudes E1(x, y), E2(x, y), E3(x, y) and E4(x, y) are obtained, where E1(x, y) represents the largest of the four energy response amplitudes and E2(x, y), E3(x, y) and E4(x, y) are the remaining amplitudes in descending order. By introducing a confidence function, the texture features of points whose energy response amplitude is less than or equal to the constant Eth are removed, and the texture features of points whose energy response amplitude is greater than Eth are retained, giving the valid voting points. The confidence function conf(x, y) is formulated as follows:
conf(x, y) = 1, if E1(x, y) > Eth; conf(x, y) = 0, if E1(x, y) ≤ Eth
and the points with conf(x, y) = 1 are taken as the valid voting points.
Because the texture features of the road are not influenced by illumination and color changes, they are retained for calculating the valid voting points. Because the texture features of the road are stronger than those of pedestrians, trees and vehicles on the road surface, introducing the confidence function removes the texture features of pedestrians, trees and vehicles on the road surface and retains the texture features of the road, so that valid voting points are obtained for voting on the candidate vanishing points.
When viewed along the direction of the road, the two parallel road edges intersect at a point in the distance; this point is the road vanishing point.
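A sketch of the valid-voting-point selection under this thresholding reading of the confidence function; the threshold value e_th is an assumption:

```python
# Valid voting points from the 4 x H x W stack of energy magnitudes;
# the threshold e_th is an assumed value.
import numpy as np

def valid_voting_points(energy, e_th=1.0):
    e1 = energy.max(axis=0)            # E1: largest of the four amplitudes
    main_dir = energy.argmax(axis=0)   # index of the dominant direction phi
    conf = e1 > e_th                   # conf(x, y): keep only strong texture
    ys, xs = np.nonzero(conf)
    return list(zip(ys, xs)), main_dir
```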
The fifth embodiment: as shown in fig. 7 and 8, this embodiment differs from the first to fourth embodiments in that: the specific process in step four of voting for the candidate vanishing points above the valid voting points using the valid voting points obtained in step three, and taking the point with the largest number of votes as the vanishing point, is as follows:
Votes(P, V) = 1/(1 + (γ·d(P, V))²), if γ < 5°; Votes(P, V) = 0, otherwise
wherein Votes(P, V) represents the voting function; P and V respectively represent a valid voting point and a candidate vanishing point; d(P, V) represents the distance between the valid voting point and the candidate vanishing point; γ is the included angle between the vector PV (from P to the candidate vanishing point V) and the vector PO (from P to the main direction point O of the valid voting point P), and must be less than 5 degrees for the vote to count.
The candidate vanishing point with the most votes is obtained after the voting, and this candidate vanishing point (x0, y0) is taken as the vanishing point.
The main direction of each pixel in the image can be calculated from color features, texture features and the like, and the position of the road vanishing point is then obtained from the positions pointed to by the main directions. The distance and the included angle between a valid voting point and a candidate vanishing point are introduced into the vote count in order to reduce the erroneous voting information produced by interference points; the candidate vanishing point with the largest number of votes after voting is then used as the vanishing point to divide the road from the road overhead part.
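A brute-force sketch of this voting scheme; the vote weight 1/(1 + (γ·d)²) and the normalization of d by the image height are reconstructions and should be treated as assumptions:

```python
# Vote for candidate vanishing points above each valid voting point P;
# the weight 1/(1 + (gamma*d)^2) and the normalization by H are assumptions.
import numpy as np

def vote_vanishing_point(points, directions, shape, gamma_max_deg=5.0):
    """points: (y, x) valid voting points; directions: unit vector from each
    point P toward its main direction point O."""
    H, W = shape
    votes = np.zeros((H, W))
    for (py, px), (dy, dx) in zip(points, directions):
        for vy in range(py):                       # candidates above P only
            for vx in range(W):
                v = np.array([vy - py, vx - px], dtype=float)
                d = np.linalg.norm(v)
                if d == 0.0:
                    continue
                cosg = np.clip(np.dot(v / d, (dy, dx)), -1.0, 1.0)
                gamma = np.degrees(np.arccos(cosg))
                if gamma < gamma_max_deg:          # angle to main direction
                    votes[vy, vx] += 1.0 / (1.0 + (gamma * d / H) ** 2)
    return np.unravel_index(np.argmax(votes), votes.shape)   # (y0, x0)
```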
Many vanishing point algorithms exist; the invention introduces only one vanishing point calculation algorithm, and other vanishing point calculation algorithms can be used within the algorithm of the invention to divide the road from its overhead part and remove the redundant information, finally improving the accuracy of travelable road area recognition and reducing the running time of the algorithm.
The sixth embodiment: as shown in fig. 9, this embodiment differs from the first to fifth embodiments in that: the specific process of step five is as follows:
Using the vanishing point (x0, y0) obtained in step four, the road color image is divided into the road part and the road overhead part: the road part is located at the bottom of the whole road color image, and all pixels whose ordinate in the road color image is smaller than y0 are cut off to obtain the road color image with the road overhead part removed; the size of the road color image after removing the road overhead part is (1:W, y0:H), wherein 1:W and y0:H respectively represent the width and height of the road color image after removal of the road overhead part.
The size of the road color image input in step one is W × H × 3, where W represents the width of the road color image, H represents the height, and 3 represents the three channels. Let the coordinates of the vanishing point in the road color image be (x0, y0); the road color image obtained by removing the road overhead part after vanishing point segmentation is then (1:W, y0:H), where 1:W and y0:H respectively represent its width and height. The ordinate y of the road color image increases from top to bottom, and likewise the abscissa x increases from left to right. Since the road part is located below, the ordinate of the road part is larger than the ordinate of the vanishing point, so dividing the road from the road overhead part requires cutting off all pixels of the whole color image whose ordinate is smaller than y0. The width 1:W of the extracted road color image after segmentation is equal to the width of the whole original road color image. The road and the road overhead part are thus divided, and the redundant information is removed so as to obtain the travelable area.
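In array terms, step five is a single slice; a minimal sketch assuming a NumPy image of shape (H, W, 3) and a vanishing point (x0, y0):

```python
# Keep rows y0..H-1 at full width W: the road part below the vanishing point.
road = image[y0:, :]     # shape (H - y0, W, 3); the width W is unchanged
```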
The road image segmentation method based on vanishing points reduces redundant information and enlarges the region of interest, so that the subsequent training of a deep-learning-based road segmentation method converges quickly to an optimal value. This addresses the current problems of how to increase the region of interest in training and how to accelerate the convergence of training in deep-learning-based methods.
The seventh embodiment: this embodiment differs from the first to sixth embodiments in that: the specific process of putting the N road color images with the road overhead part removed into a deep learning algorithm for training is as follows:
The training network adopted by the deep learning algorithm is a MultiNet network, which comprises the first 13 layers of a VGG16 network, a three-layer deconvolution network of an FCN network, and 1 × 1 convolution layers.
The N road color images with the road overhead part removed are taken as the training samples, each of size (1:W, y0:H). They are input into the MultiNet network for training as follows:
The input N road color images with the road overhead part removed first pass through the first 13 layers of the VGG16 network within the MultiNet network, and the size of each image becomes 1/32 of the original; the images are then up-sampled through a 1 × 1 convolution layer and the three-layer deconvolution network of the FCN network, each image being expanded by a factor of 32 so that its size equals that of the original image; finally, an accurate travelable road area image is obtained through a final 1 × 1 convolution layer.
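A rough PyTorch sketch of a MultiNet-style network of this shape; the channel widths and the kernel sizes and strides of the deconvolution layers are assumptions, and only the 13 VGG16 convolution layers, the 1 × 1 convolutions and the overall 32× upsampling mirror the description above:

```python
# MultiNet-style encoder-decoder sketch; layer hyperparameters are assumed.
import torch.nn as nn
from torchvision.models import vgg16

class RoadSegNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        # 13 conv layers (+ pooling) of VGG16: output is 1/32 of input size.
        self.encoder = vgg16(weights=None).features
        self.score = nn.Conv2d(512, n_classes, kernel_size=1)    # 1x1 conv
        # Three transposed convolutions, 2 * 4 * 4 = 32x upsampling overall.
        self.up = nn.Sequential(
            nn.ConvTranspose2d(n_classes, n_classes, 4, stride=2, padding=1),
            nn.ConvTranspose2d(n_classes, n_classes, 8, stride=4, padding=2),
            nn.ConvTranspose2d(n_classes, n_classes, 8, stride=4, padding=2),
        )
        self.head = nn.Conv2d(n_classes, n_classes, kernel_size=1)  # final 1x1

    def forward(self, x):
        return self.head(self.up(self.score(self.encoder(x))))
```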
the specific training process is as follows:
Through iteration with the cross entropy cost function loss(p, q), the predicted image of the road color image with the road overhead part removed continuously approaches the ground truth image. The cross entropy cost function is as follows:
loss(p, q) = -(1/N) Σ_{n=1}^{N} Σ_{c=1}^{C} q_n(c) · log(p_n(c))
wherein p represents the predicted image of a road color image with the road overhead part removed, and q represents the ground truth image of a road color image with the road overhead part removed; C represents the number of pixels in each road color image with the road overhead part removed, c is the c-th pixel in each such image, and c ranges from 1 to C; p_n(c) is the predicted value for the c-th pixel of the n-th road color image with the road overhead part removed, produced by the deep-learning-based MultiNet network, and q_n(c) represents the ground truth value for the c-th pixel, with n ranging from 1 to N. An accurate travelable road area image is obtained after calibration.
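A direct NumPy rendering of this cost function, assuming per-pixel class probabilities p and one-hot ground truth q:

```python
# loss(p, q) = -(1/N) * sum_n sum_c q_n(c) * log p_n(c)
import numpy as np

def cross_entropy_loss(p, q, eps=1e-12):
    # p, q: arrays of shape (N, C): N images, C pixels per image;
    # eps guards against log(0).
    return -np.mean(np.sum(q * np.log(p + eps), axis=1))
```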
The method can train N pictures at a time, and the road recognition rule of the vanishing-point-based road image segmentation method is obtained through a continuous iteration process. Even if an image to be recognized that is put into the deep learning network of the invention has not undergone vanishing point segmentation, a road recognition result with high accuracy can be obtained, and an accurate travelable road area is further obtained.
Many types of networks can be selected for training the training samples with a deep learning algorithm; the MultiNet network is only one of them. Meanwhile, the N road color images with the road overhead part removed can also be processed directly with a binocular matching algorithm, which likewise yields a road recognition result with high accuracy.
In the actual training process, the number of training samples used was 280; the road color images with the road overhead part removed are about (1:W, y0:H) in size, with H slightly different depending on the position of the vanishing point, and the total number of pixels computed is 52,416,000 (1248 × 384). If the captured road color images were put directly into the MultiNet network for training, the number of pixels processed at one time would be 104,832,000 (1248 × 200); without using the vanishing point to segment the road from its overhead part, the computational cost would be prohibitive. As shown in fig. 10, which plots the travelable region recognition accuracy of the method of the invention against the number of training iterations, at 16k (i.e. 16 × 10^3) training iterations the travelable region recognition accuracy of the method of the invention reaches 96.2%; compared with the traditional deep-learning-based or binocular-matching-based methods, the method of the invention improves the travelable region recognition accuracy by 1%-2%.
Referring to fig. 11, which plots the training time of the method of the invention against the number of training iterations, at 16k training iterations the training time of the method of the invention is only 5.66 × 10^3 ms. The road image segmentation method based on vanishing points only requires a vanishing point estimation method, which effectively improves the running speed of the algorithm: a picture of size 1248 × 384 can be processed within 0.08 s, and the running time of traditional deep-learning-based or binocular-matching-based algorithms can be reduced by 40%-60%.

Claims (7)

1. A road image segmentation method based on vanishing points, characterized by comprising the following steps:
step one, processing the input road color image with the Canny edge detection algorithm to obtain the horizontal and vertical edge information of a grayscale image I1(x, y); wherein x and y are respectively the abscissa and ordinate of each point in the grayscale image;
step two, converting the input road color image into a grayscale image I(x, y), and extracting the texture features of the whole grayscale image I(x, y) with a Gabor filter;
step three, performing an intersection operation on the horizontal and vertical edge information of the grayscale image I1(x, y) from step one and the texture features of the grayscale image I(x, y) from step two to obtain a texture feature image with horizontal and vertical directions, and obtaining the valid voting points by introducing a confidence function;
step four, using the valid voting points obtained in step three to vote for the candidate vanishing points above them, and taking the point with the largest number of votes as the vanishing point;
step five, segmenting the road color image into the road part and the road overhead part using the vanishing point obtained in step four: the ordinate y of the road color image increases from top to bottom and the abscissa x increases from left to right; cutting off all pixels of the whole road color image whose ordinate is smaller than the ordinate of the vanishing point to obtain a road color image with the road overhead part removed; the width of the road color image after removal of the road overhead part being equal to the width of the whole road color image;
step six, repeating steps one to five to obtain N road color images with the road overhead part removed; putting the N road color images with the road overhead part removed into a deep learning algorithm for training, or processing them directly with a binocular matching algorithm; a travelable road area image being thus obtained rapidly and with high accuracy.
2. The vanishing point-based road image segmentation method according to claim 1, wherein: the size of the road color image input in step one is W × H × 3; where W represents the width of the road color image, H represents the height of the road color image, and 3 represents the three channels of the road color image.
3. The vanishing point-based road image segmentation method according to claim 2, wherein: the specific process of extracting the texture features of the whole grayscale image I(x, y) with a Gabor filter in step two is as follows:
the Gabor filter filters the grayscale image I(x, y) through a Gaussian window to extract the texture features of the whole grayscale image I(x, y), the formula of the Gabor filter kernel g_{ω,φ}(x, y) being as follows:
g_{ω,φ}(x, y) = ω/(√(2π)·c) · exp(-ω²(4a² + b²)/(8c²)) · (e^{iωa} - e^{-c²/2})
the grayscale image I(x, y) is convolved with the Gabor filter kernel g_{ω,φ}(x, y) to obtain the energy response E_{ω,φ}(x, y) of each point in the grayscale image I(x, y):
E_{ω,φ}(x, y) = I(x, y) * g_{ω,φ}(x, y)
wherein a = x cos φ + y sin φ and b = -x sin φ + y cos φ; ω is the radial frequency; c is the frequency multiplication constant; e is the base of the natural logarithm function; i represents the imaginary unit; φ is the texture direction, where φ belongs to {0°, 45°, 90°, 135°}; the magnitude of the energy response of each point in the grayscale image I(x, y) is
|E_{ω,φ}(x, y)| = √(Re(E_{ω,φ}(x, y))² + Im(E_{ω,φ}(x, y))²)
where Re(E_{ω,φ}(x, y)) and Im(E_{ω,φ}(x, y)) are respectively the real and imaginary parts of the energy response of each point in the grayscale image I(x, y).
4. The vanishing point-based road image segmentation method according to claim 3, wherein: the specific process of obtaining the valid voting points by introducing the confidence function in step three is as follows:
according to the four different texture directions φ, four different energy response amplitudes E1(x, y), E2(x, y), E3(x, y) and E4(x, y) are obtained, where E1(x, y) represents the largest of the four energy response amplitudes and E2(x, y), E3(x, y) and E4(x, y) are the remaining amplitudes in descending order; by introducing a confidence function, the texture features of points whose energy response amplitude is less than or equal to the constant Eth are removed, and the texture features of points whose energy response amplitude is greater than Eth are retained, giving the valid voting points; the confidence function conf(x, y) is formulated as follows:
conf(x, y) = 1, if E1(x, y) > Eth; conf(x, y) = 0, if E1(x, y) ≤ Eth.
5. The vanishing point-based road image segmentation method according to claim 4, wherein: the specific process in step four of voting for the candidate vanishing points above the valid voting points using the valid voting points obtained in step three, and taking the point with the largest number of votes as the vanishing point, is as follows:
Votes(P, V) = 1/(1 + (γ·d(P, V))²), if γ < 5°; Votes(P, V) = 0, otherwise
wherein Votes(P, V) represents the voting function; P and V respectively represent a valid voting point and a candidate vanishing point; d(P, V) represents the distance between the valid voting point and the candidate vanishing point; γ is the included angle between the vector PV (from P to the candidate vanishing point V) and the vector PO (from P to the main direction point O of the valid voting point P), and must be less than 5 degrees for the vote to count;
the candidate vanishing point with the most votes is obtained after the voting, and this candidate vanishing point (x0, y0) is taken as the vanishing point.
6. The vanishing point-based road image segmentation method according to claim 5, wherein: the specific process of step five is as follows:
using the vanishing point (x0, y0) obtained in step four, the road color image is divided into the road part and the road overhead part: the road part is located at the bottom of the whole road color image, and all pixels whose ordinate in the road color image is smaller than y0 are cut off to obtain the road color image with the road overhead part removed; the size of the road color image after removing the road overhead part is (1:W, y0:H), wherein 1:W and y0:H respectively represent the width and height of the road color image after removal of the road overhead part.
7. The vanishing point-based road image segmentation method according to claim 6, wherein: the specific process of putting the N road color images with the road overhead part removed into a deep learning algorithm for training is as follows:
the training network adopted by the deep learning algorithm is a MultiNet network, which comprises the first 13 layers of a VGG16 network, a three-layer deconvolution network of an FCN network, and 1 × 1 convolution layers;
the N road color images with the road overhead part removed are taken as the training samples, each of size (1:W, y0:H), and are input into the MultiNet network for training as follows:
the input N road color images with the road overhead part removed first pass through the first 13 layers of the VGG16 network within the MultiNet network, and the size of each image becomes 1/32 of the original; the images are then up-sampled through a 1 × 1 convolution layer and the three-layer deconvolution network of the FCN network, each image being expanded by a factor of 32 so that its size equals that of the original image; finally, an accurate travelable road area image is obtained through a final 1 × 1 convolution layer;
the specific training process is as follows:
through iteration with the cross entropy cost function loss(p, q), the predicted image of the road color image with the road overhead part removed continuously approaches the ground truth image, the cross entropy cost function being as follows:
loss(p, q) = -(1/N) Σ_{n=1}^{N} Σ_{c=1}^{C} q_n(c) · log(p_n(c))
wherein p represents the predicted image of a road color image with the road overhead part removed, and q represents the ground truth image of a road color image with the road overhead part removed; C represents the number of pixels in each road color image with the road overhead part removed, c is the c-th pixel in each such image, and c ranges from 1 to C; p_n(c) is the predicted value for the c-th pixel of the n-th road color image with the road overhead part removed, produced by the deep-learning-based MultiNet network, and q_n(c) represents the ground truth value for the c-th pixel, with n ranging from 1 to N; an accurate travelable road area image is obtained after calibration.
CN201810015224.8A 2018-01-08 2018-01-08 Road image segmentation method based on vanishing points Active CN108256455B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810015224.8A CN108256455B (en) 2018-01-08 2018-01-08 Road image segmentation method based on vanishing points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810015224.8A CN108256455B (en) 2018-01-08 2018-01-08 Road image segmentation method based on vanishing points

Publications (2)

Publication Number Publication Date
CN108256455A CN108256455A (en) 2018-07-06
CN108256455B true CN108256455B (en) 2021-03-23

Family

ID=62725953

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810015224.8A Active CN108256455B (en) 2018-01-08 2018-01-08 Road image segmentation method based on vanishing points

Country Status (1)

Country Link
CN (1) CN108256455B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109003286A (en) * 2018-07-26 2018-12-14 清华大学苏州汽车研究院(吴江) Lane segmentation method based on deep learning and laser radar

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104050681A (en) * 2014-07-04 2014-09-17 哈尔滨工业大学 Road vanishing point detection method based on video images
CN105335704A (en) * 2015-10-16 2016-02-17 河南工业大学 Lane line identification method and device based on bilinear interpolation
JP2017010553A (en) * 2015-06-24 2017-01-12 株式会社リコー Detection method and detection device for road boundary body

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201043507A (en) * 2009-06-05 2010-12-16 Automotive Res & Testing Ct Method for detection of tilting of automobile and headlamp automatic horizontal system using such a method


Also Published As

Publication number Publication date
CN108256455A (en) 2018-07-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant