CN104091344A - Road dividing method - Google Patents

Road dividing method

Info

Publication number
CN104091344A
Authority
CN
China
Prior art keywords
estimator
road
link weights
weights
link
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410350481.9A
Other languages
Chinese (zh)
Other versions
CN104091344B (en)
Inventor
汤淑明
袁俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN201410350481.9A priority Critical patent/CN104091344B/en
Publication of CN104091344A publication Critical patent/CN104091344A/en
Application granted granted Critical
Publication of CN104091344B publication Critical patent/CN104091344B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a road dividing method. The method comprises the following steps: S1, an n-link weight estimator is trained offline to obtain the n-link weight estimator, and a t-link weight estimator is trained offline to obtain the t-link weight estimator; S2, the n-link weights and the t-link weights are estimated with the n-link weight estimator and the t-link weight estimator respectively, and the road image is segmented online under the graph-cut framework to obtain the road area in the road image. In the road dividing method, the mapping between the features of neighboring pixels and the n-link and t-link weights is fitted with a statistical learning method; the trained n-link weight estimator and t-link weight estimator are used to estimate the n-link and t-link weights, the minimum cut of the s-t graph is obtained with the max-flow/min-cut algorithm, and the road area is thereby obtained. The road dividing method is robust to interference factors in the road environment.

Description

Road dividing method
Technical field
The present invention relates to the technical field of vehicle driver assistance, and in particular to a road dividing method for a road detection system.
Background technology
The automotive industry has a history of more than 100 years. The automobile has greatly changed people's production and lifestyles and promoted social and economic development. While offering convenience, automobiles have also brought many problems, such as frequent traffic accidents, excessive energy consumption, and serious environmental pollution. In congested urban traffic, a driver needs to complete 20-30 coordinated hand-foot operations per minute on average to control the vehicle, so the driving task under congestion is quite complex. With social development, people place ever higher demands on the safety, environmental friendliness, economy, and comfort of automobiles. Technologies such as sensors, computers, automatic control, artificial intelligence, and machine vision are constantly developing and are increasingly applied in transportation engineering. Against this background, intelligent vehicles have been researched and developed so that vehicles possess autonomous driving and driver-assistance functions, providing a safe, energy-saving, convenient, and comfortable driving experience.
The road detection system is an important component of the driver assistance system of an intelligent vehicle. Through the road detection system, an intelligent vehicle obtains the traversable region and the position and attitude of the vehicle body relative to the road boundary. Road detection thus enables many driver-assistance functions, such as assisted navigation, lane departure warning, lane keeping, adaptive cruise control, monitoring of the driver's state, and prediction of the driver's behavioral intention.
Most current road detection methods are based on computer vision. In these methods, the process of separating road-area pixels from other pixels is called road segmentation. Road segmentation is a challenging problem: on the one hand, due to factors such as road surface material, weather conditions, and illumination changes, the road surface has varied appearances; on the other hand, as the vehicle moves, the road surface and background change dynamically, and interfering objects such as vehicles and pedestrians are usually present on the road. These factors easily affect the accuracy of road segmentation and make it very difficult.
Summary of the invention
(1) Technical problem to be solved
The object of the present invention is to solve the technical problem that current road dividing methods are easily disturbed by environmental factors. Therefore, the present invention proposes a robust road dividing method.
(2) technical scheme
To solve the above technical problem, the present invention proposes a road dividing method comprising the following steps:
Step S1: train an n-link weight estimator offline to obtain the n-link weight estimator, and train a t-link weight estimator offline to obtain the t-link weight estimator;
Step S2: use the n-link weight estimator and the t-link weight estimator to estimate the n-link weights and the t-link weights respectively, segment the road image online under the graph-cut framework, and obtain the road area in the road image.
(3) Beneficial effects
The present invention performs road segmentation under the graph-cut framework, which tightly combines the global and local information of the road image and is therefore robust to local interference. The present invention fits the mapping between the features of neighboring pixels and the n-link weights with a statistical learning method, which overcomes the weakness of traditional graph-cut methods, in which n-link weights computed from the contrast of neighboring pixels are easily disturbed by environmental factors, and further strengthens the robustness of road segmentation. Therefore, the present invention is robust to the interference factors in the road environment.
Brief description of the drawings
Fig. 1 is a flowchart of the road segmentation of the present invention;
Fig. 2 is a schematic diagram of the connections between adjacent nodes in the s-t graph of the present invention;
Fig. 3 is a schematic diagram of the connections in the s-t graph of the present invention.
Embodiment
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in more detail below in conjunction with specific embodiments and with reference to the accompanying drawings.
The present invention proposes a graph-cut method with learnable n-link weights for road segmentation. The road image is represented by an s-t graph (s-t Graph), and road image segmentation is realized under the graph-cut (Graph Cut) framework. As shown in Fig. 1, the method comprises:
Step S1: train an n-link (Neighboring Link) weight estimator offline to obtain the n-link weight estimator, and train a t-link (Terminal Link) weight estimator offline to obtain the t-link weight estimator. The n-link weight estimator fits the mapping from the features of neighboring pixels to the n-link weights with a statistical learning method. The t-link weight estimator fits the mapping from the neighborhood features of a pixel to the t-link weights with a statistical learning method.
Step S2: use the n-link weight estimator and the t-link weight estimator to estimate the n-link weights and the t-link weights respectively, segment the road image online under the graph-cut framework, and obtain the road area in the road image.
Graph cut is a combinatorial optimization method based on graph theory and is widely used in the field of computer vision. When graph cut is applied to image segmentation, the image to be segmented is represented by an s-t graph, and the max-flow/min-cut (Max-flow/Min-cut) algorithm is used to find the minimum cut of the s-t graph. Finding the minimum cut is equivalent to minimizing an energy function, and obtaining the minimum cut yields the optimal segmentation of the image under that energy function.
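For illustration only (not part of the claimed method), a minimal sketch of this graph-cut segmentation step in Python with the third-party PyMaxflow library could look as follows; the terminal weight maps and the n-link weight map are assumed to come from the estimators described below, and the polarity of the returned segments may need to be flipped depending on the library's convention.

```python
# Minimal graph-cut sketch (assumes the PyMaxflow package, imported as "maxflow").
import numpy as np
import maxflow

def graph_cut_segment(t_source, t_sink, n_weights):
    """t_source / t_sink: (H, W) t-link weights to the source (road) and sink (non-road);
    n_weights: (H, W) n-link weights between neighboring pixels."""
    g = maxflow.Graph[float]()
    node_ids = g.add_grid_nodes(t_source.shape)      # one graph node per pixel
    g.add_grid_edges(node_ids, weights=n_weights)    # n-links (4-connected by default)
    g.add_grid_tedges(node_ids, t_source, t_sink)    # t-links to source and sink
    g.maxflow()                                      # max-flow / min-cut
    # get_grid_segments marks one side of the cut True; invert if the road mask comes out reversed
    return ~g.get_grid_segments(node_ids)
```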
The training process of the n-link weight estimator comprises the following steps:
Step S1A: extract training samples from the labeled road image sample set;
Because a road image contains a large number of pixels, it is unnecessary to use every pixel of the labeled images as a training sample. On the one hand, extracting too many samples sharply increases the computational cost; on the other hand, too many samples make it difficult for the n-link weight estimator to focus on certain interfering factors, such as shadows, highlights, and lane markings. Therefore, a sampling policy is adopted to extract samples from the labeled images. Positive sample pixels are taken from adjacent pixels inside the road-surface region, and negative samples are taken from the road boundary. Positive samples are not extracted from the non-road area, because the n-link weights of the non-road area are of no concern, and reducing the diversity of the data benefits the convergence of the n-link weight estimator. In the present invention, positive samples are randomly drawn, according to certain probabilities, from high-contrast areas, high-brightness areas, and ordinary areas of the road region; these probabilities determine the proportion of each kind of area among the positive samples, and thus determine how well the n-link weight estimator adapts to different road conditions. All pixels on the road boundary are used as negative samples, because their number is far smaller than the number of road-area pixels. The estimator output weight corresponding to a positive sample is 1, and that corresponding to a negative sample is 0. Each extracted training sample is a 10 × 1 feature vector whose features are defined as shown in Table 1.
Table 1. Input features of the multilayer perceptron (MLP)
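A minimal sketch of such a sampling policy is given below; the helper maps, thresholds, and drawing probabilities are illustrative assumptions rather than values prescribed by the patent, and the feature vectors themselves would still be built from the Table 1 definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_nlink_pixels(road_mask, boundary_mask, contrast_map, brightness_map,
                        n_pos=5000, p_region=(0.4, 0.3, 0.3)):
    """Draw positive pixels from high-contrast, high-brightness, and ordinary road areas
    with the given probabilities; use every road-boundary pixel as a negative sample."""
    ys, xs = np.nonzero(road_mask)
    contrast, brightness = contrast_map[ys, xs], brightness_map[ys, xs]
    pools = [
        (ys[contrast > np.percentile(contrast, 90)], xs[contrast > np.percentile(contrast, 90)]),
        (ys[brightness > np.percentile(brightness, 90)], xs[brightness > np.percentile(brightness, 90)]),
        (ys, xs),                                    # ordinary road area
    ]
    positives = []
    for (py, px), p in zip(pools, p_region):
        if len(py) == 0:
            continue
        idx = rng.integers(0, len(py), size=int(n_pos * p))
        positives.append(np.stack([py[idx], px[idx]], axis=1))   # target weight 1
    negatives = np.stack(np.nonzero(boundary_mask), axis=1)      # target weight 0
    return np.concatenate(positives), negatives
```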
Step S1B: train the n-link weight estimator with the extracted training samples until the error of the n-link weight estimator is within an acceptable range, and obtain the parameters of the n-link weight estimator.
A multilayer perceptron (Multilayer Perceptron, MLP) is adopted to learn the mapping between the features of neighboring pixels and the n-link weights, and is used to estimate the n-link weights. The present invention represents the road image with an s-t graph in which adjacent nodes are connected by eight-connectivity; as shown in (a) of Fig. 2, adjacent nodes are connected by horizontal, vertical, and diagonal edges. The structure in (a) of Fig. 2 can be decomposed into elementary units consisting of one node and four edges, as shown in (b) of Fig. 2. Nodes at the image border do not have all four of these edges, but the missing edges can be regarded as edges with weight 0. Therefore, four independent MLPs can be used to fit the mappings to the weights of the n-links in the four directions shown in (b) of Fig. 2. Each MLP is a four-layer feedforward neural network comprising an input layer, two hidden layers, and an output layer, with 10, 20, 20, and 1 neurons respectively. The activation function of every neuron is the sigmoid function. The input of each MLP is a 10 × 1 feature vector defined as in Table 1. The MLP output lies between 0 and 1; the closer the output is to 1, the larger the corresponding n-link weight.
The multilayer perceptrons are trained with the RPROP algorithm on the training samples extracted in step S1A. To eliminate the influence of the imbalance between the numbers of positive and negative samples on the training result and to balance the hit rate against the false alarm rate, weights are added for the negative samples during training; the weight coefficients are as follows:
w(y) = 1,          if y = 1
w(y) = N_p / N_n,  if y = 0        (1)
where w(y) is the weight coefficient assigned to a sample with class label y, N_p and N_n denote the numbers of positive and negative samples respectively, and y = 1 indicates a positive sample; otherwise the sample is negative.
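As an illustrative sketch (the patent does not prescribe a particular library), one of the four direction-specific MLPs and its weighted RPROP training could be written in PyTorch as follows; the 10-20-20-1 sigmoid architecture and the weighting of formula (1) follow the description above, while the loss choice and epoch count are assumptions.

```python
import torch
import torch.nn as nn

# One of the four direction-specific MLPs: 10 -> 20 -> 20 -> 1, sigmoid activations.
def make_nlink_mlp():
    return nn.Sequential(
        nn.Linear(10, 20), nn.Sigmoid(),
        nn.Linear(20, 20), nn.Sigmoid(),
        nn.Linear(20, 1),  nn.Sigmoid(),
    )

def train_nlink_mlp(mlp, x, y, epochs=200):
    """x: (N, 10) float feature vectors; y: (N,) targets in {0, 1}."""
    n_pos = float((y == 1).sum())
    n_neg = float((y == 0).sum())
    # Formula (1): weight 1 for positive samples, N_p / N_n for negative samples.
    w = torch.where(y == 1,
                    torch.ones_like(y, dtype=torch.float32),
                    torch.full_like(y, n_pos / n_neg, dtype=torch.float32))
    opt = torch.optim.Rprop(mlp.parameters())        # resilient backpropagation (RPROP)
    for _ in range(epochs):
        opt.zero_grad()
        out = mlp(x).squeeze(1)
        loss = (w * (out - y.float()) ** 2).mean()   # weighted squared error
        loss.backward()
        opt.step()
    return mlp
```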
The training process of the t-link weight estimator comprises the following steps:
Step S1a: extract training samples from the labeled road image sample set;
Positive and negative samples are extracted in a certain ratio from the road area and the non-road area respectively. Each sample is a 13 × 1 feature vector comprising the R, G, B values of the pixel, the means and variances of the R, G, B values in a 9 × 9 window centered at the pixel, the gradient magnitude and gradient direction at the pixel, and the pixel's coordinates in the image coordinate system. The output value corresponding to a positive sample is 1, and the output value corresponding to a negative sample is -1.
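A sketch of extracting this 13-dimensional feature vector for every pixel with NumPy and OpenCV might look as follows; the function name and the use of cv2.blur and cv2.Sobel are illustrative choices, not mandated by the patent.

```python
import cv2
import numpy as np

def tlink_features(bgr):
    """Return an (H, W, 13) array: B, G, R values, their 9x9-window means and variances,
    gradient magnitude and direction, and the pixel coordinates."""
    img = bgr.astype(np.float32)
    mean = cv2.blur(img, (9, 9))                       # per-channel mean in a 9x9 window
    var = cv2.blur(img * img, (9, 9)) - mean * mean    # per-channel variance in a 9x9 window
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = np.sqrt(gx * gx + gy * gy)                   # gradient magnitude
    ang = np.arctan2(gy, gx)                           # gradient direction
    yy, xx = np.mgrid[0:gray.shape[0], 0:gray.shape[1]].astype(np.float32)
    return np.dstack([img, mean, var, mag, ang, xx, yy])   # 3 + 3 + 3 + 1 + 1 + 1 + 1 = 13
```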
Step S1b: train the t-link weight estimator with the extracted training samples until the error of the t-link weight estimator is within an acceptable range, and obtain the parameters of the t-link weight estimator.
A Gentle AdaBoost (GAB) classifier is adopted as the t-link weight estimator. GAB is a variant of the AdaBoost algorithm; it has good feature-selection ability and numerical stability, is more robust to noise, and is a good supervised regression method. The GAB classifier consists of 500 decision stumps, each of which can be regarded as a decision tree with only one node. The output of GAB is not a class label but the weighted sum of the votes of all weak classifiers. For a positive sample the GAB output is expected to be positive; for a negative sample it is expected to be negative. The GAB classifier is trained with the training samples extracted in step S1a.
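A compact Gentle AdaBoost sketch built from regression stumps (scikit-learn's DecisionTreeRegressor with max_depth=1) is shown below under the assumption of ±1 targets; it illustrates the boosting scheme rather than reproducing the exact implementation used in the patent.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

class GentleAdaBoost:
    """Gentle AdaBoost with decision stumps; predict() returns the real-valued
    sum of the weak learners' votes F(x), not a class label."""
    def __init__(self, n_rounds=500):
        self.n_rounds = n_rounds
        self.stumps = []

    def fit(self, X, y):                              # y in {-1, +1}
        w = np.full(len(y), 1.0 / len(y))
        for _ in range(self.n_rounds):
            stump = DecisionTreeRegressor(max_depth=1)
            stump.fit(X, y, sample_weight=w)          # weighted least-squares stump
            f = stump.predict(X)
            w *= np.exp(-y * f)                       # emphasize poorly fitted samples
            w /= w.sum()
            self.stumps.append(stump)
        return self

    def predict(self, X):
        return sum(s.predict(X) for s in self.stumps)
```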
The online segmentation process of the road image comprises the following steps:
Step S21: acquire a road image I with the vehicle-mounted image sensor;
Step S22: construct the s-t graph. Each pixel in road image I is set as a neighborhood node (Neighborhood Node); neighborhood nodes are connected to their adjacent nodes by four- or eight-connectivity; the source node (Source Node) represents the road area and is set as the foreground; the sink node (Sink Node) represents the non-road area and is set as the background; and each neighborhood node is connected to both the source node and the sink node. The s-t graph is a graph model comprising two kinds of nodes and two kinds of edges. The two kinds of nodes are neighborhood nodes and terminal nodes, where the terminal nodes comprise the source node and the sink node.
The connections between nodes in the s-t graph are shown in Fig. 3, where s denotes the source node, t denotes the sink node, and the solid black dots are neighborhood nodes. The neighborhood nodes are connected by eight-connectivity. The weight of each edge represents the degree of correlation between its two end vertices: the weight of the edge connecting a neighborhood node to the source node represents the probability that the node is a road-area pixel, and the weight of the edge connecting a neighborhood node to the sink node represents the probability that the node is a non-road-area pixel.
Step S23: use the trained n-link weight estimator to estimate the weights of the edges between each neighborhood node in road image I and its adjacent nodes;
For each neighborhood node, the weights of the edges connecting it to its left, upper-left, upper, and upper-right neighbors need to be estimated, as shown in (b) of Fig. 2. Nodes at the image border do not have all four of these edges, but the missing edges can be regarded as edges with weight 0. The features of each neighborhood node (defined as in Table 1) are extracted and input into the corresponding MLP. The output of the MLP is the weight of the corresponding edge, that is:
B_(p,q) = MLP_d(v_(p,q)),  d = dir(p, q)        (2)
where B_(p,q) is the edge weight output by the multilayer perceptron MLP_d(v_(p,q)), MLP_d denotes the multilayer perceptron used to estimate the n-link weights in the d-th direction, v_(p,q) is the input feature vector, and dir(p, q) denotes the direction of the edge connecting the neighboring pixels p and q.
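Applying formula (2) over a whole image then amounts to running the direction-specific MLP on every neighboring pixel pair, for example as in this illustrative continuation of the PyTorch sketch above (the feature batching is an assumption, not specified by the patent):

```python
# v[d]: (N_d, 10) feature vectors of all pixel pairs whose connecting edge lies in
# direction d (0: left, 1: upper-left, 2: up, 3: upper-right); mlps[d] is the trained
# MLP for that direction, e.g. produced by train_nlink_mlp above.
def nlink_weights(mlps, v):
    return {d: mlps[d](v[d]).squeeze(1) for d in range(4)}   # B_(p,q) = MLP_d(v_(p,q))
```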
Step S24: use the trained t-link weight estimator to estimate the weight of the edge between each neighborhood node and the source node, and the weight of the edge between each neighborhood node and the sink node, in road image I;
The features of each neighborhood node are extracted and input into the trained GAB classifier. The GAB classifier outputs the weighted sum of the votes of all weak classifiers, which further needs to be scaled into the interval [0, 1]; the edge weights are computed as follows:
R_p(x_p, l_p) = 1 / (1 + exp(-w · x_p)),  if l_p = 0
R_p(x_p, l_p) = 1 / (1 + exp(w · x_p)),   if l_p = 1        (3)
where l_p is the class label of pixel p, w is a positive constant, and x_p is the output of GAB at pixel p. When l_p = 1, R_p(x_p, l_p) is the weight of the edge from pixel p to the sink node; when l_p = 0, R_p(x_p, l_p) is the weight of the edge from pixel p to the source node.
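A direct transcription of formula (3), with w treated as an assumed positive scaling constant, might read:

```python
import numpy as np

def tlink_weights(gab_output, w=1.0):
    """Map the raw GAB vote sum x_p to t-link weights in [0, 1] via formula (3).
    Returns (R_source, R_sink): edge weights to the source node (l_p = 0) and sink node (l_p = 1)."""
    r_source = 1.0 / (1.0 + np.exp(-w * gab_output))
    r_sink = 1.0 / (1.0 + np.exp(w * gab_output))
    return r_source, r_sink
```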
Step S25: use the max-flow/min-cut algorithm to find the minimum cut of the constructed s-t graph and obtain the class labels L = {l_i | l_i ∈ {0, 1}, i = 1, 2, ..., N} of the pixels in road image I, where N is the number of pixels in road image I and l_i is the class label of pixel i; l_i = 1 indicates that pixel i is a road-area pixel, otherwise it is a non-road-area pixel.
The specific embodiments described above further explain the objects, technical solutions, and beneficial effects of the present invention. It should be understood that the above are only specific embodiments of the present invention and do not limit the present invention; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (7)

1. A road dividing method, comprising the following steps:
Step S1: training an n-link weight estimator offline to obtain the n-link weight estimator, and training a t-link weight estimator offline to obtain the t-link weight estimator;
Step S2: using the n-link weight estimator and the t-link weight estimator to estimate n-link weights and t-link weights respectively, segmenting a road image online under the graph-cut framework, and obtaining the road area in the road image.
2. The road dividing method as claimed in claim 1, wherein the road image is represented by an s-t graph.
3. The road dividing method as claimed in claim 1, wherein the n-link weight estimator fits the mapping from the features of neighboring pixels to the n-link weights with a statistical learning method.
4. The road dividing method as claimed in claim 1, wherein the process of training the n-link weight estimator comprises the following steps:
Step S1A: extracting training samples from a labeled road image sample set;
Step S1B: training the n-link weight estimator with the extracted training samples until the error of the n-link weight estimator is within an acceptable range, and obtaining the parameters of the n-link weight estimator.
5. The road dividing method as claimed in claim 1, wherein the t-link weight estimator fits the mapping from the neighborhood features of a pixel to the t-link weights with a statistical learning method.
6. The road dividing method as claimed in claim 1, wherein the process of training the t-link weight estimator comprises the following steps:
Step S1a: extracting training samples from a labeled road image sample set;
Step S1b: training the t-link weight estimator with the extracted training samples until the error of the t-link weight estimator is within an acceptable range, and obtaining the parameters of the t-link weight estimator.
7. The road dividing method as claimed in claim 1, wherein the process of segmenting the road image online comprises the following steps:
Step S21: acquiring a road image with a vehicle-mounted image sensor;
Step S22: constructing an s-t graph, in which each pixel in the road image is set as a neighborhood node, neighborhood nodes are connected to their adjacent nodes by four- or eight-connectivity, a source node represents the road area and is set as the foreground, a sink node represents the non-road area and is set as the background, and each neighborhood node is connected to both the source node and the sink node;
Step S23: using the trained n-link weight estimator to estimate the weights of the edges between each neighborhood node in the road image and its adjacent nodes;
Step S24: using the trained t-link weight estimator to estimate the weight of the edge between each neighborhood node and the source node, and the weight of the edge between each neighborhood node and the sink node, in the road image;
Step S25: using the max-flow/min-cut algorithm to find the minimum cut of the constructed s-t graph and obtaining the class labels L = {l_i | l_i ∈ {0, 1}, i = 1, 2, ..., N} of the pixels in the road image, where N is the number of pixels in the road image and l_i is the class label of pixel i; l_i = 1 indicates that pixel i is a road-area pixel, otherwise it is a non-road-area pixel.
CN201410350481.9A 2014-07-22 2014-07-22 Road dividing method Active CN104091344B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410350481.9A CN104091344B (en) 2014-07-22 2014-07-22 Road dividing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410350481.9A CN104091344B (en) 2014-07-22 2014-07-22 Road dividing method

Publications (2)

Publication Number Publication Date
CN104091344A true CN104091344A (en) 2014-10-08
CN104091344B CN104091344B (en) 2017-04-19

Family

ID=51639059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410350481.9A Active CN104091344B (en) 2014-07-22 2014-07-22 Road dividing method

Country Status (1)

Country Link
CN (1) CN104091344B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050134587A1 (en) * 1999-09-23 2005-06-23 New York University Method and apparatus for segmenting an image in order to locate a part thereof
US20030069494A1 (en) * 2001-10-04 2003-04-10 Marie-Pierre Jolly System and method for segmenting the left ventricle in a cardiac MR image
US20050238215A1 (en) * 2001-10-04 2005-10-27 Marie-Pierre Jolly System and method for segmenting the left ventricle in a cardiac image
CN101558404A (en) * 2005-06-17 2009-10-14 微软公司 Image segmentation
CN102609686A (en) * 2012-01-19 2012-07-25 宁波大学 Pedestrian detection method
CN103473767A (en) * 2013-09-05 2013-12-25 中国科学院深圳先进技术研究院 Segmentation method and system for abdomen soft tissue nuclear magnetism image

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
FALIU YI et al.: "Image Segmentation: A Survey of Graph-cut Methods", SYSTEMS AND INFORMATICS *
JYUN-FAN TSAI et al.: "Road Detection and Classification in Urban Environments Using Conditional Random Field Models", INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE *
SHA YUN et al.: "A Road Detection Algorithm by Boosting Using Feature Combination", INTELLIGENT VEHICLES SYMPOSIUM *
YURI BOYKOV et al.: "An Experimental Comparison of Min-Cut/Max-Flow Algorithms for Energy Minimization in Vision", PATTERN ANALYSIS AND MACHINE INTELLIGENCE *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105631880A (en) * 2015-12-31 2016-06-01 百度在线网络技术(北京)有限公司 Lane line segmentation method and apparatus
CN105631880B (en) * 2015-12-31 2019-03-22 百度在线网络技术(北京)有限公司 Lane line dividing method and device
CN106558058A (en) * 2016-11-29 2017-04-05 北京图森未来科技有限公司 Parted pattern training method, lane segmentation method, control method for vehicle and device
CN108229274A (en) * 2017-02-28 2018-06-29 北京市商汤科技开发有限公司 Multilayer neural network model training, the method and apparatus of roadway characteristic identification

Also Published As

Publication number Publication date
CN104091344B (en) 2017-04-19

Similar Documents

Publication Publication Date Title
Saleem et al. Smart cities: Fusion-based intelligent traffic congestion control system for vehicular networks using machine learning techniques
US11934962B2 (en) Object association for autonomous vehicles
Liang et al. Fine-grained vessel traffic flow prediction with a spatio-temporal multigraph convolutional network
CN107886073A (en) A kind of more attribute recognition approaches of fine granularity vehicle based on convolutional neural networks
Yi et al. Trajectory clustering aided personalized driver intention prediction for intelligent vehicles
CN105893951A (en) Multidimensional non-wearable type traffic police gesture identification method and system for driverless vehicles
CN104537891B (en) A kind of boats and ships track real-time predicting method
Chandra et al. Cmetric: A driving behavior measure using centrality functions
CN111045422A (en) Control method for automatically driving and importing 'machine intelligence acquisition' model
CN103839264A (en) Detection method of lane line
CN111881802B (en) Traffic police gesture recognition method based on double-branch space-time graph convolutional network
CN104331160A (en) Lip state recognition-based intelligent wheelchair human-computer interaction system and method
CN104091344A (en) Road dividing method
Chen et al. Robust vehicle detection and viewpoint estimation with soft discriminative mixture model
Arefnezhad et al. Modeling of double lane change maneuver of vehicles
US11550327B2 (en) Composition method of automatic driving machine consciousness model
CN104050680A (en) Image segmentation method based on iteration self-organization and multi-agent inheritance clustering algorithm
Baliyan et al. Role of AI and IoT techniques in autonomous transport vehicles
Hoang et al. Optimizing YOLO Performance for Traffic Light Detection and End-to-End Steering Control for Autonomous Vehicles in Gazebo-ROS2
Souza et al. Vision-based waypoint following using templates and artificial neural networks
CN105809699A (en) Image segmentation based car window extraction method and system
CN106650814A (en) Vehicle-mounted monocular vision-based outdoor road adaptive classifier generation method
Zailan et al. An automated solid waste detection using the optimized YOLO model for riverine management
CN111046897A (en) Method for defining fuzzy event probability measure spanning different spaces
CN114332833A (en) Binary differentiable fatigue detection method based on face key points

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant