CN107301371A - Unstructured road detection method and system based on image information fusion - Google Patents

Unstructured road detection method and system based on image information fusion

Info

Publication number
CN107301371A
CN107301371A (application CN201710330557.5A)
Authority
CN
China
Prior art keywords
road
video image
preview point
road video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710330557.5A
Other languages
Chinese (zh)
Inventor
庄敏
鹿鹏
龙刚
李翊
娄海涛
权潇
刘以续
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Protruly Vision Technology Group Co Ltd
Beijing Union University
Original Assignee
Jiangsu Protruly Vision Technology Group Co Ltd
Beijing Union University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Protruly Vision Technology Group Co Ltd and Beijing Union University
Priority to CN201710330557.5A
Publication of CN107301371A
Legal status: Pending (current)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/48Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses an unstructured road detection method and system based on image information fusion. The method includes the steps of: acquiring a road video image in real time with a camera and correcting the road video image; processing the corrected road video image with an unstructured road edge detection algorithm to obtain a first preview point; processing the corrected road video image with an unstructured road segmentation algorithm to obtain a second preview point; and fitting the first preview point and the second preview point by the least squares method to obtain a virtual road centerline. Through the above processing the present invention obtains stable, accurate and reliable image data; furthermore, the image data can be applied to the navigation of an unmanned patrol vehicle, which lowers research costs and has great application value.

Description

Unstructured road detection method and system based on image information fusion
Technical field
The present invention relates to the field of road navigation, and in particular to an unstructured road detection method and system based on image information fusion.
Background technology
Vision-based navigation systems are a research focus of pattern recognition and artificial intelligence, and can be applied to unmanned intelligent vehicles. Road detection is a key technology in driver assistance and autonomous driving, providing the decision module of an unmanned intelligent vehicle with the necessary environmental information. Dedicated navigation equipment has limitations, while vision sensors are cheaper and have greater application potential; however, vision sensors are strongly affected by the environment.
An unstructured road is a road with a low degree of structure, typically without lane markings or clear road edges. Because of the influence of shadows, water stains and the like, unstructured road detection is relatively difficult and is still at the research stage. Unstructured road detection methods can be roughly divided into three categories: detection based on road features, detection based on road models, and road detection based on machine learning.
Feature-based detection methods detect the road by extracting road features such as color, gray level, texture, edges or frequency-domain characteristics. Their main advantages are insensitivity to road shape and high computational speed, which guarantees real-time performance; their disadvantage is sensitivity to shadows, cracks and water marks. The K-means clustering algorithm based on SLIC superpixels requires prior knowledge and, by adjusting parameters, divides the road information into two classes: the drivable region and the non-drivable region. However, because image information is strongly affected by illumination and the external environment, merely preprocessing the image to obtain the road edge, or segmenting it to obtain the drivable region, cannot yield stable, accurate and reliable image data.
Therefore, the prior art still needs to be improved and developed.
The content of the invention
In view of the above deficiencies of the prior art, an object of the present invention is to provide an unstructured road detection method and system based on image information fusion, intended to solve the problem that existing unstructured road detection methods cannot obtain stable, accurate and reliable image data.
The technical scheme is as follows:
An unstructured road detection method based on image information fusion, comprising the steps of:
A. acquiring a road video image in real time with a camera, and correcting the road video image;
B. processing the corrected road video image with an unstructured road edge detection algorithm to obtain a first preview point;
C. processing the corrected road video image with an unstructured road segmentation algorithm to obtain a second preview point;
D. fitting the first preview point and the second preview point by the least squares method to obtain a virtual road centerline.
In the unstructured road detection method based on image information fusion, step B specifically includes:
B1. performing grayscale conversion and denoising preprocessing on the road video image;
B2. detecting the road video image edges with the Canny algorithm and applying morphological correction;
B3. extracting the road video image edges with the Hough transform and computing the first preview point.
In the unstructured road detection method based on image information fusion, step C specifically includes:
C1. extracting in advance the color features and spatial features of the road video image;
C2. segmenting the road video image with the SLIC algorithm to obtain superpixel data;
C3. clustering the road video image with the K-means clustering algorithm and computing the second preview point.
In the unstructured road detection method based on image information fusion, step C2 specifically includes:
C21. performing initial clustering on the pixels of the road video image to obtain a number of initial seed points;
C22. reselecting the seed points within a 3*3 neighborhood of the road video image;
C23. searching, within a 2S*2S range, for the pixels closest to each reselected seed point, labeling the pixels found and assigning them to one class;
C24. when the same pixel is assigned to several seed points at once, computing the distance between the pixel and each of those seed points, and taking the seed point with the minimum distance as the cluster center of the pixel;
C25. iterating steps C22-C24 on the cluster centers until the error converges, yielding the final segmented superpixel data.
In the unstructured road detection method based on image information fusion, step C3 specifically includes:
C31. randomly selecting several cluster centroids from the superpixel data;
C32. traversing the superpixel data and assigning each superpixel to its closest centroid to form clusters;
C33. computing the mean of each cluster and taking it as the new centroid;
C34. repeating steps C32-C33 until the centroids converge, thereby obtaining the second preview point.
In the unstructured road detection method based on image information fusion, the formula of the least squares method in step D is:
S(a_0, a_1, …, a_n) = Σ_{i=1}^{m} [ y_i − Σ_{j=0}^{n} a_j·φ_j(x_i) ]^2;
wherein the point (a_0*, a_1*, …, a_n*) at which this function of several variables satisfies the equations ∂S/∂a_k = 0 attains the minimum, k = 0, 1, ..., n; when φ_0, φ_1, …, φ_n are linearly independent, φ*(x) = Σ_{j=0}^{n} a_j*·φ_j(x) is exactly the required least squares solution.
In the unstructured road detection method based on image information fusion, the processing formula of the grayscale conversion in step B1 is: Gray = 0.299·R + 0.587·G + 0.114·B.
An unstructured road detection system based on image information fusion, comprising:
a correction module, configured to acquire a road video image in real time with a camera and to correct the road video image;
a first computing module, configured to process the corrected road video image with an unstructured road edge detection algorithm to obtain a first preview point;
a second computing module, configured to process the corrected road video image with an unstructured road segmentation algorithm to obtain a second preview point;
a fitting module, configured to fit the first preview point and the second preview point by the least squares method to obtain a virtual road centerline.
In the unstructured road detection system based on image information fusion, the first computing module specifically includes:
a preprocessing unit, configured to perform grayscale conversion and denoising preprocessing on the road video image;
a correction unit, configured to detect the road video image edges with the Canny algorithm and apply morphological correction;
a first computing unit, configured to extract the road video image edges with the Hough transform and compute the first preview point.
In the unstructured road detection system based on image information fusion, the second computing module specifically includes:
an extraction unit, configured to extract in advance the color features and spatial features of the road video image;
a segmentation unit, configured to segment the road video image with the SLIC algorithm to obtain superpixel data;
a second computing unit, configured to cluster the road video image with the K-means clustering algorithm and compute the second preview point.
Beneficial effects: the present invention first computes the first preview point and the second preview point of the road video image with the unstructured road edge detection algorithm and the unstructured road segmentation algorithm respectively, and then fits the first preview point and the second preview point through information fusion to obtain the virtual centerline of the road video image; through the above processing the present invention obtains stable, accurate and reliable image data.
Brief description of the drawings
Fig. 1 is a flow chart of a preferred embodiment of the unstructured road detection method based on image information fusion of the present invention;
Fig. 2 is a schematic diagram of the chessboard used to calibrate the monocular camera in the present invention;
Fig. 3 is a schematic diagram of the road video image edges extracted with the Hough transform in the present invention;
Fig. 4 is a schematic diagram of the result of clustering the road video image with SLIC-based K-means in the present invention;
Fig. 5 is a steering wheel angle plot in a specific embodiment of the present invention;
Fig. 6 is a schematic diagram of the GPS coordinates of the outer road in a specific embodiment of the present invention;
Fig. 7 is a schematic diagram of the GPS coordinates of the inner road in a specific embodiment of the present invention;
Fig. 8 is a schematic diagram of the steering angle on the outer road in a specific embodiment of the present invention;
Fig. 9 is a schematic diagram of the steering angle on the inner road in a specific embodiment of the present invention;
Fig. 10 is a comparison of the navigation driving trajectory and the image-based driving trajectory on the outer road in a specific embodiment of the present invention;
Fig. 11 is a comparison of the navigation driving trajectory and the image-based driving trajectory on the inner road in a specific embodiment of the present invention;
Fig. 12 is a navigation trajectory deviation plot for the inner road in a specific embodiment of the present invention;
Fig. 13 is a navigation trajectory deviation plot for the outer road in a specific embodiment of the present invention;
Fig. 14 is a structural block diagram of a preferred embodiment of the unstructured road detection system based on image information fusion of the present invention.
Embodiment
The present invention provides an unstructured road detection method and system based on image information fusion. To make the purpose, technical scheme and effects of the present invention clearer and more definite, the present invention is described in more detail below. It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit it.
Referring to Fig. 1, which is a flow chart of a preferred embodiment of the unstructured road detection method based on image information fusion of the present invention, the method includes the steps of:
S100, acquiring a road video image in real time with a camera, and correcting the road video image;
S200, processing the corrected road video image with an unstructured road edge detection algorithm to obtain a first preview point;
S300, processing the corrected road video image with an unstructured road segmentation algorithm to obtain a second preview point;
S400, fitting the first preview point and the second preview point by the least squares method to obtain a virtual road centerline.
Specifically, the unstructured road detection method based on image information fusion provided by the present invention was developed on the VS2010 platform under the Windows system. A road video image is first acquired with a camera and then corrected; the first preview point and the second preview point of the road video image are computed with the unstructured road edge detection algorithm and the unstructured road segmentation algorithm respectively; finally, the first preview point and the second preview point are fitted through information fusion to obtain the virtual centerline of the road video image. Through the above processing, stable, accurate and reliable road image data can be obtained; sending the road image data to the decision module of an unmanned patrol vehicle enables visual navigation, thereby realizing autonomous driving of the unmanned patrol vehicle.
In the current intelligent driving field, the cameras in use are mainly of two types, monocular cameras and binocular cameras, whose ranging principles are entirely different. A monocular camera must first recognize the target; that is, before ranging, it must identify whether the obstacle is a car, a person or another object. A binocular camera determines distance directly from the disparity between the two images and does not need to know what the obstacle is; its difficulty is the huge amount of computation, slow processing and high cost.
Based on the above differences, the present invention preferably uses a monocular camera to acquire the road video image in real time; with a monocular camera the obstacle can be judged in advance and then ranged, which increases the safety of intelligent driving.
Further, in step S100, the present invention calibrates the monocular camera with the Zhang Zhengyou camera calibration method. As shown in Fig. 2, the calibration target is a planar black-and-white chessboard of 10*10 squares with a side length of 0.05 m. The monocular camera is fixed, and the hand-held chessboard is rotated to various orientations to capture 18 chessboard images; the intrinsic and extrinsic parameters and the distortion parameters of the monocular camera are obtained with the Matlab toolbox, and the monocular camera is calibrated according to the obtained parameters.
Specifically, through the pinhole imaging model the camera maps each coordinate point in the world coordinate system to a corresponding pixel in the image pixel coordinate system of the image plane. Camera calibration is the process of solving the parameters of the camera's geometric model, and the calibration accuracy can be improved by optimizing the algorithm. The camera parameters are divided into extrinsic parameters, intrinsic parameters and distortion parameters; since radial distortion affects the camera much more than the other distortions, only the radial distortion parameters of the camera are computed in the MATLAB toolbox.
Let the three-dimensional coordinate point be M = [X, Y, Z, 1]^T and the corresponding two-dimensional pixel point in the camera plane be m = [u, v, 1]^T. Taking the chessboard plane as Z = 0 gives
s·m = K[R t]M = K[r_1 r_2 t][X Y 1]^T = H[X Y 1]^T,
where K is the intrinsic matrix of the camera, s is a scale factor, R and t are respectively the rotation and the translation vector in the camera coordinate system, and H is the homography matrix.
With the three-dimensional coordinate points and the image pixel coordinate points known, according to the matrix solution method K has a unique solution when the number of images is greater than or equal to 3; the camera extrinsics can then be obtained from the above formula, and the radial distortion coefficients of the camera can be derived by radial distortion estimation.
Further, after the monocular camera acquires the road video image, the image can be corrected by a geometric correction method. For example, a mapping relation between the image point coordinates (row and column numbers) and the corresponding point coordinates in object space (or in a reference image) can be established in advance, the unknown parameters of the mapping relation are solved, and then the coordinates of each pixel of the image are corrected according to the mapping relation; finally the gray value of each pixel is determined.
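For illustration only, the calibration and correction described above can be sketched with standard tools. The embodiment names the Matlab toolbox, so the use of OpenCV in Python below, the file names and the sub-pixel refinement parameters are assumptions, not the embodiment itself.

```python
import glob
import cv2
import numpy as np

# 10*10 squares with 0.05 m side length -> 9*9 inner corners (values from the embodiment)
pattern = (9, 9)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.05

obj_points, img_points = [], []
for path in glob.glob("chessboard_*.png"):          # the 18 hand-held chessboard views
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# K is the intrinsic matrix; dist holds the distortion coefficients (radial terms first)
_, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

frame = cv2.imread("road_frame.png")        # one frame of the road video
corrected = cv2.undistort(frame, K, dist)   # geometric correction of the frame
```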
Further, in step S200, after the road video image has been corrected it is processed with the unstructured road edge detection algorithm to obtain the first preview point; this specifically includes the steps of:
S210, performing grayscale conversion and denoising preprocessing on the road video image;
Specifically, the color image of the road video is first converted into a grayscale image; this can be done with the weighted-average method, whose conversion formula is Gray = 0.299·R + 0.587·G + 0.114·B, i.e. the three components R, G and B are averaged with different weights. Because the human eye is most sensitive to green and least sensitive to blue, weighting the three RGB components according to this formula yields a reasonable grayscale image. Further, bilateral filtering is applied to the road video image so as to remove noise while preserving edges.
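A minimal sketch of this preprocessing step, assuming Python with OpenCV rather than the VS2010 environment named in the embodiment; the bilateral filter parameters (9, 75, 75) are illustrative values not given in the text.

```python
import cv2

def preprocess(frame_bgr):
    # Weighted-average grayscale conversion; OpenCV uses the standard weights
    # 0.299 R + 0.587 G + 0.114 B internally for BGR2GRAY
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Bilateral filtering: suppresses noise while preserving road edges
    return cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)
```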
S220, detecting the road video image edges with the Canny algorithm and applying morphological correction;
Specifically, there are several edge detection algorithms such as Sobel, Prewitt, Laplace and Canny; the present invention preferably detects the road video image edges with the Canny algorithm and applies morphological correction. The image is first dilated to obtain continuous road edges, and the noise in the image is then removed by erosion.
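A minimal sketch of the Canny detection and morphological correction under the same assumption (Python with OpenCV); the Canny thresholds and the 3*3 structuring element are illustrative values.

```python
import cv2
import numpy as np

def detect_edges(gray):
    edges = cv2.Canny(gray, 50, 150)                 # Canny edge detection
    kernel = np.ones((3, 3), np.uint8)
    edges = cv2.dilate(edges, kernel, iterations=1)  # dilation: join broken road edges
    edges = cv2.erode(edges, kernel, iterations=1)   # erosion: remove isolated noise
    return edges
```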
S230, extracting the road video image edges with the Hough transform and computing the first preview point.
Specifically, the edges of the road video image are extracted with the Hough transform and the first preview point is computed, after which the road centerline is obtained, as shown in Fig. 3.
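The text does not spell out how the first preview point is derived from the Hough lines; the sketch below assumes it is taken as the midpoint between the averaged left and right road boundaries, which is one common choice, and the Hough parameters are illustrative.

```python
import cv2
import numpy as np

def first_preview_point(edges):
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    if lines is None:
        return None
    left, right = [], []
    for x1, y1, x2, y2 in lines[:, 0]:
        slope = (y2 - y1) / (x2 - x1 + 1e-6)
        (left if slope < 0 else right).append((x1, y1, x2, y2))
    if not left or not right:
        return None
    # Midpoint between the averaged left and right boundaries, taken here as
    # the first preview point lying on the road centerline
    lx = np.mean([[x1, x2] for x1, _, x2, _ in left])
    rx = np.mean([[x1, x2] for x1, _, x2, _ in right])
    y = np.mean([[y1, y2] for _, y1, _, y2 in left + right])
    return int((lx + rx) / 2), int(y)
```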
Further, in the present invention, step S300 processes the corrected road video image with the unstructured road segmentation algorithm to obtain the second preview point, and specifically includes the steps of:
S310, extracting in advance the color features and spatial features of the road video image;
Specifically, color features are global features that describe the surface properties of the image or of the scene corresponding to an image region; the color features of the image are extracted with a color histogram. Spatial features refer to the mutual spatial positions or relative direction relations among the multiple objects segmented from the image; these relations can be divided into connection/adjacency relations, overlap relations, inclusion/containment relations and so on. The present invention first divides the image automatically into the contained objects or color regions by segmentation, and then extracts image features from these regions.
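A minimal sketch of color-histogram feature extraction for a segmented region, again assuming Python with OpenCV; the 8*8*8 bin layout is an illustrative choice.

```python
import cv2

def color_histogram(region_bgr, bins=(8, 8, 8)):
    # 3-D BGR color histogram of a segmented region, normalized so that
    # regions of different sizes remain comparable
    hist = cv2.calcHist([region_bgr], [0, 1, 2], None, bins,
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()
```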
S320, segmenting the road video image with the SLIC algorithm to obtain superpixel data;
Specifically, step S320 includes:
S321, performing initial clustering on the pixels of the road video image to obtain a number of initial seed points;
S322, reselecting the seed points within a 3*3 neighborhood of the road video image;
S323, searching, within a 2S*2S range, for the pixels closest to each reselected seed point, labeling the pixels found and assigning them to one class;
S324, when the same pixel is assigned to several seed points at once, computing the distance between the pixel and each of those seed points, and taking the seed point with the minimum distance as the cluster center of the pixel;
S325, iterating steps S322-S324 on the cluster centers until the error converges, yielding the final segmented superpixel data.
Specifically, the image is transformed from the RGB color space into the CIE-Lab color space; the (L, a, b) color value and the (x, y) coordinates of each pixel form a five-dimensional vector V = [L, a, b, x, y]. The similarity of two pixels can be measured by the distance between their vectors: the larger the distance, the smaller the similarity.
The present invention first performs initial clustering on the pixels of the road video image to obtain a number of initial seed points, then reselects the seed points within a 3*3 pixel neighborhood of the road video image, and then searches the space around each reselected seed point for the pixels closest to it; the pixels found are labeled and assigned to one class, until all pixels have been assigned. Specifically, a distance threshold can be set: when the distance between a found pixel and the currently reselected seed point is smaller than the distance threshold, the pixel is assigned to the class of that seed point. Assuming there are K reselected seed points and the road video image has N pixels, the cluster formed by each seed point contains about N/K pixels, and the side length of each cluster is approximately S = (N/K)^0.5. Preferably, the pixels closest to each reselected seed point are searched within a 2S*2S range around it.
Further, when the same pixel is assigned to several seed points at once, the distance between the pixel and each of those seed points is computed, and the seed point with the minimum distance is taken as the cluster center of the pixel; the cluster centers are then iterated until the error converges, and the final clusters, i.e. the superpixel data, are obtained.
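Steps S321-S325 describe the SLIC iteration itself; for illustration, the sketch below simply calls the scikit-image implementation of SLIC, which performs the same Lab-space seed initialization and local 2S*2S k-means search internally. The number of segments and the compactness are assumed values.

```python
import cv2
from skimage.segmentation import slic

def superpixels(frame_bgr, n_segments=400, compactness=10):
    # scikit-image's SLIC converts to CIE-Lab internally (convert2lab=True),
    # matching the [L, a, b, x, y] vectors described above
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    return slic(rgb, n_segments=n_segments, compactness=compactness,
                convert2lab=True, start_label=0)   # per-pixel superpixel labels
```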
S330, clustering the road video image with the K-means clustering algorithm and computing the second preview point; specifically, step S330 includes:
S331, randomly selecting several cluster centroids from the superpixel data;
S332, traversing the superpixel data and assigning each superpixel to its closest centroid to form clusters;
Specifically, another distance threshold is set and the distance between the centroid and each superpixel is computed; when the distance value is smaller than the distance threshold, the superpixel is assigned to that centroid to form a cluster.
S333, computing the mean of each cluster and taking it as the new centroid;
Specifically, the mean vector of all pixels in each cluster is computed and a new centroid is obtained.
S334, repeating the iteration of steps S332-S333 until the centroids converge, thereby obtaining the second preview point.
Specifically, the road image is clustered with the SLIC-based K-means clustering algorithm, so that the road image data can be divided into the drivable region and the non-drivable region, as shown in Fig. 4.
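A minimal sketch of the SLIC-based K-means step, assuming Python with scikit-learn; taking the cluster that contains the bottom-center superpixel as the drivable region, and its mean position as the second preview point, are assumptions, since the text does not give the exact rule.

```python
import numpy as np
from sklearn.cluster import KMeans

def second_preview_point(frame_lab, sp_labels):
    """frame_lab: image in CIE-Lab (e.g. cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB));
    sp_labels: per-pixel superpixel labels from SLIC."""
    h, w = sp_labels.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ids = np.unique(sp_labels)

    # Mean [L, a, b, x, y] vector of each superpixel
    feats = np.array([[*frame_lab[sp_labels == s].mean(axis=0),
                       xs[sp_labels == s].mean(), ys[sp_labels == s].mean()]
                      for s in ids])

    # Two classes: drivable region vs. non-drivable region
    km = KMeans(n_clusters=2, n_init=10).fit(feats)
    cluster_of = dict(zip(ids, km.labels_))

    # Assumption: the cluster containing the bottom-center superpixel is the road
    road = cluster_of[sp_labels[h - 1, w // 2]]
    road_feats = feats[km.labels_ == road]

    # Second preview point: mean (x, y) position of the drivable region
    return int(road_feats[:, 3].mean()), int(road_feats[:, 4].mean())
```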
Further, in step S400, let φ(x) = Σ_{j=0}^{n} a_j·φ_j(x) be the optimal fitting curve, where the a_j are undetermined coefficients; the problem of solving for the optimal curve is thus converted into the problem of finding the undetermined coefficients. The formula of the least squares method is:
S(a_0, a_1, …, a_n) = Σ_{i=1}^{m} [ y_i − Σ_{j=0}^{n} a_j·φ_j(x_i) ]^2;
wherein the point (a_0*, a_1*, …, a_n*) at which this function of several variables satisfies the equations ∂S/∂a_k = 0 attains the minimum, k = 0, 1, ..., n; when φ_0, φ_1, …, φ_n are linearly independent, φ*(x) = Σ_{j=0}^{n} a_j*·φ_j(x) is exactly the required least squares solution and is taken as the virtual centerline.
By fitting the first preview point and the second preview point with the above least squares method, the virtual centerline of the road image is obtained.
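A minimal sketch of the fitting step, assuming Python with NumPy; fitting a low-degree polynomial x = f(y) through the combined first and second preview points gathered over successive frames is an assumption about how the two point sets are joined, since the text only states that both are fitted by least squares.

```python
import numpy as np

def virtual_centerline(first_points, second_points, degree=1):
    """Least-squares fit of x = f(y) through both preview-point sets."""
    pts = np.asarray(list(first_points) + list(second_points), dtype=float)
    xs, ys = pts[:, 0], pts[:, 1]
    coeffs = np.polyfit(ys, xs, degree)   # minimizes the sum of squared residuals
    return np.poly1d(coeffs)              # callable centerline x = f(y)
```

Sampling the returned polynomial over the image rows then gives the virtual centerline whose angle information is sent to the decision module.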
Further, the present invention verifies the processed road image data with a specific embodiment. The angle information of the virtual centerline is sent to the decision module of the lower computer of the unmanned patrol vehicle, and the steering wheel angle of the unmanned patrol vehicle is shown in Fig. 5. While the unmanned patrol vehicle is driving on the road sections shown in Fig. 6 and Fig. 7, the steering angle data sent to the decision module of the lower computer are as shown in Fig. 8 and Fig. 9; Fig. 10 and Fig. 11 respectively show, for the road sections of Fig. 6 and Fig. 7, a comparison of the navigation driving trajectory and the driving trajectory obtained with the image sensor; Fig. 12 and Fig. 13 show the trajectory deviation between driving with the image sensor and driving with navigation on the two road sections.
The comparison shows that the present invention achieves road navigation within a small error range using only a monocular camera, and that the navigation of the present invention is more accurate than that based on the image sensor; the present invention therefore has great application value.
Further, the present invention also provides an unstructured road detection system based on image information fusion; as shown in Fig. 14, the system includes:
a correction module 100, configured to acquire a road video image in real time with a camera and to correct the road video image;
a first computing module 200, configured to process the corrected road video image with an unstructured road edge detection algorithm to obtain a first preview point;
a second computing module 300, configured to process the corrected road video image with an unstructured road segmentation algorithm to obtain a second preview point;
a fitting module 400, configured to fit the first preview point and the second preview point by the least squares method to obtain a virtual road centerline.
In the unstructured road detection system based on image information fusion, the first computing module 200 specifically includes:
a preprocessing unit, configured to perform grayscale conversion and denoising preprocessing on the road video image;
a correction unit, configured to detect the road video image edges with the Canny algorithm and apply morphological correction;
a first computing unit, configured to extract the road video image edges with the Hough transform and compute the first preview point.
In the unstructured road detection system based on image information fusion, the second computing module 300 specifically includes:
an extraction unit, configured to extract in advance the color features and spatial features of the road video image;
a segmentation unit, configured to segment the road video image with the SLIC algorithm to obtain superpixel data;
a second computing unit, configured to cluster the road video image with the K-means clustering algorithm and compute the second preview point.
In summary, the present invention first computes the first preview point and the second preview point of the road video image with the unstructured road edge detection algorithm and the unstructured road segmentation algorithm respectively, and then fits the first preview point and the second preview point through information fusion to obtain the virtual centerline of the road video image; through the above processing the present invention obtains stable, accurate and reliable image data; furthermore, the image data can be applied to the navigation of an unmanned patrol vehicle, which lowers research costs and has great application value.
It should be understood that the application of the present invention is not limited to the above examples; those of ordinary skill in the art can make improvements or transformations according to the above description, and all such improvements and transformations shall fall within the protection scope of the appended claims of the present invention.

Claims (10)

1. An unstructured road detection method based on image information fusion, characterized by comprising the steps of:
A. acquiring a road video image in real time with a camera, and correcting the road video image;
B. processing the corrected road video image with an unstructured road edge detection algorithm to obtain a first preview point;
C. processing the corrected road video image with an unstructured road segmentation algorithm to obtain a second preview point;
D. fitting the first preview point and the second preview point by the least squares method to obtain a virtual road centerline.
2. The unstructured road detection method based on image information fusion according to claim 1, characterized in that step B specifically includes:
B1. performing grayscale conversion and denoising preprocessing on the road video image;
B2. detecting the road video image edges with the Canny algorithm and applying morphological correction;
B3. extracting the road video image edges with the Hough transform and computing the first preview point.
3. The unstructured road detection method based on image information fusion according to claim 1, characterized in that step C specifically includes:
C1. extracting in advance the color features and spatial features of the road video image;
C2. segmenting the road video image with the SLIC algorithm to obtain superpixel data;
C3. clustering the road video image with the K-means clustering algorithm and computing the second preview point.
4. The unstructured road detection method based on image information fusion according to claim 3, characterized in that step C2 specifically includes:
C21. performing initial clustering on the pixels of the road video image to obtain a number of initial seed points;
C22. reselecting the seed points within a 3*3 neighborhood of the road video image;
C23. searching, within a 2S*2S range, for the pixels closest to each reselected seed point, labeling the pixels found and assigning them to one class;
C24. when the same pixel is assigned to several seed points at once, computing the distance between the pixel and each of those seed points, and taking the seed point with the minimum distance as the cluster center of the pixel;
C25. iterating steps C22-C24 on the cluster centers until the error converges, yielding the final segmented superpixel data.
5. The unstructured road detection method based on image information fusion according to claim 3, characterized in that step C3 specifically includes:
C31. randomly selecting several cluster centroids from the superpixel data;
C32. traversing the superpixel data and assigning each superpixel to its closest centroid to form clusters;
C33. computing the mean of each cluster and taking it as the new centroid;
C34. repeating steps C32-C33 until the centroids converge, thereby obtaining the second preview point.
6. The unstructured road detection method based on image information fusion according to claim 1, characterized in that the formula of the least squares method in step D is:
S(a_0, a_1, …, a_n) = Σ_{i=1}^{m} [ y_i − Σ_{j=0}^{n} a_j·φ_j(x_i) ]^2;
wherein the point (a_0*, a_1*, …, a_n*) at which this function of several variables satisfies the equations ∂S/∂a_k = 0 attains the minimum, k = 0, 1, ..., n; when φ_0, φ_1, …, φ_n are linearly independent, φ*(x) = Σ_{j=0}^{n} a_j*·φ_j(x) is exactly the required least squares solution.
7. The unstructured road detection method based on image information fusion according to claim 2, characterized in that the processing formula of the grayscale conversion in step B1 is: Gray = 0.299·R + 0.587·G + 0.114·B.
8. An unstructured road detection system based on image information fusion, characterized by comprising:
a correction module, configured to acquire a road video image in real time with a camera and to correct the road video image;
a first computing module, configured to process the corrected road video image with an unstructured road edge detection algorithm to obtain a first preview point;
a second computing module, configured to process the corrected road video image with an unstructured road segmentation algorithm to obtain a second preview point;
a fitting module, configured to fit the first preview point and the second preview point by the least squares method to obtain a virtual road centerline.
9. The unstructured road detection system based on image information fusion according to claim 8, characterized in that the first computing module specifically includes:
a preprocessing unit, configured to perform grayscale conversion and denoising preprocessing on the road video image;
a correction unit, configured to detect the road video image edges with the Canny algorithm and apply morphological correction;
a first computing unit, configured to extract the road video image edges with the Hough transform and compute the first preview point.
10. The unstructured road detection system based on image information fusion according to claim 8, characterized in that the second computing module specifically includes:
an extraction unit, configured to extract in advance the color features and spatial features of the road video image;
a segmentation unit, configured to segment the road video image with the SLIC algorithm to obtain superpixel data;
a second computing unit, configured to cluster the road video image with the K-means clustering algorithm and compute the second preview point.
CN201710330557.5A 2017-05-11 2017-05-11 A kind of unstructured road detection method and system based on image information fusion Pending CN107301371A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710330557.5A CN107301371A (en) 2017-05-11 2017-05-11 A kind of unstructured road detection method and system based on image information fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710330557.5A CN107301371A (en) 2017-05-11 2017-05-11 A kind of unstructured road detection method and system based on image information fusion

Publications (1)

Publication Number Publication Date
CN107301371A 2017-10-27

Family

ID=60137078

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710330557.5A Pending CN107301371A (en) 2017-05-11 2017-05-11 A kind of unstructured road detection method and system based on image information fusion

Country Status (1)

Country Link
CN (1) CN107301371A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108052559A (en) * 2017-12-01 2018-05-18 国电南瑞科技股份有限公司 Distribution terminal defect mining analysis method based on big data processing
CN111727437A (en) * 2018-01-08 2020-09-29 远见汽车有限公司 Multispectral system providing pre-crash warning
CN110084190A (en) * 2019-04-25 2019-08-02 南开大学 Unstructured road detection method in real time under a kind of violent light environment based on ANN
CN110084190B (en) * 2019-04-25 2024-02-06 南开大学 Real-time unstructured road detection method under severe illumination environment based on ANN
CN117648905A (en) * 2024-01-30 2024-03-05 珠海芯烨电子科技有限公司 Method and related device for analyzing label instruction of thermal printer
CN117648905B (en) * 2024-01-30 2024-04-16 珠海芯烨电子科技有限公司 Method and related device for analyzing label instruction of thermal printer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
DD01 Delivery of document by public notice

Addressee: Xue Liantong

Document name: Notice of commencement of preservation procedure