CN105740806B - Multi-view perspective transform target feature extraction method - Google Patents

Multi-view perspective transform target feature extraction method

Info

Publication number
CN105740806B
CN105740806B
Authority
CN
China
Prior art keywords
target
image
perspective
perspective transform
angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610057100.7A
Other languages
Chinese (zh)
Other versions
CN105740806A (en)
Inventor
田雨农
范玉涛
周秀田
于维双
陆振波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Roiland Technology Co Ltd
Original Assignee
Dalian Roiland Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Roiland Technology Co Ltd filed Critical Dalian Roiland Technology Co Ltd
Priority to CN201610057100.7A priority Critical patent/CN105740806B/en
Publication of CN105740806A publication Critical patent/CN105740806A/en
Application granted granted Critical
Publication of CN105740806B publication Critical patent/CN105740806B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/24: Aligning, centring, orientation detection or correction of the image
    • G06V10/245: Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a multi-view perspective transform target feature extraction method comprising the following steps: calibrating the target area of the image captured by the camera using a checkerboard pattern and performing a perspective transform of the target area; binarizing the region after the perspective transform and performing edge extraction; and obtaining target information from the image after edge extraction. The present invention improves the clarity of the whole perspective-transformed image, provides more reliable source image quality for subsequent image fusion, and brings the image closer to the actual proportions of the original world coordinates.

Description

Multi-view perspective transform target feature extraction method
Technical field
The present invention relates to a target area feature extraction method, specifically a multi-view perspective transform target feature extraction method.
Background technique
Improving traffic safety has attracted increasing public attention and is one of the social problems urgently awaiting solution. Studies have found that more than 70 percent of traffic accidents are caused by driver error. Because human drivers are naturally limited by physiology and psychology, traffic accidents are difficult to avoid entirely. It has therefore become extremely important to use advanced technologies such as computing and information perception to assist the driver, improve driving safety, and prevent traffic accidents. Against this background, research on intelligent transportation systems has developed rapidly. An intelligent transportation system mainly comprises three entities: the driver, the intelligent vehicle, and the intelligent highway system, which are usually connected by advanced electronic communication and computer control technology.
As an important component of an intelligent transportation system, the intelligent vehicle is an integrated system combining environment sensing, planning and decision-making, multi-level driver assistance, and other functions. Among these, the perception of the environment is the prerequisite for all other functions; the intelligent vehicle uses its various on-board sensors to obtain information about the surrounding environment. Owing to the complexity of the road traffic environment, the vehicle must make full use of road traffic information to reach reliable decisions and ensure driving safety. While driving, a driver's visibility may be reduced by weather or other natural factors, the driver may be fatigued or even intoxicated, or the road environment may simply be too complex to judge; any of these can lead to misjudging the traffic environment and cause an accident. If the vehicle has an effective road environment recognition system that promptly guides it toward the correct driving behavior and warns of dangerous situations in advance, driving safety will be greatly improved. The acquisition and recognition of road traffic information therefore occupies a highly important position in the entire intelligent transportation system.
Vision-based road marking extraction has always been an important component of the intelligent driving field. Its task is to separate the lane from the background in the video captured by the on-board camera, based on the color, shape, and texture features of the lane lines, and thereby obtain the direction of the lane and information such as the position of the vehicle relative to the lane. Existing lane detection algorithms can roughly be divided into lane-line region detection methods, feature-driven methods, and model matching methods, but these methods perform unsatisfactorily when extracting multiple lane lines, and the extracted lane line positions deviate somewhat from the actual scene.
Summary of the invention
To solve the above problems, the object of the present invention is to provide a multi-view perspective transform target feature extraction method that corrects the image ahead of the vehicle by perspective transform and then applies binarization, edge processing, and a Hough transform to the transformed image to extract lane line features.
The technical solution adopted by the present invention to solve the technical problem is a multi-view perspective transform target feature extraction method comprising the following steps:
The target area in the image captured by the camera is calibrated using a checkerboard pattern, and the perspective transform of the target area is performed;
The region after the perspective transform is binarized, and edge extraction is performed;
Target information is obtained from the image after edge extraction.
Calibrating the target area in the image captured by the camera using a checkerboard pattern comprises the following steps:
The checkerboard in the image captured by the camera is divided into 9 regions arranged as a three-by-three grid;
In each region, an arbitrary rectangle is selected, and its four corner points serve as the 4 calibration points;
A rectangular coordinate system is established with the upper-left point as the origin, horizontally to the right as the positive X direction, and vertically downward as the positive Y direction;
The perspective parameters are obtained from the coordinates of the 4 calibration points in the rectangular coordinate system;
All points of every region are then perspective-transformed using the perspective parameters of their respective region.
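The partition into nine regions can be sketched as follows; this is a minimal Python sketch assuming the image divides evenly into the three-by-three grid (the function name `nine_regions` is illustrative, not from the patent):

```python
import numpy as np

def nine_regions(img):
    """Split an image into a 3x3 grid of equal regions ("nine grids").

    Regions are returned row-major.  Even divisibility of the image
    size by 3 is assumed for simplicity.
    """
    h, w = img.shape[:2]
    hs, ws = h // 3, w // 3
    return [img[r * hs:(r + 1) * hs, c * ws:(c + 1) * ws]
            for r in range(3) for c in range(3)]
```

Each returned region would then be calibrated and transformed independently, as the steps above describe.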
The perspective parameters are obtained from the following equations (one pair per calibration point):
xi' = (m1·xi + m2·yi + m3) / (m7·xi + m8·yi + 1)
yi' = (m4·xi + m5·yi + m6) / (m7·xi + m8·yi + 1)
where m1~m8 are the perspective parameters; xi, yi are the coordinates of the 4 points in the rectangular coordinate system, and xi', yi' are the coordinates of the 4 points after the perspective transform in the rectangular coordinate system; i = 1...n, n = 4.
Perspective-transforming all points of all regions with the perspective parameters of their respective regions is realized by the following formula:
(u, v, w)T = M · (x', y', 1)T
where u, v, w are the homogeneous coordinates of an arbitrary point after the perspective transform in the rectangular coordinate system; x', y' are the coordinates in the rectangular coordinate system of an arbitrary point on the grayscale image captured by the camera; and M is the perspective matrix, consisting of the elements m1~m8 and 1.
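The parameter solve and the point mapping described above can be sketched as a standard eight-unknown homography estimate. This is a hedged illustration, not the patent's own code: the function names are mine, and the usual linearization with the ninth matrix element fixed to 1 is assumed:

```python
import numpy as np

def solve_perspective(src, dst):
    """Solve the 8 perspective parameters m1..m8 from 4 point pairs.

    src: four (x, y) calibration points; dst: their (x', y') positions
    after the perspective transform.  Each pair contributes two rows of
    the standard 8x8 linear system for a homography with m9 = 1.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    m = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(m, 1.0).reshape(3, 3)  # full 3x3 matrix M

def warp_point(M, x, y):
    """Map one point through M in homogeneous coordinates: (u, v, w) = M (x, y, 1)."""
    u, v, w = M @ np.array([x, y, 1.0])
    return u / w, v / w
```

Applying `warp_point` to every pixel of a region is exactly the per-region transform step; in practice the division by w restores the inhomogeneous coordinates.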
Binarizing the region after the perspective transform and performing edge extraction comprises the following steps:
The sizes of two filters are set;
The region after the perspective transform is filtered with each of the two filters to obtain two images; the difference of the two images gives the binary image of the target feature;
Starting from the leftmost side of the binary image, the (i+1)-th column of pixel values is subtracted from the i-th column, and the difference is stored as the (i+1)-th column; this yields the target left edge image;
Starting from the rightmost side of the binary image, the (i-1)-th column of pixel values is subtracted from the i-th column, and the difference is stored as the (i-1)-th column; this yields the target right edge image; i = 1...w;
The target left edge is searched for from the upper-left corner of the target left edge image, and the target right edge is searched for from the upper-left corner of the target right edge image; when both the target left edge and the target right edge are found, the left edge position and the right edge position are averaged to give the extracted target center line.
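The column-difference edge extraction and center-line step can be sketched as below. This is a simplified reading of the text: left edges are taken where a row changes from 0 to 1, right edges where it changes from 1 to 0, and the first left/right pair in each row is averaged; the function name and the per-row pairing are my assumptions.

```python
import numpy as np

def centerline(binary):
    """Per-row centre between the first left edge and first right edge.

    binary: 2-D 0/1 image.  A column difference along each row finds
    the 0->1 jump (left edge) and the 1->0 jump (right edge); their
    average column is marked as the target center line.
    """
    h, w = binary.shape
    centre = np.zeros_like(binary)
    diff = binary[:, 1:].astype(int) - binary[:, :-1].astype(int)
    for r in range(h):
        lefts = np.where(diff[r] == 1)[0] + 1   # first column inside the stripe
        rights = np.where(diff[r] == -1)[0]     # last column inside the stripe
        if lefts.size and rights.size:
            centre[r, (lefts[0] + rights[0]) // 2] = 1
    return centre
```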
Obtaining target information from the image after edge extraction comprises the following steps:
A Hough transform is applied to the image containing the target center line to obtain Hough radii and Hough angles;
The Hough radii are voted on, and the several (Hough radius, Hough angle) pairs with the most votes are taken as the target information.
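The voting step can be sketched with the line parameterization used later in the description, R = x·sin(theta) + y·cos(theta). The accumulator resolution (1 pixel by 1 degree) and the function name are illustrative choices, not specified by the patent:

```python
import numpy as np

def hough_vote(edge, n_theta=180, top=20):
    """Accumulate R = x*sin(theta) + y*cos(theta) over edge pixels and
    return the (R, theta) pairs with the most votes."""
    ys, xs = np.nonzero(edge)
    thetas = np.deg2rad(np.arange(n_theta))          # 0..179 degrees
    diag = int(np.hypot(*edge.shape)) + 1            # offset so R indexes are >= 0
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for x, y in zip(xs, ys):
        r = np.rint(x * np.sin(thetas) + y * np.cos(thetas)).astype(int) + diag
        acc[r, np.arange(n_theta)] += 1              # one vote per theta bin
    order = np.argsort(acc, axis=None)[::-1][:top]   # highest-vote bins first
    rs, ts = np.unravel_index(order, acc.shape)
    return [(int(r) - diag, float(thetas[t])) for r, t in zip(rs, ts)]
```

With `top=20` this mirrors the embodiment's choice of keeping the twenty highest-voted (R, theta) pairs as detected lane lines.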
The invention has the following beneficial effects and advantages:
1. The present invention improves the clarity of the whole perspective-transformed image and provides more reliable source image quality for subsequent image fusion, bringing the image closer to the actual proportions of the original world coordinates.
2. The image after the perspective transform highlights road information better and makes lane line features more distinct, which facilitates lane line recognition. The edge method of the present invention, tailored to the characteristics of lane lines, directly obtains the center of the lane line and thereby simplifies subsequent lane line extraction.
Detailed description of the invention
Fig. 1 is a flow chart of the method of the present invention;
Fig. 2 is the original image of the embodiment;
Fig. 3 is the perspective transform result of the embodiment;
Fig. 4 is the final result of the embodiment.
Specific embodiment
The present invention will be further described in detail below with reference to the embodiments.
As shown in Fig. 1, taking lane lines as the target of the invention by way of example, the image ahead of the vehicle is corrected in real time using the multi-view perspective transform, binarization and edge extraction are applied to the corrected image, and finally the lane line information in the image is extracted using the Hough transform.
1. A single frame of the video is processed as a sample. A checkerboard is first placed below the camera lens over the original image, and the checkerboard is divided into 9 regions arranged as a three-by-three grid; each region is processed in the same manner, as shown in Fig. 2.
2. On one region, the four corner points of an arbitrary rectangle are selected as the 4 calibration points.
3. For the 4 points selected in step 2, a rectangular coordinate system is established with the upper-left corner of the checkerboard as the origin, the horizontal rightward direction as the positive X axis, and the vertical downward direction as the positive Y axis; the coordinates of the 4 calibration points in the world coordinate system are then readily available.
4. The perspective parameters are solved from the following equations:
xi' = (m1·xi + m2·yi + m3) / (m7·xi + m8·yi + 1)
yi' = (m4·xi + m5·yi + m6) / (m7·xi + m8·yi + 1)
where m1~m8 are the perspective parameters (as a vector); xi, yi are the coordinates of the 4 points in the rectangular coordinate system, and xi', yi' are the coordinates of the 4 points after the perspective transform in the rectangular coordinate system; i = 1...n, n = 4.
5. The whole region is transformed with these perspective parameters, i.e.
(u, v, w)T = M · (x', y', 1)T
where u, v, w are the homogeneous coordinates of an arbitrary point after the perspective transform in the rectangular coordinate system; x', y' are the coordinates in the rectangular coordinate system of an arbitrary point on the image captured by the camera; and M is the perspective matrix, consisting of the elements m1~m8 and 1.
6. The remaining regions of the three-by-three grid are perspective-transformed by the same method, finally achieving the perspective transform of the entire image, as shown in Fig. 3.
7. The region after the perspective transform is binarized using dual-scale filters, and edge extraction is performed, comprising the following steps:
Two filters of different sizes are set: the first filter is 3×3 and the second filter is 101×101;
The region after the perspective transform is filtered with the first filter and with the second filter to obtain two images; the pixel values of the two images are differenced to obtain the binary image of the target feature;
The entire image is traversed starting from the upper-left corner. When the traversal detects that the current pixel value is 0 and the next pixel value is 1, the column index of the current pixel is recorded and the traversal continues; when it detects that the current element value is 1 and the next element value is 0, the column index of the current element is recorded again. The average of this column index and the previously recorded one is computed, the element at that average column in the current row is set to 1, and all other values are 0. This yields the edge image, whose edge information represents the lane line information.
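The dual-scale filtering step can be sketched as a difference of two box (mean) filters at the stated sizes. Thresholding the difference at zero to form the binary image is my assumption, since the text only states that the difference of the two filtered images yields it:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def dual_scale_binarize(gray):
    """Difference-of-box-filters binarisation: a small (3x3) and a
    large (101x101) mean filter are applied to the grayscale image and
    their difference is taken; positive responses mark thin bright
    structures such as lane lines (threshold at zero is an assumption).
    """
    small = uniform_filter(gray.astype(float), size=3)
    large = uniform_filter(gray.astype(float), size=101)
    return (small - large > 0).astype(np.uint8)
```

The large filter approximates the local background brightness, so the subtraction suppresses slow illumination changes while keeping narrow bright markings.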
8. Lane lines are detected from the obtained edge information using the Hough transform.
The edge points of the edge image are Hough-transformed via the following formula:
R = x·sin(theta) + y·cos(theta)
where R is the Hough radius, theta is the Hough angle, and x, y are the pixel coordinates in the edge image. Votes are accumulated over all radii, and the 20 (R, theta) pairs with the highest vote counts are the detected lane lines. The points of the perspective image are then mapped back to the original image to obtain the lane lines of the original image, as shown in Fig. 4.

Claims (5)

1. A multi-view perspective transform target feature extraction method, characterized by comprising the following steps:
calibrating the target area in the image captured by the camera using a checkerboard pattern, and performing the perspective transform of the target area;
binarizing the region after the perspective transform, and performing edge extraction;
obtaining target information from the image after edge extraction;
wherein binarizing the region after the perspective transform and performing edge extraction comprises the following steps:
setting the sizes of two filters;
filtering the region after the perspective transform with each of the two filters to obtain two images, and differencing the two images to obtain the binary image of the target feature;
starting from the leftmost side of the binary image, subtracting the (i+1)-th column of pixel values from the i-th column and storing the difference as the (i+1)-th column, thereby obtaining the target left edge image;
starting from the rightmost side of the binary image, subtracting the (i-1)-th column of pixel values from the i-th column and storing the difference as the (i-1)-th column, thereby obtaining the target right edge image; i = 1...k;
searching for the target left edge from the upper-left corner of the target left edge image, and for the target right edge from the upper-left corner of the target right edge image; when both the target left edge and the target right edge are found, averaging the left edge position and the right edge position to obtain the extracted target center line.
2. The multi-view perspective transform target feature extraction method according to claim 1, characterized in that calibrating the target area in the image captured by the camera using a checkerboard pattern comprises the following steps:
dividing the checkerboard in the image captured by the camera into 9 regions arranged as a three-by-three grid;
selecting in each region an arbitrary rectangle whose four corner points serve as the 4 calibration points;
establishing a rectangular coordinate system with the upper-left point as the origin, horizontally to the right as the positive X direction, and vertically downward as the positive Y direction;
obtaining the perspective parameters from the coordinates of the 4 calibration points in the rectangular coordinate system;
perspective-transforming all points of every region with the perspective parameters of their respective region.
3. The multi-view perspective transform target feature extraction method according to claim 2, characterized in that the perspective parameters are obtained from the following equations:
xi' = (m1·xi + m2·yi + m3) / (m7·xi + m8·yi + 1)
yi' = (m4·xi + m5·yi + m6) / (m7·xi + m8·yi + 1)
where m1~m8 are the perspective parameters; xi, yi are the coordinates of the 4 points in the rectangular coordinate system, and xi', yi' are the coordinates of the 4 points after the perspective transform in the rectangular coordinate system; i = 1...n, n = 4.
4. The multi-view perspective transform target feature extraction method according to claim 2, characterized in that perspective-transforming all points of all regions with the perspective parameters of their respective regions is realized by the following formula:
(u, v, w)T = M · (x', y', 1)T
where u, v, w are the homogeneous coordinates of an arbitrary point after the perspective transform in the rectangular coordinate system; x', y' are the coordinates in the rectangular coordinate system of an arbitrary point on the grayscale image captured by the camera; and m1~m8 and 1 are the elements of the perspective matrix M.
5. The multi-view perspective transform target feature extraction method according to claim 1, characterized in that obtaining target information from the image after edge extraction comprises the following steps:
applying a Hough transform to the image containing the target center line to obtain Hough radii and Hough angles;
voting on the Hough radii, and taking the several (Hough radius, Hough angle) pairs with the most votes as the target information.
CN201610057100.7A 2016-01-27 2016-01-27 Multi-view perspective transform target feature extraction method Active CN105740806B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610057100.7A CN105740806B (en) 2016-01-27 2016-01-27 Multi-view perspective transform target feature extraction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610057100.7A CN105740806B (en) 2016-01-27 2016-01-27 Multi-view perspective transform target feature extraction method

Publications (2)

Publication Number Publication Date
CN105740806A CN105740806A (en) 2016-07-06
CN105740806B true CN105740806B (en) 2018-12-28

Family

ID=56247793

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610057100.7A Active CN105740806B (en) 2016-01-27 2016-01-27 Multi-view perspective transform target feature extraction method

Country Status (1)

Country Link
CN (1) CN105740806B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108230394A (en) * 2016-12-14 2018-06-29 中南大学 A kind of orbital image auto-correction method
CN109784227B (en) * 2018-12-29 2019-12-10 深圳爱莫科技有限公司 image detection and identification method and device
CN111432198A (en) * 2020-03-30 2020-07-17 中国人民解放军陆军装甲兵学院 Perspective transformation-based projection type three-dimensional display system correction method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156977A (en) * 2010-12-22 2011-08-17 浙江大学 Vision-based road detection method
CN103226817A (en) * 2013-04-12 2013-07-31 武汉大学 Superficial venous image augmented reality method and device based on perspective projection
CN103824302A (en) * 2014-03-12 2014-05-28 西安电子科技大学 SAR (synthetic aperture radar) image change detecting method based on direction wave domain image fusion

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156977A (en) * 2010-12-22 2011-08-17 浙江大学 Vision-based road detection method
CN103226817A (en) * 2013-04-12 2013-07-31 武汉大学 Superficial venous image augmented reality method and device based on perspective projection
CN103824302A (en) * 2014-03-12 2014-05-28 西安电子科技大学 SAR (synthetic aperture radar) image change detecting method based on direction wave domain image fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Perspective image correction based on improved Hough transform and perspective transform; Dai Qin et al.; Chinese Journal of Liquid Crystals and Displays; 2012-08-31; Vol. 27, No. 4; pp. 552-556 *

Also Published As

Publication number Publication date
CN105740806A (en) 2016-07-06

Similar Documents

Publication Publication Date Title
CN105678285B (en) A kind of adaptive road birds-eye view transform method and road track detection method
CN110501018B (en) Traffic sign information acquisition method for high-precision map production
CN110197589B (en) Deep learning-based red light violation detection method
CN105261020B (en) A kind of express lane line detecting method
CN109657632B (en) Lane line detection and identification method
CN102682292B (en) Method based on monocular vision for detecting and roughly positioning edge of road
DE102009050505A1 (en) Clear path detecting method for vehicle i.e. motor vehicle such as car, involves modifying clear path based upon analysis of road geometry data, and utilizing clear path in navigation of vehicle
DE102009048699A1 (en) Travel's clear path detection method for motor vehicle i.e. car, involves monitoring images, each comprising set of pixels, utilizing texture-less processing scheme to analyze images, and determining clear path based on clear surface
DE102009050492A1 (en) Travel's clear path detection method for motor vehicle i.e. car, involves monitoring images, each comprising set of pixels, utilizing texture-less processing scheme to analyze images, and determining clear path based on clear surface
CN103824452A (en) Lightweight peccancy parking detection device based on full view vision
CN110188606B (en) Lane recognition method and device based on hyperspectral imaging and electronic equipment
CN109711264A (en) A kind of bus zone road occupying detection method and device
CN103902985B (en) High-robustness real-time lane detection algorithm based on ROI
CN106250816A (en) A kind of Lane detection method and system based on dual camera
CN102663357A (en) Color characteristic-based detection algorithm for stall at parking lot
CN105205489A (en) License plate detection method based on color texture analyzer and machine learning
CN106887004A (en) A kind of method for detecting lane lines based on Block- matching
CN111832388B (en) Method and system for detecting and identifying traffic sign in vehicle running
CN106802144A (en) A kind of vehicle distance measurement method based on monocular vision and car plate
CN106778633B (en) Pedestrian identification method based on region segmentation
CN105740806B (en) Multi-view perspective transform target feature extraction method
CN103021179B (en) Based on the Safe belt detection method in real-time monitor video
CN106767854A (en) mobile device, garage map forming method and system
CN110733416B (en) Lane departure early warning method based on inverse perspective transformation
CN109635722A (en) A kind of high-resolution remote sensing image crossing automatic identifying method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant