CN102456131A - Obstacle sensing method - Google Patents

Obstacle sensing method

Info

Publication number
CN102456131A
CN102456131A (application CN2010105280956A)
Authority
CN
China
Prior art keywords
sensing
barrier
image
search
line segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2010105280956A
Other languages
Chinese (zh)
Inventor
林秋丰
林家平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Pingtung University of Science and Technology
Original Assignee
National Pingtung University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Pingtung University of Science and Technology filed Critical National Pingtung University of Science and Technology
Priority to CN2010105280956A priority Critical patent/CN102456131A/en
Publication of CN102456131A publication Critical patent/CN102456131A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an obstacle sensing method comprising an image pre-processing procedure, a lane line sensing procedure and an obstacle sensing procedure. The image pre-processing procedure sets a region of interest in an image to be processed; the lane line sensing procedure senses edge information in the region of interest and senses a group of lane lines from that edge information; and the obstacle sensing procedure compares the pixels within the range of the group of lane lines against a matching template, locating an optimal match point with a sum-of-absolute-differences (SAD) operation and a three-step search algorithm, and then sensing an obstacle around the optimal match point from the edge information.

Description

Obstacle sensing method
Technical field
The present invention relates to an obstacle sensing method, and in particular to an obstacle sensing method for vehicles travelling on a road.
Background art
Vehicles have become an indispensable means of transport in modern life. When a vehicle travels on a road, blind spots, driver fatigue or unpredictable pedestrians put the safety of the driver and passengers at risk. In the past, passive safety devices such as airbags or reinforced vehicle bodies served as remedies once a traffic accident occurred, and saved many lives. Today, car makers and suppliers have shifted their focus to active safety functions such as road obstacle sensing, collision warning and automatic driving, so as to prevent accidents and rollovers before they happen.
A prior-art obstacle sensing method installs at least one signal transmitter and at least one signal receiver on the vehicle, using radar, ultrasonic or laser signals for example. The transmitter emits a signal around the vehicle; when the receiver picks up the signal reflected by a nearby object, an obstacle is present near the vehicle and is shown on an in-car screen. However, a radar transmitter emits a large amount of electromagnetic radiation, and the longer the range the higher the transmit power, which raises health concerns, causes mutual interference, and often misjudges overhead bridges as obstacles because of their reflections. An ultrasonic receiver senses only a fixed position within a limited angle, has difficulty sensing distant objects, and is relatively expensive. A laser receiver, being a receiver only, leaves many blind spots, and installing receivers at many fixed positions raises the cost.
As image processing technology advances, sensing methods based on image processing have gradually been developed. For example, Taiwan patent application publication No. 200517982, "Apparatus and method for sensing obstacles with stereoscopic vision", discloses a prior-art obstacle sensing method: several cameras capture several raw images; edge information is sensed in the raw images to produce several edge objects and their object information; from this object information, together with the focal length and the horizontal spacing of two of the cameras, the relative distance of each edge object is produced. The relative distance is compared with a threshold distance; an edge object whose relative distance is less than the threshold distance is set as an obstacle, and the relative distance of the obstacle is obtained.
However, the above stereoscopic-vision apparatus and method require the focal length and horizontal spacing of two cameras to produce the relative distance of an edge object. They therefore cannot sense an obstacle using the information provided by a single camera alone.
Summary of the invention
The object of the invention is to overcome the above shortcomings by providing an obstacle sensing method that requires only a single camera.
An obstacle sensing method comprises an image pre-processing procedure, a lane line detection procedure and an obstacle detection procedure. The image pre-processing procedure sets a region of interest in an image to be processed; the lane line detection procedure senses edge information in the region of interest and senses a group of lane lines from that edge information; the obstacle detection procedure compares the pixels within the range of the group of lane lines against a matching template, locating an optimal match point with a sum-of-absolute-differences (SAD) operation and a three-step search algorithm, and then sensing an obstacle around the optimal match point from the edge information.
Beneficial effects: the obstacle sensing method of the present invention obtains consecutive images from a single camera and senses the obstacle from the shadow information in those images. It therefore reduces installation cost and can warn the driver before a collision with the obstacle.
Description of drawings
Fig. 1: block diagram of the preferred embodiment of the obstacle sensing method of the present invention.
Fig. 2: flowchart of the preferred embodiment of the obstacle sensing method of the present invention.
Fig. 3a: schematic diagram of the low-resolution image in the preferred embodiment.
Fig. 3b: schematic diagram of the region of interest in the preferred embodiment.
Fig. 3c: schematic diagram of the edge image in the preferred embodiment.
Fig. 3d: schematic diagram of the consecutive edge images in the preferred embodiment.
Fig. 3e: schematic diagram of the line-completed image in the preferred embodiment.
Fig. 3f: schematic diagram of the lane boundaries in the preferred embodiment.
Fig. 3g: schematic diagram of the lane lines and lane center line in the preferred embodiment.
Fig. 3h: schematic diagram of the sensing regions in the preferred embodiment.
Fig. 4: schematic diagram of the Hough transform parameter definitions in the preferred embodiment.
Fig. 5a: schematic diagram of one erroneous-road model in the preferred embodiment.
Fig. 5b: schematic diagram of another erroneous-road model in the preferred embodiment.
Fig. 6: flowchart of the matching algorithm in the preferred embodiment.
Fig. 7: schematic diagram of the matching algorithm in the preferred embodiment.
Fig. 8: flowchart of the boundary scan in the preferred embodiment.
Fig. 9: schematic diagram of the horizontal scanning mode in the preferred embodiment.
Fig. 10: schematic diagram of the vertical scanning mode in the preferred embodiment.
[Main element symbol description]
1 n-th low-resolution image; 11 region of interest
2 n-th edge image; 21 object edge
2' (n−1)-th edge image; 21' object edge
3 n-th line-completed image; 31 lane boundary
32 lane line; 32C lane center line
33 sensing region
34 predetermined sensing region; 341 optimal match point
342 first search region; 342a center point; 342b–342i surrounding points
343 second search region; 343a center point; 343b–343i surrounding points
344 third search region; 344a center point; 344b–344i surrounding points
345 horizontal scan point; 346 horizontal scan line segment
346B best horizontal scan line segment; 347 vertical scan point
348 vertical scan line segment; 348B best vertical scan line segment
P1, P2, P3 coordinate points
Embodiment
To make the above and other objects, features and advantages of the present invention more readily apparent, a preferred embodiment is described in detail below in conjunction with the accompanying drawings:
Please refer to Fig. 1, the block diagram of the preferred embodiment of the obstacle sensing method of the present invention. The method comprises an image pre-processing procedure S1, a lane line detection procedure S2 and an obstacle detection procedure S3. The image pre-processing procedure S1 sets a region of interest in an image to be processed; the lane line detection procedure S2 senses edge information in the region of interest and senses lane line information from it; the obstacle detection procedure S3 senses an optimal match point from the lane line information and searches for the edge information around the optimal match point to sense obstacle information.
Please refer to Fig. 2, the flowchart of the obstacle sensing method of the present invention. The image pre-processing procedure S1 comprises an image selection step S11, a resolution reduction step S12, a range reduction step S13 and a noise removal step S14. Figs. 3a to 3h schematically show the processing stages. The image selection step S11 takes the n-th source image (i.e., the frame to be processed) from the consecutive images captured by an image capture device as the image to be processed; if the n-th source image is in color, it is converted to grayscale to reduce its data volume. The formula for converting a color image into a grayscale image is as follows:
Y=0.2989×R+0.5870×G+0.1140×B
where Y is the gray level of a pixel in the grayscale image, and R, G and B are the red, green and blue components of the corresponding pixel in the color image.
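This conversion can be sketched in a few lines of Python with NumPy (an illustration only, not part of the original disclosure; the function name and array layout are assumptions):

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    # Apply the luminance weights Y = 0.2989*R + 0.5870*G + 0.1140*B
    # from the description to an H x W x 3 RGB array.
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    y = 0.2989 * r + 0.5870 * g + 0.1140 * b
    return y.astype(np.uint8)
```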
As shown in Fig. 3a, the resolution reduction step S12 reduces the resolution of the n-th source image without distorting its original information, forming an n-th low-resolution image 1. This can be done with nearest-neighbor interpolation (Nearest Neighbor Interpolation) or bilinear interpolation (Bilinear Interpolation); the former is fast but of ordinary quality, while the latter is moderately fast with better quality, and either may be adopted according to actual needs. The present embodiment adopts nearest-neighbor interpolation.
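A sketch of this step using OpenCV's resize (the scale factor is an assumption; the patent does not fix a target resolution):

```python
import cv2

def reduce_resolution(gray, scale=0.5, method="nearest"):
    # Nearest-neighbour interpolation is faster with ordinary quality;
    # bilinear is slower but smoother, matching the trade-off above.
    interp = cv2.INTER_NEAREST if method == "nearest" else cv2.INTER_LINEAR
    h, w = gray.shape[:2]
    return cv2.resize(gray, (int(w * scale), int(h * scale)), interpolation=interp)
```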
As shown in Fig. 3b, the range reduction step S13 sets the region of interest (Region of Interest) in the n-th low-resolution image 1, namely the zone labeled "11" in Fig. 3b. By processing only the image data within the region of interest 11, the required information can be obtained in less processing time. In the present embodiment, the width of the region of interest 11 equals the width of the n-th low-resolution image 1, its height is half the height of the n-th low-resolution image 1, and it is set to the lower half of the n-th low-resolution image 1.
The noise removal step S14 filters incidental noise out of the region of interest 11 so that it does not degrade subsequent image processing. In addition, the edge line segments in the region of interest 11 become jagged after the resolution reduction, and this step smooths them (smoothing). Any image filter that removes noise while preserving edge information may be used. The present embodiment uses a median filter: a mask of a given size (for example 3×3) is slid over the image, the pixel values under the mask are sorted, and the center pixel is replaced by the median of the sorted values.
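Steps S13 and S14 together might look like the following sketch (array slicing for the lower-half region of interest, then a 3×3 median filter; illustrative only):

```python
import cv2

def roi_and_denoise(lowres):
    # S13: region of interest = full width, lower half of the frame.
    h, w = lowres.shape[:2]
    roi = lowres[h // 2:, :]
    # S14: 3x3 median filter removes noise and smooths jagged edges.
    return cv2.medianBlur(roi, 3)
```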
Referring again to Fig. 2, the lane line detection procedure S2 comprises an edge sensing step S21, a line completion step S22, a boundary sensing step S23 and a lane line identification step S24. As shown in Figs. 3b and 3c, the edge sensing step S21 removes unwanted information such as the background and interior regions from the region of interest 11 and saves the result as an n-th edge image 2, which retains only the information of several object edges 21; any edge-sensing operation may be used. The present embodiment uses Sobel filtering, as shown in Table 1 below: the Gx and Gy masks compute the luminance differences between the center pixel and its neighbors in the region of interest 11, and |Gx|+|Gy| is then calculated to obtain the information of the object edges 21.
Table 1: Gx and Gy mask elements of the Sobel filter
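A sketch of the |Gx|+|Gy| computation of step S21 (the binarization threshold is an assumption; standard 3×3 Sobel masks are used):

```python
import cv2
import numpy as np

def sobel_edges(roi, thresh=100):
    # 3x3 Sobel masks give horizontal (Gx) and vertical (Gy) gradients;
    # the edge magnitude is |Gx| + |Gy| as in the description.
    gx = cv2.Sobel(roi, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(roi, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.abs(gx) + np.abs(gy)
    return np.where(mag > thresh, 255, 0).astype(np.uint8)
```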
As shown in Fig. 3c, the object edges 21 appear as incomplete or discontinuous line segments, so correlated information between consecutive images is needed to reinforce them. As shown in Figs. 3d and 3e, the line completion step S22 uses the (n−1)-th edge image 2' to reinforce the object edges 21 of the n-th edge image 2, because the object edges 21 are highly correlated with the object edges 21' of the (n−1)-th edge image 2'. The n-th edge image and the (n−1)-th edge image are therefore added together to obtain an n-th line-completed image 3.
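The addition of consecutive edge images in step S22 can be as simple as a saturating add (a sketch; saturation at 255 is an assumption about how overflow is handled):

```python
import cv2

def reinforce_edges(edges_n, edges_prev):
    # Add the n-th and (n-1)-th edge images; cv2.add saturates at 255,
    # so overlapping edges stay bright instead of wrapping around.
    return cv2.add(edges_n, edges_prev)
```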
As shown in Fig. 3f, the boundary sensing step S23 senses the lane boundaries 31 in the n-th line-completed image 3. This can be done by template comparison directly in the line-completed image 3, or by converting the line-completed image 3 into another representation from which line-segment features are extracted, for example the Hough transform (Hough Transform). In the present embodiment, as shown in Fig. 4, the x and y coordinates of points P1, P2, P3 (and so on) on a line segment are Hough-transformed into ρ (rho) and θ (theta) coordinates with the following formula:
ρ=y×cosθ+x×sinθ
Since line segments of the same direction in the image intersect at the same point in the (ρ, θ) space after the Hough transform, the point receiving the most intersections can be taken, and the line segment whose ρ and θ are closest to this optimum is chosen as the lane boundary 31.
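A minimal voting sketch of this transform using the patent's convention ρ = y·cosθ + x·sinθ (bin resolutions are assumptions; a library routine such as cv2.HoughLines instead uses the more common ρ = x·cosθ + y·sinθ):

```python
import numpy as np

def hough_peak(edge_img, theta_bins=180):
    # Vote every edge pixel into a (rho, theta) accumulator and
    # return the most-voted line parameters, as in step S23.
    h, w = edge_img.shape
    diag = int(np.hypot(h, w)) + 1
    acc = np.zeros((2 * diag + 1, theta_bins), dtype=np.int32)
    thetas = np.linspace(0.0, np.pi, theta_bins, endpoint=False)
    for y, x in zip(*np.nonzero(edge_img)):
        rhos = np.round(y * np.cos(thetas) + x * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(theta_bins)] += 1
    r, t = np.unravel_index(np.argmax(acc), acc.shape)
    return r - diag, thetas[t]
```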
The lane line identification step S24 verifies whether the lane boundaries 31 form a group of correct lane lines 32, which can be done by comparison against figure models. As shown in Figs. 5a and 5b, the present embodiment uses preset erroneous-road models to judge whether a lane boundary 31 is a correct lane line 32, and then judges whether any line segment not yet picked out could belong to the group of lane lines 32. As shown in Fig. 3g, once the group of lane lines 32 is identified, a lane center line 32C is marked.
Referring again to Fig. 2, the obstacle detection procedure S3 comprises a range definition step S31, a target search step S32 and a boundary scan step S33. As shown in Fig. 3h, the range definition step S31 defines several sensing regions 33 from the group of lane lines 32; in the present embodiment, three sensing regions 33 are defined.
Referring again to Fig. 3h, the target search step S32 performs a matching algorithm comparison within a predetermined sensing region 34; if the comparison result is below a threshold value, the optimal match point exists, namely the point labeled "341" in Fig. 3h. Please refer to Figs. 6 and 7, the flowchart and schematic diagram of the matching algorithm. The matching algorithm uses the three-step search algorithm (Three Step Search Algorithm) together with the sum-of-absolute-differences operation (Sum of Absolute Differences, SAD) to locate the optimal match point 341, and comprises seven actions S321, S322, S323, S324, S325, S326 and S327. Action S321 sets the center point of a search region. For the first-step search, the search region is a first search region 342, and its center point 342a is set to the middle position of the predetermined sensing region 34; for the second-step search, the search region is a second search region 343, and its center point 343a is set to the position of the best comparison result of the first-step search; for the third-step search, the search region is a third search region 344, and its center point 344a is set to the position of the best comparison result of the second-step search.
Action S322 sets the range of the search region. For the first-step search, eight surrounding points 342b, 342c, 342d, 342e, 342f, 342g, 342h and 342i are set around the center point 342a at a spacing of four pixels, giving a search region of 9×9 pixels; for the second-step search, eight surrounding points 343b, 343c, 343d, 343e, 343f, 343g, 343h and 343i are set around the center point 343a at a spacing of two pixels, giving a search region of 5×5 pixels; for the third-step search, eight surrounding points 344b, 344c, 344d, 344e, 344f, 344g, 344h and 344i are set around the center point 344a at a spacing of one pixel, giving a search region of 3×3 pixels.
Action S323 compares the search region against a matching template, records the comparison results and normalizes them. In this embodiment, the matching template is a 30×5 pixel block whose pixel gray levels lie between 180 and 200. For the comparison, the center of the matching template is aligned with the upper-left point of the search region (i.e., point 342b, 343b or 344b); the pixels to be matched are then compared with the matching template by full search (Full Search Algorithm) within the search region, computing the sum of absolute differences pixel by pixel from left to right and from top to bottom, and the results are recorded for sorting and normalization. The SAD formula is as follows:
d(Ij, T) = Σ(i=1..n) |Ii,j − Ti|
where Ii,j is the gray level of a pixel in the candidate figure, Ti is the gray level of a pixel in the matching template, n is the width of the matching template, and d(Ij, T) is the matching error; the smaller the value of d(Ij, T), the higher the matching degree between the candidate figure in the search region and the matching template. For ease of computation, d(Ij, T) is normalized into the range 0 to 1 and recorded; the normalization formula is as follows:
d′(Ij, T) = (d(Ij, T) − dmin) / (dmax − dmin)
where d(Ij, T) is the current SAD value, dmin is the minimum SAD value recorded so far, and dmax is the maximum SAD value recorded so far.
Action S324 sets the value and position of a best comparison result. The value of the best comparison result is the minimum of d(Ij, T), used to locate the optimal match point 341; the position of the best comparison result is the pixel position of that minimum, used as the search-region center point of the next search step.
Action S325 judges whether the third-step search has been performed. If so, action S326 follows; if not, action S321 is performed again.
Action S326 judges whether the optimal match point 341 has been found. If the value of the best comparison result is below the threshold value, the optimal match point 341 is found; its position is set to the position of the best comparison result, for example the position of surrounding point 344h in Fig. 7, and the boundary scan step S33 follows. If the optimal match point 341 is not found, action S327 follows. In the present embodiment, the threshold value is 0.1.
Action S327 updates the predetermined sensing region 34. Since the optimal match point 341 could not be found in the current predetermined sensing region 34, the predetermined sensing region 34 is set to another sensing region 33 of the (n+1)-th image (i.e., the next frame to be processed).
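Under the above description, the target search step might be sketched as follows. This implements the standard nine-point three-step search with step sizes 4, 2 and 1 and assumes the search region lies inside the frame; normalizing by the largest possible SAD (template area × 255) is a simplification of the running min-max rule of action S323, so the 0.1 threshold here is comparable only in spirit:

```python
import numpy as np

def sad(patch, template):
    # Sum of absolute differences between a candidate patch and
    # the matching template (grayscale arrays of the same shape).
    return int(np.abs(patch.astype(np.int32) - template.astype(np.int32)).sum())

def three_step_search(image, template, center, threshold=0.1):
    th, tw = template.shape
    cy, cx = center
    best_val = None
    for step in (4, 2, 1):                      # S321/S322: 9x9, 5x5, 3x3 regions
        scored = []
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = cy + dy, cx + dx
                patch = image[y:y + th, x:x + tw]
                if patch.shape == template.shape:    # skip out-of-frame points
                    scored.append((sad(patch, template), y, x))
        best_val, cy, cx = min(scored)          # S324: best value and position
    norm = best_val / float(th * tw * 255)      # simplified normalization to [0, 1]
    return (cy, cx) if norm < threshold else None   # S326: None = no match found
```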
The boundary scan step S33 scans for the object edge 21 in the vertical and horizontal directions near the optimal match point 341 to sense the information of the obstacle. Since the optimal match point 341 is a pixel close to the vehicle shadow, the object edge 21 should lie above the optimal match point 341; vertically scanning for the object edge 21 around the optimal match point 341 therefore confirms whether the optimal match point 341 is a false detection, and the horizontal scan then yields the information of the obstacle.
As shown in Fig. 8, the boundary scan step S33 comprises seven actions S331, S332, S333, S334, S335, S336 and S337. As shown in Fig. 9, action S331 sets several horizontal scan points 345 from the optimal match point 341 and sets several horizontal scan line segments 346 from those horizontal scan points 345. The horizontal scan points 345 are pixels on the vertical line through the optimal match point 341, and each horizontal scan line segment 346 is centered on a horizontal scan point 345 and extends horizontally to the lane lines 32 on both sides. In the present embodiment, five horizontal scan points 345 are set: the optimal match point 341 itself, the pixels one and two positions above it, and the pixels one and two positions below it.
Action S332 counts, in each horizontal scan line segment 346, the number of edge pixels belonging to the object edge 21, and sets a best horizontal scan line segment 346B. Because the gray level of an edge pixel contrasts sharply with the background, for example an edge gray level of 255 against a background of 0, the number of edge pixels contained in each horizontal scan line segment 346 can be counted and recorded, and the segment containing the maximum number of edge pixels is set as the best horizontal scan line segment 346B.
Action S333 judges whether the best horizontal scan line segment 346B carries the information of the obstacle: whether its edge-pixel count exceeds half of its total pixel count. If so, action S334 follows to further analyze the information of the obstacle; if not, the optimal match point 341 has no object edge 21 above it and there is no obstacle.
Action S334 sets a bottom width of the obstacle, which is set to the edge-pixel count of the best horizontal scan line segment 346B. Once the bottom width of the obstacle is known, scanning in the vertical direction yields further information such as the height of the obstacle.
As shown in Fig. 10, action S335 sets several vertical scan points 347 from the pixels of the best horizontal scan line segment 346B and sets several vertical scan line segments 348 from those vertical scan points 347. Each vertical scan line segment 348 is centered on a vertical scan point 347 and extends vertically upward and downward by half of the pixel count of the best horizontal scan line segment 346B. In the present embodiment, five vertical scan points 347 are set: the horizontal scan point 345 lying on the best horizontal scan line segment 346B, the pixels one and two positions to its left, and the pixels one and two positions to its right.
Action S336 computes, in each vertical scan line segment 348, the length difference distance of the edge pixels belonging to the object edge 21, and sets a best vertical scan line segment 348B: the length difference distance contained in each vertical scan line segment 348 is computed and recorded, and the vertical scan line segment 348 with the maximum length difference distance is set as the best vertical scan line segment 348B.
Action S337 sets a height of the obstacle, which is set to the length difference distance of the edge pixels contained in the best vertical scan line segment 348B.
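A sketch of the boundary scan as a whole (S331–S337). It scans every column of the best horizontal segment instead of the five vertical scan points of the embodiment, and treats the lane bounds as fixed columns; both are simplifications:

```python
import numpy as np

def boundary_scan(edge_img, match, lane_left, lane_right):
    my, mx = match
    h = edge_img.shape[0]
    # S331/S332: five horizontal scan lines through the match point;
    # the one containing the most edge pixels (value 255) wins.
    rows = [my + d for d in (-2, -1, 0, 1, 2) if 0 <= my + d < h]
    best_count, best_row = max(
        (int((edge_img[r, lane_left:lane_right] == 255).sum()), r) for r in rows)
    seg_len = lane_right - lane_left
    if best_count <= seg_len // 2:
        return None                      # S333: no obstacle at this match point
    width = best_count                   # S334: bottom width = edge-pixel count
    # S335/S336: vertical scans extending half the segment length up and down;
    # the "length difference distance" is the span between the first and last
    # edge pixel in each column.
    half = width // 2
    top, bottom = max(best_row - half, 0), min(best_row + half, h - 1)
    height = 0
    for c in range(lane_left, lane_right):
        edges = np.nonzero(edge_img[top:bottom + 1, c] == 255)[0]
        if edges.size:
            height = max(height, int(edges.max() - edges.min()))
    return width, height                 # S337: height = max length difference
```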
In summary, the obstacle sensing method of the present invention obtains consecutive images from a single camera and senses the obstacle from the shadow information in those images, which reduces installation cost and allows the driver to be warned before colliding with the obstacle.

Claims (10)

1. An obstacle sensing method, characterized by comprising:
an image pre-processing procedure, setting a region of interest in an image to be processed;
a lane line detection procedure, sensing edge information in the region of interest and sensing a group of lane lines from the edge information; and
an obstacle detection procedure, comparing the pixels within the range of the group of lane lines against a matching template, locating an optimal match point with a sum-of-absolute-differences operation and a three-step search algorithm, and sensing an obstacle around the optimal match point from the edge information.
2. The obstacle sensing method as claimed in claim 1, characterized in that the image pre-processing procedure comprises:
an image selection step, taking an n-th source image from the consecutive images captured by an image capture device as the image to be processed, and converting it to grayscale if it is in color;
a resolution reduction step, reducing the resolution of the n-th source image without distortion to form an n-th low-resolution image;
a range reduction step, setting the region of interest in the n-th low-resolution image; and
a noise removal step, performing a median filtering operation in the region of interest.
3. The obstacle sensing method as claimed in claim 1, characterized in that the lane line detection procedure comprises:
an edge sensing step, sensing several object edges in the region of interest and saving them as an n-th edge image;
a line completion step, reinforcing the several object edges of the n-th edge image with an (n−1)-th edge image and saving the result as an n-th line-completed image;
a boundary sensing step, sensing the lane boundaries in the n-th line-completed image; and
a lane line identification step, identifying with an erroneous-road model whether the lane boundaries are the group of lane lines.
4. The obstacle sensing method as claimed in claim 1, characterized in that the obstacle detection procedure comprises:
a range definition step, defining several sensing regions from the group of lane lines;
a target search step, comparing the pixels in a predetermined sensing region against the matching template with the sum-of-absolute-differences operation and the three-step search algorithm, the optimal match point existing if the comparison result is below a threshold value; and
a boundary scan step, searching for the edge information in the vertical and horizontal directions around the optimal match point to sense the obstacle.
5. The obstacle sensing method as claimed in claim 4, characterized in that the target search step comprises:
setting the center point of a search region;
setting the range of the search region;
comparing the search region against the matching template, recording the comparison results and normalizing them;
setting the value and position of a best comparison result; and
judging whether the third-step search has been performed: if so, judging whether the best comparison result is below the threshold value, the optimal match point being found if it is below the threshold value, and the predetermined sensing region being set to another sensing region if it is not; if the third-step search has not been performed, performing the action of setting the search-region center point again.
6. The obstacle sensing method as claimed in claim 4, characterized in that the boundary scan step comprises:
setting several horizontal scan points from the optimal match point, and setting several horizontal scan line segments from the horizontal scan points;
counting, in each horizontal scan line segment, the number of edge pixels of the edge information, to set a best horizontal scan line segment;
judging whether the edge-pixel count of the best horizontal scan line segment exceeds half of its total pixel count: if so, setting the bottom width of the obstacle to the edge-pixel count of the best horizontal scan line segment; if not, there being no obstacle;
if the obstacle exists, setting several vertical scan points from the pixels of the best horizontal scan line segment, and setting several vertical scan line segments from the vertical scan points;
computing, in each vertical scan line segment, the length difference distance of the edge pixels of the edge information, to set a best vertical scan line segment; and
setting the height of the obstacle to the length difference distance of the best vertical scan line segment.
7. The obstacle sensing method as claimed in claim 4, characterized in that the threshold value is 0.1.
8. The obstacle sensing method as claimed in claim 5, characterized in that the another sensing region is selected from the sensing regions of an (n+1)-th edge image and differs from the current sensing region.
9. The obstacle sensing method as claimed in claim 6, characterized in that the number of the horizontal scan points is 5, the horizontal scan points including the optimal match point.
10. The obstacle sensing method as claimed in claim 3, characterized in that the reinforcement is performed by adding the n-th edge image and the (n−1)-th edge image.
CN2010105280956A 2010-11-02 2010-11-02 Obstacle sensing method Pending CN102456131A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010105280956A CN102456131A (en) 2010-11-02 2010-11-02 Obstacle sensing method


Publications (1)

Publication Number Publication Date
CN102456131A true CN102456131A (en) 2012-05-16

Family

ID=46039308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010105280956A Pending CN102456131A (en) 2010-11-02 2010-11-02 Obstacle sensing method

Country Status (1)

Country Link
CN (1) CN102456131A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1838174A (en) * 2005-03-22 2006-09-27 日产自动车株式会社 Detecting device and method to detect an object based on a road boundary
CN101251928A (en) * 2008-03-13 2008-08-27 上海交通大学 Object tracking method based on core
JP2010056975A (en) * 2008-08-29 2010-03-11 Alpine Electronics Inc Object detection system by rear camera

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
李东明: "Research on intelligent image detection technology for vehicle-mounted railway obstacles", China Master's Theses Full-text Database *
王建中 et al.: "A moving-object detection algorithm based on block matching", Microelectronics & Computer *
赵建云: "Research on video-based vehicle detection technology at traffic intersections", China Master's Theses Full-text Database *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106796292A (en) * 2014-10-15 2017-05-31 法雷奥开关和传感器有限责任公司 For detecting in the method for at least one of the peripheral region of motor vehicles object, driver assistance system and motor vehicles
CN106796292B (en) * 2014-10-15 2019-06-04 法雷奥开关和传感器有限责任公司 For detecting method, driver assistance system and the motor vehicles of at least one object in the peripheral region of motor vehicles
US10571564B2 (en) 2014-10-15 2020-02-25 Valeo Schalter Und Sensoren Gmbh Method for detecting at least one object in a surrounding area of a motor vehicle, driver assistance system and motor vehicle
CN105139397A (en) * 2015-08-25 2015-12-09 广州视源电子科技股份有限公司 PCB board detection method and device
CN105139397B (en) * 2015-08-25 2017-12-19 广州视源电子科技股份有限公司 A kind of pcb board detection method and device
CN110163908A (en) * 2018-02-12 2019-08-23 北京宝沃汽车有限公司 Look for the method, apparatus and storage medium of object
CN109871787A (en) * 2019-01-30 2019-06-11 浙江吉利汽车研究院有限公司 A kind of obstacle detection method and device


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20120516