CN108229406A - Lane line detection method, device and terminal - Google Patents

Lane line detection method, device and terminal

Info

Publication number
CN108229406A
CN108229406A
Authority
CN
China
Prior art keywords
pixel
candidate
lane line
line
parallax value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810024993.4A
Other languages
Chinese (zh)
Other versions
CN108229406B (en)
Inventor
李阳
高语函
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Group Co Ltd
Original Assignee
Hisense Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Group Co Ltd filed Critical Hisense Group Co Ltd
Priority to CN201810024993.4A priority Critical patent/CN108229406B/en
Publication of CN108229406A publication Critical patent/CN108229406A/en
Application granted granted Critical
Publication of CN108229406B publication Critical patent/CN108229406B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/48 Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20048 Transform domain processing
    • G06T 2207/20061 Hough transform
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20228 Disparity calculation for image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30256 Lane; Road marking

Abstract

The application provides a lane line detection method, device and terminal for the field of driver assistance. The method includes: obtaining an image to be detected and the V-disparity map corresponding to the image, and determining the candidate lane lines in the image and the ground relation line in the V-disparity map; determining the second pixel located on the ground relation line in the same row as a first pixel on a candidate lane line, and, if the absolute difference between the first disparity value of the first pixel and the second disparity value of the second pixel satisfies a first preset condition, determining the first pixel to be an effective pixel; and, if the proportion of effective pixels on the candidate lane line exceeds a preset threshold, determining the candidate lane line to be a target lane line. With this method, the interference that obstacle contour lines in the image cause to lane line detection can be rejected, improving the accuracy of the lane line detection result.

Description

Lane line detection method, device and terminal
Technical field
This application relates to the field of driver assistance technology, and in particular to a lane line detection method, device and terminal.
Background technology
A lane departure warning system can, by issuing alarms, help the driver reduce traffic accidents caused by lane departure, and in the workflow of a lane departure warning system, lane line detection and recognition is a particularly important link.
At present, lane lines are mainly recognized in road images by exploiting their straight-line characteristics. Specifically, the grayscale image of a road image can be binarized to obtain a binary image, straight lines are then detected in this binary image by Hough line detection, and finally the detected lines are screened by two parameters, line distance and inclination angle, to determine the lane lines. In practical applications, however, because of interference from obstacles on the road surface, Hough-based detection algorithms often misdetect parts of the obstacles themselves as lane lines, making the detection result inaccurate.
Summary of the invention
In view of this, in order to solve the prior-art problem that correct lane lines cannot be detected because of interference from road obstacles, the application provides a lane line detection method, device and terminal, so that accurate lane lines can be determined from the detected candidate lane lines and the accuracy of the lane line detection result is improved.
Specifically, the application is achieved by the following technical solution:
According to a first aspect of the embodiments of the application, a lane line detection method is provided, the method including:
obtaining an image to be detected and the V-disparity map corresponding to the image, and determining the candidate lane lines in the image and the ground relation line in the V-disparity map; determining the second pixel located on the ground relation line in the same row as a first pixel on a candidate lane line, and, if the absolute difference between the first disparity value of the first pixel and the second disparity value of the second pixel satisfies a first preset condition, determining the first pixel to be an effective pixel; and, if the proportion of effective pixels on the candidate lane line exceeds a preset threshold, determining the candidate lane line to be a target lane line.
Optionally, the first preset condition is: the absolute difference between the first disparity value of the first pixel and the second disparity value of the second pixel is less than or equal to a first difference.
Optionally, determining the first pixel to be an effective pixel if the absolute difference between the first disparity value of the first pixel and the second disparity value of the second pixel satisfies the first preset condition specifically includes: dividing the V-disparity map into multiple sub-V-disparity maps along the direction of increasing disparity; and, in each sub-V-disparity map, determining that the absolute difference between the first disparity value of the first pixel and the second disparity value of the second pixel is less than or equal to a second difference, where the second difference increases along the direction of increasing disparity.
According to a second aspect of the embodiments of the application, another lane line detection method is provided, the method including:
obtaining an image to be detected, determining the candidate lane lines in the image, and determining the pixels of the candidate lane lines that lie in the same row as candidate pixels; according to the disparity values of the candidate pixels, determining the candidate pixels whose disparity values do not satisfy a second preset condition to be interference pixels; and, if the proportion of interference pixels on a candidate lane line is less than a preset ratio, determining that candidate lane line to be a target lane line.
Optionally, determining the candidate pixels whose disparity values do not satisfy the second preset condition to be interference pixels according to the disparity values of the candidate pixels specifically includes: determining, according to a preset radius, a neighborhood centered on a target pixel, where the target pixel is the candidate pixel with the largest disparity value; and, if no other candidate pixel lies in the neighborhood apart from the target pixel, determining the target pixel to be an interference pixel.
Optionally, determining the candidate pixels whose disparity values do not satisfy the second preset condition to be interference pixels according to the disparity values of the candidate pixels specifically includes: calculating the mean and standard deviation of the disparity values of the candidate pixels and determining the disparity-value dispersion of each candidate pixel; and, if the neighborhood centered on the dispersion of the largest disparity value contains no dispersion corresponding to any other disparity value, determining the candidate pixel corresponding to the largest disparity value to be an interference pixel.
According to a third aspect of the embodiments of the application, a lane line detection device is provided, including:
a first acquisition module, configured to obtain an image to be detected and the V-disparity map corresponding to the image, and to determine the candidate lane lines in the image and the ground relation line in the V-disparity map; an effective-pixel determining module, configured to determine the second pixel located on the ground relation line in the same row as a first pixel on a candidate lane line and, if the absolute difference between the first disparity value of the first pixel and the second disparity value of the second pixel satisfies a first preset condition, to determine the first pixel to be an effective pixel; and a first target-lane-line determining module, configured to determine the candidate lane line to be a target lane line if the proportion of effective pixels on the candidate lane line exceeds a preset threshold.
According to a fourth aspect of the embodiments of the application, another lane line detection device is provided, including:
a second acquisition module, configured to obtain an image to be detected, to determine the candidate lane lines in the image, and to determine the pixels of the candidate lane lines that lie in the same row as candidate pixels; an interference-pixel determining module, configured to determine, according to the disparity values of the candidate pixels, the candidate pixels whose disparity values do not satisfy a second preset condition to be interference pixels; and a second target-lane-line determining module, configured to determine the candidate lane line to be a target lane line if the proportion of interference pixels on the candidate lane line is less than a preset ratio.
According to a fifth aspect of the embodiments of the application, a lane line detection terminal is provided, including a memory, a processor, a communication interface, a camera assembly and a communication bus, where the memory, the processor, the communication interface and the camera assembly communicate with one another through the communication bus; the camera assembly is configured to capture an image to be detected and send the image to the processor through the communication bus; the memory is configured to store a computer program; and the processor is configured to execute the computer program stored in the memory and, when executing the computer program, to carry out on the image to be detected the steps of any lane line detection method provided by the embodiments of the application.
According to a sixth aspect of the embodiments of the application, a computer-readable storage medium is provided, where the computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, implements the steps of any lane line detection method provided by the embodiments of the application.
In one mode, since lane lines lie on the road surface while obstacles stand above the road, whether a detected candidate lane line lies on the road surface can be used to decide whether it is an interfering lane line or a target lane line. The application therefore proposes determining the ground relation line in the V-disparity map and judging the absolute difference in disparity between a first pixel on the candidate lane line and the pixel of the ground relation line in the same row; if the absolute difference satisfies a first preset condition, the first pixel is an effective pixel, and the target lane lines are finally determined according to the proportion of effective pixels.
In another mode, since lane lines lie on the road surface, the disparity values of the lane line pixels in the same row do not fluctuate much. The application therefore proposes, in the disparity map, determining the candidate pixels of the candidate lane lines in the same row whose disparity values do not satisfy a second preset condition to be interference pixels, according to their disparity values, and determining a candidate lane line to be a target lane line only when the proportion of interference pixels is less than a preset ratio.
In summary, the lane line detection method provided by the application can avoid the interference that road obstacles cause to lane line detection and improve the accuracy of the lane line detection result.
Description of the drawings
Fig. 1 is an example of a binarized road image captured by an in-vehicle camera;
Fig. 2 is an example of the candidate lane lines obtained from Fig. 1 by Hough line detection;
Fig. 3 shows the disparity distributions corresponding to the candidate lane lines detected in Fig. 2;
Fig. 4 is a flowchart of the lane line detection method of embodiment one of the application;
Fig. 5 is an example of determining effective pixels in the first mode of embodiment one of the application;
Fig. 6 is an example of determining effective pixels in the second mode of embodiment one of the application;
Fig. 7 is a flowchart of the lane line detection method of embodiment two of the application;
Fig. 8 is an example of determining candidate pixels in embodiment two of the application;
Fig. 9 is a block diagram of one embodiment of the lane line detection device of the application;
Fig. 10 is a block diagram of another embodiment of the lane line detection device of the application;
Fig. 11 is a hardware structure diagram of a lane line detection terminal of embodiment five of the application.
Detailed description of the embodiments
Exemplary embodiments are described in detail here, with examples illustrated in the accompanying drawings. Unless otherwise indicated, the same numerals in different drawings in the following description refer to the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the application; rather, they are merely examples of devices and methods consistent with some aspects of the application as detailed in the appended claims.
The terms used in the application are for the purpose of describing particular embodiments only and are not intended to limit the application. The singular forms "a", "said" and "the" used in the application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, third and so on may be used in the application to describe various pieces of information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the application, first information may also be called second information, and similarly, second information may also be called first information. Depending on the context, the word "if" as used here may be interpreted as "when", "while" or "in response to determining".
For ease of understanding, before the embodiments of the invention are explained in detail, the terms involved in the embodiments of the invention are first explained.
Disparity image: computed from the left and right images captured by a binocular camera at the same moment. Of the two images, one serves as the reference image and the other as the comparison image. Each pixel in the comparison image is matched with the pixel that has the same y-coordinate in the reference image, and the difference in x-coordinate between each pair of matched pixels is calculated; this difference is the disparity value between the two pixels. Taking the disparity value as the pixel value of the corresponding pixel in the reference image yields a disparity image of the same size as the reference image.
V-disparity map: computed from the disparity image by horizontal compression, keeping the number of rows of the disparity map. Specifically, the ordinate of the disparity image is kept unchanged and the abscissa becomes the disparity value; the pixel value at each point (x1, y1) of the V-disparity map is the total number of pixels with disparity value x1 among the pixels of row y1 of the disparity image.
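The construction just described can be sketched in a few lines of Python. This is an illustrative sketch rather than code from the patent; it assumes the disparity image holds small non-negative integer disparities.

```python
import numpy as np

def v_disparity(disparity, max_disp=64):
    """disparity: (H, W) integer array; returns an (H, max_disp) V-disparity map."""
    h, _ = disparity.shape
    v_disp = np.zeros((h, max_disp), dtype=np.int32)
    for row in range(h):
        # keep only disparities inside the histogram range
        valid = disparity[row][(disparity[row] >= 0) & (disparity[row] < max_disp)]
        # count how many pixels of this row take each disparity value
        v_disp[row] = np.bincount(valid.astype(np.int64), minlength=max_disp)[:max_disp]
    return v_disp
```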
A lane departure warning system (Lane Departure Warning System, LDWS) is an important component of the field of automotive safety-assisted driving; by issuing alarms it can help the driver reduce or even avoid traffic accidents caused by lane departure. Lane line detection and recognition is an important link in the workflow of a lane departure warning system, and the accuracy of the detection result directly affects the processing result of the lane departure warning system.
Next, the application scenarios involved in the embodiments of the present invention are introduced.
With the development of urbanization and the popularization of automobiles, traffic problems have become increasingly prominent, requiring automobiles not only to be safe but also to have a certain degree of intelligence. On this basis, people have begun to study driver assistance systems whose ultimate goal is unmanned, fully automatic and safe driving. In current driver assistance systems, road-condition images collected by radar, sensors or cameras can be processed by image processing and computer vision techniques, pedestrians and obstacles ahead can be predicted from the road-condition images, and, in case of potential danger, the driver can be warned or the vehicle braked urgently. Among these functions, the lane departure warning system is extremely important in driver assistance, and a wrong lane line detection result will cause false alarms.
As mentioned in the background above, existing lane line detection techniques usually binarize the captured road image and then use Hough line detection to recognize lane lines in the road image by means of their straight-line characteristics. In practical applications, however, there can be many interfering objects on the road, such as vehicles, fences and curb stones; because some of their pixel values exceed the set binarization threshold, part of the interfering objects' pixels are retained during binarization, and these pixels may interfere with lane line detection.
For example, Fig. 1 is an example of a binarized road image captured by an in-vehicle camera. As shown in Fig. 1, the white pixels in the dashed box 101 are some of the pixels of a vehicle body, mostly the chassis and tires, and these pixels can also be fitted into oblique lines by Hough line detection; such oblique lines will interfere with the oblique lines fitted to the real lane lines. Fig. 2 shows the candidate lane lines obtained from Fig. 1 by Hough line detection. As shown in Fig. 2, after Hough line detection the candidate lane lines obtained in the binary image are the oblique lines 201, 202, 203 and 204; clearly, the vehicle-body part, which is not a lane line, has also been misdetected as a lane line (oblique line 204). In such cases the prior art generally has two ways of excluding the interference of oblique line 204 so as to determine the target lane lines among the candidate lane lines, as follows:
Mode one: based on an image captured by a monocular camera, whether an interfering straight line exists can be determined in the image from geometric relations between the detected candidate lane lines, such as angles, distances and intersection positions. When an interfering straight line happens to satisfy such geometric relations, however, the prior art cannot reject it, making lane line detection inaccurate.
Mode two: based on the images captured by a binocular camera, the disparity map of the image can be obtained by a stereo matching algorithm, and the disparity values of the white pixels on each of the above four candidate lane lines can be counted to obtain the disparity distribution of each candidate lane line. As shown in Fig. 3, broken line 301 is the disparity distribution corresponding to oblique line 201 of Fig. 2, broken line 302 corresponds to oblique line 202, broken line 303 corresponds to oblique line 203, and broken line 304 corresponds to oblique line 204. Observing the four broken lines, it can be seen that broken line 304 fluctuates markedly more than the other three, and by experience it can be judged that the candidate lane line corresponding to broken line 304 is an interfering oblique line. However, this way of excluding interfering lane lines can only be judged visually and is highly arbitrary.
Therefore, lane line detection based only on a binary image cannot quickly and accurately exclude interference, and the detected lane lines are inaccurate. On this basis, the application provides a lane line detection method to avoid, as far as possible, the interference of road obstacles with lane line detection and to improve the accuracy of the lane line detection result.
The lane line detection methods provided by the application are explained through the following embodiments.
Embodiment one:
Referring to Fig. 4, a flowchart of one embodiment of the lane line detection method of the application, the method includes the following steps:
Step S201: obtain an image to be detected and the V-disparity map corresponding to the image, and determine the candidate lane lines in the image and the ground relation line in the V-disparity map.
Specifically, the vehicle usually carries a binocular camera for image acquisition; the binocular camera can be mounted at the front of the vehicle on its longitudinal axis, and after being mounted on the vehicle it can be calibrated. While the vehicle is moving, the binocular camera collects, simultaneously through its left and right cameras, images of objects including the continuous lane markings; the image collected by the left camera may be called the left image and that collected by the right camera the right image. The left image may serve as the reference image with the right image as the comparison image, or the right image may serve as the reference image with the left image as the comparison image.
After the binocular camera has collected the images, it can send them to the terminal; the terminal can process the images to obtain a disparity image and then compute the V-disparity map from the disparity image. For the specific steps of obtaining the disparity map and the V-disparity map, reference can be made to the prior art; they are not detailed here.
It is worth noting that there are mapping relations between the received images and the computed disparity maps and V-disparity maps. When the terminal receives an image, it computes the disparity map and V-disparity map of each frame and stores them in the memory, and in subsequent computation the target disparity map and target V-disparity map can be determined according to these mapping relations.
Optionally, in the embodiments of the application, the grayscale image of the road image collected by the camera may be used as the image to be detected, or a region of interest may be delimited on the grayscale image and the partial image corresponding to the region of interest used as the image to be detected; the application places no restriction on this. For the case where the partial image corresponding to the region of interest is used as the image to be detected, those skilled in the art will understand that the region of interest can be determined on the grayscale image of the road image in various ways: for example, the region of interest can be framed on the grayscale image by manual selection; as another example, the region of interest can be cut from the grayscale image at a preset height ratio (such as the lower 3/4 part); as a further example, the road vanishing point can be determined and the part of the grayscale image below the vanishing point taken as the region of interest. The application does not limit the specific procedure for determining the region of interest.
After the image to be detected is obtained and its corresponding V-disparity map determined, straight-line detection is carried out in the image to be detected and the ground relation line is determined in the V-disparity map; the prior art can be used to determine the lane lines in the image to be detected and the ground relation line in the V-disparity map, without limitation here.
Optionally, lane lines are generally white or yellow, with large gray values, while the road surface is close to black, with small gray values; therefore the edges of the lane lines can be detected using gradient information, for example with first-order differences, the Roberts operator, the Sobel operator, the Laplacian operator or the Canny operator, which are not introduced one by one here. The image to be detected is processed by an edge detection operator to obtain a binary image.
Next, straight lines are detected in the binary image by the Hough transform, obtaining the candidate lane lines. Specifically, the basic principle of the Hough transform is to map the pixels of image space into a parameter space, count the collinear points in the parameter space, and finally decide by thresholding whether they form a qualifying straight line. In a rectangular coordinate system, a straight line is defined in the form of formula (1):
y = mx + b    (1)
where m is the slope and b is the intercept on the y-axis; once m and b are determined, the line is uniquely identified. If a vertical line exists in the image, the parameter m becomes infinite. Therefore another parameter-space scheme is used: the line is described with polar parameters rather than in slope-intercept form, and it can then be expressed in the form of formula (2):
ρ = x·cosθ + y·sinθ    (2)
where ρ is the Euclidean distance from the origin to the line and θ is the angle between the perpendicular from the origin to the line and the x-axis. Treating ρ and θ as orthogonal coordinates, the (ρ, θ) plane is called Hough space, with abscissa θ and ordinate ρ, yielding the H matrix. A point in the rectangular coordinate system corresponds to a sinusoid in Hough space. A straight line consists of innumerable points, that is, of innumerable sinusoids in Hough space, but these sinusoids intersect at one point (ρ0, θ0); substituting this point back into the line equation yields the slope and intercept of the line, which determines a unique straight line. Based on this principle, when straight lines are recognized with the Hough transform, the lines representing lane lines are detected by finding local maxima in Hough space. Taking Fig. 2 as an example, suppose 4 candidate lane lines have been obtained in the image to be detected.
Next, the ground relation line in the V-disparity map can be determined with the prior art, for example the least squares method or the RANSAC (RANdom SAmple Consensus) algorithm. Optionally, the straight line where the ground lies is extracted in the V-disparity map with the RANSAC algorithm. The specific steps include: setting a parametric model and regarding the points in the data that fit the parametric model as inliers; estimating the desired parametric model from a randomly chosen subset of the data used as inliers and verifying it against the remaining points; and evaluating the model by estimating the error rate of the inliers against the model. After a fixed number of iterations, the model generated in each round is either rejected because it has too few inliers or selected because it is better than the existing model, and a more accurate model is finally obtained. As shown in Fig. 5, the points in the V-disparity map are fitted to obtain the ground relation line 501.
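A minimal, hand-rolled RANSAC line fit over the bright points of the V-disparity map could look as follows; the iteration count, inlier tolerance and point-count threshold are illustrative assumptions, not parameters from the patent.

```python
import numpy as np

def fit_ground_line(v_disp, min_count=20, iters=200, tol=1.5, seed=0):
    """Returns (a, b) so that row y relates to ground disparity d by d = a*y + b."""
    rng = np.random.default_rng(seed)
    ys, ds = np.nonzero(v_disp >= min_count)          # candidate ground points
    best_inliers, best_model = 0, (0.0, 0.0)
    if len(ys) < 2:
        return best_model
    for _ in range(iters):
        i, j = rng.choice(len(ys), size=2, replace=False)
        if ys[i] == ys[j]:
            continue                                   # cannot fit a line through one row
        a = (ds[i] - ds[j]) / (ys[i] - ys[j])
        b = ds[i] - a * ys[i]
        inliers = np.abs(ds - (a * ys + b)) <= tol     # points close to this candidate line
        if inliers.sum() > best_inliers:
            best_inliers, best_model = int(inliers.sum()), (a, b)
    return best_model
```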
For the parts not described in full detail above about determining the candidate lane lines in the image to be detected with the Hough transform and determining the ground relation line with the RANSAC method, those skilled in the art may refer to the relevant descriptions of the prior art, which the application does not repeat here; correspondingly, for the detailed procedures of determining the candidate lane lines and the ground relation line with other methods, those skilled in the art may also refer to the relevant descriptions of the prior art, which the application likewise does not repeat.
Step S202: determine the second pixel located on the ground relation line in the same row as a first pixel on a candidate lane line, and, if the absolute difference between the first disparity value of the first pixel and the second disparity value of the second pixel satisfies a first preset condition, determine the first pixel to be an effective pixel.
Since lane lines rest on the ground while interfering oblique lines are contour lines of obstacles at a certain height above the ground, the interfering lines that are not on the road can, on this principle, be rejected among the candidate lane lines, solving the lane-line-detection inaccuracy problem of the prior art mentioned above. Two ways of determining effective pixels are given below:
Mode one: the first preset condition is that the absolute difference between the first disparity value of the first pixel and the second disparity value of the second pixel is less than or equal to a first difference. Those skilled in the art will understand that the first difference is preset empirically, for example 2. Suppose the disparity value of a pixel on the candidate lane line is D, the disparity value of the ground relation line in its row is d, and the first difference is T; whether the pixel on the candidate lane line is an effective pixel is judged by formula (3), and if it is an effective pixel its flag is set to 1, so that the number of pixels with flag = 1 on each candidate lane line can finally be counted:
|D - d| ≤ T, flag = 1    (3)
Illustratively, taking Fig. 2 and Fig. 5 as an example, where the coordinate system takes the upper-left corner of Fig. 5 as the origin, the horizontal axis represents the disparity value and the vertical axis the row number: mark a point A on the candidate lane line 204 of Fig. 2; point A corresponds to point A' in the V-disparity map of Fig. 5. Next, determine the point B on the ground relation line with the same ordinate as point A', i.e. point A' and point B are in the same row of the V-disparity map, and calculate the absolute difference between the disparity values of point A' and point B. If this absolute difference is less than or equal to the first difference T, point A of the candidate lane line is an effective pixel; otherwise point A is an ineffective pixel. Repeating the above steps, the effective pixels on the candidate lane line are judged one by one.
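The per-pixel check of formula (3) is then a one-liner; the default first difference T = 2 below follows the example above.

```python
def is_effective(D, d, T=2):
    """D: disparity of the candidate lane-line pixel; d: ground-relation-line disparity in the same row."""
    return abs(D - d) <= T   # flag = 1 exactly when the condition of formula (3) holds
```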
Mode two: divide the V-disparity map into multiple sub-V-disparity maps along the direction of increasing disparity; in each sub-V-disparity map, determine whether the absolute difference between the first disparity value of the first pixel and the second disparity value of the second pixel is less than or equal to a second difference, where the second difference increases along the direction of increasing disparity.
Because the ground relation line fluctuates in a "larger when near, smaller when far" pattern, the V-disparity map is divided into multiple sub-V-disparity maps: sub-V-disparity maps with small disparity values correspond to the distant region of the image, where the line fluctuates less, so a smaller second difference is set for them; sub-V-disparity maps with large disparity values correspond to the near region of the image, where the line fluctuates more, so a larger second difference is set for them. Whether a pixel on the candidate lane line is an effective pixel is judged by formula (4), which applies the check of formula (3) with a band-specific threshold, |D - d| ≤ t_i, where t_i denotes the second difference set for the i-th (i = 1, 2, ...) sub-V-disparity map; the 1st sub-V-disparity map represents the most distant region, so t_1 < t_2 < ... < t_i.
Illustratively, suppose the V-disparity map is evenly divided into two sub-V-disparity maps, as shown in Fig. 6. For convenience, the two sub-V-disparity maps are named the first sub-map 61 (the farther region) and the second sub-map 62 (the near region); the second difference set for the first sub-map 61 is named difference one (smaller, e.g. 2) and the second difference set for the second sub-map 62 is named difference two (larger, e.g. 11). Then, in the first sub-map 61, it is judged whether the absolute difference between the disparity value of the point corresponding to the candidate lane line and that of the point on the ground relation line in the same row is less than or equal to 2; the points between dashed line 611 and dashed line 612 in Fig. 6 are the points satisfying the condition in the first sub-map 61. In the second sub-map 62, it is judged whether the absolute difference between the disparity value of the point corresponding to the candidate lane line and that of the point on the ground relation line in the same row is less than or equal to 11; the points between dashed line 621 and dashed line 622 in Fig. 6 are the points satisfying the condition in the second sub-map 62.
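A sketch of the same check with band-specific thresholds t_i; the band boundaries and threshold values are assumptions chosen only for illustration.

```python
def is_effective_banded(D, d, bands=((16, 2), (48, 6), (256, 11))):
    """bands: (upper disparity bound, threshold t_i) pairs, ordered from far (small disparity) to near."""
    for upper, t_i in bands:
        if d < upper:                     # pick the sub-V-disparity band by the ground disparity
            return abs(D - d) <= t_i
    return False
```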
Illustratively, whether the pixels on a candidate lane line are effective pixels can be recorded in the manner shown in Tables 1 and 2 below; of course, the application does not limit how effective pixels are recorded or stored, and those skilled in the art can choose flexibly:
Table 1 Judging effective pixels in the first sub-map

Row number | Disparity of ground relation line | Disparity of candidate lane line | flag
1          | 2                                 | 3                                | 1
2          | 4                                 | 7                                | 0
...        | ...                               | ...                              | ...
m          | 30                                | 32                               | 1
Table 2 Judging effective pixels in the second sub-map

Row number | Disparity of ground relation line | Disparity of candidate lane line | flag
m+1        | 32                                | 36                               | 1
m+2        | 35                                | 43                               | 1
...        | ...                               | ...                              | ...
n          | 63                                | 76                               | 0
Step S203: if the proportion of effective pixels on the candidate lane line exceeds a preset threshold, determine the candidate lane line to be a target lane line.
Specifically, the number of pixels with flag = 1 on each candidate lane line can be determined through step S202; these pixels are the effective pixels, and the total number of pixels on each candidate lane line can also be counted. By calculating the ratio of effective pixels to total pixels, it is determined whether the candidate lane line is an interfering lane line or a target lane line; an interfering lane line needs to be deleted, while a target lane line needs to be kept.
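Putting the pieces together, step S203 can be sketched as below. The data layout (a dict mapping each candidate line to its pixel coordinates) and the ground_disp callable obtained from the fitted ground relation line are assumptions for illustration, not the patent's data structures.

```python
def filter_target_lines(candidate_lines, disparity, ground_disp, T=2, ratio_thresh=0.8):
    """candidate_lines: {line_id: [(row, col), ...]}; ground_disp: row -> ground disparity."""
    targets = []
    for line_id, pixels in candidate_lines.items():
        # count pixels whose disparity stays within T of the ground relation line (formula (3))
        effective = sum(1 for row, col in pixels
                        if abs(int(disparity[row, col]) - ground_disp(row)) <= T)
        if pixels and effective / len(pixels) > ratio_thresh:
            targets.append(line_id)       # kept as a target lane line
    return targets
```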
Illustratively, taking Fig. 2 as an example again, the proportions of effective pixels on the candidate lane lines 201-204 are determined respectively, as shown in Table 3 below:
Table 3 Effective-pixel proportions of the candidate lane lines

Candidate lane line 201 | Candidate lane line 202 | Candidate lane line 203 | Candidate lane line 204
87.2%                   | 90.3%                   | 91.5%                   | 10.2%
If the preset threshold is 80%, it can be seen from Table 3 that the effective-pixel proportions of candidate lane lines 201-203 exceed the preset threshold while that of candidate lane line 204 is below it; therefore candidate lane lines 201-203 are determined to be target lane lines, and candidate lane line 204 is determined to be an interfering lane line and should be removed.
The above is the content of embodiment one of the invention. Since lane lines rest on the ground, the disparity value of a lane line should be the same as that of the ground; accordingly, the absolute difference between the disparity value of a pixel on a candidate lane line and the disparity value of the point of the ground relation line in the same row is used to decide whether that candidate-lane-line pixel is an effective pixel, and the proportion of effective pixels on the candidate lane line is used to identify and reject the interfering lane lines among the candidate lane lines. The invention can thus improve the accuracy of lane line detection.
Embodiment two:
An embodiment of another lane line detection method is given here, which does not use the V-disparity map but judges, from the disparity values, which line in the binary map is an interfering lane line. Fig. 7 is the flowchart of the lane line detection method of embodiment two of the application, and the steps of embodiment two are described with reference to Fig. 7:
Step S301: obtain an image to be detected, determine the candidate lane lines in the image, and determine the pixels of the candidate lane lines that lie in the same row as candidate pixels.
Specifically, for determining the candidate lane lines, reference can be made to step S201 in embodiment one, which is not repeated here. As shown in Fig. 8, step S301 detects 4 candidate lane lines; a row is chosen in the binary map, giving the 4 candidate pixels K1-K4.
Step S302: according to the disparity values of the candidate pixels, determine the candidate pixels whose disparity values do not satisfy a second preset condition to be interference pixels.
Specifically, continuing the example above, the disparity values corresponding to the candidate pixels K1-K4 can be determined in the disparity map from their positions in the binary map; suppose they are 12, 12, 12 and 20. Based on the disparity values of these 4 candidate pixels, two methods of determining the interference pixel are given next:
Mode one:
Determine, according to a preset radius, a neighborhood centered on a target pixel, where the target pixel is the candidate pixel with the largest disparity value; if no other candidate pixel lies in the neighborhood apart from the target pixel, determine the target pixel to be an interference pixel.
Following the example above, among the four candidate pixels the disparity value of candidate pixel K4 is the largest, so the target pixel is K4. Suppose the preset radius is 2; the neighborhood of the target pixel is then determined as centered on the target pixel with radius 2, and the other candidate pixels K1-K3 are searched for in this neighborhood, i.e. it is judged whether the absolute difference in disparity between each other candidate pixel and the target pixel is less than 2. Clearly the candidate pixels K1-K3 do not fall in this neighborhood, so the target pixel K4 is determined to be an interference pixel and, similarly, can be marked flag = 0.
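A sketch of this neighborhood test on the candidate pixels of one row; the default radius of 2 follows the example above and is otherwise an assumption.

```python
def interference_by_neighborhood(disparities, radius=2):
    """disparities: disparity values of the candidate pixels in one row; returns the index of the interference pixel, or None."""
    target_idx = max(range(len(disparities)), key=lambda i: disparities[i])
    target = disparities[target_idx]
    # any other candidate pixel whose disparity falls inside the neighborhood of the target pixel?
    neighbours = [d for i, d in enumerate(disparities)
                  if i != target_idx and abs(d - target) < radius]
    return target_idx if not neighbours else None
```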
Mode two:
Calculate the mean and standard deviation of the disparity values of the candidate pixels, and determine the disparity-value dispersion of each candidate pixel; if the neighborhood centered on the dispersion of the largest disparity value contains no dispersion of any other disparity value, determine the candidate pixel corresponding to the largest disparity value to be an interference pixel.
Illustratively, continuing the example above, the mean of the four points is obtained by formula (5) and the standard deviation by formula (6); for the four candidate pixels above the mean is μ = 14 and the standard deviation is σ ≈ 6.9, and the dispersion of each candidate pixel is then calculated by formula (7), giving o1 ≈ -0.29, o2 ≈ -0.29, o3 ≈ -0.29 and o4 ≈ 0.86. Similarly, the largest dispersion o4 ≈ 0.86 is found, a neighborhood of radius 0.3 is determined centered on this largest dispersion, and it is judged that the dispersions of the other three candidate pixels do not fall in this neighborhood; therefore the candidate pixel corresponding to o4 ≈ 0.86 is determined to be an interference pixel and can be marked flag = 0.
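A sketch of this dispersion test; the radius 0.3 follows the example above, while the standard deviation used here is numpy's population standard deviation, which may not reproduce the σ ≈ 6.9 quoted in the example.

```python
import numpy as np

def interference_by_dispersion(disparities, radius=0.3):
    """disparities: disparity values of the candidate pixels in one row; returns the index of the interference pixel, or None."""
    x = np.asarray(disparities, dtype=float)
    mu, sigma = x.mean(), x.std()
    if sigma == 0:
        return None                       # all candidates agree; nothing to reject
    o = (x - mu) / sigma                  # per-pixel dispersion
    k = int(np.argmax(o))                 # pixel with the largest dispersion (largest disparity)
    neighbours = [v for i, v in enumerate(o) if i != k and abs(v - o[k]) < radius]
    return k if not neighbours else None
```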
Step S303: if the proportion of interference pixels on a candidate lane line is less than a preset ratio, determine the candidate lane line to be a target lane line.
Specifically, the interference pixels on each candidate lane line can be determined by step S302, so the proportion of interference pixels on each candidate lane line can be determined; if this proportion is less than a preset ratio, for example 10%, the candidate lane line is determined to be a target lane line.
The above are the implementation steps of embodiment two of the application. Since normal lane lines all lie on the road surface, the disparity values of their pixels in the same image row are all close. Starting from this point, the degree of dispersion among the candidate pixels of the candidate lane lines in the same row is used to determine whether an interference pixel exists on a candidate lane line; to exclude noise interference, a redundancy allowance is set, that is, only when the proportion of interference pixels is judged to be less than the preset ratio is the candidate lane line determined to be a target lane line. The above method can exclude the interference of vehicles on the road with lane line detection, thereby improving the accuracy of lane line detection.
Embodiment three:
Referring to Fig. 9, a block diagram of one embodiment of the lane line detection device of the application, the device may include:
a first acquisition module 901, configured to obtain an image to be detected and the V-disparity map corresponding to the image, and to determine the candidate lane lines in the image and the ground relation line in the V-disparity map;
an effective-pixel determining module 902, configured to determine the second pixel located on the ground relation line in the same row as a first pixel on a candidate lane line and, if the absolute difference between the first disparity value of the first pixel and the second disparity value of the second pixel satisfies a first preset condition, to determine the first pixel to be an effective pixel;
Optionally, the first preset condition is: the absolute difference between the first disparity value of the first pixel and the second disparity value of the second pixel is less than or equal to a first difference.
Optionally, the effective-pixel determining module 902 is further configured to divide the V-disparity map into multiple sub-V-disparity maps along the direction of increasing disparity and, in each sub-V-disparity map, to determine that the absolute difference between the first disparity value of the first pixel and the second disparity value of the second pixel is less than or equal to a second difference, where the second difference increases along the direction of increasing disparity.
a first target-lane-line determining module 903, configured to determine the candidate lane line to be a target lane line if the proportion of effective pixels on the candidate lane line exceeds a preset threshold.
The above is the specific introduction of embodiment three; for each module, reference can be made to the lane line detection method introduced in embodiment one.
Example IV:
Referring to Fig. 10, a block diagram of another embodiment of the lane line detection device of the application, the device may include:
a second acquisition module 1001, configured to obtain an image to be detected, to determine the candidate lane lines in the image, and to determine the pixels of the candidate lane lines that lie in the same row as candidate pixels;
an interference-pixel determining module 1002, configured to determine, according to the disparity values of the candidate pixels, the candidate pixels whose disparity values do not satisfy a second preset condition to be interference pixels.
Optionally, the interference-pixel determining module 1002 is further configured to determine, according to a preset radius, a neighborhood centered on a target pixel, where the target pixel is the candidate pixel with the largest disparity value, and, if no other candidate pixel lies in the neighborhood apart from the target pixel, to determine the target pixel to be an interference pixel.
Optionally, the interference-pixel determining module 1002 is further configured to calculate the mean and standard deviation of the disparity values of the candidate pixels, to determine the disparity-value dispersion of each candidate pixel, and, if the neighborhood centered on the dispersion of the largest disparity value contains no dispersion of any other disparity value, to determine the candidate pixel corresponding to the largest disparity value to be an interference pixel.
a second target-lane-line determining module 1003, configured to determine the candidate lane line to be a target lane line if the proportion of interference pixels on the candidate lane line is less than a preset ratio.
For the functions of the units of the above device and the implementation of their effects, reference can be made to the implementation of the corresponding steps in the above methods, which is not repeated here.
The embodiments of the lane line detection device of the application can be applied to a lane line detection terminal. The device embodiments can be implemented by software, or by hardware or a combination of hardware and software. Taking software implementation as an example, a device in the logical sense is formed by the processor of the lane line detection terminal where it is located reading the corresponding computer program instructions from the non-volatile memory into memory and running them.
Embodiment five:
As shown in Fig. 11, a hardware structure diagram of a lane line detection terminal of embodiment five of the application: the processor 1101 is the control center of the lane line detection device 1100; it connects the various parts of the whole lane line detection device through various interfaces and lines and, by running or executing the software programs and/or modules stored in the memory 1102 and calling the data stored in the memory 1102, performs the various functions of the lane line detection device 1100 and processes data, thereby monitoring the lane line detection device as a whole.
Optionally, the processor 1101 may include one or more processing cores (not shown in Fig. 11); optionally, the processor 1101 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs and the like, and the modem processor mainly handles wireless communication. It will be understood that the modem processor may also not be integrated into the processor 1101.
The memory 1102 may be used to store software programs and modules, and the processor 1101 executes various functional applications and data processing by running the software programs and modules stored in the memory 1102. The memory 1102 mainly includes a program storage area and a data storage area (not shown in Fig. 11), where the program storage area may store the operating system, the application programs required by at least one function, and the like, and the data storage area may store data created according to the use of the lane line detection device 1100 (such as collected images, computed disparity images or processed grayscale images).
In addition, the memory 1102 may include high-speed random access memory (not shown in Fig. 11) and may also include non-volatile memory (not shown in Fig. 11), for example at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. Correspondingly, the memory 1102 may also include a memory controller (not shown in Fig. 11) to provide the processor 1101 with access to the memory 1102.
In some embodiments, the device 1100 optionally further includes a peripheral device interface 1103 and at least one peripheral device. The processor 1101, the memory 1102 and the peripheral device interface 1103 can be connected by a communication bus or signal lines (not shown in Fig. 11). Each peripheral device can be connected to the peripheral device interface 1103 by a communication bus or signal line. Specifically, the peripheral devices include at least one of a radio-frequency component 1104, a touch display screen 1105, a camera assembly 1106, an audio component 1107, a positioning component 1108 and a power supply component 1109.
The camera assembly 1106 is used to capture the image to be detected. Optionally, the camera assembly 1106 may include at least two cameras. In some embodiments, the at least two cameras may be the left and right cameras of a binocular camera, respectively.
In some embodiments, the camera assembly 1106 may further include a flash lamp. The flash lamp may be a single-color-temperature flash lamp or a dual-color-temperature flash lamp. A dual-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation under different color temperatures.
Besides the hardware shown in Fig. 11, the lane line detection terminal in which the device of the embodiments resides may generally also include other hardware according to its actual functions, which is not repeated here.
Those skilled in the art will understand that the lane line detection terminal shown in Fig. 11 can be applied to an automobile and can also be applied to other devices such as computers and smartphones; the application places no restriction on this.
The application also provides a computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, implements the steps of any lane line detection method provided by the embodiments of the application.
As for the device embodiments, since they basically correspond to the method embodiments, for the relevant parts reference can be made to the partial descriptions of the method embodiments. The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. Those of ordinary skill in the art can understand and implement this without creative effort.
The above are only the preferred embodiments of the application and are not intended to limit the application; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the application shall be included in the scope of protection of the application.

Claims (10)

1. A lane line detection method, characterized in that the method includes:
obtaining an image to be detected and the V-disparity map corresponding to the image, and determining the candidate lane lines in the image and the ground relation line in the V-disparity map;
determining the second pixel located on the ground relation line in the same row as a first pixel on a candidate lane line, and, if the absolute difference between the first disparity value of the first pixel and the second disparity value of the second pixel satisfies a first preset condition, determining the first pixel to be an effective pixel;
if the proportion of effective pixels on the candidate lane line exceeds a preset threshold, determining the candidate lane line to be a target lane line.
2. The method according to claim 1, characterized in that the first preset condition is:
the absolute difference between the first disparity value of the first pixel and the second disparity value of the second pixel is less than or equal to a first difference.
3. The method according to claim 1, characterized in that, if the absolute difference between the first disparity value of the first pixel and the second disparity value of the second pixel satisfies the first preset condition, determining the first pixel to be an effective pixel specifically includes:
dividing the V-disparity map into multiple sub-V-disparity maps along the direction of increasing disparity;
in each sub-V-disparity map, determining that the absolute difference between the first disparity value of the first pixel and the second disparity value of the second pixel is less than or equal to a second difference, where the second difference increases along the direction of increasing disparity.
4. A lane line detection method, characterized in that the method comprises:
obtaining an image to be detected, determining a candidate lane line in the image, and determining the pixels located in a same row on the candidate lane line as candidate pixels;
determining, according to the disparity values of the candidate pixels, the candidate pixels whose disparity values do not satisfy a second preset condition to be interference pixels; and
if the proportion of interference pixels on the candidate lane line is less than a preset ratio, determining the candidate lane line to be a target lane line.
5. The method according to claim 4, characterized in that determining, according to the disparity values of the candidate pixels, the candidate pixels whose disparity values do not satisfy the second preset condition to be interference pixels specifically comprises:
determining, according to a preset radius, a neighborhood centered on a target pixel, wherein the target pixel is the candidate pixel corresponding to the maximum disparity value; and
if the neighborhood contains no candidate pixel other than the target pixel, determining the target pixel to be an interference pixel.
6. The method according to claim 4, characterized in that determining, according to the disparity values of the candidate pixels, the candidate pixels whose disparity values do not satisfy the second preset condition to be interference pixels specifically comprises:
calculating the mean and the standard deviation of the disparity values of the candidate pixels, and determining the dispersion of the disparity value of each candidate pixel; and if the neighborhood centered on the dispersion of the disparity value corresponding to the maximum disparity value contains no dispersion corresponding to any other disparity value, determining the candidate pixel corresponding to the maximum disparity value to be an interference pixel.
7. A lane line detection device, characterized in that the device comprises:
a first acquisition module, configured to obtain an image to be detected and a V-disparity map corresponding to the image, and to determine a candidate lane line in the image and a ground correlation line in the V-disparity map;
an effective pixel determination module, configured to determine a second pixel that is located on the ground correlation line and in the same row as a first pixel on the candidate lane line, and, if the absolute difference between a first disparity value of the first pixel and a second disparity value of the second pixel satisfies a first preset condition, to determine the first pixel to be an effective pixel; and
a first target lane line determination module, configured to determine the candidate lane line to be a target lane line if the proportion of effective pixels on the candidate lane line is greater than a preset threshold.
8. A lane line detection device, characterized in that the device comprises:
a second acquisition module, configured to obtain an image to be detected, determine a candidate lane line in the image, and determine the pixels located in a same row on the candidate lane line as candidate pixels;
an interference pixel determination module, configured to determine, according to the disparity values of the candidate pixels, the candidate pixels whose disparity values do not satisfy a second preset condition to be interference pixels; and
a second target lane line determination module, configured to determine the candidate lane line to be a target lane line if the proportion of interference pixels on the candidate lane line is less than a preset ratio.
9. A lane line detection terminal, characterized by comprising a memory, a processor, a communication interface, a camera assembly, and a communication bus;
wherein the memory, the processor, the communication interface, and the camera assembly communicate with one another via the communication bus;
the camera assembly is configured to acquire an image to be detected and to send the image to be detected to the processor via the communication bus;
the memory is configured to store a computer program; and
the processor is configured to execute the computer program stored in the memory, and when executing the computer program the processor implements, for the image to be detected, the steps of the method according to any one of claims 1 to 6.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the method according to any one of claims 1 to 6 are implemented.
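For illustration only, the consistency check recited in claims 1 and 2 can be sketched in Python as follows. This is a minimal sketch, not the patented implementation: the names candidate_pixels and ground_disparity_at_row, and the numeric values standing in for the claimed "first difference" and "preset threshold", are assumptions introduced for the example.

# Hypothetical sketch of the effective-pixel check of claims 1-2.
# candidate_pixels: iterable of (row, col, disparity) for the candidate lane line.
# ground_disparity_at_row: callable mapping an image row to the disparity of the
# ground correlation line in the V-disparity map for that row.
def is_target_lane_line(candidate_pixels, ground_disparity_at_row,
                        first_difference=2.0, preset_threshold=0.8):
    candidate_pixels = list(candidate_pixels)
    if not candidate_pixels:
        return False
    effective = 0
    for row, _col, disparity in candidate_pixels:
        # Second pixel: the point of the ground correlation line in the same row.
        ground_disparity = ground_disparity_at_row(row)
        # First preset condition (claim 2): |d1 - d2| <= first difference.
        if abs(disparity - ground_disparity) <= first_difference:
            effective += 1
    # Claim 1: accept the candidate when effective pixels dominate.
    return effective / len(candidate_pixels) > preset_threshold

A lane-line pixel is treated here as effective when its disparity is close to the road-surface disparity of the same image row, which is the consistency test that claims 1 and 2 describe.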
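Claim 3 relaxes this tolerance band by band: the V-disparity map is divided along the disparity axis, and the allowed difference (the "second difference") grows in the direction of increasing disparity. A minimal sketch of one possible reading follows; the band edges and tolerance values below are placeholders, not values taken from the patent.

# Hypothetical banding of the disparity axis for claim 3.
# Each entry is (min_disparity, max_disparity, second_difference); the second
# difference grows in the direction of increasing disparity values.
DISPARITY_BANDS = [
    (0.0, 16.0, 1.0),
    (16.0, 32.0, 2.0),
    (32.0, 64.0, 3.0),
]

def is_effective_pixel(first_disparity, second_disparity):
    # Look up the sub-V-disparity map (band) containing the ground-line disparity
    # and apply that band's tolerance.
    for low, high, second_difference in DISPARITY_BANDS:
        if low <= second_disparity < high:
            return abs(first_disparity - second_disparity) <= second_difference
    return False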
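Claim 5 flags the maximum-disparity candidate pixel as an interference pixel when a neighborhood of preset radius around it contains no other candidate pixel. The sketch below assumes the neighborhood is taken along the disparity axis, which is one plausible reading of the claim; the radius value is a placeholder.

# Hypothetical sketch of claim 5: the maximum-disparity candidate is an
# interference pixel if no other candidate disparity lies within the preset
# radius of it.
def find_interference_index(candidate_disparities, preset_radius=2.0):
    if not candidate_disparities:
        return None
    target = max(range(len(candidate_disparities)),
                 key=lambda i: candidate_disparities[i])
    for i, disparity in enumerate(candidate_disparities):
        if i == target:
            continue
        if abs(disparity - candidate_disparities[target]) <= preset_radius:
            return None  # another candidate pixel lies in the neighborhood
    return target  # isolated maximum-disparity pixel -> interference pixel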
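Claim 6 replaces the fixed radius with a statistical criterion. Reading "dispersion" as the standardised deviation (z-score) of each disparity value from the mean, which is an assumption rather than something the claim states, a minimal sketch is:

# Hypothetical sketch of claim 6, using the z-score of each disparity value as
# its "dispersion". The neighborhood half-width is a placeholder value.
import statistics

def find_interference_index_by_dispersion(candidate_disparities, half_width=1.0):
    if len(candidate_disparities) < 2:
        return None
    mean = statistics.mean(candidate_disparities)
    std = statistics.pstdev(candidate_disparities) or 1e-6  # avoid divide-by-zero
    dispersions = [(d - mean) / std for d in candidate_disparities]
    target = max(range(len(candidate_disparities)),
                 key=lambda i: candidate_disparities[i])
    # Interference pixel if no other dispersion falls in the neighborhood of the
    # maximum-disparity pixel's dispersion.
    isolated = all(abs(dispersions[i] - dispersions[target]) > half_width
                   for i in range(len(dispersions)) if i != target)
    return target if isolated else None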
CN201810024993.4A 2018-01-11 2018-01-11 Lane line detection method, device and terminal Active CN108229406B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810024993.4A CN108229406B (en) 2018-01-11 2018-01-11 Lane line detection method, device and terminal

Publications (2)

Publication Number Publication Date
CN108229406A (en) 2018-06-29
CN108229406B CN108229406B (en) 2022-03-04

Family

ID=62640556

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810024993.4A Active CN108229406B (en) 2018-01-11 2018-01-11 Lane line detection method, device and terminal

Country Status (1)

Country Link
CN (1) CN108229406B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679121A (en) * 2012-09-14 2014-03-26 株式会社理光 Method and system for detecting roadside using visual difference image
CN103679127A (en) * 2012-09-24 2014-03-26 株式会社理光 Method and device for detecting drivable area of road pavement
CN104166834A (en) * 2013-05-20 2014-11-26 株式会社理光 Pavement detection method and pavement detection device
CN104376297A (en) * 2013-08-12 2015-02-25 株式会社理光 Detection method and device for linear indication signs on road
CN105975957A (en) * 2016-05-30 2016-09-28 大连理工大学 Lane-line-edge-based road plane detection method

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109145751A (en) * 2018-07-23 2019-01-04 安徽淘云科技有限公司 Page turning detection method and device
CN109711242A (en) * 2018-10-31 2019-05-03 百度在线网络技术(北京)有限公司 Modification method, device and the storage medium of lane line
CN109583327A (en) * 2018-11-13 2019-04-05 青岛理工大学 A kind of binocular vision wheat seeding trace approximating method
CN109583418A (en) * 2018-12-13 2019-04-05 武汉光庭信息技术股份有限公司 A kind of lane line deviation automatic correcting method and device based on parallel relation
CN109583418B (en) * 2018-12-13 2021-03-12 武汉光庭信息技术股份有限公司 Lane line deviation self-correction method and device based on parallel relation
CN111316337A (en) * 2018-12-26 2020-06-19 深圳市大疆创新科技有限公司 Method and equipment for determining installation parameters of vehicle-mounted imaging device and controlling driving
CN111738034A (en) * 2019-03-25 2020-10-02 杭州海康威视数字技术股份有限公司 Method and device for detecting lane line
CN111738034B (en) * 2019-03-25 2024-02-23 杭州海康威视数字技术股份有限公司 Lane line detection method and device
WO2021120574A1 (en) * 2019-12-19 2021-06-24 Suzhou Zhijia Science & Technologies Co., Ltd. Obstacle positioning method and apparatus for autonomous driving system
CN111460072A (en) * 2020-04-01 2020-07-28 北京百度网讯科技有限公司 Lane line detection method, apparatus, device, and storage medium
CN111460072B (en) * 2020-04-01 2023-10-03 北京百度网讯科技有限公司 Lane line detection method, device, equipment and storage medium
CN113139399A (en) * 2021-05-13 2021-07-20 阳光电源股份有限公司 Image line frame identification method and server
CN113139399B (en) * 2021-05-13 2024-04-12 阳光电源股份有限公司 Image wire frame identification method and server

Also Published As

Publication number Publication date
CN108229406B (en) 2022-03-04

Similar Documents

Publication Publication Date Title
CN108229406A (en) A kind of method for detecting lane lines, device and terminal
CN108629292B (en) Curved lane line detection method and device and terminal
CN116912793A (en) Pavement identification method and device
CN110490936B (en) Calibration method, device and equipment of vehicle camera and readable storage medium
CN107609483B (en) Dangerous target detection method and device for driving assistance system
CN112598922B (en) Parking space detection method, device, equipment and storage medium
CN109191513B (en) Power equipment stereo matching method based on global optimization
CN113344986B (en) Point cloud registration result evaluation method, device, equipment and storage medium
CN111243003B (en) Vehicle-mounted binocular camera and method and device for detecting road height limiting rod
CN110610137B (en) Method and device for detecting vehicle running state, electronic equipment and storage medium
CN108319931B (en) Image processing method and device and terminal
CN110163039A (en) Determine method, equipment, storage medium and the processor of vehicle running state
CN110555885A (en) calibration method and device of vehicle-mounted camera and terminal
CN112348869A (en) Method for recovering monocular SLAM scale through detection and calibration
CN114821497A (en) Method, device and equipment for determining position of target object and storage medium
EP3082069A1 (en) Stereoscopic object detection device and stereoscopic object detection method
KR101977291B1 (en) Tire abrasion mesuring apparatus, method and computer redable recording medium
CN103544495A (en) Method and system for recognizing of image categories
CN112183206A (en) Traffic participant positioning method and system based on roadside monocular camera
CN110992291A (en) Distance measuring method, system and storage medium based on trinocular vision
CN108399357B (en) Face positioning method and device
CN109359632A (en) Highway sideline detection method and device
CN113450335B (en) Road edge detection method, road edge detection device and road surface construction vehicle
KR101714131B1 (en) Device and method for recognizing parking stall
CN108416305B (en) Pose estimation method and device for continuous road segmentation object and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant