CN106327466B - The detection method and device of lane segmentation object - Google Patents


Info

Publication number
CN106327466B
CN106327466B (application CN201510354626.7A)
Authority
CN
China
Prior art keywords
point
road
parallax
disparity map
lane segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510354626.7A
Other languages
Chinese (zh)
Other versions
CN106327466A (en)
Inventor
游赣梅
鲁耀杰
刘殿超
师忠超
王刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ricoh Co Ltd filed Critical Ricoh Co Ltd
Priority to CN201510354626.7A priority Critical patent/CN106327466B/en
Priority to JP2016121582A priority patent/JP6179639B2/en
Publication of CN106327466A publication Critical patent/CN106327466A/en
Application granted granted Critical
Publication of CN106327466B publication Critical patent/CN106327466B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256Lane; Road marking

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Measurement Of Optical Distance (AREA)
  • Traffic Control Systems (AREA)

Abstract

A detection method and a detection device for lane segmentation objects are provided: obtaining a disparity map and a road vanishing point of a road region; coordinate-transforming the disparity points in the disparity map based on the road vanishing point; and detecting lane segmentation objects from the coordinate-transformed disparity points. The detection method and detection device for lane segmentation objects of the invention can coordinate-transform the disparity points in the original disparity map based on the road vanishing point, so as to detect lane segmentation objects from the transformed disparity points.

Description

Detection method and device for lane segmentation objects
Technical field
The present invention relates to the field of object detection, and more particularly to a method and a device for detecting lane segmentation objects.
Background art
Driver assistance systems are becoming increasingly widespread. A lane/road departure warning system (LDW/RDW) is a subsystem of a driver assistance system that helps avoid collisions and determine the steering direction more accurately. Road or lane detection is crucial for an LDW/RDW system: only on the basis of known road information can further processing, such as warning, be performed. Roads or lanes are generally detected by detecting lane segmentation objects.
Lane segmentation objects include lane lines, curbstones, fences, and other objects on the road that can identify the road region and the lanes; lane lines may in turn include white lines, yellow lines, and so on.
In conventional methods, lane-line detection can be performed using a disparity map. For example, Fig. 1 shows the disparity map of an example target scene containing a road region. However, in some cases, as indicated by the elliptical frame in Fig. 1, a sparse disparity map contains very few disparity pixels representing road boundaries or lane segmentation objects, so performing road detection directly on the disparity map may be relatively difficult. Moreover, in actual road conditions, roads are not always straight; various curves and detours are common, while most current detection methods target straight roads and generally cannot handle non-straight roads such as the curved road shown in Fig. 1.
Summary of the invention
In view of the above problems, it is desirable to provide a method and a device that can accurately detect lane segmentation objects and that can detect non-straight roads and their lane segmentation objects.
According to one aspect of the invention, a detection method for lane segmentation objects is provided. The method may include: obtaining a disparity map and a road vanishing point of a road region; coordinate-transforming the disparity points in the disparity map based on the road vanishing point; and detecting lane segmentation objects from the coordinate-transformed disparity points.
In one embodiment, detecting lane segmentation objects from the coordinate-transformed disparity points may include: detecting, in the three-dimensional space corresponding to the coordinate-transformed disparity points, the points belonging to lane segmentation objects; and, according to the inverse of the coordinate transform, obtaining the points of the lane segmentation objects in the disparity map that correspond to the points of the lane segmentation objects detected in the three-dimensional space, thereby obtaining the lane segmentation objects.
In another embodiment, obtaining the road vanishing point may include: dividing the road region in the disparity map into multiple blocks based on the distance information of the disparity map; and obtaining the road vanishing point of each block.
In another embodiment, coordinate-transforming the disparity points in the disparity map based on the road vanishing point may include: moving the vanishing points of the blocks other than a first block among the multiple blocks to the road vanishing point of the first block, where the first block is the nearest block in the disparity map among the multiple blocks; and coordinate-transforming the disparity points in each of the other blocks so that the horizontal distance and vertical distance from a disparity point before the transform to the road vanishing point of its block before the transform are respectively equal to the horizontal distance and vertical distance from the corresponding transformed disparity point to the road vanishing point of the transformed block.
In another embodiment, detecting the points belonging to lane segmentation objects in the three-dimensional space corresponding to the coordinate-transformed disparity points may include: obtaining the distribution of the transformed disparity points in three-dimensional space based on the relative positions of the transformed disparity points and the road vanishing point; projecting the disparity points in the three-dimensional space onto the plane Z = d, where d denotes the real three-dimensional distance corresponding to the disparity point with the largest disparity value among all disparity points; and detecting the points belonging to lane segmentation objects in the three-dimensional space according to the projection of the disparity points.
In another embodiment, obtaining, according to the inverse of the coordinate transform, the points of the lane segmentation objects in the disparity map corresponding to the points detected in the three-dimensional space so as to obtain the lane segmentation objects may include: based on the relative positions of the transformed disparity points and the road vanishing point, obtaining the transformed disparity points corresponding to the points belonging to the lane segmentation objects detected in the three-dimensional space, as target disparity points; obtaining, according to the inverse of the coordinate transform, the pre-transform disparity points in the disparity map corresponding to the target disparity points; and fitting the pre-transform disparity points to obtain the lane segmentation objects.
In another embodiment, detecting the points belonging to lane segmentation objects in the three-dimensional space according to the projection of the disparity points includes: selecting, among the projection points on the plane Z = d, the projection point onto which the most disparity points are projected; and obtaining all the three-dimensional disparity points projected onto the selected projection point, as the points belonging to lane segmentation objects in the three-dimensional space.
Optionally, the points belonging to lane segmentation objects in the three-dimensional space may be detected only among the projection points on the plane Z = d lying below a predetermined vertical distance.
Optionally, the lane segmentation object detection method may further include: selecting, among the projection points on the plane Z = d, the projection points onto which more disparity points are projected than a predetermined threshold; fitting a straight line through the selected projection points and taking the two projection points at the ends of the line; and determining the boundary of the road region from these two projection points.
According to another aspect of the invention, a detection device for lane segmentation objects is provided, comprising: an obtaining component configured to obtain a disparity map and a road vanishing point of a road region; a transform component configured to coordinate-transform the disparity points in the disparity map based on the road vanishing point; and a detection component configured to detect lane segmentation objects from the coordinate-transformed disparity points.
According to the invention, the disparity points in the original disparity map can be coordinate-transformed based on the road vanishing point, so that lane segmentation objects can be detected from the transformed disparity points. Thus, even if the road to be detected is not a straight road, the problem can be converted by the coordinate transform into detecting a straight road, and the road and its lane segmentation objects in the original disparity map corresponding to the straight-road detection result can then be obtained. The detection object of the invention is therefore no longer limited to straight roads. Moreover, because all disparity points in the disparity map are used in the detection, an accurate detection result can be obtained.
Brief description of the drawings
Fig. 1 is the disparity map of an example target scene containing a road region.
Fig. 2 is a schematic diagram of an exemplary on-board system serving as an application environment of the present invention.
Fig. 3 is the overall flowchart of a lane segmentation object detection method according to an embodiment of the invention.
Fig. 4 is a schematic diagram showing the segmentation of the road region in the disparity map.
Fig. 5 is a simplified schematic diagram showing multiple road vanishing points of the road region in the disparity map.
Fig. 6 is a flowchart showing the coordinate transform method for the disparity map according to one embodiment.
Fig. 7 is a simplified schematic diagram showing the coordinate transform of the disparity map.
Fig. 8 is a flowchart showing the method of detecting lane segmentation objects in the three-dimensional space corresponding to the transformed disparity points according to one embodiment.
Fig. 9 is a schematic diagram showing the conversion from the transformed disparity map to the three-dimensional disparity distribution.
Fig. 10 is a schematic diagram showing the projection of the three-dimensional disparity points onto the plane Z = d.
Fig. 11 is a schematic diagram showing the fitting of the road region cross-section from the projection points.
Fig. 12 is a schematic diagram showing the acquisition of the points belonging to lane lines in the three-dimensional space from the projection.
Fig. 13 is a flowchart showing the method of detecting the lane segmentation objects in the original disparity map based on the lane segmentation objects detected in three-dimensional space according to one embodiment.
Fig. 14 is a schematic diagram showing the acquisition of the lane segmentation objects in the original disparity map from the target disparity points in the disparity map with a single road direction.
Fig. 15 is a functional block diagram showing a lane segmentation object detection device according to another embodiment of the invention.
Fig. 16 shows the hardware configuration of a detection system for realizing lane segmentation object detection according to an embodiment of the invention.
Detailed description of embodiments
In order to enable those skilled in the art to better understand the present invention, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
The description is organized as follows:
1. Overview of the inventive concept
2. Embodiments
2.1 Overall process of lane segmentation object detection
2.2 Coordinate transform of the disparity map
2.3 Detection of lane segmentation objects
2.4 Lane segmentation object detection device
2.5 Lane segmentation object detection system
3. Summary
<1. Overview of the inventive concept>
As described above, detecting a curved road on a sparse disparity map is difficult. To this end, the present invention proposes to perform the detection using the road vanishing points in the disparity map: the disparity map of the road region is converted, according to the road vanishing points, into a real-world three-dimensional image in which the road has a single direction, so that the road with a single direction and its lane segmentation objects can be detected in that real-world three-dimensional image; then, by mapping back, the road and its lane segmentation objects in the disparity map corresponding to the road and lane segmentation objects detected in the real-world three-dimensional image are obtained.
According to the invention, the road in the disparity map is transformed using the road vanishing points into a road with a single direction, i.e. a straight road, and the lane segmentation objects are detected in the three-dimensional world with that single road direction. The road with the single direction and its segmentation objects can thereby be detected accurately, and the lane segmentation objects in the original disparity map can then be recovered according to the transform relation. Thus, the method of the invention can detect roads and their segmentation objects accurately; moreover, it is applicable to general roads, whether straight or curved.
Fig. 2 is a schematic diagram of an exemplary on-board system, helpful for understanding the invention, that serves as an application environment of the invention. The lane segmentation object detection method of the invention may be implemented in a chip (integrated circuit) therein.
<2. Embodiments>
<2.1 Overall process of lane segmentation object detection>
Fig. 3 is the overall flowchart of a lane segmentation object detection method according to an embodiment of the invention. As shown in Fig. 3, the lane segmentation object detection method 300 according to this embodiment may include: step S310, obtaining a disparity map and a road vanishing point of a road region; step S320, coordinate-transforming the disparity points in the disparity map based on the road vanishing point; and step S330, detecting lane segmentation objects from the coordinate-transformed disparity points.
In step S310, an image of the target scene can be captured with a camera to obtain the original disparity map of the road region included in the target scene. For example, left and right images of the object can be captured with a binocular camera, and, with either the left image or the right image as the reference image, the disparity map of the target scene including the road region can be obtained by a stereo matching algorithm or the like. Of course, the method of obtaining the disparity map of the target scene is not limited thereto.
In addition, the road vanishing point in the original disparity map is also obtained. In theory, parallel straight lines all vanish at a point at infinity. In an actually captured image, however, parallel straight lines have one and only one intersection point in their projection on the image plane, which is called the vanishing point. Thus, for a straight road with a single direction, there is one and only one vanishing point in the image.
Vanishing point detection is familiar to those skilled in the art. For example, common vanishing point detection algorithms can be divided into three classes: the first class uses spatial transform techniques that map the information in the image onto a finite space, such as the Gaussian-sphere transform or the Hough transform; the second class directly uses line information to detect the vanishing point on the image plane; the third class uses statistical estimation, estimating line parameters from edge features in the image and computing the vanishing point from those parameters, or constructing a cost function from the vanishing point and edge feature points to estimate the lines and the vanishing point simultaneously. Of course, any other vanishing point detection technique can also be applied to the present invention.
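As one purely illustrative sketch of the second class of approaches (directly using line information), the vanishing point can be estimated as the least-squares intersection of line segments already extracted from the image (e.g. by a Hough transform). This is not the algorithm claimed by the patent; the function name and sample segments are hypothetical:

```python
import numpy as np

def estimate_vanishing_point(segments):
    """Least-squares intersection of 2D line segments.

    Each segment is ((x1, y1), (x2, y2)). Every segment defines a line
    n . p = c with unit normal n; the vanishing point minimizes the sum
    of squared distances to all these lines.
    """
    A, b = [], []
    for (x1, y1), (x2, y2) in segments:
        d = np.array([x2 - x1, y2 - y1], dtype=float)
        n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal
        A.append(n)
        b.append(n @ np.array([x1, y1], dtype=float))
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return p  # (x, y) of the estimated vanishing point

# Two lane borders converging at image point (50, 20):
segments = [((0, 120), (25, 70)), ((100, 120), (75, 70))]
vp = estimate_vanishing_point(segments)
```

With more than two segments the same call returns the least-squares compromise point, which is why this formulation tolerates noisy line detections.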
For a straight road with parallel borders there is one and only one vanishing point. On this basis, a non-straight road, i.e. a curved road, can be divided into blocks forming multiple straight road segments; since each straight segment corresponds to one vanishing point, multiple vanishing points can then be obtained in the image. Therefore, in one embodiment of the invention, in order to obtain the road vanishing points in step S310, the road region in the disparity map can be divided into multiple blocks based on the distance information in the obtained disparity map of the road region, and the road vanishing point of each block is obtained.
Fig. 4 shows a schematic diagram of segmenting the road region in the disparity map. As shown in the figure, the road is divided into multiple segments from near to far according to the distance of the road from the camera plane. The division can be made into equal distance intervals according to the distance information represented in the disparity map; alternatively, other division schemes can be used, as long as each road segment after division is approximately straight. Each road segment after division then has one and only one road vanishing point, so multiple road vanishing points can be obtained for a non-straight road.
The number of segments is not particularly limited. Clearly, the more segments there are, the more accurate the subsequent detection result, but also the larger the computation. In a specific application, those skilled in the art can therefore select an appropriate division scheme as needed.
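The distance-interval division described above can be sketched as follows. This is a minimal illustration under the equal-interval assumption; the function name and block count are illustrative only:

```python
import numpy as np

def split_into_blocks(depths, n_blocks):
    """Assign each disparity point a block index 0..n_blocks-1 by
    dividing the full depth range into equal intervals (near to far)."""
    depths = np.asarray(depths, dtype=float)
    edges = np.linspace(depths.min(), depths.max(), n_blocks + 1)
    idx = np.clip(np.searchsorted(edges, depths, side="right") - 1,
                  0, n_blocks - 1)
    return idx

# Depths (in metres, say) of six disparity points, split into 3 blocks:
depths = [5, 12, 18, 30, 44, 59]
blocks = split_into_blocks(depths, 3)
```

Any other monotone division (e.g. by equal point counts per block) would fit the text equally well, as long as each block stays approximately straight.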
Fig. 5 shows a simplified schematic diagram of multiple road vanishing points of the road region in the disparity map. The road region shown in Fig. 5 is a curved section; by dividing this curve into three segments from near to far, each segment corresponding to one road vanishing point, the three road vanishing points P1, P2 and P3 in the figure are obtained.
After the original disparity map and road vanishing points of the road region are obtained in step S310, in step S320 the disparity points in the original disparity map can be coordinate-transformed based on the road vanishing points. The purpose of coordinate-transforming the disparity map is to obtain a transformed disparity map in which the transformed road region is approximately a straight road with a single road direction, so that the road and its lane segmentation objects can be detected accurately.
Such a disparity map transform can be realized by moving all the road vanishing points obtained in step S310 to the same point and moving every other disparity point in the original disparity map correspondingly. A specific embodiment of this conversion process is described in detail in a later part of this text. Through step S320, a transformed disparity map in which the road has a single direction is obtained. Moreover, because the road has a single direction, the lane segmentation objects are approximately straight lines in the transformed disparity map.
In step S330, lane segmentation objects can be detected from the disparity points in the transformed disparity map. Detecting lane segmentation objects that are approximately straight lines is usually relatively easy and accurate. For example, the points belonging to lane segmentation objects can be detected in the three-dimensional space corresponding to the coordinate-transformed disparity points, i.e. straight lane segmentation objects are detected, and, according to the inverse of the coordinate transform, the points of the lane segmentation objects in the original disparity map corresponding to the points detected in the three-dimensional space are obtained, thereby obtaining the lane segmentation objects in the original disparity map.
Thus, the lane segmentation object detection method of the embodiment of the invention coordinate-transforms the disparity points in the original disparity map based on the road vanishing points, so that lane segmentation objects are detected from the transformed disparity points. Even if the road to be detected is not a straight road, the problem can be converted by the coordinate transform into detecting a straight road, and the road and its lane segmentation objects in the original disparity map corresponding to the straight-road detection result can thereby be obtained. The detection object of the invention is therefore no longer limited to straight roads. Moreover, because all disparity points in the disparity map are used in the detection, an accurate detection result can be obtained.
<2.2 Coordinate transform of the disparity map>
A specific embodiment of the coordinate transform of the disparity map is described below with reference to Figs. 6-7. Fig. 6 shows a flowchart of the coordinate transform method for the disparity map according to this embodiment. As described above, the disparity points in the original disparity map are coordinate-transformed based on the road vanishing points to obtain the transformed disparity map of the road region. Specifically, as shown in Fig. 6, the coordinate transform method 600 may include: step S610, moving the vanishing points of the road blocks other than the first block to the road vanishing point of the first block, where the first block is the nearest block in the disparity map; and step S620, coordinate-transforming the disparity points in each of the other blocks so that the horizontal distance and vertical distance from a disparity point before the transform to the road vanishing point of its block before the transform are respectively equal to the horizontal distance and vertical distance from the corresponding transformed disparity point to the road vanishing point of the transformed block.
For example, take the road vanishing points P1(x1, y1), P2(x2, y2) and P3(x3, y3) shown in Fig. 5, where the vanishing point of the nearest (first) road segment is P1(x1, y1). In step S610, the vanishing points P2(x2, y2) and P3(x3, y3) are moved to the vanishing point P1(x1, y1) of the first road segment. Assuming the road surface is flat, the vanishing points P1, P2 and P3 are at the same height, i.e. their y-coordinates are equal: y1 = y2 = y3.
Correspondingly, in step S620, based on the movement of vanishing point P2(x2, y2), each disparity point P(x, y) in the road block corresponding to P2 (hereinafter the second road segment) is coordinate-transformed so that the horizontal distance from the transformed disparity point P'(x', y') to the transformed vanishing point P2'(x2', y2') (i.e. P1(x1, y1)) equals the horizontal distance from the pre-transform disparity point P(x, y) to the pre-transform vanishing point P2(x2, y2). The vertical distance is kept unchanged by the transform; as mentioned above, the y-coordinate remains unchanged, i.e. y' = y.
Before the transform, the horizontal distance D_P-P2 from disparity point P(x, y) to vanishing point P2(x2, y2) is expressed by formula (1):

D_P-P2 = (x2 − x) · lz    (1)

where lz is the actual length represented by each disparity point in the horizontal direction, which is determined by the camera and is known.
The horizontal distance from the coordinate-transformed disparity point P'(x', y') to the transformed vanishing point P2'(x2', y2') (i.e. P1(x1, y1)) is expressed by formula (2):

D_P'-P2' = (x1 − x') · lz    (2)
Since the horizontal distance D_P'-P2' after the coordinate transform should equal the horizontal distance D_P-P2 before it, formula (3) holds:

(x1 − x') · lz = (x2 − x) · lz    (3)
From this, formula (4) is obtained:

x' = x1 − (x2 − x)    (4)
Then the transformed coordinates of a disparity point P(x, y) in the second road segment are P'(x1 − (x2 − x), y).
Similarly, based on the movement of the third road segment's vanishing point P3(x3, y3) to P1(x1, y1), each disparity point in the third road segment is coordinate-transformed so that the horizontal distance from each transformed disparity point to the transformed vanishing point P3'(x3', y3') (i.e. P1(x1, y1)) equals the horizontal distance from the pre-transform disparity point to the pre-transform vanishing point P3(x3, y3).
Thus, according to coordinate transform method 600, each disparity point in the disparity map containing the road region can be coordinate-transformed based on the road vanishing points, yielding the transformed disparity map.
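Under the flat-road assumption (y unchanged), the derivation above can be sketched as follows; the helper name and sample coordinates are hypothetical:

```python
import numpy as np

def shift_block_points(points, block_vp_x, first_vp_x):
    """Formula (4): x' = x1 - (x_vp - x); y is kept unchanged
    (flat-road assumption), so the block's vanishing point lands
    on the first block's vanishing point."""
    pts = np.asarray(points, dtype=float).copy()
    pts[:, 0] = first_vp_x - (block_vp_x - pts[:, 0])
    return pts

# First block's vanishing point at x1 = 40, second block's at x2 = 55:
pts = shift_block_points([[60.0, 90.0], [55.0, 80.0]], 55.0, 40.0)
```

Note that a point at the block's own vanishing-point column x = 55 maps exactly onto x = 40, and horizontal distances to the vanishing point are preserved, consistent with formulas (1)-(3).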
Fig. 7 is a simplified schematic diagram of the coordinate transform of the disparity map, with the original disparity map on the left and the transformed disparity map on the right. As shown in Fig. 7, in the transformed disparity map the vanishing point of every road segment other than the first has been moved to the vanishing point of the first segment; that is, all segments share the same vanishing point after the transform, so the road region is converted into one with a single road direction, i.e. a straight road, in the transformed disparity map.
It should be noted that although the road is assumed flat in the above example, so that the y-coordinate of each vanishing point remains unchanged, the invention is also applicable to uneven roads. In that case the same kind of coordinate transform is applied to the y-coordinate of each disparity point in the disparity map, so that the vertical distance from the transformed disparity point to the transformed vanishing point equals the vertical distance from the pre-transform disparity point to the pre-transform vanishing point. The transform of the y-coordinate follows the same principle as that of the x-coordinate above and is not repeated here.
<2.3 Detection of lane segmentation objects>
After the transformed disparity map is obtained, lane segmentation objects can be detected from the disparity points in the coordinate-transformed disparity map. Specifically, the points belonging to lane segmentation objects can be detected in the three-dimensional space corresponding to the coordinate-transformed disparity points, and, according to the inverse of the coordinate transform, the points of the lane segmentation objects in the original disparity map corresponding to the points detected in the three-dimensional space can be obtained, thereby obtaining the lane segmentation objects. Specific embodiments of detecting lane segmentation objects in the three-dimensional space corresponding to the transformed disparity points, and of obtaining the lane segmentation objects in the original disparity map from the points of the lane segmentation objects detected in three-dimensional space, are described in detail below.
Fig. 8 is a flowchart of a method 800 for detecting lane segmentation objects in the three-dimensional space corresponding to the transformed disparity points according to one embodiment. As shown in Fig. 8, the method 800 may include: step S810, obtaining the distribution of the transformed disparity points in three-dimensional space based on the relative positions of the transformed disparity points and the road vanishing point; step S820, projecting the disparity points in the three-dimensional space onto the plane Z = d, where d denotes the real three-dimensional distance corresponding to the disparity point with the largest disparity value among all disparity points; and step S830, detecting the points belonging to lane segmentation objects in the three-dimensional space according to the projection of the disparity points.
In step S810, since the position coordinates of the road vanishing point are known, the distribution of each disparity point in three-dimensional space, i.e. its spatial position coordinates, can be determined from the relative position between each disparity point in the transformed disparity map and the road vanishing point.
For example, for any transformed disparity point P'(x', y'), its real-world horizontal distance Dx and vertical distance Dy to the vanishing point P1(x1, y1) can be calculated by formulas (5) and (6):

Dx = (x' − x1) · lz    (5)

Dy = (y' − y1) · lz    (6)
In addition, the depth information of each disparity point, i.e. its z-axis coordinate, can be derived from its disparity value. Therefore, the distribution of the transformed disparity points in the three-dimensional space of the real world can be obtained based on the relative positions of the transformed disparity points and the road vanishing point.
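A minimal sketch of step S810 under stated assumptions: X and Y follow formulas (5)-(6), while the depth Z is recovered from the disparity via the standard stereo relation Z = f·b / disparity (the camera constant f·b is an assumption, as this excerpt does not give the depth formula):

```python
import numpy as np

def to_3d(points, disparities, vp, lz, fB=1000.0):
    """Formulas (5)-(6): X = (x' - x1) * lz, Y = (y' - y1) * lz.
    Depth Z is taken from the disparity via the standard stereo
    relation Z = f*b / disparity (fB is an assumed camera constant)."""
    pts = np.asarray(points, dtype=float)
    X = (pts[:, 0] - vp[0]) * lz
    Y = (pts[:, 1] - vp[1]) * lz
    Z = fB / np.asarray(disparities, dtype=float)
    return np.stack([X, Y, Z], axis=1)

# Two transformed disparity points, vanishing point at (40, 20), lz = 0.1:
xyz = to_3d([[45.0, 90.0], [40.0, 80.0]], [20.0, 10.0], (40.0, 20.0), lz=0.1)
```

The larger disparity (20) yields the smaller depth, matching the inverse disparity-distance relation noted below for step S820.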
Fig. 9 is a schematic diagram of the conversion from the transformed disparity map to the three-dimensional disparity distribution, where the left figure shows the coordinate-transformed disparity map with a single road direction and the right figure shows a top view of the corresponding three-dimensional disparity distribution. As shown in the right figure of Fig. 9, the coordinate-transformed road is a straight road in real three-dimensional space.
After the distribution of the transformed disparity points in three-dimensional space is obtained, in step S820 the disparity points in the three-dimensional space can be projected onto the plane Z = d, where d denotes the real three-dimensional distance corresponding to the disparity point with the largest disparity value among all disparity points. Since disparity value and distance are inversely related, the disparity point with the largest disparity value is the one nearest to the imaging plane. In this way, as many disparity points as possible are projected onto the plane Z = d, so that all disparity points can be used in detecting lane segmentation objects and an accurate detection result can be obtained.
Fig. 10 is a schematic diagram of the projection of the three-dimensional disparity points onto the plane Z = d, where the left figure shows a top view of the three-dimensional disparity distribution and the right figure shows the projection of the disparity points on the plane Z = d. As can be seen from Fig. 10, the disparity points belonging to the same lane line are projected onto the same point on the plane Z = d. Therefore, among all projection points on the plane Z = d, the projection point receiving the most disparity points (the brightest projection point in the projection view) corresponds to a lane line. Along the Z direction, all points in the real-world image with the same road direction that project onto this brightest projection point can be found.
Therefore, in step S830, the points belonging to the lane segmentation object in the three-dimensional space are detected according to the projections of the parallax points. Specifically, the projection point onto which the most parallax points are projected can be selected from among the projection points on the plane Z = d, and all parallax points in the three-dimensional space that are projected onto the selected projection point are taken as the points belonging to the lane segmentation object in the three-dimensional space.
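Purely as an illustration (not part of the patent text), the projection-and-voting of step S830 can be sketched in Python. The function name, the NumPy-based implementation and the `tol` bin size used to group nearby projections are assumptions:

```python
import numpy as np

def detect_divider_points(points_3d, tol=0.5):
    """Project 3-D parallax points along Z onto the plane Z = d and vote:
    after the coordinate transform all points of one straight lane line
    share (almost) the same (X, Y), so the (X, Y) cell hit by the most
    points -- the "brightest" projection point -- is taken as a lane line.
    Returns the 3-D points falling in that cell."""
    # quantize (X, Y) so that nearby projections fall into the same cell
    xy = np.round(points_3d[:, :2] / tol).astype(int)
    cells, inverse, counts = np.unique(
        xy, axis=0, return_inverse=True, return_counts=True)
    best = int(np.argmax(counts))          # most-voted projection point
    return points_3d[inverse.ravel() == best]
```

For example, ten points of one lane line at (X, Y) = (0, 1) with varying Z outvote a single noise point, so only the lane-line points are returned.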
In the embodiment above, all parallax points are projected onto the plane Z = d. However, the road area usually does not exceed a certain vertical range in height in the image. Therefore, optionally, in another embodiment, the points belonging to the lane segmentation object in the three-dimensional space may be detected only among the projection points below a predetermined vertical range on the plane Z = d, so as to filter out noise points and improve the detection speed and accuracy. The predetermined vertical range may be set specifically by those skilled in the art based on experience or experiments.
In step S830, in addition to the lane segmentation object, the road boundaries in the three-dimensional space can also be detected according to the projections of the parallax points. As shown in the right figure of Fig. 10, among all projection points, the two projection points at the two ends are likely to correspond to the boundaries of the road. In view of factors such as noise, it is generally considered that, among the projection points on the plane Z = d, only the projection points onto which more than a predetermined threshold number of parallax points are projected are valid projection points of the road area. The predetermined threshold may be set specifically by those skilled in the art based on experience or experiments.
Therefore, the projection points on the plane Z = d onto which more than the predetermined threshold number of parallax points are projected can be selected, and a straight line can be fitted through the selected projection points as the cross-section of the road area in the plane Z = d. The two projection points located at the two ends of the straight line are then taken, and the boundaries of the road area in the three-dimensional space are determined from these two projection points.
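The selection, line fitting and end-point extraction just described can be sketched as follows. This is an illustrative sketch only: the function name, the input layout and the least-squares fit via `np.polyfit` are assumptions, not prescribed by the patent:

```python
import numpy as np

def fit_road_section(proj_points, proj_counts, threshold):
    """Fit the road cross-section in the plane Z = d.

    Projection points whose vote count exceeds `threshold` are kept as
    valid road-area projections; a straight line is fitted through them,
    and the two end points of the fitted segment are taken as the road
    boundaries (the step illustrated by Fig. 11)."""
    pts = proj_points[proj_counts > threshold]
    # least-squares line y = a*x + b through the valid projections
    a, b = np.polyfit(pts[:, 0], pts[:, 1], 1)
    xs = pts[:, 0]
    left = (xs.min(), a * xs.min() + b)    # boundary projection points
    right = (xs.max(), a * xs.max() + b)
    return (a, b), left, right
```

A low-count noise projection is discarded by the threshold, so it does not pull the fitted line or the boundary end points.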
Figure 11 is a schematic diagram showing the fitting of the road-area cross-section from the projection points, where the left figure shows the projections of the parallax points on the plane Z = d and the right figure shows the road projection obtained by fitting. Figure 12 is a schematic diagram showing how the points belonging to the lane lines in the three-dimensional space are obtained from the projections, where the left figure shows the lane-line and road projections, and the right figure shows the top view of the lane lines and road in the corresponding three-dimensional space.
Thus, according to method 800, the points belonging to the lane segmentation object and/or the road boundaries can be detected in the three-dimensional space corresponding to the coordinate-transformed parallax points.
After the points belonging to the lane segmentation object in the three-dimensional space are obtained, the lane segmentation object in the original disparity map corresponding to the lane segmentation object detected in the three-dimensional space can be obtained by the inverse of the above coordinate transform.
Figure 13 is a flowchart of a method 1300 for detecting the lane segmentation object in the original disparity map based on the lane segmentation object detected in the three-dimensional space, according to one embodiment. As shown in Fig. 13, the method 1300 may include: step S1310, obtaining, based on the relative positional relationship between the transformed parallax points and the road vanishing point, the transformed parallax points corresponding to the detected points belonging to the lane segmentation object in the three-dimensional space, as target parallax points; step S1320, obtaining, according to the inverse of the coordinate transform, the pre-transform parallax points in the disparity map corresponding to the target parallax points; and step S1330, fitting a curve through the pre-transform parallax points to obtain the lane segmentation object.
As described in method 800 above, the distribution of the transformed parallax points in the three-dimensional space can be obtained based on the relative positional relationship between the transformed parallax points and the road vanishing point. Similarly, in step S1310, based on the relative positional relationship between the transformed parallax points and the road vanishing point, the position coordinates of the parallax points in the transformed disparity map (the disparity map with a single road direction) corresponding to the parallax points in the three-dimensional space can also be obtained.
That is, the horizontal and vertical distances from a point P''(x'', y'') belonging to the lane segmentation object detected at distance z0 in the three-dimensional space to the road vanishing point P1''(x1'', y1'') are respectively equal to the horizontal and vertical distances from the corresponding parallax point P'(x', y') in the transformed disparity map to the road vanishing point P1'(x1', y1').
If the horizontal distance from the point P''(x'', y'') in the three-dimensional space to the road vanishing point P1''(x1'', y1'') is known to be Dx'' and the vertical distance is 0, the following formulas (7) and (8) are obtained:
Dx'' = (x' - x1')·lz    (7)
0 = (y' - y1')·lz    (8)
where lz is the actual length of each parallax point in the depth Z direction, which is determined by the camera and is known, as described above.
Thus, the following formulas (9) and (10) are obtained:
x' = x1' + Dx''/lz    (9)
y' = y1'    (10)
The coordinates of the parallax point P' at distance z0 are thus P'(x1' + Dx''/lz, y1'). In this way, the coordinates of the parallax points in the disparity map with a single road direction corresponding to all points belonging to the lane segmentation object in the three-dimensional space can be obtained.
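Formulas (9) and (10) amount to a one-line mapping from a detected 3-D point back to the transformed disparity map. A minimal sketch (the flat-road assumption y' = y1' follows the text; the function and parameter names are illustrative assumptions):

```python
def to_transformed_disparity(dx_world, vanishing_point, lz):
    """Map a detected 3-D lane-divider point to the transformed disparity
    map: its horizontal world distance Dx'' to the road vanishing point,
    divided by the per-point depth length lz, gives the column offset
    from the vanishing point (formula (9)); the row equals the
    vanishing-point row because the road is assumed flat (formula (10))."""
    x1, y1 = vanishing_point
    return (x1 + dx_world / lz,  # formula (9)
            y1)                  # formula (10)
```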
Therefore, in step S1310, the transformed parallax points corresponding to the detected points belonging to the lane segmentation object in the three-dimensional space can be obtained based on the relative positional relationship between the transformed parallax points and the road vanishing point, as the target parallax points. Then, in step S1320, the pre-transform parallax points in the original disparity map corresponding to the target parallax points can be obtained according to the inverse of the above coordinate transform.
As described in method 600 above, when the original disparity map is coordinate-transformed based on the road vanishing points, the vanishing points of the road blocks other than the first block are moved to the road vanishing point of the first block, and the parallax points in each of the other blocks are coordinate-transformed, so that the horizontal and vertical distances from a pre-transform parallax point in a block to the road vanishing point of that block before the transform are respectively equal to the horizontal and vertical distances from the corresponding transformed parallax point to the road vanishing point of the transformed block.
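Because the transform of method 600 preserves each point's offsets to its own block's vanishing point, it reduces to a translation per block. A minimal sketch under that reading (the function and variable names are assumptions):

```python
def shift_block_points(points, block_vp, first_vp):
    """Coordinate-transform the parallax points of one road block: the
    transformed point keeps, relative to the first block's vanishing
    point, the same horizontal and vertical offsets it had relative to
    its own block's vanishing point -- i.e. a translation of the whole
    block by (first_vp - block_vp)."""
    dx = first_vp[0] - block_vp[0]
    dy = first_vp[1] - block_vp[1]
    return [(x + dx, y + dy) for (x, y) in points]
```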
Correspondingly, in step S1320, the pre-transform parallax points in the original disparity map corresponding to the target parallax points can be obtained based on the inverse of the coordinate transform. Specifically, starting from the disparity map with a single road direction, the common road vanishing point P (the vanishing point P1 of the first road section) can be moved back to the second road vanishing point P2 and the third road vanishing point P3 shown in Fig. 7, and each parallax point in the road block corresponding to, for example, the second road vanishing point is coordinate-transformed accordingly, so that the horizontal and vertical distances from the re-transformed parallax point (i.e., the parallax point in the original disparity map) to the road vanishing point of the re-transformed block are respectively equal to the horizontal and vertical distances from the parallax point before the re-transform (the parallax point in the disparity map with a single road direction) to the road vanishing point of the block before the re-transform (the common road vanishing point of the disparity map with a single road direction).
For example, the horizontal and vertical distances from a parallax point P'(x', y') located at distance z0 in the disparity map with a single road direction to the road vanishing point P1'(x1', y1') in that disparity map should be respectively equal to the horizontal and vertical distances from the corresponding parallax point P(x, y) in the original disparity map to the vanishing point P0(x0, y0) in the original disparity map. Since the road area is divided into blocks according to the z-axis distance as described above, and each block corresponds to a road vanishing point P1, P2 or P3, the road block containing the parallax point P'(x', y'), and hence the corresponding parallax point in the original disparity map, can be determined from the distance z0 of P'(x', y'). That is, depending on z0, the road vanishing point P0(x0, y0) in the original disparity map corresponding to the point P'(x', y') may be one of P1(x1, y1), P2(x2, y2) and P3(x3, y3).
Assuming again that the road is flat, the y-coordinate of the parallax point remains unchanged. Thus, the following formulas (11) and (12) are obtained:
x = x' - x1' + x0    (11)
y = y'    (12)
The position coordinates of the pre-transform parallax point P corresponding to the target parallax point P'(x', y') are thus P(x' - x1' + x0, y'). Therefore, all pre-transform parallax points corresponding to the target parallax points can be found in the original disparity map, and in step S1330 a curve is fitted through all the pre-transform parallax points to obtain the lane segmentation object. Any curve-fitting method, such as the least-squares method, can be used here.
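Formulas (11) and (12) give the inverse mapping back to the original disparity map. A sketch under the same flat-road assumption (names are illustrative; selecting the block vanishing point x0 from the distance z0 is left to the caller):

```python
def to_original_disparity(target_point, common_vp_x, block_vp_x):
    """Inverse coordinate transform: the pre-transform point keeps the
    same horizontal offset to its block's vanishing point x0 that the
    target point has to the common vanishing point x1' (formula (11));
    the row is unchanged because the road is assumed flat (formula (12))."""
    x_p, y_p = target_point
    return (x_p - common_vp_x + block_vp_x,  # formula (11)
            y_p)                             # formula (12)
```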
Figure 14 is a schematic diagram showing how the lane segmentation object in the original disparity map is obtained from the target parallax points in the disparity map with a single road direction, where the left figure shows the disparity map with a single road direction and the right figure shows the obtained lane segmentation object in the original disparity map.
As described above for step S830, in addition to the lane segmentation object, the road boundaries in the three-dimensional space can also be detected according to the projections of the parallax points. Correspondingly, in step S1310, the parallax points in the disparity map with a single road direction corresponding to the road boundaries in the three-dimensional space can be obtained; in step S1320, the corresponding parallax points in the original disparity map can likewise be obtained based on the inverse of the coordinate transform; and in step S1330, a curve can be fitted through the obtained parallax points in the original disparity map to obtain the road boundary curves in the original disparity map.
Therefore, according to method 1300, the lane segmentation object in the original disparity map can be detected based on the lane segmentation object detected in the three-dimensional space.
Thus, the above methods 600, 800 and 1300 can be applied in steps S320 and S330 of the lane segmentation object detection method 300 described above. According to the lane segmentation object detection method 300 of the present invention, the parallax points in the original disparity map can be coordinate-transformed based on the road vanishing points, so that the lane segmentation object is detected from the transformed parallax points. Therefore, even if the road to be detected is not a straight road, it can be converted into a straight road by the coordinate transform for detection, so that the road and its lane segmentation object in the original disparity map corresponding to the straight-road detection result can be obtained. Thus, the detection object of the present invention is no longer limited to straight roads. Moreover, because all parallax points in the disparity map are used for detection, an accurate detection result can be obtained.
It should be noted that, because the target area to be detected is a road area, in the above description of the transformation of the disparity map only the parallax points belonging to the road area are coordinate-transformed, and the lane segmentation object is likewise detected based on the parallax points belonging to the road area. The road area can be detected in the original disparity map by any existing image processing method. However, the invention is not limited thereto: the entire original disparity map may also be coordinate-transformed, and the lane segmentation object may be detected based on all parallax points.
Moreover, although the above specific examples are described using the lane line as a specific example of the lane segmentation object, it will be obvious to those skilled in the art that the process described above is also applicable to any other lane segmentation object besides lane lines.
<2.4, Lane segmentation object detection device>
A lane segmentation object detection device according to another embodiment of the present invention is described below with reference to Fig. 15, which shows a functional block diagram of the lane segmentation object detection device. As shown in Fig. 15, the detection device 1500 may include: an acquisition component 1510 configured to obtain the disparity map and the road vanishing points of a road area; a transform component 1520 configured to coordinate-transform the parallax points in the disparity map based on the road vanishing points; and a detection component 1530 configured to detect the lane segmentation object from the coordinate-transformed parallax points.
The acquisition component 1510 may be configured to divide the road area in the disparity map into multiple blocks based on the distance information of the disparity map, and to obtain the road vanishing point of each block.
In a specific example, the transform component 1520 may be configured to: move the vanishing points of the blocks other than a first block among the multiple blocks to the road vanishing point of the first block, where the first block is the block at the nearest distance in the disparity map among the multiple blocks; and coordinate-transform the parallax points in each of the other blocks, so that the horizontal and vertical distances from a pre-transform parallax point in a block to the road vanishing point of that block before the transform are respectively equal to the horizontal and vertical distances from the corresponding transformed parallax point to the road vanishing point of the transformed block.
In another specific example, the detection component 1530 may include: a three-dimensional-space detection component 1531 (not shown) that detects the points belonging to the lane segmentation object in the three-dimensional space corresponding to the coordinate-transformed parallax points; and a disparity-map detection component 1532 (not shown) that obtains, according to the inverse of the coordinate transform, the points of the lane segmentation object in the disparity map corresponding to the points belonging to the lane segmentation object detected by the three-dimensional-space detection component 1531, so as to obtain the lane segmentation object.
In another specific example, the three-dimensional-space detection component 1531 may be configured to: obtain the distribution of the transformed parallax points in the three-dimensional space based on the relative positional relationship between the transformed parallax points and the road vanishing point; project the parallax points in the three-dimensional space onto the plane Z = d, where d denotes the true three-dimensional distance corresponding to the parallax point with the maximum parallax value among all parallax points; and detect the points belonging to the lane segmentation object in the three-dimensional space according to the projections of the parallax points.
In another specific example, the disparity-map detection component 1532 may be configured to: obtain, based on the relative positional relationship between the transformed parallax points and the road vanishing point, the transformed parallax points corresponding to the points belonging to the lane segmentation object in the three-dimensional space detected by the three-dimensional-space detection component 1531, as target parallax points; obtain, according to the inverse of the coordinate transform, the pre-transform parallax points in the disparity map corresponding to the target parallax points; and fit a curve through the pre-transform parallax points to obtain the lane segmentation object.
In another specific example, the three-dimensional-space detection component 1531 may be further configured to: select, from among the projection points on the plane Z = d, the projection point onto which the most parallax points are projected; and obtain all parallax points in the three-dimensional space projected onto the selected projection point, as the points belonging to the lane segmentation object in the three-dimensional space.
Optionally, the three-dimensional-space detection component 1531 detects the points belonging to the lane segmentation object in the three-dimensional space among the projection points below a predetermined vertical range on the plane Z = d.
Optionally, the three-dimensional-space detection component 1531 may be further configured to: select the projection points on the plane Z = d onto which more than a predetermined threshold number of parallax points are projected; fit a straight line through the selected projection points and take the two projection points located at the two ends of the straight line; and determine the boundaries of the road area from these two projection points.
For the specific operation of the acquisition component 1510, the transform component 1520 and the detection component 1530 of the detection device 1500, reference may be made to the descriptions of the above methods 300, 600, 800 and 1300, which are not repeated here.
According to the lane segmentation object detection device 1500 of this embodiment, the parallax points in the original disparity map can be coordinate-transformed based on the road vanishing points, so that the lane segmentation object is detected from the transformed parallax points. Therefore, even if the road to be detected is not a straight road, it can be converted into a straight road by the coordinate transform for detection, so that the road and its lane segmentation object in the original disparity map corresponding to the straight-road detection result can be obtained. Thus, the detection object of the present invention is no longer limited to straight roads. Moreover, because all parallax points in the disparity map are used for detection, an accurate detection result can be obtained.
<2.5, Lane segmentation object detection system>
Next, the system configuration of a detection system for realizing lane segmentation object detection according to an embodiment of the present invention is described with reference to Fig. 16. As shown in Fig. 16, the detection system 1600 includes: an input device 1610 for inputting the images to be processed from the outside, for example the disparity map of a target scene including a road area, which may include, for example, a keyboard, a mouse, and a communication network with a remote input device connected to it; a processing device 1620 for implementing the above lane segmentation object detection method according to the embodiments of the present invention, or implemented as the above lane segmentation object detection device according to the embodiments of the present invention, which may include, for example, the central processing unit of a computer or another chip with processing capability, and which may be connected to a network (not shown) such as the Internet to obtain the data required by the processing from the network; an output device 1630 for outputting the detection results of the lane segmentation object, such as the detected lane lines and road boundaries, which may include, for example, a display, a printer, and a communication network with a remote output device connected to it; and a storage device 1640 for storing, in a volatile or nonvolatile manner, the images, data, obtained results, commands and intermediate data involved in the above processing, which may include various volatile or nonvolatile memories such as a random access memory (RAM), a read-only memory (ROM), a hard disk or a semiconductor memory.
Of course, for simplicity, Fig. 16 shows only some of the components of the system that are relevant to the present invention; components such as a bus and input/output interfaces are omitted. In addition, depending on the specific application, the system 1600 may also include any other appropriate components.
<3. Summary>
According to the present invention, a detection method, a detection device and a detection system for a lane segmentation object are provided. The disparity map and the road vanishing points of a road area are obtained, the parallax points in the disparity map are coordinate-transformed based on the road vanishing points, and the lane segmentation object is detected from the coordinate-transformed parallax points.
According to the above detection method, detection device and detection system for the lane segmentation object, the parallax points in the original disparity map can be coordinate-transformed based on the road vanishing points, so that the lane segmentation object is detected from the transformed parallax points. Therefore, even if the road to be detected is not a straight road, it can be converted into a straight road by the coordinate transform for detection, so that the road and its lane segmentation object in the original disparity map corresponding to the straight-road detection result can be obtained. Thus, the detection object of the present invention is no longer limited to straight roads. Moreover, because all parallax points in the disparity map are used for detection, an accurate detection result can be obtained.
The lane segmentation object detection method and device according to the embodiments of the present invention have been described in detail above with reference to the drawings. Although the lane line is used as the detection object in the above description, it will be obvious to those skilled in the art that the object to which the present invention is applicable is not limited thereto, and may also be, for example, a curb stone, a fence, or any other object that can identify the areas and lanes on a road.
The block diagrams of the devices, apparatuses and systems involved in the present disclosure are only illustrative examples, and are not intended to require or imply that the connections, arrangements and configurations must be made in the manner shown in the blocks. As those skilled in the art will recognize, these devices, apparatuses and systems can be connected, arranged and configured in any manner. Words such as "include", "comprise" and "have" are open-ended words meaning "including but not limited to", and can be used interchangeably with it. The words "or" and "and" as used herein mean "and/or" and can be used interchangeably with it, unless the context clearly indicates otherwise. The word "such as" used herein means the phrase "such as, but not limited to", and can be used interchangeably with it.
The flowcharts of steps and the above method descriptions in the present disclosure are only illustrative examples, and are not intended to require or imply that the steps of the embodiments must be carried out in the order given. As those skilled in the art will recognize, the steps of the above embodiments can be carried out in any order. Words such as "thereafter", "then" and "next" are not intended to limit the order of the steps; these words are only used to guide the reader through the description of these methods. Furthermore, any reference to an element in the singular, for example using the articles "a", "an" or "the", is not to be construed as limiting the element to the singular.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the scope of the present invention. Therefore, the present invention is not intended to be limited to the aspects shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A detection method of a lane segmentation object, comprising:
obtaining a disparity map and at least two road vanishing points of a road area;
coordinate-transforming parallax points in the disparity map based on the road vanishing points; and
detecting the lane segmentation object from the coordinate-transformed parallax points.
2. The method of claim 1, wherein detecting the lane segmentation object from the coordinate-transformed parallax points comprises:
detecting points belonging to the lane segmentation object in a three-dimensional space corresponding to the coordinate-transformed parallax points; and
obtaining, according to an inverse of the coordinate transform, points of the lane segmentation object in the disparity map corresponding to the detected points belonging to the lane segmentation object in the three-dimensional space, so as to obtain the lane segmentation object.
3. The method of claim 1 or 2, wherein obtaining the road vanishing points comprises:
dividing the road area in the disparity map into multiple blocks based on distance information of the disparity map; and
obtaining the road vanishing point of each block.
4. The method of claim 3, wherein coordinate-transforming the parallax points in the disparity map based on the road vanishing points comprises:
moving the vanishing points of the blocks other than a first block among the multiple blocks to the road vanishing point of the first block, wherein the first block is the block at the smallest distance in the disparity map among the multiple blocks; and
coordinate-transforming the parallax points in each of the other blocks, so that the horizontal distance and the vertical distance from a pre-transform parallax point in a block to the road vanishing point of the block before the transform are respectively equal to the horizontal distance and the vertical distance from the corresponding transformed parallax point to the road vanishing point of the transformed block.
5. The method of claim 2, wherein detecting the points belonging to the lane segmentation object in the three-dimensional space corresponding to the coordinate-transformed parallax points comprises:
obtaining the distribution of the transformed parallax points in the three-dimensional space based on the relative positional relationship between the transformed parallax points and the road vanishing point;
projecting the parallax points in the three-dimensional space onto a plane Z = d, wherein d denotes the true three-dimensional distance corresponding to the parallax point with the maximum parallax value among all parallax points; and
detecting the points belonging to the lane segmentation object in the three-dimensional space according to the projections of the parallax points.
6. The method of claim 2, wherein obtaining, according to the inverse of the coordinate transform, the points of the lane segmentation object in the disparity map corresponding to the points belonging to the lane segmentation object detected in the three-dimensional space so as to obtain the lane segmentation object comprises:
obtaining, based on the relative positional relationship between the transformed parallax points and the road vanishing point, the transformed parallax points corresponding to the detected points belonging to the lane segmentation object in the three-dimensional space, as target parallax points;
obtaining, according to the inverse of the coordinate transform, the pre-transform parallax points in the disparity map corresponding to the target parallax points; and
fitting a curve through the pre-transform parallax points to obtain the lane segmentation object.
7. The method of claim 5, wherein detecting the points belonging to the lane segmentation object in the three-dimensional space according to the projections of the parallax points comprises:
selecting, from among the projection points on the plane Z = d, the projection point onto which the most parallax points are projected; and
obtaining all parallax points in the three-dimensional space projected onto the selected projection point, as the points belonging to the lane segmentation object in the three-dimensional space.
8. The method of claim 7, wherein the points belonging to the lane segmentation object in the three-dimensional space are detected among the projection points below a predetermined vertical range on the plane Z = d.
9. The method of claim 7, further comprising:
selecting the projection points on the plane Z = d onto which more than a predetermined threshold number of parallax points are projected;
fitting a straight line through the selected projection points, and taking the two projection points at the two ends of the straight line; and
determining the boundaries of the road area from the two projection points.
10. A detection device of a lane segmentation object, comprising:
an acquisition component configured to obtain a disparity map and road vanishing points of a road area;
a transform component configured to coordinate-transform parallax points in the disparity map based on the road vanishing points; and
a detection component configured to detect the lane segmentation object from the coordinate-transformed parallax points.
CN201510354626.7A 2015-06-24 2015-06-24 The detection method and device of lane segmentation object Active CN106327466B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201510354626.7A CN106327466B (en) 2015-06-24 2015-06-24 The detection method and device of lane segmentation object
JP2016121582A JP6179639B2 (en) 2015-06-24 2016-06-20 Road boundary detection method and detection apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510354626.7A CN106327466B (en) 2015-06-24 2015-06-24 The detection method and device of lane segmentation object

Publications (2)

Publication Number Publication Date
CN106327466A CN106327466A (en) 2017-01-11
CN106327466B true CN106327466B (en) 2018-12-21

Family

ID=57729661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510354626.7A Active CN106327466B (en) 2015-06-24 2015-06-24 The detection method and device of lane segmentation object

Country Status (2)

Country Link
JP (1) JP6179639B2 (en)
CN (1) CN106327466B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107766847B (en) * 2017-11-21 2020-10-30 海信集团有限公司 Lane line detection method and device
CN108256455B (en) * 2018-01-08 2021-03-23 哈尔滨工业大学 Road image segmentation method based on vanishing points
CN110120081B (en) * 2018-02-07 2023-04-25 北京四维图新科技股份有限公司 Method, device and storage equipment for generating lane markings of electronic map
CN108256510B (en) * 2018-03-12 2022-08-12 海信集团有限公司 Road edge line detection method and device and terminal
JP6786635B2 (en) * 2018-05-15 2020-11-18 株式会社東芝 Vehicle recognition device and vehicle recognition method
KR102519666B1 (en) * 2018-10-15 2023-04-07 삼성전자주식회사 Device and method to convert image
CN111696048B (en) * 2019-03-15 2023-11-14 北京四维图新科技股份有限公司 Smoothing processing method and device for wall sampling line
CN110197173B (en) * 2019-06-13 2022-09-23 重庆邮电大学 Road edge detection method based on binocular vision
CN112304293B (en) * 2019-08-02 2022-09-13 北京地平线机器人技术研发有限公司 Road height detection method and device, readable storage medium and electronic equipment
CN112348752B (en) * 2020-10-28 2022-08-16 武汉极目智能技术有限公司 Lane line vanishing point compensation method and device based on parallel constraint
CN116523921B (en) * 2023-07-05 2023-09-29 广州市易鸿智能装备有限公司 Detection method, device and system for tab turnover condition

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090085913A1 (en) * 2007-09-21 2009-04-02 Honda Motor Co., Ltd. Road shape estimating device
JP2012243050A (en) * 2011-05-19 2012-12-10 Fuji Heavy Ind Ltd Environment recognition device and environment recognition method
CN103177236A (en) * 2011-12-22 2013-06-26 株式会社理光 Method and device for detecting road regions and method and device for detecting separation lines

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Dense stereo matching in restricted disparity space; H. Hattori et al.; IEEE Proceedings, Intelligent Vehicles Symposium, 2005; 2005-06-06; pp. 118-123 *
Lane recognition method based on a piecewise linear model and ATN; Hu Bin et al.; Journal of Tsinghua University (Science and Technology); 2006-12-31; Vol. 46, No. 10; pp. 1762-1766 *

Also Published As

Publication number Publication date
JP2017010553A (en) 2017-01-12
CN106327466A (en) 2017-01-11
JP6179639B2 (en) 2017-08-16

Similar Documents

Publication Publication Date Title
CN106327466B (en) The detection method and device of lane segmentation object
US11632536B2 (en) Method and apparatus for generating three-dimensional (3D) road model
EP3362982B1 (en) Systems and methods for producing an image visualization
CN108352056B (en) System and method for correcting erroneous depth information
US9378424B2 (en) Method and device for detecting road region as well as method and device for detecting road line
EP2713309B1 (en) Method and device for detecting drivable region of road
US10762643B2 (en) Method for evaluating image data of a vehicle camera
US8521418B2 (en) Generic surface feature extraction from a set of range data
Buczko et al. How to distinguish inliers from outliers in visual odometry for high-speed automotive applications
US9091553B2 (en) Systems and methods for matching scenes using mutual relations between features
Gräter et al. Robust scale estimation for monocular visual odometry using structure from motion and vanishing points
JP6299291B2 (en) Road edge detection method and road edge detection device
CN112997187A (en) Two-dimensional object bounding box information estimation based on aerial view point cloud
He et al. Pairwise LIDAR calibration using multi-type 3D geometric features in natural scene
EP3716210B1 (en) Three-dimensional point group data generation method, position estimation method, three-dimensional point group data generation device, and position estimation device
JP6515650B2 (en) Calibration apparatus, distance measuring apparatus and calibration method
WO2017176112A1 (en) Spatial data analysis
JP5297727B2 (en) Robot apparatus and object position / orientation estimation method
EP3324359A1 (en) Image processing device and image processing method
KR101280392B1 (en) Apparatus for managing map of mobile robot based on slam and method thereof
CN104471436B (en) The method and apparatus of the variation of imaging scale for computing object
JP2014178967A (en) Three-dimensional object recognition device and three-dimensional object recognition method
Kuhl et al. Monocular 3D scene reconstruction at absolute scales by combination of geometric and real-aperture methods
CN110488320A (en) A method of vehicle distances are detected using stereoscopic vision
WO2017164075A1 (en) Rent calculation device, rent calculation method, and program

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant