CN117611438A - Monocular image-based reconstruction method from 2D lane line to 3D lane line - Google Patents

Monocular image-based reconstruction method from 2D lane line to 3D lane line

Info

Publication number
CN117611438A
CN117611438A
Authority
CN
China
Prior art keywords
coordinate system
lane line
point
camera
equation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311658373.3A
Other languages
Chinese (zh)
Other versions
CN117611438B (en)
Inventor
李康
张玉杰
姚进强
罗曦
李炎
金忠富
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intelligent Transportation Research Branch Of Zhejiang Transportation Investment Group Co ltd
Original Assignee
Intelligent Transportation Research Branch Of Zhejiang Transportation Investment Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intelligent Transportation Research Branch Of Zhejiang Transportation Investment Group Co ltd filed Critical Intelligent Transportation Research Branch Of Zhejiang Transportation Investment Group Co ltd
Priority to CN202311658373.3A priority Critical patent/CN117611438B/en
Publication of CN117611438A publication Critical patent/CN117611438A/en
Application granted granted Critical
Publication of CN117611438B publication Critical patent/CN117611438B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of lane line detection in automatic driving, and in particular to a monocular image-based reconstruction method from a 2D lane line to a 3D lane line. The technical scheme is as follows: a monocular image-based reconstruction method from a 2D lane line to a 3D lane line comprises the following steps: 1) establishing a world coordinate system and a camera coordinate system; 2) solving the three-dimensional mapping points on the horizontal ground; 3) solving the three-dimensional mapping points considering gradient information; 4) solving the gradient angle by a traversal method; 5) calculating the coordinates of the 3D lane line.

Description

Monocular image-based reconstruction method from 2D lane line to 3D lane line
Technical Field
The invention relates to the technical field of lane line detection in automatic driving, in particular to a reconstruction method from a 2D lane line to a 3D lane line based on monocular images.
Background
Lane line detection based on computer vision is a key task in the field of automatic driving; in particular, on roads lacking high-precision maps, purely vision-based lane line detection results provide the basis for keeping the vehicle centered along the lane. 2D lane line detection, which aims to accurately output the set of two-dimensional lane line coordinate points on an image, is relatively mature and has been realized in many papers and open-source projects. In real driving scenes, however, the road surface inevitably contains ascending and descending slopes, so the 2D lane line cannot be aligned with the 3D lane line in the real world; the resulting lane line detection errors endanger the safety of automatic driving. It is therefore necessary to restore the depth information of each point on the 2D lane line, i.e., to complete 3D lane line detection.
Lidar can provide the depth information of an object, and using lidar to assist vision makes the technical path of lifting lane lines from 2D to 3D relatively simple. However, lidar is expensive to deploy; the main research direction in the field of 3D lane line detection still tends toward monocular cameras, and reconstructing a 3D lane line from a monocular image based on pure vision is very challenging because the monocular image lacks depth information.
The current mainstream method re-maps 2D lane lines into 3D space using Inverse Perspective Mapping (IPM), but this method strictly presupposes flat ground and is not robust to the uneven road surfaces and up-down slopes present in real driving scenarios.
Disclosure of Invention
The invention aims to overcome the defects in the background technology, and provides a monocular image-based reconstruction method from a 2D lane line to a 3D lane line.
The technical scheme of the invention is as follows:
a reconstruction method from a 2D lane line to a 3D lane line based on a monocular image comprises the following steps:
1) Establishing a world coordinate system and a camera coordinate system;
2) Solving three-dimensional mapping points on the horizontal ground;
3) Solving three-dimensional mapping points considering gradient information;
4) Solving a gradient angle by a traversal method;
5) Calculating the coordinates of the 3D lane line.
In the step 1), an origin of the world coordinate system is a vehicle center, and an origin of the camera coordinate system is an optical center of the camera.
The step 2) comprises the following steps:
A plane z = 0 is selected in the world coordinate system; in the camera coordinate system, the optical center of the camera has coordinates (0, 0, 0)^T. A pixel point P_0 on the 2D lane line has coordinates (u, v) in the pixel coordinate system, (x, y) in the image coordinate system, and (x, y, f)^T in the camera coordinate system. In the camera coordinate system, the ray corresponding to point P_0 passes through the optical center of the camera, and the intersection of this ray with the imaging plane of the camera is (x, y, f)^T.
The vector equation of the ray in the camera coordinate system is:
L_c = (0, 0, 0)^T + (x, y, f)^T * t (1)
the mapping relation from the point of the world coordinate system to the camera coordinate system is as follows:
P_c = R P_w + T (2)
where P_w is a point in the world coordinate system, P_c is the mapping of P_w in the camera coordinate system, R is the rotation matrix, and T is the offset matrix;
the rewrites (2) to obtain:
P_w = R^-1 (P_c - T) (3)
points in the camera coordinate system (0, 0) T AND (x, y, f) T Substitution formula (3) gives:
the vector equation of light in world coordinate system is:
L_w = O_w + (I_w - O_w) * t (5)
Combining the ray equation with the known plane equation
z = 0 (6)
gives:
L_wz = O_z + d_z * t = 0 (7)
where d = I_w - O_w is the direction vector of the ray;
solving for x w And y w
The three-dimensional mapping point P_w(x_w, y_w, 0) of pixel point P_0 on the 2D lane line in the world coordinate system plane z = 0 is obtained.
The step 3) comprises the following steps:
let the direction vector α of the light be:
according to C, P w And obtaining the direction vector of the light:
listing ray CP w Is written as a parametric equation:
Let the slope plane pass through the point n(n_1, n_2, n_3); since the slope plane is obtained by rotating the horizontal ground about the x axis by an angle θ, the normal vector of the slope plane is:
v_p = (v_p1, v_p2, v_p3)^T = (0, -sin θ, cos θ)^T (12)
the point French equation for the slope plane is:
v_p1 * (x - n_1) + v_p2 * (y - n_2) + v_p3 * (z - n_3) = 0 (13)
Combining equation (11) with equation (13) gives:
t = [v_p1 * (n_1 - x_c) + v_p2 * (n_2 - y_c) + v_p3 * (n_3 - z_c)] / (v_p1 * v_1 + v_p2 * v_2 + v_p3 * v_3) (14)
Substituting the known quantities into equation (14) gives t as a function of θ:
t = [-sin θ * (n_2 - y_c) + cos θ * (n_3 - z_c)] / (-sin θ * v_2 + cos θ * v_3) (15)
Substituting t into equation (11) gives:
P = (x_c + v_1 * t, y_c + v_2 * t, z_c + v_3 * t) (16)
a three-dimensional map point P (x, y, z) is obtained.
The step 4) comprises the following steps: two points P_i and P_j are selected on the same horizontal line of the 2D lane line image; let the corresponding points of P_i and P_j on the inclined road with gradient angle θ be w_i and w_j, respectively; all θ angles within a certain range are traversed, and the slope angle θ is obtained when ||w_i w_j| - k| is minimum, where k is the road width.
The step 5) comprises the following steps: substituting θ into equation (16) yields P(x, y, z).
The beneficial effects of the invention are as follows:
the existing research on reconstructing a 2D lane line to a 3D lane line of a monocular image is generally based on modeling an ideal flat road surface, wherein the ideal flat road surface does not consider factors such as the gradient of a road, but in a real driving scene, the gradient information of the road is not negligible; according to the reconstruction method from the 2D lane line to the 3D lane line considering the real road gradient information, after the two-dimensional point set of the lane line is acquired on the monocular image, in order to further acquire the three-dimensional information of the lane line, the pixel coordinates of the extracted two-dimensional lane line are converted into the three-dimensional coordinates of the real world, and the gradient information of the road is added into the coordinate conversion, so that the reconstruction method more meets the requirements of a real driving scene, improves the accuracy of 3D lane line detection, and reduces the use cost.
Drawings
Fig. 1 is a flow chart illustrating the present invention.
Fig. 2 is a top view of the vehicle in world coordinate system.
Fig. 3 is a front view of the vehicle in the world coordinate system.
Fig. 4 is a right side view of the vehicle in the world coordinate system.
Fig. 5 is a schematic illustration of point P on a real road slope.
Fig. 6 is a schematic diagram of point P_0 on the 2D lane line.
Fig. 7 is a schematic diagram of points P_i and P_j on the 2D lane line.
Fig. 8 is a schematic diagram of the embodiment of fig. 7.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As shown in Fig. 1, a monocular image-based reconstruction method from a 2D lane line to a 3D lane line comprises the following steps.
1) A world coordinate system and a camera coordinate system are established.
Referring to Fig. 2, the world coordinate system is defined as follows: point O is the origin of the world coordinate system and is located at the center of the vehicle; the X axis is perpendicular to the vehicle's direction of travel and lies on the horizontal ground; the Y axis points in the vehicle's direction of travel, lies on the horizontal ground, and is in the same vertical plane as the vehicle center line; the positive Z axis points upward; point C is the camera, mounted at the very center of the vehicle windshield.
The origin of the camera coordinate system is the optical center of the camera.
2) Solving the three-dimensional mapping points on the horizontal ground: solve the three-dimensional mapping point P_w(x_w, y_w, 0) of the 2D lane line pixel coordinates on the monocular image in the world coordinate system plane z = 0 (i.e., an ideal flat road).
Under the ideal flat road condition, the three-dimensional mapping point P_w(x_w, y_w, 0) of the 2D lane line pixel coordinates on the monocular image is obtained by using the intersection of a ray with a known plane; this intersection is the world coordinate corresponding to the pixel point in the image.
The measurement plane is a plane with z=0 in the world coordinate system, namely the horizontal ground.
A pixel point P_0 on the 2D lane line has coordinates (u, v) in the pixel coordinate system, (x, y) in the image coordinate system, and (x, y, f)^T in the camera coordinate system, where f is the focal length of the camera. In the camera coordinate system, the ray corresponding to point P_0 passes through the optical center of the camera, and the intersection of this ray with the imaging plane of the camera is (x, y, f)^T.
The ray equation can be determined by two points. For the first point, select the optical center of the camera, whose coordinates in the camera coordinate system are (0, 0, 0)^T. For the second point, select the intersection (x, y) of the ray with the imaging plane; considering that the imaging plane lies in front of the optical center at distance f, the coordinates of this second point on the ray are (x, y, f)^T. Since two points determine a straight line, the ray passes through (0, 0, 0)^T and its direction vector is (x, y, f)^T.
therefore, the vector equation of light in the camera coordinate system is:
wherein t is a parameter, t ε R.
Since the intersection points in the world coordinate system are finally solved, the ray equations need to be built in the world coordinate system. The mapping relation from the point of the world coordinate system to the camera coordinate system is as follows:
P_c = R P_w + T (2)
where P_w is a point in the world coordinate system, P_c is the mapping of P_w in the camera coordinate system, R is the rotation matrix, and T is the offset matrix;
the rewrites (2) to obtain:
P_w = R^-1 (P_c - T) (3)
points in the camera coordinate system (0, 0) T And point (x, y, f) T P substituted into the above C Converting the three-dimensional coordinate system into a world coordinate system to obtain the following components:
wherein: o (O) w 、I w The coordinates of the optical center and the intersection point of the camera in a world coordinate system are respectively;
Similarly, in the world coordinate system two points determine a straight line: the ray passes through the point O_w, and its direction vector is d = I_w - O_w.
the vector equation of light in world coordinate system is:
L_w = O_w + (I_w - O_w) * t (5)
Combining the ray equation with the known plane equation
z = 0 (6)
gives:
L_wz = O_z + d_z * t = 0 (7)
The above is an intermediate formula obtained by combining the ray equation with the known plane equation; L_wz denotes the z component of the spatial line L_w, and setting it to the known value 0 determines the parameter t and hence the expressions for x and y.
Solving for the three-dimensional mapping point P_w(x_w, y_w, 0) of pixel point P_0 in the world coordinate system plane z = 0, i.e., under the ideal flat road condition:
t = -O_z / d_z, x_w = O_x + d_x * t (8)
y_w = O_y + d_y * t (9)
The three-dimensional mapping point P_w(x_w, y_w, 0) of pixel point P_0 on the 2D lane line in the world coordinate system plane z = 0 is obtained.
In summary, the mapping point in the world coordinate system plane z = 0 of a pixel point on the 2D lane line is obtained. Since z = 0 is the horizontal ground, the three-dimensional mapping point on the corresponding horizontal ground of a point on the 2D lane line image, when the road gradient is not considered (i.e., θ = 0), is P_w(x_w, y_w, 0).
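As a concrete numerical illustration of step 2), the following sketch back-projects an image-plane point through the optical center and intersects the resulting ray with the ground plane z = 0, following equations (3)-(7). This is a minimal assumed implementation, not code from the patent; the overhead-camera extrinsics in the example are chosen purely for demonstration.

```python
import numpy as np

def ray_ground_intersection(R, T, x, y, f):
    """Map an image-plane point (x, y) of a camera with focal length f to its
    intersection with the horizontal ground z = 0, in world coordinates.
    Extrinsics follow the text's convention P_c = R @ P_w + T."""
    R_inv = np.linalg.inv(R)
    O_w = R_inv @ (np.zeros(3) - T)          # optical center in world frame
    I_w = R_inv @ (np.array([x, y, f]) - T)  # image-plane point in world frame
    d = I_w - O_w                            # ray direction vector
    t = -O_w[2] / d[2]                       # solve O_z + d_z * t = 0 (eq. (7))
    return O_w + d * t                       # the point (x_w, y_w, 0)

# assumed toy setup: camera 2 m above the origin looking straight down,
# so the camera z axis (optical axis) points at the ground
R = np.diag([1.0, -1.0, -1.0])
T = np.array([0.0, 0.0, 2.0])
print(ray_ground_intersection(R, T, 0.0, 0.0, 1.0))  # center pixel maps to the origin
print(ray_ground_intersection(R, T, 0.1, 0.0, 1.0))  # off-center pixel lands 0.2 m away
```

With any invertible extrinsics the same two lines (invert the rigid transform, solve the z component for t) realize the flat-ground mapping that IPM assumes.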
3) The three-dimensional map point P (x, y, z) considering gradient information is solved.
Solve the three-dimensional mapping point P(x, y, z) considering gradient information. Referring to Figs. 5 and 6, point P_0 on the 2D lane line image in Fig. 6 corresponds to point P(x, y, z) on the road slope in Fig. 5, whose mapping point on the horizontal ground is P_w(x_w, y_w, 0); step 2) has already given P_w(x_w, y_w, 0).
By the principle of perspective projection, points C, P and P_w lie on one straight line, so the problem of solving the three-dimensional coordinates of point P on the slope is converted into finding the intersection of the line CP_w with the slope plane.
Let the direction vector of the ray be α = (v_1, v_2, v_3)^T. From the coordinates of C and P_w, the direction vector of the ray is:
α = P_w - C = (x_w - x_c, y_w - y_c, -z_c)^T (10)
The ray passes through the known point C(x_c, y_c, z_c) with known direction vector α, so ray CP_w can be written as the parametric equation:
(x, y, z)^T = (x_c, y_c, z_c)^T + (v_1, v_2, v_3)^T * t (11)
Let the slope plane pass through the point n(n_1, n_2, n_3); since the slope plane is obtained by rotating the horizontal ground about the x axis by an angle θ, the normal vector of the slope plane is:
v_p = (v_p1, v_p2, v_p3)^T = (0, -sin θ, cos θ)^T (12)
the n point is located on the horizontal ground in the world coordinate system, can be selected at will, and can also directly take the origin of the world coordinate system, namely the center of the vehicle.
The point-normal equation of the slope plane is:
v_p1 * (x - n_1) + v_p2 * (y - n_2) + v_p3 * (z - n_3) = 0 (13)
Combining equation (11) with equation (13) gives:
t = [v_p1 * (n_1 - x_c) + v_p2 * (n_2 - y_c) + v_p3 * (n_3 - z_c)] / (v_p1 * v_1 + v_p2 * v_2 + v_p3 * v_3) (14)
Substituting the known quantities into equation (14) expresses t as a function of θ:
t = [-sin θ * (n_2 - y_c) + cos θ * (n_3 - z_c)] / (-sin θ * v_2 + cos θ * v_3) (15)
where the known quantities include the world coordinates of point n, the world coordinates of camera C, and v_1, v_2, v_3.
Substituting t into equation (11) yields P(x, y, z), whose coordinates are still functions of θ:
P = (x_c + v_1 * t, y_c + v_2 * t, z_c + v_3 * t) (16)
P(x, y, z) is the three-dimensional mapping point.
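The chain from equation (11) through equation (16) reduces to one ray-plane intersection. A minimal sketch follows, assuming the slope normal (0, -sin θ, cos θ), i.e. a rotation about the x axis that raises the road ahead; the function name and example numbers are illustrative, not taken from the patent.

```python
import numpy as np

def slope_point(C, Pw, n, theta):
    """Intersect the ray from camera center C through the flat-ground mapping
    point Pw with the slope plane through point n, where the plane is the
    horizontal ground rotated about the x axis by theta (radians)."""
    C, Pw, n = (np.asarray(a, dtype=float) for a in (C, Pw, n))
    v_p = np.array([0.0, -np.sin(theta), np.cos(theta)])  # slope-plane normal
    alpha = Pw - C                                        # ray direction
    t = (v_p @ (n - C)) / (v_p @ alpha)                   # solve eq. (13) along eq. (11)
    return C + t * alpha                                  # P(x, y, z), eq. (16)

# with theta = 0 the slope plane is the horizontal ground itself,
# so the result coincides with Pw
print(slope_point([0.0, 0.0, 1.5], [0.2, 3.0, 0.0], [0.0, 0.0, 0.0], 0.0))
```

For theta > 0 the returned point lies on the tilted plane and is closer to the camera along the ray than the flat-ground estimate, which is exactly the correction the method is after.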
4) The traversal method solves for the slope angle θ.
Referring to Fig. 7, two points P_i and P_j are selected on the same horizontal line of the 2D lane line image. Let the corresponding points of P_i and P_j on the inclined road with gradient angle θ be w_i and w_j, respectively. From step 3), w_i and w_j are functions of θ, so the distance |w_i w_j| between w_i and w_j is also a function of θ. Therefore, all θ angles within a certain range (for example, 0 to 45 degrees) can be traversed to obtain the corresponding values of |w_i w_j|. Given that the true width of the road is k, the θ at which the value of ||w_i w_j| - k| is minimum is taken as the inclination of the road.
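The traversal of step 4) can be sketched as a one-dimensional grid search. This is an assumed illustration: the 0.1-degree step and the helper names are choices for demonstration, and slope_point repeats the step 3) intersection so the block is self-contained.

```python
import numpy as np

def slope_point(C, Pw, n, theta):
    """Ray-slope-plane intersection from step 3)."""
    C, Pw, n = (np.asarray(a, dtype=float) for a in (C, Pw, n))
    v_p = np.array([0.0, -np.sin(theta), np.cos(theta)])
    alpha = Pw - C
    t = (v_p @ (n - C)) / (v_p @ alpha)
    return C + t * alpha

def find_slope_angle(C, Pw_i, Pw_j, n, k, step_deg=0.1, max_deg=45.0):
    """Traverse theta in [0, max_deg] degrees and return the angle at which the
    distance |w_i w_j| between the two mapped lane points best matches the
    known road width k."""
    best_theta, best_err = 0.0, float("inf")
    for theta in np.deg2rad(np.arange(0.0, max_deg + step_deg, step_deg)):
        wi = slope_point(C, Pw_i, n, theta)
        wj = slope_point(C, Pw_j, n, theta)
        err = abs(np.linalg.norm(wi - wj) - k)  # | |w_i w_j| - k |
        if err < best_err:
            best_theta, best_err = theta, err
    return best_theta
```

The 0.1-degree grid trades accuracy for speed; a finer step, or a bracketing search over the same objective, would refine θ further.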
5) And calculating the coordinates of the 3D lane line.
Substituting the θ obtained by the traversal method into equation (16) gives the specific value of P(x, y, z), i.e., the 3D lane line coordinates that take the real road gradient information into account, thereby completing the monocular image-based reconstruction from the 2D lane line to the 3D lane line.
The conversion process from the world coordinate system to the pixel coordinate system (related to the principle of solving the three-dimensional mapping points on an ideal flat road) is described below.
Referring to Figs. 5 and 6, point P_0 on the 2D lane line in Fig. 6 maps to the corresponding point P_w on the flat road in Fig. 5.
Let the world coordinate system be (X_w, Y_w, Z_w), the camera coordinate system (X_c, Y_c, Z_c), the image coordinate system (x, y), and the pixel coordinate system (u, v). First, the world coordinate system is converted into the camera coordinate system; this step is a rigid transformation, i.e., the object does not deform and only rotation and translation are required.
The transformation from the world coordinate system to the camera coordinate system can be expressed as:
(X_c, Y_c, Z_c)^T = R_x R_y R_z (X_w, Y_w, Z_w)^T + T (101)
where T is the translation matrix and R_x, R_y, R_z are the rotation matrices obtained by rotating the world coordinate system about the x, y and z axes by angles α, β and γ, respectively; equation (101) can be further abbreviated as:
P_c = R P_w + T (102)
wherein: r is a rotation matrix of 3*3 and T is a translation matrix of 3*1;
the conversion from camera coordinate system to image coordinate system uses pinhole imaging principle, and the conversion expression is:
wherein: f is the focal length of the camera;
the image coordinate system and the pixel coordinate system are all on the same imaging plane, but the respective origins and measurement units are different, and the conversion from the image coordinate system to the pixel coordinate system involves scaling and translation;
let the origin of the pixel coordinate system be (u) 0 ,v 0 ) The conversion relationship between the two can be expressed as:
where dx and dy denote the physical size represented by a unit pixel in each column and row, in millimeters per pixel;
the above written matrix is in the form of:
the conversion relation from the world coordinate system to the pixel coordinate system can be obtained through the conversion of the four coordinate systems:
wherein:is an internal reference matrix of the camera,>is an extrinsic matrix of the camera;
the internal reference matrix and the external reference matrix can be obtained through calibration of a camera.
Therefore, through the above steps, the pixel coordinates on the two-dimensional image corresponding to a given point in the three-dimensional world coordinate system can be calculated.
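The four-coordinate-system chain ending in equation (107) can be sketched as a single forward projection. The structure (rigid transform, pinhole division, pixel scaling) follows the text; the numeric intrinsics in the example are assumptions for illustration only.

```python
import numpy as np

def world_to_pixel(Pw, R, T, f, dx, dy, u0, v0):
    """Project a world point to pixel coordinates: world -> camera (rigid),
    camera -> image (pinhole), image -> pixel (scale and shift)."""
    Pc = R @ np.asarray(Pw, dtype=float) + T   # rigid transform into the camera frame
    x = f * Pc[0] / Pc[2]                      # pinhole projection onto the image plane
    y = f * Pc[1] / Pc[2]
    return np.array([x / dx + u0, y / dy + v0])  # pixel scaling and principal-point shift

# illustrative intrinsics: normalized focal length 1, 0.01 units per pixel,
# principal point at the center of a 640x480 image
R, T = np.eye(3), np.array([0.0, 0.0, 5.0])
print(world_to_pixel([0.0, 0.0, 0.0], R, T, 1.0, 0.01, 0.01, 320.0, 240.0))  # principal point
print(world_to_pixel([1.0, 0.0, 0.0], R, T, 1.0, 0.01, 0.01, 320.0, 240.0))  # shifted along u
```

Inverting this mapping requires the unknown depth Z_c, which is exactly why the method constrains the point to a known plane instead.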
The mapping from pixel coordinates to world coordinates is the inverse of the above steps. However, for a two-dimensional-to-three-dimensional mapping the depth information Z_c in equation (107) is unknown, so the mapping is not a simple matter of inverting the matrix.
There are generally two methods for mapping pixel coordinates to world coordinates: the first method requires using multiple cameras to simultaneously capture two or more images of the same object in different spaces to make a measurement; the second method requires only a single camera to capture the object under test, but the object must be placed on a known plane.
The 3D lane line reconstruction related to the invention is based on a monocular image, so a second method is selected.
Preferred embodiments of the present invention are shown in the drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.

Claims (6)

1. A reconstruction method from a 2D lane line to a 3D lane line based on a monocular image comprises the following steps:
1) Establishing a world coordinate system and a camera coordinate system;
2) Solving three-dimensional mapping points on the horizontal ground;
3) Solving three-dimensional mapping points considering gradient information;
4) Solving a gradient angle by a traversal method;
5) Calculating the coordinates of the 3D lane line.
2. The monocular image-based 2D lane line to 3D lane line reconstruction method of claim 1, wherein: in the step 1), an origin of the world coordinate system is a vehicle center, and an origin of the camera coordinate system is an optical center of the camera.
3. A monocular image based 2D lane line to 3D lane line reconstruction method according to claim 2, wherein: the step 2) comprises the following steps:
A plane z = 0 is selected in the world coordinate system; in the camera coordinate system, the optical center of the camera has coordinates (0, 0, 0)^T. A pixel point P_0 on the 2D lane line has coordinates (u, v) in the pixel coordinate system, (x, y) in the image coordinate system, and (x, y, f)^T in the camera coordinate system. In the camera coordinate system, the ray corresponding to point P_0 passes through the optical center of the camera, and the intersection of this ray with the imaging plane of the camera is (x, y, f)^T.
The vector equation of the ray in the camera coordinate system is:
L_c = (0, 0, 0)^T + (x, y, f)^T * t (1)
the mapping relation from the point of the world coordinate system to the camera coordinate system is as follows:
P_c = R P_w + T (2)
where P_w is a point in the world coordinate system, P_c is the mapping of P_w in the camera coordinate system, R is the rotation matrix, and T is the offset matrix;
Rewriting equation (2) gives:
P_w = R^-1 (P_c - T) (3)
Substituting the camera-coordinate points (0, 0, 0)^T and (x, y, f)^T into equation (3) gives:
O_w = R^-1 ((0, 0, 0)^T - T), I_w = R^-1 ((x, y, f)^T - T) (4)
where O_w and I_w are the world coordinates of the camera optical center and of the intersection point, respectively;
the vector equation of light in world coordinate system is:
L_w = O_w + (I_w - O_w) * t (5)
Combining the ray equation with the known plane equation
z = 0 (6)
gives:
L_wz = O_z + d_z * t = 0 (7)
where d = I_w - O_w is the direction vector of the ray;
solving for x w And y w
The three-dimensional mapping point P_w(x_w, y_w, 0) of pixel point P_0 on the 2D lane line in the world coordinate system plane z = 0 is obtained.
4. A monocular image-based 2D lane line to 3D lane line reconstruction method according to claim 3, wherein: the step 3) comprises the following steps:
let the direction vector α of the light be:
according to C, P w And obtaining the direction vector of the light:
listing ray CP w Is written as a parametric equation:
Let the slope plane pass through the point n(n_1, n_2, n_3); since the slope plane is obtained by rotating the horizontal ground about the x axis by an angle θ, the normal vector of the slope plane is:
v_p = (v_p1, v_p2, v_p3)^T = (0, -sin θ, cos θ)^T (12)
the point French equation for the slope plane is:
v_p1 * (x - n_1) + v_p2 * (y - n_2) + v_p3 * (z - n_3) = 0 (13)
Combining equation (11) with equation (13) gives:
t = [v_p1 * (n_1 - x_c) + v_p2 * (n_2 - y_c) + v_p3 * (n_3 - z_c)] / (v_p1 * v_1 + v_p2 * v_2 + v_p3 * v_3) (14)
Substituting the known quantities into equation (14) gives t as a function of θ:
t = [-sin θ * (n_2 - y_c) + cos θ * (n_3 - z_c)] / (-sin θ * v_2 + cos θ * v_3) (15)
Substituting t into equation (11) gives:
P = (x_c + v_1 * t, y_c + v_2 * t, z_c + v_3 * t) (16)
a three-dimensional map point P (x, y, z) is obtained.
5. The monocular image-based 2D lane line to 3D lane line reconstruction method of claim 4, wherein: the step 4) comprises the following steps: two points P_i and P_j are selected on the same horizontal line of the 2D lane line image; let the corresponding points of P_i and P_j on the inclined road with gradient angle θ be w_i and w_j, respectively; all θ angles within a certain range are traversed, and the slope angle θ is obtained when ||w_i w_j| - k| is minimum, where k is the road width.
6. The monocular image-based 2D lane line to 3D lane line reconstruction method of claim 5, wherein: the step 5) comprises the following steps: substituting θ into equation (16) yields P(x, y, z).
CN202311658373.3A 2023-12-06 2023-12-06 Monocular image-based reconstruction method from 2D lane line to 3D lane line Active CN117611438B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311658373.3A CN117611438B (en) 2023-12-06 2023-12-06 Monocular image-based reconstruction method from 2D lane line to 3D lane line

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311658373.3A CN117611438B (en) 2023-12-06 2023-12-06 Monocular image-based reconstruction method from 2D lane line to 3D lane line

Publications (2)

Publication Number Publication Date
CN117611438A true CN117611438A (en) 2024-02-27
CN117611438B CN117611438B (en) 2024-10-11

Family

ID=89951409

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311658373.3A Active CN117611438B (en) 2023-12-06 2023-12-06 Monocular image-based reconstruction method from 2D lane line to 3D lane line

Country Status (1)

Country Link
CN (1) CN117611438B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115995161A (en) * 2023-02-01 2023-04-21 华人运通(上海)自动驾驶科技有限公司 Method and electronic device for determining parking position based on projection

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018086764A1 (en) * 2016-11-10 2018-05-17 Lacos Computerservice Gmbh Method for predictively generating data for controlling a travel path and an operating sequence for agricultural vehicles and machines
CN110148169A (en) * 2019-03-19 2019-08-20 长安大学 A kind of vehicle target 3 D information obtaining method based on PTZ holder camera
CN110176022A (en) * 2019-05-23 2019-08-27 广西交通科学研究院有限公司 A kind of tunnel overall view monitoring system and method based on video detection
CN112102413A (en) * 2020-07-22 2020-12-18 西安交通大学 Virtual lane line-based automatic calibration method for vehicle-mounted camera
CN115655205A (en) * 2022-11-16 2023-01-31 清智汽车科技(苏州)有限公司 Method and device for assisting distance measurement by using lane
CN116665166A (en) * 2023-05-18 2023-08-29 南京航空航天大学 Intelligent vehicle 3D target detection method suitable for uneven road surface scene


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Yuanlong et al.: "Research on lane line recognition methods for intelligent vehicles", Journal of Chongqing University of Technology, 31 December 2022 (2022-12-31) *


Also Published As

Publication number Publication date
CN117611438B (en) 2024-10-11

Similar Documents

Publication Publication Date Title
CN111062873B (en) Parallax image splicing and visualization method based on multiple pairs of binocular cameras
CN110148169B (en) Vehicle target three-dimensional information acquisition method based on PTZ (pan/tilt/zoom) pan-tilt camera
US9451236B2 (en) Apparatus for synthesizing three-dimensional images to visualize surroundings of vehicle and method thereof
CA2395257C (en) Any aspect passive volumetric image processing method
JP7502440B2 (en) Method for measuring the topography of an environment - Patents.com
SG189284A1 (en) Rapid 3d modeling
CN110648274B (en) Method and device for generating fisheye image
CN117611438B (en) Monocular image-based reconstruction method from 2D lane line to 3D lane line
WO2000007373A1 (en) Method and apparatus for displaying image
CN104463778A (en) Panoramagram generation method
CN111028155A (en) Parallax image splicing method based on multiple pairs of binocular cameras
CN111091076B (en) Tunnel limit data measuring method based on stereoscopic vision
CN104539928A (en) Three-dimensional printing image synthesizing method for optical grating
CN111009030A (en) Multi-view high-resolution texture image and binocular three-dimensional point cloud mapping method
CN112037159A (en) Cross-camera road space fusion and vehicle target detection tracking method and system
CN113763569B (en) Image labeling method and device used in three-dimensional simulation and electronic equipment
CN114998448B (en) Multi-constraint binocular fisheye camera calibration and space point positioning method
CN112489106A (en) Video-based vehicle size measuring method and device, terminal and storage medium
CN108154536A (en) The camera calibration method of two dimensional surface iteration
CN115239922A (en) AR-HUD three-dimensional coordinate reconstruction method based on binocular camera
CN108444451B (en) Planet surface image matching method and device
CN117830172A (en) Three-dimensional travelable area detection method and device based on monocular RGB camera
CN114935316B (en) Standard depth image generation method based on optical tracking and monocular vision
Li et al. Distortion correction algorithm of ar-hud virtual image based on neural network model of spatial continuous mapping
CN115100290A (en) Monocular vision positioning method, monocular vision positioning device, monocular vision positioning equipment and monocular vision positioning storage medium in traffic scene

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant