CN115507814A - Vehicle target distance measuring method, device, medium and vehicle

Info

Publication number
CN115507814A
Authority
CN
China
Prior art keywords
target
image
normalized
world
lane
Prior art date
Legal status
Pending
Application number
CN202211085543.9A
Other languages
Chinese (zh)
Inventor
刘刚江
张鹏
易成伟
Current Assignee
Foss Hangzhou Intelligent Technology Co Ltd
Original Assignee
Foss Hangzhou Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Foss Hangzhou Intelligent Technology Co Ltd
Priority to CN202211085543.9A
Publication of CN115507814A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00 Measuring distances in line of sight; Optical rangefinders

Landscapes

  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present application relates to the field of image processing technologies, and in particular to a vehicle target ranging method, apparatus, medium, and vehicle. The method comprises the following steps: acquiring an image to be detected captured by an acquisition device mounted on the ego vehicle; determining two target lane lines based on the image to be detected, and determining target track points corresponding to preset portions of the two target lane lines in the image to be detected and position information of a target obstacle; converting the target track points and the position information into a normalized coordinate system, respectively, to obtain corresponding normalized track point coordinates and normalized target coordinates, and performing straight-line fitting to obtain a normalized virtual lane line; predicting the world lane width based on the normalized virtual lane line and the installation height of the acquisition device; determining the image target lane width corresponding to the target obstacle based on the normalized target coordinates; and determining the target distance between the ego vehicle and the target obstacle based on the ratio of the world lane width to the image target lane width. The accuracy and stability of target ranging can thereby be improved.

Description

Vehicle target distance measuring method, device, medium and vehicle
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method, an apparatus, a medium, and a vehicle for ranging a vehicle target.
Background
With ongoing technical progress, sensing devices are becoming increasingly intelligent, and distance measurement is an important supporting technology for them. For example, autonomous driving is currently a research hotspot in the field of transportation vehicles; an autonomous driving system comprises multiple perception systems, such as visual perception systems based on computer image processing, which are widely applied in the autonomous driving field and can be used for the identification of and distance measurement to vehicle obstacles.
Because monocular cameras are inexpensive, they are often used in the autonomous driving field to acquire image data, from which the distance to a target is then measured.
Common image-based target ranging methods in the industry (taking autonomous driving as an example) include: the vanishing point method, based on the pixel position of the middle point of the bottom edge of the target vehicle (target obstacle), which measures the distance to the target vehicle with a similar-triangle construction from the vanishing point and the lower edge of the target vehicle; and vehicle width and height estimation, which measures the distance with a similar-triangle construction from the image height and the actual height of the target vehicle. However, the vanishing point method assumes that the lane lies in a plane, so its ranging result is inaccurate for targets on lanes with uphill or downhill slopes and curves. Vehicle width and height estimation is affected by the vehicle type detection result: because vehicle shapes vary widely, and under environmental influences such as distance, the error rate of the vehicle type detection result is high, making the target ranging result inaccurate.
To address the problems that target ranging is inaccurate and easily influenced by the vehicle type detection result, a vehicle target ranging method, apparatus, medium, and vehicle are provided.
Disclosure of Invention
The embodiments of the present application provide a vehicle target ranging method, apparatus, medium, and vehicle, which can improve the accuracy of target ranging, prevent the target ranging result from being influenced by the vehicle type detection result, and improve the stability of the target ranging result.
In a first aspect, an embodiment of the present application provides a vehicle target ranging method, where the method includes:
acquiring an image to be detected captured by an acquisition device mounted on an ego vehicle, where the image to be detected contains a target obstacle relevant to the ego vehicle;
determining two target lane lines based on the image to be detected, and determining target track points corresponding to preset portions of the two target lane lines in the image to be detected and position information of the target obstacle in the image to be detected;
converting the target track points and the position information into a normalized coordinate system, respectively, to obtain corresponding normalized track point coordinates and normalized target coordinates, and performing straight-line fitting on the normalized track point coordinates to obtain a normalized virtual lane line;
predicting a world lane width in a world coordinate system based on the normalized virtual lane line and the installation height of the acquisition device;
determining an image target lane width corresponding to the target obstacle based on the normalized target coordinates;
and determining the target distance between the ego vehicle and the target obstacle in the world coordinate system based on the ratio of the world lane width to the image target lane width.
The image target lane width in this embodiment refers to the width between the two target lane lines at the normalized target coordinates.
In some optional embodiments, predicting the world lane width in a world coordinate system based on the normalized virtual lane line and the installation height of the acquisition device includes:
predicting a world virtual lane line in the world coordinate system based on the normalized virtual lane line and the installation height of the acquisition device, where the world virtual lane line extends horizontally;
and determining the world lane width based on the world virtual lane line.
In some optional embodiments, determining the two target lane lines based on the image to be detected includes:
determining two longest target virtual lane lines on the left side and the right side of a longitudinal central line in the image to be detected based on the longitudinal central line in the image to be detected along the extending direction of the lane;
and determining the two target virtual lane lines as the two target lane lines.
In some optional embodiments, the preset portion comprises the first 1/4 portion from the starting ends of the two target lane lines in the image to be detected.
In some alternative embodiments, the line fitting is based on a least squares method.
In some optional embodiments, the position information of the target obstacle in the image to be detected includes target pixel coordinates corresponding to the target obstacle in the image to be detected;
the determining the corresponding target track points of the preset parts of the two target lane lines in the image to be detected comprises the following steps:
and determining the target track point pixel coordinates of the corresponding target track points of the preset parts of the two target lane lines in the image to be detected.
In a second aspect, embodiments of the present application provide a vehicle target ranging device, including:
the acquisition module is used for acquiring an image to be detected captured by the acquisition device mounted on the ego vehicle; the image to be detected contains the target obstacle relevant to the ego vehicle;
the first determining module is used for determining two target lane lines based on the image to be detected, determining corresponding target track points of preset parts of the two target lane lines in the image to be detected and determining position information of the target obstacle in the image to be detected;
the data processing module is used for respectively converting the target track points and the position information into a normalized coordinate system to obtain corresponding normalized track point coordinates and normalized target coordinates, and performing straight-line fitting on the normalized track point coordinates to obtain a normalized virtual lane line;
the prediction module is used for predicting the world lane width in a world coordinate system based on the normalized virtual lane line and the installation height of the acquisition device;
the second determining module is used for determining the width of an image target lane corresponding to the target obstacle in the image to be detected based on the normalized target coordinate;
and the third determining module is used for determining the target distance between the ego vehicle and the target obstacle in the world coordinate system based on the ratio of the world lane width to the image target lane width.
In some optional embodiments, the prediction module comprises:
the first prediction submodule is used for mapping the normalized virtual lane line into the world coordinate system based on the installation height of the acquisition device to obtain a world virtual lane line, where the world virtual lane line extends horizontally;
and the second prediction submodule is used for determining the world lane width based on the world virtual lane line.
In some optional embodiments, the first determining module is further configured to determine, based on a longitudinal center line in the image to be detected along the lane extension direction, the two longest target virtual lane lines on the left and right sides of the longitudinal center line in the image to be detected, and to determine the two target virtual lane lines as the two target lane lines.
In some optional embodiments, the position information of the target obstacle in the image to be detected includes target pixel coordinates corresponding to the target obstacle in the image to be detected;
the first determining module is further configured to determine target track point pixel coordinates of corresponding target track points of the preset portions of the two target lane lines in the image to be detected.
In a third aspect, the present application provides a vehicle comprising an electronic device, where the electronic device comprises a processor and a memory; the memory stores at least one instruction or at least one program, and the at least one instruction or the at least one program is loaded and executed by the processor to perform the vehicle target ranging method described above.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, where instructions, when executed by a processor of an electronic device, enable the electronic device to perform the above-mentioned vehicle target ranging method.
The method comprises: acquiring an image to be detected captured by an acquisition device mounted on the ego vehicle, where the image to be detected contains a target obstacle relevant to the ego vehicle; determining two target lane lines based on the image to be detected, and determining target track points corresponding to preset portions of the two target lane lines in the image to be detected and position information of the target obstacle in the image to be detected; converting the target track points and the position information into a normalized coordinate system, respectively, to obtain corresponding normalized track point coordinates and normalized target coordinates, and performing straight-line fitting on the normalized track point coordinates to obtain a normalized virtual lane line; predicting a world lane width in a world coordinate system based on the normalized virtual lane line and the installation height of the acquisition device; determining an image target lane width corresponding to the target obstacle based on the normalized target coordinates; and determining the target distance between the ego vehicle and the target obstacle in the world coordinate system based on the ratio of the world lane width to the image target lane width. This can improve the accuracy of target ranging, prevent the target ranging result from being influenced by the vehicle type detection result, and improve the stability of the target ranging result.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art based on these drawings without creative effort.
Fig. 1 is a scene schematic diagram of performing target ranging based on a vanishing point method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of a method for measuring a distance to a vehicle target according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating a method for measuring a distance to a target of a vehicle according to an embodiment of the present disclosure;
FIG. 4 is a right side view of the schematic diagram in FIG. 3;
FIG. 5 is a top view of the schematic diagram of FIG. 3;
fig. 6 is a schematic diagram of an image to be measured according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a normalized image provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a vehicle target ranging device provided in an embodiment of the present application;
FIG. 9 is a block diagram illustrating an electronic device for implementing a vehicle target ranging method according to an exemplary embodiment.
Detailed Description
In order to make those skilled in the art better understand the technical solutions of the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic may be included in at least one implementation of the invention. In describing the present invention, it is to be understood that the terms "first", "second", "third", "fourth", etc. in the description and claims of the present invention and the above drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have", as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Before introducing the vehicle target ranging method of the embodiment of the present application, a scene in which target ranging is performed based on a vanishing point method in the prior art is first introduced.
Referring to fig. 1, fig. 1 is a schematic view of a scene for performing target ranging based on a vanishing point method according to an embodiment of the present disclosure.
As shown in fig. 1, the acquisition device 10 is disposed, for example, on the roof of the ego vehicle and captures a target obstacle 20 appearing in the picture in front of the acquisition device 10. The ego vehicle predicts the distance between itself and the target obstacle 20 based on the image to be detected acquired by the acquisition device 10. The acquisition device 10 includes, for example, vehicle-mounted front-view and rear-view camera devices based on a monocular camera. Ranging methods between the ego vehicle and the target obstacle 20 based on the acquisition device 10 fall into two categories: a ranging method based on the vanishing point and a ranging method based on the obstacle type.
In the first, vanishing-point-based ranging method, the lanes are assumed to lie in a plane, i.e., there are no uphill or downhill slopes. Based on the lane line vanishing point v of the lane on the normalized plane G shown in fig. 1 (i.e., in the normalized image within the normalized plane G) and the grounding point b of the target vehicle in the image to be detected, the target distance between the target vehicle and the acquisition device 10, i.e., the distance between the target vehicle and the ego vehicle, is calculated by the similar-triangle method. Specifically, the similar triangles shown in fig. 1 are the triangle formed by the acquisition device 10, the lane line vanishing point v on the normalized plane, and the grounding point b of the target vehicle, and the triangle formed by the acquisition device 10, the foot of the perpendicular from the acquisition device 10 to the ground, and the target vehicle (i.e., the target obstacle 20 in fig. 1). The target distance d is calculated as follows:
d = H_c / (y_b - y_v)

where H_c is the mounting height of the acquisition device 10, y_b is the ordinate of the grounding point b of the target vehicle in the normalized plane G (the ordinate runs along the lane extension direction in fig. 1), and y_v is the ordinate of the lane line vanishing point v of the lane in the normalized plane G. The coordinates in the normalized plane G correspond one-to-one to the coordinates of the image to be detected.
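For illustration only (the function name and the sample values below are illustrative and not part of the present application), this background formula is a one-liner:

```python
def vanishing_point_distance(h_c, y_b, y_v):
    """Similar-triangle ranging from the lane line vanishing point (background method).

    h_c: mounting height of the acquisition device in meters;
    y_b: ordinate of the target's grounding point b in the normalized plane;
    y_v: ordinate of the lane line vanishing point v in the normalized plane.
    """
    return h_c / (y_b - y_v)

# e.g. h_c = 1.5 m and y_b - y_v = 0.05 give d = 30 m.
```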
In some embodiments, a double-vanishing-point method is used, in which a near target uses the near vanishing point and a far target uses the far vanishing point; this can improve the accuracy of vanishing-point-based ranging to a certain extent. However, in this first, vanishing-point-based ranging method, the vanishing point can be inaccurate, and because the real lane is a curved surface (including uphill, downhill, and turning sections), the ranging of distant targets is inaccurate; therefore, the second method is generally adopted at present for ranging distant targets.
In the second, obstacle-type-based ranging method, the type of the target obstacle 20 is first identified, that is, the vehicle type of the target vehicle is recognized. For example, if the target vehicle is a sport utility vehicle (SUV), the target distance between the ego vehicle and the target obstacle 20 is calculated based on the typical width and height of an SUV and the pixel width and pixel height of the target vehicle in the image to be detected. The target distance is therefore affected by the vehicle type recognition result. A sketch of this similar-triangle calculation follows.
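As a hedged sketch of this second method (the typical-height value and the names are assumptions, not from the patent), the calculation reduces to one pinhole similar-triangle line:

```python
def type_prior_distance(typical_height_m, pixel_height, focal_px):
    """Similar-triangle ranging from an assumed real height for the recognized type.

    typical_height_m: assumed typical height of the recognized vehicle type;
    pixel_height: bounding-box height of the target vehicle in pixels;
    focal_px: focal length of the camera in pixels.
    """
    return focal_px * typical_height_m / pixel_height

# e.g. an SUV assumed to be 1.7 m tall spanning 57 px under a 1000 px focal
# length gives roughly 29.8 m; any error in the assumed height scales the result.
```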
As described above, the vanishing-point-based ranging method yields inaccurate results for targets on lanes with uphill or downhill slopes and turns, while in the obstacle-type-based ranging method, the vehicle width and height estimation is susceptible to the vehicle type detection result.
To solve the above problems, the present application provides a vehicle target ranging method. Specifically, an image to be detected captured by an acquisition device mounted on the ego vehicle is acquired, where the image to be detected contains a target obstacle relevant to the ego vehicle; two target lane lines are determined based on the image to be detected, and target track points corresponding to preset portions of the two target lane lines in the image to be detected and position information of the target obstacle in the image to be detected are determined; the target track points and the position information are respectively converted into a normalized coordinate system to obtain corresponding normalized track point coordinates and normalized target coordinates, and straight-line fitting is performed on the normalized track point coordinates to obtain a normalized virtual lane line; a world lane width in a world coordinate system is predicted based on the normalized virtual lane line and the installation height of the acquisition device; an image target lane width corresponding to the target obstacle is determined based on the normalized target coordinates; and the target distance between the ego vehicle and the target obstacle in the world coordinate system is determined based on the ratio of the world lane width to the image target lane width. This can improve the accuracy of target ranging, prevent the target ranging result from being influenced by the vehicle type detection result, and improve the stability of the target ranging result.
A specific embodiment of the vehicle target ranging method of the present application is described below. Fig. 2 is a schematic flowchart of a vehicle target ranging method provided by an embodiment of the present application. This specification provides the method steps as described in the embodiments or flowcharts, but more or fewer steps may be included based on routine or non-inventive labor. The order of steps recited in the embodiments is merely one of many possible execution orders and does not represent the only order of execution; in practice, the system or server product may execute the steps sequentially or in parallel (for example, in a parallel-processor or multi-threaded environment) according to the methods shown in the embodiments or figures. Figs. 3 to 5 are schematic diagrams of the vehicle target ranging method: fig. 3 is a schematic diagram of a vehicle target ranging method provided by an embodiment of the present application, fig. 4 is a right side view of the schematic diagram in fig. 3, and fig. 5 is a top view of the schematic diagram in fig. 3. The method shown in fig. 2 is described in detail below with reference to figs. 3 to 5. Specifically, as shown in fig. 2, the method may include:
S201: acquiring an image to be detected captured by an acquisition device mounted on the ego vehicle; the image to be detected contains the target obstacle relevant to the ego vehicle.
For example, as shown in fig. 3, the acquisition device 10 is provided on the roof of the ego vehicle 11, and the acquisition device 10 acquires an image of the target obstacle 20 in front of the ego vehicle 11 as the image to be detected.
S203: determining two target lane lines based on the image to be detected, determining target track points corresponding to preset portions of the two target lane lines in the image to be detected, and determining position information of the target obstacle in the image to be detected.
Specifically, a pre-trained neural network model can be adopted to identify the lane lines in the image to be detected and to determine the two target lane lines and the target obstacle in the image to be detected; the target track points corresponding to the preset portions of the two target lane lines and the position information of the target obstacle are then identified in the image to be detected, where the position information includes the ordinate of the grounding point of the target obstacle in the image.
In some optional embodiments, determining two target lane lines based on the image to be detected includes:
determining the two longest target virtual lane lines on the left and right sides of a longitudinal center line in the image to be detected, based on the longitudinal center line along the lane extension direction;
and determining the two target virtual lane lines as the two target lane lines.
Fig. 6 is a schematic diagram of an image to be detected provided by an embodiment of the present application. As shown in fig. 6, the image to be detected includes a plurality of lane lines, and the longest lane line on each of the left and right sides of the longitudinal center line M of the image to be detected is determined as a target virtual lane line, namely the first target virtual lane line X1 and the second target virtual lane line X2. Which side of the longitudinal center line M a lane line lies on is determined based on the position of the lowest point of the lane line relative to the longitudinal center line M. The length of a lane line refers to the longitudinal distance between its starting point (lowest point) and its highest point; for example, the length L2 of the second target virtual lane line X2 is the longitudinal distance between the starting point and the highest point of X2. A selection of this kind is sketched below.
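A minimal sketch of this selection rule, assuming each detected lane line is given as an (N, 2) array of pixel points (the names are illustrative):

```python
import numpy as np

def pick_target_lane_lines(lane_lines, image_width):
    """Keep the longest detected lane line on each side of the vertical center line M.

    lane_lines: list of (N, 2) arrays of (u, v) pixel points, one per detected line.
    The side is decided by the line's lowest point (largest v); the length is the
    longitudinal (vertical) extent between its lowest and highest points.
    """
    center_u = image_width / 2.0
    best = {"left": None, "right": None}
    best_len = {"left": -1.0, "right": -1.0}
    for line in lane_lines:
        lowest_u = line[np.argmax(line[:, 1]), 0]     # abscissa of the start point
        length = line[:, 1].max() - line[:, 1].min()  # longitudinal extent in pixels
        side = "left" if lowest_u < center_u else "right"
        if length > best_len[side]:
            best_len[side], best[side] = length, line
    return best["left"], best["right"]
```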
In some optional embodiments, the preset portion includes a first 1/4 portion of a starting end of the two target lane lines in the image to be detected.
For example, the front 1/4 part of the starting ends of the two target lane lines is a part below the 1/4 boundary line N in fig. 6.
In some optional embodiments, the position information of the target obstacle in the image to be detected includes target pixel coordinates corresponding to the target obstacle in the image to be detected;
the determining the corresponding target track points of the preset parts of the two target lane lines in the image to be detected comprises the following steps:
and determining the target track point pixel coordinates of the corresponding target track points of the preset parts of the two target lane lines in the image to be detected.
S205: and respectively converting the target track point and the position information into a normalized coordinate system to obtain corresponding normalized track point coordinates and normalized target coordinates, and performing linear fitting on the normalized track point coordinates to obtain a normalized virtual lane line.
For example, the ordinate of the grounding point of the target obstacle in the image to be detected is converted into the normalized coordinate system of the normalized plane to obtain the normalized target coordinate.
When the pixel ordinate of the grounding point is ob, its ordinate y_obj' in the normalized coordinate system may be expressed as:

[x', y_obj', 1]^T = K^(-1) · [u_obj, ob, 1]^T    (1)

where K is the intrinsic parameter matrix of the acquisition device, u_obj is the pixel abscissa of the grounding point, and x' can be ignored.
As shown in fig. 3, the world coordinate system comprises mutually perpendicular coordinate axes X, Y, and Z, with coordinate axis X pointing horizontally and perpendicular to the capture direction, coordinate axis Y pointing toward the ground, and coordinate axis Z pointing along the capture direction of the acquisition device 10. In the world coordinate system, the two target lane lines correspond to the first world lane line J1 and the second world lane line J2 shown in fig. 3, which correspond to the first target virtual lane line X1 and the second target virtual lane line X2 in fig. 6, respectively.
Pixel points of the first 1/4 portions of the first target virtual lane line X1 and the second target virtual lane line X2 in the image to be detected are intercepted; for example, the pixel points corresponding to n target track points are taken on each of X1 and X2, and the 2n pixel points are converted into the normalized coordinate system to obtain 2n normalized track point coordinates. Specifically, the conversion method is as follows:
the first target virtual lane line X1 and the second target virtual lane line X2 are subjected to distortion removal to obtain a pixel coordinate (u) 1i ,v 1i ) Of the pixel point of (2), which normalizes the coordinate (x) in the coordinate system 1i ,y 1i ') can be determined by the following equation:
Figure BDA0003834844310000101
where K is the internal reference matrix of the acquisition device 10.
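As a hedged illustration (the intrinsic values fx = fy = 1000 and the principal point below are assumptions, not taken from the patent), the conversion of equations (1) and (2) is a single matrix multiplication:

```python
import numpy as np

# Illustrative intrinsics of the acquisition device; fx, fy, cx, cy are assumptions.
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
K_inv = np.linalg.inv(K)

def pixel_to_normalized(u, v):
    """Map an undistorted pixel coordinate to the normalized plane, as in eqs. (1)-(2)."""
    x, y, _ = K_inv @ np.array([u, v, 1.0])
    return x, y

# Only the ordinate is needed for the grounding point; x' can be ignored.
_, y_obj = pixel_to_normalized(960.0, 700.0)
```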
In some alternative embodiments, the line fitting is based on a least squares method.
For example, straight lines are fitted by the least squares method to the n normalized track point coordinates of each line, yielding a first normalized virtual lane line G1 and a second normalized virtual lane line G2 corresponding to the first target virtual lane line X1 and the second target virtual lane line X2, respectively; for example, the line equations of G1 and G2 are y = p1·x + q1 and y = p2·x + q2, where

p1 = (n·Σx_1i·y_1i' - Σx_1i·Σy_1i') / (n·Σ(x_1i)^2 - (Σx_1i)^2), q1 = (Σy_1i' - p1·Σx_1i) / n    (3)

and p2 and q2 are obtained in the same way from the points of the second line.
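A minimal sketch of this fit, assuming numpy and an (n, 2) array of one line's normalized points (equivalent to the closed form in equation (3)):

```python
import numpy as np

def fit_normalized_lane_line(points):
    """Least-squares line y = p*x + q through one lane line's normalized track points.

    points: (n, 2) array of (x_i, y_i') normalized coordinates.
    """
    p, q = np.polyfit(points[:, 0], points[:, 1], deg=1)  # degree-1 least squares
    return p, q

# p1, q1 = fit_normalized_lane_line(g1_points)
# p2, q2 = fit_normalized_lane_line(g2_points)
```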
S207: predicting the world lane width in the world coordinate system based on the normalized virtual lane line and the installation height of the acquisition device.
In some optional embodiments, predicting a world lane width in a world coordinate system based on the normalized virtual lane line and the installation height of the acquisition device includes:
mapping the normalized virtual lane line into the world coordinate system based on the installation height of the acquisition device to obtain a world virtual lane line, where the world virtual lane line extends horizontally;
and determining the world lane width based on the world virtual lane line.
As shown in fig. 3, the first normalized virtual lane line G1 and the second normalized virtual lane line G2 are mapped to the world coordinate system, and the first world virtual lane line S1 and the second world virtual lane line S2 are obtained. Determining the world lane width based on a first world virtual lane line S1 and a second world virtual lane line S2; and predicting a first world lane line J1 and a second world lane line J2. The specific process of predicting the first world lane line J1 and the second world lane line J2 is as follows:
Since the normalized coordinates in the normalized coordinate system correspond one-to-one to the pixel coordinates of the image to be detected, and since deriving directly in pixel coordinates would additionally introduce several parameters and hinder understanding, the derivation below is carried out with normalized coordinates.
Referring to fig. 4, in the y-z coordinate system, the first world virtual lane line S1 and the second world virtual lane line S2 can be described as y = h + a·z, where z is the independent variable, h is the installation height of the acquisition device 10 divided by the cosine of the installation pitch angle of the acquisition device 10 and is a known quantity, a is an unknown quantity related to the installation and pitch of the acquisition device 10, and y is the dependent variable; changes caused by the pitching of the ego vehicle 11 are ignored here. The actual first world lane line J1 and second world lane line J2 run uphill, with the slope following an equation h(z) whose form is unknown (not to be confused with the constant h), so the true first world lane line J1 and second world lane line J2 can be considered to satisfy: y = h + a·z - h(z).
Referring to fig. 5, in the x-z coordinate system, the first world virtual lane line S1 is a straight line that can be described as x = k·z + b1, and the second world virtual lane line S2 is a straight line that can be described as x = k·z + b2, where z is the independent variable. Similarly, if the actual world lane lines turn in one direction, the degree of turning follows an equation f(z) whose form is unknown, and the true first world lane line J1 and second world lane line J2 can be considered to satisfy x = k·z + b1 + f(z) and x = k·z + b2 + f(z), respectively. Strictly speaking, f(z) differs between the true first world lane line J1 and the true second world lane line J2, but for gentle curves this error has little influence on the final ranging and is negligible, so the two are taken as identical.
In summary, taking the first world lane line J1 as an example, the equation of the first world lane line J1 is:

x = k·z + b1 + f(z), y = h + a·z - h(z)    (4)
Mapping the first world lane line J1 into the normalized coordinate system, i.e., applying formula (5) to formula (4):

x' = x/z, y' = y/z    (5)
gives, in the normalized coordinate system, the first normalized virtual lane line G1 corresponding to the first world lane line J1, which conforms to formula (6):

x' = k + (b1 + f(z))/z, y' = a + (h - h(z))/z    (6)
Equations (7) and (8) are derived from equation (6), by taking its first component and solving its second component for z:

x' = k + (b1 + f(z))/z    (7)

z = (h - h(z))/(y' - a)    (8)
Substituting equation (8) into equation (7) yields equation (9):

x' = k + (b1 + f(z))·(y' - a)/(h - h(z))    (9)
equation (9) is an equation of the first normalized virtual lane line G1 in the normalized plane.
Similarly, starting from formula (4), the first normalized virtual lane line G1 in the normalized plane also conforms to formula (10):

y' = a + (x' - k)·(h - h(z))/(b1 + f(z))    (10)

Rearranging equation (10) into slope-intercept form gives:

y' = ((h - h(z))/(b1 + f(z)))·x' + a - k·(h - h(z))/(b1 + f(z))    (11)
In the lower 1/4 portion of the image, the first world virtual lane line S1 and the first world lane line J1 can be considered to substantially coincide, i.e., h(z) ≈ 0 and f(z) ≈ 0 there. Therefore, the straight-line equation y = p1·x + q1 of the first world virtual lane line S1, solved by the least squares method, approximates the linear equation of formula (11), from which it can be solved that:

b1 = h/p1    (12)
similarly, the equation of the second world lane line J2 can be calculated as the following equation (13):
Figure BDA0003834844310000128
Figure BDA0003834844310000129
In summary, the world lane lines in the world coordinate system, i.e., the first world lane line J1 and the second world lane line J2, which may include curved and sloped sections, are predicted based on the normalized virtual lane lines.
S209: and determining the width of the image target lane corresponding to the target obstacle based on the normalized target coordinates.
For example, subtracting equation (9) from equation (13) gives the lane width d in the normalized coordinate system:

d = (b2 - b1)·(y' - a)/(h - h(z)) = (b2 - b1)/z    (15)

where the second equality follows from equation (8), and y' is the ordinate y_obj' of the grounding point of the target obstacle 20 in the normalized coordinate system; since the height h is known, the image target lane width d corresponding to the ordinate y_obj' can be obtained. Fig. 7 is a schematic diagram of a normalized image provided by an embodiment of the present application; as shown in fig. 7, the target obstacle 20 corresponds to the image target lane width d, measured between the two target lane lines at y_obj'. A measurement sketch follows.
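A minimal sketch of this width measurement, assuming the two detected target lane lines are available as (N, 2) arrays of normalized (x', y') points sorted by ascending y' (the names are illustrative):

```python
import numpy as np

def image_lane_width(left_line, right_line, y_obj):
    """Width d between the two target lane lines at the target's normalized ordinate.

    left_line, right_line: (N, 2) arrays of normalized (x', y') lane-line points;
    x' at y_obj is obtained by linear interpolation between neighboring points.
    """
    x_left = np.interp(y_obj, left_line[:, 1], left_line[:, 0])
    x_right = np.interp(y_obj, right_line[:, 1], right_line[:, 0])
    return abs(x_right - x_left)
```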
Combining equations (12) and (14) with equation (15), the world lane width b2 - b1 can be solved:

b2 - b1 = h/p2 - h/p1 = h·(p1 - p2)/(p1·p2)    (16)
S211: determining the target distance between the ego vehicle and the target obstacle in the world coordinate system based on the ratio of the world lane width to the image target lane width.
For example, substituting equations (15) and (16) into equation (8) solves for the longitudinal coordinate Z (i.e., the target distance) of the target obstacle in the world coordinate system:

Z = (b2 - b1)/d = h·(p1 - p2)/(p1·p2·d)    (17)
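Putting equations (16) and (17) together, a minimal end-to-end sketch (the sample numbers are illustrative, not from the patent):

```python
def target_distance(p1, p2, h, d):
    """Target distance Z per eqs. (16)-(17): Z = (b2 - b1) / d with b_i = h / p_i.

    p1, p2: slopes of the fitted normalized virtual lane lines G1 and G2;
    h: installation height of the acquisition device divided by cos(pitch);
    d: image target lane width at the target's normalized ordinate.
    """
    world_lane_width = abs(h / p2 - h / p1)  # eq. (16)
    return world_lane_width / d              # eq. (17)

# With p1 = -0.8 and p2 = 0.8 (b1 = -1.875 m, b2 = 1.875 m for h = 1.5 m),
# a measured image width d = 0.125 gives Z = 3.75 / 0.125 = 30 m.
```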
in some embodiments, the lane width and target range finding at the far position are calculated based on the lane width and target range finding at the near position of the above-described acquisition device 10 as references, but this method may be affected by the near range finding error. The lane width depends on the lane line detection result and is not suitable for a curve scene.
In the above embodiment, on the basis that the two target lane lines lie on one curved surface, the equations of the first world lane line J1 and the second world lane line J2 of the two target lane lines in the world coordinate system are established. The image target lane width is obtained by mapping the real lane width into the normalized coordinate system, so the image lane width is not influenced by curves and curved surfaces. Therefore, the target distance calculated from the world lane width and the image target lane width requires no explicit computation of the lane line's curves and slopes, and the accuracy of the target distance is greatly improved.
An embodiment of the present application provides a vehicle target ranging apparatus, and fig. 8 is a schematic diagram of the vehicle target ranging apparatus provided by the embodiment of the present application, and as shown in fig. 8, the vehicle target ranging apparatus includes:
the acquisition module is used for acquiring an image to be detected captured by the acquisition device mounted on the ego vehicle; the image to be detected contains the target obstacle relevant to the ego vehicle;
the first determining module is used for determining two target lane lines based on the image to be detected, determining corresponding target track points of preset parts of the two target lane lines in the image to be detected and determining position information of the target obstacle in the image to be detected;
the data processing module is used for respectively converting the target track points and the position information into a normalized coordinate system to obtain corresponding normalized track point coordinates and normalized target coordinates, and performing straight-line fitting on the normalized track point coordinates to obtain a normalized virtual lane line;
the prediction module is used for predicting the world lane width in a world coordinate system based on the normalized virtual lane line and the installation height of the acquisition device;
the second determining module is used for determining the width of an image target lane corresponding to the target obstacle in the image to be detected based on the normalized target coordinate;
and the third determining module is used for determining the target distance between the ego vehicle and the target obstacle in the world coordinate system based on the ratio of the world lane width to the image target lane width.
In some optional embodiments, the prediction module comprises:
the first prediction submodule is used for mapping the normalized virtual lane line into the world coordinate system based on the installation height of the acquisition device to obtain a world virtual lane line, where the world virtual lane line extends horizontally;
and the second prediction submodule is used for determining the world lane width based on the world virtual lane line.
In some optional embodiments, the first determining module is further configured to determine, based on a longitudinal center line in the to-be-detected image along a lane extending direction, two longest target virtual lane lines on left and right sides of the longitudinal center line in the to-be-detected image; and determining the two target virtual lane lines as the two target lane lines.
In some optional embodiments, the position information of the target obstacle in the image to be detected includes target pixel coordinates corresponding to the target obstacle in the image to be detected;
the first determining module is further configured to determine target track point pixel coordinates of corresponding target track points of the preset portions of the two target lane lines in the image to be detected.
The device embodiments and the method embodiments of the present application are based on the same inventive concept.
FIG. 9 is a block diagram illustrating an electronic device for implementing a vehicle target ranging method in accordance with an exemplary embodiment.
The electronic device may be a server or a terminal device, and its internal structure may be as shown in fig. 9. The electronic device includes a processor, a memory, and a network interface connected by a system bus. The processor of the electronic device provides computing and control capabilities. The memory of the electronic device comprises a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the electronic device is used for connecting and communicating with an external terminal through a network. The computer program, when executed by the processor, implements a vehicle target ranging method.
Those skilled in the art will appreciate that the structure shown in fig. 9 is merely a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the electronic devices to which the solution applies; a particular electronic device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
Embodiments of the present application provide a vehicle comprising an electronic device, where the electronic device comprises a processor and a memory; the memory stores at least one instruction or at least one program, and the at least one instruction or the at least one program is loaded and executed by the processor to perform the vehicle target ranging method of the first aspect.
Embodiments of the present application provide a computer-readable storage medium, wherein when instructions of the computer-readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the vehicle target ranging method of the first aspect.
Optionally, in this embodiment, the storage medium may be located in at least one of a plurality of network servers of a computer network. Optionally, in this embodiment, the storage medium may include, but is not limited to: various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a removable hard disk, a magnetic disk, or an optical disk.
In an exemplary embodiment, there is also provided a computer program product including a computer program stored in a readable storage medium, from which at least one processor of a computer device reads and executes the computer program, so that the computer device performs the vehicle object ranging method of the embodiment of the present disclosure.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other media used in the embodiments provided herein can include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any changes or substitutions that can readily be conceived by a person skilled in the art within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the application and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components in the embodiments may be combined into one module or unit or component, and furthermore, may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Moreover, those of skill in the art will understand that although some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the application and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.

Claims (10)

1. A vehicle target ranging method, the method comprising:
acquiring an image to be detected captured by an acquisition device mounted on an ego vehicle; the image to be detected comprises a target obstacle relevant to the ego vehicle;
determining two target lane lines based on the image to be detected, determining corresponding target track points of preset parts of the two target lane lines in the image to be detected and determining position information of the target obstacle in the image to be detected;
respectively converting the target track points and the position information into a normalized coordinate system to obtain corresponding normalized track point coordinates and normalized target coordinates, and performing straight-line fitting on the normalized track point coordinates to obtain a normalized virtual lane line;
predicting a world lane width in a world coordinate system based on the normalized virtual lane line and the installation height of the acquisition device;
determining an image target lane width corresponding to the target obstacle based on the normalized target coordinates;
and determining the target distance between the ego vehicle and the target obstacle in the world coordinate system based on the ratio of the world lane width to the image target lane width.
2. The method of claim 1, wherein predicting a world lane width in a world coordinate system based on the normalized virtual lane lines and an installation height of the acquisition device comprises:
mapping the normalized virtual lane line into the world coordinate system based on the installation height of the acquisition device to obtain a world virtual lane line, where the world virtual lane line extends horizontally;
and determining the world lane width based on the world virtual lane line.
3. The method of claim 1 or 2, wherein determining two target lane lines based on the image to be detected comprises:
determining two longest target virtual lane lines on the left side and the right side of a longitudinal central line in the image to be detected based on the longitudinal central line in the image to be detected along the extending direction of the lane;
and determining the two target virtual lane lines as the two target lane lines.
4. The method according to claim 1 or 2, wherein the preset portion comprises a first 1/4 portion of the starting ends of the two target lane lines in the image to be detected.
5. The method according to claim 1 or 2, wherein the straight line fitting is performed based on a least squares method.
6. The method according to claim 1 or 2, wherein the position information of the target obstacle in the image to be detected comprises corresponding target pixel coordinates of the target obstacle in the image to be detected;
the determining the corresponding target track points of the preset parts of the two target lane lines in the image to be detected comprises the following steps:
and determining the target track point pixel coordinates of the corresponding target track points of the preset parts of the two target lane lines in the image to be detected.
7. A vehicle target ranging apparatus, the apparatus comprising:
the acquisition module is used for acquiring an image to be detected captured by the acquisition device mounted on the ego vehicle; the image to be detected comprises the target obstacle relevant to the ego vehicle;
the first determining module is used for determining two target lane lines based on the image to be detected, determining corresponding target track points of preset parts of the two target lane lines in the image to be detected and determining position information of the target obstacle in the image to be detected;
the data processing module is used for respectively converting the target track points and the position information into a normalized coordinate system to obtain corresponding normalized track point coordinates and normalized target coordinates, and performing straight-line fitting on the normalized track point coordinates to obtain a normalized virtual lane line;
the prediction module is used for predicting the world lane width in a world coordinate system based on the normalized virtual lane line and the installation height of the acquisition device;
the second determining module is used for determining the width of an image target lane corresponding to the target obstacle in the image to be detected based on the normalized target coordinate;
and the third determining module is used for determining the target distance between the ego vehicle and the target obstacle in the world coordinate system based on the ratio of the world lane width to the image target lane width.
8. The target ranging device of claim 7, wherein the prediction module comprises:
the first prediction submodule is used for mapping the normalized virtual lane line into the world coordinate system based on the installation height of the acquisition device to obtain a world virtual lane line, where the world virtual lane line extends horizontally;
and the second prediction submodule is used for determining the world lane width based on the world virtual lane line.
9. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the vehicle target ranging method of any one of claims 1-6.
10. A vehicle comprising an electronic device comprising a processor and a memory, the memory having stored therein at least one instruction or at least one program, the at least one instruction or the at least one program being loaded by the processor and performing the vehicle object ranging method of any one of claims 1-6.
CN202211085543.9A 2022-09-06 2022-09-06 Vehicle target distance measuring method, device, medium and vehicle Pending CN115507814A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211085543.9A CN115507814A (en) 2022-09-06 2022-09-06 Vehicle target distance measuring method, device, medium and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211085543.9A CN115507814A (en) 2022-09-06 2022-09-06 Vehicle target distance measuring method, device, medium and vehicle

Publications (1)

Publication Number Publication Date
CN115507814A (en) 2022-12-23

Family

ID=84503690

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211085543.9A Pending CN115507814A (en) 2022-09-06 2022-09-06 Vehicle target distance measuring method, device, medium and vehicle

Country Status (1)

Country Link
CN (1) CN115507814A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116953680A (en) * 2023-09-15 2023-10-27 成都中轨轨道设备有限公司 Image-based real-time ranging method and system for target object
CN116953680B (en) * 2023-09-15 2023-11-24 成都中轨轨道设备有限公司 Image-based real-time ranging method and system for target object


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination