CN117351447A - Lane line matching method based on long and short focus camera sensing result - Google Patents

Lane line matching method based on long and short focus camera sensing result

Info

Publication number
CN117351447A
Authority
CN
China
Prior art keywords
lane line
focus
sampling point
long
short
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311431022.9A
Other languages
Chinese (zh)
Inventor
张德泽
刘富钰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202311431022.9A
Publication of CN117351447A

Landscapes

  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a lane line matching method based on the sensing results of long-focus and short-focus cameras, comprising the following steps: acquiring the long-focus semantic segmented image and the short-focus semantic segmented image output by a long-focus camera and a short-focus camera, and performing lane line sampling and grouping on the two images to obtain a plurality of long-focus sampling point groups and short-focus sampling point groups; converting the pixel coordinates of the sampling points in each long-focus sampling point group and short-focus sampling point group into vehicle body coordinates; establishing a corresponding lane line sampling point group with each short-focus sampling point group as basic data; fitting each short-focus sampling point group into a cubic curve, calculating the transverse average distance from each long-focus sampling point group to the cubic curve, and adding every long-focus sampling point group whose transverse average distance is smaller than a preset transverse distance threshold into the corresponding lane line sampling point group to obtain an updated lane line sampling point group; and fitting a new lane line based on the updated lane line sampling point group.

Description

Lane line matching method based on long and short focus camera sensing result
Technical Field
The invention relates to the technical field of lane line detection, in particular to a lane line matching method based on a long and short focal length camera sensing result.
Background
Lane boundary information is important reference information in an automatic driving system; it provides necessary references for automatic driving planning and control decisions and plays an important role in driving safety. The current common method for extracting lane boundary information is to perform semantic segmentation based only on a front-view camera image, obtain lane line pixels, convert them to the vehicle body coordinate system based on camera parameters, and fit lane lines to obtain a lane line curve equation.
The current common lane line image segmentation result usually comes from a single forward-looking monocular camera, so the sensing distance is short and the sensing results from long-focus and short-focus dual forward-view cameras are difficult to process. Therefore, a matching and fusion algorithm for the lane line sensing results of long-focus and short-focus dual front-view cameras is needed.
Disclosure of Invention
The invention aims to provide a lane line matching method based on long and short focus camera sensing results, so as to solve the problem in the prior art that the sensing distance is short because the lane line image segmentation result comes only from a forward-looking monocular camera.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
a lane line matching method based on a long and short focus camera sensing result comprises the following steps:
acquiring the long-focus semantic segmented image and the short-focus semantic segmented image output by a long-focus camera and a short-focus camera respectively, and performing lane line sampling and grouping on the long-focus semantic segmented image and the short-focus semantic segmented image respectively to obtain a plurality of long-focus sampling point groups corresponding to the long-focus semantic segmented image and a plurality of short-focus sampling point groups corresponding to the short-focus semantic segmented image;
converting pixel coordinates of sampling points in each group of long-focus sampling point groups and short-focus sampling point groups into vehicle body coordinates;
establishing a corresponding lane line sampling point group by taking each group of short-focus sampling point groups as basic data;
fitting each group of short-focus sampling points under vehicle body coordinates into a cubic curve, calculating the transverse average distance from each long-focus sampling point group under vehicle body coordinates to the cubic curve corresponding to each short-focus sampling point group, and adding every long-focus sampling point group whose transverse average distance is smaller than a preset transverse distance threshold into the corresponding lane line sampling point group to obtain an updated lane line sampling point group;
and fitting based on the updated lane line sampling point group to obtain a new lane line.
Preferably, the long-focus semantic segmented image and the short-focus semantic segmented image are obtained by binarizing the original images acquired by the long-focus camera and the short-focus camera; the binarization process classifies each pixel in the original image as either lane line or non-lane line.
Further, performing lane line sampling and grouping on the long-focus semantic segmented image and the short-focus semantic segmented image respectively includes:
scanning the long-focus semantic segmented image and the short-focus semantic segmented image line by line respectively to extract a plurality of lane line pixel connected domains;
taking one point in each lane line pixel connected domain as a sampling point, and grouping the sampling points by judging whether two adjacent lane line pixel connected domains belong to the same lane line;
wherein the point in the lane line pixel connected domain is one of the midpoint, an inner edge point, and an outer edge point of the lane line pixel connected domain.
Still further, a plurality of lane line pixel connected domains are extracted by scanning the long-focus semantic segmented image and the short-focus semantic segmented image line by line; when the midpoint of each lane line pixel connected domain is taken as a sampling point, the sampling points are grouped by judging whether two adjacent lane line pixel connected domains belong to the same lane line, comprising the following steps:
scanning line by line from the top edge of the long-focus semantic segmented image and the short-focus semantic segmented image, and searching for pixel regions classified as lane line;
if columns x1 to x2 of the y-th row are labeled as lane line, recording these pixels as one lane line pixel connected domain, extracting the center pixel ((x1+x2)/2, y) of the connected domain as a sampling point, and creating a sampling point group with this sampling point as the starting point;
then searching for lane line pixel connected domains in the (y+1)-th row; if the start column x1^(y+1) and end column x2^(y+1) of a lane line pixel connected domain in the (y+1)-th row satisfy the condition x1^y ≤ x1^(y+1) ≤ x2^y, or satisfy the condition x1^(y+1) ≤ x1^y ≤ x2^(y+1), it is judged that the lane line pixel connected domain of the (y+1)-th row and that of the y-th row belong to the same lane line, and the sampling point obtained from the (y+1)-th row's connected domain is added into the sampling point group of the y-th row; if neither condition is met, a new sampling point group is created with the sampling point obtained from the (y+1)-th row's connected domain as the starting point;
and traversing all rows of the long-focus semantic segmented image and the short-focus semantic segmented image to obtain a plurality of long-focus sampling point groups corresponding to the long-focus semantic segmented image and a plurality of short-focus sampling point groups corresponding to the short-focus semantic segmented image.
Preferably, the origin of the vehicle body coordinate system is defined at the center of the vehicle's rear axle, with the X axis along the vehicle's forward direction, the Y axis perpendicular to the forward direction, and the Z axis perpendicular to the ground.
Further, the converting the pixel coordinates of the sampling points in each group of long-focus sampling point groups and short-focus sampling point groups into vehicle body coordinates includes:
based on the internal parameters and external parameters of the long-focus camera and the short-focus camera, where the internal parameter matrix is K and the external parameters comprise a rotation matrix R and a translation vector T; defining the pixel coordinates of a sampling point as (u, v), the internal parameter matrix is

K = | fx  0   cx |
    | 0   fy  cy |
    | 0   0   1  |

where fx and fy are the calibrated focal lengths of the camera and cx and cy are the calibrated coordinates of the optical center; R is a 3×3 rotation matrix and T is a 3×1 translation vector obtained from extrinsic calibration.
The formula for converting the pixel coordinates of a sampling point into vehicle body coordinates is

s·[u, v, 1]^T = K·(R·P + T),  i.e.  P = R^(-1)·(s·K^(-1)·[u, v, 1]^T − T),

where s is a normalization coefficient (the depth of the point in the camera coordinate system) and P = [X, Y, Z]^T denotes the vehicle body coordinates of the sampling point.
Preferably, each set of short-focus sampling points is fitted to a cubic curve using a polynomial least squares method.
Further, after obtaining the updated lane line sampling point group, and before fitting to obtain a new lane line based on the updated lane line sampling point group, the method further includes:
verifying the long-focus sampling points in the updated lane line sampling point group, and eliminating the long-focus sampling points that fail verification; and fitting the lane line based on the verified lane line sampling point group.
Still further, verifying the long-focus sampling points in the updated lane line sampling point group and eliminating the long-focus sampling points that fail verification includes:
projecting the long-focus sampling points in the updated lane line sampling point group into the short-focus semantic segmented image based on the internal and external parameters of the short-focus camera;
if the pixel type at the projected long-focus sampling point is lane line, the long-focus sampling point is judged to be a valid sampling point;
if the pixel type at the projected long-focus sampling point is non-lane line, searching whether a pixel of lane line type exists in the neighborhood of the projected long-focus sampling point; if such a pixel exists, the projected long-focus sampling point is judged to be a valid sampling point; if not, it is judged to be an invalid sampling point;
and removing the invalid sampling points from the updated lane line sampling point group to obtain a verified lane line sampling point group.
An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the lane line matching method based on the perception result of a long and short focal camera as described above when executing the computer program.
The invention has the beneficial effects that:
according to the lane line matching method of the sensing result of the long-short focus camera, namely the long Jiao Yuyi segmented image is added, and lane lines in the long Jiao Yuyi segmented image and the short-short focus semantic segmented image are firstly sampled and then grouped into a plurality of long-short focus sampling point groups and a plurality of short-short focus sampling point groups; after pixel coordinates of sampling points are converted into coordinates of a vehicle body, each short-focus sampling point group is fitted into a cubic curve, and lane line matching is carried out according to the transverse average distance from the long-focus sampling point group to the cubic curve corresponding to each short-focus sampling point group, so that matching and fusion of the sensing results of the same lane line from a long-focus camera and a short-focus camera can be realized, the sensing range of the lane line is effectively improved, namely, the sensing distance is far, and a basis is provided for calculation of a drivable area and vehicle regulation and control decision output by the subsequent lane line.
Drawings
Fig. 1 is a flow chart of steps of a lane line matching method based on a perception result of a long and short focal length camera.
Fig. 2 is a flow chart of the progressive scan sampling point extraction in the present invention.
Fig. 3 is a flowchart for determining whether the current long-focus sampling point group belongs to the lane line to which a short-focus sampling point group belongs.
Fig. 4 is a flow chart of steps of another lane line matching method based on sensing results of a long and short focal length camera.
Detailed Description
Further advantages and effects of the present invention will become readily apparent to those skilled in the art from the disclosure herein, with reference to the accompanying drawings and the preferred embodiments. The invention may be practiced or carried out in other embodiments, and the details of the present description may be modified or varied without departing from the spirit and scope of the present invention. It should be understood that the preferred embodiments are presented by way of illustration only and not by way of limitation.
It should be noted that the illustrations provided in the following embodiments merely illustrate the basic concept of the present invention; the drawings show only the components related to the invention and are not drawn according to the number, shape and size of the components in actual implementation. In actual implementation, the form, number and proportion of the components may be changed arbitrarily, and the layout of the components may be more complicated.
In this embodiment, as shown in fig. 1, a lane line matching method based on a perception result of a long-short focal camera includes the following steps:
acquiring the long-focus semantic segmented image and the short-focus semantic segmented image output by a long-focus camera and a short-focus camera respectively, and performing lane line sampling and grouping on the long-focus semantic segmented image and the short-focus semantic segmented image respectively to obtain a plurality of long-focus sampling point groups corresponding to the long-focus semantic segmented image and a plurality of short-focus sampling point groups corresponding to the short-focus semantic segmented image;
converting pixel coordinates of sampling points in each group of long-focus sampling point groups and short-focus sampling point groups into vehicle body coordinates;
establishing a corresponding lane line sampling point group by taking each group of short-focus sampling point groups as basic data;
fitting each group of short-focus sampling points under vehicle body coordinates into a cubic curve, calculating the transverse average distance from each long-focus sampling point group under vehicle body coordinates to the cubic curve corresponding to each short-focus sampling point group, and adding every long-focus sampling point group whose transverse average distance is smaller than a preset transverse distance threshold into the corresponding lane line sampling point group to obtain an updated lane line sampling point group;
and fitting based on the updated lane line sampling point group to obtain a new lane line.
In this embodiment, the long-focus camera and the short-focus camera may be a long-focus front-view camera and a short-focus front-view camera, or other image acquisition devices that can provide long-focus semantic segmented images and short-focus semantic segmented images. There are a plurality of updated lane line sampling point groups; a polynomial least squares method is adopted to fit each lane line sampling point group, and the cubic curve obtained by fitting is the lane line, represented by a cubic curve equation. Each updated lane line sampling point group contains both short-focus sampling points and long-focus sampling points. The new lane line obtained by fitting is the result of matching and fusing the lane lines in the long-focus semantic segmented image and the short-focus semantic segmented image, which realizes long-distance sensing and overcomes the defect of short sensing distance in the prior art, where only a forward-looking monocular camera is used.
According to the lane line matching method based on long and short focus camera sensing results, the long-focus semantic segmented image is added, and the lane lines in the long-focus semantic segmented image and the short-focus semantic segmented image are first sampled and then grouped into a plurality of long-focus sampling point groups and a plurality of short-focus sampling point groups. After the pixel coordinates of the sampling points are converted into vehicle body coordinates, each short-focus sampling point group is fitted into a cubic curve, and lane line matching is performed according to the transverse average distance from each long-focus sampling point group to the cubic curve corresponding to each short-focus sampling point group. In this way, the sensing results of the same lane line from the long-focus camera and the short-focus camera can be matched and fused, the sensing range of the lane line is effectively extended, and a basis is provided for the calculation of the drivable area and the vehicle planning and control decisions based on the subsequently output lane line.
In this embodiment, the long-focus semantic segmented image and the short-focus semantic segmented image are obtained by binarizing the original images acquired by the long-focus camera and the short-focus camera; the binarization process classifies each pixel in the original image as either lane line or non-lane line. The long-focus semantic segmented image and the short-focus semantic segmented image are binarized versions of the original images; for example, a pixel value of 0 represents the background (i.e., non-lane line) and a pixel value of 1 represents a lane line, so that each pixel in the original image is classified into the two types of non-lane line and lane line according to its pixel value.
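As an illustrative sketch (not the patent's implementation), the binarization step can be written as follows; the function name binarize_segmentation and the assumption that the segmentation output is an integer class map with a single lane-line class id are ours:

```python
import numpy as np

def binarize_segmentation(seg: np.ndarray, lane_value: int = 1) -> np.ndarray:
    """Classify every pixel as lane line (1) or non-lane line (0).

    `seg` is assumed to be a per-pixel class map produced by the camera's
    semantic segmentation; `lane_value` is the assumed lane-line class id.
    """
    return (seg == lane_value).astype(np.uint8)
```

Any pixel whose class differs from the lane-line id, including other road markings, is folded into the non-lane-line class.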
In this embodiment, performing lane line sampling and grouping on the long-focus semantic segmented image and the short-focus semantic segmented image respectively includes:
scanning the long-focus semantic segmented image and the short-focus semantic segmented image line by line respectively to extract a plurality of lane line pixel connected domains;
taking one point in each lane line pixel connected domain as a sampling point, and grouping the sampling points by judging whether two adjacent lane line pixel connected domains belong to the same lane line;
wherein the point in the lane line pixel connected domain is one of the midpoint, an inner edge point, and an outer edge point of the lane line pixel connected domain.
In this embodiment, the midpoint of the lane line pixel connected domain is taken as the sampling point; the methods taking the inner edge point or the outer edge point as the sampling point are similar, differing only in the sampling point position.
In this embodiment, a plurality of lane line pixel connected domains are extracted by scanning the long-focus semantic segmented image and the short-focus semantic segmented image line by line; when the midpoint of each lane line pixel connected domain is taken as a sampling point, the sampling points are grouped by judging whether two adjacent lane line pixel connected domains belong to the same lane line, as shown in fig. 2, comprising the following steps:
scanning line by line from the top edge of the long-focus semantic segmented image and the short-focus semantic segmented image, and searching for pixel regions classified as lane line;
if columns x1 to x2 of the y-th row are labeled as lane line, recording these pixels as one lane line pixel connected domain, extracting the center pixel ((x1+x2)/2, y) of the connected domain as a sampling point, and creating a sampling point group with this sampling point as the starting point;
then searching for lane line pixel connected domains in the (y+1)-th row; if the start column x1^(y+1) and end column x2^(y+1) of a lane line pixel connected domain in the (y+1)-th row satisfy the condition x1^y ≤ x1^(y+1) ≤ x2^y, or satisfy the condition x1^(y+1) ≤ x1^y ≤ x2^(y+1), it is judged that the lane line pixel connected domain of the (y+1)-th row and that of the y-th row belong to the same lane line, and the sampling point obtained from the (y+1)-th row's connected domain is added into the sampling point group of the y-th row; if neither condition is met, a new sampling point group is created with the sampling point obtained from the (y+1)-th row's connected domain as the starting point;
and traversing all rows of the long-focus semantic segmented image and the short-focus semantic segmented image to obtain a plurality of long-focus sampling point groups corresponding to the long-focus semantic segmented image and a plurality of short-focus sampling point groups corresponding to the short-focus semantic segmented image.
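The progressive-scan sampling and grouping procedure above can be sketched roughly as follows. This is an illustrative reading of the steps, not the patented code; the function name sample_and_group and the representation of groups as lists of (x_mid, y) midpoints are assumptions:

```python
import numpy as np

def sample_and_group(mask: np.ndarray) -> list[list[tuple[float, int]]]:
    """Scan a binary segmentation mask row by row, take the midpoint of
    each run of lane-line pixels as a sampling point, and group points
    whose runs overlap in column range across adjacent rows."""
    groups = []     # each group: list of (x_mid, y) sampling points
    open_runs = []  # runs from the previous row: (x1, x2, group_index)
    for y in range(mask.shape[0]):
        # extract runs of consecutive lane-line pixels in row y
        runs, x, row = [], 0, mask[y]
        while x < len(row):
            if row[x]:
                x1 = x
                while x < len(row) and row[x]:
                    x += 1
                runs.append((x1, x - 1))
            else:
                x += 1
        new_open = []
        for (x1, x2) in runs:
            # overlap test against the previous row's runs:
            # x1_prev <= x1 <= x2_prev  or  x1 <= x1_prev <= x2
            gidx = None
            for (px1, px2, g) in open_runs:
                if px1 <= x1 <= px2 or x1 <= px1 <= x2:
                    gidx = g
                    break
            if gidx is None:          # no match: start a new group
                groups.append([])
                gidx = len(groups) - 1
            groups[gidx].append(((x1 + x2) / 2.0, y))
            new_open.append((x1, x2, gidx))
        open_runs = new_open
    return groups
```

On a mask containing two separate stripes, the function returns two sampling point groups, one per stripe.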
At this point, the coordinates of the sampling points in the long-focus sampling point groups and the short-focus sampling point groups are pixel coordinates; the pixel coordinates of the sampling points are then converted into vehicle body coordinates through the internal and external parameters of the long-focus camera and the short-focus camera, which facilitates subsequent calculation.
In the present embodiment, the origin of the vehicle body coordinate system is defined at the center of the vehicle's rear axle, with the X axis along the vehicle's forward direction, the Y axis perpendicular to the forward direction, and the Z axis perpendicular to the ground. Of course, the origin of the vehicle body coordinate system may also be defined at the center of the vehicle's front axle, with the same axis definitions.
In this embodiment, the converting the pixel coordinates of the sampling points in each of the long-focus sampling point group and the short-focus sampling point group into the vehicle body coordinates includes:
based on the internal parameters and external parameters of the long-focus camera and the short-focus camera, where the internal parameter matrix is K and the external parameters comprise a rotation matrix R and a translation vector T; defining the pixel coordinates of a sampling point as (u, v), the internal parameter matrix is

K = | fx  0   cx |
    | 0   fy  cy |
    | 0   0   1  |

where fx and fy are the calibrated focal lengths of the camera and cx and cy are the calibrated coordinates of the optical center; R is a 3×3 rotation matrix and T is a 3×1 translation vector obtained from extrinsic calibration.
The formula for converting the pixel coordinates of a sampling point into vehicle body coordinates is

s·[u, v, 1]^T = K·(R·P + T),  i.e.  P = R^(-1)·(s·K^(-1)·[u, v, 1]^T − T),

where s is a normalization coefficient (the depth of the point in the camera coordinate system) and P = [X, Y, Z]^T denotes the vehicle body coordinates of the sampling point.
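The conversion can be sketched as follows under the common ground-plane assumption (lane-line points lie at Z = 0 in the vehicle body frame, which is what fixes the normalization coefficient s). The patent states only the projection relation, so the explicit solve for s and the function name pixel_to_body are our assumptions:

```python
import numpy as np

def pixel_to_body(u, v, K, R, T, ground_z=0.0):
    """Back-project pixel (u, v) to vehicle body coordinates, assuming the
    point lies on the ground plane Z = ground_z in the body frame."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame
    R_inv = np.linalg.inv(R)
    ray_body = R_inv @ ray_cam          # rotate the ray into the body frame
    origin_body = -R_inv @ T            # camera center expressed in the body frame
    # solve origin_body[2] + s * ray_body[2] = ground_z for the scale s
    s = (ground_z - origin_body[2]) / ray_body[2]
    return origin_body + s * ray_body
```

With identity rotation and a purely illustrative translation, a pixel maps back to the expected ground point.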
In this embodiment, each group of short-focus sampling points under vehicle body coordinates is fitted into a cubic curve, the transverse average distance from each long-focus sampling point group under vehicle body coordinates to the cubic curve corresponding to each short-focus sampling point group is calculated, and every long-focus sampling point group whose transverse average distance is smaller than the preset transverse distance threshold is added into the corresponding lane line sampling point group to obtain an updated lane line sampling point group.
Each short-focus sampling point group is fitted into a cubic curve using the polynomial least squares method.
Specifically, the short-focus sampling points are used as the fitting data basis to fit a cubic curve; the long-focus sampling point groups are traversed to calculate the transverse average distance from each long-focus sampling point group to the cubic curve, and whether the current long-focus sampling point group belongs to the lane line to which the short-focus sampling points belong is judged based on the transverse distance threshold. If it belongs, the long-focus sampling point group is added into the corresponding lane line sampling point group; if not, the calculation proceeds to the next long-focus sampling point group. As shown in fig. 3, the specific steps are as follows:
acquiring a plurality of long-focus sampling point groups and a plurality of short-focus sampling point groups, wherein the acquired sampling points are in vehicle body coordinates;
establishing a corresponding lane line sampling point group with each short-focus sampling point group as basic data; that is, a lane line sampling point group is established, and the data of one short-focus sampling point group is added into it.
Curve fitting is performed using a short-focus sampling point group as the initial fitting data, yielding a cubic curve. Specifically, taking the k-th of the short-focus sampling point groups as an example, the k-th short-focus sampling point group is used to establish a lane line sampling point group, and all its sampling points under the vehicle body coordinate system are fitted to obtain a cubic curve;
the long-focus sampling point groups are then matched against the fitted cubic curve. Specifically, let the cubic curve equation be y = f(x) = a3·x^3 + a2·x^2 + a1·x + a0, and suppose the m-th long-focus sampling point group contains N sampling points, the n-th of which is located at (x_n, y_n); the transverse distance from the n-th long-focus sampling point to the cubic curve is d_n = |y_n − f(x_n)|.
The transverse average distance from all sampling points in the m-th long-focus sampling point group to the cubic curve is thus d_avg = (1/N)·Σ d_n (n = 1..N); if d_avg is smaller than the preset transverse distance threshold, it is judged that the long-focus sampling points in the m-th long-focus sampling point group belong to the lane line of the k-th short-focus sampling point group, and the m-th long-focus sampling point group is added into the current lane line sampling point group, completing one match.
Traversing all the long-focus sampling point groups, and adding all the long-focus sampling point groups meeting the matching condition into the current lane line sampling point group.
The next lane line sampling point group is then established with the (k+1)-th short-focus sampling point group, all of whose sampling points under the vehicle body coordinate system are fitted into another cubic curve, and the above steps are repeated until all short-focus sampling point groups are traversed.
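The fitting-and-matching loop for one short-focus group can be sketched as follows; match_tele_groups is a hypothetical name, and representing sampling points as (x, y) pairs in body coordinates is an assumption:

```python
import numpy as np

def match_tele_groups(short_group, tele_groups, d_thresh):
    """Fit one short-focus group to a cubic y = f(x) by least squares,
    then add every long-focus group whose mean transverse distance to
    the curve is below d_thresh. Points are (x, y) in body coordinates."""
    xs, ys = zip(*short_group)
    coeffs = np.polyfit(xs, ys, 3)      # cubic least-squares fit
    lane_group = list(short_group)
    for g in tele_groups:
        gx = np.array([p[0] for p in g])
        gy = np.array([p[1] for p in g])
        # mean transverse distance d_avg = (1/N) * sum |y_n - f(x_n)|
        d_avg = np.mean(np.abs(gy - np.polyval(coeffs, gx)))
        if d_avg < d_thresh:
            lane_group.extend(g)
    return lane_group
```

Long-focus groups lying near the extrapolated curve are merged into the lane line group; groups far from it are left for matching against other short-focus curves.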
In another specific embodiment, after obtaining the updated lane line sampling point group and before fitting a new lane line based on it, the method further comprises:
verifying the long-focus sampling points in the updated lane line sampling point group, and eliminating the long-focus sampling points that fail verification; and fitting the lane line based on the verified lane line sampling point group.
Namely, the lane line matching method based on the perception result of the long and short focal length camera provided in this embodiment, as shown in fig. 4, includes the following steps:
acquiring the long-focus semantic segmented image and the short-focus semantic segmented image output by a long-focus camera and a short-focus camera respectively, and performing lane line sampling and grouping on the long-focus semantic segmented image and the short-focus semantic segmented image respectively to obtain a plurality of long-focus sampling point groups corresponding to the long-focus semantic segmented image and a plurality of short-focus sampling point groups corresponding to the short-focus semantic segmented image;
converting pixel coordinates of sampling points in each group of long-focus sampling point groups and short-focus sampling point groups into vehicle body coordinates;
establishing a corresponding lane line sampling point group by taking each group of short-focus sampling point groups as basic data;
fitting each group of short-focus sampling points under vehicle body coordinates into a cubic curve, calculating the transverse average distance from each long-focus sampling point group under vehicle body coordinates to the cubic curve corresponding to each short-focus sampling point group, and adding every long-focus sampling point group whose transverse average distance is smaller than a preset transverse distance threshold into the corresponding lane line sampling point group to obtain an updated lane line sampling point group;
verifying the long-focus sampling points in the updated lane line sampling point groups and removing those that fail verification;
and fitting the lane line based on the verified lane line sampling point groups.
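The matching step above (fitting each short-focus group to a cubic curve and absorbing long-focus groups by transverse average distance) can be sketched as follows. The function name, the point layout (lists of (x, y) vehicle body coordinates), and the 0.5 m threshold are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def match_long_focus_groups(short_groups, long_groups, lateral_threshold=0.5):
    """For each short-focus group, fit a cubic y = f(x) in body coordinates
    and absorb every long-focus group whose mean transverse (y) distance to
    the curve is below the threshold. Threshold value is illustrative."""
    lane_lines = [list(g) for g in short_groups]  # seed with short-focus data
    for lane, short in zip(lane_lines, short_groups):
        pts = np.asarray(short)                       # shape (N, 2): (x, y)
        coeffs = np.polyfit(pts[:, 0], pts[:, 1], 3)  # cubic least squares
        for long_pts in long_groups:
            lp = np.asarray(long_pts)
            # transverse offset between each long-focus point and the curve
            mean_dist = np.mean(np.abs(np.polyval(coeffs, lp[:, 0]) - lp[:, 1]))
            if mean_dist < lateral_threshold:
                lane.extend(long_pts)                 # update the lane line group
    return lane_lines
```

With this sketch, a long-focus group lying a centimeter or two from a short-focus curve is merged into that lane line group, while a group one lane over is rejected.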
In this embodiment, because the input long-focus semantic segmentation image and short-focus semantic segmentation image come from two different cameras, abnormal sensing results and calibration parameter errors may exist. Therefore, to improve sensing accuracy while extending the sensing range, after the corresponding lane line sampling point groups are obtained, all long-focus sampling points in them are also verified; removing the points that fail verification reduces the influence of noise sampling points and effectively improves the accuracy of the fitted lane line.
Verifying the long-focus sampling points in the updated lane line sampling point group and removing those that fail verification comprises the following steps:
projecting the long-focus sampling points in the updated lane line sampling point group into the short-focus semantic segmentation map based on the intrinsic and extrinsic parameters of the short-focus camera;
if the pixel type at the projected position of a long-focus sampling point is lane line, judging that the sampling point is a valid sampling point;
if the pixel type at the projected position is non-lane line, searching the neighborhood of the projected position for pixels whose type is lane line; if such pixels exist, judging the projected long-focus sampling point to be a valid sampling point, otherwise judging it to be an invalid sampling point;
and removing the invalid sampling points from the updated lane line sampling point group to obtain the verified lane line sampling point group.
Specifically: all the obtained lane line sampling point groups are verified. If the k-th lane line contains N sampling point groups that satisfy the matching condition, of which M groups come from the sensing result of the long-focus camera, the sampling point groups from the long-focus camera are verified as follows:
1) Taking the long-focus sampling points in the m-th lane line sampling point group as an example, suppose the group contains K long-focus sampling points with vehicle body coordinates $(x_i, y_i, z_i)$, $i = 1, \dots, K$. These points are projected into the image of the short-focus camera using the intrinsic and extrinsic parameters of the short-focus camera, giving projected pixel coordinates $(u_i, v_i)$. The projection formula is:

$$s \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = K \left( R \begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix} + T \right)$$

where K, R and T are the intrinsic matrix, rotation matrix and translation vector of the short-focus camera, and s is a normalization coefficient.
2) If the pixel at the projected coordinate position is labeled lane line in the short-focus image sensing result, the point is judged to be a valid sampling point. If the label is non-lane line, the neighborhood of the pixel is searched; if any neighboring pixel is labeled lane line, the point is likewise judged to be valid, and if no neighboring pixel carries the lane line label, the current sampling point is judged invalid and is removed from the lane line sampling point group.
Steps 1) and 2) are repeated until all long-focus sampling points are verified; after verification, the lane lines are re-fitted using the valid sampling points of each lane line to obtain new lane lines. The new lane line is the vehicle sensing result and combines a long sensing distance with high accuracy.
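A minimal sketch of this verification step is given below. It assumes a binary segmentation map where 1 marks lane line pixels, a body-to-camera transform given directly by R and T, and a square search neighborhood of assumed radius; these are our assumptions rather than details fixed by the patent.

```python
import numpy as np

def verify_long_focus_points(points_body, K, R, T, seg_short, radius=2):
    """Project long-focus sampling points (body coordinates) into the
    short-focus semantic segmentation map via s*[u,v,1]^T = K(R*P + T) and
    keep a point only if it, or a pixel within `radius`, is labeled lane
    line (1). `radius` is an assumed neighborhood size."""
    h, w = seg_short.shape
    valid = []
    for p in points_body:
        cam = R @ np.asarray(p, dtype=float) + T      # body -> camera frame
        if cam[2] <= 0:                               # behind the camera
            continue
        uvw = K @ cam
        u, v = int(round(uvw[0] / uvw[2])), int(round(uvw[1] / uvw[2]))
        u0, u1 = max(u - radius, 0), min(u + radius + 1, w)
        v0, v1 = max(v - radius, 0), min(v + radius + 1, h)
        if u1 > u0 and v1 > v0 and np.any(seg_short[v0:v1, u0:u1] == 1):
            valid.append(p)                           # valid sampling point
    return valid
```

Points whose projection (and neighborhood) lands only on non-lane-line pixels are dropped, which is exactly the invalid-point elimination described in steps 1) and 2).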
In a specific embodiment, an electronic device includes a memory, a processor, and a computer program stored on the memory and executable on the processor; the processor implements the steps of the lane line matching method based on the long and short focus camera sensing results described above when executing the computer program.
Where the memory and the processor are connected by a bus, the bus may comprise any number of interconnected buses and bridges, the buses connecting the various circuits of the one or more processors and the memory together. The bus may also connect various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art, and therefore, will not be described any further herein. The bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or may be a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor is transmitted over the wireless medium via the antenna, which further receives the data and transmits the data to the processor.
In a specific embodiment, a computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the lane line matching method based on the long and short focus camera sensing results described above.
That is, it will be understood by those skilled in the art that all or part of the steps of the methods in the above embodiments may be implemented by a program stored in a storage medium, where the program includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described herein. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
It is to be understood that the above examples of the present invention are provided by way of illustration only and not by way of limitation of the embodiments of the present invention. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the invention are desired to be protected by the following claims.

Claims (10)

1. A lane line matching method based on long and short focus camera sensing results, characterized by comprising the following steps:
acquiring the long-focus semantic segmentation image and the short-focus semantic segmentation image output by the long-focus camera and the short-focus camera respectively, and performing lane line sampling and grouping on each image to obtain a plurality of long-focus sampling point groups corresponding to the long-focus semantic segmentation image and a plurality of short-focus sampling point groups corresponding to the short-focus semantic segmentation image;
converting the pixel coordinates of the sampling points in each long-focus sampling point group and each short-focus sampling point group into vehicle body coordinates;
establishing a corresponding lane line sampling point group with each short-focus sampling point group as its basic data;
fitting each short-focus sampling point group in vehicle body coordinates to a cubic curve, calculating the transverse average distance from each long-focus sampling point group in vehicle body coordinates to the cubic curve of every short-focus sampling point group, and adding each long-focus sampling point group whose transverse average distance is smaller than a preset transverse distance threshold to the corresponding lane line sampling point group, thereby obtaining updated lane line sampling point groups;
and fitting based on the updated lane line sampling point group to obtain a new lane line.
2. The lane line matching method based on long and short focus camera sensing results according to claim 1, characterized in that: the long-focus semantic segmentation image and the short-focus semantic segmentation image are obtained by binarizing the original images captured by the long-focus camera and the short-focus camera; the binarization process classifies each pixel in the original image as either lane line or non-lane line.
3. The lane line matching method based on long and short focus camera sensing results according to claim 2, characterized in that: the lane line sampling and grouping of the long-focus semantic segmentation image and the short-focus semantic segmentation image respectively comprises the following steps:
performing progressive scanning on the long-focus semantic segmentation image and the short-focus semantic segmentation image respectively to extract a plurality of lane line pixel connected domains;
taking one point in each lane line pixel connected domain as a sampling point, and grouping the sampling points by judging whether two adjacent lane line pixel connected domains belong to the same lane line;
wherein the point in the lane line pixel connected domain is one of the midpoint, an inner edge point, and an outer edge point of the lane line pixel connected domain.
4. The lane line matching method based on long and short focus camera sensing results according to claim 3, characterized in that: performing progressive scanning on the long-focus semantic segmentation image and the short-focus semantic segmentation image respectively to extract lane line pixel connected domains, taking the midpoints of the obtained lane line pixel connected domains as sampling points, and grouping the sampling points comprises the following steps:
scanning line by line from the top edge of the long-focus semantic segmentation image and the short-focus semantic segmentation image, searching for pixel regions whose classification result is lane line;
if, in row $y$, the pixels from column $x_1$ to column $x_2$ are labeled lane line, this run is recorded as a lane line pixel connected domain, its center pixel $\left(\frac{x_1 + x_2}{2},\ y\right)$ is extracted as a sampling point, and a sampling point group is created with this sampling point as its starting point;
then searching row $y+1$ for lane line pixel connected domains; if the start column $x_1'$ and end column $x_2'$ of a connected domain in row $y+1$ satisfy the condition $x_1 \le x_1' \le x_2$, or satisfy the condition $x_1 \le x_2' \le x_2$, the connected domain in row $y+1$ and the connected domain in row $y$ are judged to belong to the same lane line, and the sampling point obtained from the row $y+1$ connected domain is added to the sampling point group of row $y$; if neither condition is satisfied, a new sampling point group is created with the sampling point obtained from the row $y+1$ connected domain as its starting point;
and traversing all rows of the long-focus semantic segmentation image and the short-focus semantic segmentation image in this way to obtain a plurality of long-focus sampling point groups corresponding to the long-focus semantic segmentation image and a plurality of short-focus sampling point groups corresponding to the short-focus semantic segmentation image.
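The row-scanning and grouping procedure of this claim can be sketched as follows; the data layout (a NumPy array with 1 marking lane line pixels), the use of the midpoint variant, and the group bookkeeping are illustrative assumptions.

```python
import numpy as np

def sample_and_group(seg):
    """Row-scan a binary segmentation map (1 = lane line), take the center
    pixel of each lane line run as a sampling point, and link runs on
    consecutive rows whose column ranges overlap into one sampling group."""
    groups = []  # each: {'cols': (x1, x2), 'row': y, 'pts': [(cx, y), ...]}
    for y in range(seg.shape[0]):
        row = seg[y]
        x = 0
        while x < len(row):
            if row[x] == 1:
                x1 = x
                while x + 1 < len(row) and row[x + 1] == 1:
                    x += 1
                x2 = x
                pt = ((x1 + x2) // 2, y)          # center pixel as sample
                for g in groups:                  # try to join a group from row y-1
                    gx1, gx2 = g['cols']
                    if g['row'] == y - 1 and (gx1 <= x1 <= gx2 or gx1 <= x2 <= gx2):
                        g['pts'].append(pt)
                        g['cols'], g['row'] = (x1, x2), y
                        break
                else:                             # no overlap: start a new group
                    groups.append({'cols': (x1, x2), 'row': y, 'pts': [pt]})
            x += 1
    return [g['pts'] for g in groups]
```

Runs on adjacent rows whose column intervals overlap end up in the same group, so each group traces one lane line marking down the image.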
5. The lane line matching method based on long and short focus camera sensing results according to claim 1, characterized in that: the origin of the vehicle body coordinate system is defined at the center of the rear axle of the vehicle, with the X axis along the vehicle advancing direction, the Y axis perpendicular to the vehicle advancing direction, and the Z axis perpendicular to the ground.
6. The lane line matching method based on the perception result of the long and short focal length camera according to claim 5, wherein the lane line matching method is characterized in that: the converting the pixel coordinates of the sampling points in each group of long-focus sampling point groups and short-focus sampling point groups into vehicle body coordinates comprises the following steps:
based on the intrinsic and extrinsic parameters of the long-focus and short-focus cameras, where the intrinsic parameter matrix is K and the extrinsic parameters comprise a rotation matrix R and a translation vector T; the pixel coordinates of a sampling point are defined as $(u, v)$, and the intrinsic matrix is:

$$K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$$

where $f_x$ and $f_y$ are the focal lengths of the camera and $c_x$ and $c_y$ are the calibrated optical centers; $R$ is a $3 \times 3$ rotation matrix and $T$ a $3 \times 1$ translation vector.

The formula for converting the pixel coordinates of a sampling point into vehicle body coordinates is:

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = R^{-1} \left( s\, K^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} - T \right)$$

where s is a normalization coefficient.
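A sketch of this inverse projection follows. The patent leaves the normalization coefficient s implicit; here it is resolved with a flat-ground assumption (a fixed Z in body coordinates), which is our assumption, not the patent's.

```python
import numpy as np

def pixel_to_body(u, v, K, R, T, z_body=0.0):
    """Invert s*[u,v,1]^T = K(R*P_body + T) for a single pixel. The scale s
    is fixed by requiring the body-frame Z component to equal z_body
    (flat-ground assumption, ours)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-projected ray (per unit s)
    R_inv = np.linalg.inv(R)
    a = R_inv @ ray          # body-frame direction scaled by s
    b = R_inv @ T            # constant offset
    s = (z_body + b[2]) / a[2]   # choose s so that Z_body == z_body
    return s * a - b             # P_body = R^-1 (s * K^-1 [u,v,1]^T - T)
```

Round-tripping a point through the forward projection and this inverse recovers the original body coordinates, which is a quick sanity check on the calibration matrices.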
7. The lane line matching method based on long and short focus camera sensing results according to claim 1, characterized in that: each short-focus sampling point group is fitted to a cubic curve by the polynomial least squares method.
8. The lane line matching method based on long and short focus camera sensing results according to claim 2, characterized in that: after the updated lane line sampling point groups are obtained and before the new lane line is fitted from them, the method further comprises:
verifying the long-focus sampling points in the updated lane line sampling point groups and removing those that fail verification; and fitting the lane line based on the verified lane line sampling point groups.
9. The lane line matching method based on long and short focus camera sensing results according to claim 8, characterized in that: verifying the long-focus sampling points in the updated lane line sampling point group and removing those that fail verification comprises the following steps:
projecting the long-focus sampling points in the updated lane line sampling point group into the short-focus semantic segmentation map based on the intrinsic and extrinsic parameters of the short-focus camera;
if the pixel type at the projected position of a long-focus sampling point is lane line, judging that the sampling point is a valid sampling point;
if the pixel type at the projected position is non-lane line, searching the neighborhood of the projected position for pixels whose type is lane line; if such pixels exist, judging the projected long-focus sampling point to be a valid sampling point, otherwise judging it to be an invalid sampling point;
and removing the invalid sampling points from the updated lane line sampling point group to obtain the verified lane line sampling point group.
10. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the lane line matching method based on long and short focus camera sensing results according to any one of claims 1 to 9 when executing the computer program.
CN202311431022.9A 2023-10-31 2023-10-31 Lane line matching method based on long and short focus camera sensing result Pending CN117351447A (en)

Publications (1)

Publication Number Publication Date
CN117351447A true CN117351447A (en) 2024-01-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination