CN114644014A - Intelligent driving method based on lane line and related equipment - Google Patents

Intelligent driving method based on lane line and related equipment

Info

Publication number: CN114644014A
Application number: CN202210179893.5A
Authority: CN (China)
Prior art keywords: lane line, information, image, target, acquiring
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 李兆冉
Current Assignee: Lantu Automobile Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Lantu Automobile Technology Co Ltd
Application filed by Lantu Automobile Technology Co Ltd
Priority to CN202210179893.5A
Publication of CN114644014A

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 Planning or execution of driving tasks
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/10 Path keeping
    • B60W30/12 Lane keeping
    • B60W2420/00 Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40 Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W2420/403 Image sensing, e.g. optical camera
    • B60W2552/00 Input parameters relating to infrastructure
    • B60W2552/53 Road markings, e.g. lane marker or crosswalk

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)
  • Steering Control In Accordance With Driving Conditions (AREA)

Abstract

The invention discloses a lane-line-based intelligent driving method and related equipment. The method comprises the following steps: acquiring target road image information, i.e. road image information in the vehicle's driving direction; acquiring target lane line pixel information based on the target road image information; generating a steering control instruction from the lane line pixel information; and controlling the target vehicle to travel based on the steering control instruction. The method provided by the embodiments of the application only needs to acquire the image of the target road ahead of the vehicle, identify the lane line pixel information in that image, and generate the steering control instruction from the lane line pixel information, so that the vehicle is controlled to travel based on changes in the lane lines.

Description

Intelligent driving method based on lane line and related equipment
Technical Field
The present disclosure relates to the field of intelligent driving, and more particularly, to an intelligent driving method based on lane lines and related devices.
Background
Lane line tracking is a technique in the field of intelligent driving that uses sensors on the vehicle to detect vehicle state and road information, computes control signals for the vehicle steering system from that information, and thereby makes the vehicle travel automatically along the lane line without human manipulation. As a basic technology of automatic driving, lane line tracking frees the driver's hands during driving and improves driving convenience.
A common method of lane-line-based intelligent driving control is to detect the lane lines on both sides of the road with a vehicle-mounted camera and calculate the driving path the vehicle should follow from the distance between the two lane lines and their curvature. Vehicle state data acquired from in-vehicle sensors (e.g. IMU, GPS and steering wheel angle sensor), including longitudinal speed, lateral speed, heading angle, wheel angle and vehicle position, are then combined to calculate the translation and/or rotation difference between the vehicle's actual path and the ideal driving path, and a linear or nonlinear controller drives the vehicle toward the ideal path. Applied to vehicles equipped with an Advanced Driver Assistance System (ADAS) of level L2 or above, this method achieves good control performance. However, in scenarios such as unmanned delivery vehicles in warehouses or unmanned shuttles in campuses, a complete set of sensing equipment is not necessarily installed on the vehicle because of cost and measurement constraints, so this method is too costly and uneconomical there.
Disclosure of Invention
A series of concepts in a simplified form are introduced in the summary section, which is described in further detail in the detailed description section. This summary of the invention is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In order to provide an economical intelligent driving method, a first aspect of the invention provides a lane-line-based intelligent driving method, the method comprising the following steps:
acquiring target road image information, wherein the target road image information is road image information in the vehicle's driving direction;
acquiring target lane line pixel information based on the target road image information;
generating a steering control instruction according to the lane line pixel information;
and controlling the target vehicle to run based on the steering control command.
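The four claimed steps can be sketched end to end on a tiny synthetic binary frame. This is a minimal illustration, assuming the accumulated-pixel steering rule described later in the text; all function names and the 6x8 frame are invented for the sketch, not taken from the patent.

```python
# Minimal end-to-end sketch of the four claimed steps on a synthetic
# binary frame (1 = lane-line pixel). Function names are illustrative.

def acquire_target_road_image():
    # Stand-in for the camera: the dashed left line has fewer lit
    # pixels than the solid right line (as on a left-curving road).
    return [
        [1, 0, 0, 0, 0, 0, 0, 1],
        [0, 0, 0, 0, 0, 0, 0, 1],
        [1, 0, 0, 0, 0, 0, 0, 1],
        [0, 0, 0, 0, 0, 0, 0, 1],
        [1, 0, 0, 0, 0, 0, 0, 1],
        [0, 0, 0, 0, 0, 0, 0, 1],
    ]

def lane_pixel_counts(frame):
    # Step 2: accumulate lane-line pixels in the left and right halves.
    mid = len(frame[0]) // 2
    left = sum(px for row in frame for px in row[:mid])
    right = sum(px for row in frame for px in row[mid:])
    return left, right

def steering_command(left_px, right_px):
    # Step 3: left pixels vote for a right turn (negative), right
    # pixels for a left turn (positive); the sum is the instruction.
    return right_px - left_px

frame = acquire_target_road_image()
left_px, right_px = lane_pixel_counts(frame)
cmd = steering_command(left_px, right_px)  # step 4 would send cmd to the steering actuator
```

Here `cmd` comes out positive (3), i.e. a left turn, because the right-half lane line contributes more pixels than the left-half one.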
Optionally, the target road image includes a target road left image and a target road right image, and the target lane line pixel information includes current left lane line pixel information and current right lane line pixel information;
the above-mentioned lane line pixel information based on the above-mentioned target road image information acquisition, including:
acquiring the left lane line pixel information from the left image of the target road;
acquiring the pixel information of the right lane line in the right image of the target road;
The generating of the steering control instruction from the lane line pixel information includes:
acquiring a current left lane accumulated pixel value based on the current left lane line pixel information;
acquiring a right-turn angle according to the current left lane accumulated pixel value;
acquiring a current right lane accumulated pixel value based on the current right lane line pixel information;
acquiring a left-turn angle according to the current right lane accumulated pixel value;
summing the right-turn angle and the left-turn angle to obtain a first steering instruction;
and determining the first steering instruction as the steering control instruction.
Optionally, the method further includes:
and dividing the target road image according to a preset proportion to obtain the left image of the target road and the right image of the target road.
Optionally, the method further includes:
acquiring the current left lane line pixel information corresponding to the left lane line closer to the left side when a plurality of left lane lines exist in the left image of the target road;
or
acquiring the current right lane line pixel information corresponding to the right lane line closer to the right side when a plurality of right lane lines exist in the right image of the target road.
Optionally, the target lane line pixel information further includes straight-travel left lane line pixel information;
the generating of the steering control command by the lane line pixel information includes:
under the condition that the definition of the right image of the target road is smaller than the preset definition, acquiring a straight left lane line pixel value according to the straight left lane line pixel information;
acquiring a first steering instruction according to the difference value of the current left lane accumulated pixel value and the straight left lane line pixel value;
and determining the first steering command as the steering control command.
Optionally, the target lane line pixel information further includes straight-travel right lane line pixel information;
the generating of the steering control command by the lane line pixel information includes:
under the condition that the definition of the left image of the target road is smaller than the preset definition, acquiring a straight-going right lane line pixel value according to the straight-going right lane line pixel information;
acquiring a second steering instruction according to the difference value between the current right lane accumulated pixel value and the straight right lane line pixel value;
and determining the second steering command as the steering control command.
Optionally, the method further includes:
acquiring original image information;
extracting road image information based on the original image information;
performing a convolution operation on the road image information to obtain noise-reduced image information;
performing edge extraction on the noise-reduced image information to obtain feature edge image information;
and performing a color filtering operation on the feature edge image information to obtain the target road image information.
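The optional preprocessing chain above can be sketched in pure Python on a small grayscale grid: a 3x3 box convolution for noise reduction, a horizontal gradient for edge extraction, and a threshold as the color-filtering step. The kernel and the threshold value (30) are illustrative assumptions, not values from the patent.

```python
# Sketch of the preprocessing chain: blur (noise reduction), horizontal
# gradient (edge extraction), threshold (color filtering).

def box_blur(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Average the 3x3 neighbourhood, clamped at the borders.
            ys = range(max(0, y - 1), min(h, y + 2))
            xs = range(max(0, x - 1), min(w, x + 2))
            vals = [img[yy][xx] for yy in ys for xx in xs]
            out[y][x] = sum(vals) / len(vals)
    return out

def horizontal_edges(img):
    # |difference with the left neighbour| highlights vertical lane edges.
    return [[abs(row[x] - row[x - 1]) if x else 0.0 for x in range(len(row))]
            for row in img]

def color_filter(img, thresh):
    return [[1 if v > thresh else 0 for v in row] for row in img]

# Bright lane line (value 200) running down column 3 of a dark road (value 20).
road = [[200 if x == 3 else 20 for x in range(6)] for _ in range(4)]
mask = color_filter(horizontal_edges(box_blur(road)), thresh=30.0)
```

The resulting mask keeps only the two vertical edges of the painted line; a real implementation would use a camera-calibrated kernel and threshold (or a library such as OpenCV) rather than these stand-ins.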
In a second aspect, the present invention further provides an intelligent driving control device based on lane lines, including:
the system comprises a first acquisition unit, a second acquisition unit and a control unit, wherein the first acquisition unit is used for acquiring target road image information, and the target road image information is road image information of a vehicle driving direction;
a second obtaining unit, configured to obtain target lane line pixel information based on the target road image information;
the generating unit is used for generating a steering control instruction through the lane line pixel information;
and the control unit is used for controlling the target vehicle to run based on the steering control command.
In a third aspect, the present invention further provides an electronic device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program stored in the memory, implements the steps of the lane-line-based intelligent driving method according to any one of the first aspect.
In a fourth aspect, the present invention further provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the lane-line-based intelligent driving method of any one of the first aspect.
In summary, the lane-line-based intelligent driving method provided by the embodiments of the application includes: acquiring target road image information, i.e. road image information in the vehicle's driving direction; acquiring target lane line pixel information based on the target road image information; generating a steering control instruction from the lane line pixel information; and controlling the target vehicle to travel based on the steering control instruction. The method only needs to acquire the image of the target road ahead of the vehicle, identify the lane line pixel information in it, and generate the steering control instruction from that pixel information, so that the vehicle is controlled to travel based on changes in the lane lines.
Additional advantages, objects, and features of the lane line based intelligent driving method of the present invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the specification. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a schematic flow chart of an intelligent driving method based on a lane line according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a target road according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a target straight road according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a target left-turn road according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a target right-turn road according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of a simulation test result provided in an embodiment of the present application;
fig. 7 is a schematic diagram of simulation test parameters provided in an embodiment of the present application;
FIG. 8 is a diagram illustrating parameters provided in an embodiment of the present application;
fig. 9 is a lane line-based intelligent driving control device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an intelligent driving electronic device based on a lane line according to an embodiment of the present application.
Detailed Description
The method provided by the embodiments of the application only needs to acquire the image of the target road ahead of the vehicle, identify the lane line pixel information in it, and generate a steering control instruction from that pixel information, so that the vehicle is controlled to travel based on changes in the lane lines.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
Referring to fig. 1, a schematic flow chart of an intelligent driving method based on lane lines provided in the embodiment of the present application may specifically include:
s110, acquiring target image information, wherein the target image information is road image information of a vehicle driving direction;
For example, a common method of lane-line-based intelligent driving control is to detect the lane lines on both sides of the road with vehicle-mounted cameras and calculate the driving path the vehicle should follow from the distance between the two lane lines and their curvature. Vehicle state data acquired from in-vehicle sensors (e.g. IMU, GPS and steering wheel angle sensor), including longitudinal speed, lateral speed, heading angle, wheel angle and vehicle position, are then combined to calculate the translation and/or rotation difference between the vehicle's actual path and the theoretical driving path, and a linear or nonlinear controller drives the vehicle toward the theoretical path. This method achieves good control performance on vehicles equipped with ADAS systems of level L2 or above. However, in scenarios such as unmanned delivery vehicles in warehouses or unmanned shuttles in campuses, a complete set of sensing equipment is not necessarily installed on the vehicle because of cost and measurement constraints, so this method is too costly and uneconomical there.
According to the method of the present application, only the road image information in the vehicle's driving direction needs to be acquired, and the steering information of the vehicle can be obtained through subsequent image processing. First, image information of the driving direction, i.e. the target image information, is acquired by a camera mounted at the front end of the vehicle. It will be appreciated that the camera may be mounted on the grille, the windshield, or any other location where the view from the front of the vehicle is unobstructed. The image acquired by the camera may include content other than the road, such as distant scenery like sky and mountains; the road image information alone can be obtained through preprocessing.
S120, acquiring target lane line pixel information based on the target image information;
Illustratively, the lane line information is extracted from the target image information. The lane line may be extracted based on its contour shape or based on the color difference in the image: since the color of the lane line contrasts with the color of the road, the target image is preprocessed accordingly and a suitable threshold is applied to extract the lane line pixel information.
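The color-difference thresholding just described can be sketched as follows; the threshold value (120) is an illustrative assumption and would be calibrated per camera and lighting in practice.

```python
# Sketch of threshold-based lane-pixel extraction: lane paint is
# brighter than asphalt, so any grayscale value above a calibrated
# threshold is counted as a lane-line pixel.

LANE_THRESHOLD = 120  # illustrative; calibrated in practice

def extract_lane_pixels(gray_image, thresh=LANE_THRESHOLD):
    """Return a binary mask and the accumulated lane-pixel count."""
    mask = [[1 if v > thresh else 0 for v in row] for row in gray_image]
    count = sum(sum(row) for row in mask)
    return mask, count

image = [
    [30, 30, 210, 30],   # 210, 200, 190 = painted lane-line pixels
    [30, 200, 30, 30],
    [190, 30, 30, 30],
]
mask, count = extract_lane_pixels(image)
```

The accumulated count is exactly the "lane line pixel information" the later steps turn into a steering instruction.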
S130, generating a steering control instruction according to the lane line pixel information;
Illustratively, when the road is straight, the lane line pixels appearing in the camera image should remain essentially unchanged; when the road turns, the lane line image captured by the camera bends to the left or right. As shown in fig. 2, when the vehicle approaches a left-turn intersection, the lane line pixel information changes (increases or decreases), and this change bears a definite relationship to the turning radius, so the lane line pixel information is used to generate the steering control instruction.
And S140, controlling the target vehicle to run based on the steering control command.
Illustratively, the steering angle of the vehicle is obtained from the steering control instruction generated in the above steps, and the vehicle is controlled to travel at that steering angle, which keeps the vehicle traveling along the middle of the two lane lines and realizes intelligent driving.
In summary, the method provided by the embodiment of the application only needs to acquire the image information of the target road in front of the vehicle, identify the pixel information of the lane line in the image information of the target road, and generate the steering control instruction based on the pixel information of the lane line, so that the vehicle is controlled to run based on the change of the lane line.
Optionally, the target road image includes a target road left image and a target road right image, and the target lane line pixel information includes current left lane line pixel information and current right lane line pixel information;
the above-mentioned lane line pixel information based on the above-mentioned target road image information acquisition, including:
acquiring the left lane line pixel information in the left image of the target road;
acquiring the pixel information of the right lane line in the right image of the target road;
The generating of the steering control instruction from the lane line pixel information includes:
acquiring a current left lane accumulated pixel value based on the current left lane line pixel information;
acquiring a right-turn angle signal according to the current left lane accumulated pixel value;
acquiring a current right lane accumulated pixel value based on the current right lane line pixel information;
acquiring a left-turn angle signal according to the current right lane accumulated pixel value;
summing the right-turn and left-turn angle signals to obtain a first steering instruction;
and determining the first steering instruction as the steering control instruction.
For example, the target road image includes a target road left image and a target road right image, which may be obtained either by dividing the image from a single camera at a certain ratio or from two cameras mounted on the two sides of the vehicle front. The target lane line pixel information includes current left lane line pixel information and current right lane line pixel information. The original image shown in fig. 2 is a standard bidirectional lane diagram: the middle dotted line is the left lane line of the current direction of travel, and the right solid line is the current right lane line. When acquiring the lane line information, only the current left lane line is extracted from the left image of the target road (the right lane is ignored), and only the current right lane line is extracted from the right image (the left lane is ignored). The current left lane accumulated pixel value and the current right lane accumulated pixel value are acquired from the current left and right lane line pixel information respectively; a right-turn angle is acquired from the current left lane accumulated pixel value and a left-turn angle from the current right lane accumulated pixel value, the two angles are summed to obtain a first steering instruction, and the first steering instruction is determined as the steering control instruction. For example, when the road ahead is straight, as shown in fig. 3, the current left lane accumulated pixel value (dotted line) is 50 and the current right lane (solid line) accumulated pixel value is 50. With the convention that left is positive and right is negative, the left-side pixel value generates a right-turn angle of -50 and the right-side pixel value generates a left-turn angle of +50; their sum is 0, so the driving direction does not need to change. When the road ahead turns left, as shown in fig. 4, the right-turn angle obtained from the current left lane accumulated pixel value (dotted line) is -10 and the left-turn angle from the current right lane (solid line) pixel value is +80; the sum is +70, and since left was predetermined positive, the total steering signal is a left turn of 70. When the road ahead turns right, as shown in fig. 5, the right-turn angle obtained from the current left lane accumulated pixel value (dotted line) is -90 and the left-turn angle from the current right lane (solid line) pixel value is +10; the sum is -80, and since right was predetermined negative, the resulting steering signal is a right turn of 80. It should be noted that these numerical values are only illustrative; the specific steering angle requires calibration against the total pixel count of the image for the given camera mounting position. In addition, because the left lane line is dotted and the right lane line is solid, the current left lane accumulated pixel value will differ from the current right lane value even when driving straight, and a suitable scaling ratio can be set through calibration to meet the control requirement.
In summary, in the method provided by this embodiment, the current left lane accumulated pixel value is acquired from the left image of the target road and the current right lane accumulated pixel value from the right image; right-turn and left-turn angle signals are formed from the current left and right lane accumulated pixel values respectively, and the first steering instruction is obtained as the sum of the two angle signals.
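The sign convention and the three worked cases above can be written out as a short sketch; the pixel counts are the illustrative figures from the text, not calibrated values.

```python
# Sign convention from the text: left is positive, right is negative,
# so left-lane pixels produce a (negative) right-turn angle and
# right-lane pixels a (positive) left-turn angle.

def first_steering_instruction(left_lane_px, right_lane_px):
    right_turn_angle = -left_lane_px   # left line votes for turning right
    left_turn_angle = +right_lane_px   # right line votes for turning left
    return right_turn_angle + left_turn_angle

straight = first_steering_instruction(50, 50)     # 0: keep heading
left_curve = first_steering_instruction(10, 80)   # +70: steer left
right_curve = first_steering_instruction(90, 10)  # -80: steer right
```

The three results (0, +70, -80) match the straight, left-turn and right-turn examples in the text.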
Optionally, the method further includes:
and dividing the target road image according to a preset proportion to obtain the left image of the target road and the right image of the target road.
For example, as shown in fig. 2, the original image is a standard bidirectional lane diagram in which the middle dotted line is the left lane line of the current direction of travel and the right solid line is the current right lane line. The original image is preprocessed to obtain the target road image, which is divided into a target road left image and a target road right image at a preset ratio (1:1 may be selected). The current left and right lane line information is then identified by the method above, and a control instruction is obtained from it to control the vehicle to drive intelligently. This approach requires only one camera mounted at the front of the vehicle; the acquired picture is divided into left and right images at the preset ratio, and the vehicle is controlled based on the lane line identification method, which reduces hardware cost to a certain extent.
In summary, the method provided by this embodiment can obtain an original image from a monocular camera, preprocess it and divide it at a preset ratio into a target road left image and a target road right image, obtain the left/right lane line information from those images, and calculate the left/right turn-angle signals from that information to generate a control instruction for driving the vehicle.
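The image split at a preset ratio can be sketched as a column cut; the 1:1 ratio (0.5) follows the text, while the synthetic 3x8 frame is purely for illustration.

```python
# Sketch of splitting one camera frame into left/right sub-images at a
# preset column ratio; in practice the ratio would be calibrated
# against the camera mounting position.

def split_target_road_image(frame, ratio=0.5):
    cut = int(len(frame[0]) * ratio)
    left = [row[:cut] for row in frame]
    right = [row[cut:] for row in frame]
    return left, right

frame = [[c for c in range(8)] for _ in range(3)]  # 3x8 synthetic frame
left_img, right_img = split_target_road_image(frame)
```

Each half then feeds the per-side lane-line extraction described above.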
Optionally, the method further includes:
acquiring the current left lane line pixel information corresponding to the left lane line closer to the left side when a plurality of left lane lines exist in the left image of the target road;
or
acquiring the current right lane line pixel information corresponding to the right lane line closer to the right side when a plurality of right lane lines exist in the right image of the target road.
For example, as shown in fig. 5, two solid lines (i.e. right lane lines) appear in the right image of the target road. If the sum of the pixel information of both right lane lines were used as the current right lane line pixel information, the result would not match the actual situation, because the upper, longer lane line is the right lane line of oncoming traffic. In this case the short solid line near the lower right should be selected as the current right lane line, and its pixel information acquired for intelligent driving. Correspondingly, when multiple left lane lines exist in the left image of the target road, the left lane line closer to the left side is taken as the current left lane line. It will be appreciated that when there are multiple lane lines, a reasonable distance threshold can be set for screening, preventing the oncoming lane line from interfering.
In summary, the method provided by this embodiment can effectively screen out the oncoming lane line when multiple lane lines appear in one side image, avoiding confused control behavior.
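The screening rule for the right image can be sketched as "keep only candidates within a distance threshold of the right edge, then take the closest one". The candidate column positions and the threshold (40 px) are illustrative assumptions.

```python
# Sketch of screening multiple detected right-lane lines so the
# oncoming lane's line is ignored.

def pick_current_right_lane(candidate_cols, image_width, max_dist=40):
    near = [c for c in candidate_cols if image_width - c <= max_dist]
    return max(near) if near else None  # closest to the right edge

# Two solid lines detected at columns 70 and 150 in a 160-px-wide
# image; column 70 belongs to the oncoming lane and is screened out.
chosen = pick_current_right_lane([70, 150], image_width=160)
```

The mirror-image rule (closest to the left edge) would apply to the left image.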
Optionally, the target lane line pixel information further includes straight-travel left lane line pixel information;
the generating of the steering control command by the lane line pixel information includes:
under the condition that the definition of the right image of the target road is smaller than the preset definition, acquiring a straight left lane line pixel value according to the straight left lane line pixel information;
acquiring a first steering instruction according to the difference value between the current left lane accumulated pixel value and the straight left lane line pixel value;
and determining the first steering command as the steering control command.
For example, when the definition of the right image of the target road is lower than the preset definition, the right and left images cannot both be used to generate turn angles for the steering instruction. In this case the current left lane line pixel value is acquired, the first steering instruction is obtained from the difference between the current left lane line pixel value and the straight-travel left lane line pixel value, and the vehicle is controlled with the first steering instruction as the steering control instruction. It can be understood that when the lane ahead turns left, the current left lane line pixel value decreases relative to the straight-travel left lane accumulated pixel value, and left-turn control is performed in proportion to that difference. It should be noted that the straight-travel left lane line pixel value may be set to a fixed value in advance, or may be sampled while the vehicle is traveling straight.
In summary, when the definition of the right image is insufficient, the method provided in this embodiment avoids the potential safety hazard of controlling the vehicle according to lane information from both the left and right images: the first steering instruction is generated from the difference between the left lane accumulated pixel value and the straight left lane line pixel value, and the vehicle is steered with the first steering instruction as the steering control instruction, which improves the safety of vehicle control and avoids accidents.
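The single-sided fallback amounts to a proportional controller on the pixel-value difference. The sketch below assumes a hypothetical gain and a sign convention (negative angle = left turn); neither value comes from the patent.

```python
def fallback_steering(current_accum, straight_accum, gain=0.01):
    """Proportional steering from the drop of the accumulated left-lane
    pixel value below its straight-driving reference: a smaller current
    value suggests the lane curves left, giving a negative (left) angle.

    gain is an assumed tuning constant, not a value from the patent."""
    return gain * (current_accum - straight_accum)
```

The right-image case described in the following paragraphs is symmetric, substituting the right lane accumulated pixel value and its straight-driving reference.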
Optionally, the target lane line pixel information further includes straight right lane line pixel information;
the generating of the steering control command by the lane line pixel information includes:
under the condition that the definition of the left image of the target road is smaller than the preset definition, acquiring a straight-going right lane line pixel value according to the straight-going right lane line pixel information;
acquiring a second steering instruction according to the difference value between the current right lane accumulated pixel value and the straight right lane line pixel value;
and determining the second steering command as the steering control command.
For example, when the definition of the left image of the target road is smaller than the preset definition, the left and right images cannot both be used to generate a steering angle and obtain a steering instruction. In this case the current right lane line pixel value is acquired, a second steering instruction is obtained from the difference between the current right lane line pixel value and the straight right lane line pixel value, and the vehicle is controlled with the second steering instruction as the steering control instruction. It can be understood that when the lane ahead curves right, the current right lane line pixel value decreases relative to the straight right lane accumulated pixel value, and right-turn control is performed according to this difference. It should be noted that the straight right lane line pixel value may be set to a fixed value in advance, or may be acquired while the vehicle is driving straight.
In summary, when the definition of the left image is insufficient, the method provided in this embodiment avoids the potential safety hazard of controlling the vehicle according to lane information from both the left and right images: the second steering instruction is generated only from the difference between the right lane accumulated pixel value and the straight right lane line pixel value, and the vehicle is steered with the second steering instruction as the steering control instruction, which improves the safety of vehicle control and avoids accidents.
In some examples, the method further comprises:
acquiring original image information;
extracting road image information based on the original image information;
performing convolution operation on the road image information to obtain noise reduction image information;
extracting the noise reduction image information to obtain characteristic edge image information;
and carrying out color filtering operation on the characteristic edge image information to obtain the target road image information.
For example, an image captured directly by the camera is the original image information. Since the original image information may contain noise or other useless information, the original image needs to be preprocessed to reduce the influence of that useless information on subsequent operations such as lane line identification. The preprocessing mainly includes: extracting the road image information from the original image information; performing a convolution operation on the road image information to obtain noise-reduced image information; performing an extraction operation on the noise-reduced image information to obtain characteristic edge image information; and performing a color filtering operation on the characteristic edge image information to obtain the target road image information, which is then used for lane line identification and control instruction determination.
In some examples, pre-processing the original image may include:
A, filtering the picture above the horizon:
the intelligent driving method based on lane lines only needs to pay attention to the road surface, so in order to reduce the computational load on the processor and graphics card, the part of the picture above the horizon line is removed first. To achieve this, the position of the horizon line in the picture must be determined. It can be determined dynamically from the pitch angle of the vehicle and the mounting position of the camera, or a suitable fixed value can be chosen, i.e., the lane lines within a range close to the front of the vehicle are selected. In this embodiment, a target range 3 meters ahead of the vehicle is selected, and two points on the horizon line, with coordinates (3,0,1) and (3,-5,1) in the normalized coordinate system of three-dimensional space, are chosen to obtain the horizon line equation in three-dimensional space. By calibrating the camera parameters, the normalization matrix H of the camera can be obtained, and H can be used to convert the horizon line of three-dimensional space into the horizon line of the two-dimensional image space. Since the content above the horizon line is invalid information, discarding it reduces the amount of computation.
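Assuming a calibrated 3×3 normalization matrix H, the projection of the two horizon points into the image and the subsequent crop can be sketched as follows. The function name and the convention that H maps homogeneous ground coordinates to homogeneous pixel coordinates are assumptions for the example, not details fixed by the patent.

```python
import numpy as np

def crop_above_horizon(image, H, p1=(3.0, 0.0, 1.0), p2=(3.0, -5.0, 1.0)):
    """Project two ground points 3 m ahead of the car through the
    calibrated normalization matrix H, take the image row of the
    resulting horizon line, and drop everything above it."""
    rows = []
    for p in (p1, p2):
        q = H @ np.asarray(p)          # homogeneous image point
        rows.append(q[1] / q[2])       # v (row) coordinate in pixels
    horizon_row = int(min(rows))       # keep the higher of the two rows
    return image[max(horizon_row, 0):, :]
```

The cropped frame is what the later noise-reduction and edge-extraction steps operate on.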
And B, improving the signal-to-noise ratio:
the image generated by the camera inevitably contains noise, which is generally randomly distributed. Based on this property of the noise, and in combination with the road illumination conditions, the camera resolution and other factors, the original image is convolved with a normal-distribution (Gaussian) convolution kernel with a mean of 0 and a standard deviation of 3. The processed image is slightly blurred, but most of the background noise is averaged out.
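A pure-NumPy sketch of this smoothing step, using a separable Gaussian kernel with mean 0 and the stated spread of 3, might look like the following (truncating the kernel at three standard deviations is a common convention, assumed here):

```python
import numpy as np

def gaussian_blur(image, sigma=3.0):
    """Convolve the image with a zero-mean Gaussian kernel (the
    'normal distribution convolution kernel' of the text); random
    sensor noise is averaged away while lane edges largely remain."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()                      # normalize so brightness is preserved
    # separable convolution: filter rows, then columns
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out
```

In practice a library routine (e.g. an OpenCV Gaussian blur) would replace this hand-rolled version; the sketch only shows the operation.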
And C, edge extraction:
the lane lines are line patterns, and the horizontal and longitudinal edges in the picture can be extracted by Sobel filtering. The Sobel filter extracts edge elements from the brightness gradient of the pixels, so reflective spots, stains and small obstacles on the road surface may also be retained. In order to filter out content unrelated to the lane lines, the root mean square of the brightness variation intensity in the horizontal and longitudinal directions of the image is calculated, and only image content whose root mean square is greater than a preset threshold (10 in this embodiment) is retained.
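A direct (unoptimized) sketch of the Sobel step and the root-mean-square thresholding, with the 3×3 kernels spelled out; the exact normalization of the magnitude is an assumption for the example:

```python
import numpy as np

def sobel_edges(image, threshold=10.0):
    """Apply 3x3 Sobel kernels in both directions and keep only pixels
    whose root mean square of the two gradient responses exceeds the
    threshold (10 in the embodiment described in the text)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = image.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = image[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    mag = np.sqrt((gx**2 + gy**2) / 2)  # root mean square of the two responses
    return np.where(mag > threshold, mag, 0.0)
```

A vectorized or library implementation would be used on real frames; the loops here just make the kernel arithmetic explicit.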
D: left and right image segmentation:
in order for a monocular camera to "simulate" the effect of a binocular camera, the image also needs to be divided equally into left and right parts. The left image is used to detect a left lane line and the right image is used to detect a right lane line.
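The split itself is a straightforward halving of the frame; a minimal sketch:

```python
import numpy as np

def split_left_right(image):
    """Halve the frame vertically: the left half is used to detect the
    left lane line, the right half the right lane line."""
    w = image.shape[1]
    return image[:, :w // 2], image[:, w // 2:]
```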
E: color filtering:
in order to calculate the actuator excitation intensities for the left and right turns, respectively, it is also necessary to extract the left and right lane lines of the road in the left and right images, respectively. In the driving environment of the present embodiment, the vehicle is driven on the right side, and according to the general marking rule of the international bidirectional lane, the lane on the left side is a yellow dotted line, and the lane on the right side is a white solid line. When the target image is in HSV format, the yellow dotted line and the white solid line can be extracted by adjusting the range of H, S, V values.
In conclusion, the method provided by this embodiment can effectively filter out noise and irrelevant information, and extract the target road image information accurately and quickly.
In some examples, the vehicle model used in the simulation test of this method has a size ratio to an ordinary passenger vehicle of about 20:1; after calibration and adjustment, the model achieves a fairly good control effect both in the simulator and in a real environment. As shown in fig. 6 and 7, the vehicle can smoothly follow the lane line. The angular deviation and lateral deviation of the vehicle are extracted from data such as the vehicle trajectory, lateral offset, heading angle and the angle to the lane line tangent direction. It should be noted that, as shown in fig. 8, the angular deviation is the angle θ between the vehicle's driving direction and the lane line direction, and the lateral deviation is the perpendicular distance d between the vehicle's center position and the left lane line. As can be seen from the data in fig. 7, the average angular deviation of the vehicle's trajectory is 3.2 degrees and the average lateral deviation is 2.7 cm. The intelligent driving method based on lane lines thus has high accuracy and can be applied in engineering practice.
Referring to fig. 9, an embodiment of the lane line-based intelligent driving control apparatus in the embodiment of the present application may include:
a first acquisition unit 21 configured to acquire target road image information, where the target road image information is road image information of a vehicle traveling direction;
a second obtaining unit 22, configured to obtain target lane line pixel information based on the target road image information;
a generating unit 23, configured to generate a steering control instruction according to the lane line pixel information;
and a control unit 24 for controlling the travel of the target vehicle based on the steering control command.
As shown in fig. 10, an embodiment of the present application further provides an electronic device 300, which includes a memory 310, a processor 320, and a computer program 311 stored in the memory 310 and executable on the processor 320. When the computer program 311 is executed by the processor 320, the steps of any one of the above lane line-based intelligent driving methods are implemented.
Since the electronic device described in this embodiment is the device used to implement the lane line-based intelligent driving control apparatus of this embodiment, based on the method described herein, a person skilled in the art can understand the specific implementation of this electronic device and its various variations. Therefore, how the electronic device implements the method is not described in detail here; any device a person skilled in the art uses to implement the method of this embodiment falls within the intended scope of protection of this application.
In a specific implementation, the computer program 311 may implement any of the embodiments corresponding to fig. 1 when executed by a processor.
It should be noted that, in the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to relevant descriptions of other embodiments for parts that are not described in detail in a certain embodiment.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Embodiments of the present application further provide a computer program product, where the computer program product includes computer software instructions, and when the computer software instructions are executed on a processing device, the processing device is caused to execute a flow of the lane line-based intelligent driving method in the embodiment corresponding to fig. 1.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method of the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. An intelligent driving method based on lane lines is characterized by comprising the following steps:
acquiring target road image information, wherein the target road image information is road image information of a vehicle driving direction;
acquiring target lane line pixel information based on the target road image information;
generating a steering control instruction according to the lane line pixel information;
and controlling the target vehicle to run based on the steering control command.
2. The method of claim 1, wherein the target road image includes a target road left image and a target road right image, the target lane line pixel information including current left lane line pixel information and current right lane line pixel information;
the acquiring of the lane line pixel information based on the target road image information includes:
acquiring the left lane line pixel information in the left image of the target road;
acquiring the pixel information of the right lane line in the right image of the target road;
the generating of the steering control instruction through the lane line pixel information includes:
acquiring a current left lane accumulated pixel value based on the current left lane line pixel information;
acquiring a right-turn angle signal amplitude according to the current left lane accumulated pixel value;
acquiring a current right lane accumulated pixel value based on the current right lane line pixel information;
acquiring a left-turn angle signal amplitude according to the current right lane accumulated pixel value;
summing the right-turn angle signal amplitude and the left-turn angle signal amplitude to obtain a first steering instruction;
determining the first steering command as the steering control command.
3. The method of claim 2, further comprising:
and dividing the target road image according to a preset proportion to obtain the target road left image and the target road right image.
4. The method of claim 2, further comprising:
under the condition that a plurality of left lane lines exist in the left image of the target road, acquiring pixel information of the current left lane line corresponding to the left lane line close to the left side;
or
And under the condition that a plurality of right lane lines exist in the right image of the target road, acquiring pixel information of the current right lane line corresponding to the right lane line close to the right side.
5. The method of claim 2, wherein the target lane line pixel information further includes straight-going left lane line pixel information;
the generating of the steering control instruction through the lane line pixel information includes:
under the condition that the definition of the right image of the target road is smaller than the preset definition, acquiring a straight-going left lane line pixel value according to the straight-going left lane line pixel information;
acquiring a first steering instruction according to the difference value between the current left lane accumulated pixel value and the straight-going left lane line pixel value;
determining the first steering command as the steering control command.
6. The method of claim 2, wherein the target lane line pixel information further includes straight-driving right lane line pixel information;
the generating of the steering control instruction through the lane line pixel information includes:
under the condition that the definition of the left image of the target road is smaller than the preset definition, acquiring a straight-driving right lane line pixel value according to the straight-driving right lane line pixel information;
acquiring a second steering instruction according to the difference value between the current right lane accumulated pixel value and the straight-going right lane line pixel value;
determining the second steering command as the steering control command.
7. The method of claim 1, further comprising:
acquiring original image information;
extracting road image information based on the original image information;
performing convolution operation on the road image information to obtain noise reduction image information;
extracting the noise reduction image information to obtain characteristic edge image information;
and carrying out color filtering operation on the characteristic edge image information to obtain the target road image information.
8. An intelligent driving control device based on lane lines, characterized by comprising:
the system comprises a first acquisition unit, a second acquisition unit and a control unit, wherein the first acquisition unit is used for acquiring target road image information, and the target road image information is road image information of a vehicle driving direction;
a second acquisition unit configured to acquire target lane line pixel information based on the target road image information;
the generating unit is used for generating a steering control instruction through the lane line pixel information;
and the control unit is used for controlling the target vehicle to run based on the steering control command.
9. An electronic device, comprising: memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor is adapted to carry out the steps of the lane line based intelligent driving method according to any of claims 1-7 when executing the computer program stored in the memory.
10. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program, when executed by a processor, implements the lane-line based intelligent driving method of any one of claims 1-7.
CN202210179893.5A 2022-02-25 2022-02-25 Intelligent driving method based on lane line and related equipment Pending CN114644014A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210179893.5A CN114644014A (en) 2022-02-25 2022-02-25 Intelligent driving method based on lane line and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210179893.5A CN114644014A (en) 2022-02-25 2022-02-25 Intelligent driving method based on lane line and related equipment

Publications (1)

Publication Number Publication Date
CN114644014A true CN114644014A (en) 2022-06-21

Family

ID=81994255

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210179893.5A Pending CN114644014A (en) 2022-02-25 2022-02-25 Intelligent driving method based on lane line and related equipment

Country Status (1)

Country Link
CN (1) CN114644014A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116588112A (en) * 2023-07-11 2023-08-15 广汽埃安新能源汽车股份有限公司 Intersection vehicle control method and device and vehicle
CN116588112B (en) * 2023-07-11 2024-01-30 广汽埃安新能源汽车股份有限公司 Intersection vehicle control method and device and vehicle

Similar Documents

Publication Publication Date Title
US10685246B2 (en) Systems and methods for curb detection and pedestrian hazard assessment
US9619719B2 (en) Systems and methods for detecting traffic signs
CN108647638B (en) Vehicle position detection method and device
US10402665B2 (en) Systems and methods for detecting traffic signs
US9569673B2 (en) Method and device for detecting a position of a vehicle on a lane
DE102018116040B4 (en) PERIPHERAL DISPLAY CONTROL DEVICE
CN103679119B (en) Self adaptation slope road detection method and device
EP2958054A2 (en) Hazard detection in a scene with moving shadows
DE112016000187T5 (en) Method and apparatus for estimating vehicle intrinsic movement based on panoramic images
US11200432B2 (en) Method and apparatus for determining driving information
CN107628032A (en) Automatic Pilot control method, device, vehicle and computer-readable recording medium
CN104751151A (en) Method for identifying and tracing multiple lanes in real time
CN111178122A (en) Detection and planar representation of three-dimensional lanes in a road scene
Liu et al. Development of a vision-based driver assistance system with lane departure warning and forward collision warning functions
DE112016000423T5 (en) SECTION LINE DETECTION DEVICE
DE112016006962T5 (en) Recognition region estimation device, recognition region estimation method, and recognition region estimation program
CN114644014A (en) Intelligent driving method based on lane line and related equipment
US11904843B2 (en) Autonomous parking systems and methods for vehicles
CN114648743A (en) Three-dimensional traffic sign detection
KR102003387B1 (en) Method for detecting and locating traffic participants using bird's-eye view image, computer-readerble recording medium storing traffic participants detecting and locating program
JP2005173899A (en) Surrounding situation display unit
CN115346191A (en) Method and apparatus for calibration
DE102017216954A1 (en) Method and device for determining a highly accurate position and for operating an automated vehicle
GB2580400A (en) A control system, system and method for providing assistance to an occupant of a vehicle
US11919513B2 (en) Control system, system and method for providing assistance to an occupant of a vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination