CN108528433B - Automatic control method and device for vehicle running


Info

Publication number
CN108528433B
Authority
CN
China
Prior art keywords
lane
image
vehicle
target vehicle
target
Prior art date
Legal status
Active
Application number
CN201710122206.5A
Other languages
Chinese (zh)
Other versions
CN108528433A (en)
Inventor
黄忠伟
姜波
Current Assignee
BYD Co Ltd
Original Assignee
BYD Co Ltd
Priority date
Filing date
Publication date
Application filed by BYD Co Ltd
Priority to CN201710122206.5A
Publication of CN108528433A
Application granted
Publication of CN108528433B

Classifications

    • B60W10/18: Conjoint control of vehicle sub-units of different type or different function, including control of braking systems
    • B60W30/09: Active safety; taking automatic action to avoid collision, e.g. braking and steering
    • B60W30/143: Adaptive cruise control; speed control
    • B60W40/04: Estimation of driving parameters related to ambient conditions; traffic conditions
    • B60W40/06: Estimation of driving parameters related to ambient conditions; road conditions
    • B60W2050/0014: Details of the control system; automatic control; adaptive controllers
    • B60W2050/0043: Details of the control system; signal treatments, identification of variables or parameters, parameter estimation or state estimation
    • B60W2050/0075: Adapting control system settings; automatic parameter input, automatic initialising or calibrating means
    • B60W2552/00: Input parameters relating to infrastructure
    • B60W2554/80: Input parameters relating to objects; spatial relation or speed relative to objects
    • B60W2710/18: Output or target parameters; braking system

Landscapes

  • Engineering & Computer Science (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an automatic control method and device for vehicle running. The method comprises: acquiring a first image and a second image of the environment in front of a subject vehicle from a front-facing 3D camera; acquiring a front road lane line from the first image, and further acquiring a third image and a rear road lane line; generating a plurality of front vehicle identification ranges from the first image and the second image so as to identify a front target vehicle; generating a plurality of rear vehicle identification ranges from the third image and the rear road lane lines; acquiring point cloud data from a laser radar to obtain a plurality of rear target point cloud data so as to identify a rear target vehicle; and performing cruise control on the motion parameters of the subject vehicle according to the motion parameters of the front target vehicle and of the rear target vehicle. Driving safety is thereby ensured.

Description

Automatic control method and device for vehicle running
Technical Field
The invention relates to the technical field of vehicle control, and in particular to an automatic control method and device for vehicle running.
Background
Currently, adaptive cruise systems for vehicles are attracting growing interest. In such a system, a user sets a desired vehicle speed, and the system obtains the exact position of the preceding vehicle using a low-power radar or an infrared beam. If the preceding vehicle decelerates, or a new target is detected, the system sends an actuation signal to the engine or the braking system to reduce the vehicle speed so that the vehicle maintains a safe driving distance from the preceding vehicle. When the road ahead is clear, the vehicle accelerates back to the set speed, and the radar system automatically monitors the next target. The adaptive cruise control system thus controls the vehicle speed in place of the user, avoids frequent cancelling and resetting of cruise control, is suitable for more road conditions, and provides a more relaxed driving mode for the user.
However, the existing adaptive cruise system needs a laser radar as its ranging sensor. When a preceding target vehicle travels on a curve, the laser radar cannot recognize lane lines well because of its operating principle. A subject vehicle equipped only with a laser radar is therefore likely to misidentify a preceding target vehicle outside the host lane as being in the host lane, and may recognize with delay a preceding target vehicle that is changing lanes into the host lane. Such false recognition or recognition delay may cause the adaptive cruise system of the subject vehicle to brake falsely or brake with delay, thereby increasing the risk of a rear-end collision.
Disclosure of Invention
An object of the present invention is to solve, at least to some extent, one of the above-mentioned technical problems.
Therefore, a first object of the present invention is to provide an automatic control method for vehicle running, which can accurately identify road lane lines, acquire the lane environment information of the host vehicle by combining the data acquired by a laser radar, perform cruise control on the host vehicle according to this specific lane environment information, and thereby ensure driving safety.
A second object of the present invention is to provide an automatic vehicle running control device.
In order to achieve the above object, an embodiment of a first aspect of the present invention provides a vehicle running automatic control method, including: acquiring a first image and a second image of an environment in front of a subject vehicle from a front-facing 3D camera, wherein the first image is a color or brightness image, and the second image is a depth image; acquiring a front road lane line according to the first image; acquiring a third image and a rear road lane line according to the imaging parameters of the first image and the front road lane line; mapping the front road lane lines into the second image according to the interweaving mapping relation between the first image and the second image to generate a plurality of front vehicle identification ranges; identifying a front target vehicle according to all the front vehicle identification ranges; generating a plurality of rear vehicle identification ranges according to the third image and the rear road lane lines; acquiring point cloud data from a laser radar, and projecting the point cloud data into the third image according to the installation parameters of the laser radar to acquire rear target point cloud data; identifying rear target vehicles according to all rear vehicle identification ranges and the rear target point cloud data; identifying a steering lamp of a corresponding front target vehicle according to the front vehicle lamp identification area; and carrying out cruise control on the motion parameters of the main vehicle according to the motion parameters of the front target vehicle and the motion parameters of the rear target vehicle.
In the automatic control method for vehicle running of the embodiment of the invention, a first image and a second image of the environment in front of the subject vehicle are acquired from a front-facing 3D camera; a front road lane line is acquired; a third image and a rear road lane line are acquired according to the imaging parameters of the first image and the front road lane line; the front road lane lines are mapped into the second image according to the interleaved mapping relationship between the first image and the second image to generate a plurality of front vehicle identification ranges, from which the front target vehicle is identified; a plurality of rear vehicle identification ranges are generated according to the third image and the rear road lane lines; rear target point cloud data are acquired from the laser radar, and the rear target vehicle is identified from them; finally, cruise control is performed on the motion parameters of the subject vehicle according to the motion parameters of the front target vehicle and of the rear target vehicle. Thus, road lane lines are identified accurately, the lane environment information of the subject vehicle is obtained by combining the data acquired by the laser radar, cruise control is performed on the subject vehicle according to this specific lane environment information, and driving safety is ensured.
In order to achieve the above object, a second aspect of the present invention provides an automatic control device for vehicle running, including: a first acquisition module for acquiring a first image and a second image of the environment in front of the host vehicle from a front-facing 3D camera, the first image being a color or brightness image and the second image being a depth image; a second acquisition module for acquiring a front road lane line according to the first image; a third acquisition module for acquiring a third image and a rear road lane line according to the imaging parameters of the first image and the front road lane line; a first generation module for mapping the front road lane lines into the second image according to the interleaved mapping relationship between the first image and the second image to generate a plurality of front vehicle identification ranges; a first identification module for identifying the front target vehicle according to all the front vehicle identification ranges; a second generation module for generating a plurality of rear vehicle identification ranges according to the third image and the rear road lane lines; a fourth acquisition module for acquiring point cloud data from a laser radar and projecting the point cloud data into the third image according to the installation parameters of the laser radar so as to acquire rear target point cloud data; a second identification module for identifying the rear target vehicle according to all the rear vehicle identification ranges and the rear target point cloud data; and a control module for performing cruise control on the motion parameters of the host vehicle according to the motion parameters of the front target vehicle and of the rear target vehicle.
The automatic control device for vehicle running of the embodiment of the invention acquires a first image and a second image of the environment in front of the subject vehicle from the front-facing 3D camera; acquires the front road lane line; acquires a third image and a rear road lane line according to the imaging parameters of the first image and the front road lane line; maps the front road lane lines into the second image according to the interleaved mapping relationship between the first image and the second image to generate a plurality of front vehicle identification ranges, from which the front target vehicle is identified; generates a plurality of rear vehicle identification ranges according to the third image and the rear road lane lines; acquires rear target point cloud data from the laser radar and identifies the rear target vehicle from them; and finally performs cruise control on the motion parameters of the subject vehicle according to the motion parameters of the front target vehicle and of the rear target vehicle. Thus, road lane lines are identified accurately, the lane environment information of the subject vehicle is obtained by combining the data acquired by the laser radar, cruise control is performed on the subject vehicle according to this specific lane environment information, and driving safety is ensured.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flowchart of a vehicle running automatic control method according to a first embodiment of the invention;
fig. 2 is a flowchart of a vehicle running automatic control method according to a second embodiment of the invention;
fig. 3 is a flowchart of a vehicle running automatic control method according to a third embodiment of the invention;
fig. 4 is a flowchart of a vehicle running automatic control method according to a fourth embodiment of the invention;
fig. 5 is a flowchart of a vehicle running automatic control method according to a fifth embodiment of the invention;
fig. 6 is a scene diagram of a vehicle running automatic control method according to an embodiment of the invention;
fig. 7 is a scene diagram of a vehicle running automatic control method according to another embodiment of the invention;
fig. 8 is a schematic configuration diagram of a vehicle running automatic control apparatus according to a first embodiment of the invention;
fig. 9 is a schematic configuration diagram of an automatic control device for vehicle running according to a second embodiment of the present invention;
fig. 10 is a schematic configuration diagram of an automatic control device for vehicle running according to a third embodiment of the invention;
fig. 11 is a schematic configuration diagram of an automatic control device for vehicle running according to a fourth embodiment of the invention;
fig. 12 is a schematic configuration diagram of a vehicular running automatic control apparatus according to a fifth embodiment of the invention;
fig. 13 is a schematic configuration diagram of an automatic control device for vehicle running according to a sixth embodiment of the invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to explain the invention, and are not to be construed as limiting the invention.
The following describes a vehicle running automatic control method and apparatus according to an embodiment of the present invention with reference to the drawings.
Fig. 1 is a flowchart of a vehicle running automatic control method according to a first embodiment of the present invention.
Generally, a laser radar is mounted at the front, side or rear of a vehicle to perform functions such as forward-looking collision avoidance, side collision avoidance and rear-looking collision avoidance. In practical applications, the laser radar analyzes the distance and the relative speed between the preceding vehicle and the current host vehicle from the feedback signal of its transmitted signal, so as to automatically adjust the vehicle speed and ensure driving safety.
Specifically, after a vehicle equipped with the laser radar is on the road, the laser radar selects a vehicle to follow and then monitors it as the target vehicle, so that whether the preceding vehicle accelerates, decelerates, stops or starts, the subject vehicle can know it in real time and take corresponding measures. However, since the laser radar emits single pulses, the type and properties of the detected target vehicle cannot be accurately determined; in particular, when the current target vehicle runs on a curve, lane lines cannot be recognized well, which easily causes recognition delay and the like, leaving a potential safety hazard in driving.
In order to solve the above problems, the invention provides an automatic control method for vehicle running, which can accurately identify road lane lines and road lane environment information, thereby ensuring driving safety.
The following describes the automatic control method for vehicle running according to the present invention with reference to specific embodiments. As shown in fig. 1, the vehicle travel automatic control method includes:
s101, acquiring a first image and a second image of the environment in front of the main vehicle from the front 3D camera, wherein the first image is a color or brightness image, and the second image is a depth image.
Specifically, a 3D camera is arranged in front of a current host vehicle in advance to acquire a first image and a second image of an environment in front of the host vehicle, wherein the first image is a color or brightness image, and the second image is a depth image.
In practical applications, the first image and the second image of the environment in front of the subject vehicle may be acquired from the front 3D camera in various ways according to the structure of the imaging device of the front 3D camera.
As one possible implementation, a first image of the environment in front of the subject vehicle is acquired from an image sensor of the front 3D camera, and a second image of the environment in front of the subject vehicle is acquired from a Time of Flight (TOF) sensor of the front 3D camera.
Here, an image sensor refers to an array or collection of luminance pixel sensors, such as red-green-blue (RGB) or luminance-chrominance (YUV) pixel sensors. Such sensors are commonly used to obtain a luminance image of an environment, but are limited in their ability to accurately determine the distance between the sensor and the inspected object.
A TOF sensor refers to an array or set of TOF pixel sensors, which may be light sensors, phase detectors and the like. They detect the time of flight of light from a pulsed or modulated light source traveling between the TOF pixel sensor and a detected object, so as to measure the distance to the object and acquire a depth image.
In addition, in practical applications, both the image sensor and the TOF sensor may be fabricated using Complementary Metal Oxide Semiconductor (CMOS) processes, and the luminance pixel sensors and the TOF pixel sensors may be laid out proportionally on the same substrate. For example, 8 luminance pixel sensors and 1 TOF pixel sensor fabricated at an 8:1 ratio constitute one large interleaved pixel, where the light-sensing area of the 1 TOF pixel sensor may equal that of the 8 luminance pixel sensors, and the 8 luminance pixel sensors may be arranged in an array of 2 rows and 4 columns.
For example, with an array of 360 rows and 480 columns of such interleaved pixels fabricated on a 1-inch optical target surface substrate, an effective luminance pixel array of 720 rows and 1920 columns and an effective TOF pixel array of 360 rows and 480 columns can be obtained; the same camera can thus acquire a color or luminance image and a depth image simultaneously through its image sensor and TOF sensor.
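As a quick sanity check on the layout just described, the following sketch (illustrative Python; the patent contains no code, and all names are invented here) derives the effective array sizes from the 8:1 interleaving:

```python
# Sanity check of the 8:1 interleaved layout described above:
# one TOF pixel per cell of 2 rows x 4 columns of luminance pixels.
# All names are illustrative; the patent contains no code.

INTERLEAVED_ROWS, INTERLEAVED_COLS = 360, 480   # interleaved "large" pixels
LUMA_ROWS_PER_CELL, LUMA_COLS_PER_CELL = 2, 4   # 8 luminance pixels per cell

luma_rows = INTERLEAVED_ROWS * LUMA_ROWS_PER_CELL   # 720
luma_cols = INTERLEAVED_COLS * LUMA_COLS_PER_CELL   # 1920

print(f"luminance array: {luma_rows} x {luma_cols}")            # 720 x 1920
print(f"TOF array: {INTERLEAVED_ROWS} x {INTERLEAVED_COLS}")    # 360 x 480
```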
Thereby, the same front-facing 3D camera, which acquires the first image and the second image of the environment in front of the subject vehicle, can be manufactured using CMOS processes, and according to Moore's law of the semiconductor industry its production cost will become sufficiently low within a limited period of time.
And S102, acquiring a front road lane line according to the first image.
Specifically, since the first image is a color or luminance image, the position of a road lane line can be identified using only the luminance difference between the lane line and the road surface. Therefore, in an actual implementation, the road lane line may be acquired from the luminance information of the first image.
Specifically, if the first image is a luminance image, the front road lane line is identified from a luminance difference between the front road lane line and the road surface in the first image.
If the first image is a color image, it is first converted into a luminance image, and the front road lane line is then identified according to the luminance difference between the front road lane line and the road surface.
Since the conversion from a color image to a luminance image is familiar to those skilled in the art, its detailed process is not described herein.
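For illustration only, one common such conversion is the ITU-R BT.601 weighted sum; the patent does not say which conversion it uses, so the following is a sketch under that assumption:

```python
import numpy as np

def rgb_to_luminance(rgb):
    """Convert an H x W x 3 RGB image to a luminance image.

    Uses the ITU-R BT.601 weights, one common choice; the patent does
    not specify which conversion is meant.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# Example: a synthetic 2 x 2 color image
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.float64)
print(rgb_to_luminance(img))
```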
And S103, acquiring a third image and a rear road lane line according to the imaging parameters of the first image and the front road lane line.
The imaging parameters of the first image may include an imaging pixel coordinate system of a camera that obtains the first image, a focal length, and a position and an orientation of the camera in a physical world coordinate system of the host vehicle, that is, a projection relationship may be established between any image pixel coordinate of the first image and the physical world coordinate system of the host vehicle through the imaging parameters, and the method for establishing the projection relationship is familiar to those skilled in the art.
The third image is a top view of all pixel positions of the projected front road lane line, and therefore the position of the front road lane line in the third image is the position of the road lane line in front of the host vehicle relative to the origin of the physical world coordinate system of the host vehicle.
Further, since the rear road lane line is a continuation of the front road lane line, the rear road lane line can also be acquired on the basis of the acquired front road lane line.
And S104, mapping the front road lane lines into the second image according to the interleaved mapping relation between the first image and the second image to generate a plurality of front vehicle identification ranges.
Specifically, the first image and the second image are the color or brightness image and the depth image acquired by the same front-facing 3D camera, so an interleaved mapping relationship exists between them. Owing to this relationship, the row-column coordinates of each pixel of the first image determine, through proportional scaling, the row-column coordinates of at least one pixel in the second image. Each edge pixel position of the front road lane line acquired from the first image therefore determines at least one pixel position in the second image, so that a proportionally scaled front road lane line is obtained in the second image; a minimal coordinate-mapping sketch is given below.
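The sketch assumes the 2-row by 4-column interleaved layout from the earlier example (other sensor layouts would use other ratios; the function name is invented here):

```python
def luma_to_tof(row, col, row_ratio=2, col_ratio=4):
    """Map a pixel (row, col) of the first image (luminance) to the
    corresponding pixel of the second image (depth).

    The 2:1 and 4:1 ratios follow the example interleaved layout given
    earlier; a different sensor layout would use different ratios.
    """
    return row // row_ratio, col // col_ratio

# A lane-line edge pixel at (700, 1800) in the 720 x 1920 luminance image
# falls at (350, 450) in the 360 x 480 depth image.
print(luma_to_tof(700, 1800))  # (350, 450)
```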
Furthermore, analogously to the field of view seen by human eyes, one front vehicle identification range is uniquely created for every two adjacent front road lane lines according to the proportionally scaled front road lane lines acquired in the second image.
And S105, identifying the front target vehicle according to all the front vehicle identification ranges.
Specifically, after the front vehicle identification ranges are acquired, a vehicle located within a front vehicle identification range is acquired as a front target vehicle.
And S106, generating a plurality of rear vehicle identification ranges according to the third image and the rear road lane line.
Specifically, analogously to the field of view seen by human eyes, one rear vehicle identification range is uniquely created for every two adjacent rear road lane lines in the third image.
And S107, acquiring point cloud data from the laser radar, and projecting the point cloud data into a third image according to the installation parameters of the laser radar to acquire rear target point cloud data.
A laser receiver array with a small beam angle demodulates and senses the light reflected from objects irradiated by the laser radar, obtaining high-precision range resolution; assisted by accurate mechanical rotation scanning, point cloud data with high pitch and azimuth resolution are then obtained. The point cloud data of the laser radar therefore visually resembles a contour map.
Specifically, the installation parameters describing the position and orientation of the laser radar in the physical world coordinate system of the host vehicle can be acquired and recorded through offline calibration of the host vehicle. Information such as the distance, pitch angle and azimuth angle contained in the point cloud data of the environment behind the host vehicle can then be converted into coordinates relative to the origin of the physical world coordinate system of the host vehicle; that is, the point cloud data are projected into the third image to acquire the rear target point cloud data.
For example, suppose the normal line of the laser radar coincides with the Y-axis of the physical world coordinate system of the subject vehicle, the origin of the laser radar is at a distance of -2 m from the origin of that coordinate system along the Y-axis, and a certain rear target point acquired by the laser radar has a distance of 10 m, a pitch angle of 2 degrees and an azimuth angle of 30 degrees (i.e., the angle between the Y-axis and the projection onto the XY plane of the line connecting the target and the origin). The X, Y, Z coordinates of this point in the physical world coordinate system of the subject vehicle are then (10m·sin30°·cos2°, -2m - 10m·cos30°·cos2°, 10m·sin2°), i.e., (4.997m, -10.655m, 0.349m); that is, the point cloud data are projected into the third image according to the installation parameters of the laser radar to obtain the rear target point cloud data (4.997m, -10.655m, 0.349m).
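The following sketch reproduces the worked numbers above; the rearward-facing geometry and the -2 m Y-offset follow the example, and the function name is invented:

```python
import math

def project_rear_point(dist_m, pitch_deg, azimuth_deg, lidar_y_offset_m=-2.0):
    """Project one rear lidar return (range, pitch, azimuth) into the
    host vehicle's physical world frame, as in the worked example above.

    Assumes the lidar normal lies on the host Y-axis, offset by
    lidar_y_offset_m, facing rearward (negative Y).
    """
    pitch = math.radians(pitch_deg)
    az = math.radians(azimuth_deg)
    x = dist_m * math.sin(az) * math.cos(pitch)
    y = lidar_y_offset_m - dist_m * math.cos(az) * math.cos(pitch)
    z = dist_m * math.sin(pitch)
    return x, y, z

print(project_rear_point(10.0, 2.0, 30.0))
# (4.997..., -10.655..., 0.349...), matching the text
```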
And S108, identifying rear target vehicles according to all the rear vehicle identification ranges and the rear target point cloud data.
Specifically, since the point cloud data of the laser radar have been projected into the third image according to its installation parameters to acquire a plurality of rear target point cloud data, a target whose rear target point cloud data fall within a corresponding rear vehicle identification range is marked as a rear target vehicle.
And S109, performing cruise control on the motion parameters of the main vehicle according to the motion parameters of the front target vehicle and the motion parameters of the rear target vehicle.
The motion parameters of a target vehicle include information such as its speed and steering; the motion parameters of the host vehicle include the vehicle speed, turn signal display information and the like.
Specifically, after the motion parameters of the front target vehicle are acquired, cruise control is performed on the motion parameters of the host vehicle to ensure driving safety; for example, when the front target vehicle decelerates, the speed is appropriately reduced to avoid a rear-end accident.
It should be emphasized that, in practical applications, if there is a rear target vehicle behind the host vehicle, turn signals must be shown to it in order to avoid accidents such as the rear vehicle colliding with the host vehicle from behind. For example, if the host vehicle changes lanes, a turn signal is turned on to alert the rear target vehicle.
Thus, in one embodiment of the present invention, the forward target vehicle range may also be generated from the forward target vehicle, and the forward target vehicle range may be mapped into the first image according to the interleaved mapping relationship between the first image and the second image to generate the forward headlight recognition region.
It can be understood that driving safety is related to the driving state of the front target vehicle: for example, while the front target vehicle is moving straight, the subject vehicle can drive normally, but if the front target vehicle suddenly decelerates and changes lanes, the subject vehicle needs to brake to avoid a rear-end collision. Since the driving state of the front target vehicle is reflected by its lamps, in the present embodiment the front vehicle lamp recognition area must be determined in order to observe the lamps of the front target vehicle.
Specifically, the front vehicle lamp identification area is located within the front target vehicle range, and the front target vehicle range is generated from the front target vehicle. Owing to the interleaved mapping relationship between the first image and the second image, the row-column coordinates of each pixel of the front target vehicle range in the second image determine, through proportional scaling, the row-column coordinates of at least one pixel in the first image; and since the imaging of the target vehicle's lamps is contained within the corresponding front target vehicle range, the front vehicle lamp identification area is generated in the first image.
In practical applications, the manner of generating the range of the front target vehicle according to the front target vehicle is different according to different application scenarios, and the following examples are given:
the first example:
and generating a front target vehicle range according to a closed area defined by the target boundary of the front target vehicle.
In this example, as one possible implementation, a boundary detection method from image processing (e.g., the Canny, Sobel or Laplacian edge detectors) is used to detect the target boundary of the front target vehicle for recognition.
In the depth image, the depth sub-image formed by reflecting the light on the back or front of the same target vehicle to the TOF sensor contains consistent distance information, so that the distance information of the target vehicle can be acquired by only identifying the position of the depth sub-image formed by the target vehicle in the depth image. Wherein a sub-image refers to a combination of a part of the pixels of an image.
Moreover, while the depth sub-image formed by light reflected from the back or front of the same target vehicle contains consistent distance information, the depth sub-image formed by light reflected from the road surface contains continuously varying distance information; the two therefore necessarily form an abrupt difference at their junction, and this junction of abrupt differences forms the target boundary of the target vehicle in the depth image; a toy sketch follows.
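A toy sketch of this idea: flag pixels where the range jumps abruptly against a neighbour (the 1 m jump threshold is an illustrative choice, not from the patent):

```python
import numpy as np

def depth_boundary_mask(depth, jump_m=1.0):
    """Flag pixels where the range jumps abruptly against a neighbour.

    On a vehicle body the range is nearly constant while on the road it
    varies smoothly, so a large jump marks a target boundary. The 1 m
    threshold is an illustrative choice, not from the patent.
    """
    dr = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
    dc = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    return (dr > jump_m) | (dc > jump_m)

# Toy depth image: gently receding road with a "vehicle" patch at 20 m
depth = np.linspace(5.0, 8.0, 8)[:, None] * np.ones((8, 8))
depth[2:5, 3:6] = 20.0
print(depth_boundary_mask(depth).astype(int))  # 1s appear along the patch border
```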
The second example is:
a forward target vehicle range is generated from an extended enclosed area of the target boundary of the forward target vehicle.
In this example, as a possible implementation manner, a boundary detection method in an image processing algorithm is adopted to detect the target boundary of the front target vehicle for identification.
The third example:
and generating a front target vehicle range according to a closed area formed by connecting a plurality of pixel positions of the front target vehicle.
Since the vehicle identification range is determined by all pixel positions of the lane lines, detecting the target boundary of the target vehicle within the vehicle identification range reduces the boundary interference formed by road facilities such as median strips, light poles and guard piles.
Further, the turn lights of the corresponding front target vehicle are identified according to the front lamp identification area.
Specifically, after the front vehicle lamp identification area is acquired, in order to accurately know the specific driving state of the front target vehicle, the turn signal of the corresponding front target vehicle is identified from the front vehicle lamp identification area.
It should be noted that, according to different specific application requirements, the manner of identifying the turn signal of the corresponding front target vehicle according to the front vehicle light identification area is different.
As one possible implementation, the turn signal of the corresponding preceding target vehicle is identified according to the color, the flashing frequency or the flashing sequence of the tail light in the preceding vehicle light identification area.
In this embodiment, at the initial stage of a lane change both the longitudinal and lateral displacement of the preceding target vehicle are small, which means that the size of its vehicle lamp recognition region also changes little; only the brightness of the image at the turn signal changes strongly, due to flickering.
Therefore, a time-differentiated vehicle light recognition area sub-image of the front target vehicle is created by continuously acquiring a plurality of color or brightness images at different time instants and performing time-differentiation processing on the vehicle light recognition area of the front target vehicle. The time differentiated vehicle light identification area sub-images will highlight the continuously flashing vehicle light sub-images of the preceding target vehicle.
The time-differentiated vehicle-light identification area sub-image is then projected onto the column coordinate axis, and a one-dimensional search yields the start and end column coordinates of the lamp sub-image of the target vehicle. These are projected back onto the time-differentiated sub-image to search for the start and end row coordinates of the lamp sub-image. Finally, the start and end row-column coordinates of the lamp sub-image are projected onto the plurality of color or brightness images at different instants to confirm the color, flashing frequency or flashing sequence of the lamps of the front target vehicle, thereby determining the row-column coordinate positions of the flashing lamp sub-image.
Further, when the row-column coordinates of the flashing lamp sub-image lie only on the left side of the vehicle lamp identification area of the front target vehicle, it can be determined that the front target vehicle has turned on its left turn signal; when they lie only on the right side, that it has turned on its right turn signal; and when they lie on both sides, that it has turned on its hazard (double-flash) warning lights.
In addition, during the lane change itself the longitudinal or lateral displacement of the front target vehicle is large, so the size of its vehicle lamp identification area also changes considerably. In this implementation, longitudinal or lateral displacement compensation can therefore be applied to the successively acquired vehicle lamp identification areas of the front target vehicle, scaling them to identification areas of the same size. Time differentiation is then applied to the adjusted areas to create the time-differentiated vehicle-light identification area sub-images; these are projected onto the column coordinate axis, a one-dimensional search yields the start and end column coordinates of the lamps, the columns are projected back to find the start and end row coordinates, and the start and end row-column coordinates are projected onto the color or brightness images at the different instants to confirm the color, flashing frequency or flashing sequence of the lamps. The row-column coordinates of the flashing lamp sub-image are thereby determined, and the identification of the left turn signal, right turn signal or hazard warning lights is finally completed; a minimal sketch of the differencing-and-projection idea follows.
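A minimal sketch of the differencing-and-projection idea (thresholds, sizes and names are all illustrative assumptions, not the patent's values):

```python
import numpy as np

def blinking_columns(frames, thresh=30.0):
    """Find flashing-lamp columns in a stack of vehicle-light recognition
    regions (T x H x W luminance), per the sketch in the text.

    Frame-to-frame absolute differences highlight the blinking lamp;
    projecting onto the column axis gives its start and end columns.
    The threshold is an illustrative assumption.
    """
    diff = np.abs(np.diff(frames.astype(np.float64), axis=0)).sum(axis=0)
    col_profile = diff.sum(axis=0)        # project onto the column axis
    return np.flatnonzero(col_profile > thresh)

def classify_turn_signal(cols, width):
    """Left / right / hazard decision from which side flashes."""
    if cols.size == 0:
        return "none"
    left, right = cols.min() < width / 2, cols.max() >= width / 2
    if left and right:
        return "hazard (both sides flashing)"
    return "left turn signal" if left else "right turn signal"

# Toy example: a lamp flashing on the right side of a 10-column region
frames = np.zeros((4, 4, 10))
frames[1::2, 1:3, 7:9] = 200.0            # lamp lit in every other frame
cols = blinking_columns(frames)
print(cols, classify_turn_signal(cols, width=10))  # [7 8] right turn signal
```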
Specifically, by recognizing, from the motion parameters and turn signal of the front target vehicle, the situation in which a front target vehicle in a non-host lane decelerates and changes into the host lane, the motion parameter control system of the host vehicle can brake in advance and the lamp system of the host vehicle can alert the rear target vehicle earlier, giving the rear target vehicle more time to brake or adjust and effectively reducing the risk of a rear-end collision.
Conversely, by recognizing the situation in which a front target vehicle in the host lane decelerates and changes into a non-host lane, the motion parameter control system of the host vehicle can refrain from braking, reducing unnecessary braking adjustments and the rear-end collision risk they would cause.
In summary, in the automatic control method for vehicle driving of the embodiment of the present invention, the front-facing 3D camera acquires the first image and the second image of the environment in front of the subject vehicle; the front road lane line is acquired; the third image and the rear road lane line are acquired according to the imaging parameters of the first image and the front road lane line; the front road lane lines are mapped into the second image according to the interleaved mapping relationship to generate a plurality of front vehicle identification ranges, from which the front target vehicle is identified; a plurality of rear vehicle identification ranges are generated according to the third image and the rear road lane line; the rear target point cloud data are acquired from the laser radar; and cruise control is finally performed on the motion parameters of the subject vehicle according to the motion parameters of the front and rear target vehicles. Road lane lines are thus identified accurately, the lane environment information of the subject vehicle is obtained by combining the data acquired by the laser radar, cruise control is performed according to this specific lane environment information, and driving safety is guaranteed.
Based on the above description, it should be noted that, according to different application scenarios, different techniques may be adopted to identify the front road lane line according to the brightness difference between the front road lane line and the road surface in the first image. The following description is made with reference to specific examples.
Fig. 2 is a flowchart of a vehicle running automatic control method according to a second embodiment of the present invention, and as shown in fig. 2, the step S102 includes:
s201, creating a binary image of the front road lane line according to the brightness information of the first image and a preset brightness threshold value.
Specifically, in real life, highway lane lines include both solid lane lines and dashed lane lines; for convenience of description, recognition of solid lane lines is described first.
Specifically, a brightness threshold is preset using the brightness difference between the road lane line and the road surface in the first image; the preset brightness threshold can be found, for example, with a histogram-statistics bimodal (two-peak) algorithm.
Furthermore, a binary image highlighting the road lane lines is created from the preset brightness threshold and the brightness image. The brightness image may further be divided into a plurality of brightness sub-images, the histogram-statistics bimodal algorithm executed on each sub-image to find a plurality of brightness thresholds, and binary sub-images highlighting the lane lines created from each threshold and its corresponding sub-image; the binary sub-images are then assembled into a complete binary image highlighting the road lane lines, which accommodates brightness variations of the road surface or the lane lines.
The specific implementation steps of finding the luminance threshold and creating the binary image of the highway lane line can be obtained by those skilled in the art from the prior art and are not repeated here; a minimal illustrative sketch follows.
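As a purely illustrative sketch of the histogram-statistics bimodal idea, here is a simple valley-between-two-peaks search; the patent's actual procedure may differ:

```python
import numpy as np

def bimodal_threshold(gray):
    """Pick a threshold at the valley between the two histogram peaks
    (the histogram-statistics bimodal idea); a simple illustrative
    version, not necessarily the patent's procedure.
    """
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    hist = hist.astype(np.float64)
    peaks = []
    for _ in range(1000):                 # smooth until only two peaks remain
        peaks = [i for i in range(1, 255)
                 if hist[i] >= hist[i - 1] and hist[i] > hist[i + 1]]
        if len(peaks) <= 2:
            break
        hist = np.convolve(hist, [1/3, 1/3, 1/3], mode="same")
    if len(peaks) < 2:
        return 128                        # degenerate histogram: fall back
    lo, hi = peaks[0], peaks[-1]
    return lo + int(np.argmin(hist[lo:hi + 1]))

# Toy image: dark road (~60) with bright lane-line pixels (~200)
rng = np.random.default_rng(0)
gray = rng.normal(60, 10, (100, 100))
gray[:, 45:55] = rng.normal(200, 10, (100, 10))
gray = gray.clip(0, 255)
t = bimodal_threshold(gray)
binary = (gray > t).astype(np.uint8)      # 1 = lane-line candidate pixels
print(t, binary.sum())
```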
S202, detecting all edge pixel positions of the straight-line lane line or all edge pixel positions of the curve solid-line lane line in the binary image according to a preset detection algorithm.
Specifically, after the binary image of the front road lane line is obtained, note that because the curvature radius of a road lane line cannot be too small, and because the camera projection makes the nearby parts of a lane line occupy more imaging pixels, the approximately straight near portion of a curved solid lane line also accounts for most of the imaging pixels of that lane line in the luminance image.
Therefore, all edge pixel positions of a straight-road solid lane line, or most of the initial straight-line edge pixel positions of a curved solid lane line, can be detected in the binary image using a preset detection algorithm, such as a straight-line detection algorithm like the Hough transform.
Of course, without filtering, the straight-line detection would also pick up most straight edge pixel positions of median strips and utility poles in the binary image. The slope range of lane lines in the binary image can be set according to the aspect ratio of the image sensor, the focal length of the camera lens, the road width range in road design specifications and the mounting position of the image sensor on the host vehicle, so that non-lane-line straight lines are filtered out by this slope range.
Since the edge pixel positions of a curved solid lane line change continuously, the connected pixel positions adjoining the two ends of the detected initial straight line are searched for and merged into the initial straight-line edge pixel set; the search-and-merge is repeated, and finally all edge pixel positions of the curved solid lane line are uniquely determined; a Hough-based sketch follows.
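A sketch of the straight-line detection with a slope gate, using OpenCV's probabilistic Hough transform as one concrete stand-in (the 0.4 slope bound is an invented placeholder for the range derived from the camera and road parameters):

```python
import numpy as np
import cv2  # OpenCV, one common home of the Hough transform named above

def detect_lane_segments(binary, min_abs_slope=0.4):
    """Probabilistic Hough transform plus a slope gate that drops
    near-horizontal segments (median strips, poles and the like).

    The 0.4 slope bound is an invented placeholder for the range that
    the text derives from camera focal length, mounting and road width.
    """
    lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180,
                            threshold=40, minLineLength=30, maxLineGap=10)
    keep = []
    for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
        if x1 == x2 or abs((y2 - y1) / (x2 - x1)) >= min_abs_slope:
            keep.append((int(x1), int(y1), int(x2), int(y2)))
    return keep

# Toy binary image containing one steep "lane line"
img = np.zeros((200, 200), dtype=np.uint8)
cv2.line(img, (60, 199), (120, 0), 255, 3)
print(detect_lane_segments(img)[:3])
```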
S203, detecting all edge pixel positions of the straight road dotted line lane line or detecting all edge pixel positions of the curve dotted line lane line in the binary image according to a preset detection algorithm.
To fully explain how the front road lane line is recognized from the brightness difference between the lane line and the road surface in the first image, recognition of dashed lane lines is now described as an example.
The straight-line detection algorithm described in step S202 also detects most of the initial straight-line edge pixel positions of a dashed lane line. The edge pixels of the other, shorter segments belonging to the dashed lane line can be connected by extending the initial straight line or by the search-and-merge method, so as to obtain all edge pixel positions of the dashed lane line. The extension method yields all edge pixel positions of a straight dashed lane line, while search-and-merge yields those of a curved dashed lane line; choosing between the two requires prior knowledge of whether the dashed lane line is straight or curved, which can of course be obtained from the detection of the solid lane line.
As one implementation, using the prior knowledge of the solid lane line, the fact that lane lines are parallel to each other in reality, and the projection parameters of the image sensor and camera, all edge pixel positions of the solid lane line can be projected onto the initial straight-line edge pixel positions of the dashed lane line, thereby connecting them with the edge pixels of the other shorter segments belonging to the dashed lane line and obtaining all edge pixel positions of the dashed lane line.
As another implementation, no prior knowledge of straight road or curve is needed. While the vehicle cruises on a straight road, or on a curve at a constant steering angle, the lateral offset of the dashed lane line over a short continuous time is almost negligible while its longitudinal offset is large; the dashed lane line can therefore be superimposed into a solid lane line across consecutive binary images taken at different instants, after which all its edge pixel positions are obtained by the solid-lane-line identification method.
Since the longitudinal offset of the dashed lane line depends on the speed of the host vehicle, the minimum number of consecutive binary images needed at different instants can be determined dynamically from the vehicle speed obtained from the wheel speed sensor, so that the dashed lane line is superimposed into a solid one and all its edge pixel positions are obtained; a small sketch follows.
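A small sketch of the superposition and of the frame-count estimate (the dash-gap length and frame rate are invented assumptions):

```python
import numpy as np

def stack_dashed_lane(binaries):
    """OR together binary lane images from consecutive instants so a
    dashed lane line fills in towards a solid one, per the text above.
    Assumes the lateral offset over the window is negligible.
    """
    out = np.zeros_like(binaries[0])
    for b in binaries:
        out |= b
    return out

def frames_needed(speed_mps, dash_gap_m=9.0, frame_dt_s=0.04):
    """Minimum frame count so the ego displacement covers one dash gap.
    The 9 m gap and the 25 fps timing are invented assumptions.
    """
    per_frame = max(speed_mps * frame_dt_s, 1e-6)
    return int(np.ceil(dash_gap_m / per_frame))

a = np.zeros((4, 4), dtype=np.uint8); a[0, 1] = 1   # dash segment, frame 1
b = np.zeros((4, 4), dtype=np.uint8); b[2, 1] = 1   # same lane, frame 2
print(stack_dashed_lane([a, b])[:, 1])   # segments merge in column 1
print(frames_needed(20.0))               # at 72 km/h: 12 frames (~0.5 s)
```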
In summary, in the automatic control method for vehicle driving of this embodiment, a binary image of the front road lane line is created according to the luminance information of the first image and a preset luminance threshold; all edge pixel positions of a straight solid lane line or of a curved solid lane line are detected in the binary image by a preset detection algorithm; and all edge pixel positions of a straight dashed lane line or of a curved dashed lane line are likewise detected. Dashed and solid lane lines of straight roads and curves among the road lane lines can therefore be identified accurately.
It should be noted that, according to different application scenarios, different techniques may be adopted to obtain the third image and the rear highway lane line according to the imaging parameters of the first image and the front highway lane line. The following description will be made more clearly with reference to specific examples.
Fig. 3 is a flowchart of a vehicle running automatic control method according to a third embodiment of the present invention, and as shown in fig. 3, the above step S103 includes:
s301, projecting all pixel positions of the front road lane line to a physical world coordinate system of the main vehicle according to the imaging parameters of the first image to establish a third image.
And S302, accumulating the position of the front road lane line in the third image over continuous time, and obtaining the position of the rear road lane line through its displacement relative to the origin of the physical world coordinate system of the host vehicle.
Specifically, the third image is created by projecting all the pixel positions of the acquired front road lane line to the physical world coordinate system of the host vehicle, and the third image may be a top view of all the pixel positions of the projected front road lane line, so that the position of the front road lane line in the third image is the position of the road lane line in front of the host vehicle relative to the origin of the physical world coordinate system of the host vehicle.
Since a front road lane line acquired at a certain time will be located behind the subject vehicle after some time has elapsed, the position of the front road lane line in the third image, through continuous time accumulation and displacement relative to the origin of the physical world coordinate system of the subject vehicle, yields the position of the rear road lane line of the subject vehicle.
For example, suppose that at time T1 the Y-axis distance of point A on the front road lane line from the origin of the host vehicle's physical world coordinate system is D1 (its X-axis distance is D2), and the host vehicle travels along the Y axis at a constant speed V for a time T. At time T2 = T1 + T the displacement of point A relative to the origin is V × T; if, for example, V × T = 2 × D1, then at time T2 the Y-axis distance of point A from the origin is D1 − V × T = −D1. Point A is now a point of the road lane line behind the host vehicle (its X-axis distance is still D2).
For variable-speed running of the subject vehicle, the curve of V over time can be acquired from the wheel speed sensor, and the displacement is obtained as the integral of V with respect to time. For running on an arc-shaped curve, the radius of curvature is calculated from the coordinates of the front road lane line in the host vehicle's physical world coordinate system; the coordinates of the front road lane line relative to the origin after the elapsed time T can then be calculated from this radius and the arc displacement obtained by integrating the host vehicle's speed over the travel time T, which yields the position of the rear road lane line of the host vehicle.
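A minimal sketch of the straight-road case, assuming lane-line points as (X, Y) coordinates in the host coordinate system (Y pointing forward) and wheel-speed samples at a fixed period; the curve case would additionally rotate the points using the radius of curvature described above:

```python
import numpy as np

def rear_lane_points(front_points_xy, speed_log_mps, dt_s):
    """Shift previously observed front-lane-line points rearward by the host
    displacement, i.e. the numerical integral of the wheel-speed samples;
    points whose Y coordinate turns negative now lie behind the origin."""
    displacement = float(np.sum(speed_log_mps)) * dt_s  # integral of V over T
    shifted = np.asarray(front_points_xy, dtype=float).copy()
    shifted[:, 1] -= displacement                       # Y axis points ahead
    return shifted[shifted[:, 1] < 0.0]                 # rear lane line only
```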
In summary, in the automatic control method for vehicle driving of the embodiment of the invention, all pixel positions of the front road lane line are projected into the host vehicle's physical world coordinate system according to the imaging parameters of the first image to create a third image, and the position of the rear road lane line is obtained by accumulating the position of the front road lane line in the third image over continuous time and displacing it relative to the origin of that coordinate system. The position of the rear lane line is thus known accurately, the host vehicle can conveniently be controlled relative to it, and a foundation is laid for driving safety.
In practical applications, a target vehicle ahead of or behind the subject vehicle may be in any of several driving states and positions, and these relate directly to the specific control operations required of the subject vehicle. For example, as long as a target vehicle outside the own lane stays in a non-own lane, its acceleration or deceleration has no influence on the driving safety of the subject vehicle; but once it changes lane into the own lane, the subject vehicle needs to perform a deceleration operation or the like.
The following describes in detail how to recognize the preceding target vehicle and the following target vehicle, respectively.
Fig. 4 is a flowchart of a vehicle running automatic control method according to a fourth embodiment of the present invention, and as shown in fig. 4, the above step S105 includes:
S401, marking labels of the front own lane and the front non-own lane for all front vehicle recognition ranges.
Specifically, from the equal-proportion front road lane lines acquired in the second image, the slope of the initial straight segment of each front road lane line is obtained by comparing the number of rows against the number of columns that the segment occupies. The front vehicle identification range created from the two road lane lines whose initial straight segments have the largest slopes is marked with the front own-lane label, and the other created front vehicle identification ranges are marked with front non-own-lane labels.
Thus, the front road lane lines can be mapped into the second image according to the interleaved mapping relationship between the first image and the second image to generate a number of front vehicle recognition ranges in the second image, with labels of the front own lane and the front non-own lane marked for all front vehicle recognition ranges.
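The slope-based labeling can be sketched as follows; the function and label names are illustrative, not from the patent, and each lane line is assumed to be given as the pixel rows and columns of its initial straight segment:

```python
def label_front_ranges(initial_segments):
    """initial_segments: list of (rows, cols) pixel index arrays, one per
    detected front road lane line. The two lines whose initial segments are
    steepest in the image (largest rows-per-column span) bound the own lane."""
    slopes = []
    for rows, cols in initial_segments:
        row_span = int(rows.max()) - int(rows.min()) + 1
        col_span = max(int(cols.max()) - int(cols.min()) + 1, 1)
        slopes.append(row_span / col_span)   # steep == near the image centre
    steepest = sorted(range(len(slopes)), key=slopes.__getitem__)[-2:]
    return ["front_own_lane" if i in steepest else "front_non_own_lane"
            for i in range(len(initial_segments))]
```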
S402, identifying the front own-lane target vehicle according to the vehicle identification range marked with the front own-lane label.
Specifically, after the front own-lane tag is acquired, the front own-lane target vehicle can be identified within the vehicle identification range marked with that tag.
Specifically, the distance and position of a target vehicle relative to the TOF sensor always change over time, whereas the distance and position of the road surface and the median strip relative to the TOF sensor remain approximately unchanged. A time-differential depth image can therefore be created from two depth images acquired at different times to detect these changes in distance and position, so that the front own-lane target vehicle is actually identified within the vehicle identification range marked with the own-lane label.
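A minimal sketch of the time-differential depth image, with an assumed noise threshold (the patent does not specify one):

```python
import numpy as np

def moving_target_mask(depth_t0, depth_t1, delta_m=0.3):
    """Mark pixels whose TOF depth changed by more than delta_m between the
    two acquisitions: moving vehicles change, while the road surface and
    the median strip stay nearly constant and are suppressed."""
    diff = np.abs(depth_t1.astype(np.float32) - depth_t0.astype(np.float32))
    return diff > delta_m
```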
S403, identifying the front non-own-lane target vehicle according to the vehicle identification range marked with the front non-own-lane label.
Specifically, after the front non-own-lane tag is acquired, the front non-own-lane target vehicle can be identified within the vehicle identification range marked with that tag.
Specifically, the distance and position of a target vehicle relative to the TOF sensor always change over time, whereas the distance and position of the road surface and the median strip relative to the TOF sensor remain approximately unchanged. A time-differential depth image can therefore be created from two depth images acquired at different times to detect these changes, so that the front non-own-lane target vehicle is actually identified within the vehicle identification range marked with the non-own-lane label.
S404, identifying the front lane-change target vehicle according to the pairwise-combined front vehicle identification ranges.
Specifically, since the front own-lane target vehicle and the front non-own-lane target vehicle can both be recognized, the front lane-change target vehicle is recognized from the pairwise-combined front vehicle recognition ranges by the same recognition method.
Specifically, the distance and position of a target vehicle relative to the TOF sensor always change over time, whereas the distance and position of the road surface and the median strip relative to the TOF sensor remain approximately unchanged. A time-differential depth image can therefore be created from two depth images acquired at different times to detect these changes, and the front lane-change target vehicle is then actually identified from the pairwise-combined front vehicle identification ranges; a sketch of one such pairwise test follows.
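One plausible reading of the pairwise combination, sketched with illustrative names: a moving-target blob that overlaps both ranges of an adjacent pair at once is straddling the shared lane line, i.e. changing lane:

```python
import numpy as np

def is_lane_changing(target_mask, range_a_mask, range_b_mask, min_px=50):
    """A differential-depth target blob that has at least min_px pixels in
    each of two adjacent vehicle identification ranges is flagged as a
    lane-change target. min_px is an assumed noise floor."""
    in_a = np.count_nonzero(target_mask & range_a_mask)
    in_b = np.count_nonzero(target_mask & range_b_mask)
    return in_a >= min_px and in_b >= min_px
```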
Fig. 5 is a flowchart of a vehicle running automatic control method according to a fifth embodiment of the present invention, and as shown in fig. 5, the above step S108 includes:
and S501, marking labels of a rear own lane and a rear non-own lane for all rear vehicle identification ranges.
Specifically, from the rear road lane lines in the third image, the slope of the initial straight segment of each rear road lane line is obtained by comparing the number of rows against the number of columns that the segment occupies. The rear vehicle identification range created from the two road lane lines whose initial straight segments have the largest absolute slopes is marked with the rear own-lane label, and the other created rear vehicle identification ranges are marked with rear non-own-lane labels.
Thus, a plurality of rear vehicle identification ranges are generated in the third image from the rear road lane lines, and labels of the rear own lane and the rear non-own lane are marked for all rear vehicle identification ranges.
S502, identifying the rear own-lane target vehicle according to the vehicle identification range marked with the rear own-lane label and the rear target point cloud data.
Specifically, after the rear own-lane tag is acquired, the rear own-lane target vehicle can be identified within the vehicle identification range marked with that tag.
Specifically, the distance and position of a target vehicle relative to the TOF sensor always change over time, whereas the distance and position of the road surface and the median strip relative to the TOF sensor remain approximately unchanged. A time-differential depth image can therefore be created from two depth images acquired at different times to detect these changes in distance and position, so that the rear own-lane target vehicle is actually identified within the vehicle identification range marked with the own-lane label.
S503, identifying the rear non-own-lane target vehicle according to the vehicle identification range marked with the rear non-own-lane label and the rear target point cloud data.
Specifically, after the rear non-own-lane tag is acquired, the rear non-own-lane target vehicle can be identified within the vehicle identification range marked with that tag.
Specifically, the distance and position of a target vehicle relative to the TOF sensor always change over time, whereas the distance and position of the road surface and the median strip relative to the TOF sensor remain approximately unchanged. A time-differential depth image can therefore be created from two depth images acquired at different times to detect these changes, so that the rear non-own-lane target vehicle is actually identified within the vehicle identification range marked with the non-own-lane label.
S504, identifying the rear lane-change target vehicle according to the pairwise-combined rear vehicle identification ranges and the rear target point cloud data.
Specifically, since the rear own-lane target vehicle and the rear non-own-lane target vehicle can both be recognized, the rear lane-change target vehicle is recognized from the pairwise-combined rear vehicle recognition ranges by the same recognition method.
Specifically, the distance and position of a target vehicle relative to the TOF sensor always change over time, whereas the distance and position of the road surface and the median strip relative to the TOF sensor remain approximately unchanged. A time-differential depth image can therefore be created from two depth images acquired at different times to detect these changes, and the rear lane-change target vehicle is then actually identified from the pairwise-combined rear vehicle identification ranges.
Of course, methods other than the target vehicle identification method described above may be used. As one possible implementation, after the target boundaries of the front target vehicles have been identified following step S109, the target boundary detected in each vehicle identification range is projected onto the row coordinate axis of the image and a one-dimensional search is performed along it. This determines the number of rows and the row-coordinate range occupied by the longitudinal target boundaries of all front target vehicles in the vehicle identification range, as well as the number of columns and the row-coordinate positions occupied by the transverse target boundaries.
A longitudinal target boundary is one that occupies many pixel rows but few columns; a transverse target boundary is one that occupies few pixel rows but many columns.
Furthermore, from the number of columns and the row-coordinate positions occupied by all transverse target boundaries in the vehicle identification range, the column-coordinate positions of all longitudinal target boundaries (i.e., the column-coordinate start and end positions of the corresponding transverse target boundaries) are searched within the vehicle identification range, and the target boundaries of different target vehicles are distinguished on the principle that one target's boundaries contain consistent distance information, thereby determining the positions and distance information of all front target vehicles in the vehicle identification range.
Therefore, detecting the target boundaries of a front target vehicle uniquely determines the position of the depth sub-image that the vehicle forms within the depth image, and thus uniquely determines its distance information; a simplified sketch of the projection search follows.
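A simplified sketch of the projection search, assuming the detected boundary pixels of one vehicle identification range are given as a boolean mask; the 0.5 cut-off is an assumption, not a figure from the patent:

```python
import numpy as np

def locate_target_boxes(boundary_mask):
    """Project boundary pixels onto the row axis and search it in 1-D:
    rows holding many boundary pixels correspond to transverse target
    boundaries, and the leftmost/rightmost boundary columns on such a row
    give the column start/end of the matching longitudinal boundaries."""
    rows_hist = boundary_mask.sum(axis=1)          # boundary pixels per row
    if rows_hist.max() == 0:
        return []
    threshold = 0.5 * rows_hist.max()              # assumed cut-off
    boxes = []
    for r in np.flatnonzero(rows_hist >= threshold):
        cols = np.flatnonzero(boundary_mask[r])
        boxes.append((int(r), int(cols.min()), int(cols.max())))
    return boxes                                   # (row, col_start, col_end)
```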
The boundary detection method of this example can detect multiple front target vehicles and their distance information simultaneously, and can further identify the front own-lane target vehicle within the range marked with the own-lane label, the front non-own-lane target vehicle within the range marked with the non-own-lane label, and the front lane-change target vehicle within the pairwise-combined vehicle identification ranges.
Based on the same principle, the rear target vehicle can also be identified, and the details are not repeated herein.
As another implementation for acquiring the rear target vehicle: after the point cloud data are acquired in step S107 above, they are projected into the third image according to the installation parameters of the laser radar to obtain the rear target point cloud data, e.g., X, Y, Z coordinates relative to the origin of the subject vehicle's physical world coordinate system. Since the rear vehicle identification ranges in the third image are X, Y coordinates relative to the same origin, data falling outside the rear vehicle identification ranges may first be discarded to reduce the volume of the rear target point cloud data.
In practice, the reduced rear target point cloud data still contain returns from laser partially reflected by the ground; these can be discarded using the Z coordinates in the rear target point cloud data, so that essentially only the returns reflected from the rear target vehicles remain, yielding the rear target vehicle point cloud data.
Further, within the vehicle identification range marked with the rear own-lane label, the two-dimensional contour of the rear target vehicle point cloud data can be clustered, e.g., by X, Y coordinates using a clustering method such as k-means familiar to those skilled in the art, to obtain the point cloud set of each individual target vehicle; the rear own-lane target vehicle is thereby identified according to the marked vehicle identification range and the rear target point cloud data.
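A minimal filtering-and-clustering sketch, assuming scikit-learn is available and that the ground cut-off height and the cluster count k are known; in a real system k would have to be estimated per frame (e.g. by a silhouette score):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_rear_targets(points_xyz, in_range_mask, ground_z_m=0.3, k=2):
    """Keep lidar returns inside the rear vehicle identification range,
    drop near-ground returns via their Z coordinate, then cluster the
    remaining X,Y contour with k-means to split individual vehicles."""
    pts = points_xyz[in_range_mask]
    pts = pts[pts[:, 2] > ground_z_m]          # discard ground reflections
    if len(pts) < k:
        return []
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(pts[:, :2])
    return [pts[labels == i] for i in range(k)]
```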
Similarly, by the same method, the rear non-own-lane target vehicle can be identified from the vehicle identification range marked with the rear non-own-lane label and the rear target point cloud data, and the rear lane-change target vehicle from the pairwise-combined rear vehicle identification ranges and the rear target point cloud data.
In summary, the automatic vehicle-driving control method of the embodiment of the invention identifies the front and rear target vehicles accurately, so that the driving of the host vehicle can be controlled according to them, safeguarding driving safety.
Based on the above embodiments, to explain more clearly how the cruise control of step S111 acts on the host vehicle's motion parameters according to the motion parameters and turn signals of the front target vehicle and the motion parameters of the rear target vehicle, a specific application scenario is described below: recognizing and monitoring the continuous process in which a front own-lane target vehicle turns on its turn signal and completes a lane change into a non-own lane.
Specifically, the front own-lane target vehicle is identified from the vehicle identification range marked with the front own-lane label, the front lane-change target vehicle from the pairwise-combined front vehicle identification ranges, and the turn signal of each target vehicle from its vehicle-light identification area. The continuous process in which the front own-lane target vehicle turns on its turn signal and completes the lane change into a non-own lane can thus be recognized and monitored, and motion parameters such as the duration of the lane change, the distance to the host vehicle, the relative speed, and the lateral displacement of the target vehicle are easily monitored, so that they can be used to control the host vehicle.
When the right turn signal of the front own-lane target vehicle is recognized as on, the pixel distance from the vehicle's left target boundary to the left lane line of the front own lane is converted through the camera projection relationship into a lateral distance P. N first images and N second images are then acquired continuously at different times (acquiring one first image or one second image takes time T), the change in the target vehicle's distance R is recognized and recorded during this process, and the relative speed V of the target vehicle is calculated from the change of R over T.
When the target vehicle is recognized as having just completed the lane change into the non-own lane on the right of the front own lane, its left target boundary coincides with the right lane line of the front own lane. With an own-lane width of D, the motion parameters of the front target vehicle over the continuous lane change are: duration N × T, distance to the host vehicle R, relative speed V, and lateral displacement D − P.
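The monitored quantities reduce to a small calculation; a sketch with illustrative names, where ranges_m is the recorded history of the target's distance R over the N acquisitions:

```python
def lane_change_parameters(n_frames, frame_time_s, ranges_m,
                           lane_width_d_m, lateral_start_p_m):
    """Summarise a monitored front own-lane lane change: duration N x T,
    final distance R, relative speed V from the range history, and the
    lateral displacement D - P referenced to the lane lines."""
    duration_s = n_frames * frame_time_s
    rel_speed_mps = (ranges_m[-1] - ranges_m[0]) / duration_s  # >0: receding
    return {
        "duration_s": duration_s,
        "distance_m": ranges_m[-1],
        "relative_speed_mps": rel_speed_mps,
        "lateral_displacement_m": lane_width_d_m - lateral_start_p_m,
    }
```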
It should be emphasized that the lateral displacement identified above is referenced to the left and right lane lines of the lane. It is therefore identified accurately whether the target vehicle changes lane on a straight road or in a curve, and whether it changes to the left or to the right, providing an accurate control basis for the host vehicle's adaptive cruise system.
By contrast, the lateral displacement of the target vehicle identified by a conventional adaptive cruise system that relies only on lidar is referenced to the subject vehicle itself, and such a measurement cannot always give the adaptive cruise system an accurate basis for motion control.
Fig. 6 is a scene diagram of a vehicle running automatic control method according to an embodiment of the present invention.
As shown in fig. 6, suppose the target vehicle ahead of the host vehicle has just completed a lane change from the own lane to the right in a curve that bends left, while the conventional vehicle's lidar, still on the straight section, may still see the front target vehicle as partially in the own lane. The curve's radius of curvature is R = 250 meters, and the front target vehicle traveled s = 25 meters along the curve during the lane change. The right own-lane line, which coincides with the left target boundary of the front target vehicle, has therefore shifted left of the straight-lane extension line by approximately R(1 − cos(s/R)) ≈ s²/(2R) = 25²/(2 × 250) = 1.25 meters.
If the conventional vehicle's lidar reports a distance of 50 to 80 meters to the target vehicle, i.e. the lidar is on the straight road 25 to 55 meters before the curve entrance, then without prior knowledge of the curve it sees the front target vehicle as still having a body about 1.25 meters wide within the own lane; and as the target vehicle continues to decelerate along the left-bending curve, the lidar sees an ever wider body in the own lane. The conventional lidar thus produces an inaccurate identification, causing the conventional adaptive cruise system to brake repeatedly, inaccurately and unnecessarily, and increasing the risk of a rear-end collision between the conventional vehicle and the target vehicle behind it.
Similarly, the conventional vehicle's lidar is also inaccurate when recognizing a target vehicle's lane change from the left lane into the right lane in a right-hand curve.
To resolve this inaccuracy, a conventional lidar setup must either add a camera to help recognize the lane lines or increase the accuracy of its azimuth recognition; either way, the complexity and cost of the system increase.
Therefore, per the above example, from the motion parameters of the identified target vehicle and its correspondingly identified turn signal, the condition in which an own-lane target vehicle decelerates while changing into a non-own lane can be recognized, so that the subject vehicle's motion-parameter control system can omit unnecessary braking adjustments, reducing the rear-end collision risk that such adjustments would cause.
Similarly, per the above example, the invention can also recognize and monitor the continuous process in which a non-own-lane target vehicle turns on its turn signal and completes a lane change into the own lane. The duration, distance to the host vehicle, relative speed, lateral displacement and other motion parameters of the front target vehicle during the continuous lane change are likewise easily monitored, so they can be used to control the host vehicle's motion parameters to brake earlier, improving driving safety, and to control the lights to warn the rear target vehicle earlier, reducing the rear-end collision risk.
Fig. 7 is a scene diagram of a vehicle running automatic control method according to another embodiment of the present invention.
For example, as shown in fig. 7, the subject vehicle travels at constant speed on a straight section of its own lane, still 55 meters (down to 25 meters) from the entrance of a curve that bends right with a radius of curvature of 250 meters. A front non-own-lane target vehicle in the lane to the right of the own lane, 25 meters past the curve entrance, has turned on its left turn signal to change into the own lane, and its left target boundary already coincides with the right own-lane line.
Per the above example, the invention accurately recognizes that the front target vehicle is changing into the own lane. Since that vehicle is roughly 80 meters (down to 50 meters) from the host vehicle, the invention can control the host vehicle's power system to reduce power output or even brake in a timely and precise way, and turn on the brake lights in time, maintaining safe distances to the front and rear target vehicles, improving the host vehicle's driving safety and reducing the rear-end collision risk.
However, the lateral displacement identified by a conventional adaptive cruise system relying only on lidar is referenced to the subject vehicle. Lacking prior knowledge of the curve, such a lidar places the right own-lane line on the straight-lane extension line, about 25²/(2 × 250) = 1.25 meters laterally from the actual lane line at the front target vehicle's position; that is, the lidar needs the front target vehicle to displace laterally about 1.25 meters to the left before it can confirm that the vehicle has started entering the own lane.
If the front target vehicle's lateral displacement speed is 1 meter per second, the conventional lidar-only adaptive cruise system will reduce power output or brake only about 1.25 seconds after the front target vehicle has actually entered the own lane, which undoubtedly shrinks the safe distances between the host vehicle and the front and rear target vehicles, degrading the host vehicle's driving safety and increasing the rear-end collision risk.
Therefore, per the above example, from the motion parameters of the identified target vehicle and its correspondingly identified turn signal, the condition in which a non-own-lane target vehicle decelerates while changing into the own lane can be recognized, so that the host vehicle's motion-parameter control system and safety systems adjust earlier, improving the safety of the host vehicle and its passengers, and the host vehicle's light system warns the rear target vehicle earlier, giving it more braking or adjustment time and more effectively reducing the rear-end collision risk.
In conclusion, the automatic vehicle-running control method of the embodiment of the invention improves the driving safety of the host vehicle and its passengers, lets the host vehicle's light system adjust earlier to warn the rear target vehicle, gives the rear target vehicle more braking or adjustment time, and more effectively reduces the rear-end collision risk.
In order to achieve the purpose, the invention also provides an automatic vehicle running control device. Fig. 8 is a schematic configuration diagram of an automatic control device for vehicle running according to a first embodiment of the present invention, which includes, as shown in fig. 8: a first obtaining module 1010, a second obtaining module 1020, a third obtaining module 1030, a first generating module 1040, a first identifying module 1050, a second generating module 1060, a fourth obtaining module 1070, a second identifying module 1080, and a control module 1090.
The first acquiring module 1010 is configured to acquire a first image and a second image of an environment in front of the subject vehicle from the front-facing 3D camera, where the first image is a color or brightness image and the second image is a depth image.
In one embodiment of the invention, the first acquisition module 1010 acquires a first image of an environment in front of the subject vehicle from an image sensor of a front-facing 3D camera.
In one embodiment of the invention, the first acquisition module 1010 acquires a second image of the environment in front of the subject vehicle from a time-of-flight sensor of the front-facing 3D camera.
And a second obtaining module 1020, configured to obtain a front highway lane line according to the first image.
In an embodiment of the invention, when the first image is a luminance image, the second obtaining module 1020 identifies the front road lane line according to a luminance difference between the front road lane line and the road surface in the first image.
In an embodiment of the present invention, when the first image is a color image, the second obtaining module 1020 converts the color image into a luminance image, and identifies the front road lane line according to a luminance difference between the front road lane line and the road surface in the first image.
And a third obtaining module 1030, configured to obtain a third image and a rear road lane line according to the imaging parameter of the first image and the front road lane line.
The first generating module 1040 is configured to map the front road lane line into the second image according to the interleaved mapping relationship between the first image and the second image to generate a plurality of front vehicle identification ranges.
The first recognition module 1050 is configured to recognize the preceding target vehicle according to all the preceding vehicle recognition ranges.
A second generating module 1060 for generating a plurality of rear vehicle identification ranges according to the third image and the rear road lane line.
A fourth obtaining module 1070, configured to obtain point cloud data from a laser radar, and project the point cloud data into the third image according to the installation parameters of the laser radar to obtain rear target point cloud data.
And the second identification module 1080 is used for identifying rear target vehicles according to all the rear vehicle identification ranges and the rear target point cloud data.
And the control module 1090 is used for performing cruise control on the motion parameters of the main vehicle according to the motion parameters and the steering lamps of the front target vehicle and the rear target vehicle.
In an embodiment of the present invention, the control module 1090 identifies, according to the motion parameter of the front target vehicle, a condition that the front non-own-lane target vehicle decelerates and changes lane to the own lane, so that the motion parameter control system of the host vehicle performs braking adjustment in advance, and the lamp system of the host vehicle reminds the rear target vehicle.
In an embodiment of the present invention, fig. 9 is a schematic structural diagram of an automatic control device for vehicle driving according to a second embodiment of the present invention, as shown in fig. 9, and on the basis of fig. 8, the automatic control device for vehicle driving further includes a third generation module 1100 and a third identification module 1110.
The third generating module 1100 is configured to generate a front target vehicle range according to the front target vehicle, and map the front target vehicle range into the first image according to the interleaved mapping relationship between the first image and the second image to generate a front headlight identification region.
In one embodiment of the invention, the third generating module 1100 generates the range of the front target vehicle from a closed area enclosed by target boundaries of the front target vehicle.
In one embodiment of the invention, the third generating module 1100 generates the forward target vehicle range from an extended enclosed closed area of the target boundary of the forward target vehicle.
In one embodiment of the invention, the third generating module 1100 generates the front target vehicle range from a closed area surrounded by a plurality of pixel position connecting lines of the front target vehicle.
The third identifying module 1110 is configured to identify a turn signal of a corresponding front target vehicle according to the front headlight identifying area.
In one embodiment of the present invention, the third identifying module 1110 identifies the turn signal of the corresponding front target vehicle according to the color, the flashing frequency or the flashing sequence of the tail lights in the front lamp identifying region.
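One way to realize the flashing-frequency cue is a small spectral test on the lamp region's mean intensity over time; the 1–2 Hz band is a typical statutory indicator rate and is an assumption here, not a figure from the patent:

```python
import numpy as np

def is_turn_signal(intensity_series, fps, band_hz=(1.0, 2.0)):
    """Classify a vehicle-light region as a flashing indicator if the
    dominant frequency of its mean intensity over time falls in band_hz."""
    sig = np.asarray(intensity_series, dtype=float)
    sig -= sig.mean()                                   # remove the DC offset
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    dominant = freqs[int(np.argmax(spectrum[1:])) + 1]  # skip the DC bin
    return band_hz[0] <= dominant <= band_hz[1]
```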
In this embodiment, the control module 1090 identifies, according to the motion parameters and turn signal of the front target vehicle, the condition in which the front own-lane target vehicle decelerates while changing into a front non-own lane, so that the motion-parameter control system of the host vehicle does not perform a braking adjustment.
It should be noted that the foregoing explanation of the vehicle driving automatic control method is also applicable to the vehicle driving automatic control device according to the embodiment of the present invention, and the implementation principle thereof is similar and will not be described herein again.
In summary, the automatic control device for vehicle driving of the embodiment of the invention acquires a first image and a second image of the environment in front of the subject vehicle from the front 3D camera and acquires the front road lane lines. It acquires a third image and the rear road lane lines from the imaging parameters of the first image and the front road lane lines; maps the front road lane lines into the second image according to the interleaved mapping relationship between the first image and the second image to generate a plurality of front vehicle identification ranges and identify the front target vehicle; generates a plurality of rear vehicle identification ranges from the third image and the rear road lane lines; acquires rear target point cloud data from the laser radar and identifies the rear target vehicle from it; and finally cruise-controls the motion parameters of the host vehicle according to the motion parameters of the front and rear target vehicles. The highway lane lines are thus identified accurately, the host vehicle's lane environment information is obtained in combination with the data acquired by the laser radar, and cruise control is performed on the host vehicle according to that specific lane environment, safeguarding driving safety.
Fig. 10 is a schematic structural diagram of an automatic control device for vehicle running according to a third embodiment of the present invention, and as shown in fig. 10, on the basis of fig. 9, a second acquisition module 1020 includes a creation unit 1021, a first detection unit 1022, and a second detection unit 1023.
Wherein, the creating unit 1021 is used for creating a binary image of the front road lane line according to the brightness information of the first image and a preset brightness threshold value.
A first detecting unit 1022, configured to detect, in the binary image, all edge pixel positions of a straight-lane solid-line lane line or all edge pixel positions of a curve solid-line lane line according to a preset detection algorithm;
the second detecting unit 1023 is configured to detect all edge pixel positions of the straight dashed lane line or all edge pixel positions of the curved dashed lane line in the binary image according to a preset detection algorithm.
It should be noted that the foregoing explanation of the vehicle driving automatic control method is also applicable to the vehicle driving automatic control device according to the embodiment of the present invention, and the implementation principle thereof is similar and will not be described herein again.
In summary, the automatic vehicle driving control apparatus according to the embodiment of the present invention creates a binary image of a front highway lane line according to the luminance information of the first image and a preset luminance threshold, detects all edge pixel positions of a straight solid lane line or all edge pixel positions of a curve solid lane line in the binary image according to a preset detection algorithm, and detects all edge pixel positions of a straight dotted lane line or all edge pixel positions of a curve dotted lane line in the binary image according to a preset detection algorithm. Therefore, the broken line and the solid line lane line of the straight road and the curve road in the highway lane line can be accurately identified.
Fig. 11 is a schematic structural diagram of an automatic control device for vehicle running according to a fourth embodiment of the present invention, and as shown in fig. 11, on the basis of fig. 9, a third obtaining module 1030 includes: a projection unit 1031, and an acquisition unit 1032.
And the projection unit 1031 is used for projecting all the pixel positions of the front road lane line to the physical world coordinate system of the host vehicle according to the imaging parameters of the first image to establish a third image.
An obtaining unit 1032, configured to obtain the position of the rear road lane line by subjecting the position of the front road lane line in the third image to continuous time accumulation and displacement relative to the origin of the host vehicle's physical world coordinate system.
It should be noted that the foregoing explanation of the vehicle driving automatic control method is also applicable to the vehicle driving automatic control device according to the embodiment of the present invention, and the implementation principle thereof is similar and will not be described herein again.
In summary, the automatic vehicle driving control apparatus according to the embodiment of the present invention projects all pixel positions of the front road lane line to the physical world coordinate system of the host vehicle according to the imaging parameters of the first image to create a third image, and acquires the position of the rear road lane line by accumulating the position of the front road lane line in the third image for continuous time and by displacing the position of the front road lane line with respect to the origin of the physical world coordinate system of the host vehicle. Therefore, the position of the rear lane line is accurately known, the vehicle is conveniently and relatively controlled according to the position of the rear lane line, and a foundation is laid for ensuring the driving safety.
Fig. 12 is a schematic structural diagram of an automatic control device for vehicle running according to a fifth embodiment of the present invention, and as shown in fig. 12, the first recognition module 1050 includes a first marking unit 1051, a first recognition unit 1052, a second recognition unit 1053, and a third recognition unit 1054, in addition to that shown in fig. 9.
The first marking unit 1051 is used for marking the labels of the front own lane and the front non-own lane for all the front vehicle identification ranges.
A first recognition unit 1052 for recognizing the front own-lane target vehicle according to the vehicle recognition range marking the front own-lane tag.
A second recognition unit 1053 for recognizing the front non-own-lane target vehicle from the vehicle recognition range marking the front non-own-lane tag.
And a third identifying unit 1054 for identifying the preceding lane-change target vehicle according to the preceding vehicle identification ranges combined two by two.
Fig. 13 is a schematic configuration diagram of an automatic control device for vehicle running according to a sixth embodiment of the present invention, and as shown in fig. 13, on the basis of fig. 9, a second recognition module 1080 includes: a second marking unit 1081, a fourth recognition unit 1082, a fifth recognition unit 1083 and a sixth recognition unit 1084.
The second marking unit 1081 is configured to mark the labels of the rear own lane and the rear non-own lane for all the rear vehicle identification ranges.
A fourth recognition unit 1082, configured to recognize the rear own-lane target vehicle according to the vehicle identification range marked with the rear own-lane label and the rear target point cloud data.
A fifth identifying unit 1083, configured to identify the rear non-own-lane target vehicle according to the vehicle identification range marked with the rear non-own-lane label and the rear target point cloud data.
A sixth identifying unit 1084, configured to identify the rear lane-change target vehicle according to the pairwise-combined rear vehicle identification ranges and the rear target point cloud data.
It should be noted that the foregoing explanation of the vehicle driving automatic control method is also applicable to the vehicle driving automatic control device according to the embodiment of the present invention, and the implementation principle thereof is similar and will not be described herein again.
In summary, the automatic vehicle driving control device according to the embodiment of the invention accurately identifies the front target vehicle and the rear target vehicle, so as to control the driving of the host vehicle according to the front target vehicle and the rear target vehicle, and provide guarantee for ensuring driving safety.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (18)

1. An automatic control method for vehicle running, characterized by comprising:
acquiring a first image and a second image of an environment in front of a subject vehicle from a front-facing 3D camera, wherein the first image is a color or brightness image, and the second image is a depth image;
acquiring a front road lane line according to the first image;
acquiring a third image and a rear road lane line according to the imaging parameters of the first image and the front road lane line;
mapping the front road lane lines into the second image according to the interweaving mapping relation between the first image and the second image to generate a plurality of front vehicle identification ranges;
marking labels of the front own lane and the front non-own lane for all the front vehicle identification ranges;
identifying a front own-lane target vehicle according to the vehicle identification range of the mark front own-lane label;
identifying the front non-own-lane target vehicle according to the vehicle identification range marking the front non-own-lane label;
identifying a front lane-changing target vehicle according to the combined front vehicle identification ranges;
generating a plurality of rear vehicle identification ranges according to the third image and the rear road lane lines;
acquiring point cloud data from a laser radar, and projecting the point cloud data into the third image according to the installation parameters of the laser radar to acquire rear target point cloud data;
marking labels of a rear own lane and a rear non-own lane for all rear vehicle identification ranges;
identifying a rear own-lane target vehicle according to the vehicle identification range marked with the rear own-lane label and the rear target point cloud data;
identifying a rear non-own-lane target vehicle according to the vehicle identification range marked with the rear non-own-lane label and the rear target point cloud data;
identifying a rear lane-change target vehicle according to the pairwise-combined rear vehicle identification ranges and the rear target point cloud data;
generating a front target vehicle range according to the front lane-change target vehicle, and mapping the front target vehicle range into the first image according to the interleaved mapping relationship between the first image and the second image to generate a front vehicle-light identification area;
identifying a steering lamp of a corresponding front lane-changing target vehicle according to the front vehicle lamp identification area;
and carrying out cruise control on the motion parameters of the main vehicle according to the motion parameters and the steering lamps of the front lane-changing target vehicle and the rear lane-changing target vehicle.
2. The method of claim 1, wherein the acquiring the first image and the second image of the environment in front of the subject vehicle from the front-facing 3D camera comprises:
acquiring a first image of an environment in front of a subject vehicle from an image sensor of a front-facing 3D camera;
a second image of the environment in front of the subject vehicle is acquired from a time-of-flight sensor of the front-facing 3D camera.
3. The method of claim 1, wherein said obtaining a front highway lane line from said first image comprises:
when the first image is a brightness image, identifying a front road lane line according to the brightness difference between the front road lane line and the road surface in the first image; or,
and when the first image is a color image, converting the color image into a brightness image, and identifying the front highway lane line according to the brightness difference between the front highway lane line and the road surface in the first image.
4. The method of claim 3, wherein identifying the front highway lane line based on a difference in luminance of the front highway lane line and a road surface in the first image comprises:
creating a binary image of the front road lane line according to the brightness information of the first image and a preset brightness threshold;
detecting all edge pixel positions of a straight-road solid-line lane line or all edge pixel positions of a curve solid-line lane line in the binary image according to a preset detection algorithm;
and detecting all edge pixel positions of the straight road dotted line lane line or detecting all edge pixel positions of the curve dotted line lane line in the binary image according to a preset detection algorithm.
5. The method of claim 1, wherein said obtaining a third image and a rear highway lane line based on imaging parameters of the first image and the front highway lane line comprises:
projecting all pixel positions of the front road lane line to the physical world coordinate system of the main vehicle according to the imaging parameters of the first image to establish a third image;
and obtaining the position of the rear road lane line by subjecting the position of the front road lane line in the third image to continuous time accumulation and displacement relative to the origin of the physical world coordinate system of the host vehicle.
6. The method of claim 1, wherein generating a forward target vehicle range from the forward lane change target vehicle comprises:
and detecting the target boundary of the front lane-changing target vehicle by adopting a boundary detection method in an image processing algorithm for identification.
7. The method of claim 1, wherein generating a forward target vehicle range from the forward lane change target vehicle comprises:
generating a front target vehicle range according to a closed area defined by the target boundary of the front lane-changing target vehicle; or,
generating a front target vehicle range according to a closed area formed by the extended target boundary of the front lane-changing target vehicle; or,
and generating a front target vehicle range according to a closed area defined by a plurality of pixel position connecting lines of the front lane changing target vehicle.
8. The method of claim 1, wherein identifying the turn signal of the respective forward lane-change target vehicle based on the forward vehicle light identification region comprises:
and identifying the steering lamp of the corresponding front lane-changing target vehicle according to the color, the flashing frequency or the flashing sequence of the tail lamp in the front vehicle lamp identification area.
9. The method according to claim 1, wherein the cruise controlling the moving parameters of the subject vehicle according to the moving parameters and turn signals of the front lane-change target vehicle and the rear lane-change target vehicle includes:
identifying the working condition that the non-self-lane target vehicle in front decelerates and changes lane to the self-lane according to the motion parameters of the front lane-changing target vehicle and the steering lamp, so that the motion parameter control system of the main vehicle carries out braking adjustment in advance, and the lamp system of the main vehicle reminds the rear lane-changing target vehicle;
or,
and identifying the working condition that the front self-lane target vehicle decelerates and changes lane to a front non-self-lane according to the motion parameters of the front lane-changing target vehicle and the steering lamp so that the motion parameter control system of the main vehicle does not perform braking adjustment.
10. An automatic vehicle travel control device, characterized by comprising:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a first image and a second image of the environment in front of a main vehicle from a front 3D camera, the first image is a color or brightness image, and the second image is a depth image;
the second acquisition module is used for acquiring a front highway lane line according to the first image;
the third acquisition module is used for acquiring a third image and a rear road lane line according to the imaging parameters of the first image and the front road lane line;
the first generation module is used for mapping the front road lane lines into the second image according to the interweaving mapping relation between the first image and the second image to generate a plurality of front vehicle identification ranges;
the first recognition module is used for marking the labels of the front lane and the front non-own lane in all the front vehicle recognition ranges, recognizing the front own-lane target vehicle according to the vehicle recognition range of the front own-lane label, recognizing the front non-own-lane target vehicle according to the vehicle recognition range of the front non-own-lane label, and recognizing the front lane-changing target vehicle according to the front vehicle recognition ranges combined in pairs;
a second generating module, configured to generate a plurality of rear vehicle identification ranges according to the third image and the rear road lane;
the fourth acquisition module is used for acquiring point cloud data from a laser radar and projecting the point cloud data into the third image according to the installation parameters of the laser radar so as to acquire rear target point cloud data;
the second recognition module is used for marking the labels of a rear lane and a rear non-own lane for all rear vehicle recognition ranges, recognizing the target vehicles of the rear lane behind the labels according to the vehicle recognition ranges of the labels of the rear lane behind the labels and the rear target point cloud data, recognizing the target vehicles of the rear non-own lane behind the labels according to the vehicle recognition ranges of the labels of the rear non-own lane behind the labels and the rear target point cloud data, and recognizing the target vehicles of the rear lane change behind the labels according to the rear vehicle recognition ranges combined in pairs and the rear target point cloud data;
the third generation module is used for generating a front target vehicle range according to the front lane-changing target vehicle and mapping the front target vehicle range to the first image according to the interweaving mapping relation between the first image and the second image to generate a front vehicle lamp identification area;
the third identification module is used for identifying the steering lamp of the corresponding front lane-changing target vehicle according to the front vehicle lamp identification area;
and the control module is used for carrying out cruise control on the motion parameters of the main vehicle according to the motion parameters and the steering lamps of the front lane-changing target vehicle and the rear lane-changing target vehicle.
11. The apparatus of claim 10, wherein the first obtaining module is to:
acquiring a first image of an environment in front of a subject vehicle from an image sensor of a front-facing 3D camera;
a second image of the environment in front of the subject vehicle is acquired from a time-of-flight sensor of the front-facing 3D camera.
12. The apparatus of claim 10, wherein the second obtaining module is to:
when the first image is a brightness image, identifying a front road lane line according to the brightness difference between the front road lane line and the road surface in the first image; or,
and when the first image is a color image, converting the color image into a brightness image, and identifying the front road lane line according to the brightness difference between the front road lane line and the road surface in the first image.
13. The apparatus of claim 12, wherein the second obtaining module comprises:
the creating unit is used for creating a binary image of the front road lane line according to the brightness information of the first image and a preset brightness threshold value;
the first detection unit is used for detecting all edge pixel positions of a straight-road solid-line lane line or all edge pixel positions of a curve solid-line lane line in the binary image according to a preset detection algorithm;
and the second detection unit is used for detecting all edge pixel positions of the straight-way dotted line lane line or detecting all edge pixel positions of the curve dotted line lane line in the binary image according to a preset detection algorithm.
14. The apparatus of claim 10, wherein the third acquisition module comprises:
the projection unit is used for projecting all pixel positions of the front road lane line to the physical world coordinate system of the main vehicle according to the imaging parameters of the first image to establish a third image;
and the acquisition unit is used for acquiring the position of the front road lane line in the third image through continuous time accumulation and displacement relative to the origin of the physical world coordinate system of the host vehicle.
15. The apparatus of claim 12, wherein the third generation module is to:
and detecting the target boundary of the front target vehicle by adopting a boundary detection method in an image processing algorithm for identification.
16. The apparatus of claim 15, wherein the third generation module is configured to:
generate the front target vehicle range according to a closed area defined by the target boundary of the front lane-changing target vehicle; or,
generate the front target vehicle range according to a closed area formed by extending the target boundary of the front lane-changing target vehicle; or,
generate the front target vehicle range according to a closed area defined by connecting lines between a plurality of pixel positions of the front lane-changing target vehicle.
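
Claims 15 and 16 together suggest boundary detection followed by a closed-area range; below is one conventional way to do this with OpenCV contours, where the bounding rectangle stands in for the claimed closed area and a binary mask of the candidate vehicle is assumed to be already available.

    import cv2

    def front_target_vehicle_range(binary_mask):
        """Detect the target boundary and return the closed area it defines
        as an (x, y, w, h) rectangle. Uses the OpenCV 4.x return signature."""
        contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        boundary = max(contours, key=cv2.contourArea)  # largest closed boundary
        return cv2.boundingRect(boundary)
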
17. The apparatus of claim 15, wherein the third identification module is configured to:
identify the turn signal of the corresponding front lane-changing target vehicle according to the color, flashing frequency, or flashing sequence of the tail lamps within the front vehicle lamp identification area.
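
For claim 17's flashing-frequency cue, a sketch that estimates the blink rate of a lamp region from its mean-intensity time series via an FFT; the 0.8-2.5 Hz acceptance band is an assumed plausibility window for turn signals, not a value from the patent.

    import numpy as np

    def blink_frequency(intensity_series, fps):
        """Dominant flashing frequency of a lamp region, plus whether it
        falls in a plausible turn-signal band."""
        series = np.asarray(intensity_series, dtype=float)
        series -= series.mean()                    # drop the DC component
        spectrum = np.abs(np.fft.rfft(series))
        freqs = np.fft.rfftfreq(len(series), d=1.0 / fps)
        peak = freqs[np.argmax(spectrum[1:]) + 1]  # strongest non-DC frequency
        return peak, 0.8 <= peak <= 2.5
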
18. The apparatus of claim 10, wherein the control module is configured to:
identify the working condition in which a front non-own-lane target vehicle decelerates and changes lane into the own lane, according to the motion parameters and turn signal of the front lane-changing target vehicle, so that the motion parameter control system of the host vehicle performs braking adjustment in advance and the lamp system of the host vehicle alerts the rear lane-changing target vehicle;
or,
identify the working condition in which a front own-lane target vehicle decelerates and changes lane into a front non-own lane, according to the motion parameters and turn signal of the front lane-changing target vehicle, so that the motion parameter control system of the host vehicle does not perform braking adjustment.
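
Finally, the two working conditions of claim 18 reduce to a cut-in/cut-out decision; the sketch below paraphrases that logic with illustrative argument names, none of them from the patent.

    def cruise_decision(changer_in_own_lane, signaling_toward_own_lane, decelerating):
        """Cut-in: a non-own-lane vehicle decelerates and signals into the own
        lane -> brake in advance and alert the rear lane-changing vehicle.
        Cut-out: the own-lane vehicle signals out of the lane -> no braking."""
        if not changer_in_own_lane and signaling_toward_own_lane and decelerating:
            return {"brake_in_advance": True, "alert_rear": True}
        if changer_in_own_lane and not signaling_toward_own_lane:
            return {"brake_in_advance": False, "alert_rear": False}
        return {"brake_in_advance": False, "alert_rear": False}
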
CN201710122206.5A 2017-03-02 2017-03-02 Automatic control method and device for vehicle running Active CN108528433B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710122206.5A CN108528433B (en) 2017-03-02 2017-03-02 Automatic control method and device for vehicle running

Publications (2)

Publication Number Publication Date
CN108528433A (en) 2018-09-14
CN108528433B (en) 2020-08-25

Family

ID=63489065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710122206.5A Active CN108528433B (en) 2017-03-02 2017-03-02 Automatic control method and device for vehicle running

Country Status (1)

Country Link
CN (1) CN108528433B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110967699B (en) * 2018-09-30 2022-03-22 毫末智行科技有限公司 Method and device for determining area of vehicle where environmental target is located
CN109657686B (en) * 2018-10-31 2021-04-20 百度在线网络技术(北京)有限公司 Lane line generation method, apparatus, device, and storage medium
EP3697659B1 (en) * 2018-12-26 2023-11-22 Baidu.com Times Technology (Beijing) Co., Ltd. Method and system for generating reference lines for autonomous driving vehicles
CN112185144A (en) * 2019-07-01 2021-01-05 大陆泰密克汽车系统(上海)有限公司 Traffic early warning method and system
CN111578894B (en) * 2020-06-02 2021-10-15 北京经纬恒润科技股份有限公司 Method and device for determining heading angle of obstacle
CN112937604B (en) * 2021-03-03 2022-07-29 福瑞泰克智能系统有限公司 Lane changing processing method, device and equipment and vehicle
CN112926476B (en) * 2021-03-08 2024-06-18 京东鲲鹏(江苏)科技有限公司 Vehicle identification method, device and storage medium
CN114312838B (en) * 2021-12-29 2023-07-28 上海洛轲智能科技有限公司 Control method and device for vehicle and storage medium
CN114212079B (en) * 2022-02-18 2022-05-20 国汽智控(北京)科技有限公司 ACC-based vehicle control method, device and system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001114048A (en) * 1999-10-20 2001-04-24 Matsushita Electric Ind Co Ltd On-vehicle operation supporting information display device
CN104952254A (en) * 2014-03-31 2015-09-30 比亚迪股份有限公司 Vehicle identification method and device and vehicle
CN106463064A (en) * 2014-06-19 2017-02-22 日立汽车系统株式会社 Object recognition apparatus and vehicle travel controller using same
KR20160000495A (en) * 2014-06-24 2016-01-05 주식회사 만도 System and method for estimating distance of front vehicle
CN205573939U * 2016-01-22 2016-09-14 江苏大学 Forward collision avoidance system based on the driving behavior of the preceding vehicle's driver
CN106043312A (en) * 2016-06-28 2016-10-26 戴姆勒股份公司 Driving assistance system and method and motor vehicle provided with driving assistance system

Also Published As

Publication number Publication date
CN108528433A (en) 2018-09-14

Similar Documents

Publication Publication Date Title
CN108528433B (en) Automatic control method and device for vehicle running
CN108528432B (en) Automatic control method and device for vehicle running
CN108528431B (en) Automatic control method and device for vehicle running
CN107886770B (en) Vehicle identification method and device and vehicle
CN106909152B (en) Automobile-used environmental perception system and car
CN109204311B (en) Automobile speed control method and device
CN108528448B (en) Automatic control method and device for vehicle running
EP3141926A1 (en) Automated detection of hazardous drifting vehicles by vehicle sensors
US20180001894A1 (en) Vehicle cruise control device and cruise control method
US9886773B2 (en) Object detection apparatus and object detection method
CN108536134B (en) Automatic control method and device for vehicle running
US20140240502A1 (en) Device for Assisting a Driver Driving a Vehicle or for Independently Driving a Vehicle
EP2461305A1 (en) Road shape recognition device
JP5363921B2 (en) Vehicle white line recognition device
WO2020259284A1 (en) Obstacle detection method and device
CN108974010B (en) Processing device, vehicle, processing method, and storage medium
JP5313638B2 (en) Vehicle headlamp device
JP6354659B2 (en) Driving support device
CN107886729B (en) Vehicle identification method and device and vehicle
JP6340738B2 (en) Vehicle control device, vehicle control method, and vehicle control program
CN111196217A (en) Vehicle assistance system
JP7255345B2 (en) Driving lane recognition device, driving lane recognition method and program
CN108528450B (en) Automatic control method and device for vehicle running
CN108528449B (en) Automatic control method and device for vehicle running
CN114084129A (en) Fusion-based vehicle automatic driving control method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant