CN108528449B - Automatic control method and device for vehicle running - Google Patents

Automatic control method and device for vehicle running

Info

Publication number
CN108528449B
CN108528449B (application CN201710120748.9A)
Authority
CN
China
Prior art keywords
vehicle
image
target vehicle
lane
tunnel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710120748.9A
Other languages
Chinese (zh)
Other versions
CN108528449A (en)
Inventor
黄忠伟
姜波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BYD Co Ltd
Original Assignee
BYD Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BYD Co Ltd filed Critical BYD Co Ltd
Priority to CN201710120748.9A priority Critical patent/CN108528449B/en
Publication of CN108528449A publication Critical patent/CN108528449A/en
Application granted granted Critical
Publication of CN108528449B publication Critical patent/CN108528449B/en

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units, or advanced driver assistance systems for ensuring comfort, stability and safety or drive control systems for propelling or retarding the vehicle
    • B60W30/14 - Adaptive cruise control
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2420/00 - Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40 - Photo or light sensitive means, e.g. infrared sensors
    • B60W2420/403 - Image sensing, e.g. optical camera
    • B60W2420/408
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2554/00 - Input parameters relating to objects
    • B60W2554/80 - Spatial relation or speed relative to objects
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2555/00 - Input parameters relating to exterior conditions, not covered by groups B60W2552/00, B60W2554/00
    • B60W2555/60 - Traffic rules, e.g. speed limits or right of way

Abstract

The invention discloses an automatic control method and device for vehicle running. The method comprises the following steps: identifying a front target vehicle and a rear target vehicle according to a first image and a second image of the environment in front of a subject vehicle acquired from a front-facing 3D camera; acquiring tunnel information and speed limit information; changing the settings of the cruising speed upper limit and the cruising safety distance of the subject vehicle according to the tunnel information and the speed limit information; and adjusting the focal length of the front-facing 3D camera according to the tunnel information, and performing in-tunnel cruise control on the motion parameters of the subject vehicle according to the motion parameters of the front target vehicle and of the rear target vehicle. The driving safety of the vehicle in the tunnel is thereby improved.

Description

Automatic control method and device for vehicle running
Technical Field
The invention relates to the technical field of vehicle control, and in particular to an automatic control method and device for vehicle running.
Background
Currently, adaptive cruise systems for vehicles are attracting growing interest. The user sets a desired vehicle speed, and the system obtains the exact position of the preceding vehicle using a low-power radar or an infrared beam. If the preceding vehicle decelerates, or a new target is detected, the system sends an execution signal to the engine or the braking system to reduce the vehicle speed, so that the vehicle maintains a safe driving distance from the preceding vehicle. When the road ahead is clear, the vehicle accelerates back to the set speed, and the radar system automatically monitors the next target. An adaptive cruise control system thus controls the vehicle speed in place of the user, avoids frequent cancellation and resetting of cruise control, is suitable for more road conditions, and provides a more relaxed driving mode for the user.
However, when multiple target vehicles travel in a tunnel, a distance-measuring sensor such as a millimeter wave radar cannot recognize lane lines well. A subject vehicle equipped with only a millimeter wave radar is therefore likely to classify a target vehicle in its own lane as being in another lane, and vice versa, which may cause the adaptive cruise system of the subject vehicle to brake erroneously or brake late, so that the traveling safety of the subject vehicle is low.
Disclosure of Invention
The object of the present invention is to solve, at least to some extent, one of the above-mentioned technical problems.
To this end, a first object of the present invention is to propose an automatic control method for vehicle travel, which improves the safety of vehicle travel in tunnels.
A second object of the present invention is to provide an automatic vehicle running control device.
In order to achieve the above object, an embodiment of a first aspect of the present invention provides a vehicle running automatic control method, including: acquiring a first image and a second image of an environment in front of a subject vehicle from a front-facing 3D camera, wherein the first image is a color or brightness image, and the second image is a depth image; acquiring a front road lane line according to the first image; acquiring a third image and a rear road lane line according to the imaging parameters of the first image and the front road lane line; mapping the front road lane lines into the second image according to the interweaving mapping relation between the first image and the second image to generate a plurality of front vehicle identification ranges; identifying a front target vehicle according to all the front vehicle identification ranges; generating a plurality of rear vehicle identification ranges according to the third image and the rear road lane lines; acquiring a rear target vehicle parameter group from a millimeter wave radar, and projecting the rear target vehicle parameter group into the third image according to installation parameters of the millimeter wave radar to acquire a plurality of rear target vehicle parameter points; identifying rear target vehicles according to all the rear vehicle identification ranges and the plurality of rear target vehicle parameter points; acquiring tunnel information and speed limit information; changing the settings of the cruising speed upper limit and the cruising safety distance of the main vehicle according to the tunnel information and the speed limit information; and adjusting the focal length of the front 3D camera according to the tunnel information, and performing in-tunnel cruise control on the motion parameters of the main vehicle according to the motion parameters of the front target vehicle and the motion parameters of the rear target vehicle.
The automatic control method for vehicle driving of the embodiment of the invention acquires a first image and a second image of the environment in front of a subject vehicle from a front-facing 3D camera, acquires a front road lane line, acquires a third image and a rear road lane line according to imaging parameters of the first image and the front road lane line, maps the front road lane line into the second image according to an interleaved mapping relationship between the first image and the second image to generate a plurality of front vehicle identification ranges, identifies a front target vehicle according to the front vehicle identification ranges, generates a plurality of rear vehicle identification ranges according to the third image and the rear road lane line, acquires a rear target vehicle parameter group from a millimeter wave radar, projects the rear target vehicle parameter group into the third image according to installation parameters of the millimeter wave radar to acquire a plurality of rear target vehicle parameter points, thereby identifying the rear target vehicle, and finally, acquiring tunnel information and speed limit information, changing the setting of the cruising speed upper limit and cruising safety distance of the main vehicle according to the tunnel information and the speed limit information, adjusting the focal length of the front 3D camera according to the tunnel information, and performing in-tunnel cruising control on the motion parameters of the main vehicle according to the motion parameters of the front target vehicle and the motion parameters of the rear target vehicle. Therefore, the main vehicle can control the braking of the main vehicle according to the environmental information of the highway lane line, unnecessary braking adjustment is reduced, the risk of rear-end collision is effectively reduced, and the running safety of the main vehicle in the tunnel is improved.
In order to achieve the above object, a second aspect of the present invention provides a vehicle running automatic control device, including: a first acquisition module, used for acquiring a first image and a second image of the environment in front of a subject vehicle from a front-facing 3D camera, wherein the first image is a color or brightness image and the second image is a depth image; a second acquisition module, used for acquiring a front road lane line according to the first image; a third acquisition module, used for acquiring a third image and a rear road lane line according to the imaging parameters of the first image and the front road lane line; a first generation module, used for mapping the front road lane lines into the second image according to the interleaved mapping relationship between the first image and the second image to generate a plurality of front vehicle identification ranges; a first identification module, used for identifying front target vehicles according to all the front vehicle identification ranges; a second generation module, used for generating a plurality of rear vehicle identification ranges according to the third image and the rear road lane lines; a fourth acquisition module, used for acquiring a rear target vehicle parameter group from a millimeter wave radar and projecting the rear target vehicle parameter group into the third image according to the installation parameters of the millimeter wave radar so as to acquire a plurality of rear target vehicle parameter points; a second identification module, used for identifying rear target vehicles according to all the rear vehicle identification ranges and the plurality of rear target vehicle parameter points; a fifth acquisition module, used for acquiring tunnel information and speed limit information; a first adjusting module, used for changing the settings of the cruising speed upper limit and the cruising safety distance of the subject vehicle according to the tunnel information and the speed limit information; a second adjusting module, used for adjusting the focal length of the front-facing 3D camera according to the tunnel information; and a control module, used for performing in-tunnel cruise control on the motion parameters of the subject vehicle according to the motion parameters of the front target vehicle and the motion parameters of the rear target vehicle.
The automatic vehicle driving control apparatus of an embodiment of the present invention acquires a first image and a second image of an environment ahead of a subject vehicle from a front 3D camera, acquires a front road lane line, acquires a third image and a rear road lane line according to an imaging parameter of the first image and the front road lane line, maps the front road lane line into the second image according to an interleaved mapping relationship between the first image and the second image to generate a plurality of front vehicle recognition ranges, recognizes a front target vehicle according to the front vehicle recognition ranges, generates a plurality of rear vehicle recognition ranges according to the third image and the rear road lane line, acquires a rear target vehicle parameter group from a millimeter wave radar, projects the rear target vehicle parameter group into the third image according to an installation parameter of the millimeter wave radar to acquire a plurality of rear target vehicle parameter points, thereby recognizing the rear target vehicle, and finally, acquiring tunnel information and speed limit information, changing the setting of the cruising speed upper limit and cruising safety distance of the main vehicle according to the tunnel information and the speed limit information, adjusting the focal length of the front 3D camera according to the tunnel information, and performing in-tunnel cruising control on the motion parameters of the main vehicle according to the motion parameters of the front target vehicle and the motion parameters of the rear target vehicle. Therefore, the main vehicle can control the braking of the main vehicle according to the environmental information of the highway lane line, unnecessary braking adjustment is reduced, the risk of rear-end collision is effectively reduced, and the running safety of the main vehicle in the tunnel is improved.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flowchart of a vehicle running automatic control method according to a first embodiment of the invention;
fig. 2 is a flowchart of a vehicle running automatic control method according to a second embodiment of the invention;
fig. 3 is a flowchart of a vehicle running automatic control method according to a third embodiment of the invention;
fig. 4 is a flowchart of a vehicle running automatic control method according to a fourth embodiment of the invention;
fig. 5 is a flowchart of a vehicle running automatic control method according to a fifth embodiment of the invention;
fig. 6 is a flowchart of a vehicle running automatic control method according to a sixth embodiment of the invention;
Fig. 7 is a flowchart of a vehicle running automatic control method according to a seventh embodiment of the invention;
fig. 8 is a scene diagram of a vehicle running automatic control method according to an embodiment of the invention;
fig. 9 is a scene diagram of a vehicle running automatic control method according to another embodiment of the invention;
fig. 10 is a schematic configuration diagram of a vehicle running automatic control apparatus according to a first embodiment of the invention;
fig. 11 is a schematic configuration diagram of an automatic control device for vehicle running according to a second embodiment of the present invention;
fig. 12 is a schematic configuration diagram of an automatic control device for vehicle running according to a third embodiment of the invention;
fig. 13 is a schematic configuration diagram of an automatic control device for vehicle running according to a fourth embodiment of the invention;
fig. 14 is a schematic configuration diagram of a vehicular running automatic control apparatus according to a fifth embodiment of the invention;
fig. 15 is a schematic configuration diagram of an automatic control device for vehicle running according to a sixth embodiment of the invention; and
fig. 16 is a schematic configuration diagram of an automatic control device for vehicle running according to a seventh embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The following describes a vehicle running automatic control method and apparatus according to an embodiment of the present invention with reference to the drawings.
Fig. 1 is a flowchart of a vehicle running automatic control method according to an embodiment of the present invention.
Generally, a millimeter wave radar is mounted at the front, side, or rear of a vehicle to achieve functions with different emphases, such as forward-looking collision avoidance, side collision avoidance, and rear-looking collision avoidance.
Specifically, after a vehicle equipped with the millimeter wave radar is on the road, the millimeter wave radar selects a vehicle ahead to follow and monitors it as the target vehicle, so that whether the preceding vehicle accelerates, decelerates, stops, or starts, the subject vehicle learns of it in real time and takes corresponding measures. However, because the millimeter wave radar is a monopulse sensor, it cannot accurately determine the type and properties of the detected target vehicle, and when the target vehicle travels in a tunnel, lane lines cannot be recognized well, which easily causes recognition delays and the like and creates potential safety hazards while driving.
In order to solve the problems, the invention provides an automatic vehicle running control method, which ensures the running safety of a main vehicle in a tunnel.
The following describes the automatic control method for vehicle running according to the present invention with reference to specific embodiments. As shown in fig. 1, the vehicle travel automatic control method includes:
s101, acquiring a first image and a second image of the environment in front of the main vehicle from the front 3D camera, wherein the first image is a color or brightness image, and the second image is a depth image.
Specifically, a 3D camera is arranged in front of a current host vehicle in advance to acquire a first image and a second image of an environment in front of the host vehicle, wherein the first image is a color or brightness image, and the second image is a depth image.
In practical applications, the first image and the second image of the environment in front of the subject vehicle may be acquired from the front 3D camera in various ways according to the structure of the imaging device of the front 3D camera.
As one possible implementation, a first image of the environment in front of the subject vehicle is acquired from an image sensor of the front 3D camera, and a second image of the environment in front of the subject vehicle is acquired from a Time of Flight (TOF) sensor of the front 3D camera.
Here, an image sensor refers to an array or collection of luminance pixel sensors, for example red-green-blue (RGB) or luminance-chrominance (YUV) pixel sensors. Such sensors are commonly used to obtain a color or luminance image of an environment, but they are limited in their ability to accurately determine the distance between the pixel sensor and the detected object.
A TOF sensor refers to an array or collection of TOF pixel sensors, which may be light sensors, phase detectors, and the like. A TOF pixel sensor detects the time of flight of light from a pulsed or modulated light source traveling between the sensor and a detected object, so as to measure the distance to the object and acquire a depth image.
In addition, in practical applications, both the image sensor and the TOF sensor may be fabricated using Complementary Metal Oxide Semiconductor (CMOS) processes, and the luminance pixel sensors and TOF pixel sensors may be laid out in a fixed ratio on the same substrate. For example, 8 luminance pixel sensors and 1 TOF pixel sensor fabricated in an 8:1 ratio constitute one large interleaved pixel, where the light-sensing area of the 1 TOF pixel sensor may equal that of the 8 luminance pixel sensors, and the 8 luminance pixel sensors may be arranged in an array of 2 rows and 4 columns.
For example, an array of 360 rows and 480 columns of the interleaved pixels described above can be fabricated on a substrate with a 1-inch optical target surface, yielding an active luminance pixel sensor array of 720 rows and 1920 columns and an active TOF pixel sensor array of 360 rows and 480 columns. A single camera combining the image sensor and the TOF sensor can thus acquire a color or luminance image and a depth image simultaneously.
Thus, a single front-facing 3D camera that acquires both the first image and the second image of the environment in front of the subject vehicle can be manufactured using CMOS processes, and according to Moore's law of the semiconductor industry, its production cost will become sufficiently low within a limited period of time.
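The 8:1 interleave arithmetic above can be checked with a short sketch (illustrative only, not part of the patent text); the cell-layout constants follow the 2-row-by-4-column arrangement described in the preceding paragraphs:

```python
# Derive the luminance and TOF pixel array sizes implied by the 8:1
# interleaved-pixel layout, where each interleaved pixel holds one TOF
# pixel plus 8 luminance pixels arranged in 2 rows x 4 columns.
LUM_ROWS_PER_CELL = 2   # luminance rows inside one interleaved pixel
LUM_COLS_PER_CELL = 4   # luminance columns inside one interleaved pixel

def array_sizes(interleaved_rows: int, interleaved_cols: int):
    """Return ((lum_rows, lum_cols), (tof_rows, tof_cols))."""
    lum = (interleaved_rows * LUM_ROWS_PER_CELL,
           interleaved_cols * LUM_COLS_PER_CELL)
    tof = (interleaved_rows, interleaved_cols)  # one TOF pixel per cell
    return lum, tof

lum, tof = array_sizes(360, 480)
print(lum, tof)  # (720, 1920) (360, 480), matching the example in the text
```

Changing the substrate size only changes the two arguments; the 720x1920 and 360x480 figures quoted above follow directly from the 2x4 cell layout.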
And S102, acquiring a front road lane line according to the first image.
Specifically, since the first image is a color or luminance image, the position of a road lane line can be identified using only the luminance difference between the lane line and the road surface. Therefore, in an actual implementation, the road lane line may be acquired from the luminance information of the first image.
Specifically, if the first image is a luminance image, the front road lane line is identified from a luminance difference between the front road lane line and the road surface in the first image.
And if the first image is a color image, converting the color image into a brightness image, and identifying the front highway lane line according to the brightness difference between the front highway lane line and the road surface in the first image.
Since the conversion method from the color image to the luminance image is familiar to those skilled in the art, the detailed process of converting the color image to the luminance image is not described herein.
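As a rough illustration of this luminance-difference idea, the following is an assumed minimal implementation, not the patent's actual detector (a real detector would add gradient filtering and line fitting on top of such a mask):

```python
import numpy as np

# Lane-line pixels are brighter than the surrounding road surface, so a
# luminance threshold separates candidate lane pixels from the road.
def lane_pixel_mask(luma: np.ndarray, margin: float = 40.0) -> np.ndarray:
    """Mark pixels whose luminance exceeds the image mean by `margin`."""
    road_level = float(luma.mean())          # crude road-surface estimate
    return luma > road_level + margin

# Toy 4x6 luminance image: dark road (~50) with one bright lane stripe (~200).
img = np.full((4, 6), 50.0)
img[:, 2] = 200.0
mask = lane_pixel_mask(img)
print(mask[:, 2].all(), mask[:, 0].any())  # True False
```

The `margin` value is an illustrative assumption; in practice it would be tuned to the camera's exposure and the tunnel lighting.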
And S103, acquiring a third image and a rear road lane line according to the imaging parameters of the first image and the front road lane line.
The imaging parameters of the first image may include an imaging pixel coordinate system of a camera that obtains the first image, a focal length, and a position and an orientation of the camera in a physical world coordinate system of the host vehicle, that is, a projection relationship may be established between any image pixel coordinate of the first image and the physical world coordinate system of the host vehicle through the imaging parameters, and the method for establishing the projection relationship is familiar to those skilled in the art.
The third image is a top view of all pixel positions of the projected front road lane line, and therefore the position of the front road lane line in the third image is the position of the road lane line in front of the host vehicle relative to the origin of the physical world coordinate system of the host vehicle.
Further, since the rear road lane line is a continuation of the front road lane line, the rear road lane line is acquired on the basis of the front road lane line obtained above.
And S104, mapping the front road lane lines into the second image according to the interleaved mapping relation between the first image and the second image to generate a plurality of front vehicle identification ranges.
Specifically, the first image and the second image are the color or brightness image and the depth image acquired by the same front-facing 3D camera, so an interleaved mapping relationship exists between them. Owing to this relationship, the row-column coordinates of each pixel of the first image determine, through proportional scaling, the row-column coordinates of at least one pixel in the second image. Each edge pixel position of the front road lane line acquired from the first image therefore determines at least one pixel position in the second image, so that a proportionally scaled front road lane line is obtained in the second image.
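The proportional row-column scaling implied by the interleaved mapping can be sketched as follows, assuming the 2x4 luminance sub-array described earlier (the function name is illustrative):

```python
# Map a pixel (row, col) of the first (luminance) image onto the pixel of
# the second (depth) image that shares the same interleaved cell, given
# the 2-row-by-4-column luminance layout per cell.
LUM_ROWS_PER_CELL = 2
LUM_COLS_PER_CELL = 4

def first_to_second(row: int, col: int) -> tuple:
    """Row/column of the depth pixel covering luminance pixel (row, col)."""
    return row // LUM_ROWS_PER_CELL, col // LUM_COLS_PER_CELL

# Every luminance pixel of an interleaved cell lands on the same depth pixel:
print(first_to_second(0, 0), first_to_second(1, 3), first_to_second(719, 1919))
# (0, 0) (0, 0) (359, 479)
```

This is why each lane-line edge pixel of the first image determines at least one pixel position in the second image: the mapping is many-to-one, eight luminance pixels per depth pixel.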
Furthermore, following the perspective seen by human eyes, one front vehicle identification range is uniquely created for each pair of adjacent front road lane lines, using the proportionally scaled front road lane lines acquired in the second image.
And S105, identifying the front target vehicle according to all the front vehicle identification ranges.
Specifically, after the front vehicle identification ranges are acquired, the vehicles located within these ranges are identified as front target vehicles.
And S106, generating a plurality of rear vehicle identification ranges according to the third image and the rear road lane line.
Specifically, following the perspective seen by human eyes, one rear vehicle identification range is uniquely created for each pair of adjacent rear road lane lines in the third image.
And S107, acquiring a rear target vehicle parameter group from the millimeter wave radar, and projecting the rear target vehicle parameter group into a third image according to the installation parameters of the millimeter wave radar to acquire a plurality of rear target vehicle parameter points.
Specifically, a millimeter wave radar, such as a frequency-modulated continuous wave radar operating at 24 GHz or 77 GHz, can acquire parameters such as the distance, relative speed, and azimuth of a plurality of rear target vehicles, forming a rear target vehicle parameter group. The rear target vehicle parameters are thus acquired by the millimeter wave radar, and each set in the rear target vehicle parameter group includes at least the distance, relative speed, and azimuth of one rear target vehicle.
The installation parameters, i.e., the position and orientation of the millimeter wave radar in the physical world coordinate system of the host vehicle, can be measured and recorded during end-of-line inspection of the host vehicle. The parameters of each target vehicle in the parameter group, such as distance, relative speed, and azimuth, can then be converted into parameters relative to the origin of the physical world coordinate system of the host vehicle; that is, the rear target vehicle parameter group is projected into the third image according to the installation parameters of the millimeter wave radar so as to acquire a plurality of rear target vehicle parameter points.
For example, the normal of the millimeter wave radar coincides with the Y axis of the physical world coordinate system of the host vehicle, the distance from the starting point of the normal of the millimeter wave radar to the origin of the physical world coordinate system of the host vehicle is-2 m, the distance of a rear target vehicle identified by the millimeter wave radar is 10m, the relative speed is 2m/s, and the azimuth angle is 30 ° (i.e., the included angle between the connecting line of the target vehicle and the origin and the Y axis), and the X, Y coordinates of the target vehicle to the origin of the physical world coordinate system of the host vehicle are (10m sin30 °, -2m-10m cos30 °), i.e., (5m, -10.66m), that is, the parameter point (5m, -10.66m) of the rear target vehicle is obtained by projecting the target vehicle parameter set of the millimeter wave radar into the third image according to the installation parameters of the millimeter wave radar.
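The worked example above can be reproduced numerically. This is a sketch of only the stated projection; the function name and default offset are illustrative:

```python
import math

# Project a rear radar target (distance, azimuth) into the host vehicle's
# physical world coordinate system, given that the radar normal coincides
# with the Y axis and its starting point lies 2 m behind the origin.
def project_rear_target(distance_m: float, azimuth_deg: float,
                        radar_y_offset_m: float = -2.0) -> tuple:
    a = math.radians(azimuth_deg)
    x = distance_m * math.sin(a)
    y = radar_y_offset_m - distance_m * math.cos(a)  # target is behind
    return x, y

x, y = project_rear_target(10.0, 30.0)
print(round(x, 2), round(y, 2))  # 5.0 -10.66
```

The result (5 m, -10.66 m) matches the parameter point computed in the text for a target at 10 m distance and 30 degrees azimuth.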
And S108, identifying the rear target vehicles according to all the rear vehicle identification ranges and the plurality of rear target vehicle parameter points.
Specifically, since the rear target vehicle parameter group of the millimeter wave radar is projected into the third image according to the installation parameters of the millimeter wave radar to acquire a number of rear target vehicle parameter points, the target vehicles whose parameter points fall within the corresponding rear vehicle identification ranges are marked as rear target vehicles.
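A minimal sketch of this membership test, under the assumption that each rear vehicle identification range reduces to an interval of lateral x positions in the top-view third image (the tuple representation and lane widths are illustrative):

```python
# A rear vehicle identification range is modeled as the strip between two
# adjacent lane-line x positions; a radar parameter point is marked as a
# rear target vehicle when its x coordinate falls inside one of the strips.
def match_rear_targets(ranges, points):
    """ranges: list of (x_left, x_right); points: list of (x, y) in metres."""
    matched = []
    for x, y in points:
        for x_left, x_right in ranges:
            if x_left <= x <= x_right:
                matched.append((x, y))
                break
    return matched

lanes = [(-5.25, -1.75), (-1.75, 1.75), (1.75, 5.25)]   # three 3.5 m lanes
pts = [(5.0, -10.66), (0.2, -15.0), (9.0, -20.0)]
print(match_rear_targets(lanes, pts))  # [(5.0, -10.66), (0.2, -15.0)]
```

The third point lies outside all three lane strips, so it is discarded, which is exactly how spurious radar returns outside the identification ranges are filtered.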
And S109, acquiring tunnel information and speed limit information.
It is understood that the tunnel information and the speed limit information may be tunnel entrance information and its corresponding speed limit information, or tunnel exit information and its corresponding speed limit information.
Therefore, there are various ways to obtain the tunnel information and the speed limit information, and the tunnel information and the speed limit information can be selectively set according to the actual application requirements, for example, as follows:
in a first example, tunnel entrance information and speed limit information are acquired from a navigation system.
In a second example, tunnel exit information and speed limit information are obtained from a navigation system.
In a third example, tunnel entrance information and speed limit information are identified from the first image and the second image.
In a fourth example, tunnel exit information and speed limit information are identified from the first image and the second image.
And S110, changing the setting of the cruising speed upper limit and the cruising safety distance of the main vehicle according to the tunnel information and the speed limit information.
It is understood that the settings of the cruising speed upper limit and cruising safety distance of the subject vehicle are changed differently for different tunnel information.
As an example, when the tunnel information is tunnel exit information, the settings of the cruising speed upper limit and cruising safety distance of the subject vehicle are changed according to the tunnel exit information, speed limit information, and user setting information.
And S111, adjusting the front 3D camera according to the tunnel information, and performing in-tunnel cruise control on the motion parameters of the main vehicle according to the motion parameters of the front target vehicle and the motion parameters of the rear target vehicle.
It will be appreciated that the focal length of the front 3D camera is adjusted differently for different tunnel information, for example as follows:
In a first example, when the tunnel information is tunnel entrance information, the focal length of the front 3D camera is reduced.
In a second example, when the tunnel information is tunnel exit information, the focal length of the front 3D camera is increased.
And further, performing in-tunnel cruise control on the motion parameters of the subject vehicle according to the motion parameters of the front target vehicle, the motion parameters of the rear target vehicle, and the turn signal.
Therefore, in one embodiment of the present invention, the front target vehicle range may also be generated from the front target vehicle, and the front target vehicle range may be mapped into the first image according to the interleaved mapping relationship between the first image and the second image to generate the front vehicle lamp recognition area.
It can be understood that driving safety is related to the driving state of the front target vehicle. For example, when the front target vehicle is moving straight, the subject vehicle can drive normally, but if the front target vehicle suddenly decelerates and changes lanes, the subject vehicle needs to brake to avoid a rear-end collision. Since the driving state of the front target vehicle is reflected by its vehicle lamps, in the present embodiment, in order to observe the lamps of the front target vehicle, it is necessary to determine the front vehicle lamp recognition area.
Specifically, since the front vehicle lamp identification area is located within the front target vehicle range, the front target vehicle range is first generated according to the front target vehicle. Owing to the interleaving mapping relationship between the first image and the second image, the row-column coordinates of each pixel of the front target vehicle range in the second image can be proportionally adjusted to determine the row-column coordinates of at least one pixel in the first image, and the imaging of the lamps of the target vehicle is included in the corresponding front target vehicle range, so that the front vehicle lamp identification area is generated in the first image.
In practical applications, the manner of generating the range of the front target vehicle according to the front target vehicle is different according to different application scenarios, and the following examples are given:
the first example:
and generating a front target vehicle range according to a closed area defined by the target boundary of the front target vehicle.
In this example, as a possible implementation manner, a boundary detection method (e.g., a boundary detection method such as Canny, Sobel, Laplace, etc.) in an image processing algorithm is adopted to detect a target boundary of a front target vehicle for recognition.
In the depth image, the depth sub-image formed by reflecting the light on the back or front of the same target vehicle to the TOF sensor contains consistent distance information, so that the distance information of the target vehicle can be acquired by only identifying the position of the depth sub-image formed by the target vehicle in the depth image. Wherein a sub-image refers to a combination of a part of the pixels of an image.
The depth sub-image formed by reflecting light on the back or front of the same target vehicle to the TOF sensor contains consistent distance information, and the depth sub-image formed by reflecting light on the road surface to the TOF sensor contains continuously-changed distance information, so that the depth sub-image containing the consistent distance information and the depth sub-image containing the continuously-changed distance information necessarily form abrupt differences at the junction of the two, and the junction of the abrupt differences forms a target boundary of the target vehicle in the depth image.
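The abrupt depth difference at the junction between a constant-distance vehicle sub-image and the continuously varying road-surface depths can be sketched as a simple gradient test on one scan line. This is a minimal illustration, not the patent's boundary detector; the jump threshold and depth values are assumptions.

```python
import numpy as np

def depth_boundary_columns(depth_row, jump_threshold=1.0):
    """Return indices where adjacent depth samples differ abruptly,
    i.e. candidate target-boundary positions in one depth-image row."""
    diffs = np.abs(np.diff(depth_row))
    return np.nonzero(diffs > jump_threshold)[0] + 1

# One synthetic scan line: road depths ramp smoothly, then the back of a
# vehicle at a constant 12 m produces an abrupt jump on each side.
row = np.array([20.0, 19.8, 19.6, 12.0, 12.0, 12.0, 18.8, 18.6])
print(depth_boundary_columns(row))  # [3 6]
```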
The second example is:
a forward target vehicle range is generated from an extended enclosed area of the target boundary of the forward target vehicle.
In this example, as a possible implementation manner, a boundary detection method in an image processing algorithm is adopted to detect the target boundary of the front target vehicle for identification.
The third example:
and generating a front target vehicle range according to a closed area formed by connecting a plurality of pixel positions of the front target vehicle.
Since the vehicle identification range is determined by all pixel positions of the lane line, detecting the target boundary of the target vehicle within the vehicle identification range reduces boundary interference formed by road facilities such as median strips, light poles and bollards.
Further, the turn lights of the corresponding front target vehicle are identified according to the front lamp identification area.
Specifically, after the front vehicle lamp identification area is acquired, in order to accurately know the specific driving state of the front target vehicle, the turn signal of the corresponding front target vehicle is identified according to the front vehicle lamp identification area.
It should be noted that, according to different specific application requirements, the manner of identifying the turn signal of the corresponding front target vehicle according to the front vehicle light identification area is different.
As one possible implementation, the turn signal of the corresponding preceding target vehicle is identified according to the color, the flashing frequency or the flashing sequence of the tail light in the preceding vehicle light identification area.
In this embodiment, both the longitudinal displacement and the lateral displacement of the front target vehicle are small at the initial stage of a lane change, which means that the size change of the vehicle lamp identification area of the front target vehicle is also small, and only the brightness of the image formed at the turn signal changes significantly due to flickering.
Therefore, a time-differentiated vehicle light recognition area sub-image of the front target vehicle is created by continuously acquiring a plurality of color or brightness images at different time instants and performing time-differentiation processing on the vehicle light recognition area of the front target vehicle. The time differentiated vehicle light identification area sub-images will highlight the continuously flashing vehicle light sub-images of the preceding target vehicle.
Then, the time-differential vehicle lamp identification area sub-image is projected onto the column coordinate axis, and a one-dimensional search is performed to obtain the starting and ending column coordinate positions of the vehicle lamp sub-image of the target vehicle. These positions are projected back onto the time-differential vehicle lamp identification area sub-image to search for the starting and ending row coordinate positions of the vehicle lamp sub-image. The starting and ending row and column coordinate positions of the vehicle lamp sub-image are then projected onto the plurality of color or brightness images at different moments to confirm the color, flashing frequency or flashing sequence of the vehicle lamp of the front target vehicle, thereby determining the row and column coordinate positions of the flashing vehicle lamp sub-image.
Further, when the row and column coordinate positions of the flickering vehicle lamp sub-image are only on the left side of the vehicle lamp identification area of the front target vehicle, it can be determined that the front target vehicle has turned on its left turn signal; when they are only on the right side, that it has turned on its right turn signal; and when they are on both sides, that it has turned on its double-flash warning lamps.
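The time-differentiation, column projection, and left/right/both-sides decision described above can be condensed into a small sketch. This is an illustrative simplification under stated assumptions (equally sized crops, a hypothetical half-maximum flicker threshold), not the patent's algorithm.

```python
import numpy as np

def classify_turn_signal(frames):
    """Classify a flashing lamp from a stack of brightness crops of the
    front vehicle lamp identification area.
    frames: (T, H, W) array of equally sized brightness images."""
    frames = np.asarray(frames, dtype=float)
    # Time-differential image: accumulated absolute frame-to-frame change.
    diff = np.abs(np.diff(frames, axis=0)).sum(axis=0)
    # Project onto the column axis and look for flicker in each half.
    col_profile = diff.sum(axis=0)
    w = col_profile.size
    left = col_profile[: w // 2].max()
    right = col_profile[w // 2:].max()
    thresh = 0.5 * max(left, right)  # hypothetical decision threshold
    if left > thresh and right > thresh:
        return "hazard"              # both sides flash: double-flash warning
    return "left" if left > right else "right"

# Synthetic example: only a right-side lamp region alternates on and off.
frames = np.zeros((6, 4, 10))
frames[::2, :, 8] = 1.0
print(classify_turn_signal(frames))  # right
```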
In addition, during a lane change of the front target vehicle, its longitudinal or lateral displacement is large, so the size of its vehicle lamp identification area also changes considerably. Therefore, longitudinal or lateral displacement compensation can be performed on the several vehicle lamp identification areas of the front target vehicle continuously acquired at different moments, scaling them into vehicle lamp identification areas of the same size. Time differentiation is then performed on the adjusted vehicle lamp identification areas to create time-differential vehicle lamp identification area sub-images of the front target vehicle. These are projected onto the column coordinate axis, and a one-dimensional search is performed to acquire the starting and ending column coordinate positions of the lamps of the front target vehicle; these positions are projected back onto the time-differential sub-images to search for the starting and ending row coordinate positions of the vehicle lamp sub-image. Finally, the starting and ending row and column coordinate positions of the vehicle lamp sub-image are projected onto the plurality of color or brightness images at different moments to confirm the color, flashing frequency or flashing sequence of the lamps of the front target vehicle, thereby determining the row and column coordinate positions of the flashing vehicle lamp sub-image and completing the identification of the left turn signal, right turn signal or double-flash warning lamps.
Examples are as follows:
in a first example, the condition in which a front target vehicle in a non-own lane decelerates and changes into the own lane is recognized according to the motion parameters and turn signal of the front target vehicle, so that the motion parameter control system of the subject vehicle performs braking adjustment in the tunnel in advance and the vehicle lamp system of the subject vehicle alerts the rear target vehicle. In this way, the motion parameter control system and vehicle lamp system of the subject vehicle can be adjusted earlier to alert the rear target vehicle, giving it more braking or adjustment time, effectively reducing the rear-end collision risk and improving the driving safety of the subject vehicle and its passengers.
In a second example, the condition in which a front target vehicle in the own lane decelerates and changes into a non-own lane is recognized according to the motion parameters and turn signal of the front target vehicle, so that the motion parameter control system of the subject vehicle does not perform braking adjustment in the tunnel. In this way, the motion parameter control system of the subject vehicle can reduce unnecessary braking adjustments, thereby reducing the risk of rear-end collisions caused by unnecessary braking of the subject vehicle.
In summary, the automatic control method for vehicle driving according to the embodiment of the present invention acquires a first image and a second image of the environment in front of the subject vehicle from the front 3D camera, acquires the front road lane line, acquires a third image and the rear road lane line according to the imaging parameters of the first image and the front road lane line, maps the front road lane line into the second image according to the interleaved mapping relationship between the first image and the second image to generate a plurality of front vehicle recognition ranges, recognizes the front target vehicle according to the front vehicle recognition ranges, generates a plurality of rear vehicle recognition ranges according to the third image and the rear road lane line, acquires a rear target vehicle parameter set from the millimeter wave radar, projects the rear target vehicle parameter set into the third image according to the installation parameters of the millimeter wave radar to acquire a plurality of rear target vehicle parameter points and thereby recognize the rear target vehicle, and finally acquires tunnel information and speed limit information, changes the settings of the cruising speed upper limit and cruising safety distance of the subject vehicle according to the tunnel information and speed limit information, adjusts the focal length of the front 3D camera according to the tunnel information, and performs in-tunnel cruise control on the motion parameters of the subject vehicle according to the motion parameters of the front target vehicle. In this way, the subject vehicle can control its braking according to the environmental information of the highway lane lines, reducing unnecessary braking adjustments, effectively reducing the risk of rear-end collisions, and improving the driving safety of the subject vehicle in the tunnel.
Based on the above description, it should be noted that, according to different application scenarios, different techniques may be adopted to identify the front road lane line according to the brightness difference between the front road lane line and the road surface in the first image. The following description is made with reference to specific examples.
Fig. 2 is a flowchart of a vehicle running automatic control method according to a second embodiment of the present invention, and as shown in fig. 2, the step S102 includes:
S201, creating a binary image of the front road lane line according to the brightness information of the first image and a preset brightness threshold value.
Specifically, in real life, highway lane lines include both solid lane lines and dashed lane lines; for convenience of description, the recognition of solid lane lines is described first.
Specifically, a brightness threshold is preset by using the brightness difference between the highway lane line and the road surface in the first image, where the preset brightness threshold can be found by a histogram-statistics bimodal algorithm.
Furthermore, a binary image highlighting the highway lane line is created using the preset brightness threshold and the brightness image. The brightness image can be further divided into a plurality of brightness sub-images, the histogram-statistics bimodal algorithm is executed on each brightness sub-image to find a plurality of brightness thresholds, these thresholds and their corresponding brightness sub-images are used to create binary sub-images highlighting the highway lane line, and the binary sub-images are combined to create the complete binary image highlighting the highway lane line, so that brightness variations of the road surface or the lane line can be accommodated.
The specific implementation steps of finding the luminance threshold and creating the binary image of the highway lane line may be obtained by those skilled in the art based on the prior art, and are not described herein again.
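As a rough illustration of the histogram-statistics bimodal idea (threshold at the valley between the road-surface mode and the bright lane-marking mode), the following sketch uses assumed synthetic brightness data; the peak-suppression window and all values are illustrative, not from the patent.

```python
import numpy as np

def bimodal_threshold(gray, bins=256):
    """Find a binarization threshold at the valley between the two
    dominant peaks of the brightness histogram (simplified sketch)."""
    hist, edges = np.histogram(gray, bins=bins, range=(0, 256))
    p1 = int(np.argmax(hist))          # strongest mode (e.g. road surface)
    rest = hist.copy()
    # Suppress a neighborhood around the first peak so the second peak
    # found is a genuinely separate mode (window size is an assumption).
    rest[max(0, p1 - 10):min(bins, p1 + 10)] = 0
    p2 = int(np.argmax(rest))          # second mode (e.g. lane marking)
    a, b = sorted((p1, p2))
    valley = a + int(np.argmin(hist[a:b + 1]))  # lowest bin between peaks
    return edges[valley]

rng = np.random.default_rng(0)
road = rng.normal(60, 5, 5000).clip(0, 255)      # dark road-surface mode
marking = rng.normal(200, 5, 1000).clip(0, 255)  # bright lane-line mode
img = np.concatenate([road, marking])
t = bimodal_threshold(img)
mask = img > t                         # binary image of the lane line
print(t, mask.sum())
```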
S202, detecting all edge pixel positions of the straight-line lane line or all edge pixel positions of the curve solid-line lane line in the binary image according to a preset detection algorithm.
Specifically, after the binary image of the front road lane line is obtained, note that because the curvature radius of a road lane line cannot be too small, and because the camera projection principle means that the nearby portion of the lane line occupies more imaging pixels, the pixels of a curved solid lane line that are arranged approximately in a straight line in the brightness image still account for most of the imaging pixels of the solid lane line.
Therefore, all edge pixel positions of the solid line lane line of the straight road or most initial straight line edge pixel positions of the solid line lane line of the curved road can be detected in the binary image of the prominent highway lane line by using a preset detection algorithm, such as a straight line detection algorithm like a Hough transform algorithm.
Of course, if the filtering process is not performed, the straight line detection also detects most straight line edge pixel positions of the isolation belt and the telegraph pole in the binary image. The slope range of the lane line in the binary image can be set according to the length-width ratio of the image sensor, the focal length of the camera lens, the road width range of the road design specification and the installation position of the image sensor on the main vehicle, so that the straight line of the non-lane line is filtered and eliminated according to the slope range.
Since the edge pixel positions of a curved solid lane line always change continuously, the connected pixel positions adjoining the edge pixels at the two ends of the detected initial straight line are searched for and merged into the initial straight-line edge pixel set; this search-and-merge of connected pixel positions is repeated until all edge pixel positions of the curved solid lane line are uniquely determined.
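The slope-range filtering described above (rejecting non-lane-line straight lines such as median strips and poles) can be sketched as follows. The slope bounds and segment coordinates are illustrative assumptions; a real system would derive the bounds from the sensor aspect ratio, lens focal length, road-width specification and mounting position as the text describes.

```python
import math

def filter_lane_segments(segments, min_abs_slope=0.5, max_abs_slope=10.0):
    """Keep only detected line segments (e.g. from a Hough transform)
    whose slope (rows per column) lies in a range plausible for lane
    lines; near-horizontal and near-vertical clutter is rejected."""
    kept = []
    for (c0, r0, c1, r1) in segments:
        dc, dr = c1 - c0, r1 - r0
        slope = math.inf if dc == 0 else dr / dc
        if min_abs_slope <= abs(slope) <= max_abs_slope:
            kept.append((c0, r0, c1, r1))
    return kept

segments = [
    (100, 400, 180, 200),   # left lane line, slope -2.5   -> kept
    (500, 400, 420, 200),   # right lane line, slope 2.5   -> kept
    (0, 300, 600, 310),     # near-horizontal guardrail edge -> rejected
    (320, 0, 320, 480),     # vertical pole edge             -> rejected
]
print(filter_lane_segments(segments))
```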
S203, detecting all edge pixel positions of the straight road dotted line lane line or detecting all edge pixel positions of the curve dotted line lane line in the binary image according to a preset detection algorithm.
To fully explain the recognition of the front road lane line based on the brightness difference between the front road lane line and the road surface in the first image, the recognition of the dashed lane line is now described.
The straight line detection algorithm described in step S202 can also detect most of the initial straight-line edge pixel positions of a dashed lane line, and the edge pixels of the other, shorter segments belonging to the dashed lane line can be connected by extending a line through, or by searching and merging from, those initial straight-line edge pixel positions, so as to obtain all edge pixel positions of the dashed lane line. The line-extension method yields all edge pixel positions of a straight dashed lane line, while the search-and-merge method yields all edge pixel positions of a curved dashed lane line. Choosing between the two requires prior knowledge of whether the dashed lane line is straight or curved, which can of course be obtained from the detection of the solid lane line.
As an implementation manner, all edge pixel positions of the solid line lane line may be projected to an initial straight line edge pixel position of the dotted line lane line according to prior knowledge of the solid line lane line, a principle that the lane lines are parallel to each other in reality, and projection parameters of the image sensor and the camera, so as to connect the initial straight line edge pixel position of the dotted line lane line and edge pixel positions of other shorter lane lines belonging to the dotted line lane line, thereby obtaining all edge pixel positions of the dotted line lane line.
As another implementation, prior knowledge of a straight or curved road is not needed. While the vehicle cruises on a straight road, or on a curve at a constant steering angle, the lateral offset of the dashed lane line over a short continuous time is almost negligible while its longitudinal offset is large; the dashed lane line can therefore be superimposed into a solid lane line across consecutive binary images of the highlighted highway lane line taken at different moments, after which all edge pixel positions of the dashed lane line are obtained by the solid-lane-line identification method.
Since the longitudinal offset of the dashed lane line is influenced by the speed of the subject vehicle, the minimum number of consecutive binary images of the highlighted road lane line at different moments can be determined dynamically from the vehicle speed obtained from the wheel speed sensor, so that the dashed lane line is superimposed into a solid lane line and all its edge pixel positions are obtained.
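The superposition of dashed-line frames into a solid line, and the speed-dependent choice of frame count, can be sketched as below. The dash-gap length, frame interval, and both helper functions are illustrative assumptions, not values from the patent.

```python
import numpy as np

def superimpose_dashed(binary_frames):
    """Logically OR several binary lane-line images taken at different
    moments; the longitudinal offset while cruising fills the gaps of a
    dashed line so it can be processed like a solid line."""
    return np.logical_or.reduce(binary_frames).astype(np.uint8)

def min_frames_needed(speed_mps, gap_m, frame_dt_s):
    """Minimum frame count so the accumulated longitudinal travel covers
    one dash gap (assumes negligible lateral offset)."""
    per_frame = speed_mps * frame_dt_s
    return int(np.ceil(gap_m / per_frame)) + 1

# A dashed line seen as 1-pixel marks that shift longitudinally each frame.
f1 = np.array([[1, 0, 0, 0, 1, 0, 0, 0]])
f2 = np.array([[0, 1, 0, 0, 0, 1, 0, 0]])
f3 = np.array([[0, 0, 1, 0, 0, 0, 1, 0]])
f4 = np.array([[0, 0, 0, 1, 0, 0, 0, 1]])
print(superimpose_dashed([f1, f2, f3, f4]))  # all ones: a "solid" line
print(min_frames_needed(20.0, 6.0, 0.1))     # 4 frames at 72 km/h
```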
In summary, in the automatic control method for vehicle driving according to the embodiment of the present invention, a binary image of a front highway lane line is created according to the luminance information of the first image and a preset luminance threshold, all edge pixel positions of a straight solid lane line or all edge pixel positions of a curved solid lane line are detected in the binary image according to a preset detection algorithm, and all edge pixel positions of a straight dashed lane line or all edge pixel positions of a curved dashed lane line are detected in the binary image according to a preset detection algorithm. Therefore, the broken line and the solid line lane line of the straight road and the curve road in the highway lane line can be accurately identified.
It should be noted that, according to different application scenarios, different techniques may be adopted to obtain the third image and the rear highway lane line according to the imaging parameters of the first image and the front highway lane line. The following description will be made more clearly with reference to specific examples.
Fig. 3 is a flowchart of a vehicle running automatic control method according to a third embodiment of the present invention, and as shown in fig. 3, the above step S103 includes:
S301, projecting all pixel positions of the front road lane line to a physical world coordinate system of the main vehicle according to the imaging parameters of the first image to establish a third image.
And S302, acquiring the position of the front road lane line in the third image through continuous time accumulation and displacement relative to the origin of the physical world coordinate system of the host vehicle.
Specifically, the third image is created by projecting all the pixel positions of the acquired front road lane line to the physical world coordinate system of the host vehicle, and the third image may be a top view of all the pixel positions of the projected front road lane line, so that the position of the front road lane line in the third image is the position of the road lane line in front of the host vehicle relative to the origin of the physical world coordinate system of the host vehicle.
Since a front road lane line acquired at a certain time will be located behind the subject vehicle after a certain time has elapsed, the position of the rear road lane line of the subject vehicle is obtained from the position of the front road lane line in the third image through continuous time accumulation and displacement relative to the origin of the physical world coordinate system of the subject vehicle.
For example, suppose the Y-axis distance of point A of the front road lane line from the origin of the physical world coordinate system of the host vehicle at time T1 is D1 (its X-axis distance is D2), and the host vehicle travels along the Y axis at a constant speed V for a time T, so that at time T2 = T1 + T the displacement of point A relative to the origin is V × T (for example, V × T = 2 × D1). Then at time T2 the Y-axis distance of point A from the origin is D1 - V × T = -D1, that is, the position of the road lane line behind the host vehicle is obtained (the X-axis distance is still D2).
For the case of variable-speed running of the subject vehicle, the variation curve of V with respect to T may be acquired from the wheel speed sensor, and the displacement of variable-speed running may be obtained as the integral of V over T. For the case of the subject vehicle running on an arc-shaped curve, the curvature radius of the curve is calculated from the coordinates of the front road lane line in the physical world coordinate system of the subject vehicle; then, using the curvature radius and the arc displacement given by the integral of V over the running time T, the coordinates of the front road lane line relative to the origin of the physical world coordinate system of the subject vehicle after time T can be calculated, i.e., the position of the rear road lane line of the subject vehicle is obtained.
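The constant-speed example above, and the wheel-speed integration for variable-speed travel, can be sketched as follows. The function names, the rectangle-rule integration, and all numeric values are illustrative assumptions.

```python
def rear_lane_point(d1_y, d2_x, speed, elapsed):
    """Shift a front lane-line point (X=d2_x, Y=d1_y) backwards by the
    host's straight-line travel to get its position at a later time
    (constant-speed case)."""
    return (d2_x, d1_y - speed * elapsed)

def displacement_variable_speed(speed_samples, dt):
    """For variable-speed travel, integrate wheel-speed samples over
    time (simple rectangle rule)."""
    return sum(v * dt for v in speed_samples)

# Example: D1 = 15 m ahead; after travelling V*T = 2*D1 = 30 m the point
# lies 15 m behind the host (Y = -D1), X unchanged.
print(rear_lane_point(15.0, 3.5, 10.0, 3.0))        # (3.5, -15.0)
print(displacement_variable_speed([8, 10, 12], 1.0))  # 30 m
```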
In summary, according to the automatic control method for vehicle driving in the embodiment of the present invention, all pixel positions of the front road lane line are projected to the physical world coordinate system of the host vehicle according to the imaging parameters of the first image to create a third image, and the position of the front road lane line in the third image is obtained by continuous time accumulation and displacement from the origin of the physical world coordinate system of the host vehicle. Therefore, the position of the rear lane line is accurately known, the vehicle can conveniently be controlled accordingly, and a foundation is laid for ensuring driving safety.
In practical applications, whether a target vehicle is ahead of or behind the subject vehicle, it has a plurality of possible driving states and driving positions, and different driving states and positions are directly related to the specific control operations on the subject vehicle. For example, for a target vehicle in a non-own lane, as long as it remains in the non-own lane, its acceleration or deceleration has no influence on the driving safety of the subject vehicle; but once it changes lane into the own lane, the subject vehicle needs to perform a deceleration operation or the like.
The following describes in detail how to recognize the preceding target vehicle and the following target vehicle, respectively.
Fig. 4 is a flowchart of a vehicle running automatic control method according to a fourth embodiment of the present invention, and as shown in fig. 4, the above step S105 includes:
S401, marking the front own lane and the front non-own lane in all the front vehicle recognition ranges.
Specifically, according to the equal-proportion front road lane lines acquired in the second image, the slope of the initial straight line of each front road lane line is obtained by comparing the number of rows and columns occupied by the initial straight-line portion of that lane line. The front vehicle identification range created from the front road lane lines containing the initial straight lines of the two lane lines with the largest absolute value of slope is marked with the label of the front own lane, and the other created front vehicle identification ranges are marked with the labels of front non-own lanes.
Thus, the front road lane lines may be mapped into the second image according to the interleaved mapping relationship between the first image and the second image to generate a plurality of front vehicle recognition ranges in the second image, and labels of the front own lane and the front non-own lane are marked for all the front vehicle recognition ranges.
S402, the target vehicle of the front self-lane is identified according to the vehicle identification range marking the label of the front self-lane.
Specifically, after the front own-lane tag is acquired, the front own-lane target vehicle can be identified within the front own-lane identification range according to the vehicle identification range marking the front own-lane tag.
Specifically, the distance and position of a target vehicle relative to the TOF sensor always change over time, whereas the distance and position of the road surface and the median strip relative to the TOF sensor remain approximately unchanged. Therefore, a time-differential depth image can be created from two depth images acquired at different moments to detect these distance and position changes, so that the front own-lane target vehicle can actually be identified within the vehicle identification range marked with the own-lane label.
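A minimal sketch of the time-differential depth image idea follows; the change threshold and depth values are illustrative assumptions, not from the patent.

```python
import numpy as np

def moving_target_mask(depth_t0, depth_t1, min_change=0.5):
    """Time-differential depth image: pixels whose distance changed
    between two TOF frames belong to moving targets; static road
    surface and median strip cancel out."""
    return (np.abs(depth_t1 - depth_t0) > min_change).astype(np.uint8)

# Road-surface depths are unchanged; a vehicle region closes in by 1.5 m.
d0 = np.array([[30.0, 30.0, 20.0, 20.0],
               [30.0, 30.0, 20.0, 20.0]])
d1 = np.array([[30.0, 30.0, 18.5, 18.5],
               [30.0, 30.0, 18.5, 18.5]])
print(moving_target_mask(d0, d1))  # 1s mark the moving vehicle pixels
```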
And S403, identifying the front non-own-lane target vehicle according to the vehicle identification range marking the front non-own-lane label.
Specifically, after the forward non-own-lane tag is acquired, the forward non-own-lane target vehicle can be identified within the forward non-own-lane identification range according to the vehicle identification range marking the forward non-own-lane tag.
Specifically, the distance and position of a target vehicle relative to the TOF sensor always change over time, whereas the distance and position of the road surface and the median strip relative to the TOF sensor remain approximately unchanged. Therefore, a time-differential depth image can be created from two depth images acquired at different moments to detect these distance and position changes, so that the front non-own-lane target vehicle can actually be identified within the vehicle identification range marked with the non-own-lane label.
S404, the front lane-change target vehicle is identified according to the front vehicle identification ranges combined in pairs.
Specifically, since the front own-lane target vehicle and the front non-own-lane target vehicle can be recognized, the front lane-change target vehicle is recognized from the front vehicle identification ranges combined in pairs based on the same recognition method.
Specifically, as in step S402, a time-differential depth image created from two depth images acquired at different times is used to detect changes in distance and position, and the front lane-change target vehicle is then actually identified according to the front vehicle identification ranges combined in pairs.
Fig. 5 is a flowchart of a vehicle running automatic control method according to a fifth embodiment of the present invention, and as shown in fig. 5, the above step S108 includes:
S501, labels of the rear own lane and the rear non-own lanes are marked for all rear vehicle identification ranges.
Specifically, for the rear road lane lines in the third image, the slope of the initial straight-line portion of each rear road lane line is obtained by comparing the number of rows and the number of columns it occupies; the rear vehicle identification range created from the two lane lines whose initial straight lines have the largest absolute slope is marked with the rear own-lane label, and the other rear vehicle identification ranges are marked with rear non-own-lane labels.
Thus, the rear road lane lines can be mapped into the second image according to the mapping relationship between the first image and the second image, so as to generate a number of rear vehicle identification ranges in the second image, and labels of the rear own lane and the rear non-own lanes are marked for the rear vehicle identification ranges.
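The slope-based labeling described above (the two lane lines with the steepest initial straight-line portions bound the own lane) might be sketched as follows; the data structure and the sample row/column counts are assumptions for illustration:

```python
def label_lane_lines(initial_segments):
    """Label each lane line's identification range as own-lane or not.

    initial_segments: dict of line name -> (rows_occupied, cols_occupied)
    for the initial straight-line portion of that lane line in the image.
    The slope is taken as rows/cols; the two lines with the largest
    absolute slope bound the own lane, all others are non-own-lane.
    """
    slopes = {name: rows / cols
              for name, (rows, cols) in initial_segments.items()}
    ranked = sorted(slopes, key=lambda n: abs(slopes[n]), reverse=True)
    own = set(ranked[:2])
    return {name: ("own-lane" if name in own else "non-own-lane")
            for name in initial_segments}

# Own-lane lines appear steepest in the image (many rows per column).
labels = label_lane_lines({
    "L1": (40, 120),   # distant lane line, shallow slope
    "L2": (200, 60),   # own-lane left line, steep
    "L3": (210, 55),   # own-lane right line, steep
    "L4": (35, 130),
})
print(labels["L2"], labels["L4"])  # own-lane non-own-lane
```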
S502, the rear own-lane target vehicle is identified according to the vehicle identification range marked with the rear own-lane label and the rear target vehicle parameter points.
Specifically, after the rear own-lane label is acquired, the rear own-lane target vehicle can be identified within the rear own-lane identification range according to the vehicle identification range marked with the rear own-lane label.
Specifically, as in step S402, a time-differential depth image created from two depth images acquired at different times is used to detect changes in distance and position, so that the rear own-lane target vehicle can actually be identified within the vehicle identification range marked with the own-lane label.
S503, the rear non-own-lane target vehicle is identified according to the vehicle identification range marked with the rear non-own-lane label and the rear target vehicle parameter points.
Specifically, after the rear non-own-lane label is acquired, the rear non-own-lane target vehicle can be identified within the rear non-own-lane identification range according to the vehicle identification range marked with the rear non-own-lane label.
Specifically, as in step S402, a time-differential depth image created from two depth images acquired at different times is used to detect changes in distance and position, so that the rear non-own-lane target vehicle can actually be identified within the vehicle identification range marked with the non-own-lane label.
S504, the rear lane-change target vehicle is identified according to the rear vehicle identification ranges combined in pairs and the rear target vehicle parameter points.
Specifically, since the rear own-lane target vehicle and the rear non-own-lane target vehicle can be recognized, the rear lane-change target vehicle is recognized from the rear vehicle identification ranges combined in pairs based on the same recognition method.
Specifically, as in step S402, a time-differential depth image created from two depth images acquired at different times is used to detect changes in distance and position, and the rear lane-change target vehicle is then actually identified according to the rear vehicle identification ranges combined in pairs.
Of course, besides the identification method described above, other methods may be used to obtain the target vehicle. As one possible implementation, after the target boundaries of the front target vehicles are identified following step S109, the target boundaries detected in each vehicle identification range are projected onto the row coordinate axis of the image and a one-dimensional search is performed along that axis, so that the number of rows and the row-coordinate ranges occupied by the longitudinal target boundaries of all front target vehicles within the vehicle identification range, as well as the number of columns and the row-coordinate positions occupied by the transverse target boundaries, can be determined.
A longitudinal target boundary is a target boundary that occupies a large number of pixel rows and a small number of columns, and a transverse target boundary is one that occupies a small number of pixel rows and a large number of columns.
Furthermore, according to the number of columns and the row-coordinate positions occupied by all transverse target boundaries within the vehicle identification range, the column-coordinate positions of all longitudinal target boundaries (i.e. the column-coordinate start and end positions of the corresponding transverse target boundaries) are searched within the vehicle identification range, and the target boundaries of different target vehicles are distinguished on the principle that the boundaries of one vehicle contain consistent distance information, so that the positions and distance information of all front target vehicles within the vehicle identification range are determined.
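The projection and one-dimensional search over a vehicle identification range can be illustrated roughly as below; the binary boundary mask and the run-length helper are simplified assumptions (the full method also checks consistent distance information across boundaries):

```python
import numpy as np

def find_boundary_runs(profile, min_len=1):
    """One-dimensional search over a projection profile: return
    (start, length) of each contiguous run of non-zero bins."""
    runs, start = [], None
    for i, v in enumerate(profile):
        if v and start is None:
            start = i
        elif not v and start is not None:
            if i - start >= min_len:
                runs.append((start, i - start))
            start = None
    if start is not None and len(profile) - start >= min_len:
        runs.append((start, len(profile) - start))
    return runs

# Binary boundary mask of one identification range: two longitudinal
# (vertical) target boundaries at columns 3 and 8.
mask = np.zeros((10, 12), dtype=int)
mask[2:9, 3] = 1
mask[2:9, 8] = 1
col_profile = mask.sum(axis=0)        # project boundaries across rows
print(find_boundary_runs(col_profile))
```

Each detected run corresponds to a candidate longitudinal boundary; pairing runs that share consistent depth values separates the boundaries of different target vehicles.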
Therefore, detecting the target boundaries of a front target vehicle uniquely determines the position of the depth sub-image formed by that vehicle within the depth image, and thus uniquely determines the distance information of the front target vehicle.
The boundary detection method of this example can simultaneously detect a plurality of front target vehicles and their distance information, and can further identify the front own-lane target vehicle within the vehicle identification range marked with the own-lane label, the front non-own-lane target vehicle within the vehicle identification range marked with a non-own-lane label, and the front lane-change target vehicle within the vehicle identification ranges combined in pairs.
Based on the same principle, the rear target vehicle can also be identified, and the details are not repeated herein.
As another implementation of acquiring the rear target vehicles, after a plurality of rear target vehicle parameter points are acquired in step S107, a target vehicle whose rear target vehicle parameter points fall within the vehicle identification range marked with the rear own-lane label is marked as a rear own-lane target vehicle, and a target vehicle whose rear target vehicle parameter points fall within a vehicle identification range marked with a rear non-own-lane label is marked as a rear non-own-lane target vehicle.
In practical applications, a millimeter-wave radar with a wider operating bandwidth has a higher ranging resolution and therefore acquires more target vehicle parameter points for the same rear target vehicle. These parameter points can be grouped into a target vehicle parameter set by a clustering method, and the set can form the contour of the same rear target vehicle.
For example, all target vehicle parameter points may first be divided into several initial sets by their relative speed parameter; then, for each initial set, the two-dimensional contour of all parameter points is clustered according to the X and Y coordinates in the physical world coordinate system to obtain the parameter set of a single target vehicle, for example using a clustering method familiar to those skilled in the art such as k-means. Finally, a target vehicle whose rear target vehicle parameter set falls within the vehicle identification range marked with the rear own-lane label is marked as a rear own-lane target vehicle, a target vehicle whose parameter set falls within a vehicle identification range marked with a rear non-own-lane label is marked as a rear non-own-lane target vehicle, and a target vehicle whose parameter set falls within two or more adjacent rear vehicle identification ranges is marked as a rear lane-change target vehicle.
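A minimal sketch of the two-stage grouping (by relative speed, then by spatial proximity) is shown below; a greedy distance-threshold pass stands in for the k-means step mentioned above, and the tolerances and sample points are assumptions:

```python
import numpy as np

def cluster_radar_points(points, speed_tol=0.5, xy_tol=2.5):
    """Greedy two-stage clustering of millimetre-wave radar returns.

    points: array of rows (x, y, relative_speed). A point joins an
    existing cluster only if its relative speed and X-Y position are
    both close to that cluster's first point, so the scattering points
    off one rear target vehicle collapse into one parameter set.
    """
    clusters = []
    for p in points:
        for c in clusters:
            ref = c[0]
            if (abs(p[2] - ref[2]) <= speed_tol
                    and np.hypot(p[0] - ref[0], p[1] - ref[1]) <= xy_tol):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

pts = np.array([
    [0.0, 20.0, -3.0], [0.5, 20.5, -3.1], [0.2, 21.0, -2.9],  # vehicle A
    [3.5, 35.0,  1.0], [3.8, 35.5,  1.1],                      # vehicle B
])
print(len(cluster_radar_points(pts)))  # 2 distinct rear target vehicles
```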
In summary, the vehicle running automatic control method according to the embodiment of the invention accurately identifies the front and rear target vehicles, so that the driving of the subject vehicle can be controlled according to the motion parameters and turn signals of the front and rear target vehicles, providing a guarantee of driving safety.
Based on the above embodiments, in order to describe the driving control of the host vehicle in a tunnel more clearly, the control implementation processes for different working conditions of the host vehicle are described below for the cases where the tunnel information is tunnel entrance information and tunnel exit information, respectively.
Fig. 6 is a flowchart of a vehicle running automatic control method according to a sixth embodiment of the present invention, and as shown in fig. 6, the above step S113 includes:
S601, when the tunnel information is tunnel entrance information, deceleration control is executed after the settings of the cruise vehicle speed upper limit and the cruise safe distance of the host vehicle are changed according to the tunnel information and the speed limit information.
Specifically, before the subject vehicle enters a tunnel, the vehicle navigation system can generally provide the distance from the current subject vehicle position to the tunnel entrance, the speed limit information of the tunnel, and the distance from the current subject vehicle position to the speed limit sign. The cruising speed of the host vehicle before entering the tunnel is usually higher than the tunnel speed limit; that is, the vehicle speed needs to be controlled before entering the tunnel.
For example, tunnel entrance information and speed limit information may be acquired from the navigation system periodically through the bus system of the subject vehicle, and the difference between the current vehicle speed and the tunnel speed limit is calculated. If the difference is positive, the comfortable coasting distance for the subject vehicle to decelerate to the tunnel speed limit is calculated; when the distance from the subject vehicle to the tunnel entrance or the speed limit sign reaches, for example, 1.2 times the comfortable coasting distance, the setting of the cruise vehicle speed upper limit (i.e. the tunnel speed limit value) and the setting of the cruise safe distance (e.g. updating it to half its original value) may be updated, and comfortable coasting deceleration may be performed by reducing the power output.
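The coasting calculation can be illustrated with the kinematic relation v² = v0² − 2ad. The 1.2 trigger factor follows the example above; the comfortable coasting deceleration of 0.7 m/s² and the speed values are assumed for illustration:

```python
def coasting_plan(v_now_kmh, v_limit_kmh, decel_ms2=0.7, margin=1.2):
    """Comfortable coasting distance needed to slow from the current
    speed to the tunnel speed limit, plus the trigger distance at which
    the cruise settings should be updated (margin * coasting distance).

    decel_ms2 is an assumed comfortable coasting deceleration.
    Returns None if the vehicle is already at or below the limit.
    """
    v0 = v_now_kmh / 3.6
    v1 = v_limit_kmh / 3.6
    if v0 <= v1:
        return None                       # no deceleration needed
    coast = (v0 ** 2 - v1 ** 2) / (2.0 * decel_ms2)
    return coast, margin * coast

result = coasting_plan(120.0, 80.0)       # e.g. 120 km/h toward an 80 km/h tunnel
coast, trigger = result
print(round(coast, 1), round(trigger, 1))
```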
It should be noted that, as another implementation form, it is also possible to identify tunnel entrance information and speed limit information based on a first image and a second image, which are a color image and a depth image, respectively, change the settings of the cruise vehicle speed upper limit and the cruise safe distance according to the identified tunnel entrance information and speed limit information, and perform necessary deceleration control.
For example, the entrance of a highway tunnel is a cross section of the tunnel which always intersects the lane and the lane lines. The lane lines inside the tunnel entrance image faintly and indistinctly in the first image, while the lane lines outside the tunnel image clearly in the first image, so the farthest pixel position at which lane-line imaging can be identified in the first image corresponds to the imaging position of the tunnel entrance in the first image.
The imaging of the tunnel entrance in the first image is affected by illumination and brightness, but the depth imaging in the second image is not. The road lane lines identified in the first image are therefore mapped into the second image to generate a number of vehicle identification ranges in the second image, and a depth value at the tunnel entrance, e.g. A (the distance from the host vehicle to the tunnel entrance), is obtained according to the farthest pixel position of the vehicle identification range; the tunnel entrance (the tunnel cross section) has an approximately uniform depth value.
For example, the complete shape of the tunnel entrance cross section can be obtained by taking the pixel positions whose depth values fall within A ± 1 m, i.e. the shape formed by the depth pixels of the tunnel outer wall around the entrance. Within the A ± 1 m depth range the tunnel opening itself produces no reflection, so the depth pixels of the tunnel outer wall form a strong contrast with the pixel positions of the entrance opening; the entrance-opening pixel positions can therefore be easily extracted to determine the height, width, and shape of the tunnel entrance, thereby identifying the tunnel entrance information.
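The entrance-extraction idea (wall pixels within A ± 1 m contrasting with a no-reflection opening) can be sketched as follows; coding no-return pixels as zero depth, the pixel-to-metre scale, and the synthetic frame are all assumptions for illustration:

```python
import numpy as np

def tunnel_opening_size(depth, entrance_depth_m, tol_m=1.0, px_per_m=10):
    """Extract the tunnel entrance opening from a depth image.

    Wall pixels around the entrance lie within entrance_depth_m +/- tol_m;
    the opening itself returns no reflection (coded 0 here). The opening's
    bounding box gives the entrance height and width in metres, using an
    assumed depth-camera scale px_per_m at that range.
    """
    wall = np.abs(depth - entrance_depth_m) <= tol_m
    hole = depth == 0                     # no return from inside the tunnel
    rows, cols = np.nonzero(hole)
    if rows.size == 0 or not wall.any():
        return None
    height_px = rows.max() - rows.min() + 1
    width_px = cols.max() - cols.min() + 1
    return height_px / px_per_m, width_px / px_per_m

# Synthetic frame: wall at ~60 m with a 5x8-pixel opening (no reflection).
frame = np.full((20, 20), 60.0)
frame[10:15, 6:14] = 0.0
print(tunnel_opening_size(frame, 60.0))   # (0.5, 0.8)
```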
It is understood that, because the outer circle of the speed limit sign is red, the speed limit information is suitably recognized using the first image, i.e. the color image. For example, most non-red image information is filtered out by red chroma in the first image; the pixel position of the speed limit sign is then determined in the filtered red image by a circular or elliptical Hough transform algorithm familiar to those skilled in the art. This pixel position can be projected into the second image to determine the depth value, i.e. the distance, from the speed limit sign to the host vehicle, and finally the speed limit value inside the red outer circle is recognized by a digit template matching method familiar to those skilled in the art according to the pixel position of the sign, so that the speed limit information is identified.
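The red-chroma filtering step might look like the sketch below; the channel-ratio thresholds and synthetic image are assumptions, and a simple pixel mask stands in for the subsequent circular/elliptical Hough transform and digit template matching:

```python
import numpy as np

def red_chroma_mask(rgb, min_ratio=1.5, min_red=100):
    """Keep pixels whose red channel dominates green and blue; everything
    else (road, sky, tunnel wall) is filtered out before the circle
    search. In the full method a circular/elliptical Hough transform
    would then locate the sign's red outer ring in this mask."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    return (r >= min_red) & (r >= min_ratio * g) & (r >= min_ratio * b)

img = np.zeros((6, 6, 3), dtype=np.uint8)
img[:, :] = (90, 90, 90)           # grey background
img[2:4, 2:4] = (200, 40, 40)      # red ring fragment of the sign
mask = red_chroma_mask(img)
print(int(mask.sum()))             # 4 red pixels retained
```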
Further, for example, the cruise system calculates the difference between the current vehicle speed and the identified tunnel speed limit; if the difference is positive, the comfortable coasting distance for the host vehicle to decelerate to the tunnel speed limit is calculated. When the distance from the host vehicle to the speed limit sign reaches, for example, 1.2 times this comfortable coasting distance, the cruise system may update the setting of the cruise vehicle speed upper limit (i.e. to the tunnel speed limit value) and the setting of the cruise safe distance (e.g. to half its original value), and reduce the power output (or start braking energy recovery on an electric vehicle) to perform comfortable coasting deceleration.
S602, when the tunnel information is tunnel entrance information, the focal length of the front 3D camera is reduced, and the condition in which a front non-own-lane target vehicle decelerates and changes lanes into the own lane is identified according to the motion parameters and turn signal of the front target vehicle, so that the motion parameter control system of the host vehicle performs braking adjustment in the tunnel in advance and the lamp system of the host vehicle alerts the rear target vehicle.
It can be understood that the host vehicle usually runs at a medium or low speed in the tunnel, and the road environment near the host vehicle then has a greater influence on host vehicle control, so the focal length of the front 3D camera is reduced to obtain a wider-angle image of the environment near the front of the host vehicle. For example, the focal length reduction can be realized by an electrically focused lens, and the light source assisting 3D camera imaging can be switched to a wide low-beam illumination type matched with the 3D camera, further improving driving safety.
It should be noted that, with the left and right lane lines of the front own lane as the reference, the lane change of a front target vehicle can be identified accurately whether it occurs on a straight lane or on a curve, and whether the vehicle changes lanes to the left or to the right, thereby providing an accurate motion control basis for the adaptive cruise system of the subject vehicle.
It should be noted that the azimuth identification error of a millimeter-wave radar is generally ± 0.5 degrees. In a tunnel, because the lane is generally narrow and the target vehicle is relatively close to the host vehicle, the width of a front target vehicle spans a wide range of radar azimuth; that is, the same front target vehicle produces a plurality of scattering points, and the fluctuation of the radar cross section of these scattering points worsens the radar's azimuth identification error. As a result, the millimeter-wave radar cannot accurately determine whether the front target vehicle is in the own lane or a non-own lane of the host vehicle and thus cannot provide an accurate motion control basis for the vehicle adaptive cruise system; this may cause unnecessary variable-speed cruising that reduces the traveling economy of the host vehicle, or unnecessary sudden braking that reduces its traveling safety.
Therefore, in the embodiment of the invention, the focal length of the front 3D camera is reduced, and in-tunnel cruise control is performed on the motion parameters of the host vehicle according to the recognized front and rear target vehicles and their tail turn signals.
Fig. 7 is a flowchart of a vehicle running automatic control method according to a seventh embodiment of the present invention, and as shown in fig. 7, the above step S113 includes:
S701, tunnel exit information and speed limit information are identified according to the first image and the second image, and the settings of the cruise vehicle speed upper limit and the cruise safe distance of the host vehicle are changed according to the tunnel exit information, the speed limit information, and the user setting information.
Specifically, tunnel exit information and speed limit information are identified from the first image and the second image. Based on the above description, the exit of a highway tunnel is a cross section of the tunnel which always intersects the lane and the lane lines; the lane lines inside the tunnel exit image clearly in the first image, while the lane lines outside the tunnel are usually overexposed and indistinct in the first image, so identifying the farthest pixel position of lane-line imaging in the first image is equivalent to identifying the imaging position of the tunnel exit in the first image.
As in the vehicle identification method described above, the road lane lines identified in the first image are mapped into the second image to generate a number of vehicle identification ranges in the second image, and a depth value at the tunnel exit, e.g. B (i.e. the distance from the host vehicle to the tunnel exit), is obtained according to the farthest pixel position of the vehicle identification range; the tunnel exit (the tunnel cross section) has an approximately uniform depth value.
For example, the complete shape of the tunnel exit cross section can be obtained by taking the pixel positions whose depth values fall within B ± 0.5 m, i.e. the shape formed by the depth pixels of the tunnel inner wall around the exit. Within the B ± 0.5 m depth range the exit opening itself produces no reflection, so the depth pixels of the tunnel inner wall form a strong contrast with the pixel positions of the exit opening; the exit-opening pixel positions can therefore be easily extracted to determine the height, width, and shape of the tunnel exit, thereby identifying the tunnel exit information.
It should be noted that the identification of the speed limit information has been described above and is not repeated here.
It should be noted that the cruising speed of the subject vehicle after exiting the tunnel is usually higher than the tunnel speed limit; that is, the speed of the subject vehicle after exiting the tunnel can be controlled according to the settings of the subject vehicle's user.
S702, the focal length of the front 3D camera is increased, and the condition in which a front own-lane target vehicle decelerates and changes lanes to a front non-own lane is identified according to the motion parameters and turn signal of the front target vehicle, so that the motion parameter control system of the host vehicle does not perform unnecessary braking adjustment.
Specifically, the motion parameters of the host vehicle are controlled according to the identified motion parameters of the front own-lane target vehicle and its tail turn signal, so that the running economy of the host vehicle is improved.
It can be understood that, after the host vehicle exits the tunnel, the road environment far ahead of the host vehicle has a greater influence on host vehicle control, so the focal length of the front 3D camera is increased to acquire imaging details of the environment far ahead. For example, the focal length can be increased by an electrically focused lens, and the light source assisting 3D camera imaging can be switched to a high-beam illumination type matched with the 3D camera.
Similarly, when the vehicle is running on a road outside the tunnel, the adaptive cruise system obtains good running economy by maintaining a constant-speed cruise condition; the more often it switches to a variable-speed cruise condition, the poorer the running economy. For example, while a front target vehicle in the own lane is decelerating and changing lanes to an emergency stop lane or a ramp, the constant-speed cruise of a conventional vehicle adaptive cruise system is interrupted: the subject vehicle first decelerates and then accelerates again after the front vehicle exits its lane, resulting in uneconomical variable-speed cruising.
Therefore, the continuous process of the front own-lane target vehicle from turning on its turn signal to completing the lane change to a non-own lane can be recognized and monitored without an integrated navigation system; the duration, distance, and lateral displacement of the front target vehicle over this continuous lane change are easy to monitor, and these motion parameters of the target vehicle can be used to control the motion parameters of the host vehicle so as to reduce unnecessary variable-speed cruising.
For example, when the right turn signal of the front target vehicle is recognized as turned on, the pixel distance from the left target boundary of the front target vehicle to the left lane line of the own lane is converted through the camera projection relationship and determined as the lateral distance P. N first images and N second images at different times are then continuously acquired (the time to acquire one first image or one second image is T), and the change of the target vehicle's distance R is identified and recorded during this period. When it is recognized that the front target vehicle has just completed the lane change to the non-own lane on the right side, the left target boundary of the front target vehicle coincides with the right lane line of the own lane; the width of the own lane is D, so the motion parameters of the front target vehicle over the continuous lane change are a duration of N × T, a distance R to the host vehicle, and a lateral displacement of (D - P).
Therefore, according to the distance R identified during the lane change of the front target vehicle, the adaptive cruise system of the host vehicle can keep cruising at a constant speed as long as R remains greater than the set safe cruise braking distance. Even if the front target vehicle is recognized as having just completed the lane change to the non-own lane on the right side while R is less than the safe cruise braking distance (its left target boundary coinciding with the right lane line of the own lane), the adaptive cruise system may merely reduce the power output, wait briefly until the target vehicle is recognized to continue displacing to the right, and then restore the power output to keep cruising at a constant speed.
In summary, the vehicle running automatic control method according to the embodiment of the present invention can acquire tunnel entrance information and speed limit information from the navigation system, change the settings of the cruise vehicle speed upper limit and the cruise safe distance accordingly, execute the necessary deceleration control, and reduce the focal length of the front 3D camera so as to perform in-tunnel cruise control on the motion parameters of the host vehicle according to the motion parameters and turn signals of the front target vehicle. It can also recognize tunnel exit information and speed limit information based on the first image and the second image, and change the settings of the cruise vehicle speed upper limit and the cruise safe distance according to the recognized tunnel exit information, the speed limit information, and the user setting information. By recognizing, from the identified motion parameters of the target vehicle and its tail turn signal, the condition in which a front own-lane target vehicle decelerates and changes to a non-own lane, the motion parameter control system of the subject vehicle can reduce unnecessary braking adjustment, thereby reducing the risk of a rear-end collision caused by such adjustment.
Based on the above embodiments, in order to describe more clearly how in-tunnel cruise control of the motion parameters of the subject vehicle is performed according to the motion parameters and turn signals of the front target vehicle and the motion parameters of the rear target vehicle, the following describes, with a specific application scenario, how this embodiment recognizes and monitors the continuous process of a front own-lane target vehicle from turning on its turn signal to completing a lane change to a non-own lane.
Specifically, the front own-lane target vehicle is identified according to the vehicle identification range marked with the front own-lane label, the front lane-change target vehicle is identified according to the front vehicle identification ranges combined in pairs, and the turn signals of the corresponding target vehicles are identified according to the vehicle light identification areas. The continuous process of the front own-lane target vehicle from turning on its turn signal to completing the lane change to a non-own lane can thus be recognized and monitored, and the motion parameters of the target vehicle over the continuous lane change, such as its duration, distance to the host vehicle, relative speed, and lateral displacement, are easy to monitor, so that these motion parameters can be used to control the host vehicle.
When the right turn signal of the front own-lane target vehicle is recognized as turned on, the pixel distance from the left target boundary of the target vehicle to the left lane line of the front own lane is converted through the camera projection relationship and determined as the lateral distance P. N first images and N second images at different times are then continuously acquired (the time to acquire one first image or one second image is T), the change of the target vehicle's distance R is identified and recorded during this period, and the relative speed V of the target vehicle can be calculated from the change of R with respect to T.
When it is recognized that the target vehicle has just completed the lane change to the non-own lane on the right side of the front own lane, the left target boundary of the target vehicle coincides with the right lane line of the front own lane; the width of the own lane is D, so the motion parameters of the front target vehicle over the continuous lane change are a duration of N × T, a distance R to the host vehicle, a relative speed V, and a lateral displacement of (D - P).
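The motion parameters derived above (duration N × T, lateral displacement D - P, relative speed from the change in R) reduce to simple arithmetic; the numeric values below are illustrative assumptions:

```python
def lane_change_parameters(n_frames, frame_time_s, lane_width_m,
                           initial_gap_m, final_range_m, initial_range_m):
    """Motion parameters of a front target vehicle over a monitored lane
    change, per the quantities defined above: duration N*T, lateral
    displacement D - P, and relative speed from the change in range R."""
    duration = n_frames * frame_time_s
    lateral = lane_width_m - initial_gap_m
    rel_speed = (final_range_m - initial_range_m) / duration
    return duration, lateral, rel_speed

# N = 30 frames at T = 0.1 s; lane width D = 3.5 m; initial pixel-derived
# gap P = 0.9 m; range R closes from 42 m to 39 m during the manoeuvre.
params = lane_change_parameters(30, 0.1, 3.5, 0.9, 39.0, 42.0)
print(params)
```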
It should be emphasized that the lateral displacement identified above is referenced to the left and right lane lines of the lane, so it can be identified accurately whether the target vehicle changes lanes on a straight lane or on a curve, and whether it changes lanes to the left or to the right, thereby providing an accurate control basis for the adaptive cruise system of the host vehicle.
By contrast, the lateral displacement of a target vehicle identified by a conventional vehicle adaptive cruise system that relies only on millimeter-wave radar is referenced to the subject vehicle, and a lateral displacement referenced to the subject vehicle sometimes fails to provide an accurate motion control basis for the vehicle adaptive cruise system.
Fig. 8 is a scene diagram of a vehicle running automatic control method according to an embodiment of the present invention.
As shown in fig. 8, when the target vehicle ahead of the host vehicle has completed a lane change to the right out of the own lane in a curve that turns to the left, the millimeter wave radar of a conventional vehicle still on the straight lane may recognize the front target vehicle as being partially in the own lane. Suppose the curvature radius of the curve is 250 m and the front target vehicle has traveled 25 m along the curve during the lane change. Then the right lane line of the own lane, which coincides with the boundary of the front target vehicle, has shifted at the 25 m point on the curve to the left of the straight-line extension of the target vehicle's original lane by approximately

25² / (2 × 250) = 1.25 m.
The present invention accurately identifies that the front target vehicle has completed the lane change to the right, but the millimeter wave radar cannot determine whether the front target vehicle has completed the lane change to the right, because of the lateral-distance identification error caused by the lack of prior knowledge of the curve. Specifically, if the millimeter wave radar of the conventional vehicle identifies the target vehicle at a distance of 50 to 80 meters, that is, the conventional vehicle is on the straight lane and still 25 to 55 meters from the entrance of the curve, then, lacking prior knowledge of the curve, the radar identifies the front target vehicle as still having a vehicle body about 1.25 meters wide in the own lane. As the target vehicle continues to decelerate along the left curve, the radar identifies an even larger width of the target vehicle body in the own lane. In other words, the millimeter wave radar of the conventional vehicle produces an inaccurate identification, causing the adaptive cruise system of the conventional vehicle to perform continuous, inaccurate and unnecessary braking, which increases the risk of a rear-end collision between the conventional vehicle and the target vehicle behind it.
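The roughly 1.25-meter figure follows from elementary circle geometry: after traveling a distance L along a curve of radius R, the lane shifts sideways from the straight-line extension by about L²/(2R). A quick check (names are illustrative):

```python
import math

def lateral_offset_on_curve(arc_len, radius):
    """Lateral offset between a circular curve and the straight-line
    extension of its entry tangent, after traveling arc_len on the curve."""
    # exact sagitta R * (1 - cos(L / R)); for small angles ~ L^2 / (2R)
    theta = arc_len / radius
    return radius * (1.0 - math.cos(theta))

# Scenario from the text: 250 m curvature radius, 25 m traveled on the curve
offset = lateral_offset_on_curve(25.0, 250.0)  # close to 1.25 m
```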
Similarly, in a right-hand curve, the millimeter wave radar of the conventional vehicle may also be inaccurate in recognizing a lane change of the target vehicle from the left lane to the right lane.
To overcome this recognition inaccuracy, the millimeter wave radar of the conventional vehicle is supplemented with a camera to help recognize the lane lines or to improve the azimuth-angle recognition accuracy, which in turn increases the complexity and cost of the system.
Therefore, according to the above example, based on the identified motion parameters of the target vehicle and the correspondingly identified turn signal of the target vehicle, the working condition that a target vehicle in the own lane decelerates and changes lane into a non-own lane can be identified, so that the motion parameter control system of the subject vehicle can reduce unnecessary braking adjustments, thereby reducing the risk of rear-end collision caused by unnecessary braking adjustment of the subject vehicle.
Similarly, according to the above example, the present invention can also recognize and monitor the continuous process of a non-own-lane target vehicle from turning on its turn signal to completing the lane change into the own lane. The motion parameters of the front target vehicle during this continuous lane-change process, such as its duration, distance to the host vehicle, relative speed and lateral displacement, are likewise easily monitored, so that they can be used to control the motion parameters of the host vehicle to make braking adjustments earlier and improve driving safety, and to control the lamps to warn the rear target vehicle earlier and reduce the rear-end collision risk.
Fig. 9 is a scene diagram of a vehicle running automatic control method according to another embodiment of the present invention.
For example, as shown in fig. 9, the subject vehicle travels in constant speed mode on a straight section of its own lane and is still 55 meters (or as little as 25 meters) from the entrance of a curve; the curve turns to the right with a curvature radius of 250 meters. A target vehicle in the non-own lane on the right side of the own lane, 25 meters past the curve entrance, has turned on its left turn signal to change into the own lane, and its left target boundary already coincides with the right lane line of the own lane.
According to the above example, the present invention can accurately recognize that the front target vehicle is changing lane into the own lane. Since the target vehicle is still about 80 meters (or 50 meters) away from the host vehicle, the present invention can control the power system of the host vehicle to accurately perform the action of reducing power output or even braking, and turn on the brake lights in time, so as to ensure a safe distance between the host vehicle and the front and rear target vehicles, thereby improving the driving safety of the host vehicle and reducing the rear-end collision risk.
However, while the method accurately identifies that the front target vehicle is changing into the own lane, the millimeter wave radar cannot determine whether the front target vehicle is changing into the own lane, because of the lateral-distance identification error caused by the lack of prior knowledge of the curve. Specifically, the lateral displacement of the target vehicle identified by the conventional vehicle adaptive cruise system relying only on the millimeter wave radar is referenced to the subject vehicle; lacking prior knowledge of the curve, the radar identifies the front target vehicle as still being approximately 1.25 meters in lateral distance from the extension line of the right lane line of the own lane. That is, the radar mistakenly requires the front target vehicle to displace laterally about 1.25 meters to the left before it can confirm that the front target vehicle has started to enter the own lane.
If the lateral displacement speed of the front target vehicle is 1 meter per second, the above conventional vehicle adaptive cruise system relying only on the millimeter wave radar will perform the action of reducing power output or even braking only about 1.25 seconds after the front target vehicle has actually entered the own lane, which undoubtedly reduces the safe distance between the subject vehicle and the front and rear target vehicles, reducing the driving safety of the subject vehicle and increasing the rear-end collision risk.
Therefore, according to the above example, based on the identified motion parameters of the target vehicle and the correspondingly identified turn signal, the working condition that a non-own-lane target vehicle decelerates and changes lane into the own lane can be identified, so that the motion parameter control system and the safety system of the host vehicle can make adjustments earlier, improving the running safety of the host vehicle and its passengers. The lamp system of the host vehicle can also make adjustments earlier to warn the rear target vehicle, giving the rear target vehicle more braking or adjustment time and reducing the rear-end collision risk more effectively.
In conclusion, the vehicle running automatic control method provided by the embodiment of the invention improves the running safety of the host vehicle and its passengers, enables the lamp system of the host vehicle to make adjustments earlier to warn the rear target vehicle, provides the rear target vehicle with more braking or adjustment time, and reduces the rear-end collision risk more effectively.
To this end, the invention also provides an automatic vehicle running control device.
Fig. 10 is a schematic configuration diagram of an automatic control device for vehicle running according to a first embodiment of the present invention.
As shown in fig. 10, the vehicle travel automatic control device may include: a first obtaining module 1010, a second obtaining module 1020, a third obtaining module 1030, a first generating module 1040, a first identifying module 1050, a second generating module 1060, a fourth obtaining module 1070, a second identifying module 1080, a fifth obtaining module 1090, a first adjusting module 1100, a second adjusting module 1110 and a control module 1120.
The first acquiring module 1010 is configured to acquire a first image and a second image of an environment in front of the subject vehicle from the front-facing 3D camera, where the first image is a color or brightness image and the second image is a depth image.
In one embodiment of the invention, the first acquisition module 1010 acquires a first image of the environment in front of the subject vehicle from an image sensor of the front 3D camera and a second image of the environment in front of the subject vehicle from a time-of-flight sensor of the front 3D camera.
And a second obtaining module 1020, configured to obtain a front highway lane line according to the first image.
In an embodiment of the present invention, when the first image is a luminance image, the second obtaining module 1020 identifies the front road lane line according to the luminance difference between the front road lane line and the road surface in the first image; alternatively,
when the first image is a color image, it converts the color image into a luminance image and identifies the front road lane line according to the luminance difference between the front road lane line and the road surface in the converted image.
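A minimal sketch of the luminance-difference identification (the threshold value and all names are assumptions; the patent only states that lane lines differ in brightness from the road surface):

```python
import numpy as np

def lane_line_mask(first_image, threshold=200):
    """Return a binary mask of candidate lane-line pixels.

    first_image: HxW luminance array or HxWx3 RGB color array (uint8).
    A color image is first converted to a luminance image."""
    img = np.asarray(first_image, dtype=np.float32)
    if img.ndim == 3:
        # ITU-R BT.601 luma weights for RGB-to-luminance conversion
        img = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    # lane markings are brighter than the surrounding road surface
    return img >= threshold
```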
And a third obtaining module 1030, configured to obtain a third image and a rear road lane line according to the imaging parameter of the first image and the front road lane line.
The first generating module 1040 is configured to map the front road lane line into the second image according to the interleaved mapping relationship between the first image and the second image to generate a plurality of front vehicle identification ranges.
The first recognition module 1050 is configured to recognize the preceding target vehicle according to all the preceding vehicle recognition ranges.
A second generating module 1060 for generating a plurality of rear vehicle identification ranges according to the third image and the rear road lane line.
And a fourth obtaining module 1070, configured to obtain the rear target vehicle parameter set from the millimeter wave radar, and project the rear target vehicle parameter set into the third image according to the installation parameters of the millimeter wave radar to obtain a plurality of rear target vehicle parameter points.
And a second identification module 1080, configured to identify the rear target vehicles according to all the rear vehicle identification ranges and the plurality of rear target vehicle parameter points.
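One plausible sketch of projecting the rear radar parameter group into the host-vehicle coordinate frame of the third image (the mounting convention, axes and names are assumptions, since the patent does not specify them):

```python
import numpy as np

def radar_to_third_image(radar_points, mount_yaw, mount_x, mount_y):
    """Convert rear millimeter-wave-radar detections (range r, azimuth a,
    relative speed v_rel) into host-vehicle coordinates (x forward, y left)
    using the radar's installation parameters."""
    points = []
    for r, a, v_rel in radar_points:
        x = mount_x - r * np.cos(a + mount_yaw)  # rearward: negative x
        y = mount_y + r * np.sin(a + mount_yaw)
        points.append((x, y, v_rel))
    return points
```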
And a fifth obtaining module 1090 configured to obtain the tunnel information and the speed limit information.
In an embodiment of the present invention, the fifth obtaining module 1090 identifies the tunnel entrance information and the speed limit information according to the first image and the second image; alternatively, it identifies the tunnel exit information and the speed limit information according to the first image and the second image.
And the first adjusting module 1100 is used for changing the settings of the cruising speed upper limit and the cruising safety distance of the main vehicle according to the tunnel information and the speed limit information.
And a second adjusting module 1110, configured to adjust a focal length of the front 3D camera according to the tunnel information.
And a control module 1120, configured to perform intra-tunnel cruise control on the motion parameters of the host vehicle according to the motion parameters of the front target vehicle and the motion parameters of the rear target vehicle.
Fig. 11 is a schematic configuration diagram of an automatic control device for vehicle running according to a second embodiment of the present invention. As shown in fig. 11, the automatic vehicle running control apparatus further includes, in addition to the modules shown in fig. 10, a third generating module 1130 and a fourth generating module 1140.
The third generating module 1130 is configured to generate a front target vehicle range according to the front target vehicle, and map the front target vehicle range into the first image according to the interleaved mapping relationship between the first image and the second image to generate a front headlight identification region.
In one embodiment of the present invention, the third generation module 1130 detects the target boundary of the front target vehicle for recognition by using a boundary detection method in an image processing algorithm.
In one embodiment of the invention, the third generating module 1130 generates the front target vehicle range from a closed area enclosed by the target boundaries of the front target vehicle; alternatively, from a closed area enclosed by the extended target boundaries of the front target vehicle; or from a closed region surrounded by lines connecting a plurality of pixel positions of the front target vehicle.
A fourth generating module 1140 for identifying the turn signal of the corresponding front target vehicle according to the front lamp identification region.
In one embodiment of the present invention, the fourth generating module 1140 identifies the turn signal of the corresponding preceding target vehicle according to the color, the flashing frequency or the flashing sequence of the tail lamps in the front lamp identification region.
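A rough sketch of blink-frequency-based turn-signal identification (thresholds and names are illustrative assumptions; a production system would also use color and flashing sequence, as the text notes):

```python
def classify_turn_signal(on_off_samples, sample_rate_hz):
    """Estimate the blink frequency of a tail-lamp region from one
    boolean on/off sample per frame, and check it against a typical
    1-2 Hz turn-signal range."""
    # each full blink cycle contains two on/off transitions
    transitions = sum(a != b for a, b in zip(on_off_samples, on_off_samples[1:]))
    duration_s = len(on_off_samples) / sample_rate_hz
    freq_hz = transitions / (2.0 * duration_s)
    return 1.0 <= freq_hz <= 2.0, freq_hz
```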
In this embodiment, the control module 1120 identifies the condition that the target vehicle in the non-own-lane ahead decelerates and changes lane to the own-lane according to the motion parameter of the target vehicle in the ahead and the turn signal, so that the motion parameter control system of the host vehicle performs braking adjustment in the tunnel in advance, and the lamp system of the host vehicle reminds the target vehicle in the back.
In one embodiment of the present invention, the control module 1120 identifies the working condition that the target vehicle in the front own lane decelerates and changes lane into a front non-own lane according to the motion parameters of the front target vehicle and the turn signal, so that the motion parameter control system of the host vehicle does not perform braking adjustment in the tunnel.
It should be noted that the foregoing explanation of the vehicle driving automatic control method is also applicable to the vehicle driving automatic control device according to the embodiment of the present invention, and the implementation principle thereof is similar and will not be described herein again.
In summary, the automatic vehicle driving control apparatus according to the embodiment of the present invention acquires a first image and a second image of the environment ahead of the subject vehicle from a front 3D camera and acquires the front road lane line; it acquires a third image and the rear road lane line according to the imaging parameters of the first image and the front road lane line; it maps the front road lane line into the second image according to the interleaved mapping relationship between the first image and the second image to generate a plurality of front vehicle recognition ranges and recognizes the front target vehicle from them; it generates a plurality of rear vehicle recognition ranges according to the third image and the rear road lane line, acquires the rear target vehicle parameter group from the millimeter wave radar, projects it into the third image according to the installation parameters of the millimeter wave radar to obtain a plurality of rear target vehicle parameter points, and thereby recognizes the rear target vehicle. Finally, it acquires tunnel information and speed limit information, changes the settings of the cruising speed upper limit and cruising safety distance of the host vehicle according to the tunnel information and the speed limit information, adjusts the focal length of the front 3D camera according to the tunnel information, and performs in-tunnel cruise control on the motion parameters of the host vehicle according to the motion parameters of the front target vehicle and the rear target vehicle. Therefore, the host vehicle can control its braking according to the environmental information of the highway lane lines, reduce unnecessary braking adjustments, effectively lower the risk of rear-end collision, and improve the running safety of the host vehicle in the tunnel.
Fig. 12 is a schematic configuration diagram of an automatic control device for vehicle running according to a third embodiment of the present invention. As shown in fig. 12, the second obtaining module 1020 includes a creating unit 1021, a first detecting unit 1022 and a second detecting unit 1023 on the basis as shown in fig. 11.
Wherein, the creating unit 1021 is used for creating a binary image of the front road lane line according to the brightness information of the first image and a preset brightness threshold value.
The first detecting unit 1022 is configured to detect all edge pixel positions of a straight-lane solid-line lane line or all edge pixel positions of a curve solid-line lane line in the binary image according to a preset detection algorithm.
The second detecting unit 1023 is configured to detect all edge pixel positions of the straight dashed lane line or all edge pixel positions of the curved dashed lane line in the binary image according to a preset detection algorithm.
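The patent leaves the "preset detection algorithm" unspecified; one simple possibility, shown only as an illustration, marks a binary-image pixel as an edge pixel when it is set but a horizontal neighbour is not, which works for solid and dashed lines, straight or curved:

```python
import numpy as np

def lane_edge_pixels(binary):
    """Return (row, col) positions of edge pixels in a binary lane image."""
    b = np.asarray(binary, dtype=bool)
    left = np.zeros_like(b)
    left[:, 1:] = b[:, :-1]                # neighbour to the left
    right = np.zeros_like(b)
    right[:, :-1] = b[:, 1:]               # neighbour to the right
    edges = b & ~(left & right)            # set pixel missing a neighbour
    return np.argwhere(edges)
```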
It should be noted that the foregoing explanation of the vehicle driving automatic control method is also applicable to the vehicle driving automatic control device according to the embodiment of the present invention, and the implementation principle thereof is similar and will not be described herein again.
The automatic vehicle driving control device according to the embodiment of the present invention creates a binary image of a front road lane line according to the luminance information of the first image and a preset luminance threshold, detects all edge pixel positions of a straight solid line lane line or detects all edge pixel positions of a curve solid line lane line in the binary image according to a preset detection algorithm, and detects all edge pixel positions of a straight dotted line lane line or detects all edge pixel positions of a curve dotted line lane line in the binary image according to a preset detection algorithm. Therefore, the broken line and the solid line lane line of the straight road and the curve road in the highway lane line can be accurately identified.
Fig. 13 is a schematic configuration diagram of an automatic control device for vehicle running according to a fourth embodiment of the present invention. As shown in fig. 13, the third obtaining module 1030 includes a projection unit 1031 and an obtaining unit 1032 on the basis as shown in fig. 11.
And the projection unit 1031 is used for projecting all the pixel positions of the front road lane line to the physical world coordinate system of the host vehicle according to the imaging parameters of the first image to establish a third image.
An obtaining unit 1032 for obtaining the rear road lane line by subjecting the position of the front road lane line in the third image to continuous time accumulation and displacement relative to the origin of the physical world coordinate system of the subject vehicle.
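A flat-road pinhole back-projection illustrates how lane-line pixels might be mapped into the host vehicle's physical world coordinate system, and how accumulated displacement turns previously seen front-lane-line points into rear-lane-line points; the intrinsics fx, fy, cx, cy, the camera height, and the level forward-looking mounting are all assumptions:

```python
def pixel_to_ground(u, v, fx, fy, cx, cy, cam_height):
    """Back-project one lane-line pixel (u, v) onto the road plane,
    returning (x forward, y left) in host-vehicle coordinates."""
    x = fy * cam_height / (v - cy)   # forward distance to the ground point
    y = -(u - cx) * x / fx           # lateral offset (positive to the left)
    return x, y

def shift_rearward(points_xy, speed, dt):
    """Accumulate the host vehicle's displacement over time so that
    previously observed front-lane-line points move behind the vehicle
    (negative x) and describe the rear road lane line."""
    return [(x - speed * dt, y) for (x, y) in points_xy]
```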
It should be noted that the foregoing explanation of the vehicle driving automatic control method is also applicable to the vehicle driving automatic control device according to the embodiment of the present invention, and the implementation principle thereof is similar and will not be described herein again.
According to the automatic vehicle driving control device of the embodiment of the present invention, all pixel positions of the front road lane line are projected into the physical world coordinate system of the host vehicle according to the imaging parameters of the first image to establish a third image, and the position of the front road lane line in the third image is subjected to continuous time accumulation and displacement relative to the origin of that coordinate system to obtain the position of the rear road lane line. Therefore, the position of the rear lane line is accurately known, the vehicle can conveniently be controlled accordingly, and a foundation is laid for ensuring driving safety.
Fig. 14 is a schematic configuration diagram of a vehicular running automatic control apparatus according to a fifth embodiment of the invention. As shown in fig. 14, the first recognition module 1050 includes a first mark unit 1051, a first recognition unit 1052, a second recognition unit 1053 and a third recognition unit 1054 on the basis of that shown in fig. 11.
The first marking unit 1051 is used for marking the labels of the front own lane and the front non-own lane for all the front vehicle identification ranges.
A first recognition unit 1052 for recognizing the front own-lane target vehicle according to the vehicle recognition range marking the front own-lane tag.
A second recognition unit 1053 for recognizing the front non-own-lane target vehicle from the vehicle recognition range marking the front non-own-lane tag.
And a third identifying unit 1054 for identifying the preceding lane-change target vehicle according to the preceding vehicle identification ranges combined pairwise.
Fig. 15 is a schematic configuration diagram of an automatic control device for vehicle running according to a sixth embodiment of the invention. As shown in fig. 15, the second recognition module 1080 includes a second marking unit 1081, a fourth recognition unit 1082, a fifth recognition unit 1083 and a sixth recognition unit 1084 on the basis of the illustration shown in fig. 11.
The second marking unit 1081 is configured to mark the labels of the rear own lane and the rear non-own lane for all the rear vehicle identification ranges.
And a fourth recognition unit 1082, configured to recognize the target vehicle in the lane behind the mark according to the vehicle recognition range of the lane behind the mark and the parameter point of the rear target vehicle.
And a fifth identifying unit 1083, configured to identify a non-own-lane target vehicle behind the mark according to the vehicle identification range of the non-own-lane mark behind the mark and the parameter point of the rear target vehicle.
And a sixth identifying unit 1084, configured to identify the marked rear lane-change target vehicle according to pairwise combinations of the rear vehicle identification ranges and the rear target vehicle parameter points.
It should be noted that the foregoing explanation of the vehicle driving automatic control method is also applicable to the vehicle driving automatic control device according to the embodiment of the present invention, and the implementation principle thereof is similar and will not be described herein again.
In summary, the automatic vehicle driving control device according to the embodiment of the invention accurately identifies the front and rear target vehicles, so as to control the driving of the subject vehicle according to the motion parameters and the turn lights of the front target vehicle and the rear target vehicle, and provide guarantee for ensuring driving safety.
Fig. 16 is a schematic configuration diagram of an automatic control device for vehicle running according to a seventh embodiment of the present invention. As shown in fig. 16, the automatic vehicle running control apparatus further includes a third adjustment module 1150 based on that shown in fig. 11.
The third adjusting module 1150 is configured to execute deceleration control.
In an embodiment of the present invention, the first adjusting module 1100 is further configured to change the settings of the cruising speed upper limit and the cruising safety distance of the host vehicle according to the tunnel exit information, the speed limit information, and the user setting information.
The second adjusting module 1110 is further configured to decrease the focal length of the front-facing 3D camera when the tunnel information is tunnel entrance information, or increase the focal length of the front-facing 3D camera when the tunnel information is tunnel exit information.
It should be noted that the foregoing explanation of the vehicle driving automatic control method is also applicable to the vehicle driving automatic control device according to the embodiment of the present invention, and the implementation principle thereof is similar and will not be described herein again.
In summary, the automatic vehicle driving control apparatus according to the embodiment of the present invention can acquire tunnel entrance information and speed limit information from a navigation system, change the settings of the cruising speed upper limit and the cruising safety distance accordingly, execute any necessary deceleration control, reduce the focal length of the front 3D camera, and perform in-tunnel cruise control on the motion parameters of the host vehicle according to the motion parameters and turn signals of the front and rear target vehicles. It can further recognize tunnel exit information and speed limit information based on the first image and the second image, and change the settings of the cruising speed upper limit and cruising safety distance according to the recognized tunnel exit information, speed limit information and user setting information. By combining the recognized motion parameters of the target vehicle with the correspondingly recognized tail turn signal of the target vehicle, the working condition that a target vehicle in the own lane changes lane to a non-own lane can be identified, so that the motion parameter control system of the subject vehicle can reduce unnecessary braking adjustments, thereby reducing the risk of rear-end collision due to unnecessary braking adjustment of the subject vehicle.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (32)

1. An automatic control method for vehicle running, characterized by comprising:
acquiring a first image and a second image of an environment in front of a subject vehicle from a front-facing 3D camera, wherein the first image is a color or brightness image, and the second image is a depth image;
acquiring a front road lane line according to the first image;
acquiring a third image and a rear road lane line according to the imaging parameters of the first image and the front road lane line;
mapping the front road lane lines into the second image according to the interweaving mapping relation between the first image and the second image to generate a plurality of front vehicle identification ranges;
identifying a front target vehicle according to all the front vehicle identification ranges;
generating a plurality of rear vehicle identification ranges according to the third image and the rear road lane lines;
acquiring a rear target vehicle parameter group from a millimeter wave radar, and projecting the rear target vehicle parameter group into the third image according to installation parameters of the millimeter wave radar to acquire a plurality of rear target vehicle parameter points;
identifying rear target vehicles according to all the rear vehicle identification ranges and the plurality of rear target vehicle parameter points;
acquiring tunnel information and speed limit information;
changing the settings of the cruising speed upper limit and the cruising safety distance of the main vehicle according to the tunnel information and the speed limit information;
adjusting the focal length of the front 3D camera according to the tunnel information, performing in-tunnel cruise control on the motion parameters of the subject vehicle according to the motion parameters of the front target vehicle and the motion parameters of the rear target vehicle, wherein adjusting the focal length of the front 3D camera according to the tunnel information includes:
when the tunnel information is tunnel entrance information, reducing the focal length of the front 3D camera;
and when the tunnel information is tunnel exit information, increasing the focal length of the front 3D camera.
2. The method of claim 1, wherein the acquiring the first image and the second image of the environment in front of the subject vehicle from the front-facing 3D camera comprises:
acquiring a first image of an environment in front of a subject vehicle from an image sensor of a front-facing 3D camera;
a second image of the environment in front of the subject vehicle is acquired from a time-of-flight sensor of the front-facing 3D camera.
3. The method of claim 1, wherein said obtaining a front highway lane line from said first image comprises:
when the first image is a brightness image, identifying a front road lane line according to the brightness difference between the front road lane line and the road surface in the first image; alternatively,
when the first image is a color image, converting the color image into a brightness image, and identifying the front highway lane line according to the brightness difference between the front highway lane line and the road surface in the first image.
4. The method of claim 3, wherein identifying the front highway lane line based on a difference in luminance of the front highway lane line and a road surface in the first image comprises:
creating a binary image of the front road lane line according to the brightness information of the first image and a preset brightness threshold;
detecting all edge pixel positions of a straight solid-line lane line or detecting all edge pixel positions of a curve solid-line lane line in the binary image according to a preset detection algorithm;
and detecting all edge pixel positions of the straight road dotted line lane line or detecting all edge pixel positions of the curve dotted line lane line in the binary image according to a preset detection algorithm.
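As a toy illustration of the binarize-then-detect pipeline of claims 3–4 (the threshold value and the edge-pixel convention are assumptions; the claimed "preset detection algorithm", e.g. a Hough-style line fit over the edge pixels, is not reproduced here):

```python
import numpy as np

def lane_edge_pixels(luma, threshold=180):
    """Binarize a luminance image with a preset brightness threshold,
    then return the (row, col) positions where the binary value changes
    along each row, i.e. the left/right edge pixels of bright lane
    markings against the darker road surface."""
    binary = (luma >= threshold).astype(np.int8)
    # Horizontal difference: nonzero at a dark->bright or bright->dark edge.
    diff = np.diff(binary, axis=1)
    rows, cols = np.nonzero(diff)
    return list(zip(rows.tolist(), (cols + 1).tolist()))

# Toy 4x8 "road": a bright 2-pixel-wide stripe at columns 3-4.
img = np.zeros((4, 8), dtype=np.uint8)
img[:, 3:5] = 250
edges = lane_edge_pixels(img)
# 8 edge pixels: the left (col 3) and right (col 5) edge in each of 4 rows.
```

A production detector would feed these edge pixels to a line or curve fit to separate solid from dashed, straight from curved markings, as the claim enumerates.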
5. The method of claim 1, wherein said obtaining a third image and a rear highway lane line based on imaging parameters of the first image and the front highway lane line comprises:
projecting all pixel positions of the front road lane line to the physical world coordinate system of the main vehicle according to the imaging parameters of the first image to establish a third image;
and acquiring the position of the front road lane line in the third image by accumulating, over continuous time, its displacement relative to the origin of the physical world coordinate system of the subject vehicle.
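Claim 5's projection of lane-line pixel positions into the subject vehicle's physical world coordinate system can be sketched with a flat-road pinhole model; the camera height, focal length and principal point below are assumed imaging parameters, and lens distortion and camera yaw/roll are ignored:

```python
import math

def pixel_to_world(u, v, cam_height_m, focal_px, cx, cy, pitch_rad=0.0):
    """Project image pixel (u, v) of a point assumed to lie on a flat
    road onto the vehicle's ground plane (x forward, y left, metres)."""
    alpha = math.atan2(v - cy, focal_px) + pitch_rad  # angle below horizon
    if alpha <= 0:
        return None  # row at or above the horizon: no ground intersection
    x = cam_height_m / math.tan(alpha)   # forward distance
    y = -(u - cx) * x / focal_px         # lateral offset, left positive
    return (x, y)

# Camera 1.5 m above the road, 800 px focal length, principal point (640, 360):
# a pixel one focal length below the horizon maps to 1.5 m straight ahead.
ground = pixel_to_world(640, 1160, 1.5, 800, 640, 360)
```

Accumulating these ground-plane points over time, shifted by the subject vehicle's own displacement, yields the rear lane-line positions in the third image as the claim describes.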
6. The method of claim 1, wherein identifying a preceding target vehicle based on all preceding vehicle identification ranges comprises:
marking all front vehicle identification ranges with front own-lane and front non-own-lane labels;
identifying a front own-lane target vehicle according to the vehicle identification range marked with the front own-lane label;
identifying a front non-own-lane target vehicle according to the vehicle identification range marked with the front non-own-lane label;
and identifying a front lane-changing target vehicle according to the combined front vehicle identification ranges.
7. The method of claim 1, wherein said identifying rear target vehicles based on all rear vehicle identification ranges and the plurality of rear target vehicle parameter points comprises:
marking labels of a rear own lane and a rear non-own lane for all rear vehicle identification ranges;
identifying a rear own-lane target vehicle according to the vehicle identification range marked with the rear own-lane label and the rear target vehicle parameter points;
identifying a rear non-own-lane target vehicle according to the vehicle identification range marked with the rear non-own-lane label and the rear target vehicle parameter points;
and identifying a rear lane-changing target vehicle according to the pairwise-combined rear vehicle identification ranges and rear target vehicle parameter points.
8. The method of claim 1, further comprising:
generating a front target vehicle range according to the front target vehicle, and mapping the front target vehicle range to the first image according to the interleaving mapping relation between the first image and the second image to generate a front vehicle lamp identification area;
identifying a turn signal of the corresponding front target vehicle according to the front vehicle lamp identification area;
the performing in-tunnel cruise control on the motion parameters of the subject vehicle according to the motion parameters of the front target vehicle and the motion parameters of the rear target vehicle includes:
and adjusting the focal length of the front 3D camera according to the tunnel information, and performing in-tunnel cruise control on the motion parameters of the subject vehicle according to the motion parameters and turn signal of the front target vehicle and the motion parameters of the rear target vehicle.
9. The method of claim 8, wherein the generating a forward target vehicle range from the forward target vehicle comprises:
and identifying the front target vehicle by detecting its target boundary with a boundary detection method from an image processing algorithm.
10. The method of claim 8, wherein the generating a forward target vehicle range from the forward target vehicle comprises:
generating the front target vehicle range according to a closed area defined by the target boundary of the front target vehicle; or
generating the front target vehicle range according to a closed area enclosed by the extended target boundary of the front target vehicle; or
and generating a front target vehicle range according to a closed area formed by connecting a plurality of pixel positions of the front target vehicle.
11. The method of claim 8, wherein identifying the turn signals of the respective forward target vehicles based on the forward vehicle light identification regions comprises:
and identifying the turn signal of the corresponding front target vehicle according to the color, flashing frequency, or flashing sequence of the tail lamps in the front vehicle lamp identification area.
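Of the three cues named in claim 11 (color, flashing frequency, flashing sequence), the frequency cue can be sketched as follows; the 1–2 Hz acceptance band and the on/off sample representation are assumptions (regulations typically put turn indicators near 1.5 Hz):

```python
def is_turn_signal(on_off_samples, frame_rate_hz, lo_hz=1.0, hi_hz=2.0):
    """Estimate a lamp's flash frequency from a per-frame on/off train
    by counting rising edges, and accept it as a turn indicator if the
    frequency falls inside a nominal 1-2 Hz band."""
    rises = sum(1 for a, b in zip(on_off_samples, on_off_samples[1:])
                if not a and b)
    duration_s = len(on_off_samples) / frame_rate_hz
    freq_hz = rises / duration_s if duration_s else 0.0
    return lo_hz <= freq_hz <= hi_hz

# At 30 fps: 10 frames off / 10 frames on, repeated -> a 1.5 Hz flasher;
# a constantly lit brake or tail lamp never produces a rising edge.
blink = ([False] * 10 + [True] * 10) * 3
steady = [True] * 60
```

In the claimed pipeline the on/off train would come from thresholding the tail-lamp region of consecutive first images inside the front vehicle lamp identification area.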
12. The method of claim 1, wherein the obtaining of the tunnel information and the speed limit information comprises:
acquiring tunnel entrance information and speed limit information from a navigation system; or
and acquiring tunnel exit information and speed limit information from a navigation system.
13. The method of claim 1, wherein the obtaining of the tunnel information and the speed limit information comprises:
identifying tunnel entrance information and speed limit information according to the first image and the second image; or
and identifying tunnel exit information and speed limit information according to the first image and the second image.
14. The method according to claim 1, wherein when the tunnel information is tunnel entrance information, after the changing of the settings of the cruising speed upper limit and the cruising safety distance of the subject vehicle based on the tunnel information and the speed limit information, the method further comprises:
deceleration control is executed.
15. The method according to claim 1, wherein when the tunnel information is tunnel exit information, the changing the settings of the cruising speed upper limit and the cruising safety distance of the subject vehicle according to the tunnel information and the speed limit information includes:
and changing the settings of the cruising speed upper limit and the cruising safety distance of the main vehicle according to the tunnel exit information, the speed limit information and the user setting information.
16. The method according to claim 8, wherein performing in-tunnel cruise control on the motion parameters of the subject vehicle according to the motion parameters and turn signal of the front target vehicle and the motion parameters of the rear target vehicle comprises:
identifying, according to the motion parameters and turn signal of the front target vehicle, a working condition in which a front non-own-lane target vehicle decelerates and changes lane into the own lane, so that the motion parameter control system of the subject vehicle performs braking adjustment in the tunnel in advance and the lamp system of the subject vehicle alerts the rear target vehicle;
or,
identifying, according to the motion parameters and turn signal of the front target vehicle, a working condition in which a front own-lane target vehicle decelerates and changes lane into a non-own lane, so that the motion parameter control system of the subject vehicle does not perform braking adjustment in the tunnel.
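The two working conditions of claim 16 amount to a small decision table; the string labels and the output dictionary below are illustrative assumptions standing in for the claimed control and lamp systems:

```python
def tunnel_brake_decision(lead_lane, turn_signal, decelerating):
    """Map the two claimed working conditions to control outputs:
    - a decelerating non-own-lane leader signalling into our lane
      -> brake early in the tunnel and warn the rear target vehicle;
    - a decelerating own-lane leader signalling out of our lane
      (or any other case) -> no braking adjustment."""
    if decelerating and lead_lane == "non-own" and turn_signal == "toward-own":
        return {"brake": True, "warn_rear": True}
    return {"brake": False, "warn_rear": False}

cut_in = tunnel_brake_decision("non-own", "toward-own", True)
cut_out = tunnel_brake_decision("own", "away-from-own", True)
```

The asymmetry matches the claim: only a cut-in threatens the following distance, so only it triggers pre-emptive braking and the rear-facing lamp warning.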
17. An automatic vehicle travel control device, characterized by comprising:
a first acquisition module, used for acquiring a first image and a second image of an environment in front of a subject vehicle from a front 3D camera, wherein the first image is a color or brightness image, and the second image is a depth image;
the second acquisition module is used for acquiring a front highway lane line according to the first image;
the third acquisition module is used for acquiring a third image and a rear road lane line according to the imaging parameters of the first image and the front road lane line;
the first generation module is used for mapping the front road lane lines into the second image according to the interweaving mapping relation between the first image and the second image to generate a plurality of front vehicle identification ranges;
the first identification module is used for identifying the front target vehicles according to all the front vehicle identification ranges;
a second generating module, configured to generate a plurality of rear vehicle identification ranges according to the third image and the rear road lane line;
the fourth acquisition module is used for acquiring a rear target vehicle parameter group from a millimeter wave radar and projecting the rear target vehicle parameter group into the third image according to the installation parameters of the millimeter wave radar so as to acquire a plurality of rear target vehicle parameter points;
the second identification module is used for identifying rear target vehicles according to all the rear vehicle identification ranges and the plurality of rear target vehicle parameter points;
the fifth acquisition module is used for acquiring tunnel information and speed limit information;
the first adjusting module is used for changing the settings of the cruising speed upper limit and the cruising safety distance of the main vehicle according to the tunnel information and the speed limit information;
the second adjusting module is configured to adjust the focal length of the front-facing 3D camera according to the tunnel information, where the second adjusting module is specifically configured to reduce the focal length of the front-facing 3D camera when the tunnel information is tunnel entrance information;
when the tunnel information is tunnel exit information, increasing the focal length of the front 3D camera;
and the control module is used for carrying out in-tunnel cruise control on the motion parameters of the main vehicle according to the motion parameters of the front target vehicle and the motion parameters of the rear target vehicle.
18. The apparatus of claim 17, wherein the first obtaining module is to:
acquiring a first image of an environment in front of a subject vehicle from an image sensor of a front-facing 3D camera;
a second image of the environment in front of the subject vehicle is acquired from a time-of-flight sensor of the front-facing 3D camera.
19. The apparatus of claim 17, wherein the second obtaining module is to:
when the first image is a brightness image, identifying the front road lane line according to the brightness difference between the front road lane line and the road surface in the first image; or
and when the first image is a color image, converting the color image into a brightness image, and identifying the front road lane line according to the brightness difference between the front road lane line and the road surface in the first image.
20. The apparatus of claim 17, wherein the second obtaining module comprises:
the creating unit is used for creating a binary image of the front road lane line according to the brightness information of the first image and a preset brightness threshold value;
the first detection unit is used for detecting all edge pixel positions of a straight-line lane line or all edge pixel positions of a curve solid-line lane line in the binary image according to a preset detection algorithm;
and the second detection unit is used for detecting all edge pixel positions of the straight-way dotted line lane line or detecting all edge pixel positions of the curve dotted line lane line in the binary image according to a preset detection algorithm.
21. The apparatus of claim 17, wherein the third obtaining module comprises:
the projection unit is used for projecting all pixel positions of the front road lane line to the physical world coordinate system of the main vehicle according to the imaging parameters of the first image to establish a third image;
and the acquisition unit is used for acquiring the position of the front road lane line in the third image by accumulating, over continuous time, its displacement relative to the origin of the physical world coordinate system of the subject vehicle.
22. The apparatus of claim 17, wherein the first identification module comprises:
a first marking unit for marking the labels of the front own lane and the front non-own lane for all the front vehicle identification ranges;
a first recognition unit configured to recognize a preceding own-lane target vehicle on the basis of a vehicle recognition range in which a preceding own-lane tag is marked;
a second recognition unit for recognizing a non-own-lane target vehicle ahead according to the vehicle recognition range marking the non-own-lane tag ahead;
and the third identification unit is used for identifying the front lane-changing target vehicle according to the combined front vehicle identification ranges.
23. The apparatus of claim 17, wherein the second identification module comprises:
the second marking unit is used for marking all rear vehicle identification ranges with rear own-lane and rear non-own-lane labels;
the fourth identification unit is used for identifying a rear own-lane target vehicle according to the vehicle identification range marked with the rear own-lane label and the rear target vehicle parameter points;
the fifth identification unit is used for identifying a rear non-own-lane target vehicle according to the vehicle identification range marked with the rear non-own-lane label and the rear target vehicle parameter points;
and the sixth identification unit is used for identifying a rear lane-changing target vehicle according to the pairwise-combined rear vehicle identification ranges and rear target vehicle parameter points.
24. The apparatus of claim 17, further comprising:
the third generation module is used for generating a front target vehicle range according to the front target vehicle and mapping the front target vehicle range to the first image according to the interweaving mapping relation between the first image and the second image to generate a front vehicle lamp identification area;
the third identification module is used for identifying a turn signal of the corresponding front target vehicle according to the front vehicle lamp identification area;
the control module is further configured to:
and adjusting the focal length of the front 3D camera according to the tunnel information, and performing in-tunnel cruise control on the motion parameters of the subject vehicle according to the motion parameters and turn signal of the front target vehicle and the motion parameters of the rear target vehicle.
25. The apparatus of claim 24, wherein the third generation module is to:
and identifying the front target vehicle by detecting its target boundary with a boundary detection method from an image processing algorithm.
26. The apparatus of claim 24, wherein the third generation module is to:
generating the front target vehicle range according to a closed area defined by the target boundary of the front target vehicle; or
generating the front target vehicle range according to a closed area enclosed by the extended target boundary of the front target vehicle; or
and generating a front target vehicle range according to a closed area formed by connecting a plurality of pixel positions of the front target vehicle.
27. The apparatus of claim 24, wherein the third identification module is to:
and identifying the turn signal of the corresponding front target vehicle according to the color, flashing frequency, or flashing sequence of the tail lamps in the front vehicle lamp identification area.
28. The apparatus of claim 17, wherein the fifth obtaining module is to:
acquiring tunnel entrance information and speed limit information from a navigation system; or
and acquiring tunnel exit information and speed limit information from a navigation system.
29. The apparatus of claim 17, wherein the fifth obtaining module is to:
identifying tunnel entrance information and speed limit information according to the first image and the second image; or
and identifying tunnel exit information and speed limit information according to the first image and the second image.
30. The apparatus of claim 17, further comprising:
and the third adjusting module is used for executing deceleration control.
31. The apparatus of claim 17, wherein the first adjustment module is to:
and changing the settings of the cruising speed upper limit and the cruising safety distance of the main vehicle according to the tunnel exit information, the speed limit information and the user setting information.
32. The apparatus of claim 24, wherein the control module is to:
identifying, according to the motion parameters and turn signal of the front target vehicle, a working condition in which a front non-own-lane target vehicle decelerates and changes lane into the own lane, so that the motion parameter control system of the subject vehicle performs braking adjustment in the tunnel in advance and the lamp system of the subject vehicle alerts the rear target vehicle;
or,
identifying, according to the motion parameters and turn signal of the front target vehicle, a working condition in which a front own-lane target vehicle decelerates and changes lane into a non-own lane, so that the motion parameter control system of the subject vehicle does not perform braking adjustment in the tunnel.
CN201710120748.9A 2017-03-02 2017-03-02 Automatic control method and device for vehicle running Active CN108528449B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710120748.9A CN108528449B (en) 2017-03-02 2017-03-02 Automatic control method and device for vehicle running

Publications (2)

Publication Number Publication Date
CN108528449A CN108528449A (en) 2018-09-14
CN108528449B true CN108528449B (en) 2020-02-21

Family

ID=63489321

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710120748.9A Active CN108528449B (en) 2017-03-02 2017-03-02 Automatic control method and device for vehicle running

Country Status (1)

Country Link
CN (1) CN108528449B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102019105547A1 (en) * 2019-03-05 2020-09-10 Bayerische Motoren Werke Aktiengesellschaft Method and control unit for recognizing a vehicle entering or exiting
CN114115174B (en) * 2021-10-27 2022-06-24 北京云汉通航科技有限公司 Fault processing method, system and device of high-speed police early-warning vehicle

Citations (6)

Publication number Priority date Publication date Assignee Title
CN101767539A (en) * 2008-12-30 2010-07-07 比亚迪股份有限公司 Automobile cruise control method and cruise device
CN103324936A (en) * 2013-05-24 2013-09-25 北京理工大学 Vehicle lower boundary detection method based on multi-sensor fusion
CN104477168A (en) * 2014-11-28 2015-04-01 长城汽车股份有限公司 Automotive adaptive cruise system and method
CN104952254A (en) * 2014-03-31 2015-09-30 比亚迪股份有限公司 Vehicle identification method and device and vehicle
CN105302125A (en) * 2015-10-10 2016-02-03 广东轻工职业技术学院 Automatic control method for vehicle
CN106463064A (en) * 2014-06-19 2017-02-22 日立汽车系统株式会社 Object recognition apparatus and vehicle travel controller using same

Also Published As

Publication number Publication date
CN108528449A (en) 2018-09-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant