CN108528431A - Automatic vehicle driving control method and device - Google Patents
Automatic vehicle driving control method and device
- Publication number
- CN108528431A CN108528431A CN201710120432.XA CN201710120432A CN108528431A CN 108528431 A CN108528431 A CN 108528431A CN 201710120432 A CN201710120432 A CN 201710120432A CN 108528431 A CN108528431 A CN 108528431A
- Authority
- CN
- China
- Prior art keywords
- vehicle
- image
- identification
- target vehicle
- objects ahead
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W10/00—Conjoint control of vehicle sub-units of different type or different function
- B60W10/18—Conjoint control of vehicle sub-units of different type or different function including control of braking systems
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60T—VEHICLE BRAKE CONTROL SYSTEMS OR PARTS THEREOF; BRAKE CONTROL SYSTEMS OR PARTS THEREOF, IN GENERAL; ARRANGEMENT OF BRAKING ELEMENTS ON VEHICLES IN GENERAL; PORTABLE DEVICES FOR PREVENTING UNWANTED MOVEMENT OF VEHICLES; VEHICLE MODIFICATIONS TO FACILITATE COOLING OF BRAKES
- B60T7/00—Brake-action initiating means
- B60T7/12—Brake-action initiating means for automatic initiation; for initiation not subject to will of driver or passenger
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- B60W30/09—Taking automatic action to avoid collision, e.g. braking and steering
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/14—Adaptive cruise control
- B60W30/143—Speed control
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
- B60W40/04—Traffic conditions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
- B60W40/06—Road conditions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W2050/0001—Details of the control system
- B60W2050/0002—Automatic control, details of type of controller or control system architecture
- B60W2050/0014—Adaptive controllers
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W2050/0001—Details of the control system
- B60W2050/0043—Signal treatments, identification of variables or parameters, parameter estimation or state estimation
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W2050/0062—Adapting control system settings
- B60W2050/0075—Automatic parameter input, automatic initialising or calibrating means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2554/00—Input parameters relating to objects
- B60W2554/80—Spatial relation or speed relative to objects
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2710/00—Output or target parameters relating to a particular sub-units
- B60W2710/18—Braking system
Abstract
The present invention proposes an automatic vehicle driving control method and device. The method includes: obtaining front highway lane lines from a first image and rear highway lane lines from a third image; mapping the front lane lines into a second image according to the interleaved mapping relationship between the first and second images to generate multiple front vehicle identification ranges; mapping the rear lane lines into a fourth image according to the interleaved mapping relationship between the third and fourth images to generate multiple rear vehicle identification ranges; identifying front target vehicles according to the front vehicle identification ranges and rear target vehicles according to the rear vehicle identification ranges; and performing cruise control on the motion parameters of the host vehicle according to the motion parameters of the front and rear target vehicles. In this way, cruise control can be performed correctly, improving the driving safety of the host vehicle.
Description
Technical field
The present invention relates to the technical field of automobile control, and in particular to an automatic vehicle driving control method and device.
Background
Currently, vehicle adaptive cruise systems usually use millimeter-wave radar, laser radar (lidar), or similar devices as ranging sensors. By installing a ranging sensor of any of these types, the host vehicle can sense multiple target vehicles ahead and adaptively adjust the motion parameters of the cruise system.
However, when multiple target vehicles travel on a curve, ranging sensors such as millimeter-wave radar and lidar cannot identify lane lines well. A host vehicle equipped only with millimeter-wave radar or lidar may therefore identify a target vehicle in its own lane as being in another lane, or identify a target vehicle in another lane as being in its own lane. This may cause the adaptive cruise system of the host vehicle to brake incorrectly or brake late, lowering the driving safety of the host vehicle.
Summary of the invention
The purpose of the present invention is to solve at least some of the technical problems in the related art.
To this end, a first objective of the present invention is to propose an automatic vehicle driving control method that enables the adaptive cruise system of the host vehicle to brake correctly, reduces unnecessary braking adjustments, effectively reduces the risk of rear-end collisions, and improves the driving safety of the host vehicle.
A second objective of the present invention is to propose an automatic vehicle driving control device.
To achieve the above objectives, an embodiment of the first aspect of the present invention proposes an automatic vehicle driving control method, including: obtaining a first image and a second image of the environment in front of the host vehicle from a front 3D camera, and obtaining a third image and a fourth image of the environment behind the host vehicle from a rear 3D camera, where the first and third images are color or luminance images and the second and fourth images are depth images; obtaining front highway lane lines from the first image and rear highway lane lines from the third image; mapping the front lane lines into the second image according to the interleaved mapping relationship between the first and second images to generate multiple front vehicle identification ranges, and mapping the rear lane lines into the fourth image according to the interleaved mapping relationship between the third and fourth images to generate multiple rear vehicle identification ranges; identifying front target vehicles according to all the front vehicle identification ranges and rear target vehicles according to all the rear vehicle identification ranges; and performing cruise control on the motion parameters of the host vehicle according to the motion parameters of the front and rear target vehicles.
With the automatic vehicle driving control method of this embodiment of the present invention, the first and second images of the environment in front of the host vehicle are first obtained from the front 3D camera, and the third and fourth images of the environment behind the host vehicle are obtained from the rear 3D camera. Front highway lane lines are obtained from the first image and rear highway lane lines from the third image. The front lane lines are then mapped into the second image according to the interleaved mapping relationship between the first and second images to generate multiple front vehicle identification ranges, and the rear lane lines are mapped into the fourth image according to the interleaved mapping relationship between the third and fourth images to generate multiple rear vehicle identification ranges. Front target vehicles are identified according to all front vehicle identification ranges and rear target vehicles according to all rear vehicle identification ranges. Finally, cruise control is performed on the motion parameters of the host vehicle according to the motion parameters of the front and rear target vehicles.
To achieve the above objectives, an embodiment of the second aspect of the present invention proposes an automatic vehicle driving control device, including: a first acquisition module for obtaining the first and second images of the environment in front of the host vehicle from the front 3D camera and the third and fourth images of the environment behind the host vehicle from the rear 3D camera, where the first and third images are color or luminance images and the second and fourth images are depth images; a second acquisition module for obtaining front highway lane lines from the first image; a third acquisition module for obtaining rear highway lane lines from the third image; a first generation module for mapping the front lane lines into the second image according to the interleaved mapping relationship between the first and second images to generate multiple front vehicle identification ranges, and mapping the rear lane lines into the fourth image according to the interleaved mapping relationship between the third and fourth images to generate multiple rear vehicle identification ranges; a first identification module for identifying front target vehicles according to all front vehicle identification ranges and rear target vehicles according to all rear vehicle identification ranges; and a control module for performing cruise control on the motion parameters of the host vehicle according to the motion parameters and turn signals of the front and rear target vehicles.
With the automatic vehicle driving control device of this embodiment of the present invention, the first and second images of the environment in front of the host vehicle are first obtained from the front 3D camera, and the third and fourth images of the environment behind the host vehicle are obtained from the rear 3D camera. Front highway lane lines are obtained from the first image and rear highway lane lines from the third image. The front lane lines are then mapped into the second image according to the interleaved mapping relationship between the first and second images to generate multiple front vehicle identification ranges, and the rear lane lines are mapped into the fourth image according to the interleaved mapping relationship between the third and fourth images to generate multiple rear vehicle identification ranges. Front target vehicles are identified according to all front vehicle identification ranges and rear target vehicles according to all rear vehicle identification ranges. Finally, cruise control is performed on the motion parameters of the host vehicle according to the motion parameters of the front and rear target vehicles.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and will in part become apparent from the description or be learned by practice of the invention.
Description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of an automatic vehicle driving control method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of an automatic vehicle driving control method provided by another embodiment of the present invention;
Fig. 3 is a schematic diagram of using a turn signal to accurately identify an other-lane target vehicle at the left rear, provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of accurately identifying a front target vehicle that completes a lane change to the right from a straight road into a left curve, provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of accurately identifying a front target vehicle that completes a lane change to the left from a straight road into a right curve, provided by an embodiment of the present invention;
Fig. 6 is a schematic flowchart of an automatic vehicle driving control method provided by yet another embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an automatic vehicle driving control device provided by an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an automatic vehicle driving control device provided by another embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary and are intended to explain the present invention; they are not to be construed as limiting the invention.
The automatic vehicle driving control method and device of the embodiments of the present invention are described below with reference to the accompanying drawings.
In general, a ranging sensor such as a millimeter-wave radar or lidar is installed to sense multiple target vehicles in front of the host vehicle and to adaptively adjust the motion parameters of the cruise system.
However, when multiple target vehicles travel on a curve, ranging sensors such as millimeter-wave radar and lidar cannot identify lane lines well. This may cause the adaptive cruise system of the host vehicle to brake incorrectly or brake late, lowering the driving safety of the host vehicle.
To solve the above problems, the present invention proposes an automatic vehicle driving control method that enables the adaptive cruise system of the host vehicle to brake correctly, reduces unnecessary braking adjustments, and improves the driving safety of the host vehicle. The details are as follows:
Fig. 1 is a schematic flowchart of an automatic vehicle driving control method provided by an embodiment of the present invention.
As shown in Fig. 1, the automatic vehicle driving control method includes the following steps:
Step 101: obtain the first and second images of the environment in front of the host vehicle from the front 3D camera, and obtain the third and fourth images of the environment behind the host vehicle from the rear 3D camera, where the first and third images are color or luminance images, and the second and fourth images are depth images.
It should be understood that there are many ways to obtain the first and second images of the environment in front of the host vehicle from the front 3D camera and the third and fourth images of the environment behind the host vehicle from the rear 3D camera, and the choice can be made according to the actual application. Examples are given below:
In a first example, the first image of the environment in front of the host vehicle is obtained from the image sensor of the front 3D camera, and the second image of the environment in front of the host vehicle is obtained from the time-of-flight sensor of the front 3D camera.
Specifically, a color or luminance image of the environment in front of the host vehicle is obtained by the image sensor as the first image, and a depth image of the environment in front of the host vehicle is obtained by the time-of-flight sensor as the second image.
In a second example, the third image of the environment behind the host vehicle is obtained from the image sensor of the rear 3D camera, and the fourth image of the environment behind the host vehicle is obtained from the time-of-flight sensor of the rear 3D camera.
Specifically, a color or luminance image of the environment behind the host vehicle is obtained by the image sensor as the third image, and a depth image of the environment behind the host vehicle is obtained by the time-of-flight sensor as the fourth image.
Step 102: obtain the front highway lane lines from the first image, and obtain the rear highway lane lines from the third image.
It should be understood that the first image may be a luminance image or a color image, and the way the front lane lines are obtained from it differs accordingly, as follows:
In a first example, when the first image is a luminance image, the front lane lines are identified according to the luminance difference between the lane lines and the road surface in the first image.
In a second example, when the first image is a color image, the color image is first converted into a luminance image, and the front lane lines are then identified according to the luminance difference between the lane lines and the road surface.
It should be noted that obtaining the front lane lines means obtaining all highway lane lines in front of the host vehicle.
Likewise, the third image may be a luminance image or a color image, and the way the rear lane lines are obtained from it differs accordingly, as follows:
In a first example, when the third image is a luminance image, the rear lane lines are identified according to the luminance difference between the lane lines and the road surface in the third image.
In a second example, when the third image is a color image, the color image is first converted into a luminance image, and the rear lane lines are then identified according to the luminance difference between the lane lines and the road surface.
It should be noted that obtaining the rear lane lines means obtaining all highway lane lines behind the host vehicle.
Step 103: map the front lane lines into the second image according to the interleaved mapping relationship between the first and second images to generate multiple front vehicle identification ranges, and map the rear lane lines into the fourth image according to the interleaved mapping relationship between the third and fourth images to generate multiple rear vehicle identification ranges.
It should be understood that the interleaved mapping relationship between the first and second images means that the row and column coordinates of each pixel of the first image, after proportional scaling, determine the row and column coordinates of at least one pixel in the second image. Therefore, each edge pixel position of the front lane lines obtained from the first image determines at least one pixel position in the second image, so that a proportionally scaled copy of the front lane lines is obtained in the second image.
Then, based on the proportionally scaled front lane lines obtained in the second image, every two adjacent front lane lines uniquely define one front vehicle identification range, so that multiple pairs of adjacent front lane lines generate multiple front vehicle identification ranges.
Similarly, the interleaved mapping relationship between the third and fourth images means that the row and column coordinates of each pixel of the third image, after proportional scaling, determine the row and column coordinates of at least one pixel in the fourth image. Therefore, each edge pixel position of the rear lane lines obtained from the third image determines at least one pixel position in the fourth image, so that a proportionally scaled copy of the rear lane lines is obtained in the fourth image.
Then, based on the proportionally scaled rear lane lines obtained in the fourth image, every two adjacent rear lane lines uniquely define one rear vehicle identification range, so that multiple pairs of adjacent rear lane lines generate multiple rear vehicle identification ranges.
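The proportional coordinate scaling and the pairing of adjacent lane lines described in step 103 can be sketched as follows. The exact interleaving details are not given in this excerpt, so plain resolution-ratio scaling is assumed here:

```python
import numpy as np

def map_coords(rows, cols, src_shape, dst_shape):
    """Proportionally scale pixel coordinates from one image to another.

    Each (row, col) of the color image is scaled by the resolution ratio
    between the two images so it lands on at least one pixel of the depth
    image, as in the interleaved mapping relationship of step 103.
    """
    r_scale = dst_shape[0] / src_shape[0]
    c_scale = dst_shape[1] / src_shape[1]
    return (np.floor(np.asarray(rows) * r_scale).astype(int),
            np.floor(np.asarray(cols) * c_scale).astype(int))

def ranges_between_lanes(lane_cols):
    """Every two adjacent lane-line positions bound one vehicle
    identification range; n lane lines yield n - 1 ranges."""
    cols = sorted(lane_cols)
    return [(cols[i], cols[i + 1]) for i in range(len(cols) - 1)]
```

For instance, mapping a lane-line edge pixel from a 480x640 first image into a 240x320 second image halves both coordinates, and three mapped lane lines yield two vehicle identification ranges.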
Step 104: identify front target vehicles according to all front vehicle identification ranges, and identify rear target vehicles according to all rear vehicle identification ranges.
It should be understood that there are many ways to identify front target vehicles according to all front vehicle identification ranges, and the choice can be made according to the actual application. Examples are given below:
In a first example, all front vehicle identification ranges are labeled as front own-lane or front other-lane. Front own-lane target vehicles are identified from the ranges labeled as the own lane, front other-lane target vehicles from the ranges labeled as other lanes, and front lane-changing target vehicles from pairs of adjacent ranges taken together.
In a second example, the object boundary of a front target vehicle is detected and identified using a boundary detection method from image processing.
Specifically, the distance and position of a front target vehicle relative to the time-of-flight sensor always change over time, while the distance and position of the road surface and isolation barriers relative to the sensor are approximately constant over time. A time-differential depth image can therefore be created from two second images (depth images) captured at different moments to detect these changes in distance and position. Front own-lane target vehicles are then identified in the ranges labeled as the own lane, front other-lane target vehicles in the ranges labeled as other lanes, and front lane-changing target vehicles in pairs of adjacent ranges taken together.
Likewise, there are many ways to identify rear target vehicles according to all rear vehicle identification ranges, and the choice can be made according to the actual application. Examples are given below:
In a first example, all rear vehicle identification ranges are labeled as rear own-lane or rear other-lane. Rear own-lane target vehicles are identified from the ranges labeled as the own lane, rear other-lane target vehicles from the ranges labeled as other lanes, and rear lane-changing target vehicles from pairs of adjacent ranges taken together.
In a second example, the object boundary of a rear target vehicle is detected and identified using a boundary detection method from image processing.
Specifically, the distance and position of a rear target vehicle relative to the time-of-flight sensor always change over time, while the distance and position of the road surface and isolation barriers relative to the sensor are approximately constant over time. A time-differential depth image can therefore be created from two fourth images (depth images) captured at different moments to detect these changes in distance and position. Rear own-lane target vehicles are then identified in the ranges labeled as the own lane, rear other-lane target vehicles in the ranges labeled as other lanes, and rear lane-changing target vehicles in pairs of adjacent ranges taken together.
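The time-differential depth image of step 104 can be sketched as follows. The 0.5 m change threshold and the minimum pixel count are illustrative assumptions; the patent does not specify values:

```python
import numpy as np

def moving_object_mask(depth_t0, depth_t1, threshold=0.5):
    """Time-differential depth image: flag pixels whose depth changed.

    Static background (road surface, isolation barriers) keeps a nearly
    constant depth between the two captures, so only moving vehicles
    exceed the threshold.
    """
    diff = np.abs(depth_t1 - depth_t0)
    return diff > threshold

def vehicles_in_range(mask, col_range, min_pixels=4):
    """Check whether a vehicle identification range (a column span between
    two adjacent lane lines) contains enough changed pixels to count as a
    detected target vehicle."""
    lo, hi = col_range
    return int(mask[:, lo:hi].sum()) >= min_pixels
```

Applying `vehicles_in_range` to each range produced in step 103 yields which lanes currently hold a target vehicle, which is what the own-lane and other-lane labeling above relies on.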
Step 105, according to the kinematic parameter of objects ahead vehicle and rear area target vehicle to the kinematic parameter of main body vehicle
Carry out cruise control.
It is understood that the movement according to the kinematic parameter of objects ahead vehicle and rear area target vehicle to main body vehicle
Parameter there are many kinds of the modes of cruise control, different control is carried out according to different operating modes, is described as follows:
Specifically, in one embodiment of the invention, a front target vehicle range may be generated according to the front target vehicle, and a rear target vehicle range may be generated according to the rear target vehicle.
It can be understood that there are many ways to generate the front target vehicle range according to the front target vehicle, and the choice can be made according to the actual application. Examples follow:
In a first example, the front target vehicle range is generated from the enclosed region surrounded by the object boundary of the front target vehicle.
In a second example, the front target vehicle range is generated from the enclosed region surrounded by an extension of the object boundary of the front target vehicle.
In a third example, the front target vehicle range is generated from the enclosed region surrounded by lines connecting multiple pixel positions of the front target vehicle.
Specifically, in one embodiment of the invention, there are likewise many ways to generate the rear target vehicle range according to the rear target vehicle, and the choice can be made according to the actual application. Examples follow:
In a first example, the rear target vehicle range is generated from the enclosed region surrounded by the object boundary of the rear target vehicle.
In a second example, the rear target vehicle range is generated from the enclosed region surrounded by an extension of the object boundary of the rear target vehicle.
In a third example, the rear target vehicle range is generated from the enclosed region surrounded by lines connecting multiple pixel positions of the rear target vehicle.
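The examples above can be sketched as computing an axis-aligned region from the boundary pixel positions, optionally extended by a margin; the function name and margin value are illustrative assumptions, not the patent's own implementation.

```python
# Minimal sketch (assumed implementation): generating a target vehicle
# "range" as the axis-aligned region enclosing the detected object-boundary
# pixel positions, optionally extended by a margin (cf. the second example).

def vehicle_range(boundary_pixels, margin=0):
    """boundary_pixels: iterable of (row, col); returns (r0, c0, r1, c1)."""
    rows = [r for r, _ in boundary_pixels]
    cols = [c for _, c in boundary_pixels]
    return (min(rows) - margin, min(cols) - margin,
            max(rows) + margin, max(cols) + margin)

boundary = [(10, 4), (10, 9), (15, 4), (15, 9)]  # toy object boundary
print(vehicle_range(boundary))            # tight range: (10, 4, 15, 9)
print(vehicle_range(boundary, margin=2))  # extended range: (8, 2, 17, 11)
```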
Specifically, in one embodiment of the invention, the front target vehicle range may further be mapped into the first image according to the interleave mapping relation between the first image and the second image to generate a front lamp identification region, and the rear target vehicle range may be mapped into the third image according to the interleave mapping relation between the third image and the fourth image to generate a rear lamp identification region.
Specifically, the interleave mapping relation between the first image and the second image means that the row-column coordinates of each pixel of the front target vehicle range in the second image, adjusted by an equal proportion, determine at least the row-column coordinates of one pixel in the first image, and the imaging of the lamps of the front target vehicle is contained within the corresponding front target vehicle range, so that a lamp identification region is generated in the first image.
Specifically, the interleave mapping relation between the third image and the fourth image means that the row-column coordinates of each pixel of the rear target vehicle range in the fourth image, adjusted by an equal proportion, determine at least the row-column coordinates of one pixel in the third image, and the imaging of the lamps of the rear target vehicle is contained within the corresponding rear target vehicle range, so that a lamp identification region is generated in the third image.
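The equal-proportion coordinate adjustment can be sketched as below; the 2-row-by-4-column ratio is an assumption taken from the 8:1 interleaved-pixel example given elsewhere in this description, not a value the mapping itself prescribes.

```python
# Sketch of the interleave mapping relation (assumed ratio: each depth
# pixel covers 2 rows x 4 columns of luminance pixels, matching the 8:1
# interleaved-pixel example). One depth-image pixel maps to a block of
# row/column coordinates in the color/luminance image.

ROW_RATIO, COL_RATIO = 2, 4  # hypothetical equal-proportion adjustment

def map_depth_pixel(row, col):
    """Map one depth-image pixel to its luminance-image pixel block."""
    return [(row * ROW_RATIO + dr, col * COL_RATIO + dc)
            for dr in range(ROW_RATIO) for dc in range(COL_RATIO)]

def map_range(r0, c0, r1, c1):
    """Map a target vehicle range in the depth image to the enclosing
    lamp identification region in the luminance image."""
    return (r0 * ROW_RATIO, c0 * COL_RATIO,
            (r1 + 1) * ROW_RATIO - 1, (c1 + 1) * COL_RATIO - 1)

print(map_depth_pixel(0, 0))    # 8 luminance pixels for one depth pixel
print(map_range(10, 4, 15, 9))  # (20, 16, 31, 39)
```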
Specifically, in one embodiment of the invention, the turn signal of the corresponding front target vehicle may further be identified according to the front lamp identification region, and the turn signal of the corresponding rear target vehicle according to the rear lamp identification region.
Specifically, the turn signal of the corresponding front target vehicle can be identified according to the color, flicker frequency, or flicker sequence of the tail lamps in the front lamp identification region.
For example, at the initial stage of a lane change by the front target vehicle, both its longitudinal displacement and its lateral displacement are small, which means the size of the lamp identification region of the front target vehicle also changes little, and only the brightness of the imaging at the turn signal changes greatly because of its flicker.
Specifically, the turn signal of the corresponding rear target vehicle can be identified according to the color, flicker frequency, or flicker sequence of the head lamps in the rear lamp identification region.
For example, at the initial stage of a lane change by the rear target vehicle, both its longitudinal displacement and its lateral displacement are small, which means the size of the lamp identification region of the rear target vehicle also changes little, and only the brightness of the imaging at the turn signal changes greatly because of its flicker.
Accordingly, there are many ways of performing cruise control on the motion parameters of the main body vehicle according to the motion parameters of the front target vehicle and the rear target vehicle, and the turn signals can additionally be used to further improve the accuracy of control. Specifically:
In a first example, an operating condition in which a front non-own-lane target vehicle decelerates and changes lanes into the own lane is recognized according to the motion parameters and turn signal of the front target vehicle, so that the motion parameter control system of the main body vehicle performs braking adjustment in advance. The motion parameter control system and the safety system of the main body vehicle are thereby adjusted earlier, which improves the driving safety of the main body vehicle and its occupants; the lamp system of the main body vehicle can also be adjusted earlier to alert the rear target vehicle, giving the rear target vehicle more braking or adjustment time and more effectively reducing the rear-end collision risk.
In a second example, an operating condition in which the front own-lane target vehicle decelerates and changes lanes into a front non-own lane is recognized according to the motion parameters and turn signal of the front target vehicle, so that the motion parameter control system of the main body vehicle performs no braking adjustment. The motion parameter control system of the main body vehicle can thereby omit unnecessary braking adjustments, reducing the rear-end collision risk caused by unnecessary braking of the main body vehicle.
In a third example, an operating condition in which a rear non-own-lane target vehicle decelerates and changes lanes into the own lane is recognized according to the motion parameters and turn signal of the rear target vehicle, so that the motion parameter control system of the main body vehicle adjusts in advance, and/or the lamp system of the main body vehicle alerts the rear target vehicle. The main body vehicle can thereby light its brake lamps in advance to warn the driver of the rear target vehicle to cancel the lane change or to decelerate while changing lanes, mitigating the rear-end collision risk between the main body vehicle and the rear target vehicle.
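The three operating-condition rules above can be sketched as a small decision function; the condition names and return labels are illustrative assumptions, not terms from the patent.

```python
# Hedged sketch of the three operating-condition rules described above
# (the position labels and action names are illustrative only).

def cruise_decision(position, turn_signal_toward_own_lane, decelerating):
    """position: 'front_own', 'front_other', or 'rear_other'."""
    if position == 'front_other' and turn_signal_toward_own_lane and decelerating:
        return 'brake_early'   # first example: brake in advance
    if position == 'front_own' and turn_signal_toward_own_lane is False and decelerating:
        return 'no_brake'      # second example: vehicle is leaving our lane
    if position == 'rear_other' and turn_signal_toward_own_lane and decelerating:
        return 'warn_rear'     # third example: light brake lamps early
    return 'maintain'

print(cruise_decision('front_other', True, True))  # brake_early
print(cruise_decision('front_own', False, True))   # no_brake
print(cruise_decision('rear_other', True, True))   # warn_rear
```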
The vehicle travel automatic control method of the embodiment of the present invention first obtains the first image and the second image of the environment in front of the main body vehicle from the front 3D camera and the third image and the fourth image of the environment behind the main body vehicle from the rear 3D camera, and obtains the front highway lane lines according to the first image and the rear highway lane lines according to the third image; it then maps the front highway lane lines into the second image according to the interleave mapping relation between the first image and the second image to generate multiple front vehicle identification ranges, and maps the rear highway lane lines into the fourth image according to the interleave mapping relation between the third image and the fourth image to generate multiple rear vehicle identification ranges; it identifies the front target vehicle according to all front vehicle identification ranges and the rear target vehicle according to all rear vehicle identification ranges; finally, it performs cruise control on the motion parameters of the main body vehicle according to the motion parameters of the front target vehicle and the rear target vehicle.
Fig. 2 is a flow diagram of the vehicle travel automatic control method provided by another embodiment of the present invention.
As shown in Fig. 2, the vehicle travel automatic control method includes the following steps:
Step 201: the first image of the environment in front of the main body vehicle is obtained from the image sensor of the front 3D camera, the second image of the environment in front of the main body vehicle from the time-of-flight sensor of the front 3D camera, the third image of the environment behind the main body vehicle from the image sensor of the rear 3D camera, and the fourth image of the environment behind the main body vehicle from the time-of-flight sensor of the rear 3D camera; wherein, the first image and the third image are color or luminance images, and the second image and the fourth image are depth images.
Specifically, an image sensor refers to an array or set of luminance pixel sensors, for example red-green-blue (RGB) or luminance-chrominance (YUV) pixel sensors, and is commonly used to obtain a luminance image of the environment.
Specifically, a time-of-flight sensor refers to an array or set of time-of-flight pixel sensors; a time-of-flight pixel sensor may be, for example, a photosensor or a phase detector, and can detect the flight time of light from a pulsed or modulated light source propagating between the time-of-flight pixel sensor and the object to be detected, thereby detecting the object distance and obtaining a depth image.
For example, the image sensor or the time-of-flight sensor can be made using a complementary metal-oxide-semiconductor (CMOS) process, and the luminance pixel sensors and the time-of-flight pixel sensors can be produced proportionally on the same substrate. For instance, 8 luminance pixel sensors and 1 time-of-flight pixel sensor made at a ratio of 8:1 compose one large interleaved pixel, wherein the photosensitive area of the 1 time-of-flight pixel sensor can equal the photosensitive area of the 8 luminance pixel sensors, and the 8 luminance pixel sensors can be arranged in an array of 2 rows and 4 columns.
Further, as another example, an array of 360 rows and 480 columns of the above active interleaved pixels can be made on a substrate with a 1-inch optical target surface, yielding an active luminance pixel sensor array of 720 rows and 1920 columns and an active TOF pixel sensor array of 360 rows and 480 columns; the same camera composed of the image sensor and the time-of-flight sensor can thereby obtain a color or luminance image and a depth image simultaneously.
Therefore, in the present embodiment, the same front 3D camera can be used to obtain the first image and the second image of the environment in front of the main body vehicle, wherein the first image is a color or luminance image and the second image is a depth image; and the same rear 3D camera can be used to obtain the third image and the fourth image of the environment behind the main body vehicle, wherein the third image is a color or luminance image and the fourth image is a depth image.
Wherein, the pixels of the color or luminance image and the pixels of the depth image are characterized by a proportional interleaved arrangement. Moreover, the same front 3D camera obtaining the first image and the second image, and the same rear 3D camera obtaining the third image and the fourth image, can be manufactured using a CMOS process; according to Moore's law of the semiconductor industry, 3D cameras will reach a sufficiently low production cost within a limited period.
Step 202: when the first image is a luminance image, the front highway lane lines are identified according to the luminance difference between the front highway lane lines and the road surface in the first image; when the third image is a luminance image, the rear highway lane lines are identified according to the luminance difference between the rear highway lane lines and the road surface in the third image.
It should be noted that the front (rear) highway lane lines include both front (rear) solid lane lines and front (rear) dashed lane lines. When the first image and the third image are both luminance images, the detailed processes of identifying the front and the rear highway lane lines are identical; the following description therefore takes the first image as a luminance image and identifies the front highway lane lines according to the luminance difference between the front highway lane lines and the road surface in the first image:
Specifically, a binary image of the front highway lane lines is first created according to the luminance information of the first image and a preset luminance threshold. It can be appreciated that, by exploiting the luminance difference between the front highway lane lines and the road surface, a certain luminance threshold can be found and used as the preset luminance threshold; the preset luminance threshold can be found using a "histogram statistics - bimodal" algorithm, and the binary image of the front highway lane lines is created using the preset luminance threshold and the luminance image.
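The "histogram statistics - bimodal" thresholding idea can be sketched as below; the patent names the algorithm but does not detail it, so the mode-search here is a crude illustrative stand-in with assumed parameters.

```python
# Hedged sketch of the "histogram statistics - bimodal" thresholding idea:
# lane-line pixels are much brighter than road-surface pixels, so the
# luminance histogram has two modes and the threshold is placed between
# them. Taking the midpoint of the two mode bins is a simplification.

def bimodal_threshold(pixels):
    hist = [0] * 256
    for row in pixels:
        for v in row:
            hist[v] += 1
    # first mode: overall peak; second mode: peak at least 50 bins away
    m1 = max(range(256), key=lambda b: hist[b])
    m2 = max((b for b in range(256) if abs(b - m1) >= 50),
             key=lambda b: hist[b])
    return (m1 + m2) // 2

def binarize(pixels, threshold):
    return [[1 if v > threshold else 0 for v in row] for row in pixels]

road = [[30, 30, 200, 30], [30, 200, 30, 30]]  # toy luminance image
t = bimodal_threshold(road)
print(t)                   # midpoint between road (30) and lane (200)
print(binarize(road, t))   # bright lane pixels become 1
```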
It will also be appreciated that the luminance image can be divided into multiple luminance sub-images, the "histogram statistics - bimodal" algorithm executed on each luminance sub-image to find multiple luminance thresholds, a binary sub-image of the front highway lane lines created from each luminance threshold and the corresponding luminance sub-image, and the complete binary image of the front highway lane lines created from the binary sub-images.
Further, all edge pixel positions of a straight-road solid lane line, or all edge pixel positions of a curved-road solid lane line, are detected in the binary image according to a preset detection algorithm.
It can be understood that the radius of curvature of a front highway lane line cannot be too small and that, owing to the camera projection principle, the nearby part of the front highway lane line contributes far more imaging pixels than the distant part, so that even on a curve the pixels of the front solid lane line that lie along a straight line in the luminance image account for the majority of the imaging pixels of the front solid lane line.
Thus, all edge pixel positions of a straight-road solid lane line, or of a curved-road solid lane line, can be detected in the binary image of the front highway lane lines by a preset detection algorithm such as the Hough transform.
It should be noted that the above preset detection algorithm may also detect the straight-line edge pixel positions of median strips and utility poles in the binary image. According to the aspect ratio of the image sensor, the focal length of the 3D camera lens, the road width range of the highway design specification, and the installation position of the image sensor on the main body vehicle, a slope range of the lane lines in the binary image can be set, and straight lines outside this slope range are filtered out as non-lane lines.
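The slope-range filtering can be sketched as below; the slope bounds are hypothetical values standing in for the range derived from the camera installation and road-design parameters named above.

```python
# Sketch (assumed parameter values) of filtering detected straight lines
# by slope: lines whose slope falls outside the range expected for lane
# lines, given the camera installation, are discarded as median strips,
# utility poles, or other non-lane-line edges.

def filter_lane_lines(lines, min_abs_slope=0.4, max_abs_slope=5.0):
    """lines: list of ((r0, c0), (r1, c1)) endpoint pairs in the image.
    Slope is measured as rows per column; near-vertical pole edges and
    near-horizontal edges are rejected."""
    kept = []
    for (r0, c0), (r1, c1) in lines:
        if c0 == c1:          # vertical line, e.g. a pole edge
            continue
        slope = abs((r1 - r0) / (c1 - c0))
        if min_abs_slope <= slope <= max_abs_slope:
            kept.append(((r0, c0), (r1, c1)))
    return kept

lines = [((0, 0), (100, 80)),   # plausible lane line, slope 1.25
         ((0, 50), (100, 50)),  # vertical pole edge
         ((10, 0), (12, 90))]   # near-horizontal edge, slope ~0.02
print(filter_lane_lines(lines))  # only the first line survives
```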
It should be noted that the edge pixel positions of a curved solid lane line always vary continuously. Therefore, pixel positions connected to the edge pixel positions at both ends of the detected initial straight segment are searched for, the connected pixel positions are incorporated into the edge pixel set of that initial straight segment, and the search-and-incorporate step is repeated until all edge pixel positions of the curved solid lane line are uniquely determined.
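The search-and-incorporate step can be sketched as a connected-pixel growth from the initial straight segment; 8-connectivity is an assumption, since the patent does not specify the neighborhood used.

```python
# Hedged sketch of the search-and-incorporate step: starting from the
# edge pixels of the initial straight segment, repeatedly absorb
# 8-connected neighboring edge pixels until the whole (possibly curved)
# lane-line edge has been collected.

def grow_lane_edge(binary, seed_pixels):
    """binary: 2D list of 0/1; seed_pixels: pixels of the initial
    straight segment. Returns the full set of connected edge pixels."""
    rows, cols = len(binary), len(binary[0])
    collected = set(seed_pixels)
    frontier = list(seed_pixels)
    while frontier:
        r, c = frontier.pop()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and binary[nr][nc] == 1
                        and (nr, nc) not in collected):
                    collected.add((nr, nc))
                    frontier.append((nr, nc))
    return collected

# A short edge that starts straight and then bends one column sideways.
img = [[1, 0, 0],
       [1, 0, 0],
       [0, 1, 0],
       [0, 1, 0]]
print(sorted(grow_lane_edge(img, [(0, 0)])))  # all four edge pixels
```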
It will be appreciated that the above approach can also detect all edge pixel positions of front dashed lane lines.
As one example, according to the prior knowledge of the solid lane lines, the principle that lane lines are mutually parallel, and the projection parameters of the image sensor and the 3D camera, all edge pixel positions of a solid lane line are projected onto the edge pixel positions of the initial straight segment of a dashed lane line, so as to connect the edge pixel positions of that initial straight segment with the edge pixel positions of the other, shorter segments belonging to the same dashed lane line, thereby obtaining all edge pixel positions of the dashed lane line.
As another example, no prior knowledge of straight or curved road needs to be obtained. While the vehicle cruises on a straight road, or on a curve at a constant steering angle, the lateral offset of a dashed lane line is almost negligible within a short continuous time span, but its longitudinal offset is large; the dashed lane line can therefore be superimposed into one solid lane line across the binary images of several consecutive highway lane line frames taken at different moments, and all edge pixel positions of the dashed lane line are then obtained by the solid-lane-line recognition method described above.
It can be understood that, since the longitudinal offset of the dashed lane line is affected by the speed of the main body vehicle, the minimum number of binary images of consecutive highway lane line frames at different moments needed to superimpose the dashed lane line into one solid lane line can be determined dynamically from the speed obtained from the wheel speed sensor, so as to obtain all edge pixel positions of the dashed lane line.
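The speed-dependent frame count can be sketched as below; the formula, dash period, and frame interval are assumptions for illustration, since the patent states only that the number is determined dynamically from the wheel-speed-sensor speed.

```python
# Hedged sketch (formula and constants are assumptions, not from the
# patent): the faster the main body vehicle travels, the larger the
# longitudinal offset per frame, so fewer consecutive binary images are
# needed before the dash segments overlap into one solid line.

import math

DASH_PERIOD_M = 15.0    # assumed dash + gap length on the road, meters
FRAME_INTERVAL_S = 0.1  # assumed camera frame interval, seconds

def min_frames_to_superimpose(speed_mps):
    """Frames needed so the accumulated longitudinal travel covers one
    full dash period (then every road position was covered by a dash)."""
    travel_per_frame = speed_mps * FRAME_INTERVAL_S
    return max(2, math.ceil(DASH_PERIOD_M / travel_per_frame) + 1)

print(min_frames_to_superimpose(30.0))  # highway speed: 6 frames
print(min_frames_to_superimpose(10.0))  # city speed: more frames needed
```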
It should be noted that, in the present embodiment, "highway lane line" refers to all edge pixel positions of the highway lane line.
It should be noted that identifying the rear highway lane lines according to the luminance difference between the rear highway lane lines and the road surface in the third image includes: creating a binary image of the rear highway lane lines according to the luminance information of the third image and a preset luminance threshold; detecting in the binary image, according to a preset detection algorithm, all edge pixel positions of a straight-road solid lane line or all edge pixel positions of a curved-road solid lane line; and detecting in the binary image, according to a preset detection algorithm, all edge pixel positions of a straight-road dashed lane line or all edge pixel positions of a curved-road dashed lane line. For details, reference can be made to the above description of identifying the front highway lane lines according to the luminance difference between the front highway lane lines and the road surface in the first image, which is not repeated here.
Step 203: the front highway lane lines are mapped into the second image according to the interleave mapping relation between the first image and the second image to generate multiple front vehicle identification ranges, and the rear highway lane lines are mapped into the fourth image according to the interleave mapping relation between the third image and the fourth image to generate multiple rear vehicle identification ranges.
It should be noted that the description of step S203 corresponds to that of the above step S103; reference can be made to the description of step S103, and details are not repeated here.
Step 204: all front vehicle identification ranges are marked with front own-lane and front non-own-lane labels; the front own-lane target vehicle is identified according to the vehicle identification range carrying the front own-lane label, the front non-own-lane target vehicle according to the front vehicle identification range carrying the front non-own-lane label, and the front lane-changing target vehicle according to pairwise combinations of front vehicle identification ranges.
In one embodiment of the present invention, according to the equal-proportion front highway lane lines obtained in the second image, the ratio of the number of rows to the number of columns occupied by the initial straight portion of each front highway lane line is taken as the slope of that lane line's initial straight segment; the front vehicle identification range created from the two front highway lane lines whose initial straight segments have the largest slopes is marked with the own-lane label, and the front vehicle identification ranges created from the other lane lines are marked with the non-own-lane label.
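The own-lane labeling rule above can be sketched as follows; the slope approximation (distinct rows over distinct columns) and the toy lane-line data are illustrative assumptions.

```python
# Hedged sketch of the own-lane labeling rule: the slope of each lane
# line's initial straight portion is approximated by rows occupied over
# columns occupied; the two steepest lines bound the own lane, since the
# own-lane markings appear most vertical in the image.

def label_lanes(lane_lines):
    """lane_lines: dict name -> list of (row, col) edge pixels of the
    initial straight portion. Returns (own_lane_pair, other_lines)."""
    def slope(pixels):
        rows = {r for r, _ in pixels}
        cols = {c for _, c in pixels}
        return len(rows) / max(len(cols), 1)
    ranked = sorted(lane_lines, key=lambda n: slope(lane_lines[n]),
                    reverse=True)
    return set(ranked[:2]), set(ranked[2:])

lines = {
    'left_own':  [(r, 100 - r // 4) for r in range(40)],  # steep
    'right_own': [(r, 140 + r // 4) for r in range(40)],  # steep
    'far_left':  [(r, 60 - r) for r in range(40)],        # shallow
}
own, other = label_lanes(lines)
print(sorted(own))  # the two steepest lines bound the own lane
```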
It can be understood that the second image is a depth image, and the depth sub-image formed by light reflected from the back of one and the same front target vehicle onto the TOF sensor contains consistent distance information; therefore, as long as the position, within the depth image, of the depth sub-image formed by the front target vehicle is identified, the distance information of the front target vehicle can be obtained. Here, a sub-image refers to a combination of a part of the pixels of an image.
It can be understood that the depth sub-image formed by light reflected from the back of one and the same front target vehicle onto the TOF sensor contains consistent distance information, whereas the depth sub-image formed by light reflected from the road surface onto the TOF sensor contains continuously varying distance information; a depth sub-image with consistent distance information and a depth image region with continuously varying distance information therefore necessarily form abrupt differences at their junction, and the boundary of these abrupt differences forms the object boundary of the front target vehicle in the depth image.
Further, the object boundary of the front target vehicle can be detected using various boundary detection methods from image processing algorithms, such as the Canny, Sobel, or Laplace operators.
Further, the vehicle identification range is determined by all pixel positions of the lane lines; detecting the object boundary of the front target vehicle within the vehicle identification range therefore reduces interference from boundaries formed by road facilities such as median strips, light poles, and guard posts.
Specifically, the object boundary detected within each vehicle identification range is projected onto the row coordinate axis of the image, and a one-dimensional search is performed on the row coordinate axis; this determines the number of rows and the row coordinate range occupied by the longitudinal object boundaries of all front target vehicles within the front vehicle identification range, as well as the number of columns and the column coordinate positions occupied by the lateral object boundaries. Here, a longitudinal object boundary refers to an object boundary occupying many pixel rows and few columns, while a lateral object boundary refers to one occupying few pixel rows and many columns.
Further, the number of columns and the column coordinate positions occupied by all lateral object boundaries, together with the column coordinate positions of all longitudinal object boundaries (namely the start and end positions of the column coordinates of the respective lateral object boundaries), are searched within the front vehicle identification range, and the object boundaries of different target vehicles are distinguished according to the principle that each object boundary contains consistent distance information, thereby determining the positions and distance information of all front target vehicles within the front vehicle identification range.
Therefore, detecting and obtaining the object boundary of the front target vehicle uniquely determines the position, within the depth image, of the depth sub-image formed by the front target vehicle, and thus uniquely determines the distance information of the front target vehicle.
Multiple front target vehicles and their distance information can thereby be detected simultaneously by the boundary detection method of the above example.
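The projection-and-search step above can be sketched as follows; grouping by row runs and averaging depth as the "consistent distance information" check are simplifying assumptions, not the patent's exact procedure.

```python
# Hedged sketch: project boundary pixels detected inside one vehicle
# identification range onto the row axis and do a one-dimensional search
# for runs of occupied rows; each run, checked for a consistent depth,
# is treated as one target vehicle at that distance.

def find_vehicles(boundary_pixels, depth_of):
    """boundary_pixels: list of (row, col) boundary points inside one
    identification range; depth_of: dict pixel -> depth. Groups boundary
    pixels into row runs and reports (row_start, row_end, depth)."""
    occupied = sorted({r for r, _ in boundary_pixels})
    runs, run = [], [occupied[0]]
    for r in occupied[1:]:
        if r == run[-1] + 1:
            run.append(r)
        else:
            runs.append(run)
            run = [r]
    runs.append(run)
    result = []
    for run in runs:
        depths = [depth_of[(r, c)] for r, c in boundary_pixels if r in run]
        result.append((run[0], run[-1], sum(depths) / len(depths)))
    return result

pixels = [(10, 5), (11, 5), (12, 5), (30, 7), (31, 7)]  # two vehicles
depths = {(10, 5): 20.0, (11, 5): 20.0, (12, 5): 20.0,
          (30, 7): 8.0, (31, 7): 8.0}
print(find_vehicles(pixels, depths))  # [(10, 12, 20.0), (30, 31, 8.0)]
```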
Further, according to the above example, the front own-lane target vehicle is identified within the vehicle identification range carrying the front own-lane label, the front non-own-lane target vehicle within the vehicle identification range carrying the front non-own-lane label, and the front lane-changing target vehicle within the pairwise combinations of front vehicle identification ranges.
Step 205: all rear vehicle identification ranges are marked with rear own-lane and rear non-own-lane labels; the rear own-lane target vehicle is identified according to the vehicle identification range carrying the rear own-lane label, the rear non-own-lane target vehicle according to the vehicle identification range carrying the rear non-own-lane label, and the rear lane-changing target vehicle according to pairwise combinations of rear vehicle identification ranges.
In one embodiment of the present invention, according to the equal-proportion rear highway lane lines obtained in the fourth image, the ratio of the number of rows to the number of columns occupied by the initial straight portion of each rear highway lane line is taken as the slope of that lane line's initial straight segment; the rear vehicle identification range created from the two rear highway lane lines whose initial straight segments have the largest slopes is marked with the own-lane label, and the rear vehicle identification ranges created from the other lane lines are marked with the non-own-lane label.
It can be understood that the fourth image is a depth image, and the depth sub-image formed by light reflected from one and the same rear target vehicle onto the TOF sensor contains consistent distance information; therefore, as long as the position, within the depth image, of the depth sub-image formed by the rear target vehicle is identified, the distance information of the rear target vehicle can be obtained. Here, a sub-image refers to a combination of a part of the pixels of an image.
It can be understood that the depth sub-image formed by light reflected from one and the same rear target vehicle onto the TOF sensor contains consistent distance information, whereas the depth sub-image formed by light reflected from the road surface onto the TOF sensor contains continuously varying distance information; a depth sub-image with consistent distance information and a depth image region with continuously varying distance information therefore necessarily form abrupt differences at their junction, and the boundary of these abrupt differences forms the object boundary of the rear target vehicle in the depth image.
Further, the object boundary of the rear target vehicle can be detected using various boundary detection methods from image processing algorithms, such as the Canny, Sobel, or Laplace operators.
Further, the rear vehicle identification range is determined by all pixel positions of the lane lines; detecting the object boundary of the rear target vehicle within the rear vehicle identification range therefore reduces interference from boundaries formed by road facilities such as median strips, light poles, and guard posts.
Specifically, the object boundary detected within each rear vehicle identification range is projected onto the row coordinate axis of the image, and a one-dimensional search is performed on the row coordinate axis; this determines the number of rows and the row coordinate range occupied by the longitudinal object boundaries of all rear target vehicles within the rear vehicle identification range, as well as the number of columns and the column coordinate positions occupied by the lateral object boundaries. Here, a longitudinal object boundary refers to an object boundary occupying many pixel rows and few columns, while a lateral object boundary refers to one occupying few pixel rows and many columns.
Further, the number of columns and the column coordinate positions occupied by all lateral object boundaries, together with the column coordinate positions of all longitudinal object boundaries (namely the start and end positions of the column coordinates of the respective lateral object boundaries), are searched within the rear vehicle identification range, and the object boundaries of different target vehicles are distinguished according to the principle that each object boundary contains consistent distance information, thereby determining the positions and distance information of all rear target vehicles within the rear vehicle identification range.
Therefore, detecting and obtaining the object boundary of the rear target vehicle uniquely determines the position, within the depth image, of the depth sub-image formed by the rear target vehicle, and thus uniquely determines the distance information of the rear target vehicle.
Multiple rear target vehicles and their distance information can thereby be detected simultaneously by the boundary detection method of the above example.
Further, according to the above example, the rear own-lane target vehicle is identified within the vehicle identification range carrying the rear own-lane label, the rear non-own-lane target vehicle within the vehicle identification range carrying the rear non-own-lane label, and the rear lane-changing target vehicle within the pairwise combinations of rear vehicle identification ranges.
Step 206: the front target vehicle range is generated from the enclosed region surrounded by the object boundary of the front target vehicle; the front target vehicle range is mapped into the first image according to the interleave mapping relation between the first image and the second image to generate a front lamp identification region, and the rear target vehicle range is mapped into the third image according to the interleave mapping relation between the third image and the fourth image to generate a rear lamp identification region.
Specifically, the interleave mapping relation between the first image and the second image means that the row-column coordinates of each pixel of the front target vehicle range in the second image, adjusted by an equal proportion, determine at least the row-column coordinates of one pixel in the first image, and the imaging of the lamps of the front target vehicle is contained within the corresponding front target vehicle range, so that a lamp identification region is generated in the first image.
Specifically, the interleave mapping relation between the third image and the fourth image means that the row-column coordinates of each pixel of the rear target vehicle range in the fourth image, adjusted by an equal proportion, determine at least the row-column coordinates of one pixel in the third image, and the imaging of the lamps of the rear target vehicle is contained within the corresponding rear target vehicle range, so that a lamp identification region is generated in the third image.
Step 207: the turn signal of the corresponding front target vehicle is identified according to the color, flicker frequency, or flicker sequence of the tail lamps in the front lamp identification region, and the turn signal of the corresponding rear target vehicle is identified according to the color, flicker frequency, or flicker sequence of the head lamps in the rear lamp identification region.
Specifically, color or luminance images at several different moments are acquired continuously, and temporal-difference processing is applied to the lamp identification region of the front (rear) target vehicle to create a temporal-difference lamp identification region sub-image of the front (rear) target vehicle.
The temporal-difference sub-image highlights the continuously flickering tail lights (headlights) of the front (rear) target vehicle. The temporal-difference sub-image is then projected onto the column coordinate axis, and a one-dimensional search yields the start and end column coordinates of the tail light (headlight) sub-image of the front (rear) target vehicle. These start and end column coordinates are projected back onto the temporal-difference sub-image to find the start and end row coordinates of the tail light (headlight) sub-image. The start and end row and column coordinates of the tail light (headlight) sub-image are then projected into the color or luminance images at the several different moments mentioned above to confirm the color, flicker frequency, or flicker sequence of the tail lights (headlights) of the front (rear) target vehicle, thereby determining the row and column coordinates of the flickering tail light (headlight) sub-image.
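The temporal-difference and column-projection procedure above can be sketched as follows (an illustrative sketch only, not part of the claimed embodiment; the use of NumPy, the function name, and the threshold value are assumptions):

```python
import numpy as np

def blinking_lamp_columns(frames, threshold=50):
    """Locate a blinking lamp inside a lamp identification region.

    frames: list of 2-D luminance arrays (the lamp identification
    region cropped from several consecutive images).
    Returns (start_col, end_col) of the sub-image whose brightness
    changes over time, or None if nothing blinks.
    """
    stack = np.stack([f.astype(np.int32) for f in frames])
    # Temporal difference: pixels of a steady scene cancel out,
    # pixels of a blinking lamp leave a large absolute difference.
    diff = np.abs(np.diff(stack, axis=0)).max(axis=0)
    # Project onto the column axis for a one-dimensional search.
    profile = diff.max(axis=0)
    cols = np.flatnonzero(profile > threshold)
    if cols.size == 0:
        return None
    return int(cols[0]), int(cols[-1])
```

The same start/end columns can then be projected back onto the difference image row-wise to obtain the row extent, as the text describes.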
Further, when the row and column coordinates of the flickering tail light (headlight) sub-image lie only on the left side of the lamp identification region of the front (rear) target vehicle, it can be determined that the front (rear) target vehicle is flashing its left turn signal; when they lie only on the right side of the lamp identification region, it can be determined that the front (rear) target vehicle is flashing its right turn signal; and when flickering sub-images appear on both sides of the lamp identification region, it can be determined that the front (rear) target vehicle has its hazard warning lamps on.
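The left-side/right-side/both-sides decision above can be expressed as a small helper (illustrative only; the function name, the midline split, and the span representation are assumptions):

```python
def classify_turn_signal(blink_spans, region_width):
    """Classify a blinking lamp by column position inside the lamp
    identification region.

    blink_spans: list of (start_col, end_col) of flickering sub-images.
    Returns 'left', 'right', 'hazard' (both sides blinking), or None.
    """
    mid = region_width / 2.0
    left = any(end < mid for start, end in blink_spans)
    right = any(start > mid for start, end in blink_spans)
    if left and right:
        return "hazard"
    if left:
        return "left"
    if right:
        return "right"
    return None
```

For a region 100 pixels wide, a span at columns (2, 10) would classify as a left turn signal, (80, 95) as a right turn signal, and both together as hazard lamps.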
It should be noted that during a lane change of the front (rear) target vehicle, its longitudinal or lateral displacement can be large, so the size of its lamp identification region also changes considerably. The lamp identification regions of the front (rear) target vehicle acquired continuously at several different moments can therefore be compensated for longitudinal or lateral displacement and scaled to a uniform size, after which temporal-difference processing is applied to the adjusted lamp identification regions to create the temporal-difference lamp identification region sub-image of the front (rear) target vehicle.
Further, the temporal-difference sub-image is projected onto the column coordinate axis, and a one-dimensional search yields the start and end column coordinates of the tail light (headlight) sub-image of the front (rear) target vehicle. These start and end column coordinates are projected back onto the temporal-difference sub-image to find the start and end row coordinates of the tail light (headlight) sub-image. The start and end row and column coordinates are projected into the color or luminance images at the several different moments to confirm the color, flicker frequency, or flicker sequence of the tail lights (headlights) of the front (rear) target vehicle, thereby determining the row and column coordinates of the flickering tail light (headlight) sub-image and finally completing the identification of the left turn signal, right turn signal, or hazard warning lamps.
Step 208: according to the kinematic parameters and turn signal of the front target vehicle, recognize the operating condition in which a front adjacent-lane target vehicle is decelerating and changing into this lane, so that the kinematic parameter control system of the host vehicle performs a braking adjustment in advance.
It should be noted that, because the left and right lane lines of this lane ahead are used as the reference, the lane change of the front target vehicle can be identified accurately whether it occurs on a straight road or on a curve, and whether the vehicle changes lanes to the left or to the right, thereby providing an accurate motion control basis for the vehicle adaptive cruise system.
As a result, the host vehicle can light its brake lamps in advance to warn the driver of the rear target vehicle to cancel the lane change or to decelerate while changing lanes, thereby reducing the risk of a rear-end collision between the host vehicle and the rear target vehicle.
As another implementation, according to the kinematic parameters and turn signal of the front target vehicle, the operating condition in which the front target vehicle in this lane is decelerating and changing into an adjacent lane ahead is recognized, so that the kinematic parameter control system of the host vehicle performs no braking adjustment.
For example, when the right turn signal of the target vehicle in this lane is identified as lit, the pixel distance from the left target boundary of the front target vehicle to the left lane line of this lane is converted through the camera projection relationship into a lateral distance P. N first images and second images at different moments are acquired continuously (the time to acquire one first image or second image is T), during which the change in the distance R of the target vehicle is identified and recorded. When the front target vehicle is identified as having just completed the lane change into the adjacent lane on the right side of this lane, its left target boundary coincides with the right lane line of this lane, whose width is D. Therefore, the kinematic parameters of the front target vehicle during the continuous lane change are: duration N × T, distance R relative to the host vehicle, and lateral displacement (D − P).
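The worked example above reduces to simple arithmetic (an illustrative sketch; the function and parameter names are assumptions, as are the example values in the usage note):

```python
def lane_change_kinematics(n_frames, frame_period, lane_width, initial_lateral_gap):
    """Kinematic parameters of a target vehicle observed over a lane
    change, following the worked example in the text:
      duration of the manoeuvre  = n_frames * frame_period   (N * T)
      total lateral displacement = lane_width - initial_lateral_gap  (D - P)
    since the target started initial_lateral_gap metres from the left
    lane line and finished flush with the right lane line.
    Returns (duration_s, lateral_displacement_m, mean_lateral_speed)."""
    duration = n_frames * frame_period
    lateral = lane_width - initial_lateral_gap
    return duration, lateral, lateral / duration
```

With assumed example values N = 50, T = 0.04 s, D = 3.75 m, P = 0.75 m, this gives a 2-second manoeuvre with 3 m of lateral displacement at a mean lateral speed of 1.5 m/s.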
Therefore, according to the distance R identified during the lane change of the front target vehicle, the host vehicle's adaptive cruise system can maintain constant-speed cruising as long as R remains greater than the set safe cruise braking distance. Moreover, even if the front target vehicle is identified as having just completed the lane change into the adjacent lane on the right, with its left target boundary coinciding with the right lane line of this lane while R is smaller than the safe cruise braking distance, the host vehicle's adaptive cruise system may merely reduce power output slightly and wait until the target vehicle is identified as continuing to move to the right, producing a larger and safer lateral displacement, before restoring power output and resuming constant-speed cruising.
As a result, the kinematic parameter control system of the host vehicle can avoid unnecessary braking adjustments, thereby reducing the rear-end collision risk caused by unnecessary braking of the host vehicle.
As another implementation, according to the kinematic parameters and turn signal of the rear target vehicle, the operating condition in which a rear adjacent-lane target vehicle is decelerating and changing into this lane is recognized, so that the kinematic parameter control system of the host vehicle adjusts in advance and/or the lamp system of the host vehicle alerts the rear target vehicle.
Thus, once the operating condition in which a rear adjacent-lane target vehicle is decelerating and changing into the host vehicle's lane is recognized, the host vehicle can light its brake lamps in advance to warn the driver of the rear target vehicle to cancel the lane change or to decelerate while changing lanes, thereby reducing the risk of a rear-end collision between the host vehicle and the rear target vehicle.
To help those skilled in the art better understand the control process of the above embodiment, it is explained below with reference to Fig. 3 to Fig. 5.
It can be understood that the lateral displacement identified above uses the left and right lane lines of this lane as the reference, so the lane change of a target vehicle can be identified accurately whether it occurs on a straight road or on a curve, and whether the target vehicle changes lanes to the left or to the right, thereby providing an accurate control basis for the vehicle adaptive cruise system.
As shown in Fig. 3, in a common straight-road or curve operating condition, the host vehicle can be controlled to cruise at constant speed following the target vehicle ahead in this lane, whose identified distance is RA, while simultaneously identifying an adjacent-lane target vehicle at the left rear of the host vehicle, whose distance is RB, and recognizing from its flickering right turn signal that it intends to change lanes. When the distance RB is too small, a rear-end collision is likely if the rear target vehicle changes lanes into the space behind the host vehicle. Because the initial lane-change intention is identified as soon as the rear target vehicle starts flashing its turn signal, the brake lamps of the host vehicle can be lit in advance to warn the driver of the rear target vehicle to cancel the lane change or to decelerate while changing lanes, thereby reducing the risk of a rear-end collision between the host vehicle and the rear target vehicle.
Furthermore, if the rear target vehicle is accurately identified as forcing the lane change without decelerating, as shown by its lateral displacement relative to the lane line of this lane, the cruise system of the host vehicle can automatically and appropriately increase speed to reduce the following distance to the front target vehicle and increase the distance to the rear target vehicle, thereby reducing the risk of a rear-end collision between the host vehicle and the rear target vehicle.
By contrast, a traditional vehicle relying only on millimeter-wave radar or lidar cannot judge the lane-change intention of a rear target vehicle until its lateral displacement is large enough, which increases the rear-end collision risk.
Therefore, by identifying the kinematic parameters of the rear target vehicle together with its turn signal, the operating condition in which a rear adjacent-lane target vehicle is decelerating and changing into the host vehicle's lane can be recognized, so that the host vehicle can light its brake lamps in advance to warn the driver of the rear target vehicle to cancel the lane change or to decelerate while changing lanes, thereby reducing the risk of a rear-end collision between the host vehicle and the rear target vehicle.
Further, in a traditional vehicle adaptive cruise system relying only on millimeter-wave radar or lidar, the lateral displacement of an identified target vehicle is referenced to the host vehicle, and a lateral displacement referenced to the host vehicle sometimes cannot provide an accurate motion control basis for the adaptive cruise system.
As shown in Fig. 4, when the front target vehicle in this lane has just completed a lane change to the right off this lane while on a curve bending to the left, the millimeter-wave radar or lidar of a traditional vehicle, still on the straight section, may continue to identify part of the front target vehicle as being in this lane. With a curve radius of 250 meters and the front target vehicle having travelled 25 meters along the curve during the lane change, the right lane line of this lane, with which the left target boundary of the front target vehicle coincides, is offset to the left by about 1.25 meters at the 25-meter point of the curve relative to the straight extension of the lane line.
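The roughly 1.25-meter figure used in this example follows from the circle geometry: for a curve of radius R, after travelling a distance s along the entry tangent direction, the curve sits R − sqrt(R² − s²) ≈ s²/(2R) from the straight extension. A quick check (illustrative only; the function name is an assumption):

```python
import math

def curve_lateral_offset(radius_m, distance_m):
    """Lateral offset between a circular curve and the straight-line
    extension of its entry tangent, after travelling distance_m along
    the tangent direction (exact chord geometry)."""
    return radius_m - math.sqrt(radius_m**2 - distance_m**2)

offset = curve_lateral_offset(250.0, 25.0)  # about 1.25 m, matching the text
```

The small-angle approximation s²/(2R) = 625 / 500 = 1.25 m agrees with the exact value to within a few millimeters.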
It can be understood that if the millimeter-wave radar or lidar of the traditional vehicle identifies the front target vehicle at a distance of 50 to 80 meters at this moment, i.e. the traditional vehicle is on the straight section and still 25 to 55 meters from the curve entrance, then, lacking prior knowledge of the curve, it will identify the front target vehicle as still having a vehicle body of about 1.25 meters in width within this lane. As the target vehicle continues to decelerate along the left-bending curve, the radar or lidar will identify an even wider portion of the vehicle body as being within this lane. The traditional vehicle's millimeter-wave radar or lidar thus produces inaccurate identification, causing its adaptive cruise system to execute continuous, inaccurate, and unnecessary braking, which increases the risk of a rear-end collision between the traditional vehicle and the target vehicle behind it.
Similarly, the millimeter-wave radar or lidar of the traditional vehicle is also inaccurate in identifying a target vehicle in this lane that has just completed a lane change to the left on a curve bending to the right.
Therefore, according to the identified kinematic parameters of the target vehicle and the corresponding identification of its turn signal, the operating condition in which the target vehicle in this lane is decelerating and changing into an adjacent lane can be recognized, so that the kinematic parameter control system of the host vehicle can avoid unnecessary braking adjustments, thereby reducing the rear-end collision risk caused by unnecessary braking of the host vehicle.
It can be understood that an adjacent-lane target vehicle can be identified and monitored throughout the continuous process from the moment it starts flashing its turn signal to the completion of its lane change into this lane, and that kinematic parameters of the front target vehicle during the continuous lane change, such as duration, distance to the host vehicle, relative speed, and lateral displacement, are easily monitored. The kinematic parameters of the front target vehicle can therefore be used to control the kinematic parameters of the host vehicle, making braking adjustments earlier to improve driving safety and using the vehicle lamps earlier to warn the rear target vehicle and reduce the rear-end collision risk.
As shown in Fig. 5, the host vehicle is cruising at constant speed on the straight section of this lane, still 55 meters (or as little as 25 meters) from the entrance of a curve bending to the right with a radius of 250 meters. In the adjacent lane to the right of this lane, 25 meters beyond the curve entrance, a front target vehicle is flashing its left turn signal and changing into this lane, its left target boundary coinciding with the right lane line of this lane.
The lane change of the front target vehicle into this lane can thus be identified accurately. Since the front target vehicle is about 80 meters (or as little as 50 meters) from the host vehicle, the power system of the host vehicle can be controlled to accurately reduce power output or even brake, and to light the brake lamps in time, ensuring safe distances between the host vehicle and both the front and rear target vehicles, thereby improving the driving safety of the host vehicle and reducing the rear-end collision risk.
It should be noted that a traditional vehicle adaptive cruise system relying only on millimeter-wave radar or lidar references the lateral displacement of the front target vehicle to the host vehicle. Lacking prior knowledge of the curve, it will identify the front target vehicle as still being about 1.25 meters of lateral distance from the straight extension of the right lane line of this lane, i.e. it will wrongly conclude that the front target vehicle must continue about 1.25 meters of leftward lateral displacement before the millimeter-wave radar or lidar can confirm that it is beginning to enter this lane.
It can be understood that if the lateral displacement speed of the front target vehicle is 1 meter per second, the traditional adaptive cruise system relying only on millimeter-wave radar or lidar can execute the action of reducing power output or even braking only about 1.25 seconds after the front target vehicle has actually entered this lane, which undoubtedly reduces the safe distances between the host vehicle and the front and rear target vehicles, degrading the driving safety of the host vehicle and increasing the rear-end collision risk.
Therefore, according to the identified kinematic parameters of the target vehicle and the corresponding identification of its turn signal, the operating condition in which an adjacent-lane target vehicle is decelerating and changing into the host vehicle's lane can be recognized, so that the kinematic parameter control system and safety systems of the host vehicle can adjust earlier, improving the driving safety of the host vehicle and its occupants, and so that the lamp system of the host vehicle can act earlier to alert the rear target vehicle, giving it more time to brake or adjust and more effectively reducing the rear-end collision risk.
Further, based on the above embodiment, the rear-end collision risk can be further reduced and the driving safety of the host vehicle further improved by means of automatic calibration. The detailed process is described below with reference to Fig. 6.
Step 601: obtain the azimuth of the front target vehicle according to the front target vehicle range or the front lamp identification region.
Specifically, the azimuth of the front target vehicle can be obtained in many ways, for example by a preset algorithm or formula, as illustrated below.
As an example, the lens parameters and mounting position of the camera that captures the first image or the second image can first be obtained by camera calibration techniques known in advance to those skilled in the art, so that a lookup table can be established relating road scene coordinates, with the camera as origin, to pixel coordinates of the first image or the second image.
Further, the pixel coordinates contained in the front target vehicle range or the front lamp identification region can be converted through this lookup table into front target vehicle coordinates with the camera as origin, and the azimuth of the front target vehicle with respect to the camera origin can be computed from these coordinates.
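In the text this conversion is done through a calibrated lookup table built from the camera's lens parameters and mounting position; as a minimal stand-in, a simple pinhole model gives the azimuth directly from the image column (illustrative sketch only; the function name, focal length in pixels, and principal point are assumptions):

```python
import math

def pixel_to_azimuth(col, principal_col, focal_px):
    """Azimuth (radians) of a scene point relative to the camera's
    optical axis, from its image column under a pinhole model:
    tan(azimuth) = (col - principal_col) / focal_length_in_pixels."""
    return math.atan((col - principal_col) / focal_px)
```

For a camera with an assumed principal point at column 640 and focal length of 1000 pixels, a target centered at column 640 lies on the optical axis (azimuth 0), and one at column 1640 lies at 45 degrees.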
Step 602: obtain the intermediate-frequency signal of the fixed-carrier-frequency radar, and obtain the relative velocity and azimuth of the front radar target from the intermediate-frequency signal.
It should be noted that the transmitter of a fixed-carrier-frequency radar operates at a nearly constant carrier frequency, so the electromagnetic bandwidth it occupies is very small compared with a frequency-modulated ranging radar; a fixed-carrier-frequency radar can therefore use fewer components and reduce cost.
It can be understood that when there is a relative velocity between the front target vehicle and the host vehicle, the reflected signal received by the fixed-carrier-frequency radar from the front target vehicle can be passed through a phase shifter to produce quadrature reflected signals, which are mixed with the radar's transmitted signal in a mixer to produce quadrature intermediate-frequency signals. The quadrature intermediate-frequency signals contain the Doppler frequency associated with the relative velocity: the magnitude of the Doppler frequency is proportional to the magnitude of the relative velocity, and its sign matches the sign of the relative velocity.
Further, the spectrum of the quadrature intermediate-frequency signal, in which the Doppler frequency stands out, can be created by means of an analog-to-digital converter and a complex fast Fourier transform; the magnitude and sign of the Doppler frequency can then be obtained from this spectrum using a peak detection algorithm, and the magnitude and sign of the relative velocity determined from the Doppler frequency using the Doppler velocity measurement formula.
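The chain above (complex FFT of the quadrature IF signal, peak detection, Doppler velocity formula v = f_d · c / (2 · f0)) can be sketched as follows. This is an illustrative sketch only; the 77 GHz carrier, 10 kHz sampling rate, and FFT length are assumed example values, not taken from the patent:

```python
import numpy as np

C = 3e8      # speed of light, m/s
F0 = 77e9    # assumed radar carrier frequency, Hz
FS = 10e3    # assumed IF sampling rate, Hz
N = 1024     # FFT length

def relative_velocity(i_samples, q_samples):
    """Signed relative velocity from the quadrature IF signal.
    The complex FFT preserves the sign of the Doppler frequency,
    which gives the sign (approaching/receding) of the velocity."""
    z = np.asarray(i_samples) + 1j * np.asarray(q_samples)
    spectrum = np.fft.fft(z, N)
    k = int(np.argmax(np.abs(spectrum)))       # peak detection
    freqs = np.fft.fftfreq(N, d=1.0 / FS)      # signed frequency axis
    f_doppler = freqs[k]
    return f_doppler * C / (2.0 * F0)          # Doppler velocity formula
```

Because the I and Q channels are combined into one complex signal, positive and negative Doppler frequencies (and hence approaching and receding targets) are distinguished, which a real-valued FFT could not do.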
Further, a fixed-carrier-frequency radar comprising two or more receivers can obtain the azimuth of the front radar target. Because the receivers are at different positions, the quadrature intermediate-frequency signals they obtain exhibit a phase difference at the same Doppler frequency.
Further, the azimuth of the front radar target is obtained using the phase-comparison angle measurement formula, from the phase difference of the receivers at the same Doppler frequency in the spectra of their quadrature intermediate-frequency signals, together with the positional relationship between the receivers.
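The phase-comparison angle measurement formula amounts to inverting Δφ = 2π · d · sin(θ) / λ for two receivers separated by d. A minimal sketch (illustrative only; the function name and the half-wavelength spacing in the usage note are assumptions):

```python
import math

def azimuth_from_phase(phase_diff, antenna_spacing, wavelength):
    """Phase-comparison direction finding: two receivers separated by
    antenna_spacing observe the same Doppler peak with a phase
    difference of phase_diff radians.  The arrival angle follows from
    phase_diff = 2*pi*antenna_spacing*sin(theta)/wavelength."""
    s = phase_diff * wavelength / (2.0 * math.pi * antenna_spacing)
    return math.asin(max(-1.0, min(1.0, s)))   # clamp against noise
```

With the common half-wavelength spacing d = λ/2, the measurable phase difference spans ±π over arrival angles of ±90 degrees, avoiding angle ambiguity.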
Step 603: obtain the relative velocity of the front target vehicle from the azimuth of the front target vehicle and the relative velocity and azimuth of the front radar target, and automatically calibrate the fixed-carrier-frequency radar against the identified front target vehicle.
It can be understood that when there are multiple front target vehicles, the azimuths of the multiple front target vehicles are obtained, then the relative velocities and azimuths of the multiple front radar targets are obtained, and, on the principle that the azimuth of a single front target vehicle is approximately equal to the azimuth of a certain front radar target, the relative velocity of that front radar target is taken as the relative velocity of the front target vehicle.
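The approximately-equal-azimuth principle above is a nearest-neighbour association between camera-detected vehicles and radar targets. A minimal sketch (illustrative only; the function name, data layout, and tolerance value are assumptions):

```python
def associate_by_azimuth(camera_azimuths, radar_targets, tolerance=0.02):
    """Adopt each radar target's relative velocity for the camera-detected
    vehicle whose azimuth is approximately equal to it.

    radar_targets: list of (azimuth_rad, relative_velocity).
    Returns one relative velocity per camera azimuth, or None when no
    radar target lies within the tolerance."""
    out = []
    for az in camera_azimuths:
        best = min(radar_targets, key=lambda t: abs(t[0] - az), default=None)
        if best is not None and abs(best[0] - az) <= tolerance:
            out.append(best[1])
        else:
            out.append(None)
    return out
```

Vehicles with no nearby radar target are left unmatched rather than being assigned the velocity of a distant target, which keeps a wrongly associated velocity from reaching the cruise controller.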
It can be understood that when the mounting positions of the camera and the fixed-carrier-frequency radar are far apart, the above principle of approximately equal azimuths may introduce error; this error can be eliminated by converting the azimuth coordinates of the two different origins to azimuth coordinates of a common origin according to the positional relationship between the camera and radar mounting positions.
It can also be understood that when the fixed-carrier-frequency radar is mounted outside the cabin, its azimuth measurements may be affected by vibration, temperature changes, or coverage by rain, snow, and dirt, and therefore need to be calibrated automatically.
For example, when the host vehicle identifies multiple front target vehicles at different azimuths, the azimuths of the multiple identified front target vehicles can be compared with the azimuths of the front radar targets to see whether they show a consistent deviation. If the deviation is consistent, it is recorded in the memory of the fixed-carrier-frequency radar, which reads this deviation in subsequent azimuth measurements to calibrate and compensate automatically; if the deviation is inconsistent, the fixed-carrier-frequency radar can issue a warning to the driver of the host vehicle, reminding the driver to inspect or clean the radar.
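The consistent-deviation test above can be sketched as follows (illustrative only; the function name, the consistency threshold, and the use of the mean deviation as the stored bias are assumptions):

```python
import statistics

def azimuth_bias(camera_azimuths, radar_azimuths, consistency=0.005):
    """Compare the azimuths of several camera-identified vehicles with
    the matched radar azimuths (radians, same order).  If the
    deviations agree to within `consistency` (a consistent mounting
    bias), return the bias to store in the radar and subtract from
    later measurements; if they disagree, return None so the driver
    can be warned to inspect or clean the radar."""
    deviations = [r - c for c, r in zip(camera_azimuths, radar_azimuths)]
    if max(deviations) - min(deviations) <= consistency:
        return statistics.mean(deviations)
    return None
```

A uniform offset across all targets points to a fixed mounting misalignment, which is correctable in software; scattered offsets point to a disturbed or dirty sensor, which is not.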
In this way, automatic calibration further effectively reduces the rear-end collision risk and further improves the driving safety of the host vehicle.
To implement the above embodiments, the present invention also proposes a vehicle travel automatic control device.
Fig. 7 is a structural schematic diagram of a vehicle travel automatic control device provided by an embodiment of the present invention.
As shown in Fig. 7, the vehicle travel automatic control device comprises: a first acquisition module 701, a second acquisition module 702, a third acquisition module 703, a first generation module 704, a first identification module 705, and a control module 706.
The first acquisition module 701 is configured to obtain a first image and a second image of the environment in front of the host vehicle from a front 3D camera, and to obtain a third image and a fourth image of the environment behind the host vehicle from a rear 3D camera, wherein the first image and the third image are color or luminance images, and the second image and the fourth image are depth images.
The second acquisition module 702 is configured to obtain the front highway lane lines from the first image.
The third acquisition module 703 is configured to obtain the rear highway lane lines from the third image.
The first generation module 704 is configured to map the front highway lane lines into the second image according to the interleaved mapping relationship between the first image and the second image to generate multiple front vehicle identification ranges, and to map the rear highway lane lines into the fourth image according to the interleaved mapping relationship between the third image and the fourth image to generate multiple rear vehicle identification ranges.
The first identification module 705 is configured to identify the front target vehicle from all front vehicle identification ranges and to identify the rear target vehicle from all rear vehicle identification ranges.
The control module 706 is configured to perform cruise control of the kinematic parameters of the host vehicle according to the kinematic parameters and turn signals of the front target vehicle and the rear target vehicle.
Further, in one possible implementation of the embodiment of the present invention, the first acquisition module 701 obtains the first image of the environment in front of the host vehicle from the image sensor of the front 3D camera; obtains the second image of the environment in front of the host vehicle from the time-of-flight sensor of the front 3D camera; obtains the third image of the environment behind the host vehicle from the image sensor of the rear 3D camera; and obtains the fourth image of the environment behind the host vehicle from the time-of-flight sensor of the rear 3D camera.
Further, in one possible implementation of the embodiment of the present invention, the second acquisition module 702 comprises a first recognition unit 7021 and a first conversion unit 7022.
The first recognition unit 7021 is configured to identify the front highway lane lines, when the first image is a luminance image, from the luminance difference between the front highway lane lines and the road surface in the first image. Alternatively,
the first conversion unit 7022 is configured to convert the first image, when it is a color image, into a luminance image, after which the first recognition unit 7021 identifies the front highway lane lines from the luminance difference between the front highway lane lines and the road surface in the first image.
In one embodiment of the present invention, the first recognition unit 7021 is specifically configured to create a binary image of the front highway lane lines from the luminance information of the first image and a preset luminance threshold; to detect, in the binary image according to a preset detection algorithm, all edge pixel positions of straight solid lane lines or all edge pixel positions of curved solid lane lines; and to detect, in the binary image according to a preset detection algorithm, all edge pixel positions of straight dashed lane lines or all edge pixel positions of curved dashed lane lines.
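The thresholding and edge-pixel detection above can be sketched as follows (illustrative only; the function name, the threshold value, and the row-wise transition test standing in for the "preset detection algorithm" are assumptions):

```python
import numpy as np

def lane_line_edges(luminance, threshold=140):
    """Create a binary lane-line image and detect edge pixel positions.
    Lane markings are brighter than the road surface, so a pixel is an
    edge wherever the binary value changes along a row.
    Returns the binary image and a list of (row, col) edge positions."""
    binary = (np.asarray(luminance) >= threshold).astype(np.uint8)
    # Sign changes along each row mark the left/right edges of a marking.
    horiz = np.diff(binary.astype(np.int16), axis=1)
    rows, cols = np.nonzero(horiz)
    return binary, list(zip(rows.tolist(), (cols + 1).tolist()))
```

The resulting edge positions would then be fed to a straight- or curved-line fit to distinguish solid from dashed and straight from curved lane lines, as the unit description requires.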
Further, in one possible implementation of the embodiment of the present invention, the third acquisition module 703 comprises a second recognition unit 7031 and a second conversion unit 7032.
The second recognition unit 7031 is configured to identify the rear highway lane lines, when the third image is a luminance image, from the luminance difference between the rear highway lane lines and the road surface in the third image. Alternatively,
the second conversion unit 7032 is configured to convert the third image, when it is a color image, into a luminance image, after which the second recognition unit 7031 identifies the rear highway lane lines from the luminance difference between the rear highway lane lines and the road surface in the third image.
In one embodiment of the present invention, the second recognition unit 7031 is specifically configured to create a binary image of the rear highway lane lines from the luminance information of the third image and a preset luminance threshold; to detect, in the binary image according to a preset detection algorithm, all edge pixel positions of straight solid lane lines or all edge pixel positions of curved solid lane lines; and to detect, in the binary image according to a preset detection algorithm, all edge pixel positions of straight dashed lane lines or all edge pixel positions of curved dashed lane lines.
In one embodiment of the present invention, the first identification module 705 is specifically configured to mark all front vehicle identification ranges with this-lane-ahead and adjacent-lane-ahead labels; to identify the front target vehicle in this lane from the vehicle identification range marked with the this-lane-ahead label; to identify the front adjacent-lane target vehicle from the vehicle identification range marked with the adjacent-lane-ahead label; and to identify a front lane-changing target vehicle from pairwise combinations of front vehicle identification ranges. Likewise, it marks all rear vehicle identification ranges with this-lane-behind and adjacent-lane-behind labels; identifies the rear target vehicle in this lane from the vehicle identification range marked with the this-lane-behind label; identifies the rear adjacent-lane target vehicle from the vehicle identification range marked with the adjacent-lane-behind label; and identifies a rear lane-changing target vehicle from pairwise combinations of rear vehicle identification ranges.
In one embodiment of the present invention, the first identification module 705 is further configured to identify the target boundary of the front target vehicle using a boundary detection method in an image processing algorithm, and to identify the target boundary of the rear target vehicle using a boundary detection method in an image processing algorithm.
Further, in one possible implementation of the embodiment of the present invention, as shown in Fig. 8, on the basis of Fig. 7, the vehicle travel automatic control device further comprises: a second generation module 707, a third generation module 708, a second identification module 709, a fourth acquisition module 710, a fifth acquisition module 711, and a calibration module 712.
The second generation module 707 is configured to generate the front target vehicle range from the front target vehicle, and to generate the rear target vehicle range from the rear target vehicle.
The third generation module 708 is configured to map the front target vehicle range into the first image according to the interleaved mapping relationship between the first image and the second image to generate the front lamp identification region, and to map the rear target vehicle range into the third image according to the interleaved mapping relationship between the third image and the fourth image to generate the rear lamp identification region.
The second identification module 709 is configured to identify the turn signal of the corresponding front target vehicle from the front lamp identification region, and to identify the turn signal of the corresponding rear target vehicle from the rear lamp identification region.
Further, in one possible implementation of the embodiment of the present invention, the second generation module 707 is used to generate the objects ahead vehicle range according to the enclosed region surrounded by the object boundary of the objects ahead vehicle, and the rear area target vehicle range according to the enclosed region surrounded by the object boundary of the rear area target vehicle; alternatively, to generate the objects ahead vehicle range according to the enclosed region surrounded by the extension of the object boundary of the objects ahead vehicle, and the rear area target vehicle range according to the enclosed region surrounded by the extension of the object boundary of the rear area target vehicle; alternatively, to generate the objects ahead vehicle range according to the enclosed region surrounded by lines connecting multiple pixel positions of the objects ahead vehicle, and the rear area target vehicle range according to the enclosed region surrounded by lines connecting multiple pixel positions of the rear area target vehicle.
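The first two alternatives amount to taking the enclosing box of the detected boundary pixels, optionally extended by a margin; the sketch below illustrates that reading (function name, point list and margin value are assumed for illustration).

```python
def vehicle_range_from_boundary(points, margin=0):
    """Enclosing box (left, top, right, bottom) of boundary pixel positions,
    optionally extended outward by `margin` pixels on every side."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs) - margin, min(ys) - margin, max(xs) + margin, max(ys) + margin)

# Assumed boundary pixels of a detected vehicle, as (x, y) positions.
boundary = [(12, 30), (58, 30), (58, 64), (12, 64)]
print(vehicle_range_from_boundary(boundary))      # tight enclosed region
print(vehicle_range_from_boundary(boundary, 4))   # extended enclosed region
```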
Further, in one possible implementation of the embodiment of the present invention, the second identification module 709 is used to identify the steering indicating light of the corresponding objects ahead vehicle according to the color, flicker frequency or flashing sequence of the tail lights in the front car light identification region, and to identify the steering indicating light of the corresponding rear area target vehicle according to the color, flicker frequency or flashing sequence of the headlights in the rear car light identification region.
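One way to read the flicker-frequency criterion: sample the brightness of the car light identification region over consecutive frames, estimate the blink rate from on/off transitions, and accept it as a steering indicating light if the rate falls in the usual statutory band of roughly 1 to 2 Hz. Everything below (sampling model, brightness threshold, frequency band) is an illustrative assumption, not a value from the patent.

```python
def blinking_frequency(samples, fps, threshold=128):
    """Estimate the flicker frequency (Hz) of a lamp-region brightness trace
    by counting off->on transitions over the trace duration."""
    on = [s > threshold for s in samples]
    rising = sum(1 for a, b in zip(on, on[1:]) if b and not a)
    return rising * fps / len(samples)

# 30 fps trace: lamp off 10 frames, on 10 frames, repeated -> 1.5 Hz blinker.
trace = ([0] * 10 + [255] * 10) * 3
f = blinking_frequency(trace, fps=30)
is_turn_signal = 1.0 <= f <= 2.0   # assumed statutory blink-rate band
print(f, is_turn_signal)
```

A steady lamp (brake light, headlight) produces no off-to-on transitions and is rejected by the same test.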
Further, in one possible implementation of the embodiment of the present invention, the control module 706 is used to recognize, according to the kinematic parameter and steering indicating light of the objects ahead vehicle, the operating mode in which a front non-this-track target vehicle decelerates and changes lanes into this track, so that the kinematic parameter control system of the main body vehicle carries out braking adjustment in advance; alternatively, to recognize, according to the kinematic parameter and steering indicating light of the objects ahead vehicle, the operating mode in which a front this-track target vehicle decelerates and changes lanes into a front non-this-track lane, so that the kinematic parameter control system of the main body vehicle performs no braking adjustment; alternatively, to recognize, according to the kinematic parameter and steering indicating light of the rear area target vehicle, the operating mode in which a rear non-this-track target vehicle decelerates and changes lanes into this track, so that the kinematic parameter control system of the main body vehicle adjusts in advance, and/or so that the lamp system of the main body vehicle reminds the rear area target vehicle.
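The three operating modes above can be summarized as a small decision rule. The lane labels and action names in the sketch below are illustrative stand-ins, not terminology from the patent.

```python
def cruise_action(position, target_lane, lane_change_into, decelerating):
    """Map a recognized operating mode to a host-vehicle reaction.
    `position` is "front" or "rear"; lane labels are "ego" / "other"."""
    cutting_in = decelerating and target_lane == "other" and lane_change_into == "ego"
    leaving = decelerating and target_lane == "ego" and lane_change_into == "other"
    if position == "front" and cutting_in:
        return "brake_in_advance"          # front vehicle cutting into this track
    if position == "front" and leaving:
        return "no_braking_adjustment"     # front vehicle leaving this track
    if position == "rear" and cutting_in:
        return "adjust_and_flash_lamps"    # rear vehicle cutting in: adjust, remind
    return "keep_cruise"

print(cruise_action("front", "other", "ego", True))
print(cruise_action("front", "ego", "other", True))
print(cruise_action("rear", "other", "ego", True))
```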
The fourth acquisition module 710 is used to obtain the objects ahead vehicle heading angle according to the objects ahead vehicle range or the front car light identification region.
The fifth acquisition module 711 is used to obtain the intermediate-frequency signal of the dead load frequency radar, and to obtain the relative velocity and azimuth of the front radar target according to the intermediate-frequency signal.
The calibration module 712 is used to obtain the relative velocity of the objects ahead vehicle according to the objects ahead vehicle heading angle and the relative velocity and azimuth of the front radar target, and to calibrate the dead load frequency radar automatically according to the identified objects ahead vehicle.
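For a fixed-carrier radar, the intermediate-frequency signal is essentially the Doppler beat, so the radial relative velocity follows from the Doppler equation, and the camera-derived heading angle corrects that radial component to the along-path speed. The sketch below illustrates this reading; the 24 GHz carrier and the function name are assumptions, not values from the patent.

```python
import math

def relative_speed_from_if(f_doppler_hz, carrier_hz=24e9, heading_angle_deg=0.0):
    """Doppler relation v_radial = f_d * c / (2 * f_carrier); the heading angle
    from the camera corrects the radial component to the along-path speed."""
    c = 299_792_458.0
    v_radial = f_doppler_hz * c / (2 * carrier_hz)
    return v_radial / math.cos(math.radians(heading_angle_deg))

v = relative_speed_from_if(1601.0)   # roughly 10 m/s closing speed at 24 GHz
print(round(v, 2))
```

Comparing this radar-derived speed against the speed implied by the camera-identified objects ahead vehicle is one plausible basis for the automatic calibration the module performs.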
It should be noted that the foregoing explanation of the vehicle travel automatic control method embodiments also applies to the vehicle travel automatic control device of the embodiments of the present invention, and details are not repeated here.
In conclusion the vehicle of the embodiment of the present invention travels automatic control device, main body is obtained from preposition 3D cameras first
The first image and the second image of vehicle front environment, from postposition 3D cameras obtain main body vehicle rear environment third image and
4th image, and front lines on highway is obtained according to the first image and rear lines on highway is obtained according to third image, so
Front lines on highway is mapped to according to the intertexture mapping relations between the first image and the second image afterwards raw in the second image
At multiple front vehicles identification ranges, according to the intertexture mapping relations between third image and the 4th image by rear road driveway
Line, which maps in the 4th image, generates multiple front vehicle identification ranges, and identifies front according to all front vehicles identification ranges
Target vehicle and according to all front vehicle identification ranges identify rear area target vehicle, finally according to objects ahead vehicle and rear
The kinematic parameter of target vehicle carries out cruise control to the kinematic parameter of main body vehicle.
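The lane-line step summarized above (binarize the luminance image against a preset threshold, then collect the edge pixel positions of the markings) can be sketched as follows. The threshold and the toy image are assumptions, and the simple row-wise transition scan stands in for the unspecified preset detection algorithm.

```python
def lane_line_pixels(gray, threshold=200):
    """Binarize a luminance image and return the edge pixel positions (row, col)
    of the bright lane markings, found as 0<->1 transitions in each row."""
    edges = []
    for y, row in enumerate(gray):
        binary = [1 if v >= threshold else 0 for v in row]
        for x in range(1, len(binary)):
            if binary[x] != binary[x - 1]:   # dark->bright or bright->dark edge
                edges.append((y, x))
    return edges

# Toy 4x8 luminance image: a bright vertical marking in columns 3-4 on dark road.
img = [[50, 50, 50, 230, 230, 50, 50, 50] for _ in range(4)]
print(lane_line_pixels(img))
```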
In the description of this specification, reference to terms such as "one embodiment", "some embodiments", "an example", "a specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, provided they do not contradict one another, those skilled in the art may combine the features of different embodiments or examples described in this specification.
In addition, the terms "first" and "second" are used for description purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the quantity of the indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "plurality" means at least two, such as two, three, etc., unless specifically defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment or portion of code that includes one or more executable instructions for implementing custom logic functions or steps of the process, and the scope of the preferred embodiments of the present invention includes other implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
Those skilled in the art can understand that all or part of the steps carried by the above embodiment methods may be completed by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, includes one of the steps of the method embodiment or a combination thereof.
In addition, each functional unit in each embodiment of the present invention may be integrated in one processing module, each unit may exist alone physically, or two or more units may be integrated in one module. The above integrated module may be realized either in the form of hardware or in the form of a software function module. If the integrated module is realized in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and shall not be understood as limiting the present invention; those skilled in the art may change, modify, replace and vary the above embodiments within the scope of the present invention.
Claims (26)
1. A vehicle travel automatic control method, characterized by including:
obtaining a first image and a second image of the environment in front of a main body vehicle from a front-mounted 3D camera, and obtaining a third image and a fourth image of the environment behind the main body vehicle from a rear-mounted 3D camera, wherein the first image and the third image are color or luminance images, and the second image and the fourth image are depth images;
obtaining the front lines on highway according to the first image, and obtaining the rear lines on highway according to the third image;
mapping the front lines on highway into the second image according to the intertexture mapping relations between the first image and the second image to generate multiple front vehicle identification ranges, and mapping the rear lines on highway into the fourth image according to the intertexture mapping relations between the third image and the fourth image to generate multiple rear vehicle identification ranges;
identifying the objects ahead vehicle according to all the front vehicle identification ranges, and identifying the rear area target vehicle according to all the rear vehicle identification ranges;
carrying out cruise control on the kinematic parameter of the main body vehicle according to the kinematic parameters of the objects ahead vehicle and the rear area target vehicle.
2. The method as described in claim 1, characterized in that obtaining the first image and the second image of the environment in front of the main body vehicle from the front-mounted 3D camera and obtaining the third image and the fourth image of the environment behind the main body vehicle from the rear-mounted 3D camera includes:
obtaining the first image of the environment in front of the main body vehicle from the imaging sensor of the front-mounted 3D camera;
obtaining the second image of the environment in front of the main body vehicle from the time-of-flight sensor of the front-mounted 3D camera;
obtaining the third image of the environment behind the main body vehicle from the imaging sensor of the rear-mounted 3D camera;
obtaining the fourth image of the environment behind the main body vehicle from the time-of-flight sensor of the rear-mounted 3D camera.
3. The method as described in claim 1, characterized in that obtaining the front lines on highway according to the first image includes:
when the first image is a luminance image, identifying the front lines on highway according to the luminance difference between the front lines on highway and the road surface in the first image; alternatively,
when the first image is a color image, converting the color image into a luminance image, and identifying the front lines on highway according to the luminance difference between the front lines on highway and the road surface in the first image.
4. The method as claimed in claim 3, characterized in that identifying the front lines on highway according to the luminance difference between the front lines on highway and the road surface in the first image includes:
creating a binary image of the front lines on highway according to the luminance information of the first image and a preset luminance threshold;
detecting in the binary image, according to a preset detection algorithm, all the edge pixel positions of a straight solid lane line or all the edge pixel positions of a curved solid lane line;
detecting in the binary image, according to the preset detection algorithm, all the edge pixel positions of a straight dashed lane line or all the edge pixel positions of a curved dashed lane line.
5. The method as described in claim 1, characterized in that obtaining the rear lines on highway according to the third image includes:
when the third image is a luminance image, identifying the rear lines on highway according to the luminance difference between the rear lines on highway and the road surface in the third image; alternatively,
when the third image is a color image, converting the color image into a luminance image, and identifying the rear lines on highway according to the luminance difference between the rear lines on highway and the road surface in the third image.
6. The method as claimed in claim 5, characterized in that identifying the rear lines on highway according to the luminance difference between the rear lines on highway and the road surface in the third image includes:
creating a binary image of the rear lines on highway according to the luminance information of the third image and a preset luminance threshold;
detecting in the binary image, according to a preset detection algorithm, all the edge pixel positions of a straight solid lane line or all the edge pixel positions of a curved solid lane line;
detecting in the binary image, according to the preset detection algorithm, all the edge pixel positions of a straight dashed lane line or all the edge pixel positions of a curved dashed lane line.
7. The method as described in claim 1, characterized in that identifying the objects ahead vehicle according to all the front vehicle identification ranges, the objects ahead vehicle range being generated according to the objects ahead vehicle, includes:
marking all the front vehicle identification ranges with front this-track and front non-this-track labels;
identifying the front this-track target vehicle according to the vehicle identification range labeled front this-track;
identifying the front non-this-track target vehicle according to the vehicle identification range labeled front non-this-track;
identifying the front lane-change target vehicle according to the front vehicle identification ranges in combinations of two;
marking all the rear vehicle identification ranges with rear this-track and rear non-this-track labels;
identifying the rear this-track target vehicle according to the vehicle identification range labeled rear this-track;
identifying the rear non-this-track target vehicle according to the vehicle identification range labeled rear non-this-track;
identifying the rear lane-change target vehicle according to the rear vehicle identification ranges in combinations of two.
8. The method as described in claim 1, characterized in that identifying the objects ahead vehicle according to all the front vehicle identification ranges and identifying the rear area target vehicle according to all the rear vehicle identification ranges includes:
detecting and identifying the object boundary of the objects ahead vehicle using a boundary detection method in an image processing algorithm;
detecting and identifying the object boundary of the rear area target vehicle using the boundary detection method in the image processing algorithm.
9. The method as described in claim 1, characterized by further including:
generating the objects ahead vehicle range according to the objects ahead vehicle, and generating the rear area target vehicle range according to the rear area target vehicle;
mapping the objects ahead vehicle range into the first image according to the intertexture mapping relations between the first image and the second image to generate a front car light identification region, and mapping the rear area target vehicle range into the third image according to the intertexture mapping relations between the third image and the fourth image to generate a rear car light identification region;
identifying the steering indicating light of the corresponding objects ahead vehicle according to the front car light identification region, and identifying the steering indicating light of the corresponding rear area target vehicle according to the rear car light identification region;
wherein carrying out cruise control on the kinematic parameter of the main body vehicle according to the kinematic parameters of the objects ahead vehicle and the rear area target vehicle includes:
carrying out cruise control on the kinematic parameter of the main body vehicle according to the kinematic parameters and steering indicating lights of the objects ahead vehicle and the rear area target vehicle.
10. The method as claimed in claim 9, characterized in that generating the objects ahead vehicle range according to the objects ahead vehicle and generating the rear area target vehicle range according to the rear area target vehicle includes:
generating the objects ahead vehicle range according to the enclosed region surrounded by the object boundary of the objects ahead vehicle, and generating the rear area target vehicle range according to the enclosed region surrounded by the object boundary of the rear area target vehicle; alternatively,
generating the objects ahead vehicle range according to the enclosed region surrounded by the extension of the object boundary of the objects ahead vehicle, and generating the rear area target vehicle range according to the enclosed region surrounded by the extension of the object boundary of the rear area target vehicle; alternatively,
generating the objects ahead vehicle range according to the enclosed region surrounded by lines connecting multiple pixel positions of the objects ahead vehicle, and generating the rear area target vehicle range according to the enclosed region surrounded by lines connecting multiple pixel positions of the rear area target vehicle.
11. The method as claimed in claim 9, characterized in that identifying the steering indicating light of the corresponding objects ahead vehicle according to the front car light identification region and identifying the steering indicating light of the corresponding rear area target vehicle according to the rear car light identification region includes:
identifying the steering indicating light of the corresponding objects ahead vehicle according to the color, flicker frequency or flashing sequence of the tail lights in the front car light identification region;
identifying the steering indicating light of the corresponding rear area target vehicle according to the color, flicker frequency or flashing sequence of the headlights in the rear car light identification region.
12. The method as claimed in claim 9, characterized in that carrying out cruise control on the kinematic parameter of the main body vehicle according to the kinematic parameters and steering indicating lights of the objects ahead vehicle and the rear area target vehicle includes:
recognizing, according to the kinematic parameter and steering indicating light of the objects ahead vehicle, the operating mode in which a front non-this-track target vehicle decelerates and changes lanes into this track, so that the kinematic parameter control system of the main body vehicle carries out braking adjustment in advance; alternatively,
recognizing, according to the kinematic parameter and steering indicating light of the objects ahead vehicle, the operating mode in which a front this-track target vehicle decelerates and changes lanes into a front non-this-track lane, so that the kinematic parameter control system of the main body vehicle performs no braking adjustment; alternatively,
recognizing, according to the kinematic parameter and steering indicating light of the rear area target vehicle, the operating mode in which a rear non-this-track target vehicle decelerates and changes lanes into this track, so that the kinematic parameter control system of the main body vehicle adjusts in advance, and/or so that the lamp system of the main body vehicle reminds the rear area target vehicle.
13. The method as described in any one of claims 1-12, characterized in that the method further includes:
obtaining the objects ahead vehicle heading angle according to the objects ahead vehicle range or the front car light identification region;
obtaining the intermediate-frequency signal of the dead load frequency radar, and obtaining the relative velocity and azimuth of the front radar target according to the intermediate-frequency signal;
obtaining the relative velocity of the objects ahead vehicle according to the objects ahead vehicle heading angle and the relative velocity and azimuth of the front radar target, and calibrating the dead load frequency radar automatically according to the identified objects ahead vehicle.
14. A vehicle travel automatic control device, characterized by including:
a first acquisition module, used to obtain a first image and a second image of the environment in front of a main body vehicle from a front-mounted 3D camera, and to obtain a third image and a fourth image of the environment behind the main body vehicle from a rear-mounted 3D camera, wherein the first image and the third image are color or luminance images, and the second image and the fourth image are depth images;
a second acquisition module, used to obtain the front lines on highway according to the first image;
a third acquisition module, used to obtain the rear lines on highway according to the third image;
a first generation module, used to map the front lines on highway into the second image according to the intertexture mapping relations between the first image and the second image to generate multiple front vehicle identification ranges, and to map the rear lines on highway into the fourth image according to the intertexture mapping relations between the third image and the fourth image to generate multiple rear vehicle identification ranges;
a first identification module, used to identify the objects ahead vehicle according to all the front vehicle identification ranges, and to identify the rear area target vehicle according to all the rear vehicle identification ranges;
a control module, used to carry out cruise control on the kinematic parameter of the main body vehicle according to the kinematic parameters of the objects ahead vehicle and the rear area target vehicle.
15. The device as claimed in claim 14, characterized in that the first acquisition module is used to:
obtain the first image of the environment in front of the main body vehicle from the imaging sensor of the front-mounted 3D camera;
obtain the second image of the environment in front of the main body vehicle from the time-of-flight sensor of the front-mounted 3D camera;
obtain the third image of the environment behind the main body vehicle from the imaging sensor of the rear-mounted 3D camera;
obtain the fourth image of the environment behind the main body vehicle from the time-of-flight sensor of the rear-mounted 3D camera.
16. The device as claimed in claim 14, characterized in that the second acquisition module includes:
a first recognition unit, used, when the first image is a luminance image, to identify the front lines on highway according to the luminance difference between the front lines on highway and the road surface in the first image; alternatively,
a first conversion unit, used, when the first image is a color image, to convert the color image into a luminance image, the first recognition unit being used to identify the front lines on highway according to the luminance difference between the front lines on highway and the road surface in the first image.
17. The device as claimed in claim 16, characterized in that the first recognition unit is specifically used to:
create a binary image of the front lines on highway according to the luminance information of the first image and a preset luminance threshold;
detect in the binary image, according to a preset detection algorithm, all the edge pixel positions of a straight solid lane line or all the edge pixel positions of a curved solid lane line;
detect in the binary image, according to the preset detection algorithm, all the edge pixel positions of a straight dashed lane line or all the edge pixel positions of a curved dashed lane line.
18. The device as claimed in claim 14, characterized in that the third acquisition module includes:
a second recognition unit, used, when the third image is a luminance image, to identify the rear lines on highway according to the luminance difference between the rear lines on highway and the road surface in the third image; alternatively,
a second conversion unit, used, when the third image is a color image, to convert the color image into a luminance image, the second recognition unit being used to identify the rear lines on highway according to the luminance difference between the rear lines on highway and the road surface in the third image.
19. The device as claimed in claim 18, characterized in that the second recognition unit is specifically used to:
create a binary image of the rear lines on highway according to the luminance information of the third image and a preset luminance threshold;
detect in the binary image, according to a preset detection algorithm, all the edge pixel positions of a straight solid lane line or all the edge pixel positions of a curved solid lane line;
detect in the binary image, according to the preset detection algorithm, all the edge pixel positions of a straight dashed lane line or all the edge pixel positions of a curved dashed lane line.
20. The device as claimed in claim 14, characterized in that the first identification module is used to:
mark all the front vehicle identification ranges with front this-track and front non-this-track labels;
identify the front this-track target vehicle according to the vehicle identification range labeled front this-track;
identify the front non-this-track target vehicle according to the vehicle identification range labeled front non-this-track;
identify the front lane-change target vehicle according to the front vehicle identification ranges in combinations of two;
mark all the rear vehicle identification ranges with rear this-track and rear non-this-track labels;
identify the rear this-track target vehicle according to the vehicle identification range labeled rear this-track;
identify the rear non-this-track target vehicle according to the vehicle identification range labeled rear non-this-track;
identify the rear lane-change target vehicle according to the rear vehicle identification ranges in combinations of two.
21. The device as claimed in claim 14, characterized in that the first identification module is further used to:
detect and identify the object boundary of the objects ahead vehicle using a boundary detection method in an image processing algorithm;
detect and identify the object boundary of the rear area target vehicle using the boundary detection method in the image processing algorithm.
22. The device as claimed in claim 14, characterized by further including:
a second generation module, used to generate the objects ahead vehicle range according to the objects ahead vehicle, and to generate the rear area target vehicle range according to the rear area target vehicle;
a third generation module, used to map the objects ahead vehicle range into the first image according to the intertexture mapping relations between the first image and the second image to generate a front car light identification region, and to map the rear area target vehicle range into the third image according to the intertexture mapping relations between the third image and the fourth image to generate a rear car light identification region;
a second identification module, used to identify the steering indicating light of the corresponding objects ahead vehicle according to the front car light identification region, and to identify the steering indicating light of the corresponding rear area target vehicle according to the rear car light identification region;
the control module being used to:
carry out cruise control on the kinematic parameter of the main body vehicle according to the kinematic parameters and steering indicating lights of the objects ahead vehicle and the rear area target vehicle.
23. The device as claimed in claim 22, characterized in that the second generation module is used to:
generate the objects ahead vehicle range according to the enclosed region surrounded by the object boundary of the objects ahead vehicle, and generate the rear area target vehicle range according to the enclosed region surrounded by the object boundary of the rear area target vehicle; alternatively,
generate the objects ahead vehicle range according to the enclosed region surrounded by the extension of the object boundary of the objects ahead vehicle, and generate the rear area target vehicle range according to the enclosed region surrounded by the extension of the object boundary of the rear area target vehicle; alternatively,
generate the objects ahead vehicle range according to the enclosed region surrounded by lines connecting multiple pixel positions of the objects ahead vehicle, and generate the rear area target vehicle range according to the enclosed region surrounded by lines connecting multiple pixel positions of the rear area target vehicle.
24. The device as claimed in claim 22, characterized in that the second identification module is used to:
identify the steering indicating light of the corresponding objects ahead vehicle according to the color, flicker frequency or flashing sequence of the tail lights in the front car light identification region;
identify the steering indicating light of the corresponding rear area target vehicle according to the color, flicker frequency or flashing sequence of the headlights in the rear car light identification region.
25. The device as claimed in claim 22, characterized in that the control module is used to:
recognize, according to the kinematic parameter and steering indicating light of the objects ahead vehicle, the operating mode in which a front non-this-track target vehicle decelerates and changes lanes into this track, so that the kinematic parameter control system of the main body vehicle carries out braking adjustment in advance; alternatively,
recognize, according to the kinematic parameter and steering indicating light of the objects ahead vehicle, the operating mode in which a front this-track target vehicle decelerates and changes lanes into a front non-this-track lane, so that the kinematic parameter control system of the main body vehicle performs no braking adjustment; alternatively,
recognize, according to the kinematic parameter and steering indicating light of the rear area target vehicle, the operating mode in which a rear non-this-track target vehicle decelerates and changes lanes into this track, so that the kinematic parameter control system of the main body vehicle adjusts in advance, and/or so that the lamp system of the main body vehicle reminds the rear area target vehicle.
26. The device as described in any one of claims 14-25, characterized in that the device further includes:
a fourth acquisition module, used to obtain the objects ahead vehicle heading angle according to the objects ahead vehicle range or the front car light identification region;
a fifth acquisition module, used to obtain the intermediate-frequency signal of the dead load frequency radar, and to obtain the relative velocity and azimuth of the front radar target according to the intermediate-frequency signal;
a calibration module, used to obtain the relative velocity of the objects ahead vehicle according to the objects ahead vehicle heading angle and the relative velocity and azimuth of the front radar target, and to calibrate the dead load frequency radar automatically according to the identified objects ahead vehicle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710120432.XA CN108528431B (en) | 2017-03-02 | 2017-03-02 | Automatic control method and device for vehicle running |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108528431A true CN108528431A (en) | 2018-09-14 |
CN108528431B CN108528431B (en) | 2020-03-31 |
Family ID: 63489033
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710120432.XA Active CN108528431B (en) | 2017-03-02 | 2017-03-02 | Automatic control method and device for vehicle running |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108528431B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102673560A (en) * | 2011-03-16 | 2012-09-19 | 通用汽车环球科技运作有限责任公司 | Method for recognizing turn-off maneuver and driver assistance system |
CN104724117A (en) * | 2015-02-05 | 2015-06-24 | 温州大学 | Self-adaptive online speed adjusting system with rear vehicle early warning function |
CN104952254A (en) * | 2014-03-31 | 2015-09-30 | 比亚迪股份有限公司 | Vehicle identification method and device and vehicle |
JP2016014970A (en) * | 2014-07-01 | 2016-01-28 | 富士重工業株式会社 | Vehicle driving support device |
CN105517872A (en) * | 2013-09-11 | 2016-04-20 | 罗伯特·博世有限公司 | Modifying adaptive cruise control to mitigate rear-end collisions |
2017-03-02: CN201710120432.XA filed in China; granted as CN108528431B (status: active)
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109703556A (en) * | 2018-12-20 | 2019-05-03 | 斑马网络技术有限公司 | Driving assistance method and equipment |
US10967853B2 (en) | 2019-02-11 | 2021-04-06 | Ford Global Technologies, Llc | Enhanced collision mitigation |
US11440471B2 (en) * | 2019-03-21 | 2022-09-13 | Baidu Usa Llc | Automated warning system to detect a front vehicle slips backwards |
US20200398797A1 (en) * | 2019-06-19 | 2020-12-24 | Ford Global Technologies, Llc | Vehicle sensor enhancements |
US11673533B2 (en) * | 2019-06-19 | 2023-06-13 | Ford Global Technologies, Llc | Vehicle sensor enhancements |
CN110723072A (en) * | 2019-10-09 | 2020-01-24 | 卓尔智联(武汉)研究院有限公司 | Driving assistance method and device, computer equipment and storage medium |
CN110723072B (en) * | 2019-10-09 | 2021-06-01 | 卓尔智联(武汉)研究院有限公司 | Driving assistance method and device, computer equipment and storage medium |
CN112141103A (en) * | 2020-08-31 | 2020-12-29 | 恒大新能源汽车投资控股集团有限公司 | Method and system for controlling vehicle to run along with front vehicle |
CN112257535A (en) * | 2020-10-15 | 2021-01-22 | 天目爱视(北京)科技有限公司 | Three-dimensional matching equipment and method for avoiding object |
CN113085722A (en) * | 2021-06-09 | 2021-07-09 | 禾多科技(北京)有限公司 | Vehicle control method, electronic device, and computer-readable medium |
CN113085722B (en) * | 2021-06-09 | 2021-10-22 | 禾多科技(北京)有限公司 | Vehicle control method, electronic device, and computer-readable medium |
CN113362649A (en) * | 2021-06-30 | 2021-09-07 | 安诺(深圳)创新技术有限公司 | Auxiliary driving system based on Internet of vehicles |
CN114023109A (en) * | 2021-11-03 | 2022-02-08 | 中国矿业大学 | Early warning system for preventing rear-end collision of mobile tank car |
CN114475665A (en) * | 2022-03-17 | 2022-05-13 | 北京小马睿行科技有限公司 | Control method and control device for automatic driving vehicle and automatic driving system |
CN114882709A (en) * | 2022-04-22 | 2022-08-09 | 四川云从天府人工智能科技有限公司 | Vehicle congestion detection method and device and computer storage medium |
CN114882709B (en) * | 2022-04-22 | 2023-05-30 | 四川云从天府人工智能科技有限公司 | Vehicle congestion detection method, device and computer storage medium |
CN115923781A (en) * | 2023-03-08 | 2023-04-07 | 江铃汽车股份有限公司 | Automatic obstacle avoidance method and system for intelligent networked passenger vehicle |
Also Published As
Publication number | Publication date |
---|---|
CN108528431B (en) | 2020-03-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108528431A (en) | Vehicle travels autocontrol method and device | |
WO2018059586A1 (en) | A vehicle identification method, device, and vehicle | |
JP5855272B2 (en) | Method and apparatus for recognizing braking conditions | |
US7411486B2 (en) | Lane-departure warning system with differentiation between an edge-of-lane marking and a structural boundary of the edge of the lane | |
US20090254260A1 (en) | Full speed range adaptive cruise control system | |
JP5254102B2 (en) | Environment recognition device | |
CN108528433B (en) | Automatic control method and device for vehicle running | |
CN108528448B (en) | Automatic control method and device for vehicle running | |
US9886773B2 (en) | Object detection apparatus and object detection method | |
JP2007024590A (en) | Object detector | |
JP6034923B1 (en) | Outside environment recognition device | |
JP6236039B2 (en) | Outside environment recognition device | |
CN103874931A (en) | Method and apparatus for ascertaining a position for an object in surroundings of a vehicle | |
CN108536134A (en) | Vehicle travels autocontrol method and device | |
JP6828602B2 (en) | Target detection device | |
CN107886030A (en) | Vehicle identification method, device and vehicle | |
CN108528432B (en) | Automatic control method and device for vehicle running | |
CN107886729B (en) | Vehicle identification method and device and vehicle | |
US8031908B2 (en) | Object recognizing apparatus including profile shape determining section | |
JP6169949B2 (en) | Image processing device | |
KR20150010126A (en) | Apparatus and method for controlling side displaying of vehicle | |
CN107886036B (en) | Vehicle control method and device and vehicle | |
CN108528450B (en) | Automatic control method and device for vehicle running | |
CN108528449B (en) | Automatic control method and device for vehicle running | |
US20210188267A1 (en) | Driver assistance system and control method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||