CN108528450A - Automatic vehicle driving control method and device - Google Patents
Automatic vehicle driving control method and device
- Publication number
- CN108528450A CN108528450A CN201710120749.3A CN201710120749A CN108528450A CN 108528450 A CN108528450 A CN 108528450A CN 201710120749 A CN201710120749 A CN 201710120749A CN 108528450 A CN108528450 A CN 108528450A
- Authority
- CN
- China
- Prior art keywords
- vehicle
- image
- identification
- highway
- target vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 75
- 230000008859 change Effects 0.000 claims description 52
- 238000001514 detection method Methods 0.000 claims description 33
- 238000003384 imaging method Methods 0.000 claims description 33
- 238000006073 displacement reaction Methods 0.000 claims description 21
- 238000013507 mapping Methods 0.000 claims description 17
- 238000009434 installation Methods 0.000 claims description 16
- 238000012545 processing Methods 0.000 claims description 6
- 238000009825 accumulation Methods 0.000 claims description 3
- 238000010586 diagram Methods 0.000 description 16
- 238000009792 diffusion process Methods 0.000 description 8
- 238000002955 isolation Methods 0.000 description 8
- 238000005516 engineering process Methods 0.000 description 5
- 230000015572 biosynthetic process Effects 0.000 description 3
- 238000010924 continuous production Methods 0.000 description 3
- 230000007613 environmental effect Effects 0.000 description 3
- 230000008901 benefit Effects 0.000 description 2
- 230000002902 bimodal effect Effects 0.000 description 2
- 238000006243 chemical reaction Methods 0.000 description 2
- 238000004519 manufacturing process Methods 0.000 description 2
- 239000000463 material Substances 0.000 description 2
- 230000035772 mutation Effects 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 230000008569 process Effects 0.000 description 2
- 239000004065 semiconductor Substances 0.000 description 2
- 239000000758 substrate Substances 0.000 description 2
- 230000000007 visual effect Effects 0.000 description 2
- 230000009471 action Effects 0.000 description 1
- 230000003044 adaptive effect Effects 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 238000003491 array Methods 0.000 description 1
- 238000005452 bending Methods 0.000 description 1
- 230000000295 complement effect Effects 0.000 description 1
- 238000005183 dynamical system Methods 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 229910044991 metal oxide Inorganic materials 0.000 description 1
- 150000004706 metal oxides Chemical class 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units, or advanced driver assistance systems for ensuring comfort, stability and safety or drive control systems for propelling or retarding the vehicle
- B60W30/14—Adaptive cruise control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2420/00—Indexing codes relating to the type of sensors based on the principle of their operation
- B60W2420/40—Photo or light sensitive means, e.g. infrared sensors
- B60W2420/403—Image sensing, e.g. optical camera
-
- B60W2420/408—
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2554/00—Input parameters relating to objects
- B60W2554/80—Spatial relation or speed relative to objects
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2555/00—Input parameters relating to exterior conditions, not covered by groups B60W2552/00, B60W2554/00
- B60W2555/60—Traffic rules, e.g. speed limits or right of way
Abstract
The invention discloses an automatic vehicle driving control method and device. The method includes: obtaining a first image and a second image of the environment in front of a subject vehicle from a front-mounted 3D camera; obtaining front highway lane lines according to the first image, and then obtaining a third image and rear highway lane lines; generating multiple front-vehicle identification ranges according to the first image and the second image, and then identifying a front target vehicle; generating multiple rear-vehicle identification ranges according to the third image and the rear highway lane lines; obtaining a rear-target-vehicle parameter group from a millimetre-wave radar, obtaining multiple rear-target-vehicle parameter points, and then identifying a rear target vehicle; and performing cruise control on the motion parameters of the subject vehicle according to the motion parameters of the front target vehicle and the rear target vehicle. Driving safety is thereby ensured.
Description
Technical field
The present invention relates to the technical field of vehicle control, and more particularly to an automatic vehicle driving control method and device.
Background technology
At present, adaptive cruise systems for vehicles are attracting more and more attention. In a vehicle adaptive cruise system, the user sets a desired speed, and the system obtains the accurate position of the vehicle ahead using low-power radar or an infrared beam. If the vehicle ahead is found to decelerate, or a new target is detected, the system sends an execution signal to the engine or braking system to reduce the speed, so that the subject vehicle keeps a safe operating distance from the vehicle ahead. When the road ahead is clear, the vehicle accelerates back to the set speed, and the radar system automatically monitors the next target. The adaptive cruise control system thus controls the speed in place of the user, avoids frequent cancelling and re-setting of cruise control, makes the adaptive cruise system suitable for more road conditions, and provides the user with a more convenient manner of driving.
However, the adaptive cruise system of an existing vehicle needs to use millimetre-wave radar as a distance-measuring sensor. When a front target vehicle travels on a curve, the millimetre-wave radar cannot, owing to its operating principle, identify lane lines well. A subject vehicle equipped only with millimetre-wave radar is therefore likely to identify a front target vehicle in another lane as being in its own lane, and may be delayed in identifying a front target vehicle that is changing into its own lane. Such mis-identification or identification delay may cause the adaptive cruise system of the subject vehicle to brake in error or to brake late, increasing the risk of a rear-end collision.
Summary of the invention
The purpose of the present invention is to solve at least one of the above technical problems, at least to some extent.

To this end, a first purpose of the present invention is to propose an automatic vehicle driving control method. The method can accurately identify highway lane lines, combine them with data obtained by the millimetre-wave radar to learn the lane environment information of the subject vehicle, and perform cruise control on the subject vehicle according to the specific lane environment information, thereby ensuring driving safety.
A second purpose of the present invention is to propose an automatic vehicle driving control device.
To achieve the above purposes, an embodiment of the first aspect of the present invention proposes an automatic vehicle driving control method, including: obtaining a first image and a second image of the environment in front of a subject vehicle from a front-mounted 3D camera, the first image being a colour or luminance image and the second image being a depth image; obtaining front highway lane lines according to the first image; obtaining a third image and rear highway lane lines according to the imaging parameters of the first image and the front highway lane lines; mapping the front highway lane lines into the second image according to the interleaving mapping relation between the first image and the second image to generate multiple front-vehicle identification ranges; identifying a front target vehicle according to all the front-vehicle identification ranges; generating multiple rear-vehicle identification ranges according to the third image and the rear highway lane lines; obtaining a rear-target-vehicle parameter group from a millimetre-wave radar, and projecting the rear-target-vehicle parameter group into the third image according to the installation parameters of the millimetre-wave radar to obtain multiple rear-target-vehicle parameter points; identifying a rear target vehicle according to all the rear-vehicle identification ranges and the multiple rear-target-vehicle parameter points; and performing cruise control on the motion parameters of the subject vehicle according to the motion parameters of the front target vehicle and the rear target vehicle.
With the automatic vehicle driving control method of the embodiment of the present invention, a first image and a second image of the environment in front of the subject vehicle are obtained from a front-mounted 3D camera; front highway lane lines are obtained, and a third image and rear highway lane lines are obtained according to the imaging parameters of the first image and the front highway lane lines; the front highway lane lines are mapped into the second image according to the interleaving mapping relation between the first image and the second image to generate multiple front-vehicle identification ranges, and a front target vehicle is identified according to the front-vehicle identification ranges; multiple rear-vehicle identification ranges are generated according to the third image and the rear highway lane lines; a rear-target-vehicle parameter group is obtained from the millimetre-wave radar and projected into the third image according to the installation parameters of the millimetre-wave radar to obtain multiple rear-target-vehicle parameter points, whereby the rear target vehicle is identified; finally, cruise control is performed on the motion parameters of the subject vehicle according to the motion parameters of the front target vehicle and the rear target vehicle. Highway lane lines are thus accurately identified, the lane environment information of the subject vehicle is learned in combination with the data obtained by the millimetre-wave radar, and cruise control is performed on the subject vehicle according to the specific lane environment information, ensuring driving safety.
To achieve the above purposes, an embodiment of the second aspect of the present invention proposes an automatic vehicle driving control device, including: a first acquisition module for obtaining a first image and a second image of the environment in front of a subject vehicle from a front-mounted 3D camera, the first image being a colour or luminance image and the second image being a depth image; a second acquisition module for obtaining front highway lane lines according to the first image; a third acquisition module for obtaining a third image and rear highway lane lines according to the imaging parameters of the first image and the front highway lane lines; a first generation module for mapping the front highway lane lines into the second image according to the interleaving mapping relation between the first image and the second image to generate multiple front-vehicle identification ranges; a first identification module for identifying a front target vehicle according to all the front-vehicle identification ranges; a second generation module for generating multiple rear-vehicle identification ranges according to the third image and the rear highway lane lines; a fourth acquisition module for obtaining a rear-target-vehicle parameter group from a millimetre-wave radar and projecting the rear-target-vehicle parameter group into the third image according to the installation parameters of the millimetre-wave radar to obtain multiple rear-target-vehicle parameter points; a second identification module for identifying a rear target vehicle according to all the rear-vehicle identification ranges and the multiple rear-target-vehicle parameter points; and a control module for performing cruise control on the motion parameters of the subject vehicle according to the motion parameters of the front target vehicle and the rear target vehicle.
With the automatic vehicle driving control device of the embodiment of the present invention, a first image and a second image of the environment in front of the subject vehicle are obtained from a front-mounted 3D camera; front highway lane lines are obtained, and a third image and rear highway lane lines are obtained according to the imaging parameters of the first image and the front highway lane lines; the front highway lane lines are mapped into the second image according to the interleaving mapping relation between the first image and the second image to generate multiple front-vehicle identification ranges, and a front target vehicle is identified according to the front-vehicle identification ranges; multiple rear-vehicle identification ranges are generated according to the third image and the rear highway lane lines; a rear-target-vehicle parameter group is obtained from the millimetre-wave radar and projected into the third image according to the installation parameters of the millimetre-wave radar to obtain multiple rear-target-vehicle parameter points, whereby the rear target vehicle is identified; finally, cruise control is performed on the motion parameters of the subject vehicle according to the motion parameters of the front target vehicle and the rear target vehicle. Highway lane lines are thus accurately identified, the lane environment information of the subject vehicle is learned in combination with the data obtained by the millimetre-wave radar, and cruise control is performed on the subject vehicle according to the specific lane environment information, ensuring driving safety. Additional aspects and advantages of embodiments of the present invention will be set forth in part in the following description, will in part become apparent from the following description, or will be learned through practice of the present invention.
Description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flow chart of an automatic vehicle driving control method according to a first embodiment of the present invention;
Fig. 2 is a flow chart of an automatic vehicle driving control method according to a second embodiment of the present invention;
Fig. 3 is a flow chart of an automatic vehicle driving control method according to a third embodiment of the present invention;
Fig. 4 is a flow chart of an automatic vehicle driving control method according to a fourth embodiment of the present invention;
Fig. 5 is a flow chart of an automatic vehicle driving control method according to a fifth embodiment of the present invention;
Fig. 6 is a schematic diagram of a scenario of an automatic vehicle driving control method according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of a scenario of an automatic vehicle driving control method according to another embodiment of the present invention;
Fig. 8 is a structural schematic diagram of an automatic vehicle driving control device according to a first embodiment of the present invention;
Fig. 9 is a structural schematic diagram of an automatic vehicle driving control device according to a second embodiment of the present invention;
Fig. 10 is a structural schematic diagram of an automatic vehicle driving control device according to a third embodiment of the present invention;
Fig. 11 is a structural schematic diagram of an automatic vehicle driving control device according to a fourth embodiment of the present invention;
Fig. 12 is a structural schematic diagram of an automatic vehicle driving control device according to a fifth embodiment of the present invention; and
Fig. 13 is a structural schematic diagram of an automatic vehicle driving control device according to a sixth embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements, or elements having the same or similar functions. The embodiments described below with reference to the accompanying drawings are exemplary; they are intended to explain the present invention and are not to be construed as limiting the present invention.
The automatic vehicle driving control method and device of the embodiments of the present invention are described below with reference to the accompanying drawings.
Fig. 1 is a flow chart of an automatic vehicle driving control method according to a first embodiment of the present invention.
In general, millimetre-wave radars are assembled at the front, sides or rear of a vehicle to implement functions with different emphases, such as forward collision avoidance, side collision avoidance and rear collision avoidance. In practical applications, the millimetre-wave radar analyses, from the feedback of the signal it transmits, the distance and relative speed between the vehicle ahead and the current subject vehicle, so that the vehicle can automatically adjust its speed and ensure driving safety.
Specifically, on a vehicle equipped with millimetre-wave radar, after the radar selects a vehicle to follow, the followed vehicle is monitored as the target vehicle; whether the vehicle ahead accelerates, decelerates, stops or starts, the subject vehicle learns of it in real time and takes appropriate measures. However, because the millimetre-wave radar is pulse-based, it cannot accurately judge the category and properties of a measured target vehicle, and in particular cannot identify lane lines well when a front target vehicle travels on a curve, which easily causes identification delay and the like, so that driving carries safety risks.
To solve the above problems, the present invention proposes an automatic vehicle driving control method, which can accurately identify highway lane lines and highway lane environment information, ensuring driving safety.

The automatic vehicle driving control method of the present invention is described below with reference to specific embodiments. As shown in Fig. 1, the automatic vehicle driving control method includes:
S101: a first image and a second image of the environment in front of the subject vehicle are obtained from a front-mounted 3D camera, the first image being a colour or luminance image and the second image being a depth image.

Specifically, a 3D camera is arranged in advance at the front of the current subject vehicle to obtain the first image and the second image of the environment in front of the subject vehicle, wherein the first image is a colour or luminance image and the second image is a depth image.
It should be noted that, in practical applications, depending on the imaging device structure of the front-mounted 3D camera, the first image and the second image of the environment in front of the subject vehicle can be obtained from the front-mounted 3D camera in a variety of ways.

As one possible implementation, the first image of the environment in front of the subject vehicle is obtained from the image sensor of the front-mounted 3D camera, and the second image of the environment in front of the subject vehicle is obtained from the time-of-flight (TOF) sensor of the front-mounted 3D camera.
The image sensor refers to an array or set of luminance pixel sensors, for example red-green-blue (RGB) or luminance-chrominance (YUV) luminance pixel sensors. Limited in its ability to accurately determine the distance to an object to be detected, it is commonly used to obtain a luminance image of the environment.

The TOF sensor refers to an array or set of TOF pixel sensors; the TOF pixel sensors may be, for example, optical sensors or phase detectors, which can detect the flight time of light from a pulsed or modulated light source propagating between the TOF pixel sensor and the object to be detected, so as to measure the distance of the object and obtain a depth image.
In addition, in practical applications, the image sensor or TOF sensor can be made using a complementary metal-oxide-semiconductor (CMOS) process, and the luminance pixel sensors and TOF pixel sensors can be fabricated on the same substrate in proportion. For example, 8 luminance pixel sensors and 1 TOF pixel sensor made at a ratio of 8:1 form one large interleaved pixel, in which the photosensitive area of the 1 TOF pixel sensor can be equal to the photosensitive area of the 8 luminance pixel sensors, and the 8 luminance pixel sensors can be arranged in an array of 2 rows and 4 columns.

For example, an array of 360 rows and 480 columns of the above active interleaved pixels can be made on a substrate with a 1-inch optical target surface, giving an active luminance pixel sensor array of 720 rows and 1920 columns and an active TOF pixel sensor array of 360 rows and 480 columns, so that the single camera composed of this image sensor and TOF sensor can obtain a colour or luminance image and a depth image simultaneously.
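The index arithmetic of this interleaved layout can be sketched as follows (an illustration of the 8:1 example above; the function name is ours, not the patent's):

```python
# Each TOF pixel shares its photosensitive area with a 2-row x 4-column
# block of luminance pixels, per the 8:1 interleaved layout described above.
TOF_ROWS, TOF_COLS = 360, 480      # interleaved (TOF) pixel array
BLOCK_ROWS, BLOCK_COLS = 2, 4      # luminance pixels per TOF pixel

def luminance_block(tof_row, tof_col):
    """Return the luminance-pixel coordinates covered by one TOF pixel."""
    return [(tof_row * BLOCK_ROWS + r, tof_col * BLOCK_COLS + c)
            for r in range(BLOCK_ROWS) for c in range(BLOCK_COLS)]

# The luminance array is therefore 720 rows x 1920 columns, as stated.
assert TOF_ROWS * BLOCK_ROWS == 720
assert TOF_COLS * BLOCK_COLS == 1920
```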
As a result, the first image and the second image of the environment in front of the subject vehicle are obtained by the same front-mounted 3D camera, which can be realized using CMOS manufacturing; according to the Moore's law of the semiconductor industry, the front-mounted 3D camera will have a sufficiently low manufacturing cost within a limited period.
S102: front highway lane lines are obtained according to the first image.

Specifically, since the first image is a colour or luminance image, and identifying the positions of highway lane lines only requires using the luminance difference between the lane lines and the road surface, in practical implementation the highway lane lines can be obtained through the luminance information of the first image.

Specifically, if the first image is a luminance image, the front highway lane lines are identified according to the luminance difference between the front highway lane lines and the road surface in the first image.

If the first image is a colour image, the colour image is first converted into a luminance image, and the front highway lane lines are then identified according to the luminance difference between the front highway lane lines and the road surface.

Since the method of converting a colour image into a luminance image is familiar to those skilled in the art, the detailed conversion process is not repeated here.
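As a minimal sketch of this luminance-difference idea (our own illustration; the patent does not specify a threshold rule), lane markings are brighter than the surrounding road surface, so a per-row threshold on a luminance image flags candidate lane-line pixels:

```python
import numpy as np

def lane_candidate_pixels(luma, margin=40):
    """Mask pixels noticeably brighter than their row's mean luminance.

    `margin` is an assumed tuning parameter; real systems would add
    filtering and line fitting on top of this raw mask.
    """
    row_mean = luma.mean(axis=1, keepdims=True)
    return luma > (row_mean + margin)

# Toy 4x6 "image": dark road (50) with one bright lane stripe (200) in column 2.
img = np.full((4, 6), 50.0)
img[:, 2] = 200.0
mask = lane_candidate_pixels(img)
```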
S103: a third image and rear highway lane lines are obtained according to the imaging parameters of the first image and the front highway lane lines.

The imaging parameters of the first image can include the imaging pixel coordinate system, the focal length of the camera that obtains the first image, and the position and orientation of the camera in the physical world coordinate system of the subject vehicle. Through these imaging parameters, a projection relation can be established between any image pixel coordinate of the first image and the physical world coordinate system of the subject vehicle; the method of establishing this projection relation is familiar to those skilled in the art.

The third image is a top view of all the pixel positions of the front highway lane lines after projection, so the positions of the front highway lane lines in the third image are exactly the positions of the highway lane lines in front of the subject vehicle relative to the origin of the physical world coordinate system of the subject vehicle.

In turn, since the rear highway lane lines are the continuation of the front highway lane lines, the rear highway lane lines are obtained on the basis of the front highway lane lines obtained first.
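One common form of such a projection relation, under a flat-road, level-camera assumption (our simplification; the patent only states that a projection relation is established from the imaging parameters), maps a road pixel to ground coordinates in the vehicle frame:

```python
# Assumed camera model: height h above the road, optical axis horizontal,
# focal length f in pixels, principal point (cx, cy). A road pixel (u, v)
# below the horizon row cy back-projects to ground coordinates (X, Y),
# with Y the forward distance and X the lateral offset.
def pixel_to_ground(u, v, f=1000.0, cx=960.0, cy=360.0, h=1.5):
    dv = v - cy
    if dv <= 0:
        raise ValueError("pixel is at or above the horizon")
    Y = f * h / dv          # forward distance (m)
    X = (u - cx) * Y / f    # lateral offset (m)
    return X, Y
```

Applying this to every edge pixel of the detected lane lines yields the top-view "third image" positions described above.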
S104: the front highway lane lines are mapped into the second image according to the interleaving mapping relation between the first image and the second image to generate multiple front-vehicle identification ranges.

Specifically, since the first image and the second image are the colour or luminance image and the depth image obtained by the same front-mounted 3D camera, the first image and the second image have an interleaving mapping relation. Owing to this relation, the row-column coordinates of each pixel of the first image, after equal-proportion adjustment, determine the row-column coordinates of at least one pixel of the second image; therefore each edge pixel position of the front highway lane lines obtained from the first image determines at least one pixel position in the second image, so that the equal-proportion-adjusted front highway lane lines are obtained in the second image.

In turn, according to the field of view observed by the human eye, each two adjacent front highway lane lines obtained in the second image uniquely create one front-vehicle identification range.
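With the 2×4 interleaved layout assumed earlier, the equal-proportion adjustment of this step is simply integer division of the luminance-image coordinates (a sketch under that layout assumption):

```python
# A first-image (luminance) pixel lands in the second-image (TOF/depth)
# pixel that covers the same photosensitive area, so the mapping is an
# equal-proportion scale-down by the interleave block size.
def first_to_second_image(row, col, block_rows=2, block_cols=4):
    return row // block_rows, col // block_cols
```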
S105: the front target vehicle is identified according to all the front-vehicle identification ranges.

Specifically, after the front-lane identification ranges are obtained, a vehicle located within a front-lane identification range is taken as the front target vehicle.
S106: multiple rear-vehicle identification ranges are generated according to the third image and the rear highway lane lines.

Specifically, according to the field of view observed by the human eye, each two adjacent rear highway lane lines in the third image uniquely create one rear-vehicle identification range.
S107, from millimetre-wave radar obtain rear area target vehicle parameter group, according to the installation parameter of millimetre-wave radar will after
Square mesh mark vehicle parameter group projects in third image to obtain multiple rear area target vehicle parameter points.
Specifically, millimetre-wave radar, such as it is operated in the frequency modulated continuous wave radar of 24GHz or 77GHz, it can obtain multiple
The parameters such as distance, relative velocity and the azimuth of rear area target vehicle form rear area target vehicle parameter group, thus pass through milli
Metre wave radar obtains rear area target vehicle parameter, and each group of the rear area target vehicle parameter group includes at least the rear area target vehicle
Distance, relative velocity and azimuth.
Wherein, the installation parameter of the position and orientation of physical world coordinates system of the millimetre-wave radar in main body vehicle can be with
By the off-line test of main body vehicle to obtain and record, therefore each target vehicle of millimetre-wave radar target vehicle parameter group
The parameters such as distance, relative velocity and azimuth can be converted to opposite main body vehicle physical world coordinates system origin ginseng
Number, i.e., according to the installation parameter of millimetre-wave radar by rear area target vehicle parameter group project in third image with obtain it is multiple after
Square mesh mark vehicle parameter point.
For example, suppose the normal of the millimeter-wave radar coincides with the Y-axis of the physical-world coordinate system of the main body vehicle, and the distance from the starting point of the radar normal to the origin of that coordinate system is -2 m. If the radar detects a rear target vehicle at a distance of 10 m, with a relative velocity of 2 m/s and an azimuth of 30° (i.e. the angle between the Y-axis and the line connecting the target vehicle to the origin), then the X, Y coordinates of that target vehicle relative to the origin of the main body vehicle's physical-world coordinate system are (10 m × sin 30°, -2 m - 10 m × cos 30°), i.e. (5 m, -10.66 m). In other words, projecting the radar's target vehicle parameter group into the third image according to the radar's installation parameters yields the rear target vehicle parameter point (5 m, -10.66 m).
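The polar-to-Cartesian conversion in the worked example above can be sketched as follows. This is an illustrative sketch, not code from the patent; the function and parameter names are assumptions, and the radar normal is assumed to coincide with the vehicle's Y-axis.

```python
import math

def radar_point_to_vehicle_frame(distance_m, azimuth_deg, radar_y_offset_m):
    """Convert a radar detection (distance, azimuth) into X/Y coordinates
    in the main body vehicle's physical-world coordinate system.
    The radar normal is assumed to lie along the vehicle's Y-axis,
    offset by radar_y_offset_m (negative = behind the origin)."""
    az = math.radians(azimuth_deg)
    x = distance_m * math.sin(az)
    y = radar_y_offset_m - distance_m * math.cos(az)
    return x, y

# The worked example: 10 m at 30 degrees, radar mounted at Y = -2 m.
print(radar_point_to_vehicle_frame(10.0, 30.0, -2.0))
```

Running this reproduces the (5 m, -10.66 m) parameter point from the example.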
S108: identify the rear target vehicles according to all rear vehicle identification ranges and the plurality of rear target vehicle parameter points.
Specifically, since the radar's rear target vehicle parameter groups are projected into the third image according to the radar's installation parameters to obtain several rear target vehicle parameter points, any target vehicle whose parameter point falls within a corresponding vehicle identification range is labeled as a rear target vehicle.
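The labeling step can be sketched as a simple containment test. This is an illustrative sketch under the assumption that an identification range can be modeled as an axis-aligned rectangle; the patent does not fix a concrete geometry, and all names are hypothetical.

```python
def label_rear_targets(parameter_points, identification_ranges):
    """Return indices of radar parameter points (x, y) that fall inside
    any vehicle identification range, modeled here as rectangles
    (x_min, y_min, x_max, y_max) in the vehicle frame."""
    labeled = []
    for i, (x, y) in enumerate(parameter_points):
        for (x0, y0, x1, y1) in identification_ranges:
            if x0 <= x <= x1 and y0 <= y <= y1:
                labeled.append(i)
                break
    return labeled

points = [(5.0, -10.66), (12.0, -3.0)]     # radar points in the vehicle frame
ranges = [(3.0, -15.0, 7.0, -5.0)]         # one rear lane identification range
print(label_rear_targets(points, ranges))
```

Only the first point lies inside the range, so only that vehicle is labeled as a rear target vehicle.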
S109: perform cruise control on the kinematic parameters of the main body vehicle according to the kinematic parameters of the front target vehicles and the rear target vehicles.
Here, the kinematic parameters of a target vehicle include information such as its speed and steering, and the kinematic parameters of the main body vehicle include its speed, turn-signal display information, and the like.
Specifically, after the kinematic parameters of a front target vehicle are obtained, cruise control is performed on the kinematic parameters of the main body vehicle to ensure driving safety; for example, when the front target vehicle decelerates, the speed of the main body vehicle is reduced appropriately to avoid a rear-end collision.
It should be emphasized that, in practical applications, if a rear target vehicle is present behind the main body vehicle, the turn signals also need to be used to prompt the rear target vehicle in order to avoid accidents such as the rear vehicle colliding with the main body vehicle. For example, if the main body vehicle changes lanes, the turn signal is switched on to warn the rear target vehicle.
Thus, in one embodiment of the present invention, a front target vehicle range may also be generated from the front target vehicle, and the front target vehicle range mapped into the first image according to the interleaved mapping relationship between the first image and the second image to generate a front vehicle-lamp identification region.
It can be understood that driving safety is related to the running state of the front target vehicle. For example, when the front target vehicle travels straight, the main body vehicle can travel normally; if the front target vehicle suddenly decelerates or changes lanes, the main body vehicle needs to brake in order to avoid a rear-end collision. Since the running state of a front target vehicle can be inferred from its vehicle lamps, in this embodiment the front vehicle-lamp identification region must be determined in order to read the lamps of the front target vehicle.
Specifically, since the front vehicle-lamp identification region lies within the front target vehicle range, the front target vehicle range is first generated from the front target vehicle. Owing to the interleaved mapping relationship between the first image and the second image, the row/column coordinates of each pixel of the front target vehicle range in the second image, adjusted in equal proportion, determine at least one pixel's row/column coordinates in the first image, and the imaging of the target vehicle's lamps is contained in the corresponding front target vehicle range; the front vehicle-lamp identification region is thereby generated in the first image.
It should be noted that, in practical applications, the way the front target vehicle range is generated from the front target vehicle differs with the specific application scenario. Examples are as follows:
First example:
The front target vehicle range is generated from the closed region enclosed by the object boundary of the front target vehicle.
In this example, as one possible implementation, the object boundary of the front target vehicle is detected and identified using a boundary detection method from image processing (for example the Canny, Sobel, or Laplace edge detectors).
In a depth image, the depth sub-image formed by light reflected from the back or front face of a given target vehicle to the TOF sensor contains consistent distance information; therefore, once the position of the depth sub-image formed by the target vehicle is identified in the depth image, the distance information of that target vehicle can be obtained. Here, a sub-image refers to a combination of a subset of the pixels of an image.
While the depth sub-image formed by light reflected from the back or front face of a target vehicle contains consistent distance information, the depth sub-image formed by light reflected from the road surface contains continuously varying distance information. The junction between a sub-image of consistent distance and a sub-image of continuously varying distance therefore necessarily exhibits abrupt differences, and the boundary of these abrupt differences constitutes the object boundary of the target vehicle in the depth image.
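The abrupt-difference idea can be sketched along a single image column. This is an illustrative sketch, not the patent's implementation; the function name and the 0.5 m jump threshold are assumptions.

```python
def depth_boundary_rows(depth_column, jump_threshold=0.5):
    """Locate abrupt depth changes along one image column: the road
    surface yields smoothly varying depth, a vehicle's rear face a nearly
    constant one, so their junction shows a jump above the threshold."""
    return [i for i in range(len(depth_column) - 1)
            if abs(depth_column[i + 1] - depth_column[i]) > jump_threshold]

# Road surface receding gently from 12.0 m to 11.2 m (0.1 m per row),
# then a vehicle rear face at a constant 10.0 m.
road = [12.0 - 0.1 * i for i in range(9)]
car = [10.0] * 6
print(depth_boundary_rows(road + car))
```

The single reported index marks the road/vehicle junction, i.e. one point of the object boundary.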
Second example:
The front target vehicle range is generated from the closed region enclosed by an extension of the object boundary of the front target vehicle.
In this example, as one possible implementation, the object boundary of the front target vehicle is likewise detected and identified using a boundary detection method from image processing.
Third example:
The front target vehicle range is generated from the closed region enclosed by lines connecting multiple pixel positions of the front target vehicle.
Here, the vehicle identification range is determined by all pixel positions of the lane lines; detecting the object boundary of a target vehicle within the vehicle identification range therefore reduces the boundary interference caused by road facilities such as isolation strips, light poles, and guard piles.
Further, the turn signal of the corresponding front target vehicle is identified according to the front vehicle-lamp identification region.
Specifically, after the front vehicle-lamp identification region is obtained, the turn signal of the corresponding front target vehicle is identified according to that region in order to accurately know the concrete running state of the front target vehicle.
It should be noted that the way the turn signal of the corresponding front target vehicle is identified from the front vehicle-lamp identification region differs with the specific application requirements.
As one possible implementation, the turn signal of the corresponding front target vehicle is identified according to the color, flicker frequency, or flicker sequence of the tail lamps within the front vehicle-lamp identification region.
In this implementation, at the initial stage of a lane change by the front target vehicle both its longitudinal and lateral displacements are small, which means that the size of its vehicle-lamp identification region also changes little, while only the imaged brightness at the turn signal changes markedly because of the flicker.
Therefore, color or luminance images at several different moments are acquired continuously, and temporal differentiation is applied to the vehicle-lamp identification region of the front target vehicle in them, creating a time-differential vehicle-lamp identification region sub-image of that vehicle. The time-differential sub-image highlights the sub-image of the continuously flickering lamp of the front target vehicle.
The time-differential vehicle-lamp identification region sub-image is then projected onto the row coordinate axis, and a one-dimensional search yields the starting and ending row coordinates of the lamp sub-image of the target vehicle. These starting and ending row coordinates are projected back onto the time-differential sub-image, and the starting and ending column coordinates of the lamp sub-image are found. The row and column coordinates of the start and end of the lamp sub-image are then projected into the color or luminance images of the several different moments to confirm the color, flicker frequency, or flicker sequence of the lamp of the front target vehicle, thereby determining the row/column coordinate position of the flickering lamp sub-image.
Further, if the row/column coordinate position of the flickering lamp sub-image lies only on the left side of the vehicle-lamp identification region of the front target vehicle, it can be determined that the front target vehicle is signaling a left turn; if it lies only on the right side, the front target vehicle is signaling a right turn; and if flickering lamp sub-images appear on both sides of the vehicle-lamp identification region, the target vehicle has its hazard (double-flash) warning lamps on.
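The left/right/both-sides decision can be sketched on a time-differential luminance region. This is an illustrative sketch, not the patent's procedure; the function name, the list-of-rows representation, and the flicker threshold are assumptions.

```python
def classify_turn_signal(diff_region, flicker_threshold=30):
    """Classify a turn signal from a time-differential luminance image
    (list of rows) of the vehicle-lamp identification region: flicker
    energy only in the left half means a left signal, only in the right
    half a right signal, and both sides the hazard warning lamps."""
    width = len(diff_region[0])
    cols = [sum(row[c] for row in diff_region) for c in range(width)]
    mid = width // 2
    left = max(cols[:mid]) > flicker_threshold
    right = max(cols[mid:]) > flicker_threshold
    if left and right:
        return "hazard"
    return "left" if left else "right" if right else "none"

# Four differenced frames; bright brightness change in column 1 only.
region = [[0, 20, 0, 0, 0, 0, 0, 0, 0, 0] for _ in range(4)]
print(classify_turn_signal(region))
```

Summing over rows corresponds to the column-axis projection described in the text; the one-dimensional search over that projection locates the flickering lamp.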
In addition, in this implementation, during a lane change the longitudinal or lateral displacement of the front target vehicle may be large, causing the size of its vehicle-lamp identification region to change considerably. In that case, the vehicle-lamp identification regions of the front target vehicle acquired continuously at several different moments are compensated for longitudinal or lateral displacement and scaled to identification regions of the same size. Temporal differentiation is then applied to the adjusted vehicle-lamp identification regions to create the time-differential vehicle-lamp identification region sub-image of the front target vehicle; that sub-image is projected onto the row coordinate axis, a one-dimensional search yields the starting and ending row coordinates of the lamp sub-image, those coordinates are projected back onto the time-differential sub-image, the starting and ending column coordinates of the lamp sub-image are found, and the row/column coordinates of the start and end of the lamp sub-image are projected into the color or luminance images of the several different moments to confirm the color, flicker frequency, or flicker sequence of the lamp. The row/column coordinate position of the flickering lamp sub-image is thereby determined, finally completing the identification of the left turn signal, the right turn signal, or the hazard warning lamps.
Specifically, when the kinematic parameters and turn signal of a front target vehicle reveal the operating condition that a front vehicle in an adjacent lane is decelerating and changing into this lane, the kinematic-parameter control system of the main body vehicle performs braking adjustment in advance, and the lamp system of the main body vehicle reminds the rear target vehicle. Because the lamp system can be adjusted earlier to alert the rear target vehicle, the rear target vehicle gains more braking or adjustment time, which more effectively reduces the risk of rear-end collisions.
Alternatively, when the kinematic parameters and turn signal reveal the operating condition that a front vehicle in this lane is decelerating and changing into an adjacent lane, the kinematic-parameter control system of the main body vehicle makes no braking adjustment. The control system thus avoids unnecessary braking adjustments, reducing the rear-end collision risk that unnecessary braking of the main body vehicle would otherwise cause.
In conclusion the vehicle of the embodiment of the present invention travels autocontrol method, main body vehicle is obtained from preposition 3D cameras
The first image and the second image of front environment, lines on highway before obtaining, and according to the imaging parameters of the first image and front
Lines on highway obtains third image and rear lines on highway, is mapped and is closed according to the intertexture between the first image and the second image
System, which maps to front lines on highway in the second image, generates multiple front vehicles identification ranges, to be identified according to front vehicles
Range identifies objects ahead vehicle, and multiple front vehicle identification ranges are generated according to third image and rear lines on highway, and
Rear area target vehicle parameter group is obtained from millimetre-wave radar, according to the installation parameter of millimetre-wave radar by rear area target vehicle parameter
Group is projected to obtain multiple rear area target vehicle parameter points in third image, to identify rear area target vehicle, final root
Cruise control is carried out to the kinematic parameter of main body vehicle according to the kinematic parameter and rear area target vehicle of objects ahead vehicle.By
This, accurately identifies lines on highway, and the data for combining millimetre-wave radar to obtain know the roadway environments information of main body vehicle, root
Cruise control is carried out to main body vehicle according to specific roadway environments information, ensure that traffic safety.
Based on the above description, it should be noted that, depending on the specific application scenario, different techniques can be used to identify the front highway lane lines according to the luminance difference between the front highway lane lines and the road surface in the first image. This is explained below with specific examples.
Fig. 2 is a flowchart of the vehicle driving automatic control method according to a second embodiment of the present invention. As shown in Fig. 2, the above step S102 includes:
S201: create a binary image of the front highway lane lines according to the luminance information of the first image and a preset luminance threshold.
Specifically, since in practice highway lane lines include both solid and dashed lane lines, for ease of explanation the identification of solid lane lines is described first.
Specifically, a luminance threshold is preset using the luminance difference between the lane lines and the road surface in the first image; the preset luminance threshold is obtained by a search, which can be performed with a "histogram statistics - bimodal" algorithm.
A binary image highlighting the highway lane lines is then created from the preset luminance threshold and the luminance image. The luminance image may also be divided into multiple luminance sub-images, with the "histogram statistics - bimodal" algorithm executed on each luminance sub-image to find multiple luminance thresholds; a binary sub-image highlighting the lane lines is created from each luminance threshold and the corresponding luminance sub-image, and the complete binary image of the highlighted lane lines is assembled from the binary sub-images, so as to cope with brightness variations of the road surface or the lane lines.
The concrete implementation steps of searching for the luminance threshold and creating the binary image of the lane lines can be derived by those skilled in the art on the basis of the prior art, and are not described in detail here.
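One plausible shape of such a bimodal threshold search is sketched below. This is an illustrative sketch only: the patent does not specify the algorithm, and the bin count, peak-suppression window, and all names are assumptions.

```python
def bimodal_threshold(luminance, bins=64, vmax=256.0):
    """Pick a binarization threshold from a luminance histogram assumed
    to be bimodal (dark road surface vs. bright lane-line paint): find
    the two highest peaks and return the value at the valley between
    them."""
    width = vmax / bins
    hist = [0] * bins
    for v in luminance:
        hist[min(bins - 1, int(v / width))] += 1
    p1 = hist.index(max(hist))                    # highest peak
    masked = [0 if abs(i - p1) <= 4 else h for i, h in enumerate(hist)]
    p2 = masked.index(max(masked))                # second peak
    a, b = sorted((p1, p2))
    between = hist[a:b + 1]
    valley = a + between.index(min(between))      # lowest bin between peaks
    return valley * width

# Dark road pixels around 40, bright lane-line pixels around 200.
pixels = [38, 40, 41, 39, 42, 40, 43, 198, 200, 202]
t = bimodal_threshold(pixels)
print(40 < t < 198)
```

Pixels above the returned threshold would be set in the binary image; running the search per luminance sub-image gives the locally adapted thresholds described above.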
S202: detect, in the binary image and according to a preset detection algorithm, all edge pixel positions of a straight-segment solid lane line, or detect all edge pixel positions of a curve solid lane line.
Specifically, after the binary image of the front highway lane lines is obtained, note that the curvature radius of a highway lane line cannot be too small, and that, owing to the camera projection principle, a nearby lane line occupies more imaging pixels than a distant one; the pixels of a curve's solid lane line that are arranged in a straight line in the luminance image therefore still account for the majority of that lane line's imaging pixels. A preset detection algorithm, such as a straight-line detection algorithm like the Hough transform, can thus be used in the binary image highlighting the highway lane lines to detect all edge pixel positions of a straight-segment solid lane line, or to detect the majority initial straight-line edge pixel positions of a curve solid lane line.
Of course, without filtering, straight-line detection would also pick up the majority straight-line edge pixel positions of isolation strips, utility poles, and the like in the binary image. According to the aspect ratio of the image sensor, the lens focal length, the road width range of the highway design specification, and the installation position of the image sensor on the main body vehicle, a slope range for lane lines in the binary image can be set, and straight lines that are not lane lines are filtered out according to this slope range.
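The slope-range filter can be sketched as follows. This is an illustrative sketch: the 0.4 bound stands in for the range that would, per the text, be derived from sensor aspect ratio, focal length, and mounting position, and all names are assumptions.

```python
def filter_lane_lines(segments, min_abs_slope=0.4):
    """Filter detected line segments (x0, y0, x1, y1) by slope: in a
    forward-facing image lane lines converge toward the vanishing point
    and appear steep, while isolation strips or pole bases near the
    horizon appear nearly horizontal and are discarded."""
    kept = []
    for (x0, y0, x1, y1) in segments:
        if x0 == x1:                       # vertical segment: keep
            kept.append((x0, y0, x1, y1))
            continue
        slope = (y1 - y0) / (x1 - x0)
        if abs(slope) >= min_abs_slope:
            kept.append((x0, y0, x1, y1))
    return kept

segments = [(100, 400, 300, 100),          # steep: plausible lane line
            (0, 220, 600, 230)]            # nearly horizontal: discard
print(len(filter_lane_lines(segments)))
```

In practice the segments would come from a Hough-transform line detector run on the binary image.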
Since the edge pixel positions of a curve's solid lane line always vary continuously, connected pixel positions are searched for from the edge pixel positions at both ends of the initial straight line detected above, and each connected pixel position is merged into the edge pixel set of that initial straight line; the search and merge are repeated until all edge pixel positions of the curve solid lane line are uniquely determined.
S203: detect, in the binary image and according to the preset detection algorithm, all edge pixel positions of a straight-segment dashed lane line, or detect all edge pixel positions of a curve dashed lane line.
To explain comprehensively how the front highway lane lines are identified according to the luminance difference between the front highway lane lines and the road surface in the first image, the identification of dashed lane lines is described next.
The straight-line detection algorithm described above can also detect the majority initial straight-line edge pixel positions of a dashed lane line, and the edge pixels of the other, shorter segments belonging to the dashed lane line can be connected either via the extension line of those majority initial straight-line edge pixel positions, or via a search-and-merge method, to obtain all edge pixel positions of the dashed lane line. The extension-line method is used to obtain all edge pixel positions of a straight-segment dashed lane line, while the search-and-merge method is used for a curve dashed lane line. Choosing between the two methods requires the prior knowledge of whether the dashed lane line lies on a straight segment or on a curve; such prior knowledge can be obtained by detecting the solid lane lines.
As one implementation, according to the prior knowledge of the solid lane lines, the principle that real lane lines are mutually parallel, and the projection parameters of the image sensor and camera, all edge pixel positions of the solid lane line are projected onto the initial straight-line edge pixel positions of the dashed lane line, so as to connect the initial straight-line edge pixel positions of the dashed lane line with the edge pixel positions of the other, shorter segments belonging to it, thereby obtaining all edge pixel positions of the dashed lane line.
As another implementation, no prior knowledge of straight segment or curve is needed. While the vehicle cruises on a straight segment, or on a curve at a constant steering angle, the lateral offset of a dashed lane line within a short continuous time is almost negligible while its longitudinal offset is large; therefore, across several consecutive binary images highlighting the highway lane lines at different moments, the dashed lane line can be superimposed into a solid lane line, which is then processed by the solid-lane-line identification method described above to obtain all edge pixel positions of the dashed lane line.
Since the longitudinal offset of the dashed lane line is affected by the speed of the main body vehicle, the minimum number of consecutive binary frames at different moments needed to superimpose the dashed lane line into a solid lane line can be determined dynamically according to the speed obtained from the wheel-speed sensor, thereby obtaining all edge pixel positions of the dashed lane line.
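The superposition idea can be sketched with a logical OR over consecutive binary frames. This is a minimal illustrative sketch (single image columns as 0/1 lists); the dash geometry and per-frame shift are assumptions.

```python
def superimpose_dashed(frames):
    """Superimpose binary lane-line images (here: single columns, lists
    of 0/1) from consecutive moments with a logical OR: the dashes shift
    longitudinally between frames, so their union fills the gaps and
    reads as a solid line."""
    out = list(frames[0])
    for frame in frames[1:]:
        out = [a | b for a, b in zip(out, frame)]
    return out

# A dash covers 2 of every 6 pixels; the pattern shifts 2 px per frame
# as the vehicle advances.
def dash_column(shift, height=12):
    return [1 if (i - shift) % 6 in (0, 1) else 0 for i in range(height)]

merged = superimpose_dashed([dash_column(s) for s in (0, 2, 4)])
print(sum(merged))
```

Three frames suffice here because the per-frame shift covers the gap; with a real wheel-speed signal the frame count would be chosen dynamically as described above.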
In conclusion the vehicle of the embodiment of the present invention travels autocontrol method, according to the luminance information of the first image and
Preset luminance threshold creates the bianry image of front lines on highway, is detected in bianry image according to preset detection algorithm
Go out whole edge pixel locations of straight way solid line lane line or detects the whole edge pixel locations for solid line lane line of going off the curve, root
Whole edge pixel locations of straight way dotted line lane line are detected in bianry image according to preset detection algorithm or are detected curved
Whole edge pixel locations of road dotted line lane line.The dotted line of straight way and bend in lines on highway can be recognized accurately as a result,
With solid line lane line.
It should be noted that, depending on the specific application scenario, different techniques can be used to obtain the third image and the rear highway lane lines according to the imaging parameters of the first image and the front highway lane lines. This is explained more clearly below with a specific example.
Fig. 3 is a flowchart of the vehicle driving automatic control method according to a third embodiment of the present invention. As shown in Fig. 3, the above step S103 includes:
S301: project all pixel positions of the front highway lane lines into the physical-world coordinate system of the main body vehicle according to the imaging parameters of the first image to establish the third image.
S302: obtain the positions of the rear highway lane lines from the positions of the front highway lane lines in the third image through continuous-time accumulation of the displacement relative to the origin of the main body vehicle's physical-world coordinate system.
Specifically, all pixel positions of the acquired front highway lane lines are projected into the physical-world coordinate system of the main body vehicle to establish the third image. The third image may be a top view of all projected pixel positions of the front highway lane lines, so the positions of the front highway lane lines in the third image are exactly the positions of the lane lines ahead of the main body vehicle relative to the origin of the main body vehicle's physical-world coordinate system.
Since the front highway lane lines acquired at a certain moment will lie behind the main body vehicle after some time, the positions of the rear highway lane lines of the main body vehicle are obtained from the positions of the front highway lane lines in the third image through continuous-time accumulation of the displacement relative to the origin of the main body vehicle's physical-world coordinate system.
For example, suppose that at moment T1 a point A of the front highway lane lines is at Y-axis distance D1 (and X-axis distance D2) from the origin of the main body vehicle's physical-world coordinate system, and the vehicle then travels along the Y-axis at constant speed V for a time T. At moment T2 = T1 + T, the displacement of point A in the main body vehicle's physical-world coordinate system is V × T (say V × T = 2 × D1), so the Y-axis distance of point A from the origin at moment T2 is D1 - V × T = -D1 (the X-axis distance is still D2); the position of a rear highway lane line of the main body vehicle is thus obtained.
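The straight-line, constant-speed case of the worked example reduces to a one-line update. The function and parameter names are illustrative, not from the patent.

```python
def updated_lane_point(d1_y, d2_x, speed, elapsed):
    """Shift a lane-line point recorded ahead of the vehicle backward by
    the distance the vehicle has since traveled along its Y-axis
    (straight-line, constant-speed case): X is unchanged, Y decreases
    by speed * elapsed."""
    return d2_x, d1_y - speed * elapsed

# Point A at Y = 8 m, X = 2 m; the vehicle travels 16 m (V*T = 2*D1),
# so the point ends up at Y = -8 m, behind the vehicle.
print(updated_lane_point(8.0, 2.0, 4.0, 4.0))
```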
For the case in which the main body vehicle travels at varying speed, the curve of V over T can be obtained from the wheel-speed sensor, and the displacement of the variable-speed travel obtained by integrating V over T. For the case in which the main body vehicle travels on an arc curve, the curvature radius of the curve is computed from the coordinates of the front highway lane lines in the main body vehicle's physical-world coordinate system; using that curvature radius and the arc displacement integrated over the travel time T, the coordinates of the front highway lane lines relative to the origin of the main body vehicle's physical-world coordinate system after time T can be computed, i.e. the positions of the rear highway lane lines of the main body vehicle are obtained.
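The arc case can be sketched as a rotation about the curve center. This is an illustrative sketch under stated assumptions: the curve center is taken at (radius, 0) in the vehicle frame (a right-hand curve), in which case every world point rotates about that center by +arc_length/radius in the vehicle frame; the names and sign conventions are assumptions, not the patent's.

```python
import math

def arc_updated_point(x, y, radius, arc_length):
    """Relocate a lane-line point in the vehicle frame after the vehicle
    has driven arc_length along a right-hand curve of the given radius,
    with the curve center assumed at (radius, 0)."""
    theta = arc_length / radius
    dx, dy = x - radius, y
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return (radius + dx * cos_t - dy * sin_t, dx * sin_t + dy * cos_t)

# The vehicle's own starting point, after a quarter circle of travel,
# ends up abeam-right of and behind the vehicle.
x, y = arc_updated_point(0.0, 0.0, 10.0, 10.0 * math.pi / 2)
print(round(x, 6), round(y, 6))
```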
In conclusion the vehicle of the embodiment of the present invention travels autocontrol method, it will according to the imaging parameters of the first image
Whole location of pixels of front lines on highway project to main body vehicle physical world coordinates system and establish third image, by third figure
The position of front lines on highway as in is accumulated by continuous time and with respect to main body vehicle physical world coordinates system origin
Displacement will obtain the position of rear lines on highway.The position of rear lane line has accurately been known as a result, convenient for according to rear vehicle
The position of diatom carries out relevant control to vehicle, to ensure that driving safety lays the foundation.
In practical applications, the front target vehicles and the rear target vehicles of the main body vehicle all exhibit a variety of running states and positions, and different running states and positions directly determine the specific control operations on the main body vehicle. For example, for a target vehicle in an adjacent front lane, as long as it remains in that lane, neither its acceleration nor its deceleration affects the driving safety of the main body vehicle; but once it changes into this lane, the main body vehicle needs to perform a deceleration operation, and so on.
How front target vehicles and rear target vehicles are identified is described in detail below.
Fig. 4 is a flowchart of the vehicle driving automatic control method according to a fourth embodiment of the present invention. As shown in Fig. 4, the above step S105 includes:
S401: mark all front vehicle identification ranges with a front this-lane label or a front non-this-lane label.
Specifically, according to the equal-proportion front highway lane lines obtained in the second image, the number of rows occupied by the initial straight portion of each front highway lane line is compared with the number of columns occupied to obtain the slope of that lane line's initial straight line. The front vehicle identification range created from the two front highway lane lines whose initial straight lines have the largest slopes is then marked with the front this-lane label, and the other created front vehicle identification ranges are marked with the front non-this-lane label.
Thus, the front highway lane lines can be mapped into the second image according to the interleaved mapping relationship between the first image and the second image to generate several front vehicle identification ranges in the second image, and all front vehicle identification ranges are marked with the front this-lane and front non-this-lane labels.
S402: identify the front this-lane target vehicle according to the vehicle identification range marked with the front this-lane label.
Specifically, after the front this-lane label is obtained, the front this-lane target vehicle lies within the front this-lane identification range, so it can be identified according to the vehicle identification range marked with the front this-lane label.
Specifically, the distance and position of a target vehicle relative to the TOF sensor always change over time, whereas the distance and position of the road surface and isolation strips relative to the TOF sensor are approximately unchanged over time. A time-differential depth image created from two depth images acquired at different moments is therefore used to detect these changes of distance and position, thereby identifying the front this-lane target vehicle within the vehicle identification range marked with the this-lane label.
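The time-differential depth image used here can be sketched as a per-pixel difference test. This is an illustrative sketch; the function name, the flattened-frame representation, and the 0.3 m change threshold are assumptions.

```python
def moving_pixel_count(depth_t0, depth_t1, change_threshold=0.3):
    """Count pixels whose depth changed between two frames: a
    time-differential depth image zeroes out the static road surface and
    roadside structures, while a moving target vehicle survives the
    difference."""
    return sum(1 for a, b in zip(depth_t0, depth_t1)
               if abs(b - a) > change_threshold)

# Flattened 2x4 depth frames: a static scene at 15 m, with one target
# vehicle (two pixels) moving 1.5 m closer between the frames.
t0 = [15.0] * 8
t1 = [15.0, 13.5, 13.5, 15.0, 15.0, 15.0, 15.0, 15.0]
print(moving_pixel_count(t0, t1))
```

Restricting the count to the pixels of a labeled identification range yields the per-lane detection described in S402 through S404.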
S403: identify the front non-this-lane target vehicles according to the vehicle identification ranges marked with the front non-this-lane label.
Specifically, after the front non-this-lane labels are obtained, the front non-this-lane target vehicles lie within the front non-this-lane identification ranges, so they can be identified according to the vehicle identification ranges marked with the front non-this-lane label.
Specifically, as above, the distance and position of a target vehicle relative to the TOF sensor always change over time, whereas those of the road surface and isolation strips are approximately unchanged; a time-differential depth image created from two depth images acquired at different moments is therefore used to detect these changes, thereby identifying the front non-this-lane target vehicles within the vehicle identification ranges marked with the non-this-lane label.
S404: identify a front lane-changing target vehicle according to pairwise combinations of the front vehicle identification ranges.
Specifically, since both the front this-lane target vehicle and the front non-this-lane target vehicles can be identified, a front lane-changing target vehicle is identified on the basis of the same recognition method, using pairwise combinations of front vehicle identification ranges.
Specifically, the distance and position of a target vehicle relative to the TOF sensor always change over time, whereas those of the road surface and isolation strips are approximately unchanged; a time-differential depth image created from two depth images acquired at different moments is therefore used to detect these changes, and the front lane-changing target vehicle is then identified within the pairwise combinations of front vehicle identification ranges.
Fig. 5 is a flowchart of the vehicle driving automatic control method according to a fifth embodiment of the present invention. As shown in Fig. 5, the above step S108 includes:
S501: mark all rear vehicle identification ranges with a rear this-lane label or a rear non-this-lane label.
Specifically, according to the rear highway lane lines in the third image, the number of rows occupied by the initial straight portion of each rear highway lane line is compared with the number of columns occupied to obtain the slope of that lane line's initial straight line. The rear vehicle identification range created from the two rear highway lane lines whose initial straight lines have the largest absolute slopes is then marked with the rear this-lane label, and the other created rear vehicle identification ranges are marked with the rear non-this-lane label.
Rear lines on highway can be mapped to according to the intertexture mapping relations between the first image and the second image as a result,
In second image, to generate several front vehicle identification ranges in the second image, and institute's front vehicle identification range is marked
The label in this non-track of this track of rear and rear.
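The slope test used in S501 to pick out the rear ego lane can be sketched as below. Names are illustrative, and for brevity the sketch labels the two steepest boundary lines themselves rather than the identification range enclosed between them.

```python
# Slope of an initial straight segment approximated as rows spanned over
# columns spanned; the two segments with the largest absolute slope are
# taken to bound the rear ego lane. Illustrative sketch, not the patent's code.

def segment_slope(pixels):
    """pixels: list of (row, col) points of one initial straight segment."""
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    dc = max(cols) - min(cols)
    dr = max(rows) - min(rows)
    return float('inf') if dc == 0 else dr / dc

def label_rear_ranges(segments):
    """The two steepest initial straight segments get the ego-lane label;
    all others get the non-ego-lane label."""
    order = sorted(range(len(segments)),
                   key=lambda i: abs(segment_slope(segments[i])),
                   reverse=True)
    ego = set(order[:2])
    return ['rear-ego-lane' if i in ego else 'rear-non-ego-lane'
            for i in range(len(segments))]
```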
In S502, the rear ego-lane target vehicle is identified according to the vehicle identification range marked with the rear ego-lane label and the rear target vehicle parameter points.
Specifically, once the rear ego-lane label is obtained, the rear ego-lane target vehicle lies within the rear ego-lane identification range, so it can be identified according to the vehicle identification range marked with the rear ego-lane label.
Specifically, the distance and position of a target vehicle relative to the TOF sensor change continuously over time, whereas the road surface and the median strip remain approximately unchanged. A time-differential depth image is therefore created from two depth images acquired at different moments to detect these changes, so that the rear ego-lane target vehicle is identified within the vehicle identification range marked with the rear ego-lane label.
In S503, the rear non-ego-lane target vehicle is identified according to the vehicle identification ranges marked with the rear non-ego-lane label and the rear target vehicle parameter points.
Specifically, once the rear non-ego-lane label is obtained, the rear non-ego-lane target vehicle lies within the rear non-ego-lane identification range, so it can be identified according to the vehicle identification ranges marked with the rear non-ego-lane label.
Specifically, the distance and position of a target vehicle relative to the TOF sensor change continuously over time, whereas the road surface and the median strip remain approximately unchanged. A time-differential depth image is therefore created from two depth images acquired at different moments to detect these changes, so that the rear non-ego-lane target vehicle is identified within the vehicle identification ranges marked with the rear non-ego-lane label.
In S504, a rear lane-changing target vehicle is identified according to the pairwise-combined rear vehicle identification ranges and the rear target vehicle parameter points.
Specifically, since both the rear ego-lane target vehicle and the rear non-ego-lane target vehicles can be identified, the same recognition method is applied to the pairwise-combined rear vehicle identification ranges to identify the rear lane-changing target vehicle.
Specifically, the distance and position of a target vehicle relative to the TOF sensor change continuously over time, whereas the road surface and the median strip remain approximately unchanged. A time-differential depth image is therefore created from two depth images acquired at different moments to detect these changes, so that the rear lane-changing target vehicle is identified within the pairwise-combined rear vehicle identification ranges.
Of course, besides the target vehicle recognition methods shown above, other approaches may be used to obtain target vehicles. As one possible implementation, after the object boundaries of the front target vehicles are identified following step S109, the object boundaries detected within each vehicle identification range are projected onto the row coordinate axis of the image, and a one-dimensional search is performed along the row axis, so that the row numbers and row coordinate ranges occupied by the longitudinal object boundaries of all front target vehicles within that vehicle identification range can be determined, together with the column counts and row coordinate positions occupied by the lateral object boundaries.
Here, a longitudinal object boundary is an object boundary occupying many pixel rows and few columns, while a lateral object boundary is one occupying few pixel rows and many columns.
Then, from the columns and row coordinate positions occupied by all lateral object boundaries within the vehicle identification range, the row coordinate positions of all longitudinal object boundaries within that range (namely the starting and ending row coordinates of the corresponding lateral object boundaries) are searched, and the object boundaries of different target vehicles are distinguished according to the principle that the object boundary of one vehicle contains consistent distance information, thereby determining the positions and distance information of all front target vehicles within the vehicle identification range.
Therefore, detecting the object boundary of a front target vehicle uniquely determines the position, within the depth image, of the depth sub-image formed by that vehicle, and thus uniquely determines the distance information of that front target vehicle.
With this exemplary boundary detection method, multiple front target vehicles and their distance information can be detected simultaneously; the front ego-lane target vehicle is then identified within the vehicle identification range marked with the ego-lane label, the front non-ego-lane target vehicle within the vehicle identification ranges marked with the non-ego-lane label, and the lane-changing target vehicle ahead within the pairwise-combined vehicle identification ranges.
Based on the same principle, rear target vehicles can also be identified, which is not repeated here.
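The boundary-projection search described above can be sketched roughly as follows; the helper names are mine, and a real implementation would also group runs by the consistent-distance rule before assigning them to vehicles.

```python
# Project the detected boundary pixels inside one vehicle identification
# range onto the horizontal image axis, then run a one-dimensional search
# over the projection: each contiguous run of occupied columns is a
# candidate longitudinal (near-vertical) object boundary.

def project_columns(edge_mask):
    """Count boundary pixels per column of a binary mask."""
    return [sum(row[c] for row in edge_mask)
            for c in range(len(edge_mask[0]))]

def find_runs(projection, min_count):
    """1-D search: return (start, end) column spans where the projected
    count reaches min_count, each span a candidate longitudinal boundary."""
    runs, start = [], None
    for c, v in enumerate(projection):
        if v >= min_count and start is None:
            start = c
        elif v < min_count and start is not None:
            runs.append((start, c - 1))
            start = None
    if start is not None:
        runs.append((start, len(projection) - 1))
    return runs
```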
As another implementation for obtaining rear target vehicles, after the multiple rear target vehicle parameter points are obtained in step S107, a target vehicle whose rear target vehicle parameter points fall within the vehicle identification range marked with the rear ego-lane label is labeled as the rear ego-lane target vehicle, and a target vehicle whose rear target vehicle parameter points fall within a vehicle identification range marked with the rear non-ego-lane label is labeled as a rear non-ego-lane target vehicle.
In practical applications, a millimeter-wave radar with a wider operating bandwidth has higher ranging resolution and therefore yields more parameter points for the same rear target vehicle. Clustering can group multiple target vehicle parameter points into one target vehicle parameter set, and that parameter set can form the contour of the same rear target vehicle.
For example, all target vehicle parameter points may first be grouped into several initial sets by their relative velocity parameter, and all parameter points in each initial set may then be clustered into a two-dimensional contour according to their X and Y coordinates relative to the origin of the physical-world coordinate system, yielding the parameter set of one target vehicle, for example by using k-means or another clustering method familiar to those skilled in the art. Finally, a target vehicle whose parameter set falls within the vehicle identification range marked with the rear ego-lane label is labeled as the rear ego-lane target vehicle, a target vehicle whose parameter set falls within a vehicle identification range marked with the rear non-ego-lane label is labeled as a rear non-ego-lane target vehicle, and a target vehicle whose parameter set falls across two or more adjacent rear vehicle identification ranges is labeled as a rear lane-changing target vehicle.
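The two-stage grouping of radar parameter points can be sketched as below. The patent suggests k-means; as a self-contained stand-in this sketch uses a simple single-linkage pass, and the velocity and distance tolerances are assumed values.

```python
# Each parameter point is (x, y, relative_velocity). Points join an existing
# cluster only if some member is both velocity-similar (within v_tol) and
# spatially close (within d_tol); otherwise they start a new cluster. Each
# resulting cluster approximates the contour of one rear target vehicle.

def cluster_radar_points(points, v_tol=0.5, d_tol=3.0):
    clusters = []
    for p in points:
        placed = False
        for cl in clusters:
            if any(abs(p[2] - q[2]) <= v_tol and
                   ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 <= d_tol
                   for q in cl):
                cl.append(p)
                placed = True
                break
        if not placed:
            clusters.append([p])
    return clusters
```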
In summary, the vehicle travel automatic control method of the embodiments of the present invention accurately identifies the front and rear target vehicles, providing a guarantee for performing cruise control on the subject vehicle according to the front and rear target vehicles and thereby ensuring traffic safety.
Based on the above embodiments, to illustrate more clearly how, in step S111, cruise control is performed on the kinematic parameters of the subject vehicle according to the kinematic parameters and turn signals of the front target vehicles and the rear target vehicles, the following describes, with a specific application scenario, how this embodiment identifies and monitors the continuous process of a front ego-lane target vehicle from switching on its turn signal to completing a lane change into the non-ego lane.
Specifically, by identifying the front ego-lane target vehicle according to the vehicle identification range marked with the front ego-lane label, identifying the lane-changing target vehicle ahead according to the pairwise-combined front vehicle identification ranges, and identifying the turn signal of the corresponding target vehicle according to the lamp identification region, the continuous process of the front ego-lane target vehicle from switching on its turn signal to completing the lane change into the non-ego lane can be identified and monitored. Kinematic parameters such as the duration of the continuous lane change, the distance and relative velocity with respect to the subject vehicle, and the lateral displacement are also easily monitored, so that the kinematic parameters of the target vehicle can be used to control the subject vehicle.
When the right turn signal of the front ego-lane target vehicle is recognized as lit, the pixel distance from the left object boundary of the target vehicle to the left lane line of the front ego lane is converted through the camera projection relationship into a lateral distance P. By continuously acquiring N first images and second images at different moments (the time to acquire one first image or second image being T), the change in the distance R of the target vehicle is identified and recorded during this period, and the relative velocity V of the target vehicle can be calculated from the change of R with respect to T.
When it is recognized that the target vehicle has just completed the lane change from the front ego lane into the non-ego lane on its right, the left object boundary of the target vehicle coincides with the right lane line of the front ego lane. With the ego-lane width being D, the kinematic parameters of the target vehicle during the continuous lane change are therefore a duration of N × T, a distance R from the subject vehicle, a relative velocity V, and a lateral displacement of (D - P).
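The kinematic parameters enumerated above follow directly from the sampled quantities; a worked sketch, with illustrative numbers rather than values from the patent:

```python
# N frame pairs sampled at period T give N+1 range readings R; P is the
# initial camera-converted lateral distance to the left ego-lane line and
# D the ego-lane width. Duration = N*T, V = change of R over the window,
# lateral displacement = D - P (left boundary ends on the far lane line).

def lane_change_parameters(distances, T, P, D):
    """distances: N+1 range samples (meters) at interval T (seconds).
    Returns (duration, final_range, relative_velocity, lateral_disp)."""
    n = len(distances) - 1
    duration = n * T
    V = (distances[-1] - distances[0]) / duration
    lateral = D - P
    return duration, distances[-1], V, lateral
```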
It should be emphasized that the above identification of lateral displacement is referenced to the left and right lane lines of the ego lane; it is accurate whether the target vehicle changes lane on a straight road or on a curve, and whether it changes lane to the left or to the right, thereby providing an accurate control basis for the adaptive cruise system of the subject vehicle.
Furthermore, the lateral displacement of a target vehicle identified by a conventional vehicle adaptive cruise system that relies only on millimeter-wave radar is referenced to the subject vehicle, and a lateral displacement referenced to the subject vehicle sometimes cannot provide an accurate motion control basis for the adaptive cruise system.
Fig. 6 is a schematic scenario diagram of a vehicle travel automatic control method according to an embodiment of the present invention.
As shown in Fig. 6, when the front ego-lane target vehicle completes a lane change to the right from the ego lane exactly where the road enters a curve bending to the left, the millimeter-wave radar of a conventional vehicle still located on the straight section may still identify part of that front target vehicle as being in the ego lane. With the radius of curvature of the curve being 250 meters and the front target vehicle having traveled 25 meters along the curve during the lane change, the ego-lane right lane line with which the left object boundary of the front target vehicle coincides is, at the 25-meter point of the curve, offset to the left of the straight-line extension of that lane line by about 250 × (1 - cos(25/250)) ≈ 1.25 meters.
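The roughly 1.25-meter offset used in this example is the sagitta of the traveled arc; a quick check of the geometry:

```python
import math

# Lateral deviation of a circular arc from the straight extension of its
# entry tangent: offset = R * (1 - cos(L / R)) for arc length L on radius R.
# With R = 250 m and L = 25 m this reproduces the ~1.25 m figure.

def curve_offset(radius, arc_length):
    """Sagitta of an arc of the given length on a circle of the given radius."""
    return radius * (1.0 - math.cos(arc_length / radius))
```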
If at this moment the millimeter-wave radar of the conventional vehicle recognizes the target vehicle at a distance of 50 to 80 meters, i.e., the radar is on the straight section still 25 to 55 meters from the curve entrance, then, lacking prior knowledge of the curve, the radar will recognize the front target vehicle as still having about 1.25 meters of body width in the ego lane; and as the target vehicle continues decelerating along the left-bending curve, the radar recognizes an even greater body width in the ego lane. That is, the millimeter-wave radar of the conventional vehicle produces inaccurate identification, causing the adaptive cruise system of the conventional vehicle to execute continuous, inaccurate, and unnecessary braking, which increases the risk of a rear-end collision between the conventional vehicle and its rear target vehicle.
Similarly, the identification by the millimeter-wave radar of the conventional vehicle of the above ego-lane target vehicle completing a lane change to the left from the ego lane on a right-bending curve is also inaccurate.
To resolve this inaccuracy, the millimeter-wave radar of the conventional vehicle would either have to be supplemented by a camera to help identify lane lines, or its azimuth identification accuracy would have to be increased; in either case, system complexity and cost would rise.
Therefore, according to the above example, by identifying the kinematic parameters of the target vehicle and the corresponding turn signal according to the present invention, the operating condition in which an ego-lane target vehicle decelerates and changes lane into the subject vehicle's non-ego lane can be recognized, so that the kinematic parameter control system of the subject vehicle can reduce unnecessary braking adjustments, thereby reducing the rear-end collision risk caused by unnecessary braking adjustments of the subject vehicle.
Similarly, according to the above example, the present invention can also identify and monitor the continuous process of a non-ego-lane target vehicle from switching on its turn signal to completing a lane change into the ego lane. Kinematic parameters of the front target vehicle during the continuous lane change, such as the duration, the distance and relative velocity with respect to the subject vehicle, and the lateral displacement, are also easily monitored, so that these kinematic parameters can be used to adjust the kinematic parameters of the subject vehicle, braking earlier to improve driving safety and at the same time controlling the vehicle lamps to warn rear target vehicles earlier and reduce the rear-end collision risk.
Fig. 7 is a schematic scenario diagram of a vehicle travel automatic control method according to another embodiment of the present invention.
As shown in Fig. 7, the subject vehicle travels at constant speed on a straight section of the ego lane and is still 55 meters (or as little as 25 meters) from the entrance of a curve that bends to the right with a radius of curvature of 250 meters. At 25 meters beyond the curve entrance, a front target vehicle in the non-ego lane to the right of the ego lane is flashing its left turn signal to change into the ego lane, and the left object boundary of that target vehicle coincides with the right lane line of the ego lane.
According to the above example, the present invention can accurately identify that the front target vehicle is changing into the ego lane. Since the target vehicle is about 80 meters (or as little as 50 meters) from the subject vehicle, the present invention can control the powertrain of the subject vehicle to reduce power output and even brake, lighting the brake lamps in time, so as to maintain safe distances between the subject vehicle and the front and rear target vehicles, thereby improving the driving safety of the subject vehicle and reducing the rear-end collision risk.
However, the lateral displacement of a target vehicle identified by a conventional vehicle adaptive cruise system relying only on millimeter-wave radar is referenced to the subject vehicle. Lacking prior knowledge of the curve, such a system will identify the front target vehicle as still lying about 250 × (1 - cos(25/250)) ≈ 1.25 meters laterally from the straight-line extension of the ego-lane right lane line; that is, it mistakenly determines that the front target vehicle must still undergo about 1.25 meters of leftward lateral displacement before the millimeter-wave radar can confirm that the front target vehicle has begun to enter the ego lane.
If the lateral displacement velocity of the front target vehicle is 1 meter per second, the conventional adaptive cruise system relying only on millimeter-wave radar will not execute the power-reduction or braking action until about 1.25 seconds after the front target vehicle has actually entered the ego lane. This undoubtedly reduces the safe distances between the subject vehicle and the front and rear target vehicles, degrading the driving safety of the subject vehicle and increasing the rear-end collision risk.
Therefore, according to the above example, by identifying the kinematic parameters of the target vehicle and the corresponding turn signal, the operating condition in which a non-ego-lane target vehicle decelerates and changes lane into the subject vehicle's ego lane can be recognized, so that the kinematic parameter control system and safety systems of the subject vehicle can adjust earlier, improving the driving safety of the subject vehicle and its occupants, and so that the lamp system of the subject vehicle can be adjusted earlier to alert rear target vehicles, giving them more braking or adjustment time and more effectively reducing the rear-end collision risk.
In summary, the vehicle travel automatic control method of the embodiments of the present invention improves the driving safety of the subject vehicle and its occupants, allows the lamp system of the subject vehicle to be adjusted earlier to alert rear target vehicles, gives rear target vehicles more braking or adjustment time, and more effectively reduces the rear-end collision risk.
To achieve the above objects, the present invention further proposes a vehicle travel automatic control device. Fig. 8 is a structural schematic diagram of the vehicle travel automatic control device according to a first embodiment of the present invention. As shown in Fig. 8, the device includes: a first acquisition module 1010, a second acquisition module 1020, a third acquisition module 1030, a first generation module 1040, a first identification module 1050, a second generation module 1060, a fourth acquisition module 1070, a second identification module 1080, and a control module 1090.
The first acquisition module 1010 is configured to obtain a first image and a second image of the environment in front of the subject vehicle from a front-facing 3D camera, wherein the first image is a color image or a luminance image and the second image is a depth image.
In one embodiment of the present invention, the first acquisition module 1010 obtains the first image of the environment in front of the subject vehicle from the image sensor of the front-facing 3D camera.
In one embodiment of the present invention, the first acquisition module 1010 obtains the second image of the environment in front of the subject vehicle from the time-of-flight sensor of the front-facing 3D camera.
The second acquisition module 1020 is configured to obtain the front road lane lines according to the first image.
In one embodiment of the present invention, when the first image is a luminance image, the second acquisition module 1020 identifies the front road lane lines according to the luminance difference between the front road lane lines and the road surface in the first image.
In one embodiment of the present invention, when the first image is a color image, the second acquisition module 1020 converts the color image into a luminance image and identifies the front road lane lines according to the luminance difference between the front road lane lines and the road surface in the first image.
The third acquisition module 1030 is configured to obtain the third image and the rear road lane lines according to the imaging parameters of the first image and the front road lane lines.
The first generation module 1040 is configured to map the front road lane lines into the second image according to the interleaved mapping relationship between the first image and the second image to generate multiple front vehicle identification ranges.
The first identification module 1050 is configured to identify the front target vehicles according to all front vehicle identification ranges.
The second generation module 1060 is configured to generate multiple rear vehicle identification ranges according to the third image and the rear road lane lines.
The fourth acquisition module 1070 is configured to obtain the rear target vehicle parameter group from the millimeter-wave radar and project the rear target vehicle parameter group into the third image according to the installation parameters of the millimeter-wave radar to obtain multiple rear target vehicle parameter points.
The second identification module 1080 is configured to identify the rear target vehicles according to all rear vehicle identification ranges and the multiple rear target vehicle parameter points.
The control module 1090 is configured to perform cruise control on the kinematic parameters of the subject vehicle according to the kinematic parameters of the front target vehicles and the rear target vehicles.
In one embodiment of the present invention, the control module 1090 recognizes, according to the kinematic parameters and turn signal of a front target vehicle, the operating condition in which a front non-ego-lane target vehicle decelerates and changes lane into the ego lane, so that the kinematic parameter control system of the subject vehicle performs braking adjustment in advance and the lamp system of the subject vehicle alerts rear target vehicles.
In one embodiment of the present invention, Fig. 9 is a structural schematic diagram of a vehicle travel automatic control device according to a second embodiment of the present invention. As shown in Fig. 9, on the basis of the embodiment shown in Fig. 8, the vehicle travel automatic control device further includes a third generation module 1100 and a third identification module 1110.
The third generation module 1100 is configured to generate a front target vehicle range according to a front target vehicle, and to map the front target vehicle range into the first image according to the interleaved mapping relationship between the first image and the second image to generate a front lamp identification region.
In one embodiment of the present invention, the third generation module 1100 generates the front target vehicle range from the enclosed region surrounded by the object boundary of the front target vehicle.
In one embodiment of the present invention, the third generation module 1100 generates the front target vehicle range from the enclosed region surrounded by the extension of the object boundary of the front target vehicle.
In one embodiment of the present invention, the third generation module 1100 generates the front target vehicle range from the enclosed region surrounded by lines connecting multiple pixel positions of the front target vehicle.
The third identification module 1110 is configured to identify the turn signal of the corresponding front target vehicle according to the front lamp identification region.
In one embodiment of the present invention, the third identification module 1110 identifies the turn signal of the corresponding front target vehicle according to the color, flicker frequency, or flashing sequence of the tail lamps in the front lamp identification region. In this embodiment, the control module 1090 recognizes, according to the kinematic parameters and turn signal of a front target vehicle, the operating condition in which a front ego-lane target vehicle decelerates and changes lane into the front non-ego lane, so that the kinematic parameter control system of the subject vehicle makes adjustments without braking.
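Identification by flicker frequency, one of the cues named above, can be sketched as follows. The frame rate, the brightness threshold, and the 1 to 2.5 Hz acceptance band are illustrative assumptions, not values from the patent.

```python
# Sample the mean brightness of the lamp identification region across
# frames, binarize it, and count on/off transitions; two transitions make
# one blink cycle, so the blink rate in Hz is transitions/2 divided by the
# observation time (len-1 intervals at the given frame rate).

def is_turn_signal(brightness, fps, on_threshold, min_hz=1.0, max_hz=2.5):
    states = [b >= on_threshold for b in brightness]
    transitions = sum(1 for a, b in zip(states, states[1:]) if a != b)
    hz = transitions / 2.0 * fps / max(len(brightness) - 1, 1)
    return min_hz <= hz <= max_hz
```

A steadily lit region (brake lamp or reflection) produces no transitions and is rejected, which is the point of the frequency test.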
It should be noted that the foregoing description of the vehicle travel automatic control method also applies to the vehicle travel automatic control device of the embodiments of the present invention; the implementation principle is similar and is not repeated here.
In summary, the vehicle travel automatic control device of the embodiments of the present invention obtains the first image and the second image of the environment in front of the subject vehicle from the front-facing 3D camera, obtains the front road lane lines, and obtains the third image and the rear road lane lines according to the imaging parameters of the first image and the front road lane lines; it maps the front road lane lines into the second image according to the interleaved mapping relationship between the first image and the second image to generate multiple front vehicle identification ranges, identifies the front target vehicles according to the front vehicle identification ranges, generates multiple rear vehicle identification ranges according to the third image and the rear road lane lines, obtains the rear target vehicle parameter group from the millimeter-wave radar, and projects the rear target vehicle parameter group into the third image according to the installation parameters of the millimeter-wave radar to obtain multiple rear target vehicle parameter points, so as to identify the rear target vehicles; finally, it performs cruise control on the kinematic parameters of the subject vehicle according to the kinematic parameters of the front target vehicles and the rear target vehicles. The road lane lines are thereby accurately identified, the road environment information of the subject vehicle is known by combining the data obtained by the millimeter-wave radar, and cruise control is performed on the subject vehicle according to the specific road environment information, ensuring traffic safety.
Figure 10 is a structural schematic diagram of a vehicle travel automatic control device according to a third embodiment of the present invention. As shown in Fig. 10, on the basis of the embodiment shown in Fig. 9, the second acquisition module 1020 includes a creating unit 1021, a first detection unit 1022, and a second detection unit 1023.
The creating unit 1021 is configured to create a binary image of the front road lane lines according to the luminance information of the first image and a preset luminance threshold.
The first detection unit 1022 is configured to detect, in the binary image according to a preset detection algorithm, all edge pixel positions of a straight-road solid lane line or all edge pixel positions of a curved-road solid lane line.
The second detection unit 1023 is configured to detect, in the binary image according to a preset detection algorithm, all edge pixel positions of a straight-road dashed lane line or all edge pixel positions of a curved-road dashed lane line.
It should be noted that the foregoing description of the vehicle travel automatic control method also applies to the vehicle travel automatic control device of the embodiments of the present invention; the implementation principle is similar and is not repeated here.
In summary, the vehicle travel automatic control device of the embodiments of the present invention creates a binary image of the front road lane lines according to the luminance information of the first image and the preset luminance threshold, detects in the binary image, according to the preset detection algorithm, all edge pixel positions of straight-road solid lane lines or of curved-road solid lane lines, and likewise detects all edge pixel positions of straight-road dashed lane lines or of curved-road dashed lane lines. The dashed and solid lane lines of straight roads and curves among the road lane lines can thereby be accurately recognized.
Figure 11 is a structural schematic diagram of a vehicle travel automatic control device according to a fourth embodiment of the present invention. As shown in Fig. 11, on the basis of the embodiment shown in Fig. 9, the third acquisition module 1030 includes a projecting unit 1031 and an acquiring unit 1032.
The projecting unit 1031 is configured to project all pixel positions of the front road lane lines into the physical-world coordinate system of the subject vehicle according to the imaging parameters of the first image to establish the third image.
The acquiring unit 1032 is configured to obtain the positions of the rear road lane lines by accumulating the positions of the front road lane lines in the third image over continuous time and displacing them relative to the origin of the subject vehicle's physical-world coordinate system.
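One possible reading of the acquiring unit, sketched under the simplifying assumption of straight-line ego motion: lane-line points observed ahead are shifted backward by the accumulated ego displacement until they lie behind the coordinate origin. The names and the displacement model are my assumptions, not the patent's.

```python
# Points are (x_lateral, y_forward) in the subject vehicle's physical-world
# frame; subtracting the distance traveled moves previously observed
# front lane-line samples toward (and past) the origin, where a negative
# y_forward means the sample now describes the road behind the vehicle.

def accumulate_rear_lines(front_points, ego_travel):
    """Return the lane-line samples now behind the coordinate origin."""
    shifted = [(x, y - ego_travel) for x, y in front_points]
    return [p for p in shifted if p[1] < 0.0]
```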
It should be noted that the foregoing description of the vehicle travel automatic control method also applies to the vehicle travel automatic control device of the embodiments of the present invention; the implementation principle is similar and is not repeated here.
In summary, the vehicle travel automatic control device of the embodiments of the present invention projects all pixel positions of the front road lane lines into the subject vehicle's physical-world coordinate system according to the imaging parameters of the first image to establish the third image, and obtains the positions of the rear road lane lines by accumulating the positions of the front road lane lines in the third image over continuous time and displacing them relative to the origin of the coordinate system. The positions of the rear lane lines are thus accurately known, which facilitates controlling the vehicle according to the positions of the rear lane lines and lays a foundation for ensuring driving safety.
Figure 12 is a structural schematic diagram of a vehicle travel automatic control device according to a fifth embodiment of the present invention. As shown in Fig. 12, on the basis of the embodiment shown in Fig. 9, the first identification module 1050 includes a first marking unit 1051, a first recognition unit 1052, a second recognition unit 1053, and a third recognition unit 1054.
The first marking unit 1051 is configured to mark all front vehicle identification ranges with the front ego-lane label or the front non-ego-lane label.
The first recognition unit 1052 is configured to identify the front ego-lane target vehicle according to the vehicle identification range marked with the front ego-lane label.
The second recognition unit 1053 is configured to identify the front non-ego-lane target vehicle according to the vehicle identification ranges marked with the front non-ego-lane label.
The third recognition unit 1054 is configured to identify the lane-changing target vehicle ahead according to the pairwise-combined front vehicle identification ranges.
Figure 13 is a structural schematic diagram of a vehicle travel automatic control device according to a sixth embodiment of the present invention. As shown in Figure 13, on the basis of the embodiment shown in Fig. 8, the second identification module 1080 includes a second marking unit 1081, a fourth recognition unit 1082, a fifth recognition unit 1083 and a sixth recognition unit 1084.
The second marking unit 1081 is configured to mark all rear-vehicle identification ranges with a rear own-lane label or a rear non-own-lane label.
The fourth recognition unit 1082 is configured to identify rear own-lane target vehicles according to the vehicle identification ranges marked with the rear own-lane label and the rear target vehicle parameter points.
The fifth recognition unit 1083 is configured to identify rear non-own-lane target vehicles according to the vehicle identification ranges marked with the rear non-own-lane label and the rear target vehicle parameter points.
The sixth recognition unit 1084 is configured to identify rear lane-changing target vehicles according to pairwise combinations of the rear-vehicle identification ranges and the rear target vehicle parameter points.
It should be noted that the foregoing description of the vehicle travel automatic control method also applies to the vehicle travel automatic control device of the embodiments of the present invention; the implementation principle is similar and is not repeated here.
In summary, the vehicle travel automatic control device of the embodiment of the present invention accurately identifies the front and rear target vehicles, so that cruise control of the host vehicle can be performed according to the front target vehicles and the rear target vehicles, providing a guarantee for driving safety.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the features of the different embodiments or examples described in this specification, provided they do not contradict each other.
Although the embodiments of the present invention have been shown and described above, it is to be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention; those skilled in the art may change, modify, replace, and vary the above embodiments within the scope of the present invention.
Claims (24)
1. A vehicle travel automatic control method, comprising:
obtaining a first image and a second image of the environment in front of a host vehicle from a front-mounted 3D camera, wherein the first image is a colour or luminance image and the second image is a depth image;
obtaining front lane lines according to the first image;
obtaining a third image and rear lane lines according to imaging parameters of the first image and the front lane lines;
mapping the front lane lines into the second image according to an interleaved mapping relationship between the first image and the second image to generate a plurality of front-vehicle identification ranges;
identifying front target vehicles according to all the front-vehicle identification ranges;
generating a plurality of rear-vehicle identification ranges according to the third image and the rear lane lines;
obtaining a rear target vehicle parameter group from a millimeter-wave radar, and projecting the rear target vehicle parameter group into the third image according to installation parameters of the millimeter-wave radar to obtain a plurality of rear target vehicle parameter points;
identifying rear target vehicles according to all the rear-vehicle identification ranges and the plurality of rear target vehicle parameter points;
performing cruise control on motion parameters of the host vehicle according to motion parameters of the front target vehicles and the rear target vehicles.
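One way to picture the range-generation step of claim 1 is a sketch that spans identification boxes between a matched pair of lane lines in the second (depth) image. This is an illustrative assumption, not the patent's method: the axis-aligned box geometry and the `box_h` height are invented for the example.

```python
def lane_ranges(left_lane, right_lane, box_h=40):
    """Generate vehicle identification ranges between a pair of lane
    lines. left_lane and right_lane are lists of (u, v) pixel positions
    sampled at the same image rows; each returned range is a box
    (u_min, u_max, v_min, v_max) spanning the lane at that row."""
    ranges = []
    for (u_left, v), (u_right, _) in zip(left_lane, right_lane):
        ranges.append((u_left, u_right, v - box_h, v))
    return ranges
```

A vehicle detector could then be run only inside these boxes rather than over the whole depth image, which is what restricting identification to "identification ranges" buys.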
2. The method according to claim 1, wherein obtaining the first image and the second image of the environment in front of the host vehicle from the front-mounted 3D camera comprises:
obtaining the first image of the environment in front of the host vehicle from an image sensor of the front-mounted 3D camera;
obtaining the second image of the environment in front of the host vehicle from a time-of-flight sensor of the front-mounted 3D camera.
3. The method according to claim 1, wherein obtaining the front lane lines according to the first image comprises:
when the first image is a luminance image, identifying the front lane lines according to the luminance difference between the front lane lines and the road surface in the first image; or,
when the first image is a colour image, converting the colour image into a luminance image, and identifying the front lane lines according to the luminance difference between the front lane lines and the road surface in the first image.
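The colour-to-luminance conversion of claim 3 is left unspecified by the patent; a common choice, shown here purely as an assumption, is the BT.601 weighting of the R, G, B channels.

```python
def to_luminance(rgb_img):
    """Convert a colour image, given as rows of (R, G, B) tuples, into a
    luminance image using the BT.601 weights (an assumed choice; the
    patent does not name the conversion)."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_img]
```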
4. The method according to claim 3, wherein identifying the front lane lines according to the luminance difference between the front lane lines and the road surface in the first image comprises:
creating a binary image of the front lane lines according to luminance information of the first image and a preset luminance threshold;
detecting, in the binary image according to a preset detection algorithm, all edge pixel positions of a straight solid lane line, or detecting all edge pixel positions of a curved solid lane line;
detecting, in the binary image according to the preset detection algorithm, all edge pixel positions of a straight dashed lane line, or detecting all edge pixel positions of a curved dashed lane line.
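The thresholding and edge-pixel extraction of claim 4 can be sketched as follows. The patent does not name its "preset detection algorithm"; this row-wise transition scan is a stand-in assumption, and the threshold value is illustrative.

```python
def lane_edges(lum_img, threshold=128):
    """Binarise a luminance image (rows of pixel values) against a preset
    threshold, then collect edge pixel positions (u, v) where each row
    transitions between road (0) and lane marking (1). A simplified
    stand-in for the unnamed detection algorithm of claim 4."""
    binary = [[1 if px >= threshold else 0 for px in row] for px_row_i, row in enumerate(lum_img)]
    edges = []
    for v, row in enumerate(binary):
        for u in range(1, len(row)):
            if row[u] != row[u - 1]:      # 0->1 or 1->0 transition
                edges.append((u, v))
    return binary, edges
```

A real implementation would additionally fit solid vs. dashed and straight vs. curved models to the collected edge pixels, which is the distinction the claim draws.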
5. The method according to claim 1, wherein obtaining the third image and the rear lane lines according to the imaging parameters of the first image and the front lane lines comprises:
projecting all pixel positions of the front lane lines into a host-vehicle physical-world coordinate system according to the imaging parameters of the first image to establish the third image;
obtaining the position of the rear lane lines by accumulating the positions of the front lane lines in the third image over continuous time together with the displacement relative to the origin of the host-vehicle physical-world coordinate system.
6. The method according to claim 1, wherein identifying the front target vehicles according to all the front-vehicle identification ranges comprises:
marking all the front-vehicle identification ranges with a front own-lane label or a front non-own-lane label;
identifying front own-lane target vehicles according to the vehicle identification ranges marked with the front own-lane label;
identifying front non-own-lane target vehicles according to the vehicle identification ranges marked with the front non-own-lane label;
identifying front lane-changing target vehicles according to pairwise combinations of the front-vehicle identification ranges.
7. The method according to claim 1, wherein identifying the rear target vehicles according to all the rear-vehicle identification ranges and the plurality of rear target vehicle parameter points comprises:
marking all the rear-vehicle identification ranges with a rear own-lane label or a rear non-own-lane label;
identifying rear own-lane target vehicles according to the vehicle identification ranges marked with the rear own-lane label and the rear target vehicle parameter points;
identifying rear non-own-lane target vehicles according to the vehicle identification ranges marked with the rear non-own-lane label and the rear target vehicle parameter points;
identifying rear lane-changing target vehicles according to pairwise combinations of the rear-vehicle identification ranges and the rear target vehicle parameter points.
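The core of claim 7 is a matching test: a projected radar parameter point confirms a rear target when it falls inside a rear-vehicle identification range. A minimal sketch, assuming each range is an axis-aligned box in the third image:

```python
def match_rear_targets(ranges, radar_points):
    """Match projected radar parameter points to rear-vehicle
    identification ranges. Each range is (u_min, u_max, v_min, v_max);
    each radar point is (u, v). Returns (range_index, point_index)
    pairs for every point lying inside a range."""
    hits = []
    for i, (u0, u1, v0, v1) in enumerate(ranges):
        for j, (u, v) in enumerate(radar_points):
            if u0 <= u <= u1 and v0 <= v <= v1:
                hits.append((i, j))
    return hits
```

Whether a matched range carries the rear own-lane or rear non-own-lane label then decides which category of rear target vehicle has been identified.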
8. The method according to claim 1, further comprising:
generating a front target vehicle range according to the front target vehicle, and mapping the front target vehicle range into the first image according to the interleaved mapping relationship between the first image and the second image to generate a front lamp identification region;
identifying the turn signal of the corresponding front target vehicle according to the front lamp identification region;
wherein performing cruise control on the motion parameters of the host vehicle according to the motion parameters of the front target vehicles and the rear target vehicles comprises:
performing cruise control on the motion parameters of the host vehicle according to the motion parameters and turn signals of the front target vehicles and the rear target vehicles.
9. The method according to claim 8, wherein generating the front target vehicle range according to the front target vehicle comprises:
detecting and identifying the object boundary of the front target vehicle using a boundary detection method of an image processing algorithm.
10. The method according to claim 8, wherein generating the front target vehicle range according to the front target vehicle comprises:
generating the front target vehicle range according to the enclosed region surrounded by the object boundary of the front target vehicle; or,
generating the front target vehicle range according to an extension of the enclosed region surrounded by the object boundary of the front target vehicle; or,
generating the front target vehicle range according to the enclosed region surrounded by lines connecting a plurality of pixel positions of the front target vehicle.
11. The method according to claim 8, wherein identifying the turn signal of the corresponding front target vehicle according to the front lamp identification region comprises:
identifying the turn signal of the corresponding front target vehicle according to the colour, blinking frequency, or blinking sequence of the tail lamps in the front lamp identification region.
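Blinking-frequency identification, one of the cues named in claim 11, can be sketched from a brightness time series of the lamp region. The threshold and the 1-2.5 Hz acceptance band are assumptions (regulatory turn indicators blink at roughly 1-2 Hz); the patent states no specific values.

```python
def is_turn_signal(samples, fps=30.0, lo=1.0, hi=2.5):
    """Decide whether a tail-lamp brightness sequence blinks like a turn
    indicator: threshold the signal, count on/off transitions, and check
    that the implied blink frequency lies in an assumed indicator band."""
    on = [s > 0.5 for s in samples]
    toggles = sum(1 for a, b in zip(on, on[1:]) if a != b)
    duration = len(samples) / fps          # seconds of observation
    freq = toggles / (2.0 * duration)      # two toggles per blink cycle
    return lo <= freq <= hi
```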
12. The method according to claim 8, wherein performing cruise control on the motion parameters of the host vehicle according to the motion parameters and turn signals of the front target vehicles and the rear target vehicles comprises:
when it is recognized, according to the motion parameters and turn signal of a front target vehicle, that a front non-own-lane target vehicle is decelerating and changing lanes into the own lane, causing the motion-parameter control system of the host vehicle to perform braking adjustment in advance, and causing the lamp system of the host vehicle to alert the rear target vehicle; or,
when it is recognized, according to the motion parameters and turn signal of a front target vehicle, that a front own-lane target vehicle is decelerating and changing lanes into a front non-own lane, causing the motion-parameter control system of the host vehicle to perform no braking adjustment.
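The two operating modes of claim 12 reduce to a small decision table. The function and return-key names below are illustrative, not from the patent:

```python
def cruise_decision(cut_in_to_own_lane, leaves_own_lane):
    """Claim-12 decision logic: pre-brake and warn the rear target
    vehicle when a front non-own-lane vehicle decelerates and cuts into
    the own lane; skip braking when the front own-lane vehicle is
    instead leaving for a non-own lane. Naming is hypothetical."""
    if cut_in_to_own_lane:
        return {"brake_early": True, "warn_rear": True}
    # a vehicle leaving the own lane (or no lane change at all)
    # requires no braking adjustment
    return {"brake_early": False, "warn_rear": False}
```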
13. A vehicle travel automatic control device, comprising:
a first acquisition module, configured to obtain a first image and a second image of the environment in front of a host vehicle from a front-mounted 3D camera, wherein the first image is a colour or luminance image and the second image is a depth image;
a second acquisition module, configured to obtain front lane lines according to the first image;
a third acquisition module, configured to obtain a third image and rear lane lines according to imaging parameters of the first image and the front lane lines;
a first generation module, configured to map the front lane lines into the second image according to an interleaved mapping relationship between the first image and the second image to generate a plurality of front-vehicle identification ranges;
a first identification module, configured to identify front target vehicles according to all the front-vehicle identification ranges;
a second generation module, configured to generate a plurality of rear-vehicle identification ranges according to the third image and the rear lane lines;
a fourth acquisition module, configured to obtain a rear target vehicle parameter group from a millimeter-wave radar, and project the rear target vehicle parameter group into the third image according to installation parameters of the millimeter-wave radar to obtain a plurality of rear target vehicle parameter points;
a second identification module, configured to identify rear target vehicles according to all the rear-vehicle identification ranges and the plurality of rear target vehicle parameter points;
a control module, configured to perform cruise control on motion parameters of the host vehicle according to motion parameters of the front target vehicles and the rear target vehicles.
14. The device according to claim 13, wherein the first acquisition module is configured to:
obtain the first image of the environment in front of the host vehicle from an image sensor of the front-mounted 3D camera;
obtain the second image of the environment in front of the host vehicle from a time-of-flight sensor of the front-mounted 3D camera.
15. The device according to claim 13, wherein the second acquisition module is configured to:
when the first image is a luminance image, identify the front lane lines according to the luminance difference between the front lane lines and the road surface in the first image; or,
when the first image is a colour image, convert the colour image into a luminance image, and identify the front lane lines according to the luminance difference between the front lane lines and the road surface in the first image.
16. The device according to claim 15, wherein the second acquisition module comprises:
a creating unit, configured to create a binary image of the front lane lines according to luminance information of the first image and a preset luminance threshold;
a first detection unit, configured to detect, in the binary image according to a preset detection algorithm, all edge pixel positions of a straight solid lane line, or detect all edge pixel positions of a curved solid lane line;
a second detection unit, configured to detect, in the binary image according to the preset detection algorithm, all edge pixel positions of a straight dashed lane line, or detect all edge pixel positions of a curved dashed lane line.
17. The device according to claim 13, wherein the third acquisition module comprises:
a projection unit, configured to project all pixel positions of the front lane lines into a host-vehicle physical-world coordinate system according to the imaging parameters of the first image to establish the third image;
an acquisition unit, configured to obtain the position of the rear lane lines by accumulating the positions of the front lane lines in the third image over continuous time together with the displacement relative to the origin of the host-vehicle physical-world coordinate system.
18. The device according to claim 13, wherein the first identification module comprises:
a first marking unit, configured to mark all the front-vehicle identification ranges with a front own-lane label or a front non-own-lane label;
a first recognition unit, configured to identify front own-lane target vehicles according to the vehicle identification ranges marked with the front own-lane label;
a second recognition unit, configured to identify front non-own-lane target vehicles according to the vehicle identification ranges marked with the front non-own-lane label;
a third recognition unit, configured to identify front lane-changing target vehicles according to pairwise combinations of the front-vehicle identification ranges.
19. The device according to claim 13, wherein the second identification module comprises:
a second marking unit, configured to mark all the rear-vehicle identification ranges with a rear own-lane label or a rear non-own-lane label;
a fourth recognition unit, configured to identify rear own-lane target vehicles according to the vehicle identification ranges marked with the rear own-lane label and the rear target vehicle parameter points;
a fifth recognition unit, configured to identify rear non-own-lane target vehicles according to the vehicle identification ranges marked with the rear non-own-lane label and the rear target vehicle parameter points;
a sixth recognition unit, configured to identify rear lane-changing target vehicles according to pairwise combinations of the rear-vehicle identification ranges and the rear target vehicle parameter points.
20. The device according to claim 13, further comprising:
a third generation module, configured to generate a front target vehicle range according to the front target vehicle, and map the front target vehicle range into the first image according to the interleaved mapping relationship between the first image and the second image to generate a front lamp identification region;
a third identification module, configured to identify the turn signal of the corresponding front target vehicle according to the front lamp identification region;
wherein the control module is further configured to:
perform cruise control on the motion parameters of the host vehicle according to the motion parameters and turn signals of the front target vehicles and the rear target vehicles.
21. The device according to claim 20, wherein the third generation module is configured to:
detect and identify the object boundary of the front target vehicle using a boundary detection method of an image processing algorithm.
22. The device according to claim 21, wherein the third generation module is configured to:
generate the front target vehicle range according to the enclosed region surrounded by the object boundary of the front target vehicle; or,
generate the front target vehicle range according to an extension of the enclosed region surrounded by the object boundary of the front target vehicle; or,
generate the front target vehicle range according to the enclosed region surrounded by lines connecting a plurality of pixel positions of the front target vehicle.
23. The device according to claim 21, wherein the third identification module is configured to:
identify the turn signal of the corresponding front target vehicle according to the colour, blinking frequency, or blinking sequence of the tail lamps in the front lamp identification region.
24. The device according to claim 20, wherein the control module is configured to:
when it is recognized, according to the motion parameters and turn signal of a front target vehicle, that a front non-own-lane target vehicle is decelerating and changing lanes into the own lane, cause the motion-parameter control system of the host vehicle to perform braking adjustment in advance, and cause the lamp system of the host vehicle to alert the rear target vehicle; or,
when it is recognized, according to the motion parameters and turn signal of a front target vehicle, that a front own-lane target vehicle is decelerating and changing lanes into a front non-own lane, cause the motion-parameter control system of the host vehicle to perform no braking adjustment.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710120749.3A CN108528450B (en) | 2017-03-02 | 2017-03-02 | Automatic control method and device for vehicle running |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108528450A true CN108528450A (en) | 2018-09-14 |
CN108528450B CN108528450B (en) | 2020-06-19 |
Family
ID=63489302
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710120749.3A Active CN108528450B (en) | 2017-03-02 | 2017-03-02 | Automatic control method and device for vehicle running |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108528450B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110682917A (en) * | 2019-09-05 | 2020-01-14 | 成都亿盟恒信科技有限公司 | Vehicle positioning drift calibration system and method based on video intelligent analysis |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN204309672U (en) * | 2014-12-23 | 2015-05-06 | 杭州好好开车科技有限公司 | Rear-end collision pre-warning device based on image recognition |
CN104952254A (en) * | 2014-03-31 | 2015-09-30 | 比亚迪股份有限公司 | Vehicle identification method and device and vehicle |
US20160052515A1 (en) * | 2014-08-21 | 2016-02-25 | Hyundai Motor Company | Method and apparatus of predicting collision for omnidirectional application within emergency brake system |
CN105528593A (en) * | 2016-01-22 | 2016-04-27 | 江苏大学 | Forward vehicle driver driving behavior prediction system and prediction method |
CN205573939U (en) * | 2016-01-22 | 2016-09-14 | 江苏大学 | Forward collision avoidance system based on forward vehicle driver behavior |
CN106463064A (en) * | 2014-06-19 | 2017-02-22 | 日立汽车系统株式会社 | Object recognition apparatus and vehicle travel controller using same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||