CN107886036A - Control method for vehicle, device and vehicle - Google Patents
- Publication number
- CN107886036A (application number CN201610874368.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- vehicle
- target vehicle
- identification
- highway
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Traffic Control Systems (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a vehicle control method, a device and a vehicle, which can reduce the manufacturing cost of an adaptive cruise system. The method includes: acquiring a first image and a second image, where the first image is a colour image or a luminance image and the second image is a depth image; identifying a target vehicle in the second image to obtain distance information of the target vehicle; obtaining the azimuth of the target vehicle according to the first image or the second image; determining the relative velocity of the target vehicle according to the azimuth of the target vehicle and an intermediate-frequency signal obtained by a constant-frequency radar; and controlling a kinematic parameter of the host vehicle according to the distance information and the relative velocity.
Description
Technical field
The present invention relates to the field of vehicle technology, and in particular to a vehicle control method, a vehicle control device and a vehicle.
Background art
With the continuous development of science and technology, travel has become increasingly convenient, and automobiles, electric vehicles and the like have become indispensable means of transport in daily life. Some existing vehicles are already equipped with an adaptive cruise function.
At present, the adaptive cruise system of a vehicle may use a millimetre-wave radar, a laser radar (lidar) or a stereo camera as its ranging sensor. By installing one of these sensor types, the vehicle can simultaneously sense multiple target vehicles ahead of it, and the kinematic parameters of the cruise system can be adjusted adaptively on that basis.
However, the localisation algorithms of a stereo camera are complex, which tends to increase the power consumption of the computing chip; and combining a single ordinary camera with a millimetre-wave radar or lidar requires a larger in-vehicle installation space and is relatively costly.
Summary of the invention
An object of the present invention is to provide a vehicle control method, a device and a vehicle that can reduce the manufacturing cost of an adaptive cruise system.
According to a first aspect of the embodiments of the present invention, a vehicle control method is provided, including:
acquiring a first image and a second image, where the first image is a colour image or a luminance image and the second image is a depth image;
identifying a target vehicle in the second image to obtain distance information of the target vehicle;
obtaining the azimuth of the target vehicle according to the first image or the second image;
determining the relative velocity of the target vehicle according to the azimuth of the target vehicle and an intermediate-frequency signal obtained by a constant-frequency radar;
controlling a kinematic parameter of the host vehicle according to the distance information and the relative velocity.
Optionally, the method further includes:
identifying lane lines according to the first image;
mapping the lane lines into the second image according to the mapping relationship between the first image and the second image, so as to determine at least one vehicle identification range in the second image, where every two adjacent lane lines create one vehicle identification range.
In this case, identifying a target vehicle in the second image includes:
identifying the target vehicle within the at least one vehicle identification range.
Optionally, the method further includes:
obtaining the slope of the initial straight segment of each lane line mapped into the second image;
marking the vehicle identification range created by the two lane lines whose initial straight segments have the largest slopes as the ego lane, and marking the remaining vehicle identification ranges as non-ego lanes.
Identifying the target vehicle within the at least one vehicle identification range then includes:
identifying ego-lane target vehicles in the range marked as the ego lane, identifying non-ego-lane target vehicles in the ranges marked as non-ego lanes, and identifying lane-changing target vehicles in the range formed by combining two adjacent vehicle identification ranges.
Optionally, the method further includes:
determining a target vehicle region in the second image by identifying the target vehicle;
mapping the target vehicle region into the first image according to the mapping relationship between the first image and the second image, so as to generate a lamp identification region in the first image;
identifying the turn signal of the target vehicle in the lamp identification region.
Controlling the kinematic parameter of the host vehicle according to the distance information and the relative velocity then includes:
controlling the kinematic parameter of the host vehicle according to the distance information, the relative velocity and the identified turn signal of the target vehicle.
Optionally, obtaining the azimuth of the target vehicle according to the first image or the second image includes:
obtaining the azimuth of the target vehicle according to the position of the target vehicle region in the second image; or
obtaining the azimuth of the target vehicle according to the position of the lamp identification region in the first image.
Optionally, the method further includes:
automatically calibrating the constant-frequency radar according to the identified azimuth of the target vehicle.
According to a second aspect of the embodiments of the present invention, a vehicle control device is provided, including:
an image acquisition module, configured to acquire a first image and a second image, where the first image is a colour image or a luminance image and the second image is a depth image;
a first identification module, configured to identify a target vehicle in the second image so as to obtain distance information of the target vehicle;
a first acquisition module, configured to obtain the azimuth of the target vehicle according to the first image or the second image;
a first determination module, configured to determine the relative velocity of the target vehicle according to the azimuth of the target vehicle and an intermediate-frequency signal obtained by a constant-frequency radar;
a control module, configured to control a kinematic parameter of the host vehicle according to the distance information and the relative velocity.
Optionally, the device further includes:
a second identification module, configured to identify lane lines according to the first image;
a first mapping module, configured to map the lane lines into the second image according to the mapping relationship between the first image and the second image, so as to determine at least one vehicle identification range in the second image, where every two adjacent lane lines create one vehicle identification range.
The first identification module is configured to:
identify the target vehicle within the at least one vehicle identification range.
Optionally, the device further includes:
a second acquisition module, configured to obtain the slope of the initial straight segment of each lane line mapped into the second image;
a creation module, configured to mark the vehicle identification range created by the two lane lines whose initial straight segments have the largest slopes as the ego lane, and to mark the remaining vehicle identification ranges as non-ego lanes.
The first identification module is configured to:
identify ego-lane target vehicles in the range marked as the ego lane, identify non-ego-lane target vehicles in the ranges marked as non-ego lanes, and identify lane-changing target vehicles in the range formed by combining two adjacent vehicle identification ranges.
Optionally, the device further includes:
a second determination module, configured to determine a target vehicle region in the second image by identifying the target vehicle;
a second mapping module, configured to map the target vehicle region into the first image according to the mapping relationship between the first image and the second image, so as to generate a lamp identification region in the first image;
a third identification module, configured to identify the turn signal of the target vehicle in the lamp identification region.
The control module is configured to:
control the kinematic parameter of the host vehicle according to the distance information, the relative velocity and the identified turn signal of the target vehicle.
Optionally, the first acquisition module is configured to:
obtain the azimuth of the target vehicle according to the position of the target vehicle region in the second image; or
obtain the azimuth of the target vehicle according to the position of the lamp identification region in the first image.
Optionally, the device further includes:
a calibration module, configured to automatically calibrate the constant-frequency radar according to the identified azimuth of the target vehicle.
According to a third aspect of the embodiments of the present invention, a vehicle is provided, including: an image acquisition device, configured to acquire a first image and a second image, where the first image is a colour image or a luminance image and the second image is a depth image; and the vehicle control device according to the second aspect.
In the embodiments of the present invention, the azimuth of a target vehicle can be obtained by image recognition, the relative velocity of the target vehicle can be obtained through the constant-frequency radar, and the distance information of the target vehicle can be obtained from the depth image. A constant-frequency radar combined with an ordinary camera can therefore sense the target vehicles around the host vehicle fairly accurately, enabling better adaptive cruising. Moreover, because the transmitter of a constant-frequency radar operates at a nearly constant carrier frequency, it occupies a much smaller electromagnetic bandwidth than the frequency-modulated radars used for ranging; the constant-frequency radar can thus use fewer components, reducing the cost of the adaptive cruise system.
Other features and advantages of the present invention will be described in detail in the subsequent detailed description.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the present invention and constitute a part of the specification. Together with the following detailed description, they serve to explain the present invention, but do not limit it. In the drawings:
Fig. 1 is a kind of flow chart of control method for vehicle according to an exemplary embodiment.
Fig. 2 is the flow chart of another control method for vehicle according to an exemplary embodiment.
Fig. 3 is the flow chart of another control method for vehicle according to an exemplary embodiment.
Fig. 4 is the flow chart of another control method for vehicle according to an exemplary embodiment.
Fig. 5 is a schematic diagram of a target vehicle region and a lamp identification region according to an exemplary embodiment.
Fig. 6 is a schematic diagram of a time-differential sub-image according to an exemplary embodiment.
Fig. 7 is a kind of block diagram of controller of vehicle according to an exemplary embodiment.
Fig. 8 is a kind of block diagram of vehicle according to an exemplary embodiment.
Detailed description
The embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be understood that the embodiments described here are only intended to illustrate and explain the present invention, not to limit it.
Fig. 1 is a flow chart of a vehicle control method according to an exemplary embodiment. As shown in Fig. 1, the vehicle control method can be applied to a host vehicle and includes the following steps.
Step S11: acquire a first image and a second image, where the first image is a colour image or a luminance image and the second image is a depth image.
Step S12: identify a target vehicle in the second image to obtain distance information of the target vehicle.
Step S13: obtain the azimuth of the target vehicle according to the first image or the second image.
Step S14: determine the relative velocity of the target vehicle according to the azimuth of the target vehicle and the intermediate-frequency signal obtained by the constant-frequency radar.
Step S15: control a kinematic parameter of the host vehicle according to the distance information and the relative velocity.
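Step S14 pairs the camera-derived azimuth with the constant-frequency radar's intermediate-frequency signal. The following is an illustrative sketch, not the patent's own implementation: for an unmodulated continuous-wave radar the IF (beat) frequency equals the Doppler shift, and the azimuth resolves that radial measurement into an along-track relative speed. The carrier frequency, Doppler value and function names are assumptions.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def relative_speed(f_doppler_hz: float, carrier_hz: float, azimuth_rad: float) -> float:
    """Along-track relative speed (m/s): Doppler gives f_d = 2 * v_radial * f0 / c,
    and the camera azimuth resolves the radial component along the lane axis."""
    v_radial = f_doppler_hz * C / (2.0 * carrier_hz)
    return v_radial / math.cos(azimuth_rad)

# Illustrative numbers: 24 GHz carrier, 1.6 kHz Doppler, target 10 deg off boresight.
v = relative_speed(1600.0, 24e9, math.radians(10.0))  # roughly 10 m/s
```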
The first image may be a colour image or a luminance image, and the second image may be a depth image. The first image and the second image can be obtained by the same image acquisition device mounted on the host vehicle: for example, the first image through the image sensor of the image acquisition device, and the second image through its TOF (time-of-flight) sensor.
In the embodiments of the present invention, the pixels of the colour or luminance image and the pixels of the depth image may be interleaved in a certain ratio; the embodiments of the present invention do not limit what the ratio actually is. For example, the image sensor and the TOF sensor can both be manufactured with a complementary metal-oxide-semiconductor (CMOS) process, so luminance pixels and TOF pixels can be fabricated in proportion on the same substrate. For instance, with an 8:1 ratio, 8 luminance pixels and 1 TOF pixel form one large interleaved pixel, where the photosensitive area of the 1 TOF pixel can equal the photosensitive area of the 8 luminance pixels, and the 8 luminance pixels can be arranged as an array of 2 rows by 4 columns. On a substrate with a 1-inch optical target surface, an active interleaved pixel array of 360 rows by 480 columns can be made, yielding an active luminance pixel array of 720 rows by 1920 columns and an active TOF pixel array of 360 rows by 480 columns. An image acquisition device combining the image sensor and the TOF sensor in this way can therefore obtain the colour or luminance image and the depth image simultaneously.
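The array sizes quoted above follow directly from the 8:1 layout. A minimal arithmetic check, with the layout constants taken from the example in the text:

```python
# Each interleaved pixel bundles 8 luminance pixels (2 rows x 4 columns)
# with 1 TOF pixel, so a 360 x 480 interleaved array yields the stated
# 720 x 1920 luminance array and 360 x 480 TOF array.
LUM_ROWS_PER_CELL, LUM_COLS_PER_CELL = 2, 4  # 8 luminance pixels per cell

def array_sizes(interleaved_rows: int, interleaved_cols: int):
    lum = (interleaved_rows * LUM_ROWS_PER_CELL, interleaved_cols * LUM_COLS_PER_CELL)
    tof = (interleaved_rows, interleaved_cols)  # one TOF pixel per cell
    return lum, tof

lum, tof = array_sizes(360, 480)
```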
Optionally, refer to Fig. 2, which is a flow chart of another vehicle control method. After the first image and the second image are acquired, the method may further include step S16: identify lane lines according to the first image; and step S17: map the lane lines into the second image according to the mapping relationship between the first image and the second image, so as to determine at least one vehicle identification range in the second image, where every two adjacent lane lines can create one vehicle identification range. In this case, step S12 can be: identify the target vehicle within the at least one vehicle identification range, to obtain the distance information of the target vehicle.
Since the first image is a colour or luminance image, and identifying the positions of lane lines only requires the luminance difference between the lane lines and the road surface, obtaining the lane lines only needs the luminance information of the first image. Therefore, when the first image is a luminance image, the lane lines can be identified directly from its luminance information; when the first image is a colour image, it can first be converted into a luminance image and the lane lines identified afterwards.
Every two adjacent lane lines create one vehicle identification range, i.e. a vehicle identification range corresponds to an actual lane, and a target vehicle identified within a vehicle identification range is a target vehicle travelling in that lane. Restricting target vehicle identification to the lanes in this way ensures that the identified objects are vehicles travelling on the road, avoids interference from non-vehicle objects in the image, and improves the accuracy of target vehicle identification.
Optionally, since lane lines include both solid and dashed lane lines, identifying the lane lines in the first image can consist of obtaining, from the first image, all edge pixel positions of each solid lane line that the lane lines include, and all edge pixel positions of each dashed lane line that the lane lines include. Only in this way can solid and dashed lane lines be identified completely, which in turn improves the accuracy of target vehicle identification.
Optionally, to obtain all edge pixel positions of each solid lane line, a binary image corresponding to the first image can be created, and all edge pixel positions of each solid lane line can then be detected in the binary image.
The embodiments of the present invention do not limit how the binary image corresponding to the first image is created; several possible approaches are described below.
For example, using the luminance difference between lane lines and the road surface, a luminance threshold can be found by search — for instance with a "histogram statistics and two peaks" algorithm — and a binary image highlighting the lane lines can be created from the threshold and the luminance image.
Alternatively, the luminance image can be divided into multiple luminance sub-images, the "histogram statistics and two peaks" search can be run on each sub-image to obtain multiple luminance thresholds, a binary sub-image highlighting the lane lines can be created from each threshold and its corresponding luminance sub-image, and the binary sub-images can then be assembled into a binary image that highlights the lane lines completely. This approach copes with brightness variations of the road surface or of the lane lines.
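The per-sub-image thresholding can be sketched as follows. The text names a "histogram statistics and two peaks" search; Otsu's method is used here as a common stand-in for that bimodal search, and the tile size and test data are illustrative assumptions.

```python
import numpy as np

def otsu_threshold(tile: np.ndarray) -> int:
    """Threshold maximising between-class variance of the tile's histogram."""
    hist = np.bincount(tile.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    cum_w = np.cumsum(hist)                       # pixels below each level
    cum_m = np.cumsum(hist * np.arange(256))      # intensity mass below each level
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum_w[t - 1], total - cum_w[t - 1]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_m[t - 1] / w0
        m1 = (cum_m[-1] - cum_m[t - 1]) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize_tiles(luma: np.ndarray, tile: int = 64) -> np.ndarray:
    """Threshold each sub-image independently, then assemble the binary image."""
    out = np.zeros_like(luma, dtype=np.uint8)
    for r in range(0, luma.shape[0], tile):
        for c in range(0, luma.shape[1], tile):
            sub = luma[r:r + tile, c:c + tile]
            out[r:r + tile, c:c + tile] = (sub >= otsu_threshold(sub)).astype(np.uint8)
    return out

# Bright lane stripe (value 200) on a dark road patch (value 40).
img = np.full((64, 64), 40, dtype=np.uint8)
img[:, 30:34] = 200
binary = binarize_tiles(img)
```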
After the binary image corresponding to the first image is created, all edge pixel positions of each solid lane line can be detected in the binary image. The embodiments of the present invention likewise do not limit the detection method.
For example, because the radius of curvature of a lane line cannot be too small, and because camera projection makes a nearby lane line occupy many more imaging pixels than a distant one, the pixels of a curved solid lane line that are arranged in a straight line still account for the majority of that lane line's imaging pixels in the luminance image. A straight-line detection algorithm such as the Hough transform can therefore detect, in the binary image highlighting the lane lines, all edge pixel positions of a straight solid lane line, or most of the initial-straight-segment edge pixel positions of a curved solid lane line.
Straight-line detection may also pick up most of the straight edge pixel positions of median strips, utility poles and the like in the binary image. However, a feasible slope range for lane lines in the binary image can be set according to, for example, the aspect ratio of the image sensor, the focal length of the lens, the road width range in highway design specifications, and the mounting position of the image sensor on the host vehicle, so that straight lines that are not lane lines can be filtered out according to this slope range.
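A minimal sketch of this slope filter, assuming detected segments are reported as endpoint pairs in image coordinates; the slope bounds and the sample segments are illustrative, not values from the patent:

```python
def filter_lane_candidates(segments, min_abs_slope=0.3, max_abs_slope=5.0):
    """Keep segments whose slope (rows per column) is geometrically feasible
    for a lane line. segments: iterable of (col0, row0, col1, row1)."""
    kept = []
    for c0, r0, c1, r1 in segments:
        if c0 == c1:  # vertical in the image, e.g. a utility pole
            continue
        slope = (r1 - r0) / (c1 - c0)
        if min_abs_slope <= abs(slope) <= max_abs_slope:
            kept.append((c0, r0, c1, r1))
    return kept

segments = [
    (100, 400, 300, 200),  # plausible lane line, slope -1.0
    (0, 250, 640, 252),    # near-horizontal, e.g. a guardrail shadow
    (320, 0, 320, 480),    # vertical, e.g. a utility pole
]
kept = filter_lane_candidates(segments)
```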
Since the edge pixel positions of a curved solid lane line always vary continuously, pixels connected to the edge pixels at both ends of the detected initial straight segment can be searched for and merged into the segment's edge pixel set. Repeating this search-and-merge process eventually determines all edge pixel positions of the curved solid lane line uniquely.
In the above manner, all edge pixel positions of solid lane lines can be detected.
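The search-and-merge step for curved solid lines can be sketched as connected-pixel growth from the end of the detected initial straight segment; the set-based grid and the sample coordinates are illustrative assumptions:

```python
def grow_from_endpoint(edge_pixels, start, collected):
    """Repeatedly absorb 8-connected edge pixels starting from `start`,
    so the curved remainder of a solid lane line joins the straight part.
    edge_pixels: set of (row, col); collected: pixels already in the line."""
    frontier = [start]
    while frontier:
        r, c = frontier.pop()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nxt = (r + dr, c + dc)
                if nxt in edge_pixels and nxt not in collected:
                    collected.add(nxt)
                    frontier.append(nxt)
    return collected

# Straight part (rows 0-4 at column 10) continuing into a curve.
edges = {(r, 10) for r in range(5)} | {(5, 11), (6, 12), (7, 13)}
lane = grow_from_endpoint(edges, (4, 10), {(r, 10) for r in range(5)})
```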
Optionally, with the first dashed lane line being any dashed lane line among the lane lines, its edge pixel positions can be obtained as follows: identify a first solid lane line according to the first image, then project all edge pixel positions of the first solid lane line onto the edge pixels of the initial straight segment of the first dashed lane line, so as to obtain all edge pixel positions of the first dashed lane line. Here the first solid lane line can be any solid lane line among the lane lines.
In the embodiments of the present invention, based on the prior knowledge of the solid lane line, the principle that real lane lines are parallel to each other, and the projection parameters of the image sensor and the camera, projecting all edge pixel positions of the first solid lane line onto the initial-straight-segment edge pixels of the first dashed lane line connects that initial straight segment with the edge pixels of the other, shorter segments belonging to the same dashed lane line, thereby yielding all edge pixel positions of the dashed lane line.
Optionally, with the first dashed lane line being any dashed lane line among the lane lines, its edge pixel positions can also be obtained by superimposing the binary images corresponding to multiple continuously acquired first images, so that the first dashed lane line is superimposed into a solid lane line, and then obtaining all edge pixel positions of the resulting solid line.
In the embodiments of the present invention, no prior knowledge of straight or curved roads is needed. When the vehicle cruises on a straight road, or on a curve with a constant steering angle, the lateral offset of a dashed lane line within a short continuous time span is almost negligible while its longitudinal offset is large. Consequently, across several consecutive binary images highlighting the lane lines at different moments, a dashed lane line can be superimposed into a single solid lane line, and all edge pixel positions of the dashed lane line can then be obtained with the solid-lane-line identification method described above.
Because the longitudinal offset of a dashed lane line depends on the host vehicle's speed, when identifying the first dashed lane line the minimum number of consecutive binary images (highlighting the lane lines at different moments) needed to superimpose the first dashed lane line into one solid lane line can be determined dynamically from the speed obtained from the wheel speed sensor, so that all edge pixel positions of the first dashed lane line are obtained.
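A sketch of this superposition, assuming binary frames are OR-accumulated and the frame count is derived from host speed; the dash-plus-gap period, frame interval and function names are illustrative assumptions:

```python
import math
import numpy as np

def min_frames(speed_mps: float, frame_dt_s: float, dash_period_m: float = 15.0) -> int:
    """Frames needed for the host to travel one dash+gap period,
    so the OR-accumulated dashes close into a solid line."""
    return max(1, math.ceil(dash_period_m / (speed_mps * frame_dt_s)))

def superimpose(binaries) -> np.ndarray:
    """OR-accumulate binary lane images from consecutive frames."""
    acc = np.zeros_like(binaries[0])
    for b in binaries:
        acc |= b
    return acc

# Two frames in which the dash has shifted longitudinally; OR fills the gap.
f0 = np.zeros((8, 1), dtype=np.uint8); f0[0:4, 0] = 1
f1 = np.zeros((8, 1), dtype=np.uint8); f1[4:8, 0] = 1
solid = superimpose([f0, f1])
n = min_frames(speed_mps=20.0, frame_dt_s=0.05)
```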
Optionally, refer to Fig. 3, which is a flow chart of another vehicle control method in the embodiments of the present invention. The method may further include step S18: obtain the slope of the initial straight segment of each lane line mapped into the second image; and step S19: mark the vehicle identification range created by the two lane lines whose initial straight segments have the largest slopes as the ego lane, and mark the remaining vehicle identification ranges as non-ego lanes. Step S12 can then identify ego-lane target vehicles in the range marked as the ego lane, identify non-ego-lane target vehicles in the ranges marked as non-ego lanes, and identify lane-changing target vehicles in the range formed by combining two adjacent vehicle identification ranges, so as to obtain the distance information of the target vehicles.
Owing to the interleaved mapping relationship between the first image and the second image, the row-column coordinates of each pixel of the first image can, after equal-proportion adjustment, determine at least one pixel's row-column coordinates in the second image. Therefore each lane-line edge pixel position obtained from the first image determines at least one pixel position in the second image, so that the proportionally adjusted lane lines are obtained in the second image. In the second image, every two adjacent lane lines create one vehicle identification range.
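With the 2 × 4 luminance-to-TOF layout of the earlier sensor example, the equal-proportion adjustment reduces to integer division of the row and column coordinates. The ratios below belong to that illustrative layout (720 × 1920 luminance to 360 × 480 TOF), not to every sensor:

```python
def lum_to_depth(row: int, col: int, row_ratio: int = 2, col_ratio: int = 4):
    """Map a luminance-image pixel to the depth-image pixel it interleaves with."""
    return row // row_ratio, col // col_ratio

# A lane-line edge pixel found in the 720 x 1920 luminance image lands here
# in the 360 x 480 depth image.
depth_px = lum_to_depth(701, 1023)
```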
For the proportionally adjusted lane lines obtained in the second image, the number of rows spanned by the initial straight segment of each lane line is compared with the number of columns spanned to obtain the slope of that initial straight segment. The vehicle identification range created by the two lane lines whose initial straight segments have the largest slopes is marked as the ego lane, and the other created vehicle identification ranges are marked as non-ego lanes.
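The labelling rule can be sketched as follows, assuming each mapped lane line reports the rows and columns spanned by its initial straight segment; the data layout and names are illustrative:

```python
def label_ranges(lines):
    """lines: list of (name, rows_spanned, cols_spanned) in left-to-right order.
    The two steepest initial segments bound the ego lane; every other
    inter-line range is non-ego."""
    slopes = [rows / cols for _, rows, cols in lines]
    steepest = sorted(range(len(lines)), key=lambda i: abs(slopes[i]), reverse=True)[:2]
    lo, hi = sorted(steepest)
    labels = []
    for i in range(len(lines) - 1):  # range i lies between line i and line i+1
        labels.append("ego" if (i == lo and i + 1 == hi) else "non-ego")
    return labels

# Four lane lines -> three ranges; the middle two lines image steepest.
lines = [("L0", 100, 400), ("L1", 300, 200), ("L2", 300, -180), ("L3", 100, -350)]
labels = label_ranges(lines)
```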
After the lanes are marked, ego-lane target vehicles can be identified in the range marked as the ego lane, non-ego-lane target vehicles in the ranges marked as non-ego lanes, and lane-changing target vehicles in the range formed by combining two adjacent vehicle identification ranges.
The embodiments of the present invention do not limit the way the target vehicle is identified; several possible approaches are described below.
First approach:
The distance and position of a target vehicle relative to the TOF sensor always change with time, whereas those of the road surface and median strips are approximately constant over time. A time-differential depth image can therefore be created from two depth images acquired at different moments, from which the position of the target vehicle in the second image, the distance between the target vehicle and the host vehicle, and so on can be identified.
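A minimal sketch of the time-differential idea, assuming metric depth frames and an illustrative change threshold: subtracting two frames suppresses the nearly static road surface while a moving vehicle leaves a region of changed depth.

```python
import numpy as np

def moving_mask(depth_t0: np.ndarray, depth_t1: np.ndarray, thresh: float = 0.5):
    """Pixels whose range changed by more than `thresh` metres between frames."""
    return np.abs(depth_t1 - depth_t0) > thresh

d0 = np.full((4, 4), 30.0)   # road/background at ~30 m, static between frames
d1 = d0.copy()
d1[1:3, 1:3] = 27.0          # a 2x2 vehicle region that closed in by 3 m
mask = moving_mask(d0, d1)
```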
Second approach:
In the second image, i.e. the depth image, the light reflected by the rear of one target vehicle forms on the TOF sensor a depth sub-image containing consistent distance information, so identifying the position of the depth sub-image formed by the target vehicle within the depth image yields the distance information of that target vehicle.
The light reflected from the rear of one target vehicle forms a depth sub-image with consistent distance information, whereas the light reflected from the road surface forms a depth sub-image with continuously varying distance information. At the junction between these two kinds of sub-image, abrupt differences necessarily appear, and the boundary of these abrupt differences forms the object boundary of the target vehicle in the depth image.
For example, edge detection methods from image processing, such as the Canny, Sobel or Laplace operators, can be used to detect the object boundary of the target vehicle.
Furthermore, since the vehicle identification ranges are determined by all the pixel positions of the lane lines, detecting the object boundary of the target vehicle within a vehicle identification range reduces interference from the boundaries formed by road furniture such as median strips, light poles and guard posts.
In practice there may be multiple target vehicles. The object boundaries detected within each vehicle identification range can therefore be projected onto the row coordinate axis of the image, and a one-dimensional search along that axis determines the number of rows and the row coordinate ranges occupied by the longitudinal object boundaries of all target vehicles in that range, as well as the number of columns and the column coordinate positions occupied by the transverse object boundaries. A longitudinal object boundary is one that occupies many pixel rows but few columns; a transverse object boundary occupies few pixel rows but many columns. According to the columns and column coordinate positions occupied by all transverse object boundaries in the vehicle identification range, the column coordinate positions of all longitudinal object boundaries (namely the start and end column coordinates of the corresponding transverse object boundaries) are looked up, and the object boundaries of different target vehicles are distinguished by the principle that an object boundary contains consistent distance information, thereby determining the positions and distance information of all target vehicles in the vehicle identification range.
The detected object boundary of a target vehicle therefore uniquely determines the position, within the depth image, of the depth sub-image formed by that target vehicle, and hence uniquely determines the target vehicle's distance information.
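The consistent-distance principle used above to separate several targets in one identification range can be sketched as grouping boundary pixels by nearly constant depth. This simplifies the patent's projection-based lookup; the tolerance and sample data are illustrative assumptions:

```python
def group_by_depth(boundary_px, tol=1.0):
    """boundary_px: list of (row, col, depth_m) boundary pixels in one
    vehicle identification range. Pixels whose depths agree within `tol`
    belong to one target; returns each target's mean range in metres."""
    groups = []  # list of (running_mean_depth, [pixels])
    for r, c, d in sorted(boundary_px, key=lambda p: p[2]):
        if groups and abs(groups[-1][0] - d) <= tol:
            mean, pix = groups[-1]
            pix.append((r, c))
            groups[-1] = ((mean * (len(pix) - 1) + d) / len(pix), pix)
        else:
            groups.append((d, [(r, c)]))
    return [round(m, 1) for m, _ in groups]

# Two vehicles in one identification range: one at ~18 m, one at ~35 m.
px = [(200, 110, 18.1), (200, 150, 17.9), (202, 130, 18.0),
      (150, 120, 35.2), (150, 140, 34.8)]
distances = group_by_depth(px)
```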
Of course, the target vehicle may also be identified in other ways; the embodiments of the present invention place no limitation on this, as long as the target vehicle can be identified.
Optionally, refer to Fig. 4, which is a flow chart of another vehicle control method in the embodiments of the present invention. The method may further include step S20: determine a target vehicle region in the second image by identifying the target vehicle; step S21: map the target vehicle region into the first image according to the mapping relationship between the first image and the second image, so as to generate a lamp identification region in the first image; and step S22: identify the turn signal of the target vehicle in the lamp identification region. Fig. 4 shows one possible execution order; the steps can also be executed in other orders — for example, steps S20 to S22 could be performed after step S14 — and the embodiments of the present invention do not limit the execution order of steps S20 to S22. In this case, step S15 can control the kinematic parameter of the host vehicle according to the distance information, the relative velocity and the identified turn signal of the target vehicle.
After the target vehicle is identified, a target vehicle region can be determined in the second image. The target vehicle region is simply the region of the second image where the target vehicle is located; it may be the closed region enclosed by the identified boundary of the target vehicle, or the closed region enclosed by an extension of that boundary, or a closed region enclosed by lines through certain pixel positions of the target vehicle, and so on. The embodiment of the present invention places no limitation on exactly which region serves as the target vehicle region, so long as the region contains the target vehicle.
Because of the interleaved mapping relationship between the first image and the second image, an equal-proportion adjustment of the row/column coordinates of each pixel of the target vehicle region in the second image determines the row/column coordinates of at least one pixel in the first image. Referring to Fig. 5, after the target vehicle region in the second image is mapped into the first image, a car-light identification region can be generated at the corresponding position of the first image. Since the imaging of the target vehicle's lights is contained in the target vehicle region, the turn signal of the target vehicle can be identified in the car-light identification region generated in the first image.
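The equal-proportion mapping of a region between the two images can be sketched as follows; this is a minimal illustration that assumes the two sensors share the same field of view, so a simple linear scaling of row/column coordinates suffices (the function name and box convention are invented):

```python
def map_region(box, depth_shape, color_shape):
    """Map a (r0, c0, r1, c1) bounding box from the depth image into
    the colour/luminance image by equal-proportion scaling of the
    row/column coordinates."""
    r0, c0, r1, c1 = box
    sr = color_shape[0] / depth_shape[0]   # row scale factor
    sc = color_shape[1] / depth_shape[1]   # column scale factor
    return (int(r0 * sr), int(c0 * sc), int(r1 * sr), int(c1 * sc))
```

The mapped box then serves directly as the car-light identification region in the first image.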
Optionally, the embodiment of the present invention places no limitation on how the turn signal of the target vehicle is identified in the car-light identification region. For example, time-differencing may be applied to the car-light identification regions in multiple successively acquired first images to create a time-difference sub-image of the target vehicle, and the turn signal is then identified from that sub-image. For instance, a rear turn signal may be identified in the car-light identification region by the colour, flicker frequency, or flicker sequence of the tail lights.
In the early stage of a lane change, the longitudinal and lateral displacements of the target vehicle are both small, which means the size of its car-light identification region also changes little; only the brightness of the flashing turn-signal imaging changes substantially. Therefore, several first images (colour or luminance images) at different moments are acquired in succession, and time-differencing is applied to the car-light identification region of the target vehicle in them to create a time-difference sub-image of the target vehicle. The time-difference sub-image highlights the sub-image of the continuously flashing tail light. The time-difference sub-image can then be projected onto the column axis and a one-dimensional search performed to obtain the starting and ending column coordinates of the tail-light sub-image; these starting and ending column coordinates are projected back onto the time-difference sub-image to find the starting and ending row coordinates of the tail-light sub-image; the starting and ending row/column coordinates of the tail-light sub-image are then projected into the aforementioned colour or luminance images at different moments to confirm the colour, flicker frequency, or flicker sequence of the tail light, thereby determining the row/column coordinates of the flashing tail-light sub-image.
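The time-differencing and column-axis projection described above can be sketched as a short routine; the accumulation-then-threshold scheme and all numeric values are illustrative assumptions, not the patent's specified implementation:

```python
import numpy as np

def flashing_columns(frames, box, thresh=30):
    """Time-difference the car-light region over consecutive luminance
    frames; pixels that change strongly (a blinking indicator) survive
    while the static vehicle body cancels out. Returns the start/end
    column of the flashing sub-image, or None if nothing flashes."""
    r0, c0, r1, c1 = box
    crops = [f[r0:r1, c0:c1].astype(np.int32) for f in frames]
    diff = np.zeros_like(crops[0])
    for a, b in zip(crops, crops[1:]):
        diff += np.abs(b - a)                 # accumulate temporal change
    profile = (diff > thresh).sum(axis=0)     # project onto column axis
    cols = np.nonzero(profile)[0]
    if cols.size == 0:
        return None
    return c0 + cols[0], c0 + cols[-1]        # start / end column
```

The returned column span is what is then projected back onto the rows and into the original frames to read off colour and flicker timing.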
Further, if the row/column coordinates of the flashing tail-light sub-image lie only on the left side of the car-light identification region of the target vehicle, it can be determined that the target vehicle is signalling a left turn; if they lie only on the right side of the car-light identification region, a right turn; and if they lie on both sides of the car-light identification region, the hazard (double-flash) warning lights are on.
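The left/right/hazard decision above reduces to checking where the flashing span sits inside the region; a minimal sketch, in which the half-width split and the 0.6 span fraction for "both sides" are illustrative assumptions:

```python
def classify_indicator(flash_start, flash_end, region_width):
    """Decide which indicator is active from where the flashing
    sub-image sits inside the car-light identification region."""
    centre = (flash_start + flash_end) / 2
    span = flash_end - flash_start
    if span > 0.6 * region_width:
        return "hazard"                 # flashing on both sides
    if centre < region_width / 2:
        return "left"                   # flashing on the left side
    return "right"                      # flashing on the right side
```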
In addition, during a lane change the longitudinal or lateral displacement of the target vehicle may be large, causing a correspondingly large change in the size of its car-light identification region. In that case, the car-light identification regions of the target vehicle in several successively acquired images at different moments can be compensated for longitudinal or lateral displacement and scaled to a uniform size; time-differencing is then applied to the adjusted car-light identification regions to create the time-difference sub-image of the target vehicle. The time-difference sub-image is projected onto the column axis and a one-dimensional search obtains the starting and ending column coordinates of the tail-light sub-image; these starting and ending column coordinates are projected back onto the time-difference car-light-identification-region sub-image to find the starting and ending row coordinates of the tail-light sub-image; the starting and ending row/column coordinates of the tail-light sub-image are projected into the aforementioned colour or luminance images at different moments to confirm the colour, flicker frequency, or flicker sequence of the tail light, thereby determining the row/column coordinates of the flashing tail-light sub-image and finally completing identification of the left turn signal, right turn signal, or hazard warning lights.
For example, Fig. 6 shows the time-difference sub-image corresponding to a car-light identification region, in which the continuously flashing tail-light sub-image is highlighted. If, by coordinate identification, the tail-light sub-image is found to lie on the left of the car-light identification region with a flicker frequency of once per second, it can be determined that the target vehicle is currently signalling a left turn.
In the above manner, the turn signal of the target vehicle can be identified reliably, so that whether and how the target vehicle will turn is known in advance, and adaptive cruise can be carried out better and more safely.
Optionally, the embodiment of the present invention places no limitation on how the azimuth of the target vehicle is obtained from the first image or the second image. For example, the azimuth may be obtained from the position of the target vehicle region in the second image, or from the position of the car-light identification region in the first image.
Because the lens parameters and mounting position of the camera that acquires the first image or the second image can be obtained in advance by camera calibration, a lookup table can be established relating road-scene coordinates with the camera as origin to the pixel coordinates of the first image or the second image.
Through this lookup table, the pixel coordinates contained in the target vehicle region or the car-light identification region can be converted into target vehicle coordinates with the camera as origin, and the azimuth of the target vehicle relative to the camera can then be calculated from the converted coordinates.
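The final step — turning a looked-up camera-frame coordinate into an azimuth — is just an arctangent. A minimal sketch, where the dict-based table stands in for real calibration output and the (lateral, forward) coordinate convention is an assumption:

```python
import math

def azimuth_from_pixel(px, py, lut):
    """Convert a pixel coordinate to an azimuth (degrees) in the camera
    frame via a precomputed calibration lookup table mapping pixels to
    road-plane (x, z) coordinates with the camera at the origin."""
    x, z = lut[(px, py)]                 # lateral offset, forward range
    return math.degrees(math.atan2(x, z))
```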
When there is a relative velocity between the target vehicle and the host vehicle, the reflected signal of the target vehicle received by the constant-carrier-frequency (continuous-wave) radar is passed through a phase shifter to produce quadrature reflected signals, and these are mixed with the radar's transmitted signal to produce quadrature intermediate-frequency (IF) signals. The quadrature IF signal contains a Doppler frequency related to the relative velocity: the magnitude of the Doppler frequency is proportional to the magnitude of the relative velocity, and its sign is the same as the sign of the relative velocity.
Using an analogue-to-digital converter and a complex fast Fourier transform, a spectrum of the quadrature IF signal that highlights the Doppler frequency can be created; a peak-detection algorithm yields the magnitude and sign of the Doppler frequency from that spectrum; and the magnitude and sign of the relative velocity are then determined from the Doppler frequency using the Doppler velocity formula.
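The chain just described — complex FFT, peak search, Doppler formula v = f_d·λ/2 — can be sketched in a few lines. The 77 GHz automotive-band wavelength default and the single-peak assumption are illustrative, not taken from the patent:

```python
import numpy as np

def doppler_velocity(i_sig, q_sig, fs, wavelength=0.0039):
    """Estimate relative velocity (m/s) from the quadrature IF signal
    of a continuous-wave radar: complex FFT, peak search, then the
    Doppler formula v = f_d * lambda / 2. The signed frequency of the
    peak gives the sign of the velocity."""
    z = np.asarray(i_sig) + 1j * np.asarray(q_sig)   # complex IF signal
    spec = np.fft.fft(z)
    freqs = np.fft.fftfreq(len(z), d=1.0 / fs)
    f_d = freqs[np.argmax(np.abs(spec))]             # dominant Doppler line
    return f_d * wavelength / 2.0
```

Because the FFT is taken over the complex signal, positive and negative Doppler frequencies (receding and approaching targets) are distinguished, which is exactly why the quadrature channels are needed.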
The constant-carrier-frequency radar may include two or more receivers in order to obtain the azimuth of a radar target. Because the receivers are mounted at different positions, the quadrature IF signals they obtain differ in phase at the same Doppler frequency.
From the phase difference of the receivers at the same Doppler frequency, obtained from the spectra of the quadrature IF signals, and the mutual positions of the receivers, the azimuth of the radar target is obtained using the phase-comparison angle-measurement formula. That is, the IF signals obtained by the constant-carrier-frequency radar reveal both the relative velocity and the azimuth of each target the radar senses.
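The phase-comparison angle-measurement formula referred to above is θ = arcsin(λ·Δφ / (2π·d)) for two receivers a distance d apart; a minimal sketch, with the wavelength default and half-wavelength spacing in the example being common assumptions rather than values from the patent:

```python
import math

def phase_angle(delta_phi, spacing, wavelength=0.0039):
    """Phase-comparison monopulse: azimuth (degrees) of a radar target
    from the phase difference (radians) between two receivers a known
    spacing (metres) apart: theta = arcsin(lambda*dphi / (2*pi*d))."""
    return math.degrees(math.asin(wavelength * delta_phi /
                                  (2 * math.pi * spacing)))
```

With the usual λ/2 receiver spacing, the full ±90° field of view maps onto an unambiguous ±π phase difference.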
When multiple target vehicles exist, their azimuths can be obtained by step S13, and the relative velocities and azimuths of multiple radar targets can be obtained from the IF signals of the constant-carrier-frequency radar. On the principle that the azimuth of a single target vehicle is approximately equal to the azimuth of a certain radar target, the relative velocity of that radar target can be taken as the relative velocity of that target vehicle.
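The approximately-equal-azimuth association can be sketched as a nearest-neighbour match; the tolerance and data shapes are illustrative assumptions:

```python
def match_radar_to_vehicle(vehicle_azimuths, radar_targets, tol=2.0):
    """Associate each camera-detected vehicle with the radar target of
    approximately equal azimuth, inheriting that target's relative
    velocity. radar_targets: list of (azimuth_deg, velocity_mps)."""
    matches = {}
    for i, az in enumerate(vehicle_azimuths):
        best = min(radar_targets, key=lambda t: abs(t[0] - az))
        if abs(best[0] - az) <= tol:
            matches[i] = best[1]   # vehicle i inherits this velocity
    return matches
```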
When the mounting positions of the camera and the constant-carrier-frequency radar are far apart, this approximate-equality principle may introduce error; the error can be eliminated simply by calibrating the azimuth coordinates of the two different origins to a common origin according to the positional relationship between the camera and the radar.
After the distance information and relative velocity of the target vehicle are obtained, the kinematic parameters of the host vehicle can be controlled according to this information during adaptive cruise; the embodiment of the present invention places no limitation on exactly how the control is performed. For example, if a target vehicle is recognised 100 metres directly ahead of the host vehicle travelling at −10 m/s relative to the host vehicle, the host vehicle can be controlled to decelerate in order to prevent a rear-end collision, and so on.
Of course, if the turn signal of the target vehicle has also been identified, the kinematic parameters of the host vehicle can further be controlled during adaptive cruise according to the distance information, relative velocity, and turn signal of the target vehicle. For example, if a target vehicle in the lane to the left of the host vehicle is travelling at −10 m/s relative to the host vehicle with its right turn signal on, the target vehicle may be about to move into the host vehicle's lane, so the host vehicle can be controlled to decelerate, and so on.
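A toy version of the control decision sketched in the two examples above might use time-to-collision; the 12 s limit, the command names, and the blanket "indicator means possible cut-in" rule are all illustrative assumptions, not the patent's control law:

```python
def acc_command(distance, rel_velocity, indicator=None, ttc_limit=12.0):
    """Minimal adaptive-cruise decision: decelerate when the target is
    closing fast enough (small time-to-collision), or when a vehicle in
    an adjacent lane signals toward this lane; otherwise hold speed."""
    if indicator in ("left", "right"):
        return "decelerate"                          # possible cut-in
    if rel_velocity < 0 and distance / -rel_velocity < ttc_limit:
        return "decelerate"                          # closing on slower target
    return "hold"
```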
Optionally, the constant-carrier-frequency radar can also be calibrated automatically according to the azimuth of the identified target vehicle.
Because the constant-carrier-frequency radar is mounted outside the cab, its azimuth measurements may be affected by vibration, temperature change, or coverings of rain, snow, and dirt, so automatic calibration is needed. For example, when the present invention identifies several target vehicles at different azimuths ahead of the host vehicle, the azimuths of the identified target vehicles can be compared with the azimuths of the radar targets to check for a consistent deviation. If there is a consistent deviation, the deviation is recorded in the memory of the constant-carrier-frequency radar, which reads it in subsequent azimuth measurements and automatically calibrates and compensates for it. If the deviation is inconsistent, a radar-unavailable warning can be issued to the driver of the host vehicle, prompting an inspection or cleaning of the radar.
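The consistent-deviation check can be sketched as comparing the spread of the per-target offsets against a threshold; the threshold value and the None-as-warning convention are illustrative assumptions:

```python
import statistics

def radar_bias(camera_az, radar_az, max_spread=0.5):
    """Check whether camera and radar azimuths of several targets
    disagree by a *consistent* offset: if so, return that offset for
    the radar to store and compensate; otherwise return None as a
    'radar needs inspection' signal."""
    devs = [r - c for c, r in zip(camera_az, radar_az)]
    if statistics.pstdev(devs) <= max_spread:
        return statistics.mean(devs)   # consistent deviation -> calibrate
    return None                        # inconsistent -> warn the driver
```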
Referring to Fig. 7, based on the same inventive concept, an embodiment of the present invention provides a vehicle control apparatus 100. The apparatus 100 may include:
an image acquisition module 101, configured to obtain a first image and a second image, where the first image is a colour image or a luminance image and the second image is a depth image;
a first identification module 102, configured to identify a target vehicle in the second image so as to obtain distance information of the target vehicle;
a first acquisition module 103, configured to obtain the azimuth of the target vehicle according to the first image or the second image;
a first determining module 104, configured to determine the relative velocity of the target vehicle according to the azimuth of the target vehicle and the intermediate-frequency signal obtained by the constant-carrier-frequency radar; and
a control module 105, configured to control the kinematic parameters of the host vehicle according to the distance information and the relative velocity.
Optionally, the apparatus 100 further includes:
a second identification module, configured to identify lane lines according to the first image; and
a first mapping module, configured to map the lane lines into the second image according to the mapping relationship between the first image and the second image, so as to determine at least one vehicle identification range in the second image, where every two adjacent lane lines create one vehicle identification range.
The first identification module 102 is then configured to identify the target vehicle within the at least one vehicle identification range.
Optionally, the apparatus 100 further includes:
a second acquisition module, configured to obtain the slope of the initial straight segment of each lane line mapped into the second image; and
a creation module, configured to mark as the ego lane the vehicle identification range created by the lane lines corresponding to the two initial straight segments of maximum slope, and to mark the remaining vehicle identification ranges as non-ego lanes.
The first identification module 102 is then configured to identify target vehicles in the ego lane within the vehicle identification range marked as the ego lane, identify target vehicles in non-ego lanes within the vehicle identification ranges marked as non-ego lanes, and identify lane-changing target vehicles within the vehicle identification range formed by combining two adjacent vehicle identification ranges.
Optionally, the apparatus 100 further includes:
a second determining module, configured to determine a target vehicle region in the second image by identifying the target vehicle;
a second mapping module, configured to map the target vehicle region into the first image according to the mapping relationship between the first image and the second image, so as to generate a car-light identification region in the first image; and
a third identification module, configured to identify the turn signal of the target vehicle in the car-light identification region.
The control module 105 is then configured to control the kinematic parameters of the host vehicle according to the distance information, the relative velocity, and the identified turn signal of the target vehicle.
Optionally, the first acquisition module 103 is configured to obtain the azimuth of the target vehicle according to the position of the target vehicle region in the second image, or according to the position of the car-light identification region in the first image.
Optionally, the apparatus 100 further includes a calibration module, configured to automatically calibrate the constant-carrier-frequency radar according to the azimuth of the identified target vehicle.
Referring to Fig. 8, based on the same inventive concept, an embodiment of the present invention provides a vehicle 200. The vehicle 200 may include an image acquisition device for obtaining a first image and a second image, where the first image is a colour image or a luminance image and the second image is a depth image, as well as the apparatus 100 of Fig. 7.
In the embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation — multiple units or components may be combined or integrated into another system, and some features may be omitted or not executed.
The functional modules in the embodiments of the present application may be integrated into one processing unit, or each module may exist physically on its own, or two or more modules may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented as a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application — in essence, the part contributing beyond the prior art, or all or part of the technical solution — may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, removable hard disk, ROM (read-only memory), RAM (random access memory), magnetic disk, or optical disc.
The above embodiments are intended only to describe the technical solution of the present invention in detail; their description is intended only to help understand the method of the present invention and its core idea, and should not be construed as limiting the invention. Any change or substitution readily conceivable by a person skilled in the art within the technical scope disclosed by the invention shall fall within the protection scope of the present invention.
Claims (13)
- 1. A vehicle control method, characterised by comprising:
obtaining a first image and a second image, wherein the first image is a colour image or a luminance image and the second image is a depth image;
identifying a target vehicle in the second image to obtain distance information of the target vehicle;
obtaining an azimuth of the target vehicle according to the first image or the second image;
determining a relative velocity of the target vehicle according to the azimuth of the target vehicle and an intermediate-frequency signal obtained by a constant-carrier-frequency radar; and
controlling kinematic parameters of a host vehicle according to the distance information and the relative velocity.
- 2. The method according to claim 1, characterised in that the method further comprises:
identifying lane lines according to the first image; and
mapping the lane lines into the second image according to the mapping relationship between the first image and the second image, so as to determine at least one vehicle identification range in the second image, wherein every two adjacent lane lines create one vehicle identification range;
and identifying a target vehicle in the second image comprises: identifying the target vehicle within the at least one vehicle identification range.
- 3. The method according to claim 2, characterised in that the method further comprises:
obtaining the slope of the initial straight segment of each lane line mapped into the second image; and
marking as the ego lane the vehicle identification range created by the lane lines corresponding to the two initial straight segments of maximum slope, and marking the remaining vehicle identification ranges as non-ego lanes;
and identifying the target vehicle within the at least one vehicle identification range comprises: identifying target vehicles in the ego lane within the vehicle identification range marked as the ego lane, identifying target vehicles in non-ego lanes within the vehicle identification ranges marked as non-ego lanes, and identifying lane-changing target vehicles within the vehicle identification range formed by combining two adjacent vehicle identification ranges.
- 4. The method according to claim 1, characterised in that the method further comprises:
determining a target vehicle region in the second image by identifying the target vehicle;
mapping the target vehicle region into the first image according to the mapping relationship between the first image and the second image, so as to generate a car-light identification region in the first image; and
identifying a turn signal of the target vehicle in the car-light identification region;
and controlling the kinematic parameters of the host vehicle according to the distance information and the relative velocity comprises: controlling the kinematic parameters of the host vehicle according to the distance information, the relative velocity, and the identified turn signal of the target vehicle.
- 5. The method according to claim 4, characterised in that obtaining the azimuth of the target vehicle according to the first image or the second image comprises:
obtaining the azimuth of the target vehicle according to the position of the target vehicle region in the second image; or
obtaining the azimuth of the target vehicle according to the position of the car-light identification region in the first image.
- 6. The method according to claim 1, characterised in that the method further comprises:
automatically calibrating the constant-carrier-frequency radar according to the azimuth of the identified target vehicle.
- 7. A vehicle control apparatus, characterised by comprising:
an image acquisition module, configured to obtain a first image and a second image, wherein the first image is a colour image or a luminance image and the second image is a depth image;
a first identification module, configured to identify a target vehicle in the second image to obtain distance information of the target vehicle;
a first acquisition module, configured to obtain an azimuth of the target vehicle according to the first image or the second image;
a first determining module, configured to determine a relative velocity of the target vehicle according to the azimuth of the target vehicle and an intermediate-frequency signal obtained by a constant-carrier-frequency radar; and
a control module, configured to control kinematic parameters of a host vehicle according to the distance information and the relative velocity.
- 8. The apparatus according to claim 7, characterised in that the apparatus further comprises:
a second identification module, configured to identify lane lines according to the first image; and
a first mapping module, configured to map the lane lines into the second image according to the mapping relationship between the first image and the second image, so as to determine at least one vehicle identification range in the second image, wherein every two adjacent lane lines create one vehicle identification range;
and the first identification module is configured to identify the target vehicle within the at least one vehicle identification range.
- 9. The apparatus according to claim 8, characterised in that the apparatus further comprises:
a second acquisition module, configured to obtain the slope of the initial straight segment of each lane line mapped into the second image; and
a creation module, configured to mark as the ego lane the vehicle identification range created by the lane lines corresponding to the two initial straight segments of maximum slope, and to mark the remaining vehicle identification ranges as non-ego lanes;
and the first identification module is configured to identify target vehicles in the ego lane within the vehicle identification range marked as the ego lane, identify target vehicles in non-ego lanes within the vehicle identification ranges marked as non-ego lanes, and identify lane-changing target vehicles within the vehicle identification range formed by combining two adjacent vehicle identification ranges.
- 10. The apparatus according to claim 7, characterised in that the apparatus further comprises:
a second determining module, configured to determine a target vehicle region in the second image by identifying the target vehicle;
a second mapping module, configured to map the target vehicle region into the first image according to the mapping relationship between the first image and the second image, so as to generate a car-light identification region in the first image; and
a third identification module, configured to identify a turn signal of the target vehicle in the car-light identification region;
and the control module is configured to control the kinematic parameters of the host vehicle according to the distance information, the relative velocity, and the identified turn signal of the target vehicle.
- 11. The apparatus according to claim 10, characterised in that the first acquisition module is configured to:
obtain the azimuth of the target vehicle according to the position of the target vehicle region in the second image; or
obtain the azimuth of the target vehicle according to the position of the car-light identification region in the first image.
- 12. The apparatus according to claim 7, characterised in that the apparatus further comprises:
a calibration module, configured to automatically calibrate the constant-carrier-frequency radar according to the azimuth of the identified target vehicle.
- 13. A vehicle, characterised by comprising:
an image acquisition device, configured to obtain a first image and a second image, wherein the first image is a colour image or a luminance image and the second image is a depth image; and
the vehicle control apparatus according to any one of claims 7 to 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610874368.XA CN107886036B (en) | 2016-09-30 | 2016-09-30 | Vehicle control method and device and vehicle |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610874368.XA CN107886036B (en) | 2016-09-30 | 2016-09-30 | Vehicle control method and device and vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107886036A true CN107886036A (en) | 2018-04-06 |
CN107886036B CN107886036B (en) | 2020-11-06 |
Family
ID=61770063
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610874368.XA Active CN107886036B (en) | 2016-09-30 | 2016-09-30 | Vehicle control method and device and vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107886036B (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104952254A (en) * | 2014-03-31 | 2015-09-30 | 比亚迪股份有限公司 | Vehicle identification method and device and vehicle |
CN104112118A (en) * | 2014-06-26 | 2014-10-22 | 大连民族学院 | Lane departure early-warning system-based lane line detection method |
Non-Patent Citations (1)
Title |
---|
江登银: "汽车自动防撞雷达系统的研究", 《中国优秀硕士学位论文全文数据库 工程科技Ⅱ辑,2011年第S2期,C035-70》 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111060946A (en) * | 2018-10-17 | 2020-04-24 | 三星电子株式会社 | Method and apparatus for estimating position |
CN110281923A (en) * | 2019-06-28 | 2019-09-27 | 信利光电股份有限公司 | A kind of vehicle auxiliary lane change method, apparatus and system |
CN110596656A (en) * | 2019-08-09 | 2019-12-20 | 山西省煤炭地质物探测绘院 | Intelligent street lamp feedback compensation system based on big data |
CN110596656B (en) * | 2019-08-09 | 2021-11-16 | 山西省煤炭地质物探测绘院 | Intelligent street lamp feedback compensation system based on big data |
Also Published As
Publication number | Publication date |
---|---|
CN107886036B (en) | 2020-11-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107886770A (en) | Vehicle identification method, device and vehicle | |
CN105206109B (en) | A kind of vehicle greasy weather identification early warning system and method based on infrared CCD | |
CN107886030A (en) | Vehicle identification method, device and vehicle | |
US9754172B2 (en) | Three-dimenisional object detection device | |
CN109544633B (en) | Target ranging method, device and equipment | |
US9135511B2 (en) | Three-dimensional object detection device | |
CN104520894A (en) | Roadside object detection device | |
CN104376297A (en) | Detection method and device for linear indication signs on road | |
CN104411559A (en) | A robust method for detecting traffic signals and their associated states | |
CN107886729A (en) | Vehicle identification method, device and vehicle | |
EP2821982B1 (en) | Three-dimensional object detection device | |
US9830519B2 (en) | Three-dimensional object detection device | |
CN107886036A (en) | Control method for vehicle, device and vehicle | |
CN104094311A (en) | Three-dimensional object detection device | |
KR20220135186A (en) | Electronic device and control method | |
CN107220632B (en) | Road surface image segmentation method based on normal characteristic | |
CN110727269B (en) | Vehicle control method and related product | |
CN109895697B (en) | Driving auxiliary prompting system and method | |
JP2019211403A (en) | Object position measurement device and object position measurement program | |
CN114677658B (en) | Billion-pixel dynamic large scene image acquisition and multi-target detection method and device | |
CN108528450A (en) | Vehicle travels autocontrol method and device | |
US11810453B2 (en) | Route control device and route control method | |
Nedevschi et al. | On-board stereo sensor for intersection driving assistance architecture and specification | |
EP4246467A1 (en) | Electronic instrument, movable apparatus, distance calculation method, and storage medium | |
KR20240070592A (en) | Apparatus and method for determining the distance of an optical signal transmitter |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||