WO2018059586A1 - Vehicle identification method, device, and vehicle - Google Patents
Vehicle identification method, device, and vehicle
- Publication number
- WO2018059586A1 (PCT/CN2017/104875)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- vehicle
- lane
- target vehicle
- lane line
- Prior art date
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- B60W30/09—Taking automatic action to avoid collision, e.g. braking and steering
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/14—Adaptive cruise control
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/017—Detecting movement of traffic to be counted or controlled identifying vehicles
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/017—Detecting movement of traffic to be counted or controlled identifying vehicles
- G08G1/0175—Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/167—Driving aids for lane monitoring, lane changing, e.g. blind spot detection
Definitions
- the present invention relates to the field of vehicle technology, and in particular, to a vehicle identification method, device, and vehicle.
- a distance measuring sensor can be installed on the vehicle to sense multiple target vehicles in front of the vehicle, so as to reduce the incidence of collision accidents.
- a stereo camera can be used as a ranging sensor, or a single ordinary camera can be used with a millimeter wave radar or a laser radar as a ranging sensor to increase the accuracy of measurement.
- the ranging algorithm using a stereo camera is more complicated, which may increase the power consumption of the computer chip, while the use of a single ordinary camera combined with a millimeter wave radar or a laser radar requires a large in-vehicle installation space and a high cost.
- the above-mentioned ranging sensors mainly detect changes in the distance to a target vehicle; when that distance does not change or changes little, for example in the initial stage of a lane change, such sensors may fail to detect the situation, so corresponding measures cannot be taken in time.
- a vehicle identification method including:
- the driving information of the front target vehicle and the rear target vehicle is obtained based on the identified turn signals of the front target vehicle and the rear target vehicle.
- a vehicle identification device includes:
- An image acquisition module configured to acquire a first image and a second image located in front of a traveling direction of the subject vehicle, and to acquire a third image and a fourth image located behind the traveling direction of the subject vehicle, wherein the first image and the third image are color images or brightness images, and the second image and the fourth image are depth images;
- a first identification module configured to identify a front target vehicle in the second image, and identify a rear target vehicle in the fourth image
- a first mapping module configured to map, according to a mapping relationship between the first image and the second image, the front target vehicle region corresponding to the front target vehicle in the second image into the first image to generate a front light recognition area in the first image, and to map, according to a mapping relationship between the third image and the fourth image, the rear target vehicle region corresponding to the rear target vehicle in the fourth image into the third image to generate a rear light recognition area in the third image;
- a second identification module configured to identify a turn signal of the front target vehicle in the front light recognition area, and identify a turn signal of the rear target vehicle in the rear light recognition area;
- a first acquisition module configured to obtain driving information of the front target vehicle and the rear target vehicle according to the identified turn signals of the front target vehicle and the rear target vehicle.
- a vehicle comprising the vehicle identification device provided by the second aspect described above.
- compared with a stereo camera, a single ordinary camera combined with a millimeter wave radar, or a single ordinary camera combined with a laser radar, this arrangement is simpler, requires less installation space, and uses a simpler calculation method, reducing the performance requirements on the computer chip.
- moreover, the turn signals of target vehicles located in front of and behind the traveling direction of the subject vehicle can be identified through the cooperation of the depth images and the color images, so that even when a front or rear target vehicle is in the initial stage of a lane change it is possible to know whether and how the target vehicle is changing lanes, allowing corresponding measures to be taken earlier and improving the driving safety of the vehicle.
- FIG. 1 is a flow chart of a vehicle identification method according to an exemplary embodiment.
- FIG. 2 is a flow chart of another vehicle identification method, according to an exemplary embodiment.
- FIG. 3 is a schematic diagram of a target vehicle area and a vehicle light identification area, according to an exemplary embodiment.
- FIG. 4 is a schematic diagram of a time-differential sub-image, according to an exemplary embodiment.
- FIG. 5 is a schematic diagram of identifying a target vehicle, according to an exemplary embodiment.
- FIG. 6 is a schematic diagram of identifying a target vehicle, according to an exemplary embodiment.
- FIG. 7 is a schematic diagram of identifying a target vehicle, according to an exemplary embodiment.
- FIG. 8 is a block diagram of a vehicle identification device, according to an exemplary embodiment.
- FIG. 9 is a block diagram of a vehicle, according to an exemplary embodiment.
- FIG. 1 is a flowchart of a vehicle identification method according to an exemplary embodiment. As shown in FIG. 1, the vehicle identification method may be applied in a vehicle and includes the following steps.
- Step S11 Acquire a first image and a second image located in front of the traveling direction of the subject vehicle, and acquire a third image and a fourth image located behind the traveling direction of the subject vehicle.
- Step S12 Identifying the front target vehicle in the second image, and identifying the rear target vehicle in the fourth image.
- Step S13 Map the front target vehicle area corresponding to the front target vehicle in the second image into the first image according to the mapping relationship between the first image and the second image to generate a front light recognition area in the first image, and map the rear target vehicle area corresponding to the rear target vehicle in the fourth image into the third image according to the mapping relationship between the third image and the fourth image to generate a rear light recognition area in the third image.
- Step S14 Identifying the turn signal of the front target vehicle in the front lamp recognition area, and identifying the turn signal of the rear target vehicle in the rear lamp recognition area.
- Step S15 Obtain driving information of the front target vehicle and the rear target vehicle based on the identified turn signals of the front target vehicle and the rear target vehicle.
- the first image and the third image may be a color image or a brightness image
- the second image and the fourth image may be depth images
- the first image and the second image are environmental images acquired in front of the traveling direction of the subject vehicle, and may be acquired by the same image acquisition device disposed on the front of the subject vehicle; the third image and the fourth image are environmental images acquired behind the traveling direction of the subject vehicle, and may be acquired by the same image acquisition device disposed on the rear of the subject vehicle.
- the first image can be acquired by the image sensor of the image acquisition device, and the second image can be acquired by the TOF (Time of Flight) sensor of the same image acquisition device.
- the pixels of the color or the brightness image and the pixels of the depth image may be interlaced in a certain ratio, and the ratio is not limited in the embodiment of the present invention.
- both the image sensor and the TOF sensor can be fabricated using a complementary metal oxide semiconductor (CMOS) process, and the luminance pixels and TOF pixels can be interleaved in a given ratio on the same substrate. For example, at a ratio of 8:1, 8 luminance pixels and one TOF pixel form one large interleaved pixel, where the photosensitive area of the TOF pixel can equal the photosensitive area of the 8 luminance pixels and the 8 luminance pixels can be arranged in an array of 2 rows and 4 columns.
- an array of 360 rows and 480 columns of such active interleaved pixels can be fabricated on a 1-inch optical target substrate, comprising an active luminance pixel array of 720 rows and 1920 columns and an active TOF pixel array of 360 rows and 480 columns.
- the same image capturing device composed of the image sensor and the TOF sensor can simultaneously acquire color or brightness images and depth images.
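- As an illustration of the mapping relationship implied by the interleaving described above, the following sketch (assuming the example 8:1 ratio with 2 rows × 4 columns of luminance pixels per TOF pixel; the function names are ours, not part of the disclosure) relates a depth-image pixel to the luminance-image pixels it covers by simple proportional scaling:

```python
# Coordinate mapping implied by the example 8:1 interleave (2 rows x 4 columns of
# luminance pixels per TOF pixel): a 360x480 depth image and a 720x1920 luminance
# image produced by the same interleaved sensor. Hypothetical helper names.
def depth_to_luminance(row, col, row_ratio=2, col_ratio=4):
    """Map one depth-image pixel to the block of luminance pixels it covers."""
    return [(row * row_ratio + dr, col * col_ratio + dc)
            for dr in range(row_ratio) for dc in range(col_ratio)]

def luminance_to_depth(row, col, row_ratio=2, col_ratio=4):
    """Map one luminance-image pixel to the depth pixel that covers it."""
    return (row // row_ratio, col // col_ratio)

# Example: depth pixel (100, 200) covers luminance rows 200-201, columns 800-803.
print(depth_to_luminance(100, 200))
print(luminance_to_depth(201, 803))  # -> (100, 200)
```

- The same proportional scaling underlies the later steps that map lane lines and target vehicle areas between the depth image and the color or brightness image.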
- FIG. 2 is a flowchart of another vehicle identification method.
- the method may further include step S16: identifying a forward road lane line according to the first image, and identifying a rear road lane line according to the third image;
- step S12 may be to identify the front target vehicle in the at least one forward vehicle recognition range, and identify the rear target vehicle in the at least one rear vehicle identification range.
- each two adjacent road lane lines create a vehicle identification range.
- road lane markings refer to markings that convey traffic information such as guidance, restriction, and warning to traffic participants by means of lines, arrows, characters, elevation marks, raised road studs, and outline signs on the road surface.
- Different line types correspond to different indications. Taking China's highway lane markings as an example, the types include the following.
- 1. White dashed line: when drawn along a road section, it separates traffic flows travelling in the same direction or serves as a safe-distance marker.
- 2. White solid line: when drawn along a road section, it separates motor vehicles and non-motor vehicles travelling in the same direction, or marks the edge of the roadway; when drawn at an intersection, it serves as a guide lane line or a stop line.
- 3. Yellow dashed line: when drawn along a road section, it separates traffic flows travelling in opposite directions; when drawn on the roadside or curb, it prohibits long-term parking at the roadside.
- 4. Yellow solid line: when drawn along a road section, it separates traffic flows travelling in opposite directions; when drawn on the roadside or curb, it prohibits long-term or temporary parking at the roadside.
- 5. Double white dashed line: when drawn at an intersection, it serves as a deceleration-and-yield line; when drawn along a road section, it marks a variable lane whose driving direction changes with time.
- 6. Double yellow solid line: when drawn along a road section, it separates traffic flows travelling in opposite directions.
- 7. Yellow dashed-solid line: when drawn along a road section, it separates traffic flows travelling in opposite directions; vehicles on the solid-line side may not overtake, cross, or make a U-turn, while vehicles on the dashed-line side may overtake, cross, or make a U-turn when it is safe to do so.
- 8. Double white solid line: when drawn at an intersection, it serves as a stop line.
- 9. Orange dashed or solid line: used as work-zone marking.
- 10. Blue dashed or solid line: used as a non-motorized lane marking; when used to delimit parking spaces, it indicates free parking spaces.
- lane line identification can also be adjusted according to the local lane marking rules of the geographical region in which the vehicle is traveling.
- the front road lane line can be identified directly from the brightness information of the first image, or the first image can first be converted into a brightness image and the front road lane line then identified.
- the rear road lane line can be identified from the third image in the same manner.
- Every two adjacent road lane lines create a vehicle identification range, so each vehicle identification range corresponds to an actual lane. Identifying the front target vehicle within a front vehicle identification range and the rear target vehicle within a rear vehicle identification range confines the search for target vehicles to a lane, ensures that the identified object is a vehicle traveling in that lane, avoids interference from non-vehicle objects in the image, and improves the accuracy of target vehicle identification.
- identifying the road lane lines may include: acquiring, according to the first image, all edge pixel positions of each solid lane line included in the forward road lane line and all edge pixel positions of each dashed lane line included in the forward road lane line; and acquiring, according to the third image, all edge pixel positions of each solid lane line included in the rear road lane line and all edge pixel positions of each dashed lane line included in the rear road lane line.
- the following takes obtaining a solid lane line included in the forward road lane line as an example.
- the embodiment of the present invention does not limit how the binary image corresponding to the first image is created; several possible ways are given below.
- a suitable brightness threshold can be obtained by searching. For example, the brightness threshold can be found using a histogram-statistics bimodal (double-peak) algorithm, and a binary image highlighting the road lane lines can then be created from the brightness threshold and the brightness image.
- alternatively, the brightness image may be divided into a plurality of brightness sub-images; the bimodal algorithm is applied to each brightness sub-image to find a plurality of brightness thresholds, each threshold and its corresponding brightness sub-image are used to create a binary sub-image highlighting the road lane lines, and the binary sub-images are combined into a complete binary image highlighting the road lane lines, so as to cope with variations in road or lane line brightness.
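- A minimal sketch of the sub-image thresholding idea, using Otsu's method as a stand-in for the histogram-statistics bimodal search (OpenCV in Python; the tile count and the synthetic test image are arbitrary choices, not values from the disclosure):

```python
import numpy as np
import cv2

def lane_binary_image(brightness, tiles=(4, 4)):
    """Binarize a brightness image tile by tile with a bimodal (Otsu) threshold."""
    h, w = brightness.shape
    out = np.zeros_like(brightness)
    th, tw = h // tiles[0], w // tiles[1]
    for i in range(tiles[0]):
        for j in range(tiles[1]):
            tile = brightness[i*th:(i+1)*th, j*tw:(j+1)*tw]
            if tile.max() - tile.min() < 10:   # flat tile: nothing to threshold
                continue
            # Otsu's method picks a threshold between the two histogram peaks
            _, binary = cv2.threshold(tile, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            out[i*th:(i+1)*th, j*tw:(j+1)*tw] = binary
    return out

# Synthetic example: dark road surface with two bright lane stripes.
road = np.full((240, 320), 40, np.uint8)
road[:, 100:105] = 220
road[:, 215:220] = 220
binary = lane_binary_image(road)
```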
- how the edge pixel positions are detected in the binary image is likewise not limited by the embodiment of the present invention; one possible way is described below.
- a near portion of a lane line occupies more imaging pixels than a far portion, so even for a curved solid lane line the pixels arranged approximately in a straight line in the brightness image account for most of its imaging pixels. A line detection algorithm such as the Hough transform can therefore be used on the binary image highlighting the road lane lines to detect all edge pixel positions of a straight solid lane line, or most of the initial straight-line edge pixel positions of a curved solid lane line.
- Straight line detection may also pick up median strips and poles, which likewise appear as mostly straight edge pixels in the binary image. A slope range for lane lines in the binary image can therefore be set according to the aspect ratio of the image sensor, the focal length of the camera lens, the road width range in the road design specification, and the mounting position of the image sensor on the subject vehicle, and lines whose slopes fall outside this range are filtered out as non-lane lines.
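- The line detection and slope filtering could look roughly like the following sketch (the Canny/Hough parameters are placeholders, and the single slope cutoff is a crude stand-in for the geometry-derived slope range described above):

```python
import numpy as np
import cv2

def detect_lane_segments(binary, min_abs_slope=0.3):
    """Detect straight segments in a binary lane image and apply a slope filter."""
    edges = cv2.Canny(binary, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=40, minLineLength=30, maxLineGap=10)
    kept = []
    if segments is not None:
        for x1, y1, x2, y2 in segments[:, 0]:
            if x1 == x2:                      # vertical segment: always a lane candidate
                kept.append((x1, y1, x2, y2))
                continue
            slope = (y2 - y1) / (x2 - x1)     # rows per column in image coordinates
            if abs(slope) >= min_abs_slope:   # reject segments too close to horizontal
                kept.append((x1, y1, x2, y2))
    return kept
```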
- all edge pixel positions of a curved solid lane line can be determined by searching, at both ends of the detected initial straight line of the curved lane line, for pixels connected to the end edge pixels, incorporating those connected pixel positions into the initial straight-line edge pixel set, and repeating this find-and-incorporate process until all edge pixel positions of the curved solid lane line are determined.
- a connected pixel of an edge pixel refers to a pixel adjacent to the edge pixel position and having a similar value.
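- A minimal sketch of this find-and-incorporate growth, written as a flood of 8-connected neighbours with similar values; seed_pixels would be the detected initial straight-line edge set (names and the tolerance are ours, not from the disclosure):

```python
import numpy as np

def grow_curve_edges(binary, seed_pixels, tol=0):
    """Extend an initial straight-line edge set along a curved lane line by
    repeatedly absorbing 8-connected neighbours with a similar binary value."""
    h, w = binary.shape
    edge_set = set(seed_pixels)
    frontier = list(seed_pixels)
    while frontier:
        r, c = frontier.pop()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in edge_set
                        and abs(int(binary[nr, nc]) - int(binary[r, c])) <= tol):
                    edge_set.add((nr, nc))
                    frontier.append((nr, nc))
    return edge_set
```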
- the binary image of the third image can be created in the same manner as described above, and then all the edge pixels of each solid lane line included in the rear road lane line are detected. In this way, the solid lane line ahead and behind the direction of travel of the subject vehicle can be identified.
- the first dashed lane line is any dashed road lane line included in the forward road lane line. To identify it, a first solid lane line in the forward road lane line may first be identified according to the first image, the first solid lane line being any solid road lane line included in the forward road lane line; then, according to the initial straight-line position of the first dashed lane line, all edge pixel positions of the first solid lane line are mapped onto the edge pixel positions of the first dashed lane line, thereby acquiring all edge pixel positions of the first dashed lane line.
- specifically, based on prior knowledge of the solid lane line, the fact that real lane lines are parallel, and the projection parameters of the image sensor and camera lens, all edge pixel positions of the first solid lane line can be projected onto the initial straight-line edge pixel positions of the first dashed lane line; this connects the initial straight-line edge pixel positions of the first dashed lane line with the edge pixel positions of the other, shorter line segments belonging to it, thereby acquiring all edge pixel positions of the dashed lane line.
- similarly, the second dashed lane line is any dashed road lane line included in the rear road lane line. A second solid lane line in the rear road lane line can be identified according to the third image, the second solid lane line being any solid road lane line included in the rear road lane line; then, according to the initial straight-line position of the second dashed lane line, all edge pixel positions of the second solid lane line are projected onto the edge pixel positions of the second dashed lane line, thereby acquiring all edge pixel positions of the second dashed lane line. In this way, the dashed lane lines ahead of and behind the traveling direction of the subject vehicle can be identified.
- alternatively, for the first dashed lane line (any dashed road lane line included in the forward road lane line), the binary images corresponding to a plurality of consecutively acquired first images may be superimposed so that the first dashed lane line is superimposed into a solid lane line.
- this approach does not require prior knowledge of whether the road is straight or curved: while the vehicle is cruising on a straight road or on a curve with a constant steering angle, the lateral offset of the dashed lane line over a short continuous period is almost negligible while its longitudinal offset is large, so in several binary images of the highlighted road lane lines taken at different times the dashed lane line is superimposed into a solid lane line, and all of its edge pixel positions can then be obtained using the solid lane line identification method described above.
- the minimum number of binary images of the highlighted road lane lines at consecutive times needed to superimpose the first dashed lane line into a solid lane line, and thus to obtain all edge pixel positions of the first dashed lane line, can be determined dynamically according to the vehicle speed acquired from the wheel speed sensor.
- similarly, for the second dashed lane line (any dashed road lane line included in the rear road lane line), the binary images corresponding to a plurality of consecutively acquired third images may be superimposed so that the second dashed lane line is superimposed into a solid lane line, and all edge pixel positions of that superimposed solid lane line are then acquired. In this way, the dashed lane lines ahead of and behind the traveling direction of the subject vehicle can be identified.
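- A sketch of the superposition idea: OR-ing several consecutive binary lane images fills a dashed line in as a solid one, with the frame count chosen from the vehicle speed (the dash-plus-gap period used here is an assumed value, not one from the disclosure):

```python
import numpy as np

def frames_needed(speed_mps, frame_interval_s, dash_period_m=15.0):
    """Rough frame count so the vehicle travels one dash+gap period during the stack.
    dash_period_m is an assumed marking period, not a value from the disclosure."""
    travel_per_frame = max(speed_mps * frame_interval_s, 1e-6)
    return int(np.ceil(dash_period_m / travel_per_frame))

def stack_binaries(binaries):
    """Logical-OR several binary lane images so a dashed line fills in as a solid one."""
    stacked = np.zeros_like(binaries[0])
    for b in binaries:
        stacked = np.maximum(stacked, b)
    return stacked
```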
- the slope of the initial straight line of each front road lane line mapped into the second image may be acquired, and the slope of the initial straight line of each rear road lane line mapped into the fourth image may be acquired; then the front vehicle identification range created by the two front road lane lines whose initial straight lines have the largest slopes is marked as the front own lane, the remaining front vehicle identification ranges are marked as front non-own lanes, the rear vehicle identification range created by the two rear road lane lines whose initial straight lines have the largest slopes is marked as the rear own lane, and the remaining rear vehicle identification ranges are marked as rear non-own lanes.
- step S12 may then be: identifying the front target vehicle in the own lane within the front vehicle identification range marked as the front own lane, identifying front target vehicles in non-own lanes within the front vehicle identification ranges marked as front non-own lanes, and identifying lane-changing front target vehicles within combined identification ranges formed by two adjacent front vehicle identification ranges; and identifying the rear target vehicle in the own lane within the rear vehicle identification range marked as the rear own lane, identifying rear target vehicles in non-own lanes within the rear vehicle identification ranges marked as rear non-own lanes, and identifying lane-changing rear target vehicles within combined identification ranges formed by two adjacent rear vehicle identification ranges.
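- A toy sketch of the own-lane marking rule, using the absolute slope of each lane line's initial straight segment and assuming, as for the ego lane, that the two steepest lines are adjacent (the function and its inputs are illustrative only):

```python
def label_lane_ranges(lane_line_slopes):
    """Given the slope of each lane line's initial straight segment (ordered left to
    right in the image), mark the range between the two steepest lines as the own lane."""
    # adjacent lane lines i and i+1 bound identification range i
    ranges = list(range(len(lane_line_slopes) - 1))
    steepest = sorted(range(len(lane_line_slopes)),
                      key=lambda i: abs(lane_line_slopes[i]), reverse=True)[:2]
    own = min(steepest)  # for the ego lane the two steepest lines are adjacent
    return {r: ("own lane" if r == own else "non-own lane") for r in ranges}

# Example: four lane lines bounding three ranges; the middle range is the own lane.
print(label_lane_ranges([-0.6, -2.5, 2.8, 0.7]))
```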
- specifically, the row and column coordinates of each pixel of the first image can be scaled in equal proportions to determine the row and column coordinates of at least one pixel in the second image; thus each edge pixel position of the forward road lane line acquired from the first image defines at least one pixel position in the second image, thereby obtaining the proportionally scaled forward road lane line in the second image.
- a front vehicle identification range is created for each adjacent two front road lane lines.
- the slope of the initial straight-line portion of each front road lane line can be obtained by comparing the number of rows it spans with the number of columns it spans.
- the front vehicle identification range created by the two front road lane lines whose initial straight lines have the largest slopes is marked as the own lane, and the other front vehicle identification ranges are marked as non-own lanes.
- the front target vehicle in the own lane can then be identified within the front vehicle identification range marked as the own lane, front target vehicles in non-own lanes within the ranges marked as non-own lanes, and lane-changing front target vehicles within the combined range formed by two adjacent front vehicle identification ranges.
- the target vehicle described below may be a front target vehicle or a rear target vehicle.
- the distance and position of a target vehicle relative to the TOF sensor change continuously over time, whereas the distance and position of the road surface and the median strip relative to the TOF sensor are approximately unchanged. Therefore, a time-differential depth image can be created from depth images acquired at two different times, and from it the position of the target vehicle in the depth image, the distance between the target vehicle and the subject vehicle, and the like can be identified.
- the depth sub-image formed by light reflected from the rear or front of the same target vehicle to the TOF sensor contains consistent distance information, so identifying the position of the depth sub-image formed by the target vehicle in the depth image yields the distance information of the target vehicle.
- the depth sub-image formed by light reflected from the road surface to the TOF sensor, by contrast, contains continuously varying distance information; a depth sub-image containing consistent distance information and one containing continuously varying distance information inevitably form an abrupt difference at the boundary between them, and these abrupt boundaries form the target boundary of the target vehicle in the depth image.
- edge detection methods commonly used in image processing, such as the Canny, Sobel, or Laplacian operators, may be employed to detect the target boundary of a target vehicle.
- the vehicle identification range is determined by all pixel positions of the lane lines, so detecting the target boundary of the target vehicle within the vehicle identification range reduces boundary interference from road facilities such as median strips, street light poles, and guard posts.
- the target boundaries detected within each vehicle identification range can be projected onto the row coordinate axis of the image and a one-dimensional search performed along that axis to determine the number of rows and the row coordinates of the longitudinal target boundaries of all target vehicles in the range, as well as the number of columns and the row coordinate positions of the lateral target boundaries. A longitudinal target boundary is one that occupies many rows but few columns of pixels, and a lateral target boundary is one that occupies few rows but many columns of pixels.
- detecting the target boundary of the target vehicle can uniquely determine the position of the depth sub-image formed by the target vehicle in the depth image, thereby uniquely determining the distance information of the target vehicle.
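- A simplified sketch of boundary detection restricted to one vehicle identification range, using Canny on a normalized depth image and 1-D projections of the resulting edge map (the thresholds are placeholders; lane_mask would be built from the lane line pixel positions mapped into the depth image):

```python
import numpy as np
import cv2

def vehicle_boundaries_in_range(depth, lane_mask):
    """Find abrupt depth changes inside one vehicle identification range.

    depth: 2-D array of distances from the TOF sensor.
    lane_mask: same-shaped 0/1 mask of the vehicle identification range.
    """
    depth_u8 = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(depth_u8, 30, 90)          # abrupt distance differences
    edges[lane_mask == 0] = 0                    # ignore edges outside the lane range
    col_profile = edges.sum(axis=0)              # tall (longitudinal) boundaries peak per column
    row_profile = edges.sum(axis=1)              # wide (lateral) boundaries peak per row
    return edges, row_profile, col_profile
```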
- the target vehicle can be identified by other means, which is not limited in this embodiment of the present invention, as long as the target vehicle can be identified.
- the front target vehicle area may be determined in the second image and the rear target vehicle area may be determined in the fourth image.
- the target vehicle area, that is, the area where the target vehicle is located in the second image or the fourth image, may be a closed area enclosed by the identified boundary of the target vehicle, a closed area enclosed by extensions of that boundary, a closed area enclosed by lines through a plurality of pixel positions of the target vehicle, and the like; the embodiment of the present invention does not limit the shape of the target vehicle area, as long as it contains the target vehicle.
- the row and column coordinates of each pixel of the front target vehicle area in the second image are scaled in equal proportions to determine the row and column coordinates of at least one pixel in the first image, so that the front light recognition area can be generated at the corresponding position in the first image.
- because the imaging of the lights of the front target vehicle is contained within the front target vehicle area, the turn signal of the front target vehicle can be identified within the front light recognition area generated in the first image; in the same way, the rear light recognition area can be generated in the third image, and the turn signal of the rear target vehicle can be identified within it.
- how the turn signal of the target vehicle is identified in the front light recognition area or the rear light recognition area is not limited.
- for example, the light recognition areas in a plurality of consecutively acquired first images or third images may be time-differentiated to create a time-differential sub-image corresponding to the target vehicle, and the turn signal of the target vehicle is then identified from that time-differential sub-image.
- the turn signal can be identified based on the color, blinking frequency, or blinking sequence of the lights in the front or rear light recognition area.
- specifically, a time-differential sub-image of the target vehicle is created by consecutively acquiring a plurality of color or brightness images at different times and time-differentiating the light recognition area of the target vehicle; the time-differential sub-image highlights the continuously blinking turn signal sub-image of the target vehicle.
- the time-differential sub-image can then be projected onto the column coordinate axis and a one-dimensional search performed to obtain the start and end column coordinate positions of the turn signal sub-image of the target vehicle; projecting those start and end column coordinate positions back into the time-differential sub-image, and using the color, blinking frequency, or blinking sequence of the turn signal, determines the row and column coordinate positions of the blinking turn signal sub-image, that is, the position information of the turn signal sub-image within the time-differential sub-image.
- if the row and column coordinate positions of the blinking turn signal sub-image lie only on the left side of the light recognition area of the target vehicle, it can be determined that the target vehicle has its left turn signal on; if they lie only on the right side, the target vehicle has its right turn signal on; and if blinking sub-images appear on both sides of the light recognition area, the target vehicle has its hazard (double-flash) lights on.
- for a target vehicle behind the subject vehicle, the imaged left and right turn signal positions are mirrored, but this left-right reversal can easily be accounted for with a simple adjustment to the identification.
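- A minimal sketch of the time-differential turn signal check on a stack of equally sized grayscale light recognition areas; it decides left, right, or hazard from where the blinking energy lies in the column profile, and ignores the blinking-frequency test and the front/rear mirroring discussed above (the one-third split and the threshold are arbitrary):

```python
import numpy as np

def classify_turn_signal(light_areas, threshold=30):
    """light_areas: list of same-sized grayscale crops from consecutive frames."""
    stack = np.stack([a.astype(np.int16) for a in light_areas])
    diff = np.abs(np.diff(stack, axis=0)).sum(axis=0)     # time-differential sub-image
    blink = diff > threshold
    col_profile = blink.sum(axis=0)                       # 1-D search along columns
    active = np.where(col_profile > 0)[0]
    if active.size == 0:
        return "no blinking light"
    width = blink.shape[1]
    left = active.min() < width / 3
    right = active.max() > 2 * width / 3
    if left and right:
        return "hazard lights"
    if left:
        return "left turn signal"
    if right:
        return "right turn signal"
    return "blinking light near the centre (undetermined)"
```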
- when the longitudinal or lateral displacement of the target vehicle is large, the size of its light recognition area changes greatly. In this case, the light recognition areas of the target vehicle consecutively acquired at a plurality of different times can be compensated for longitudinal or lateral displacement and scaled to a uniform size, and the adjusted light recognition areas are then time-differentiated to create the time-differential sub-image of the target vehicle. The time-differential sub-image is projected onto the column coordinate axis, a one-dimensional search is performed to obtain the start and end column coordinate positions of the turn signal sub-image of the target vehicle, those positions are projected back into the time-differential light recognition area, and the color, blinking frequency, or blinking sequence of the turn signal determines the row and column coordinates of the blinking turn signal sub-image, finally identifying a left turn signal, a right turn signal, or double-flash hazard lights.
- for example, in the time-differential sub-image corresponding to the light recognition area, a continuously blinking turn signal sub-image is highlighted; by recognizing its coordinates it is determined that the turn signal sub-image lies on the left side of the light recognition area and that its blinking frequency is 1 time per second, so it can be determined that the target vehicle currently has its left turn signal on.
- the target vehicle can be either the front target vehicle or the rear target vehicle
- in this way, the turn signals of the front and rear target vehicles can be better recognized, so that corresponding measures can be taken in advance according to the turning or lane-changing intention of the target vehicle, preventing safety accidents and improving the safety of the vehicle.
- the travel information of the target vehicle can be obtained.
- the travel information may further include information such as the travel speed of the target vehicle, the distance between the target vehicle and the subject vehicle, and the like.
- for example, the driving information of a front target vehicle may include that the front target vehicle is located in the lane of the subject vehicle, is traveling at a relative speed of -10 m/s with respect to the subject vehicle, and has its right turn signal on, that is, it is changing lanes to the right, and so on.
- the motion parameters of the subject vehicle may be controlled according to the driving information of the front target vehicle and/or the rear target vehicle.
- for example, when it is identified that a rear target vehicle in a non-own lane is changing lanes into the subject vehicle's own lane, the subject vehicle can be controlled to light its brake lights in advance, alerting the driver of the rear target vehicle to cancel the lane change or to decelerate while changing lanes, thereby reducing the risk of a rear-end collision between the subject vehicle and the rear target vehicle.
- as another example, when it is recognized that a front target vehicle is decelerating while changing lanes from the own lane into a non-own lane, the subject vehicle can be controlled to reduce unnecessary braking, thereby reducing the risk of a rear-end collision caused by unnecessary brake adjustments of the subject vehicle.
- based on the identified motion parameters of the target vehicle and the corresponding turn signal of the target vehicle, it may be recognized that a target vehicle is decelerating and changing lanes into the subject vehicle's own lane, so that the motion parameter control system and the safety system of the subject vehicle can be adjusted earlier, improving the driving safety of the subject vehicle and its occupants.
- in that situation, the lighting system of the subject vehicle can also be adjusted earlier to alert the rear target vehicle, providing it with more braking or adjustment time and more effectively reducing the risk of a rear-end collision, and the like.
- the present invention can identify and monitor the continuous process of a front own-lane target vehicle of the subject vehicle from switching on its turn signal to completing the lane change into a non-own lane.
- the front target vehicle in the own lane is identified based on the front vehicle identification range marked as the front own lane, the lane-changing front target vehicle is identified based on the combined front vehicle identification range formed by two adjacent ranges, and the turn signal of the corresponding target vehicle is identified based on the light recognition area. In this way, the continuous process of the front target vehicle from switching on its turn signal to completing the lane change into the non-own lane can be identified and monitored, and motion parameters of the target vehicle during the continuous lane change, such as its duration, its distance to the subject vehicle, its relative speed, and its lateral displacement, are also easily monitored, so that the motion parameters of the subject vehicle can be controlled based on the travel information of the target vehicle.
- for example, when the turn signal of the front target vehicle is first identified, the pixel distance from the left target boundary of the target vehicle to the left lane line of the front own lane is converted via the camera projection relationship into a lateral distance P. N pairs of first and second images are then acquired at successive times (the interval between acquisitions being T), during which the change in the distance R to the target vehicle is identified and recorded, and the relative speed V of the target vehicle can be calculated from the change of R over T. When it is identified that the target vehicle has just completed the lane change into the non-own lane on the right of the front own lane, its left target boundary coincides with the right lane line of the front own lane; the lane width is D. Therefore, during the continuous lane change the motion parameters of the front target vehicle are: duration N × T, distance to the subject vehicle R, relative speed V, and lateral displacement (D − P).
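- Restating that example as formulas, with ΔR denoting the total change of the recorded distance over the N sampling intervals:

```latex
\text{duration} = N \cdot T, \qquad
V \approx \frac{\Delta R}{N \cdot T}, \qquad
\text{lateral displacement} = D - P
```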
- the present invention can likewise identify and monitor the continuous process of a rear target vehicle of the subject vehicle from switching on its turn signal to completing the lane change into a non-own lane.
- the lateral displacement identified above is referenced to the left and right lane lines of the lane, so the lane change of the target vehicle can be accurately identified regardless of whether the road is straight or curved, thereby providing accurate input for adaptive cruise control.
- for example, the distance to the front target vehicle can be identified as RA, and the distance to a rear target vehicle in the non-own lane on the left of the subject vehicle can be identified as RB, with that rear target vehicle blinking its right turn signal to indicate a lane change intention.
- when RB is too small, the rear target vehicle changing lanes in behind the subject vehicle is prone to cause a rear-end collision; the brake lights of the subject vehicle can therefore be controlled to illuminate in advance, warning the driver of the rear target vehicle to cancel the lane change or to decelerate while changing lanes, thereby mitigating the risk of a rear-end collision between the subject vehicle and the rear target vehicle.
- a conventional vehicle that relies only on millimeter wave radar or laser radar can judge the lane change intention of a rear target vehicle only after the lateral displacement of the lane change has become sufficiently large, which increases the risk of a rear-end collision.
- in contrast, with the present invention the lateral displacement, relative to the lane lines, of a rear target vehicle that continues to force a lane change without decelerating can be accurately identified, and the cruise system of the subject vehicle can automatically be controlled to increase the vehicle speed slightly, appropriately reducing the following distance between the subject vehicle and the front target vehicle and increasing the distance between the subject vehicle and the rear target vehicle, thereby reducing the risk of a rear-end collision between the subject vehicle and the rear target vehicle.
- the lateral displacement of a target vehicle identified by a traditional vehicle adaptive cruise system relying solely on millimeter wave radar or lidar is, however, referenced to the subject vehicle itself rather than to the lane lines.
- for example, suppose a front target vehicle in the own lane completes a lane change to the right just ahead of a curve that bends to the left with a curvature radius of 250 meters; a conventional vehicle relying on millimeter wave radar or laser radar and still on the straight section may nevertheless recognize the front target vehicle as being partially in its own lane.
- suppose the front target vehicle travels 25 meters along the curve while completing the lane change, at which point the right lane line of the own lane coincides with the left target boundary of the front target vehicle; at 25 meters into the curve, the lane has already shifted about 1.25 meters to the left of the straight-line extension of the lane.
- suppose the millimeter wave radar or laser radar of the conventional vehicle recognizes target vehicles at distances of 50 to 80 meters, and the conventional vehicle is still on the straight section, 25 meters from the curve entrance and about 55 meters from the target vehicle. Lacking prior knowledge of the curve, its millimeter wave radar or laser radar will recognize the front target vehicle as still having about 1.25 meters of body width within the own lane, and this error persists as the target vehicle continues to decelerate along the left-bending curve.
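- The 1.25-meter figure is consistent with the usual lateral offset of a circular arc from its tangent; with arc length L = 25 m and radius R = 250 m:

```latex
d \;=\; R\left(1-\cos\tfrac{L}{R}\right)\;\approx\;\frac{L^{2}}{2R}\;=\;\frac{25^{2}}{2\times 250}\;=\;1.25\ \text{m}
```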
- as a result, the millimeter wave radar or laser radar of the conventional vehicle recognizes the target vehicle as still occupying a sizable body width within the own lane; this inaccurate identification causes the conventional adaptive cruise system to apply continued inaccurate and unnecessary braking, increasing the risk of a rear-end collision between the conventional vehicle and its rear target vehicle.
- the millimeter wave radar or laser radar of such a conventional vehicle is similarly inaccurate when the own-lane target vehicle completes a lane change to the left just ahead of a curve that bends to the right.
- by contrast, using the identified travel information of the target vehicle together with its turn signal, the present invention can identify the working condition in which the target vehicle decelerates and changes lanes into a non-own lane of the subject vehicle, so that the motion parameter control system of the subject vehicle can reduce unnecessary brake adjustments, thereby reducing the risk of rear-end collisions caused by unnecessary braking of the subject vehicle.
- the present invention can likewise identify and monitor the continuous process of a non-own-lane target vehicle from switching on its turn signal to completing the lane change into the own lane, and motion parameters of the target vehicle during the continuous lane change, such as its duration, distance to the subject vehicle, relative speed, and lateral displacement, are also easily monitored. These motion parameters can be used to control the motion parameters of the subject vehicle so that brake adjustments are made earlier, improving driving safety, and to control the lights earlier to warn rear target vehicles, reducing the risk of rear-end collisions.
- for example, suppose the subject vehicle travels at constant speed on the straight section of the own lane and is still 55 meters (or at least 25 meters) from the entrance of a curve that bends to the right with a curvature radius of 250 meters, while 25 meters beyond the curve entrance, in the non-own lane on the right of the own lane, there is a front target vehicle.
- the front target vehicle is blinking its left turn signal and changing lanes into the own lane, and its left target boundary has already coincided with the right lane line of the own lane.
- the present invention can accurately recognize that the front target vehicle is changing lanes into the own lane while the target vehicle is still about 80 meters (or at least 50 meters) from the subject vehicle, and can control the power system of the subject vehicle to accurately reduce power output or even brake, and to illuminate the brake lights in time, ensuring a safe distance between the subject vehicle and the front and rear target vehicles, thereby improving the driving safety of the subject vehicle and reducing the risk of rear-end collisions.
- in contrast, the lateral displacement of the target vehicle identified by a traditional adaptive cruise system relying solely on millimeter wave radar or lidar is referenced to the subject vehicle; in the absence of prior knowledge of the curve, it will identify the front target vehicle as still having a lateral distance of about 1.25 meters to the extension line of the right lane line of the own lane, that is, it erroneously concludes that the front target vehicle must shift about 1.25 meters further to the left before the millimeter wave radar or laser radar can confirm that the front target vehicle is beginning to enter the own lane.
- the traditional adaptive cruise system relying only on millimeter wave radar or laser radar will therefore only begin to reduce power output or even brake about 1.25 seconds after the front target vehicle has actually entered the own lane, which undoubtedly reduces the safety distance between the subject vehicle and the front and rear target vehicles, decreasing the driving safety of the subject vehicle and increasing the risk of a rear-end collision.
- with the present invention, the working condition in which a non-own-lane target vehicle decelerates and changes lanes into the subject vehicle's own lane can be identified, so that the motion parameter control system and the safety system of the subject vehicle can be adjusted earlier, improving the driving safety of the subject vehicle and its occupants.
- in addition, the lighting system of the subject vehicle can be adjusted earlier to alert the rear target vehicle, providing it with more braking or adjustment time and more effectively reducing the risk of a rear-end collision.
- an embodiment of the present invention provides a vehicle identification device 100.
- the device 100 may include an image acquisition module 101, a first identification module 102, a first mapping module 103, a second identification module 104, and a first acquisition module 105.
- the image acquisition module 101 is configured to acquire a first image and a second image located in front of the traveling direction of the subject vehicle, and to acquire a third image and a fourth image located behind the traveling direction of the subject vehicle, wherein the first image and the third image are color images or brightness images, and the second image and the fourth image are depth images.
- the first identification module 102 is configured to identify the front target vehicle in the second image, and identify the rear target vehicle in the fourth image.
- the first mapping module 103 is configured to map the front target vehicle area corresponding to the front target vehicle in the second image into the first image according to the mapping relationship between the first image and the second image, so as to generate the front light recognition area in the first image, and to map the rear target vehicle area corresponding to the rear target vehicle in the fourth image into the third image according to the mapping relationship between the third image and the fourth image, so as to generate the rear light recognition area in the third image.
- the second identification module 104 is configured to identify a turn signal of the front target vehicle in the front light recognition area, and identify a turn signal of the rear target vehicle in the rear light recognition area.
- the first acquisition module 105 is configured to obtain driving information of the front target vehicle and the rear target vehicle according to the identified turn signals of the front target vehicle and the rear target vehicle.
- the device 100 further includes:
- a third identification module configured to identify a forward road lane line according to the first image, and identify a rear road lane line according to the third image
- a second mapping module configured to map the forward road lane line into the second image according to the mapping relationship between the first image and the second image to determine at least one front vehicle identification range in the second image, and to map the rear road lane line into the fourth image according to the mapping relationship between the third image and the fourth image to determine at least one rear vehicle identification range in the fourth image, wherein every two adjacent road lane lines create a vehicle identification range;
- the first identification module 102 is further configured to: identify the front target vehicle in the at least one forward vehicle identification range, and identify the rear target vehicle in the at least one rear vehicle identification range.
- the device 100 further includes:
- a second acquisition module configured to acquire a slope of an initial straight line mapped to each of the front road lane lines in the second image, and acquire a slope of an initial straight line mapped to each of the rear road lane lines in the fourth image;
- the marking module is configured to mark the front vehicle identification range created by the two front road lane lines whose initial straight lines have the largest slopes as the front own lane, mark the remaining front vehicle identification ranges as front non-own lanes, mark the rear vehicle identification range created by the two rear road lane lines whose initial straight lines have the largest slopes as the rear own lane, and mark the remaining rear vehicle identification ranges as rear non-own lanes;
- the first identification module 102 is further configured to: identify the front target vehicle in the own lane within the front vehicle identification range marked as the front own lane, identify front target vehicles in non-own lanes within the front vehicle identification ranges marked as front non-own lanes, and identify lane-changing front target vehicles within combined identification ranges formed by two adjacent front vehicle identification ranges; and identify the rear target vehicle in the own lane within the rear vehicle identification range marked as the rear own lane, identify rear target vehicles in non-own lanes within the rear vehicle identification ranges marked as rear non-own lanes, and identify lane-changing rear target vehicles within combined identification ranges formed by two adjacent rear vehicle identification ranges.
- the third identification module is configured to: acquire, according to the first image, all edge pixel positions of each solid lane line included in the forward road lane line and all edge pixel positions of each dashed lane line included in the forward road lane line; and acquire, according to the third image, all edge pixel positions of each solid lane line included in the rear road lane line and all edge pixel positions of each dashed lane line included in the rear road lane line.
- the third identification module is configured to: create a binary image corresponding to the first image and detect all edge pixel positions of each solid lane line included in the forward road lane line in that binary image; and create a binary image corresponding to the third image and detect all edge pixel positions of each solid lane line included in the rear road lane line in that binary image.
- in a case where the first dashed lane line is any dashed road lane line included in the forward road lane line and the second dashed lane line is any dashed road lane line included in the rear road lane line, the third identification module is configured to: identify, according to the first image, a first solid lane line in the forward road lane line, the first solid lane line being any solid road lane line included in the forward road lane line; project, according to the initial straight-line position of the first dashed lane line, all edge pixel positions of the first solid lane line onto the edge pixel positions of the first dashed lane line to acquire all edge pixel positions of the first dashed lane line; and identify, according to the third image, a second solid lane line in the rear road lane line, the second solid lane line being any solid road lane line included in the rear road lane line, and project, according to the initial straight-line position of the second dashed lane line, all edge pixel positions of the second solid lane line onto the edge pixel positions of the second dashed lane line to acquire all edge pixel positions of the second dashed lane line.
- in a case where the first dashed lane line is any dashed road lane line included in the forward road lane line and the second dashed lane line is any dashed road lane line included in the rear road lane line, the third identification module is alternatively configured to: superimpose the binary images corresponding to a plurality of consecutively acquired first images so that the first dashed lane line is superimposed into a solid lane line, and acquire all edge pixel positions of the solid lane line superimposed from the first dashed lane line; and superimpose the binary images corresponding to a plurality of consecutively acquired third images so that the second dashed lane line is superimposed into a solid lane line, and acquire all edge pixel positions of the solid lane line superimposed from the second dashed lane line.
- the second identification module 104 is configured to: perform time-differential processing on a plurality of front lamp recognition areas in a plurality of successively acquired first images to create a time-differential sub-image corresponding to the front target vehicle, and identify the turn signal of the front target vehicle based on that time-differential sub-image; and perform time-differential processing on a plurality of rear lamp recognition areas in a plurality of successively acquired third images to create a time-differential sub-image corresponding to the rear target vehicle, and identify the turn signal of the rear target vehicle based on that time-differential sub-image.
- the device 100 further includes:
- a compensation module configured to perform longitudinal or lateral displacement compensation on some or all of the plurality of front lamp recognition areas to obtain a plurality of front lamp recognition areas of the same scale, and to perform longitudinal or lateral displacement compensation on some or all of the plurality of rear lamp recognition areas to obtain a plurality of rear lamp recognition areas of the same scale;
- a scaling module configured to scale some or all of the plurality of same-scale front lamp recognition areas to obtain a plurality of front lamp recognition areas of uniform size, and to scale some or all of the plurality of same-scale rear lamp recognition areas to obtain a plurality of rear lamp recognition areas of uniform size.
- the second identification module 104 is configured to: detect first position information of the turn-signal sub-image of the front target vehicle in the corresponding time-differential sub-image and identify the turn signal of the front target vehicle according to the first position information; and detect second position information of the turn-signal sub-image of the rear target vehicle in the corresponding time-differential sub-image and identify the turn signal of the rear target vehicle according to the second position information.
- the apparatus 100 further includes: a control module, configured to control motion parameters of the subject vehicle according to the driving information of the front target vehicle and/or the rear target vehicle.
- an embodiment of the present invention provides a vehicle 200, which may include the vehicle identification device 100 of FIG. 8.
- the disclosed apparatus and method may be implemented in other manners.
- the device embodiments described above are merely illustrative.
- the division of the modules or units is only a logical function division.
- in actual implementation there may be another division manner; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
- the functional modules in the various embodiments of the present application may be integrated into one processing unit, or each module may exist physically separately, or two or more modules may be integrated into one unit.
- the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
- the integrated unit, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer-readable storage medium.
- the computer-readable storage medium includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the various embodiments of the present application.
- the foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a ROM (Read-Only Memory), a RAM (Random Access Memory), a magnetic disk, or an optical disc.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
A vehicle identification method, a vehicle identification device and a vehicle. The identification method includes: acquiring a first image and a second image of the area ahead of a subject vehicle in its direction of travel, and acquiring a third image and a fourth image of the area behind the subject vehicle in its direction of travel (S11); identifying a front target vehicle in the second image, and identifying a rear target vehicle in the fourth image (S12); generating a front lamp recognition area in the first image, and generating a rear lamp recognition area in the third image (S13); identifying the turn signal of the front target vehicle in the front lamp recognition area, and identifying the turn signal of the rear target vehicle in the rear lamp recognition area (S14); and obtaining driving information of the front target vehicle and the rear target vehicle (S15). The method can reliably identify target vehicles about to change lanes ahead of and behind the subject vehicle, improving the degree of intelligence of the vehicle.
Description
本发明涉及车辆技术领域,具体涉及车辆识别方法、装置及车辆。
随着科学技术的不断发展,人们的出行也越来越便利,各种各样的汽车、电动车等已经成为人们生活中必不可少的交通工具。然而,这些交通工具虽然方便了人们的出行,但交通安全事故却频频发生,为了提高车辆的安全性,可以在车辆上安装测距传感器,进而感测车辆前方的多个目标车辆,以降低撞车事故的发生率。
目前,可以使用立体相机作为测距传感器,或者使用单个普通相机配合毫米波雷达或激光雷达作为测距传感器来增加测量的准确性。
然而,使用立体相机的测距算法较为复杂,这将可能导致计算机芯片功耗的增加,而使用单个普通相机配合毫米波雷达或激光雷达的方式需要较大的车内安装空间,且成本较高。再且,由于上述的测距传感器检测的主要是目标车辆距离的变化,而对于目标车辆距离没有变化或变化不大的情况下,比如,变道初期,等等,上述测距传感器可能无法检测,从而无法采取相应的措施。
发明内容
本发明的目的是提供一种车辆识别方法、装置及车辆,以使其能够较好地识别主体车辆前后方要变道的目标车辆,提高车辆的智能化程度。
根据本发明第一方面的实施例,提供了一种车辆识别方法,包括:
获取位于主体车辆行驶方向前方的第一图像和第二图像,以及获取位于所述主体车辆行驶方向后方的第三图像和第四图像,其中,所述第一图像和所述第三图像为彩色图像或亮度图像,所述第二图像和所述第四图像为深度图像;
在所述第二图像中识别前方目标车辆,及,在所述第四图像中识别后方目标车辆;
根据所述第一图像与所述第二图像之间的映射关系,将所述前方目标车辆在所述第二图像中对应的前方目标车辆区域映射至所述第一图像中,以在所述第一图像中生成前方车灯识别区域,及,根据所述第三图像与所述第四图像之间的映射关系,将所述后方目标车辆在所述第四图像中对应的后方目标车辆区域映射至所述第三图像中,以在所述第三图像中生成后方车灯识别区域;
在所述前方车灯识别区域中识别所述前方目标车辆的转向灯,及,在所述后方车灯识别区域中识别所述后方目标车辆的转向灯;
根据识别的所述前方目标车辆和所述后方目标车辆的转向灯,获得所述前方目标车辆和所述后方目标车辆的行驶信息。
根据本发明第二方面的实施例提供了一种车辆识别装置,包括:
图像获取模块,用于获取位于主体车辆行驶方向前方的第一图像和第二图像,以及获取位于所述主体车辆行驶方向后方的第三图像和第四图像,其中,所述第一图像和所述第三图像为彩色图像或亮度图像,所述第二图像和所述第四图像为深度图像;
第一识别模块,用于在所述第二图像中识别前方目标车辆,及,在所述第四图像中识别后方目标车辆;
第一映射模块,用于根据所述第一图像与所述第二图像之间的映射关系,将所述前方目标车辆在所述第二图像中对应的前方目标车辆区域映射至所述第一图像中,以在所述第一图像中生成前方车灯识别区域,及,根据所述第三图像与所述第四图像之间的映射关系,将所述后方目标车辆在所述第四图像中对应的后方目标车辆区域映射至所述第三图像中,以在所述第三图像中生成后方车灯识别区域;
第二识别模块,用于在所述前方车灯识别区域中识别所述前方目标车辆的转向灯,及,在所述后方车灯识别区域中识别所述后方目标车辆的转向灯;
第一获取模块,用于根据识别的所述前方目标车辆和所述后方目标车辆的转向灯,获得所述前方目标车辆和所述后方目标车辆的行驶信息。
根据本发明第三方面的实施例,提供了一种车辆,包括上述第二方面提供的车辆识别装置。
通过上述技术方案,由于彩色图像和深度图像可以由单个相机就能够获取,因此相对于立体相机、单个普通相机配合毫米波雷达、以及单个普通相机配合激光雷达等等方式而言,布置上更为简单,无需较多的容置空间,且计算方式简单,减少了对计算机芯片性能的需求。本发明实施例中,可以通过深度图像和彩色图像的配合就能够识别位于主体车辆行驶方向前方和后方的目标车辆的转向灯,因此无论是前方目标车辆还是后方目标车辆处于变道的初期,也能够获知目标车辆是否要变道以及如何变道,进而能够更为及时地提前采取相应的措施,提高了车辆的行驶安全性。
本发明的其他特征和优点将在随后的具体实施方式部分予以详细说明。
附图是用来提供对本发明的进一步理解,并且构成说明书的一部分,与下面的具体实施方式一起用于解释本发明,但并不构成对本发明的限制。在附图中:
图1是根据一示例性实施例示出的一种车辆识别方法的流程图。
图2是根据一示例性实施例示出的另一种车辆识别方法的流程图。
图3是根据一示例性实施例示出的目标车辆区域及车灯识别区域示意图。
图4是根据一示例性实施例示出的时间微分子图像示意图。
图5是根据一示例性实施例示出的识别目标车辆的示意图。
图6是根据一示例性实施例示出的识别目标车辆的示意图。
图7是根据一示例性实施例示出的识别目标车辆的示意图。
图8是根据一示例性实施例示出的一种车辆识别装置的框图。
图9是根据一示例性实施例示出的一种车辆的框图。
以下结合附图对本发明的具体实施方式进行详细说明。应当理解的是,此处所描述的具体实施方式仅用于说明和解释本发明,并不用于限制本发明。
图1是根据一示例性实施例示出的一种车辆识别方法的流程图,如图1所示,该车辆识别方法可以应用于本体车辆中,包括以下步骤。
步骤S11:获取位于主体车辆行驶方向前方的第一图像和第二图像,以及获取位于主体车辆行驶方向后方的第三图像和第四图像。
步骤S12:在第二图像中识别前方目标车辆,及,在第四图像中识别后方目标车辆。
步骤S13:根据第一图像与第二图像之间的映射关系,将前方目标车辆在第二图像中对应的前方目标车辆区域映射至第一图像中,以在第一图像中生成前方车灯识别区域,及,根据第三图像与第四图像之间的映射关系,将后方目标车辆在第四图像中对应的后方目标车辆区域映射至第三图像中,以在第三图像中生成后方车灯识别区域。
步骤S14:在前方车灯识别区域中识别前方目标车辆的转向灯,及,在后方车灯识别区域中识别后方目标车辆的转向灯。
步骤S15:根据识别的前方目标车辆和后方目标车辆的转向灯,获得前方目标车辆和后方目标车辆的行驶信息。
第一图像、第三图像可以是彩色图像或亮度图像,第二图像、第四图像可以是深度图像,第一图像与第二图像是获取的位于主体车型行驶方向前方的环境成像,可以是由设置在主体车辆的车头上的同一图像采集装置获取的,同样,第三图像与第四图像是获取的位于主体车型行驶方向后方的环境成像,可以是由设置在主体车辆的车尾上的同一图像采集装置获取的。例如,以采集第一图像和第二图像的图像采集装置为例,那么可以通过图像采集装置的图像传感器获取第一图像,通过图像采集装置的TOF(Time of flight,飞行时间)传感器获得第二图像。
本发明实施例中,彩色或亮度图像的像素和深度图像的像素的可以按一定的比例进行交织排列,对于比例究竟是多少,本发明实施例不作限定。例如,图像传感器和TOF传感器都可以使用互补金属氧化物半导体(CMOS)工艺进行制作,亮度像素和TOF像素可以按比例制作在同一基板之上,例如以8:1比例进行制作的8个亮度像素和1个TOF像素组成一个大的交织像素,其中1个TOF像素的感光面积可以等于8个亮度像素的感光面积,其中8个亮度像素可以按2行及4列的阵列形式排列。比如,可以在1英寸光学靶面的基板上制作360行及480列的活跃交织像素的阵列,其中,包括720行及1920列的活跃亮度像素阵列、360行及480列的活跃TOF像素阵列,由此图像传感器和TOF传感器组成的同一个图像采集装置可同时获取彩色或亮度图像和深度图像。
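For illustration, the interleaving described above implies a fixed coordinate mapping between the luminance/color image and the depth image. The sketch below assumes the example layout given in this paragraph (a 720×1920 active luminance array interleaved with a 360×480 active TOF array); the function names are illustrative and not part of the patent.

```python
# Coordinate mapping between the interleaved luminance image (720 x 1920)
# and the depth image (360 x 480): each TOF pixel covers a 2-row x 4-column
# block of luminance pixels, per the 8:1 example layout described above.
LUMA_ROWS, LUMA_COLS = 720, 1920
DEPTH_ROWS, DEPTH_COLS = 360, 480
ROW_RATIO = LUMA_ROWS // DEPTH_ROWS   # 2
COL_RATIO = LUMA_COLS // DEPTH_COLS   # 4

def luma_to_depth(row, col):
    """Depth-image coordinates of the TOF pixel covering a luminance pixel."""
    return row // ROW_RATIO, col // COL_RATIO

def depth_to_luma(row, col):
    """Top-left luminance pixel of the 2 x 4 block covered by a TOF pixel."""
    return row * ROW_RATIO, col * COL_RATIO

# Example: a lane-line edge pixel at (500, 960) in the luminance image maps
# to depth pixel (250, 240); the reverse mapping gives back (500, 960).
```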
可选的,请参见图2,图2为另一种车辆识别方法的流程图,该方法还可以包括步骤S16:根据第一图像识别前方公路车道线,以及根据第三图像识别后方公路车道线;
步骤S17:根据第一图像与第二图像之间的映射关系,将前方公路车道线映射至第二图像,以在第二图像中确定至少一个前方车辆识别范围,及,根据第三图像与第四图像之间的映射关系,将后方公路车道线映射至第四图像中,以在第四图像中确定至少一个后方车辆识别范围。那么步骤S12可以是在至少一个前方车辆识别范围中识别前方目标车辆,及,在至少一个后方车辆识别范围中识别后方目标车辆。其中,每两个相邻的公路车道线创建一个车辆识别范围。
其中,公路车道线是指在道路的路面上用线条、箭头、文字、立面标记、突起路标和轮廓标等向交通参与者传递引导、限制、警告等交通信息的标识。不同的线型对应不同的指示作用。以中国的公路车道划线为例,包括以下种类:1、白色虚线,画于路段中时,用以分隔同向行驶的交通流或作为行车安全距离识别线;画于路口时,用以引导车辆行进。2、白色实线,画于路段中时,用以分隔同向行驶的机动车和非机动车,或指示车行道的边缘;画于路口时,可用作导向车道线或停止线。3、黄色虚线,画于路段中时,用以分隔对向行驶的交通流;画于路侧或缘石上时,用以禁止车辆长时在路边停放。4、黄色实线,画于路段中时,用以分隔对向行驶的交通流;画于路侧或缘石上时,用以禁止车辆长时或临时在路边停放。5、双白虚线,画于路口时,作为减速让行线;画于路段中时,作为行车方向随时间改变之可变车道线。6、双黄实线,画于路段中时,用以分隔对向行驶的交通流。7、黄色虚实线,画于路段中时,用以分隔对向行驶的交通流;黄色实线一侧禁止车辆超车、跨越或回转,黄色虚线一侧在保证安全的情况下准许车辆超车、跨越或回转。8、双白实线,画于路口时,作为停车让行线。9、橙虚、实线,用于作业区标线。10、蓝虚、实线,作为非机动车专用道标线,划分停车位标线时,指示免费停车位。当然,识别车道线也可以根据目标车辆运行地理位置当地的车道划线规则进行调整识别方式。
识别公路车道线的位置只需要利用公路车道线与路面的亮度差异,因此获取前方公路车道线只需要第一图像的亮度信息即可。从而在第一图像为亮度图像时,可以直接根据第一图像的亮度信息识别前方公路车道线,在第一图像为彩色图像时,可以将第一图像转化成亮度图像之后再识别前方公路车道线。同理,可以以同样的方式通过第三图像识别后方公路车道线。
每两个相邻的公路车道线创建一个车辆识别范围,即,车辆识别范围对应于实际的车道,那么在前方车辆识别范围中识别前方目标车辆,以及在后方车辆识别范围中识别后方目标车辆,可以将识别目标车辆的范围确定到车道上,以确保识别的对象是车道上行驶的车辆,避免图像中的其他非车辆的对象所造成的干扰,提升识别目标车辆的准确性。
可选的,由于公路车道线既有实线车道线也有虚线车道线,那么识别公路车道线可以根据第一图像,获取前方公路车道线包括的每个实线车道线的全部边缘像素位置,及获取前方公路车道线包括的每个虚线车道线的全部边缘像素位置;以及,根据第三图像,获取后方公路车道线包括的每个实线车道线的全部边缘像素位置,及获取后方公路车道线包括的每个虚线车道线的全部边缘像素位置。这样才能完整地识别主体车辆行驶方向前方和后方的实线车道线以及虚线车道线,进而提升识别前方目标车辆和后方目标车辆的准确性。
可选的,获取前方公路车道线包括的每个实线车道线的全部边缘像素位置,可以创建与第一图像对应的二值图像,然后在对应于第一图像的二值图像中检测前方公路车道线包括的每个实线车道线的全部边缘像素位置。同样的,获取后方公路车道线包括的每个实线车道线的全部边缘像素位置,可以创建与第三图像对应的二值图像,然后在对应于第三图像的二值图像中检测后方公路车道线包括的每个实线车道线的全部边缘像素位置。
以下将以获取前方公路车道线包括的实线车道线为例进行说明。
对于如何创建与第一图像对应的二值图像,本发明实施例不作限定,以下对几种可能的方式进行举例说明。
例如,利用公路车道线与路面的亮度差异,可以通过查找得到某些亮度阈值,亮度阈值可以利用“直方图统计—双峰”算法来查找得到,并利用亮度阈值和亮度图像创建突出公路车道线的二值图像。
或者例如,还可以将亮度图像划分为多个亮度子图像,对每个亮度子图像执行“直方图统计—双峰”算法来查找得到多个亮度阈值,利用各个亮度阈值和相应的亮度子图像创建突出公路车道线的二值子图像,并利用二值子图像创建完整的突出公路车道线的二值图像,这样可以应对路面或车道线亮度变化的情况。
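A minimal sketch of the "histogram statistics – double peak" thresholding idea described in the two paragraphs above, written in Python with NumPy. The peak/valley search below is only one plausible reading of that idea (the patent does not fix a particular algorithm), and all names and the smoothing window are assumptions. For the tiled variant, the same function can simply be applied to each luminance sub-image and the resulting binary sub-images stitched back together.

```python
import numpy as np

def bimodal_threshold(gray):
    """Threshold at the valley between the two dominant peaks of the
    luminance histogram (road surface vs. bright lane markings)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    hist = np.convolve(hist, np.ones(5) / 5.0, mode="same")  # light smoothing
    p1 = int(np.argmax(hist))                    # first peak (road surface)
    weights = (np.arange(256) - p1) ** 2         # favour a distant second peak
    p2 = int(np.argmax(hist * weights))
    lo, hi = sorted((p1, p2))
    return lo + int(np.argmin(hist[lo:hi + 1]))  # valley between the peaks

def lane_binary_image(gray):
    """Binary image in which the road lane lines are highlighted."""
    return (gray > bimodal_threshold(gray)).astype(np.uint8) * 255
```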
在创建了与第一图像对应的二值图像之后,可以在二值图像中检测每个实线车道线的全部边缘像素位置,对于检测的方式,本发明实施例同样不作限定。
例如,由于公路车道线的曲率半径不可能太小,并且由于相机投影原理导致近处车道线相对远处车道线的成像像素更多,使得弯道的实线车道线在亮度图像中排列成直线的像素也占该实线车道线成像像素的大部分,因此可以使用类似Hough变换算法等直线检测算法在突出公路车道线的二值图像中检测出直道的实线车道线的全部边缘像素位置或检测出弯道的实线车道线的大部分初始直线边缘像素位置。
直线检测可能也将隔离带、电线杆在二值图像中的大部分直线边缘像素位置检出。那么例如可以根据图像传感器的长宽比例、相机镜头焦距、道路设计规范的道路宽度范围和图像传感器在主体车辆的安装位置等设置车道线在二值图像中的斜率范围,从而根据该斜率范围将非车道线的直线过滤排除。
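The straight-line detection and slope filtering in the two paragraphs above can be sketched as follows, assuming OpenCV is available. The Hough parameters and the slope limits are placeholders; as stated above, the real limits would be derived from the sensor aspect ratio, lens focal length, standard lane widths and the camera mounting position.

```python
import cv2
import numpy as np

def detect_initial_lane_segments(binary, min_abs_slope=0.3, max_abs_slope=10.0):
    """Detect the straight (or near-straight) parts of lane lines in the
    binary lane image and reject lines whose image slope is implausible for
    a lane marking (isolation barriers, utility poles, ...)."""
    segments = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180, threshold=50,
                               minLineLength=40, maxLineGap=20)
    kept = []
    if segments is None:
        return kept
    for x1, y1, x2, y2 in segments[:, 0]:
        if x1 == x2:                      # strictly vertical: not a lane line
            continue
        slope = (y2 - y1) / float(x2 - x1)
        if min_abs_slope <= abs(slope) <= max_abs_slope:
            kept.append((x1, y1, x2, y2))
    return kept
```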
由于弯道的实线车道线的边缘像素位置总是连续变化的,因此可以通过以下方式确定弯道实线车道线的全部边缘像素位置:查找上述检测的弯道车道线初始直线两端的边缘像素位置的连通像素位置,并将该连通像素位置并入该初始直线边缘像素集合,重复上述查找和并入该连通像素位置的过程,直至最后将弯道实线车道线的全部边缘像素位置唯一确定。
其中,边缘像素的连通像素,是指与边缘像素位置相邻且取值相近的像素。
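The curve-completion step above (growing outward from the two ends of the detected initial straight segment along connected pixels) can be sketched as a breadth-first search over the binary lane image; the 8-connectivity choice and the function names are assumptions.

```python
from collections import deque

def grow_lane_edge(binary, seed_pixels):
    """Absorb foreground pixels connected (8-neighbourhood) to the seed
    pixels - here, the edge pixels at both ends of the initial straight
    segment - until no more are found, recovering the curved part of a
    solid lane line. `binary` is a NumPy array of 0/255 values."""
    h, w = binary.shape
    collected = set(seed_pixels)
    queue = deque(seed_pixels)
    while queue:
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (0 <= nr < h and 0 <= nc < w and binary[nr, nc]
                        and (nr, nc) not in collected):
                    collected.add((nr, nc))
                    queue.append((nr, nc))
    return collected
```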
通过以上方式可以检测前方公路车道线包括的实线车道线的全部边缘像素位置。当然,可以通过上述同样的方式创建第三图像的二值图像,再检测后方公路车道线包括的每个实线车道线的全部边缘像素。这样便可以识别主体车辆行驶方向前方以及后方的实线车道线。
可选的,第一虚线车道线为前方公路车道线包括的任一虚线公路车道线,那么可以根据第一图像识别前方公路车道线中的第一实线车道线,然后根据第一虚线车道线的初始直线位置,将第一实线公路车道线的全部边缘像素位置投影到第一虚线车道线的边缘像素位置,进而获取第一虚线车道线的全部边缘像素位置。第一实线车道线为前方公路车道线包括的任一实线公路车道线。
本发明实施例中,可以根据实线车道线的先验知识、车道线现实中相互平行的原则、图像传感器及相机的投影参数,将第一实线车道线的全部边缘像素位置投影到第一虚线车道线的初始直线边缘像素位置以连接第一虚线车道线的初始直线边缘像素位置和属于第一虚线车道线的其他较短的车道线的边缘像素位置,从而获取虚线车道线的全部边缘像素位置。
同样的,第二虚线车道线为后方公路车道线包括的任一虚线公路车道线,那么可以根据第三图像识别后方公路车道线中的第二实线车道线,然后根据第一虚线车道线的初始直线位置,将第二实线公路车道线的全部边缘像素位置投影到第二虚线车道线的边缘像素位置,进而获取第二虚线车道线的全部边缘像素位置。第二实线车道线为后方公路车道线包括的任一实线公路车道线。这样便可以识别主体车辆行驶方向前方以及后方的虚线车道线。
可选的,第一虚线车道线为前方公路车道线包括的任一虚线公路车道线,那么可以将连续获取的多个第一图像分别对应的二值图像进行叠加,以将第一虚线车道线叠加成实线车道线,然后获取由第一虚线车道线叠加成的实线车道线的全部边缘像素位置。
本发明实施例中,可以无需得到直道或弯道的先验知识,由于车辆在直道巡航或恒定转向角弯道巡航的过程中,虚线车道线的横向偏移在较短的连续时间内几乎可以忽略,但纵向偏移却较大,因此虚线车道线在不同时刻的连续几幅突出公路车道线的二值图像中可以叠加成一条实线车道线,然后再通过上述实线车道线的识别方法即可获取该虚线车道线的全部边缘像素位置。
由于虚线车道线的纵向偏移量受到主体车辆车速的影响,因此在识别第一虚线车道线时,可以根据从轮速传感器获取的车速动态地确定不同时刻的连续的突出公路车道线的二值图像的最少幅数以将第一虚线车道线叠加成一条实线车道线,从而获取第一虚线车道线的全部边缘像素位置。
同样的,第二虚线车道线为后方公路车道线包括的任一虚线公路车道线,那么可以将连续获取的多个第三图像分别对应的二值图像进行叠加,以将第二虚线车道线叠加成实线车道线,然后获取由第二虚线车道线叠加成的实线车道线的全部边缘像素位置。这样便可以识别主体车辆行驶方向前方以及后方的虚线车道线。
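A minimal sketch of the superposition idea described in the last few paragraphs: OR together the binary lane images of several consecutive frames so that a dashed lane line accumulates into a solid one, with the number of frames derived from the wheel-speed reading. The 15 m dash-plus-gap cycle used below is an illustrative assumption, not a figure from the patent.

```python
import numpy as np

def frames_needed(speed_mps, frame_period_s, dash_cycle_m=15.0):
    """Minimum number of consecutive binary images so that the longitudinal
    shift between the first and last frame covers one full dash cycle."""
    if speed_mps <= 0:
        return 1
    return int(np.ceil(dash_cycle_m / (speed_mps * frame_period_s))) + 1

def superimpose(binary_frames):
    """Pixel-wise OR of consecutive binary lane images: the dashed lane line
    accumulates into a solid one and can then be handled by the solid-lane
    edge detection above."""
    acc = np.zeros_like(binary_frames[0])
    for frame in binary_frames:
        acc = np.maximum(acc, frame)
    return acc
```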
可选的,还可以获取映射至第二图像中的每个前方公路车道线的初始直线的斜率,以及获取映射至第四图像中的每个后方公路车道线的初始直线的斜率,那么可以将斜率最大的两条初始直线对应的前方公路车道线所创建的前方车辆识别范围标记为前方本车道,将其余的前方车辆识别范围标记为前方非本车道,及,将斜率最大的两条初始直线对应的后方公路车道线所创建的后方车辆识别范围标记为后方本车道,将其余的后方车辆识别范围标记为后方非本车道。那么步骤S12可以是在标记为前方本车道的前方车辆识别范围中识别本车道的前方目标车辆、在标记为前方非本车道的前方车辆识别范围中识别非本车道的前方目标车辆、及在相邻两个前方车辆识别范围组合成的前方车辆识别范围中识别变道的前方目标车辆,及,在标记为后方本车道的后方车辆识别范围中识别本车道的后方目标车辆、在标记为后方非本车道的后方车辆识别范围中识别非本车道的后方目标车辆、及在相邻两个后方车辆识别范围组合成的后方车辆识别范围中识别变道的后方目标车辆。
由于第一图像和第二图像之间的交织映射关系,第一图像的每个像素的行列坐标经过等比例的调整都可以在第二图像至少确定一个像素的行列坐标,因此根据第一图像获取的前方公路车道线的每个边缘像素位置都可以在第二图像至少确定一个像素位置,从而在第二图像中获取了等比例调整的前方公路车道线。在第二图像中,每相邻两个前方公路车道线创建一个前方车辆识别范围。
根据第二图像中获取的等比例的前方公路车道线,取每个前方公路车道线的初始直线部分所占的行数和列数相比得到该前方公路车道线的初始直线的斜率,对根据斜率最大的两条前方公路车道线的初始直线所在的前方公路车道线创建的车辆识别范围标记为本车道,对其他创建的前方车辆识别范围标记为非本车道。
标记车道之后,便可以在标记为本车道的前方车辆识别范围中识别本车道的前方目标车辆、在标记为非本车道的车辆识别范围中识别前方非本车道的目标车辆、及在相邻两个车辆识别范围组合成的车辆识别范围中识别变道的前方目标车辆。
标记后方车辆识别范围的方式与上述方式相同,在此不再赘述。
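The slope-based lane marking described above can be sketched as follows: the slope is simply rows spanned over columns spanned of each mapped lane line's initial straight part, and the two steepest lines bound the own lane. Function names are illustrative.

```python
def initial_slope(edge_pixels):
    """Slope of a lane line's initial straight part in the depth image,
    measured as rows spanned over columns spanned."""
    rows = [r for r, _ in edge_pixels]
    cols = [c for _, c in edge_pixels]
    col_span = max(cols) - min(cols)
    return float("inf") if col_span == 0 else (max(rows) - min(rows)) / col_span

def own_lane_boundaries(mapped_lane_lines):
    """The two lane lines with the largest initial-line slopes; the vehicle
    recognition range they enclose is marked as the own lane, all remaining
    ranges as non-own lanes."""
    return sorted(mapped_lane_lines, key=lambda px: abs(initial_slope(px)),
                  reverse=True)[:2]
```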
对于识别目标车辆的方式,本发明实施例不作限定,以下对几种可能的方式进行说明,下文所述的目标车辆即可以是前方目标车辆,也可以是后方目标车辆。
第一种方式:
由于目标车辆相对于TOF传感器的距离和位置随时间总是变化的,而路面、隔离带相对于TOF传感器的距离和位置随时间近似是不变化的。因此可以利用两幅不同时刻获取的深度图像创建时间微分深度图像,进而识别深度图像中目标车辆的位置,或者目标车辆与本体车辆之间的距离,等等。
第二种方式:
在深度图像中,由同一个目标车辆的背面或前面所反射的光,到TOF传感器所形成的深度子图像包含一致的距离信息,因此只要识别该目标车辆形成的深度子图像在深度图像中的位置即可获取该目标车辆的距离信息。
同一个目标车辆的背面或前面的光反射到TOF传感器形成的深度子图像是包含一致的距离信息,而路面的光反射到TOF传感器形成的深度子图像是包含连续变化的距离信息,因此包含一致的距离信息的深度子图像与包含连续变化的距离信息的深度子图像在两者的交界处必然形成突变差异,这些突变差异的交界形成了该目标车辆在深度图像中的目标边界。
例如,可以采用图像处理算法中的检测边界的Canny、Sobel、Laplace等多种边界检测方法以检测目标车辆的目标边界。
进一步地,车辆识别范围由车道线的全部像素位置所确定,因此在车辆识别范围内检测目标车辆的目标边界将减少隔离带、路灯杆、防护桩等道路设施形成的边界干扰。
在实际应用中,目标车辆可能有多个,因此,可以分别将每个车辆识别范围内检出的目标边界投影至图像的行坐标轴上,并在行坐标轴上进行一维查找,即可确定该车辆识别范围内所有目标车辆的纵向目标边界所占的行数和行坐标范围,以及确定横向目标边界的所占的列数和行坐标位置,纵向目标边界指占有像素行数多并且列数少的目标边界,横向目标边界指有占有像素行数少并且列数多的目标边界。根据该车辆识别范围内所有的横向目标边界所占的列数、行坐标位置,在该车辆识别范围内查找所有纵向目标边界的列坐标位置(也即相应横向目标边界的列坐标起始位置和终点位置),并根据目标边界包含一致的距离信息的原则区分不同目标车辆的目标边界,从而确定该车辆识别范围内所有目标车辆的位置和距离信息。
因此,检测获取目标车辆的目标边界即可唯一确定该目标车辆形成的深度子图像在深度图像中的位置,从而唯一确定该目标车辆的距离信息。
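As an illustration of this second approach, the sketch below looks for depth discontinuities inside one vehicle recognition range and projects them onto the row axis for the one-dimensional search described above. The Laplacian operator and the threshold value are assumptions; any of the boundary detectors mentioned earlier (Canny, Sobel, Laplace) could be substituted.

```python
import cv2
import numpy as np

def target_boundary_rows(depth, range_mask, grad_thresh=0.5):
    """Rows inside one vehicle recognition range that contain depth
    discontinuities (candidate target-vehicle boundaries).
    depth: depth image in metres; range_mask: binary mask of the lane's
    vehicle recognition range."""
    grad = cv2.Laplacian(depth.astype(np.float32), cv2.CV_32F)
    boundary = (np.abs(grad) > grad_thresh) & (range_mask > 0)
    row_profile = boundary.sum(axis=1)          # projection onto the row axis
    return np.where(row_profile > 0)[0], boundary
```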
当然,也可以通过其他的方式识别目标车辆,本发明实施例对此不作限定,只要能够识别目标车辆即可。
在识别了目标车辆之后,可以在第二图像中确定前方目标车辆区域,以及在第四图像中确定后方目标车辆区域。目标车辆区域也就是目标车辆在第二图像或第四图像中所在的区域,可以是识别出的目标车辆的边界围成的闭合区域,或者也可以是识别出的目标车辆的边界的延伸围成的闭合区域,或者还可以是目标车辆的若干像素位置连线围成的闭合区域,等等。本发明实施例对于目标车辆区域究竟是何种区域不作限定,只要是包含目标车辆的区域即可。
由于第一图像和第二图像之间的交织映射关系,第二图像中前方目标车辆区域的每个像素的行列坐标经过等比例的调整都可以在第一图像中至少确定一个像素的行列坐标。请参见图3,将第二图像中的前方目标车辆区域映射至第一图像中后,可以在第一图像的相应位置上生成前方车灯识别区域,由于前方目标车辆的车灯的成像包含在前方目标车辆区域中,因此可以在第一图像中生成的前方车灯识别区域中识别前方目标车辆的转向灯。同理,通过同样的方式可以在第三图像中生成后方车灯识别区域,并在后方车灯识别区域中识别后方目标车辆的转向灯。
可选的,对于在前方车灯识别区域或后方车灯识别区域中识别目标车辆的转向灯的方式,本发明实施例不作限定,可以对连续获取的多个第一图像或第三图像中的车灯识别区域进行时间微分处理,以创建对应于目标车辆的时间微分子图像,然后根据时间微分子图像,识别目标车辆的转向灯。
例如,可以根据前方或后方车灯识别区域中车灯的颜色、闪烁频率或闪烁序列以识别转向灯。
目标车辆变道的初期其纵向位移和横向位移都较小,意味着该目标车辆的车灯识别区域大小变化也较小,只有转向灯处成像的亮度因闪烁而变化较大。因此,通过连续获取多幅不同时刻的彩色或亮度图像并对其中该目标车辆的车灯识别区域进行时间微分处理以创建该目标车辆的时间微分子图像。
可选的,时间微分子图像将突出目标车辆的连续闪烁的转向灯子图像。然后可以将时间微子图像投影到列坐标轴,进行一维查找获取该目标车辆的转向灯子图像的起始和终点列坐标位置,将这些起始和终点列坐标位置投影至时间微分子图像并查找转向灯子图像的起始和终点行坐标位置,将转向灯子图像的起始和终点的行、列坐标位置投影至上述多幅不同时刻的彩色或亮度图像中以确认该目标车辆的转向灯的颜色、闪烁频率或闪烁序列,从而确定了闪烁的转向灯子图像的行、列坐标位置,即获取了转向灯子图像在时间微分子图像中的位置信息。
进一步地,闪烁的转向灯子图像的行、列坐标位置只在该目标车辆的车灯识别区域左侧时可以确定该目标车辆在打左转向灯,闪烁的转向灯子图像的行、列坐标位置只在该目标车辆的车灯识别区域右侧时可以确定该目标车辆在打右转向灯,闪烁的转向灯子图像的行、列坐标位置在该目标车辆的车灯识别区域两侧时可以确定该目标车辆在打双闪警示灯。当然,主体车辆后方的目标车辆成像的转向灯位置有可能是左右相反的,但左右相反的成像的转向灯位置可以经由简单的变换轻易调整和识别。
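A minimal sketch of the turn-signal decision described in the last few paragraphs: difference the lamp recognition regions of consecutive frames, project the strongly changing pixels onto the column axis, and decide left / right / hazard from where the blinking sub-image sits. The threshold, the one-third split and the function names are illustrative assumptions; for the rear view the left/right result may simply be mirrored, as noted above.

```python
import numpy as np

def turn_signal_side(lamp_regions, diff_thresh=30):
    """lamp_regions: equally sized luminance crops of one target vehicle's
    lamp recognition area at consecutive instants.
    Returns 'left', 'right', 'hazard' or None."""
    stack = np.stack([r.astype(np.int16) for r in lamp_regions])
    # time-differential sub-image: pixels whose brightness changes strongly
    # between frames (the blinking turn signal) stand out
    diff = np.abs(np.diff(stack, axis=0)).max(axis=0)
    active = diff > diff_thresh
    if not active.any():
        return None
    cols = np.where(active.sum(axis=0) > 0)[0]
    width = active.shape[1]
    on_left = cols.min() < width / 3.0
    on_right = cols.max() > 2.0 * width / 3.0
    if on_left and on_right:
        return "hazard"
    return "left" if on_left else ("right" if on_right else None)
```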
可选的,当目标车辆变道的过程中其纵向位移或横向位移较大导致该目标车辆的车灯识别区域大小变化也较大,这时可以对连续获取的多幅不同时刻的目标车辆的车灯识别区域进行纵向位移或横向位移补偿并缩放成大小一致的车灯识别区域,再对调整后的该目标车辆的车灯识别区域进行时间微分处理以创建该目标车辆的时间微分子图像,将时间微分子图像投影到列坐标轴,进行一维查找获取目标车辆的转向灯子图像的起始和终点列坐标位置,将这些起始和终点列坐标位置投影至时间微分车灯识别区域子图像并查找转向灯子图像的起始和终点行坐标位置,将转向灯子图像的起始和终点的行、列坐标位置投影至上述多幅不同时刻的彩色或亮度图像中以确认该目标车辆的转向灯的颜色、闪烁频率或闪烁序列,从而确定了闪烁的转向灯子图像的行、列坐标位置,最后完成左转向灯、右转向灯或双闪警示灯的识别。
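When the region size changes noticeably during the lane change, the compensation and scaling step described above can be sketched as follows with OpenCV; the target size and the way the per-frame offsets are estimated are assumptions.

```python
import cv2
import numpy as np

def normalise_lamp_regions(regions, offsets, size=(96, 64)):
    """Undo each region's estimated (dx, dy) displacement relative to the
    first frame, then scale all regions to a common size so that they can be
    time-differenced as in the sketch above. size is (width, height)."""
    out = []
    for region, (dx, dy) in zip(regions, offsets):
        h, w = region.shape[:2]
        shift = np.float32([[1, 0, -dx], [0, 1, -dy]])
        aligned = cv2.warpAffine(region, shift, (w, h))
        out.append(cv2.resize(aligned, size))
    return out
```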
例如,如图4所示的对应于车灯识别区域的时间微分子图像,在该时间微分子图像中突出有连续闪烁的转向灯子图像,通过识别坐标,确定转向灯子图像位于车灯识别区域左方,闪烁频率为1次/秒,那么比如可以确定目标车辆当前在打左转向灯。
通过以上方式,由于目标车辆既可以是前方目标车辆也可以是后方目标车辆,因此可以较好地识别前方和后方目标车辆的转向灯,以便根据目标车辆的转向情况提前采取相应的措施,防止安全事故的发生,提升了车辆的安全性。
根据识别的目标车辆的转向灯,可以获得目标车辆的行驶信息,当然,行驶信息还可以包括目标车辆的行驶速度、目标车辆与主体车辆之间的距离等信息。例如,某前方目标车辆的行驶信息包括,前方目标车辆位于主体车辆所在的车道上,以相对于主体车辆-10米/秒的速度,亮着右转向灯,即将向右方变道,等等。
可选的,在车辆自适应巡航时,可以根据前方目标车辆和/或后方目标车辆的行驶信息,对主体车辆的运动参数进行控制。
例如,根据识别的目标车辆的行驶信息和相应识别目标车辆的转向灯,可以识别到后方非本车道目标车辆减速变道至主体车辆本车道的情况,那么可以控制主体车辆提前亮起刹车灯以警示该后方目标车辆驾驶员取消变道或减速变道,从而减缓了主体车辆与该后方目标车辆的追尾碰撞风险。
例如,根据识别的目标车辆的行驶信息和相应识别目标车辆的转向灯,可以识别到本车道前方目标车辆减速变道至非本车道的情况,那么可以控制主体车辆减少不必要的制动,从而减少了由于主体车辆的不必要的制动调整导致的追尾碰撞风险。
例如,根据识别的目标车辆的运动参数和相应识别目标车辆的转向灯,可以识别到非本车道前方目标车辆减速变道至主体车辆本车道的情况,使得主体车辆的运动参数控制系统和安全系统可以更早做出调整,提高了主体车辆及其乘员的行驶安全性。
例如,根据识别的目标车辆的运动参数和相应识别目标车辆的转向灯,可以识别到非本车道前方目标车辆减速变道至主体车辆本车道的情况,使得主体车辆的车灯系统可以更早做出调整以提醒后方目标车辆,为后方目标车辆提供了更多的制动或调整时间,更有效地减少了追尾碰撞风险,等等。
以下将示例,本发明识别和监控主体车辆的前方本车道目标车辆从打转向灯到完成变道至非本车道的连续过程。
根据标记前方本车道标签的车辆识别范围识别前方本车道目标车辆,根据两两组合的前方车辆识别范围识别变道的前方目标车辆,根据车灯识别区域识别相应目标车辆的转向灯。也即可以识别和监控前方本车道目标车辆从打转向灯到完成变道至非本车道的连续过程,而该目标车辆在该连续变道过程中的持续时间、相对主体车辆的距离、相对速度和横向位移等运动参数也容易被监控,从而根据该目标车辆的该行驶信息可以控制主体车辆的运动参数。
例如,识别到前方本车道目标车辆的右转向灯亮起时该目标车辆的左侧目标边界到前方本车道左侧车道线的像素距离经相机投影关系换算确定为横向距离P;经过连续获取N幅不同时刻的第一图像和第二图像(获取一幅第一图像或第二图像的时间为T),期间识别并记录该目标车辆的距离R的变化,并可以通过对该目标车辆的距离R相对T的变化计算该目标车辆的相对速度V;识别到该目标车辆刚好完成变道至前方本车道右侧的非本车道,此时该目标车辆的左侧目标边界到前方本车道右侧车道线重合;本车道宽度为D;因此,该前方目标车辆在该连续变道过程中的运动参数为持续时间N×T、相对主体车辆的距离为R、相对速度为V和横向位移为(D-P)。
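The motion parameters of the worked example above reduce to a few lines; the sketch below simply restates duration N·T, relative speed from the change of the distance R, and lateral displacement D − P, with illustrative names.

```python
def lane_change_motion(n_frames, frame_period_s, first_range_m, last_range_m,
                       lane_width_m, initial_gap_m):
    """Duration N*T, relative speed from the change of distance R over that
    duration, and lateral displacement D - P (lane width minus the initial
    gap between the target's near-side boundary and the lane line)."""
    duration_s = n_frames * frame_period_s
    relative_speed_mps = (last_range_m - first_range_m) / duration_s
    lateral_displacement_m = lane_width_m - initial_gap_m
    return duration_s, relative_speed_mps, lateral_displacement_m
```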
同理,本发明也可以识别和监控主体车辆的后方本车道目标车辆从打转向灯到完成变道至非本车道的连续过程。且上述识别的横向位移以本车道的左右车道线为参考,无论该目标车辆变道时处于直道或是弯道、无论目标车辆向左或向右变道都可以识别准确,从而为主体车辆自适应巡航系统提供准确的控制依据。
例如,请参见图5左边的示意图,根据本发明可以在通常的直道或弯道工况中控制主体车辆跟随前方本车道目标车辆匀速巡航,根据本发明可以识别该前方目标车辆的距离为RA,并同时识别到主体车辆后方左侧非本车道目标车辆的距离为RB并且识别其右转向灯正在闪烁的变道意图;当RB距离过小,该后方目标车辆变道至主体车辆后方容易发生追尾碰撞,但由于本发明识别该后方目标车辆打转向灯的初始变道意图,根据本发明可以控制主体车辆的刹车灯提前亮起以警示该后方目标车辆驾驶员取消变道或减速变道,从而减缓了主体车辆与该后方目标车辆的追尾碰撞风险。请参见图5右边的示意图,传统的仅依靠毫米波雷达或激光雷达的车辆识别后方变道目标车辆具有足够大的变道横向位移才能判断该后方目标车辆的变道意图,将导致追尾碰撞风险增大。可见,通过本发明实施例中的方式,可以准确识别该后方目标车辆继续强行变道且变道不减速产生的相对于本车道线的横向位移,根据本发明可以控制主体车辆的巡航系统自动地提高车速以适当地减少主体车辆与该前方目标车辆的跟随距离、增大主体车辆与该后方目标车辆的距离,从而减缓了主体车辆与该后方目标车辆的追尾碰撞风险。
传统的仅依靠毫米波雷达或激光雷达的车辆自适应巡航系统识别的目标车辆的横向位移是以主体车辆为参考的,以主体车辆为参考识别的目标车辆的横向位移有时将不能提供给车辆自适应巡航系统准确的运动控制依据。
例如,如图6所示,当本车道前方目标车辆从本车道完成向右变道正好处在向左弯的弯道时,位于直道上传统车辆的毫米波雷达或激光雷达仍可能识别该前方目标车辆部分处于本车道上,上述弯道曲率半径250米,上述前方目标车辆变道过程中在弯道上行驶了25米,与该前方目标车辆的左侧目标边界重合的本车道右侧车道线在弯道25米处已经相对该车道线的直道延长线向左偏移了1.25米。若此时上述传统车辆的毫米波雷达或激光雷达识别到该目标车辆的距离为50米至80米,即上述传统车辆的毫米波雷达或激光雷达位于直道上并且距离弯道入口仍有25米至55米的距离,上述传统车辆的毫米波雷达或激光雷达在缺乏弯道先验知识的情况下将识别到该前方目标车辆仍然约有1.25米宽度的车身在本车道上,并且随着该目标车辆继续沿着向左弯道减速行驶,上述传统车辆的毫米波雷达或激光雷达识别到该目标车辆有更大宽度的车身在本车道上,即上述传统车辆的毫米波雷达或激光雷达将产生了不准确的识别并将导致该传统车辆自适应巡航系统执行连续的不准确和不必要的制动,从而导致该传统车辆与其后方目标车辆的追尾碰撞风险增大。同理,上述传统车辆的毫米波雷达或激光雷达对上述本车道目标车辆在向右弯道上从本车道完成向左变道的识别也存在不准确性。
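The 1.25 m figure used in this example can be checked with the versine of the curve: for a 250 m radius and 25 m travelled along the curve, the lane line has moved R·(1 − cos(s/R)) away from its straight-line extension.

```python
import math

R = 250.0   # curve radius in metres
s = 25.0    # distance travelled along the curve in metres

offset = R * (1.0 - math.cos(s / R))  # lateral offset from the straight extension
print(round(offset, 2))               # -> 1.25
```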
而通过本发明实施例中的技术方案,可以识别的目标车辆的行驶信息和相应识别目标车辆的转向灯,可以识别到本车道目标车辆减速变道至主体车辆非本车道的工况,使得主体车辆的运动参数控制系统可以减少不必要的制动调整,从而减少了由于主体车辆的不必要的制动调整导致的追尾碰撞风险。
本发明也可以识别和监控非本车道目标车辆从打转向灯到完成变道至本车道的连续过程,而该目标车辆在该连续变道过程中的持续时间、相对主体车辆的距离、相对速度和横向位移等运动参数也容易被监控,从而该目标车辆的该运动参数可以用于控制主体车辆的运动参数以更早做出制动调整并提高行驶安全性、并更早地控制车灯警示后方目标车辆以减少追尾碰撞风险。
例如图7所示,主体车辆在本车道直道以定速模式行驶,并且距离弯道入口仍有55米(或至少25米)的距离,该弯道向右弯曲并且曲率半径为250米,在距离弯道入口前方25米本车道右侧有一辆非本车道前方目标车辆正在打左转向灯向本车道变道,并且该目标车辆的左侧目标边界已经与本车道的右侧车道线重合。根据上述示例,本发明将可以准确识别该前方目标车辆正在向本车道变道,由于该目标车辆距离主体车辆约80米(或至少50米),本发明可以控制主体车辆的动力系统准确地执行动力输出减小甚至制动的动作、及时亮起刹车灯,以保证主体车辆与前方、后方目标车辆的安全距离,从而提高了主体车辆的行驶安全性和减少了追尾碰撞风险。
然而,传统的仅依靠毫米波雷达或激光雷达的车辆自适应巡航系统识别的目标车辆的横向位移是以主体车辆为参考的,在缺乏弯道先验知识的情况下将识别该前方目标车辆距离本车道右侧车道线的延长线还约有1.25米的横向距离,即错误地识别该前方目标车辆需要继续向左横向位移约1.25米上述毫米波雷达或激光雷达才能确认该前方目标车辆开始进入本车道。若该前方目标车辆横向位移速度为1米每秒,则上述传统的仅依靠毫米波雷达或激光雷达的车辆自适应巡航系统将在该前方目标车辆实际进入本车道约1.25秒以后才能执行动力输出减小甚至制动的动作,这无疑减少了主体车辆与前方、后方目标车辆的安全距离,导致了主体车辆的行驶安全性下降和增加了追尾碰撞风险。
可见,通过本发明实施例中的技术方案,根据识别的目标车辆的行驶信息和相应识别目标车辆的转向灯,可以识别到非本车道目标车辆减速变道至主体车辆本车道的工况,使得主体车辆的运动参数控制系统和安全系统可以更早做出调整,提高了主体车辆及其乘员的行驶安全性。同时,使得主体车辆的车灯系统可以更早做出调整以提醒后方目标车辆,为后方目标车辆提供了更多的制动或调整时间,更有效地减少了追尾碰撞风险。
请参见图8,基于相似的发明构思,本发明实施例提供一种车辆识别装置100,装置100可以包括:图像获取模块101,第一识别模块102,第一映射模块103,第二识别模块104和第一获取模块105。
图像获取模块101,用于获取位于主体车辆行驶方向前方的第一图像和第二图像,以及获取位于主体车辆行驶方向后方的第三图像和第四图像,其中,第一图像和第三图像为彩色图像或亮度图像,第二图像和第四图像为深度图像。
第一识别模块102,用于在第二图像中识别前方目标车辆,及,在第四图像中识别后方目标车辆。
第一映射模块103,用于根据第一图像与第二图像之间的映射关系,将前方目标车辆在第二图像中对应的前方目标车辆区域映射至第一图像中,以在第一图像中生成前方车灯识别区域,及,根据第三图像与第四图像之间的映射关系,将后方目标车辆在第四图像中对应的后方目标车辆区域映射至第三图像中,以在第三图像中生成后方车灯识别区域。
第二识别模块104,用于在前方车灯识别区域中识别前方目标车辆的转向灯,及,在后方车灯识别区域中识别后方目标车辆的转向灯。
第一获取模块105,用于根据识别的前方目标车辆和后方目标车辆的转向灯,获得前方目标车辆和后方目标车辆的行驶信息。
可选的,装置100还包括:
第三识别模块,用于根据第一图像识别前方公路车道线,以及根据第三图像识别后方公路车道线;
第二映射模块,用于根据第一图像与第二图像之间的映射关系,将前方公路车道线映射至第二图像,以在第二图像中确定至少一个前方车辆识别范围,及,根据第三图像与第四图像之间的映射关系,将后方公路车道线映射至第四图像中,以在第四图像中确定至少一个后方车辆识别范围,其中,每两个相邻的公路车道线创建一个车辆识别范围;
第一识别模块102还用于:在至少一个前方车辆识别范围中识别前方目标车辆,及,在至少一个后方车辆识别范围中识别后方目标车辆。
可选的,装置100还包括:
第二获取模块,用于获取映射至第二图像中的每个前方公路车道线的初始直线的斜率,以及获取映射至第四图像中的每个后方公路车道线的初始直线的斜率;
标记模块,用于将斜率最大的两条初始直线对应的前方公路车道线所创建的前方车辆识别范围标记为前方本车道,将其余的前方车辆识别范围标记为前方非本车道,及,将斜率最大的两条初始直线对应的后方公路车道线所创建的后方车辆识别范围标记为后方本车道,将其余的后方车辆识别范围标记为后方非本车道;
第一识别模块102还用于:在标记为前方本车道的前方车辆识别范围中识别本车道的前方目标车辆、在标记为前方非本车道的前方车辆识别范围中识别非本车道的前方目标车辆、及在相邻两个前方车辆识别范围组合成的前方车辆识别范围中识别变道的前方目标车辆,及,在标记为后方本车道的后方车辆识别范围中识别本车道的后方目标车辆、在标记为后方非本车道的后方车辆识别范围中识别非本车道的后方目标车辆、及在相邻两个后方车辆识别范围组合成的后方车辆识别范围中识别变道的后方目标车辆。
可选的,第三识别模块用于:根据第一图像,获取前方公路车道线包括的每个实线车道线的全部边缘像素位置,及获取前方公路车道线包括的每个虚线车道线的全部边缘像素位置;以及,根据第三图像,获取后方公路车道线包括的每个实线车道线的全部边缘像素位置,及获取后方公路车道线包括的每个虚线车道线的全部边缘像素位置。
可选的,第三识别模块用于:创建与第一图像对应的二值图像;在对应于第一图像的二值图像中检测前方公路车道线包括的每个实线车道线的全部边缘像素位置;以及,创建与第三图像对应的二值图像;在对应于第三图像的二值图像中检测后方公路车道线包括的每个实线车道线的全部边缘像素位置。
可选的,第一虚线车道线为公路车道线包括的任一虚线车道线,第二虚线车道线为后方公路车道线包括的任一虚线公路车道线,第三识别模块用于:根据第一图像识别前方公路车道线中的第一实线车道线,其中,第一实线车道线为前方公路车道线包括的任一实线公路车道线;根据第一虚线车道线的初始直线位置,将第一实线公路车道线的全部边缘像素位置投影到第一虚线车道线的边缘像素位置,以获取第一虚线车道线的全部边缘像素位置;以及,根据第三图像识别后方公路车道线中的第二实线车道线,其中,第二实线车道线为后方公路车道线包括的任一实线公路车道线;根据第二虚线车道线的初始直线位置,将第二实线公路车道线的全部边缘像素位置投影到第二虚线车道线的初始直线的边缘像素位置,以获取第二虚线车道线的全部边缘像素位置。
可选的,第一虚线车道线为公路车道线包括的任一虚线车道线,第二虚线车道线为后方公路车道线包括的任一虚线公路车道线,第三识别模块用于:将连续获取的多个第一图像分别对应的二值图像进行叠加,以将第一虚线车道线叠加成实线车道线;获取由第一虚线车道线叠加成的实线车道线的全部边缘像素位置;以及,将连续获取的多个第三图像分别对应的二值图像进行叠加,以将第二虚线车道线叠加成实线车道线;获取由第二虚线车道线叠加成的实线车道线的全部边缘像素位置。
可选的,第二识别模块104用于:对连续获取的多个第一图像中的多个前方车灯识别区域进行时间微分处理,以创建对应于前方目标车辆的时间微分子图像;根据对应于前方目标车辆的时间微分子图像,识别前方目标车辆的转向灯;以及,对连续获取的多个第三图像中的多个后方车灯识别区域进行时间微分处理,以创建对应于后方目标车辆的时间微分子图像;根据对应于后方目标车辆的时间微分子图像,识别后方目标车辆的转向灯。
可选的,装置100还包括:
补偿模块,用于对多个前方车灯识别区域中的部分或全部前方车灯识别区域进行纵向位移补偿或横向位移补偿,以获取比例相同的多个前方车灯识别区域;及,对多个后方车灯识别区域中的部分或全部后方车灯识别区域进行纵向位移补偿或横向位移补偿,以获取比例相同的多个后方车灯识别区域;
缩放模块,用于将比例相同的多个前方车灯识别区域中的部分或全部前方车灯识别区域进行缩放,以获得大小一致的多个前方车灯识别区域;以及,将比例相同的多个后方车灯识别区域中的部分或全部后方车灯识别区域进行缩放,以获得大小一致的多个后方车灯识别区域。
可选的,第二识别模块104用于:检测前方目标车辆的转向灯子图像在时间微分子图像中的第一位置信息;根据第一位置信息识别前方目标车辆的转向灯;以及,检测后方目标车辆的转向灯子图像在时间微分子图像中的第二位置信息;根据第二位置信息识别后方目标车辆的转向灯。
可选的,装置100还包括:控制模块,用于根据前方目标车辆和/或后方目标车辆的行驶信息,对主体车辆的运动参数进行控制。
请参见图9,基于同一发明构思,本发明实施例提供一种车辆200,车辆200可以包括图8的车辆识别装置100。
在本发明所提供的实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。
在本申请各个实施例中的各功能模块可以集成在一个处理单元中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、ROM(Read-Only Memory,只读存储器)、RAM(Random Access Memory,随机存取存储器)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以对本发明的技术方案进行了详细介绍,但以上实施例的说明只是用于帮助理解本发明的方法及其核心思想,不应理解为对本发明的限制。本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到的变化或替换,都应涵盖在本发明的保护范围之内。
Claims (23)
- 一种车辆识别方法,其特征在于,包括:获取位于主体车辆行驶方向前方的第一图像和第二图像,以及获取位于所述主体车辆行驶方向后方的第三图像和第四图像,其中,所述第一图像和所述第三图像为彩色图像或亮度图像,所述第二图像和所述第四图像为深度图像;在所述第二图像中识别前方目标车辆,及,在所述第四图像中识别后方目标车辆;根据所述第一图像与所述第二图像之间的映射关系,将所述前方目标车辆在所述第二图像中对应的前方目标车辆区域映射至所述第一图像中,以在所述第一图像中生成前方车灯识别区域,及,根据所述第三图像与所述第四图像之间的映射关系,将所述后方目标车辆在所述第四图像中对应的后方目标车辆区域映射至所述第三图像中,以在所述第三图像中生成后方车灯识别区域;在所述前方车灯识别区域中识别所述前方目标车辆的转向灯,及,在所述后方车灯识别区域中识别所述后方目标车辆的转向灯;根据识别的所述前方目标车辆和所述后方目标车辆的转向灯,获得所述前方目标车辆和所述后方目标车辆的行驶信息。
- 根据权利要求1所述的方法,其特征在于,所述方法还包括:根据所述第一图像识别前方公路车道线,以及根据所述第三图像识别后方公路车道线;根据所述第一图像与所述第二图像之间的映射关系,将所述前方公路车道线映射至所述第二图像,以在所述第二图像中确定至少一个前方车辆识别范围,及,根据所述第三图像与所述第四图像之间的映射关系,将所述后方公路车道线映射至所述第四图像中,以在所述第四图像中确定至少一个后方车辆识别范围,其中,每两个相邻的公路车道线创建一个车辆识别范围;在所述第二图像中识别前方目标车辆,及,在所述第四图像中识别后方目标车辆,包括:在所述至少一个前方车辆识别范围中识别所述前方目标车辆,及,在所述至少一个后方车辆识别范围中识别所述后方目标车辆。
- 根据权利要求2所述的方法,其特征在于,所述方法还包括:获取映射至所述第二图像中的每个前方公路车道线的初始直线的斜率,以及获取映射至所述第四图像中的每个后方公路车道线的初始直线的斜率;将斜率最大的两条初始直线对应的前方公路车道线所创建的前方车辆识别范围标记为前方本车道,将其余的前方车辆识别范围标记为前方非本车道,及,将斜率最大的两条初始直线对应的后方公路车道线所创建的后方车辆识别范围标记为后方本车道,将 其余的后方车辆识别范围标记为后方非本车道;在所述至少一个前方车辆识别范围中识别所述前方目标车辆,及,在所述至少一个后方车辆识别范围中识别所述后方目标车辆,包括:在标记为前方本车道的前方车辆识别范围中识别本车道的前方目标车辆、在标记为前方非本车道的前方车辆识别范围中识别非本车道的前方目标车辆、及在相邻两个前方车辆识别范围组合成的前方车辆识别范围中识别变道的前方目标车辆,及,在标记为后方本车道的后方车辆识别范围中识别本车道的后方目标车辆、在标记为后方非本车道的后方车辆识别范围中识别非本车道的后方目标车辆、及在相邻两个后方车辆识别范围组合成的后方车辆识别范围中识别变道的后方目标车辆。
- 根据权利要求2或3所述的方法,其特征在于,根据所述第一图像识别前方公路车道线,以及根据所述第三图像识别后方公路车道线,包括:根据所述第一图像,获取所述前方公路车道线包括的每个实线车道线的全部边缘像素位置,及获取所述前方公路车道线包括的每个虚线车道线的全部边缘像素位置;以及,根据所述第三图像,获取所述后方公路车道线包括的每个实线车道线的全部边缘像素位置,及获取所述后方公路车道线包括的每个虚线车道线的全部边缘像素位置。
- 根据权利要求4所述的方法,其特征在于,获取所述前方公路车道线包括的每个实线车道线的全部边缘像素位置,包括:创建与所述第一图像对应的二值图像;在对应于所述第一图像的二值图像中检测所述前方公路车道线包括的每个实线车道线的全部边缘像素位置;获取所述后方公路车道线包括的每个实线车道线的全部边缘像素位置,包括:创建与所述第三图像对应的二值图像;在对应于所述第三图像的二值图像中检测所述后方公路车道线包括的每个实线车道线的全部边缘像素位置。
- 根据权利要求4或5所述的方法,其特征在于,第一虚线车道线为所述前方公路车道线包括的任一虚线公路车道线,获取所述第一虚线公路车道线的边缘像素位置,包括:根据所述第一图像识别所述前方公路车道线中的第一实线车道线,其中,所述第一实线车道线为所述前方公路车道线包括的任一实线公路车道线;将第一实线公路车道线的全部边缘像素位置根据所述第一虚线车道线的初始直线位置投影到所述第一虚线车道线的边缘像素位置,以获取所述第一虚线车道线的全部边缘像素位置;第二虚线车道线为所述后方公路车道线包括的任一虚线公路车道线,获取所述第二 虚线公路车道线的边缘像素位置,包括:根据所述第三图像识别所述后方公路车道线中的第二实线车道线,其中,所述第二实线车道线为所述后方公路车道线包括的任一实线公路车道线;将第二实线公路车道线的全部边缘像素位置根据所述第二虚线车道线的初始直线位置投影到所述第二虚线车道线的边缘像素位置,以获取所述第二虚线车道线的全部边缘像素位置。
- 根据权利要求4或5所述的方法,其特征在于,第一虚线车道线为所述前方公路车道线包括的任一虚线公路车道线,获取所述第一虚线公路车道线的边缘像素位置,包括:将连续获取的多个第一图像分别对应的二值图像进行叠加,以将所述第一虚线车道线叠加成实线车道线;获取由所述第一虚线车道线叠加成的实线车道线的全部边缘像素位置;第二虚线车道线为所述后方公路车道线包括的任一虚线公路车道线,获取所述第二虚线公路车道线的边缘像素位置,包括:将连续获取的多个第三图像分别对应的二值图像进行叠加,以将所述第二虚线车道线叠加成实线车道线;获取由所述第二虚线车道线叠加成的实线车道线的全部边缘像素位置。
- 根据权利要求1-7中任意一项所述的方法,其特征在于,在所述前方车灯识别区域中识别所述前方目标车辆的转向灯,包括:对连续获取的多个第一图像中的多个前方车灯识别区域进行时间微分处理,以创建对应于所述前方目标车辆的时间微分子图像;根据对应于所述前方目标车辆的时间微分子图像,识别所述前方目标车辆的转向灯;在所述后方车灯识别区域中识别所述后方目标车辆的转向灯,包括:对连续获取的多个第三图像中的多个后方车灯识别区域进行时间微分处理,以创建对应于所述后方目标车辆的时间微分子图像;根据对应于所述后方目标车辆的时间微分子图像,识别所述后方目标车辆的转向灯。
- 根据权利要求8所述的方法,其特征在于,所述方法还包括:对所述多个前方车灯识别区域中的部分或全部前方车灯识别区域进行纵向位移补偿或横向位移补偿,以获取比例相同的多个前方车灯识别区域;将比例相同的多个前方车灯识别区域中的部分或全部前方车灯识别区域进行缩放,以获得大小一致的多个前方车灯识别区域;以及,对所述多个后方车灯识别区域中的部分或全部后方车灯识别区域进行纵向位移补偿或横向位移补偿,以获取比例相同的多个后方车灯识别区域;将比例相同的多个后方车灯识别区域中的部分或全部后方车灯识别区域进行缩放,以获得大小一致的多个后方车灯识别区域。
- 根据权利要求8或9所述的方法,其特征在于,根据对应于所述前方目标车辆的时间微分子图像,识别所述前方目标车辆的转向灯,包括:检测所述前方目标车辆的转向灯子图像在所述时间微分子图像中的第一位置信息;根据所述第一位置信息识别所述前方目标车辆的转向灯;根据对应于所述后方目标车辆的时间微分子图像,识别所述后方目标车辆的转向灯,包括:检测所述后方目标车辆的转向灯子图像在所述时间微分子图像中的第二位置信息;根据所述第二位置信息识别所述后方目标车辆的转向灯。
- 根据权利要求1-10中任意一项所述的方法,其特征在于,所述方法还包括:根据所述前方目标车辆和/或所述后方目标车辆的行驶信息,对主体车辆的运动参数进行控制。
- 一种车辆识别装置,其特征在于,包括:图像获取模块,用于获取位于主体车辆行驶方向前方的第一图像和第二图像,以及获取位于所述主体车辆行驶方向后方的第三图像和第四图像,其中,所述第一图像和所述第三图像为彩色图像或亮度图像,所述第二图像和所述第四图像为深度图像;第一识别模块,用于在所述第二图像中识别前方目标车辆,及,在所述第四图像中识别后方目标车辆;第一映射模块,用于根据所述第一图像与所述第二图像之间的映射关系,将所述前方目标车辆在所述第二图像中对应的前方目标车辆区域映射至所述第一图像中,以在所述第一图像中生成前方车灯识别区域,及,根据所述第三图像与所述第四图像之间的映射关系,将所述后方目标车辆在所述第四图像中对应的后方目标车辆区域映射至所述第三图像中,以在所述第三图像中生成后方车灯识别区域;第二识别模块,用于在所述前方车灯识别区域中识别所述前方目标车辆的转向灯,及,在所述后方车灯识别区域中识别所述后方目标车辆的转向灯;第一获取模块,用于根据识别的所述前方目标车辆和所述后方目标车辆的转向灯,获得所述前方目标车辆和所述后方目标车辆的行驶信息。
- 根据权利要求12所述的装置,其特征在于,所述装置还包括:第三识别模块,用于根据所述第一图像识别前方公路车道线,以及根据所述第三图 像识别后方公路车道线;第二映射模块,用于根据所述第一图像与所述第二图像之间的映射关系,将所述前方公路车道线映射至所述第二图像,以在所述第二图像中确定至少一个前方车辆识别范围,及,根据所述第三图像与所述第四图像之间的映射关系,将所述后方公路车道线映射至所述第四图像中,以在所述第四图像中确定至少一个后方车辆识别范围,其中,每两个相邻的公路车道线创建一个车辆识别范围;所述第一识别模块还用于:在所述至少一个前方车辆识别范围中识别所述前方目标车辆,及,在所述至少一个后方车辆识别范围中识别所述后方目标车辆。
- 根据权利要求13所述的装置,其特征在于,所述装置还包括:第二获取模块,用于获取映射至所述第二图像中的每个前方公路车道线的初始直线的斜率,以及获取映射至所述第四图像中的每个后方公路车道线的初始直线的斜率;标记模块,用于将斜率最大的两条初始直线对应的前方公路车道线所创建的前方车辆识别范围标记为前方本车道,将其余的前方车辆识别范围标记为前方非本车道,及,将斜率最大的两条初始直线对应的后方公路车道线所创建的后方车辆识别范围标记为后方本车道,将其余的后方车辆识别范围标记为后方非本车道;所述第一识别模块还用于:在标记为前方本车道的前方车辆识别范围中识别本车道的前方目标车辆、在标记为前方非本车道的前方车辆识别范围中识别非本车道的前方目标车辆、及在相邻两个前方车辆识别范围组合成的前方车辆识别范围中识别变道的前方目标车辆,及,在标记为后方本车道的后方车辆识别范围中识别本车道的后方目标车辆、在标记为后方非本车道的后方车辆识别范围中识别非本车道的后方目标车辆、及在相邻两个后方车辆识别范围组合成的后方车辆识别范围中识别变道的后方目标车辆。
- 根据权利要求13或14所述的装置,其特征在于,所述第三识别模块用于:根据所述第一图像,获取所述前方公路车道线包括的每个实线车道线的全部边缘像素位置,及获取所述前方公路车道线包括的每个虚线车道线的全部边缘像素位置;以及,根据所述第三图像,获取所述后方公路车道线包括的每个实线车道线的全部边缘像素位置,及获取所述后方公路车道线包括的每个虚线车道线的全部边缘像素位置。
- 根据权利要求13-15中任意一项所述的装置,其特征在于,所述第三识别模块用于:创建与所述第一图像对应的二值图像;在对应于所述第一图像的二值图像中检测所述前方公路车道线包括的每个实线车道线的全部边缘像素位置;以及,创建与所述第三图像对应的二值图像;在对应于所述第三图像的二值图像中检测所述后方公路车道线包括的每个实线车道线的全部边缘像素位置。
- 根据权利要求15或16所述的装置,其特征在于,第一虚线车道线为所述公路车道线包括的任一虚线车道线,第二虚线车道线为所述后方公路车道线包括的任一虚线公路车道线,所述第三识别模块用于:根据所述第一图像识别所述前方公路车道线中的第一实线车道线,其中,所述第一实线车道线为所述前方公路车道线包括的任一实线公路车道线;根据第一虚线车道线的初始直线位置,将第一实线公路车道线的全部边缘像素位置投影到所述第一虚线车道线的边缘像素位置,以获取所述第一虚线车道线的全部边缘像素位置;以及,根据所述第三图像识别所述后方公路车道线中的第二实线车道线,其中,所述第二实线车道线为所述后方公路车道线包括的任一实线公路车道线;根据第二虚线车道线的初始直线位置,将第二实线公路车道线的全部边缘像素位置投影到所述第二虚线车道线的边缘像素位置,以获取所述第二虚线车道线的全部边缘像素位置。
- 根据权利要求15或16所述的装置,其特征在于,第一虚线车道线为所述公路车道线包括的任一虚线车道线,第二虚线车道线为所述后方公路车道线包括的任一虚线公路车道线,所述第三识别模块用于:将连续获取的多个第一图像分别对应的二值图像进行叠加,以将所述第一虚线车道线叠加成实线车道线;获取由所述第一虚线车道线叠加成的实线车道线的全部边缘像素位置;以及,将连续获取的多个第三图像分别对应的二值图像进行叠加,以将所述第二虚线车道线叠加成实线车道线;获取由所述第二虚线车道线叠加成的实线车道线的全部边缘像素位置。
- 根据权利要求12-18中任意一项所述的装置,其特征在于,所述第二识别模块用于:对连续获取的多个第一图像中的多个前方车灯识别区域进行时间微分处理,以创建对应于所述前方目标车辆的时间微分子图像;根据对应于所述前方目标车辆的时间微分子图像,识别所述前方目标车辆的转向灯;以及,对连续获取的多个第三图像中的多个后方车灯识别区域进行时间微分处理,以创建对应于所述后方目标车辆的时间微分子图像;根据对应于所述后方目标车辆的时间微分子图像,识别所述后方目标车辆的转向灯。
- 根据权利要求19所述的装置,其特征在于,所述装置还包括:补偿模块,用于对所述多个前方车灯识别区域中的部分或全部前方车灯识别区域进行纵向位移补偿或横向位移补偿,以获取比例相同的多个前方车灯识别区域;及,对所述多个后方车灯识别区域中的部分或全部后方车灯识别区域进行纵向位移补偿或横向位移补偿,以获取比例相同的多个后方车灯识别区域;缩放模块,用于将比例相同的多个前方车灯识别区域中的部分或全部前方车灯识别区域进行缩放,以获得大小一致的多个前方车灯识别区域;以及,将比例相同的多个后方车灯识别区域中的部分或全部后方车灯识别区域进行缩放,以获得大小一致的多个后方车灯识别区域。
- 根据权利要求19或20所述的装置,其特征在于,所述第二识别模块用于:检测所述前方目标车辆的转向灯子图像在所述对应于前方目标车辆的时间微分子图像中的第一位置信息;根据所述第一位置信息识别所述前方目标车辆的转向灯;以及,检测所述后方目标车辆的转向灯子图像在所述对应于后方目标车辆的时间微分子图像中的第二位置信息;根据所述第二位置信息识别所述后方目标车辆的转向灯。
- 根据权利要求12-21中任意一项所述的装置,其特征在于,所述装置还包括:控制模块,用于根据所述前方目标车辆和/或所述后方目标车辆的行驶信息,对所述主体车辆的运动参数进行控制。
- 一种车辆,其特征在于,包括如权利要求12-22中任一项所述的车辆识别装置。
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610872462.1 | 2016-09-30 | ||
CN201610872462.1A CN107886770B (zh) | 2016-09-30 | 2016-09-30 | 车辆识别方法、装置及车辆 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018059586A1 true WO2018059586A1 (zh) | 2018-04-05 |
Family
ID=61763172
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/104875 WO2018059586A1 (zh) | 2016-09-30 | 2017-09-30 | 车辆识别方法、装置及车辆 |
PCT/CN2017/104864 WO2018059585A1 (zh) | 2016-09-30 | 2017-09-30 | 车辆识别方法、装置及车辆 |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/104864 WO2018059585A1 (zh) | 2016-09-30 | 2017-09-30 | 车辆识别方法、装置及车辆 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107886770B (zh) |
WO (2) | WO2018059586A1 (zh) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111052201B (zh) * | 2017-09-01 | 2022-02-01 | 株式会社村上开明堂 | 碰撞预测装置、碰撞预测方法以及存储介质 |
EP3569460B1 (en) * | 2018-04-11 | 2024-03-20 | Hyundai Motor Company | Apparatus and method for controlling driving in vehicle |
CN110126729A (zh) * | 2019-05-30 | 2019-08-16 | 四川长虹电器股份有限公司 | 一种汽车后方来车辅助提醒方法及系统 |
CN111275981A (zh) * | 2020-01-21 | 2020-06-12 | 长安大学 | 一种高速公路车辆开启制动灯和双闪灯的识别方法 |
CN111292556B (zh) * | 2020-01-22 | 2022-03-01 | 长安大学 | 一种基于路侧双闪灯识别的车辆预警系统及方法 |
CN113392679B (zh) * | 2020-03-13 | 2024-07-05 | 富士通株式会社 | 车辆转向信号的识别装置及方法、电子设备 |
CN111768651B (zh) * | 2020-05-11 | 2022-07-12 | 吉利汽车研究院(宁波)有限公司 | 一种预防车辆碰撞的预警方法及装置 |
CN112785850A (zh) * | 2020-12-29 | 2021-05-11 | 上海眼控科技股份有限公司 | 车辆变道未打灯的识别方法及装置 |
CN112949470A (zh) * | 2021-02-26 | 2021-06-11 | 上海商汤智能科技有限公司 | 车辆变道转向灯识别方法、装置、设备及存储介质 |
CN113611111B (zh) * | 2021-07-29 | 2023-09-08 | 郑州高识智能科技有限公司 | 一种基于车辆远光灯的车距计算方法 |
CN115082901B (zh) * | 2022-07-21 | 2023-01-17 | 天津所托瑞安汽车科技有限公司 | 基于算法融合的车辆汇入检测方法、装置及设备 |
CN115240426B (zh) * | 2022-07-26 | 2024-03-26 | 东软睿驰汽车技术(沈阳)有限公司 | 一种变道数据的自动定位方法、装置、设备及存储介质 |
CN115565371B (zh) * | 2022-09-21 | 2024-08-20 | 北京汇通天下物联科技有限公司 | 应急停车检测方法、装置、电子设备及可读存储介质 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101391589A (zh) * | 2008-10-30 | 2009-03-25 | 上海大学 | 车载智能报警方法和装置 |
JP2010262387A (ja) * | 2009-04-30 | 2010-11-18 | Fujitsu Ten Ltd | 車両検知装置および車両検知方法 |
CN102194328B (zh) * | 2010-03-02 | 2014-04-23 | 鸿富锦精密工业(深圳)有限公司 | 车辆管理系统、方法及具有该系统的车辆控制装置 |
US8686873B2 (en) * | 2011-02-28 | 2014-04-01 | Toyota Motor Engineering & Manufacturing North America, Inc. | Two-way video and 3D transmission between vehicles and system placed on roadside |
CN103208006B (zh) * | 2012-01-17 | 2016-07-06 | 株式会社理光 | 基于深度图像序列的对象运动模式识别方法和设备 |
CN103984950B (zh) * | 2014-04-22 | 2017-07-14 | 北京联合大学 | 一种适应白天检测的运动车辆刹车灯状态识别方法 |
CN104392629B (zh) * | 2014-11-07 | 2015-12-09 | 深圳市中天安驰有限责任公司 | 检测车距的方法和装置 |
CN105460009B (zh) * | 2015-11-30 | 2018-08-14 | 奇瑞汽车股份有限公司 | 汽车控制方法及装置 |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030085991A1 (en) * | 2001-11-08 | 2003-05-08 | Fuji Jukogyo Kabushiki Kaisha | Image processing apparatus and the method thereof |
CN105358398A (zh) * | 2013-07-01 | 2016-02-24 | 奥迪股份公司 | 用于在进行车道变更时运行机动车的方法和机动车 |
CN104554259A (zh) * | 2013-10-21 | 2015-04-29 | 财团法人车辆研究测试中心 | 主动式自动驾驶辅助系统与方法 |
CN104952254A (zh) * | 2014-03-31 | 2015-09-30 | 比亚迪股份有限公司 | 车辆识别方法、装置和车辆 |
CN105489062A (zh) * | 2015-12-29 | 2016-04-13 | 北京新能源汽车股份有限公司 | 车辆并线的提示方法和装置 |
CN105711586A (zh) * | 2016-01-22 | 2016-06-29 | 江苏大学 | 一种基于前向车辆驾驶人驾驶行为的前向避撞系统及避撞算法 |
CN105740834A (zh) * | 2016-02-05 | 2016-07-06 | 广西科技大学 | 夜视环境下对前方车辆的高精度检测方法 |
CN105946710A (zh) * | 2016-04-29 | 2016-09-21 | 孙继勇 | 行车辅助装置 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109299674A (zh) * | 2018-09-05 | 2019-02-01 | 重庆大学 | 一种基于车灯的隧道违章变道检测方法 |
CN109299674B (zh) * | 2018-09-05 | 2022-03-18 | 重庆大学 | 一种基于车灯的隧道违章变道检测方法 |
CN112889097A (zh) * | 2018-10-17 | 2021-06-01 | 戴姆勒股份公司 | 马路横穿通道可视化方法 |
Also Published As
Publication number | Publication date |
---|---|
CN107886770B (zh) | 2020-05-22 |
WO2018059585A1 (zh) | 2018-04-05 |
CN107886770A (zh) | 2018-04-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018059586A1 (zh) | 车辆识别方法、装置及车辆 | |
US7046822B1 (en) | Method of detecting objects within a wide range of a road vehicle | |
US9064418B2 (en) | Vehicle-mounted environment recognition apparatus and vehicle-mounted environment recognition system | |
CN111937002B (zh) | 障碍物检测装置、自动制动装置、障碍物检测方法以及自动制动方法 | |
JP7518893B2 (ja) | 緊急車両の検出 | |
JP6254084B2 (ja) | 画像処理装置 | |
CN108528433B (zh) | 车辆行驶自动控制方法和装置 | |
US20150029012A1 (en) | Vehicle rear left and right side warning apparatus, vehicle rear left and right side warning method, and three-dimensional object detecting device | |
JP2018092501A (ja) | 車載用画像処理装置 | |
WO2017145605A1 (ja) | 画像処理装置、撮像装置、移動体機器制御システム、画像処理方法、及びプログラム | |
CN108528432B (zh) | 车辆行驶自动控制方法和装置 | |
CN107886030A (zh) | 车辆识别方法、装置及车辆 | |
CN107886729B (zh) | 车辆识别方法、装置及车辆 | |
JP5202741B2 (ja) | 分岐路進入判定装置 | |
US11256929B2 (en) | Image-based road cone recognition method and apparatus, storage medium, and vehicle | |
KR101721442B1 (ko) | 차량용 블랙박스 후방카메라를 이용한 측후방 충돌방지 시스템 및 충돌방지방법 | |
JP2015036842A (ja) | 走行可否判定装置 | |
CN107886036A (zh) | 车辆控制方法、装置及车辆 | |
CN108528450B (zh) | 车辆行驶自动控制方法和装置 | |
KR20160133386A (ko) | 차량용 블랙박스 후방카메라를 이용한 측후방 충돌방지 시스템의 충돌방지방법 | |
Balcerek et al. | Automatic recognition of image details using stereovision and 2D algorithms | |
JP2021075117A (ja) | 車両制御方法及び車両制御装置 | |
US20230303066A1 (en) | Driver assistance system and computer-readable recording medium | |
RU2779773C1 (ru) | Способ распознавания светофора и устройство распознавания светофора | |
WO2024209662A1 (ja) | 物体認識装置、物体認識処理方法及び記録媒体 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17855036; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 17855036; Country of ref document: EP; Kind code of ref document: A1 |