CN105981042A - Vehicle detection system and method thereof - Google Patents
- Publication number
- CN105981042A (application CN201580003808.8A)
- Authority
- CN
- China
- Legal status: Granted
Classifications
- G06V20/588: Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
- G06T5/70: Denoising; smoothing
- G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584: Recognition of vehicle lights or traffic lights
Abstract
The present invention describes a vehicle detection system and method for detecting one or more vehicles in a dynamically varying region of interest (ROI). The system comprises a scene recognition module (101), a road topology estimation module (102), and a vehicle detection module (103). The scene recognition module is configured to receive either a high exposure image or a low exposure image for identifying the condition of one or more scenes in the dynamically varying region of interest. The road topology estimation module is configured to receive either a high exposure image or a low exposure image for determining at least one of a curve, a slope, and a vanishing point of a road in the dynamically varying region of interest. The vehicle detection module is coupled with the scene recognition module and the road topology estimation module for detecting one or more vehicles on the road at night time.
Description
Technical Field
The present invention relates generally to vehicle detection, and more particularly to a vehicle detection system in low light conditions, such as nighttime.
Background
Advanced driver assistance solutions are gradually gaining market share. Forward collision warning is one such application: it warns the driver when the host vehicle is about to collide with a target vehicle ahead. The vision application detects the vehicle in front during day and night and generates a warning based on the calculated time to collision. Forward collision warning systems and other automotive vision applications employ different algorithms to detect vehicles under day and night conditions.
However, existing vehicle detection systems are inefficient, inconvenient, and expensive. There is a need for an efficient and economical vision system and method for nighttime vehicle detection: one that provides robust detection and eliminates false objects even under low light conditions and in various real-time scenarios.
Conventional vision processing systems do not cope with a wide range of visibility conditions, from rural (dark) to urban (bright) scenes. Furthermore, nighttime vehicle detection is particularly challenging for reasons including:
Vehicle lights vary widely in shape and in their position on the vehicle.
Vehicles carry different kinds of lights, such as brake lights and side lights.
Some vehicles have no lights switched on.
In urban conditions, vehicle lights are mixed with a large amount of ambient light.
Detection and range estimation of two-wheeled vehicles is difficult.
Accordingly, there is a need for a vehicle detection system that detects one or more vehicles on a nighttime roadway. There is a need for a robust system that can identify and eliminate false objects, including lights that closely resemble the shape of vehicle lights, such as street lights, traffic cones, and other miscellaneous light sources; thereby providing a high level of accuracy.
SUMMARY
One embodiment of the present invention describes a vehicle detection system. The vehicle detection system includes a scene identification module configured to receive one of a high-exposure image and a low-exposure image for identifying a condition of one or more scenes in a dynamically changing region of interest (ROI), a road topology estimation module configured to receive one of a high-exposure and a low-exposure image for determining at least one of a curve, a slope, and a vanishing point of a road in a dynamically changing region of interest (ROI), and a vehicle detection module in combination with the scene identification module and the road topology module to detect one or more vehicles on the road.
Another embodiment of the present invention describes a method of detecting one or more vehicles by a vehicle detection system. The method comprises the following steps: one of a high exposure image and a low exposure image is received by at least one of a scene identification module and a road topology estimation module, a condition of one or more image scenes in a dynamically changing region of interest (ROI) is identified by the scene identification module, at least one of a curve, a slope, and a vanishing point of a road in the dynamically changing region of interest (ROI) is determined, and the one or more images are processed to detect one or more vehicles in the dynamically changing region of interest (ROI).
Processing one or more images for detecting one or more vehicles in a dynamically changing region of interest (ROI) comprises the steps of: obtaining possible light sources by a segmentation module, eliminating noise and unwanted information by a filtering module, identifying one or more blobs in the filtered image, determining features of each identified blob, identifying one or more objects from the identified blobs in the dynamically changing ROI using at least one pairing logic, and confirming and verifying one or more pairs of identified blobs.
In one embodiment, one or more of the two or more pairs of identified blobs are verified by performing at least one of the following steps: eliminating, from two pairs of identified blobs sharing the same blob, the pair having the smaller width, where the column overlap of the two pairs is very high or very low; eliminating, from two pairs of identified blobs sharing the same blob, the pair having the larger width, where the column overlap of the two pairs is neither very high nor very low and the intermediate blobs are asymmetrically distributed; eliminating a pair of identified blobs having a smaller width and height than the other pair, where the two pairs have column overlap and no row overlap; eliminating a pair of identified blobs having a lower intensity and height and a wider width than the other pair; eliminating a pair of identified blobs having a lower intensity than the other pair, where the two pairs have the same width and height and very high column overlap; eliminating a pair of identified blobs having a greater width than the other pair, where the two pairs have column overlap and row overlap and are asymmetric; eliminating a pair of identified blobs having a smaller width than the other pair, where the two pairs have column overlap and are symmetric; eliminating a pair of identified blobs disposed within the other pair, where the two pairs have very little column overlap; and eliminating a pair of identified blobs disposed below the other pair, where the two pairs have very little column overlap.
Brief description of the drawings
The aspects and other features described above will be explained with reference to the drawings, in which:
FIG. 1 is a block diagram of a vehicle detection system according to one embodiment of the present invention.
FIG. 2 is a block diagram of a vehicle detection system according to one embodiment of the present invention.
FIG. 3 is an image of a vehicle captured in a dynamically changing ROI, shown in accordance with an exemplary embodiment of the present invention.
Fig. 4 is an illustration of input frames provided to a scene recognition module or road topology estimation module, according to one embodiment of the invention.
FIG. 5 is an acquired dynamically changing ROI image according to one embodiment of the invention.
FIG. 6 is a block diagram of a road topology estimation module according to one embodiment of the invention.
FIG. 7 depicts a dynamically changing ROI as a function of grade and curve changes for a road according to an embodiment of the present invention.
FIG. 8 depicts an acquired image for segmentation in accordance with an exemplary embodiment of the present invention.
Fig. 9 is an output image obtained after segmentation according to an exemplary embodiment of the present invention.
Fig. 10 is a 3 x 3 matrix for segmenting a color image, as described in accordance with an exemplary embodiment of the present invention.
FIG. 11 is a filtered output image of a segmented image according to an exemplary embodiment of the present invention.
FIG. 12 illustrates the separation of two merged blobs according to an exemplary embodiment of the invention.
FIG. 13 is an image in which each blob is assigned a different label, according to an exemplary embodiment of the present invention.
FIG. 14 is a flowchart illustrating a method for preparing a final blob list from dark and light frames, according to one embodiment of the invention.
FIG. 15 is a plurality of images in which blobs are each classified to identify headlamps, tail lamps or any other light according to an exemplary embodiment of the present invention.
FIG. 16 is an image showing blobs in the image each classified as merged blobs according to an exemplary embodiment of the invention.
FIG. 17 illustrates a method for identifying valid pairings based on pairing logic, according to an embodiment of the invention.
FIG. 18 illustrates a method for validating and verifying blobs according to one embodiment of the invention.
FIG. 19 is an image of blob pairs before and after validation and verification, according to an exemplary embodiment of the present invention.
FIG. 20 illustrates merged lights and/or blobs according to an exemplary embodiment of the invention.
FIG. 21 illustrates a method for identifying valid blobs in a dynamically changing ROI for detecting a two-wheeled vehicle, according to one embodiment of the present invention.
FIG. 22 illustrates the state transitions of the tracking module according to one embodiment of the present invention.
Fig. 23 is a diagram showing a specific example of estimating the distance between the detected vehicle and the host vehicle by the distance estimation module according to the present invention.
FIG. 24 illustrates estimated distances of detected vehicles in accordance with an exemplary embodiment of the present invention.
FIG. 25 is a flow chart illustrating a method for detecting one or more vehicles with a vehicle detection system in accordance with one embodiment of the present invention.
Detailed description of the invention
Embodiments of the present invention will be described in detail with reference to the accompanying drawings. Also, the present invention is not limited to the embodiments described in the present invention. The size, shape, location, number and combination of elements of the devices described herein are exemplary only and various modifications may be made by one skilled in the art without departing from the scope of the invention. Therefore, the embodiments of the present invention are only intended to explain the present invention more clearly to those skilled in the art to which the present invention pertains. In the drawings, like components are designated by like reference numerals.
Reference may be made in various places throughout this specification to "an", "one", or "some" embodiment(s). Each such reference does not necessarily refer to the same embodiment(s), nor does it imply that a feature applies to only a single embodiment. Individual features of different embodiments may also be combined to provide other embodiments.
As used in this application, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may be present. Further, "connected" or "coupled" as used in this application may include connected or coupled in real-time. As used herein, the term "and/or" includes any and all combinations and arrangements of one or more of the associated listed items.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
A vehicle detection system and a method of detecting vehicles under low light conditions at night are described. Vehicle detection is required by a variety of applications such as forward collision warning, dynamic high beam assist, and automatic high beam control. The system detects vehicles at night using vehicle lights, such as tail lights, as the main feature. It provides robust vehicle detection using a variety of classifiers and identifies and eliminates false objects.
FIG. 1 is a block diagram of a vehicle detection system according to one embodiment of the present invention. The vehicle detection system 100 includes a scene recognition module 101, a road topology estimation module 102, and a vehicle detection module 103. The scene recognition module 101 is configured to receive one of a high exposure image and a low exposure image to identify the condition of one or more scenes/images in a dynamically changing region of interest (ROI). The scene recognition module determines saturated pixels, intensities, and region variations in the dynamically varying ROI. The road topology estimation module 102 is configured to receive either a high exposure image or a low exposure image to determine one or more road features, such as the curve, slope, and vanishing point of the road, in the dynamically changing ROI. In one embodiment, the road topology estimation module is coupled to the scene recognition module for receiving the condition of one or more scenes/images in the dynamically changing ROI. The vehicle detection module 103 is connected to the scene recognition module 101 and the road topology estimation module 102 and is configured to detect one or more vehicles on the road at night, i.e. a preceding vehicle or an oncoming vehicle in front of the host vehicle. The system works under nighttime conditions on highways and urban roads and uses vehicle lights as the main feature for vehicle detection and classification.
The image acquisition unit acquires images in the dynamically changing region of interest and provides them (e.g., high and low exposure channels with different gains/exposures) to the system 100. The image acquisition unit may be a vehicle camera. The input image is converted to a grayscale image and a color image, and these are provided to the system 100. The system 100 operates in a region of interest (ROI). The scene recognition module 101 determines scene conditions such as day versus night, dark versus bright nights, fog, and rain. The road topology estimation module 102 determines road scenarios, such as curves, gradients, and the like. In one embodiment, the dynamic ROI is an ROI that varies with the curve and slope changes of the road. The road topology estimation module 102 and the scene recognition module 101 receive and process alternating sets of high exposure/gain frames and low exposure/gain frames. The system 100 is connected to an electronic control unit 104 that receives the output signals produced by the vehicle detection module 103. The electronic control unit 104 processes the received signals to display information to, or alert, the driver of the vehicle.
In one embodiment, the road topology estimation module 102 calculates the dynamically changing ROI based on the following parameters:
Saturated pixels
Brightness
Region variation
Color
In one embodiment, if the above parameters are high throughout the entire ROI, the scene/image in the dynamically changing region is classified as bright.
FIG. 2 is a block diagram of a vehicle detection system according to one embodiment of the present invention. The vehicle detection module 103 includes an image segmentation module 201, a filtering module 202, a blob identification module 203, an object identification module 204, a pairing validation and verification module 205, a tracking module 207, and a distance estimation module 208. In addition, the vehicle detection module 103 includes a two-wheel vehicle identification module 206.
The image segmentation module 201 is configured for receiving input data from the scene recognition module 101 or the road topology estimation module 102 to provide binary image data. The binary image data includes tail lights, head lights, noise, and unnecessary information.
The filtering module 202 is coupled to the image segmentation module 201 and is configured for removing noise and unwanted information, such as very small objects, false positives and similar other information.
The blob identifying module 203 is coupled to the filtering module 202 and is configured to identify one or more blobs in the filtered image and then determine the characteristics of each identified blob. The blob identifying module 203 is configured to perform steps including assigning a unique tag to each of the one or more blobs, determining features of each of the one or more tagged blobs, the features including origin, width, height, box area, pixel area, number of red pixels, aspect ratio, and blob outline of the blob, determining one or more fusions of the one or more tagged blobs in a dark or light frame based on the determined features, and classifying the one or more tagged blobs within at least one classification of headlights, taillights, merged lights, and invalid lights.
The object identification module 204 is coupled to the blob identification module 203 and is configured to identify objects based on one or more pairing logics. The object identification module 204 is configured to perform at least one of the following steps including determining a horizontal overlap of one or more blobs, determining an aspect ratio of the one or more blobs, determining a pixel area ratio of the one or more blobs, determining a width ratio of the one or more blobs, and determining a pixel box area ratio of the one or more blobs.
The pairing validation and verification module 205 is coupled to the object identification module 204 and is configured to validate and verify one or more identified blob pairs. The pairing validation and verification module 205 is configured to perform one or more steps including verifying a single pair of identified blobs, verifying one or more identified blobs among a row of lights, verifying one or more identified blobs between two or more pairs of identified blobs, and confirming merged lights by identifying one or more merged blobs. A single pair of identified blobs is verified by performing steps including determining the width and aspect ratio of the pair, determining the number of unpaired blobs within the pair, and determining the area of paired and unpaired blobs as a percentage of the pair width.
In one embodiment, verifying one or more identified blobs among a row of lights in the ROI is accomplished by applying a row-line matching algorithm to all lights in the row.
The tracking module 207 is coupled to the pairing validation and verification module 205 and is configured to track one or more validated and verified blob pairs in one or more stages. The one or more stages include an idle stage, a pre-tracking stage, a tracking stage, and a cancel-tracking stage.
The two-wheeler identification module 206 is configured to determine that an identified object in the ROI is a two-wheeler based on information about one or more blobs received from the blob identification module 203 and the pairing validation and verification module 205. The information about the one or more blobs includes blobs classified as headlamps or tail lamps in the blob identification module and blobs that pass a near-rider shape contour classifier. In one embodiment, the blob identification module 203 identifies a single blob in the region of interest (ROI) to determine a two-wheel vehicle.
The tracking module 207 is also coupled to the two-wheeler identification module 206 and tracks the validated and verified blob pairs or single blobs in one or more of the stages above.
The distance estimation module 208 is configured to calculate the distance between one or more detected vehicles and the host vehicle as the ratio of the actual width of the detected vehicle multiplied by the focal length of the lens, to the width of the detected vehicle in the image multiplied by a coefficient converting camera pixels to meters.
FIG. 3 is an image of a vehicle in a dynamically changing region of interest captured in accordance with an exemplary embodiment of the present invention. The system utilizes vehicle lighting as a primary feature for vehicle detection and classification. The image pickup unit is attached to a predetermined position of the vehicle, picks up an image and provides the picked-up image to the system 100. The system 100 employs a high exposure image (as shown in fig. 3 a) and a low exposure image (as shown in fig. 3 b). The image acquisition unit provides the acquired image as an input frame, which includes a high exposure/gain and a low exposure/gain. As shown in fig. 4, the input frame is provided to a scene recognition module or a road topology estimation module.
FIG. 5 is an illustration of a captured image, according to an exemplary embodiment. The image acquisition unit acquires an image and provides it to the scene recognition module. The scene recognition module 101 processes the received image and classifies the image/scene as bright/dark/fog/rain based on parameters such as saturated pixels, brightness, color, and region variation across the ROI. The classification is accumulated over time with hysteresis to determine any change in classification.
FIG. 6 is a block diagram of the road topology estimation module 102 according to one embodiment of the invention. In this embodiment, the road topology estimation module 102 calculates the predicted vanishing points of the determined regions of interest. A ROI (region of interest) in one image is the region where the user is looking for a potential vehicle-a forward road scene without sky regions. The road topology estimation module 102 receives inputs such as an offset estimation, a vanishing point estimation, a pitch estimation, scene recognition and vehicle external parameters to determine/estimate road vanishing points.
The dynamic ROI is an ROI that varies with the gradient and curve change with respect to the road, as shown in fig. 7(a) and (b).
For curve estimation, the road topology estimation module 102 uses the yaw rate and speed of the host vehicle to estimate the curvature of the road ahead.
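The patent does not give an explicit formula for this estimate; a minimal sketch, assuming the standard kinematic relation between yaw rate and speed (function and parameter names are illustrative):

```python
def estimate_curvature(yaw_rate_rad_s: float, speed_m_s: float) -> float:
    """Approximate curvature (1/m) of the road ahead of the host vehicle.

    Assumes the host vehicle follows the lane, so curvature = yaw_rate / speed.
    A small speed guard avoids division by zero near standstill.
    """
    if speed_m_s < 0.5:
        return 0.0
    return yaw_rate_rad_s / speed_m_s
```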
For slope estimation, the vehicle detection system uses the following cues to determine the slope of the road ahead:
Matching/registration based on tracking to determine the offset between successive frames
Input from LDWS (Lane departure Warning System)
Pitch estimation from the feature tracking module.
Scene recognition output, such as day or night
External parameters of the vehicle, e.g. yaw rate, speed, etc
The advantages of using a dynamic ROI are as follows:
vehicles can also be detected on curved roads
Reduction of false positives
Unnecessary processing can be avoided, thereby improving system performance
Image segmentation
FIG. 8 illustrates an image acquired for segmentation in accordance with an exemplary embodiment of the present invention. In this embodiment, the lights in the input low exposure/gain image and high exposure/gain image are segmented using a sliding window with a dual threshold. The threshold of a one-dimensional fixed/variable-length local window is calculated from the mean of the window pixel values, a predefined minimum value, and a predefined maximum value. The predefined minimum value may be adjusted according to the brightness of the image: for brighter scenes the minimum threshold is increased further, whereas for darker scenes the threshold is shifted to a predefined value. Either a fixed window or a variable-size window is used to calculate the threshold over the ROI. The pixel values of the image are then binarized against the threshold. For example, seven segmented images are formed by seven different thresholds calculated from different predefined minimum and maximum values. The output of the segmentation is a binary image, as shown in FIG. 9.
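A simplified sketch of the window-based thresholding described above, assuming a one-dimensional window slid along each image row; the window size and the predefined minimum/maximum are illustrative values, not figures from the patent:

```python
import numpy as np

def segment_light_regions(gray: np.ndarray, win: int = 32,
                          t_min: float = 60.0, t_max: float = 220.0) -> np.ndarray:
    """Binary segmentation with a 1-D sliding local window per row.

    The threshold of each window is its mean pixel value clamped to
    [t_min, t_max]; pixels brighter than the threshold are kept as light.
    """
    height, width = gray.shape
    out = np.zeros_like(gray, dtype=np.uint8)
    for row in range(height):
        for x0 in range(0, width, win):
            window = gray[row, x0:x0 + win]
            threshold = np.clip(window.mean(), t_min, t_max)
            out[row, x0:x0 + win] = (window > threshold).astype(np.uint8)
    return out
```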
For the segmentation of a color image, the color image and the segmented input image (obtained using a grayscale image) are used as inputs. The size of the region is increased in the vicinity of the divided light region based on the color information of the color image. The determination of pixel values in the vicinity of the segmented light pixels is based on a threshold of the 8-neighborhood red hue, so that low light conditions or far regions will have increased taillight size. For example, as shown in fig. 10 for a 3 × 3 matrix, the values of the intermediate pixels are determined based on the segmented pixels (i.e., "1") and the color image. The color image should have a red hue for use in tail light conditions. The two-level adaptively segmented image and the color image are processed to obtain a final segmented image.
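A rough illustration of the red-hue region growing applied to tail lights, assuming an HSV hue channel in the OpenCV 0-179 range; the hue bounds and the single-pass neighbourhood test are assumptions:

```python
import numpy as np

def grow_red_regions(seg: np.ndarray, hue: np.ndarray,
                     red_low: int = 170, red_high: int = 10) -> np.ndarray:
    """Switch on non-segmented pixels that are red-hued and touch a segmented pixel.

    seg is the binary segmentation (0/1); hue is the hue channel (0-179).
    A pixel is added when at least one of its 8-neighbours is segmented
    and its own hue lies in the red band, enlarging distant tail lights.
    """
    is_red = (hue >= red_low) | (hue <= red_high)
    padded = np.pad(seg, 1)
    h, w = seg.shape
    neighbours = sum(padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    grown = seg.copy()
    grown[(seg == 0) & (neighbours > 0) & is_red] = 1
    return grown
```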
Filtering
The segmented binary image shown in FIG. 9 consists of tail lights, noise, and unwanted information. Filtering is used to remove the noise and unwanted information. Filtering may be accomplished using morphological operations or median filtering. Morphological operations such as erosion and dilation are used with a structuring element of size three, which eliminates blobs smaller than 3 x 3. The median filter is designed to eliminate blobs smaller than 2 x 3 and 3 x 2. Filtering is applied to the segmented image based on the scene: erosion for brighter scenes and median filtering for darker scenes. All the segmented images produced with the different thresholds are filtered according to the scene. The output of the filtering module is a filtered image, as shown in FIG. 11, in which the salient clusters of segmented pixels (blobs) remain.
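A sketch of the scene-dependent filtering step, using SciPy's morphology and median filters; the 3 x 3 structuring element follows the text, while the function name and the boolean scene flag are assumptions:

```python
import numpy as np
from scipy import ndimage

def filter_segmented(seg: np.ndarray, scene_is_bright: bool) -> np.ndarray:
    """Remove small noise blobs from the segmented binary image.

    Bright scene: 3x3 binary erosion, removing blobs smaller than 3x3.
    Dark scene:   3x3 median filter, removing blobs smaller than roughly
                  2x3 / 3x2 while keeping faint, distant tail lights.
    """
    if scene_is_bright:
        return ndimage.binary_erosion(seg, structure=np.ones((3, 3))).astype(np.uint8)
    return ndimage.median_filter(seg.astype(np.uint8), size=3)
```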
Separating merged blobs:
The system 100 also includes a sub-module for separating merged blobs in the filtered image, e.g. two tail/head lamps, a tail/head lamp and another light, or a tail/head lamp and its reflection. The system 100 applies a two-stage erosion with a 3 x 3 kernel to the segmented image to determine and separate two merged blobs. The filtered image is then subjected to the following rules to separate two merged blobs, as shown in FIG. 12.
If the filtered image has one blob and the two-stage erosion image has two blobs at the same location, the blob in the filtered image is split by vertically cutting between the two constituent blobs.
If the filtered image has one blob and the two-stage erosion image has no blob or one blob at the same location, the blob in the filtered image is retained.
If the filtered image has one blob and the two-stage erosion image has more than two blobs at the same location, no change is made.
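A condensed sketch of these three rules, assuming connected components from a labelling step; the cut position and the helper structure are assumptions rather than the patent's exact procedure:

```python
import numpy as np
from scipy import ndimage

def split_merged_blobs(filtered: np.ndarray, eroded2: np.ndarray) -> np.ndarray:
    """Split filtered blobs that break into exactly two blobs after two-stage
    3x3 erosion; blobs that yield zero, one or more than two parts are kept."""
    out = filtered.copy()
    lab_f, n_f = ndimage.label(filtered)
    lab_e, _ = ndimage.label(eroded2)
    for i in range(1, n_f + 1):
        mask = lab_f == i
        parts = np.unique(lab_e[mask])
        parts = parts[parts != 0]
        if len(parts) != 2:
            continue                                   # rules 2 and 3: leave unchanged
        cols_a = np.where((lab_e == parts[0]).any(axis=0))[0]
        cols_b = np.where((lab_e == parts[1]).any(axis=0))[0]
        # cut vertically halfway between the two eroded parts
        cut = (min(cols_a.max(), cols_b.max()) + max(cols_a.min(), cols_b.min())) // 2
        out[mask[:, cut], cut] = 0
    return out
```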
Blob identification
The blob identification module 203 identifies different types of blobs in the filtered image and calculates their features. The following steps are performed to identify blobs:
Blob labeling
Computation of blob features
Blob fusion
Blob classification
Blob labeling
FIG. 13 is an image with each blob assigned a different label, according to an exemplary embodiment of the present invention. Labeling uses 4-connectivity: the same label is assigned to a group of pixels if they are 4-connected. After assigning a label to each blob, information such as start row, end row, start column, end column, assigned label, and pixel area is stored in an array.
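A minimal sketch of the 4-connected labelling and per-blob bookkeeping, using SciPy's connected-component labelling in place of a hand-written multi-pass scan:

```python
import numpy as np
from scipy import ndimage

def label_blobs(binary: np.ndarray):
    """Label 4-connected blobs and store start/end rows and columns, label and area."""
    four_connected = np.array([[0, 1, 0],
                               [1, 1, 1],
                               [0, 1, 0]])
    labels, count = ndimage.label(binary, structure=four_connected)
    blobs = []
    for lbl in range(1, count + 1):
        rows, cols = np.where(labels == lbl)
        blobs.append({
            "label": lbl,
            "start_row": int(rows.min()), "end_row": int(rows.max()),
            "start_col": int(cols.min()), "end_col": int(cols.max()),
            "pixel_area": int(rows.size),
        })
    return labels, blobs
```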
Blob features
After assigning a label to each blob, the following features are calculated:
origin of the blob, indicating whether the blob is from a dark or light frame.
Width, including the difference between the ending column and the starting column.
Height, including the difference between the end and start rows.
The area of the frame, which is the product of the width and the height (i.e., width x height)
Pixel area, total number of white pixels included in the frame
The number of red pixels, including the total number of red pixels based on the hue value
Aspect ratio, including minimum (width, height)/maximum (width, height)
Plaque outline, including the shape of the plaque
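A sketch of the feature computation for a single labelled blob; the red-hue band and the dataclass layout are assumptions, while width, height, box area, pixel area and aspect ratio follow the definitions above (the contour feature is omitted):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class BlobFeatures:
    origin: str            # "dark" or "bright" frame
    width: int
    height: int
    box_area: int
    pixel_area: int
    red_pixels: int
    aspect_ratio: float

def blob_features(blob: dict, labels: np.ndarray, hue: np.ndarray, origin: str) -> BlobFeatures:
    mask = labels == blob["label"]
    width = blob["end_col"] - blob["start_col"]
    height = blob["end_row"] - blob["start_row"]
    is_red = (hue >= 170) | (hue <= 10)        # assumed red band, OpenCV hue 0-179
    longer = max(width, height)
    return BlobFeatures(
        origin=origin,
        width=width,
        height=height,
        box_area=width * height,
        pixel_area=int(mask.sum()),
        red_pixels=int((mask & is_red).sum()),
        aspect_ratio=min(width, height) / longer if longer else 0.0,
    )
```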
Blob fusion
FIG. 14 is a flowchart illustrating a method for preparing the final blob list from dark and bright frames, according to one embodiment of the invention. In step 1401, blobs of the low exposure/gain frame and blobs of the high exposure/gain frame are received and their overlap is determined. If there is no overlap, the blob of the high exposure/gain frame is checked as a potential tail light in step 1402; a blob seen only in the low exposure/gain frame may be caused by a reflection. If the blob of the high exposure/gain frame in step 1402 is not a potential tail light, it is discarded in step 1404. If the blob of the high exposure/gain frame in step 1402 is a potential tail light, it is included in the blob list in step 1405. If one or more blobs overlap in step 1401, the blob of the low exposure/gain frame is included in the blob list in step 1406. In step 1407, the final blob list is prepared.
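The decision flow of FIG. 14 can be condensed into a routine like the following sketch; the overlap and tail-light checks are assumed callbacks, and unmatched low-exposure blobs are not handled here:

```python
def fuse_blob_lists(low_blobs, high_blobs, overlaps, is_potential_tail_light):
    """Prepare the final blob list from low- and high-exposure frame blobs.

    overlaps(a, b) and is_potential_tail_light(b) stand in for the overlap
    and tail-light checks of FIG. 14.
    """
    final = []
    for high in high_blobs:
        matched = [low for low in low_blobs if overlaps(low, high)]
        if matched:
            final.extend(matched)                # overlap: keep the low-exposure blob(s)
        elif is_potential_tail_light(high):
            final.append(high)                   # isolated but plausible tail light
        # otherwise the high-exposure blob is discarded
    return final
```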
The blob list is prepared based on the following criteria:
A blob should be within the estimated area of the visual horizon.
There should not be any other overlapping blob underneath a blob.
A blob should not have an area of the same brightness among its neighbors.
A horizontal ROI covering 35-65% of the total width is defined for determining potential candidates in bright frames.
Blobs passed mainly from the bright frame are all paired; a merged blob (high/low) is taken from the dark frame. Two blobs are paired when they overlap horizontally; the overlap threshold is kept low to allow for movement across consecutive frames.
If a large blob of a bright frame corresponds to more than one blob of a dark frame, the blobs are taken from the dark frame.
Blob classification
FIG. 15 is an image according to an exemplary embodiment of the present invention, in which blobs are classified to identify headlamps, tail lamps, or any other light. Once the final blob list is obtained, the blobs are classified as headlamps, tail lamps, merged lights, and other lights. If the red score (number of red pixels) of a blob is greater than a predetermined threshold, the blob is classified as a tail light. The tail light classification is shown in blue in FIG. 15(a). A blob is classified as a headlamp based on the following criteria:
a) there is a blob below it caused by reflection; and/or
b) when several blobs overlap horizontally, the height ratio of the two blobs is less than half the maximum vehicle width; and/or
c) the blob has one minimum and two maxima nearby, where the minimum and maximum pattern is determined by taking the vertical profile of the particular blob.
After tail light and headlamp classification, headlamps with low red scores and small-sized headlamps are removed from the list by marking them as invalid blobs. In addition, if there is more than one blob under any blob, it is also marked as invalid.
To classify a blob as a merged blob, its profile is checked against the patterns 101 and 111, where 0 corresponds to a minimum position and 1 corresponds to a maximum position. To determine the pattern, the blob is divided into three segments, and the minimum and maximum positions of each segment are determined using the filtered image. From these values, the ratio of the center segment to the left and right segments is used to check the 101 pattern; for the 111 pattern, the ratio of the left and right segments to the center segment is used.
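A sketch of the 101/111 pattern test on a blob's column profile; the profile direction and the ratio threshold are assumptions, while the three-segment split follows the text:

```python
import numpy as np

def profile_pattern(blob_mask: np.ndarray, dip_ratio: float = 1.5) -> str:
    """Classify a blob's horizontal profile as '101', '111' or 'other'.

    The column-sum profile is split into three segments: '101' means both
    outer segments clearly dominate the middle one (two lamps with a gap),
    '111' means all three segments carry comparable mass (one solid lamp).
    """
    profile = blob_mask.sum(axis=0).astype(float)
    if profile.size < 3:
        return "other"
    left, centre, right = (seg.max() for seg in np.array_split(profile, 3))
    centre_safe = max(centre, 1e-6)
    if left / centre_safe > dip_ratio and right / centre_safe > dip_ratio:
        return "101"
    if max(left, centre, right) / max(min(left, centre, right), 1e-6) < dip_ratio:
        return "111"
    return "other"
```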
In one embodiment, the blob identifying module 203 identifies a single blob in a region of interest (ROI) to determine a two-wheel vehicle.
FIG. 16 is an image showing an exemplary embodiment of the present invention in which more than one blob is classified as a merged blob. A blob that is classified as merged and is of small size is treated as a tail light or a low-beam headlamp according to its earlier tail-light-based or headlamp-based classification, respectively.
Pairing logic
FIG. 17 illustrates a process for identifying valid pairs based on pairing logic, according to one embodiment of the invention. The system 100 also includes a pairing logic module that follows a heuristic-based approach to determine a pair of tail lights from the plurality of blobs. The pairing procedure is performed according to the criteria listed below:
1. the horizontal overlap of multiple blobs is checked,
2. the aspect ratios of the plurality of patches are collated,
3. the pixel area ratios of the plurality of patches are collated,
4. the width ratio of a plurality of patches is collated,
5. the pixel frame area ratio of a plurality of patches is collated,
6. for larger blobs, a check is made to match the shape of multiple blobs. The shape of the plurality of blobs may be obtained by subtracting the original blob image from the erosion image. Here, the etching is performed together with a structural element of size 3. The shape of the patch is checked using cosine similarity.
Cosine similarity measures the similarity between the shape vectors of two blobs:
cosine similarity = A . B / (|A| |B|)
where |A| is the magnitude of vector A and |B| is the magnitude of vector B.
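A sketch of the shape check based on cosine similarity; extracting the outline as blob minus eroded blob follows the text, while the fixed-length column-profile summary used to build the vectors is an assumption:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """A.B / (|A| |B|) for two equal-length feature vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom > 0 else 0.0

def shape_similarity(blob_a, eroded_a, blob_b, eroded_b, n_bins: int = 32) -> float:
    """Compare two blob outlines (blob minus its eroded version) as vectors."""
    def outline_vector(blob, eroded):
        outline = np.clip(blob.astype(int) - eroded.astype(int), 0, 1)
        profile = outline.sum(axis=0).astype(float)          # outline mass per column
        idx = np.linspace(0, len(profile) - 1, n_bins)       # resample to common length
        return np.interp(idx, np.arange(len(profile)), profile)
    return cosine_similarity(outline_vector(blob_a, eroded_a),
                             outline_vector(blob_b, eroded_b))
```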
The final confidence score is calculated from the weighted scores obtained from the above checks. Pairing is performed for scores above a threshold according to a scoring matrix. The pairing itself is deliberately basic, with a very low threshold, so that unbalanced tail lights, switched-on side lights, and slightly disproportionate blobs can still be paired.
The pairing as determined above allows for a dynamic ROI-based vehicle width check. The core of the whole logic is that the dynamic triangle is pre-computed and loaded when the system is initialized (the ROI is kept updated according to camera and vehicle parameters).
The line width of the geometric center of the pair should lie between the line widths of the minimum and maximum triangles (dynamic ROI), as shown in fig. 17 (a). The output of the pairing logic (as shown in fig. 17 (b)) is the pairing of the possible vehicles.
Validation and verification of pairing (V&V)
The pairing validation and verification module 205 validates and verifies the pairings and merged blobs. The inputs to the module 205 are all possible pairings, as shown in FIG. 19(a). The module 205 is divided into two sub-modules: a pairing confirmation module and a merged light confirmation module.
Pairing validation
FIG. 18 illustrates a method for validating and verifying blobs according to one embodiment of the invention. Pairing confirmations are made for a single pairing, a pairing between a row of lights, and a pairing between multiple pairings.
Verification of a single pairing:
FIG. 18(a) illustrates a method for verifying a single pair of blobs according to one embodiment. To verify a single pair of blobs, the following conditions need to be met:
1. Pair width and aspect ratio
2. Number of unpaired blobs within the pair
3. Area of paired and unpaired blobs as a percentage of the pair width
Verification of pairing between a row of lights:
In one embodiment, a blob pair is checked against a row of lights, such as reflections or street lights. In most cases, reflections and street lights are aligned in a row. A line matching check is carried out on the row of lights: if the lights lie on a line and the intersection ratios between successive blob pairs are the same, any pairing formed from those lights is invalid.
Verification between pairings:
FIGS. 18(b) to 18(j) show a method of verifying blob pairs according to an embodiment. The actual blob pairing is determined from two pairs of blobs in the ROI using the following rules:
1. If two pairs share the same blob and the column overlap is very high or very low, the pair with the smaller width is eliminated, as shown in FIGS. 18(b) and (c).
2. If two pairs share the same blob, the column overlap is neither very high nor very low, and the intermediate blobs are asymmetrically distributed, the pair with the larger width is eliminated, as shown in FIGS. 18(d) and (e).
3. If two pairs have column overlap and no row overlap, the lower pair is eliminated if both its width and height are smaller than the upper pair's; conversely, the upper pair is eliminated if the lower pair's height and intensity are greater and its width is smaller; and if the column overlap is very high and the widths and heights are the same, the less intense pair is eliminated, as shown in FIGS. 18(f), (g), and (h).
4. If the two pairs have column and row overlap and are asymmetric, the wider pair is eliminated; if the column overlap shows good symmetry, the narrower pair is eliminated, as shown in FIGS. 18(i) and (j).
5. If the two pairs have very little column overlap and one pair lies inside the other, the inner pair is eliminated, as shown in FIG. 18(k).
6. If the two pairs have very little column overlap and one pair lies below the other, the lower pair is eliminated, as shown in FIG. 18(l).
FIG. 19 shows blob pairs before and after validation and verification, according to an exemplary embodiment of the present invention. FIG. 19(a) shows four identified blob pairs before validation and verification, while FIG. 19(b) shows the three valid, verified pairs remaining after applying the method described in FIG. 18. After pairing confirmation, pairs that conform to the four-wheel-vehicle criteria are passed on to the tracking system and the remaining pairs are eliminated.
Merging light confirmation
FIG. 20 illustrates merged lights/blobs according to an exemplary embodiment of the present invention. In one embodiment, a merged light is the pair of headlamps of a distant vehicle and needs to be confirmed. To confirm merged lights, the following criteria are applied:
1. The merged light is invalid if it longitudinally overlaps with, and lies below, a preceding four-wheeled vehicle.
2. The merged light is classified as tail light, noise, or unwanted information if it longitudinally overlaps an oncoming vehicle, and it is invalid if it lies below the oncoming vehicle.
3. A merged light that longitudinally overlaps a pair of blobs whose shape match is above a first predetermined threshold, while the merged-light score is below a second predetermined threshold, is checked further; the merged light is invalid if it longitudinally overlaps a pair of blobs whose shape match is below the first predetermined threshold.
4. The merged light is invalid if it has both longitudinal and lateral overlap with a previously confirmed four-wheel-vehicle pair.
5. The merged light is invalid if it has longitudinal overlap but no lateral overlap, the shape match is above a predetermined threshold, and the merged blob score is below a predetermined threshold.
6. If smaller merged lights are progressively eliminated by the above cases, the tracking of the merged light is checked. A merged blob is valid if its longitudinal overlap, lateral overlap, area ratio, and height ratio are within predetermined thresholds.
Two-wheel vehicle detection
FIG. 21 illustrates a method for identifying valid blobs in a dynamically changing ROI for detecting a two-wheeled vehicle, according to one embodiment of the invention. The two-wheeler detection module 206 uses the blob classification information and detects both preceding and oncoming vehicles. Unpaired blobs that are not listed as merged lights and are classified as headlamps or tail lights are considered possible two-wheelers, and additional checks such as road grade, a rider-profile classifier, and blob movement are made to confirm a blob as a two-wheeler. Spatial validity checks are also made: for example, for left-hand drive traffic, oncoming headlamp blobs appear in the right-hand area relative to the host vehicle, and for right-hand drive traffic they appear in the left-hand area. In addition, these blobs should not have any longitudinal overlap with existing pairs. Blobs satisfying the above conditions are considered two-wheeled vehicles. FIG. 21 shows two examples in which the identified blob does not meet the above conditions and is therefore invalid.
Tracking system
FIG. 22 illustrates the state transitions of the tracking module according to one embodiment of the present invention. The operation of the tracking module 207 is divided into four states: idle 2201, pre-tracking 2202, tracking 2203, and cancel-tracking 2204. By default, the tracking module is in the idle state 2201. Once an object is identified as a blob pair (four-wheel vehicle) or a single blob (two-wheel vehicle), a new track is initiated (if no matching active track already exists) and the state changes from idle 2201 to pre-tracking 2202. The pre-tracking state 2202 reconfirms the presence of the blob pair/blob. The conditions for verifying the pre-tracked object as a blob pair/blob and transitioning it to the tracking state 2203 are listed below. To move to the tracking state 2203, the pre-tracked object must meet the following:
It is detected in "N" frames with a good confidence score; the blob-pair/blob confidence in each frame is derived from the detection confidence returned by the pairing logic or the two-wheeler detection.
It has a high frequency of occurrence.
It has a good movement score (applicable only to four-wheeled vehicles). The tracking system keeps track of the behavior of the blob pair: the two blobs are expected to move in lockstep, and movement in the opposite direction is only allowed along the viewing direction, i.e. the vehicle travelling towards or away from the host vehicle.
In the tracking state 2203, the tracked object is continuously predicted and updated using a Kalman filter; any other suitable filter known in the art may be used. If the tracked object is missing in a particular frame, a bounding box is still displayed using the Kalman prediction, and the tracking system transitions from the tracking state 2203 to the cancel-tracking state 2204. In the cancel-tracking state 2204, the object is verified over "M" frames. In addition, the cancel-tracking state 2204 attempts to improve the continuity of a good track (i.e. one tracked over a very high number of frames) by:
Free form of large area search consistent with motion constraints
Search in high gain/high exposure frames if the environment is very dark
If close to vehicle pairing, attempt to match a patch to prolong its life
If approaching a two-or four-wheel vehicle, search in the neighborhood using a classifier
In the cancel-tracking state 2204, the pairing confidence accumulated over the "M" frames decides whether the tracking system moves back to the tracking state 2203 or to the idle state 2201 (not a valid pairing).
Thus, during tracking, false detections are discarded by pre-tracking and multi-frame confirmation, while the tracking and cancel-tracking states bridge detection gaps for the corresponding objects.
The observation time windows of "N" frames in the pre-tracking state and "M" frames in the cancel-tracking state are variable. Several factors make the state-change decision dynamic, such as the blob pair/blob category, its score and movement over time, the pairing width, intermittent blob pairs/blobs, and curve/slope scenarios.
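A minimal sketch of the four-state track life cycle described above; the frame counts N and M, the confidence test and the class layout are placeholders, and the Kalman prediction/update itself is omitted:

```python
from enum import Enum, auto

class TrackState(Enum):
    IDLE = auto()
    PRE_TRACK = auto()
    TRACK = auto()
    CANCEL_TRACK = auto()

class Track:
    def __init__(self, n_confirm: int = 5, m_cancel: int = 5):
        self.state = TrackState.IDLE
        self.hits = 0            # consecutive confident detections (pre-tracking)
        self.misses = 0          # consecutive unmatched frames (cancel-tracking)
        self.n_confirm = n_confirm
        self.m_cancel = m_cancel

    def update(self, matched: bool, confident: bool) -> None:
        if self.state is TrackState.IDLE:
            if matched:
                self.state, self.hits = TrackState.PRE_TRACK, 1
        elif self.state is TrackState.PRE_TRACK:
            if matched and confident:
                self.hits += 1
                if self.hits >= self.n_confirm:
                    self.state = TrackState.TRACK
            else:
                self.state = TrackState.IDLE    # pre-track not confirmed
        elif self.state is TrackState.TRACK:
            if not matched:
                self.state, self.misses = TrackState.CANCEL_TRACK, 1
        elif self.state is TrackState.CANCEL_TRACK:
            if matched and confident:
                self.state, self.misses = TrackState.TRACK, 0
            else:
                self.misses += 1
                if self.misses >= self.m_cancel:
                    self.state = TrackState.IDLE
```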
Distance estimation
FIG. 23 shows one example of estimating the distance between a detected vehicle and the host vehicle using the distance estimation module 208, according to the present invention. The distance estimation module 208 calculates the distance between at least one detected vehicle and the host vehicle as the ratio of the actual vehicle width multiplied by the focal length of the lens, to the width of the detected vehicle in the image multiplied by a coefficient converting camera pixels to meters.
In one embodiment, perspective geometry is used to estimate the distance. If three pairs of vertices of the corresponding vehicle are connected into three straight lines intersecting at one vertex, two perspective triangles are formed starting from one vertex.
In the perspective method, the distance between the detected vehicle and the host vehicle is estimated using the following formula, illustrated schematically in FIG. 23:
d = (W x f) / (w x k)
where:
f: focal length of the lens (millimeters)
W: actual width of the vehicle (meters)
w: width of the vehicle in the image (pixels)
k: coefficient converting the pixels of the CCD camera to meters, and
d: distance to the target vehicle (meters)
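A direct sketch of this estimate; the unit handling (converting the focal length to metres so that it matches the pixel-to-metre coefficient) is an assumption:

```python
def estimate_distance_m(vehicle_width_m: float, focal_length_mm: float,
                        width_px: float, px_to_m: float) -> float:
    """d = (W * f) / (w * k): distance between the detected vehicle and the host.

    W: actual vehicle width (m), f: lens focal length, w: vehicle width in
    the image (pixels), k: coefficient converting camera pixels to metres.
    """
    focal_length_m = focal_length_mm / 1000.0      # keep f and w*k in the same unit
    return (vehicle_width_m * focal_length_m) / (width_px * px_to_m)
```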
FIG. 24 shows the estimated distances of the finally detected vehicles according to an exemplary embodiment of the present invention. The vehicle detection system 100 has detected three vehicles, each indicated by a rectangular box.
FIG. 25 illustrates a method for detecting one or more vehicles using a vehicle detection system in accordance with one embodiment of the present invention. In step 2501, a high exposure image or a low exposure image is received in a scene recognition module or a road topology estimation module. In step 2502, the scene identification module identifies instances of one or more scenes in a dynamically changing region of interest (ROI). In step 2503, at least one of a curve, a grade, and a vanishing point of the road is determined within a dynamically changing region of interest (ROI). In step 2504, a filtering module removes noise and unwanted information. In step 2505, one or more blobs in the filtered image are identified. In step 2506, the characteristics of each identified patch are determined. In step 2507, one or more objects from the one or more blobs identified in the dynamically changing ROI are identified using at least one pairing logic. In step 2508, one or more identified blob pairs are validated and verified.
Although the system and method of the present invention have been described in detail with reference to specific embodiments, the present invention is not limited thereto. It will be apparent to those skilled in the art that various substitutions, modifications, and variations can be made in blob identification, blob classification, pairing logic, and pairing validation and verification without departing from the scope and spirit of the invention.
Claims (17)
1. A vehicle detection system comprising:
a scene identification module to receive a high exposure image or a low exposure image to identify a condition of one or more scenes in a dynamically changing region of interest (ROI);
a road topology estimation module to receive a high exposure image or a low exposure image to determine at least one of a road curve, a slope, and a vanishing point in a dynamically changing region of interest (ROI); and
a vehicle detection module that, in combination with the scene recognition module and the road topology module, detects one or more vehicles on the road.
2. The system of claim 1, wherein the vehicle detection module comprises:
an image segmentation module for receiving input data from the scene recognition module or the road topology estimation module to provide binary image data;
a filtering module, coupled to the image segmentation module, configured to remove noise and unwanted information;
a blob identifying module, coupled to the filtering module, configured to identify one or more blobs in the filtered image and determine a characteristic of each identified blob;
an object identification module, coupled to the blob identification module, configured to identify an object based on at least one pairing logic;
a pairing validation and verification module, coupled to the object identification module, configured to confirm and verify one or more identified blob pairs; and
a tracking module, coupled to the pairing validation and verification module, configured to track one or more validated and verified blob pairs in one or more phases.
3. The system of claim 1, further comprising a two-wheeler identification module configured for determining that the identified object in the ROI is a two-wheeler, based on information of the one or more blobs received from the blob identification module and the pair validation and verification module,
the information for the one or more blobs includes a headlight blob or a taillight blob.
4. The system of claim 2, wherein the blob identifying module is configured for identifying one or more blobs in the filtered image and determining a characteristic of each identified blob, comprising the steps of:
assigning a unique marker to each of the one or more blobs;
determining features of each of the one or more marked blobs, the features including origin, width, height, frame area, pixel area, number of red pixels, aspect ratio, and blob outline of the blob;
determining one or more fusions of one or more marker blobs in one dark frame or light frame based on the determined features; and
the one or more marked blobs are classified into at least one of a headlight, tail light, merged light, and invalid light classification.
5. The system of claim 2, wherein the object identification module is configured to identify the object based on at least one pairing logic, comprising at least one of:
determining a horizontal overlap of one or more blobs;
determining an aspect ratio of one or more blobs;
determining a pixel area ratio of one or more patches;
determining a width ratio of one or more patches; and
a pixel box area ratio for one or more blobs is determined.
6. The system of claim 2, wherein the pair validation and verification module is configured for validating and verifying one or more identified blob pairs, comprising at least one of:
verifying a pair of identified blobs;
verifying one or more identified blobs among a row of lights; and
verifying one or more identified blobs between two or more pairs of identified blobs.
7. The system of claim 6, further comprising confirming a merged light by identifying one or more merged blobs.
8. The system of claim 6, wherein verifying a pair of identified blobs comprises:
determining a width and an aspect ratio of the pair,
determining the number of unpaired blobs between the pair, and
determining the areas of paired and unpaired blobs as a percentage of the pair width.
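For illustration only: the three claim-8 measurements computed for a candidate pair, with `all_blobs` and `paired_ids` as assumed bookkeeping structures and an `id` key assumed on each blob.

```python
def pair_measurements(left: dict, right: dict, all_blobs: list, paired_ids: set) -> dict:
    pair_w = right["origin"][0] + right["width"] - left["origin"][0]
    pair_h = max(left["height"], right["height"])
    # Blobs lying horizontally between the two lamps of the candidate pair.
    between = [b for b in all_blobs
               if left["origin"][0] + left["width"] < b["origin"][0] < right["origin"][0]]
    unpaired = [b for b in between if b["id"] not in paired_ids]
    return {
        "pair_width": pair_w,
        "pair_aspect_ratio": pair_w / pair_h,
        "unpaired_blobs_between": len(unpaired),
        # Areas of the paired and unpaired blobs relative to the pair width, per the claim.
        "area_pct_of_pair_width": [100.0 * b["pixel_area"] / pair_w
                                   for b in [left, right] + unpaired],
    }
```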
9. The system of claim 6, wherein verifying one or more identified blobs among light rows in the ROI comprises applying a row-line matching algorithm to all light rows in the ROI.
10. The system of claim 6, wherein verifying one or more identified blobs between two or more pairs of identified blobs comprises at least one of:
eliminating, of two pairs of identified blobs sharing the same blob, the pair having the smaller width, wherein the column overlap of the two pairs is very high or very low;
eliminating, of two pairs of identified blobs sharing the same blob, the pair having the larger width, wherein the column overlap of the two pairs is neither very high nor very low and the intermediate blobs are asymmetrically distributed;
eliminating one pair of identified blobs having a width and a height less than those of the other pair, wherein the two pairs have column overlap and no row overlap;
eliminating one pair of identified blobs having a lower intensity, a lower height, and a greater width than the other pair;
eliminating one pair of identified blobs having a lower intensity than the other pair, wherein the two pairs have the same width and height and a very high column overlap;
eliminating one pair of identified blobs having a greater width than the other pair, wherein the two pairs have column and row overlap and are asymmetric;
eliminating one pair of identified blobs having a smaller width than the other pair, wherein the two pairs overlap and are symmetric;
eliminating one pair of identified blobs placed within another pair of identified blobs, wherein the two pairs have very little column overlap; and
eliminating one pair of identified blobs placed below another pair of identified blobs, wherein the two pairs have very little column overlap.
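For illustration only: two of the claim-10 elimination rules for candidate pairs that share a blob, with pair records assumed to carry hypothetical `x` and `width` fields, and 0.8 / 0.2 standing in for the claimed "very high / very low" column overlap.

```python
def column_overlap(p: dict, q: dict) -> float:
    # Fraction of the narrower pair's horizontal span covered by the other pair.
    left = max(p["x"], q["x"])
    right = min(p["x"] + p["width"], q["x"] + q["width"])
    return max(0.0, right - left) / min(p["width"], q["width"])

def pair_to_eliminate(p: dict, q: dict) -> dict:
    """Return the candidate pair to discard (first two claimed rules only;
    the symmetry test on intermediate blobs is not modelled here)."""
    overlap = column_overlap(p, q)
    if overlap > 0.8 or overlap < 0.2:
        return p if p["width"] < q["width"] else q   # drop the narrower pair
    return p if p["width"] > q["width"] else q       # otherwise drop the wider pair
```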
11. The system of claim 1, wherein the scene identification module determines saturated pixels, brightness, and region variation in a dynamically changing region of interest (ROI).
12. The system of claim 1, wherein the road topology estimation module is coupled to the scene identification module for receiving a condition of one or more scenes in a dynamically changing region of interest (ROI).
13. The system of claim 1, further comprising a distance estimation module configured to calculate a distance between at least one detected vehicle and a host vehicle as a ratio of the product of the actual width of the detected vehicle and the focal length of the lens to the product of the width of the detected vehicle in the image and a coefficient converting camera pixels to meters.
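For illustration only: read literally, claim 13 computes distance as (actual vehicle width × focal length) / (vehicle width in the image × pixel-conversion coefficient). A minimal sketch, assuming the coefficient is the sensor's pixel pitch in millimetres per pixel so that the millimetre terms cancel:

```python
def estimate_distance_m(actual_width_m: float, focal_length_mm: float,
                        image_width_px: float, mm_per_px: float) -> float:
    # distance = (W_actual * f) / (w_image_px * k), per claim 13
    return (actual_width_m * focal_length_mm) / (image_width_px * mm_per_px)

# A 1.8 m wide car imaged 60 px wide by a 6 mm lens with 0.006 mm/px pixel pitch:
# (1.8 * 6) / (60 * 0.006) = 30 m
print(estimate_distance_m(1.8, 6.0, 60.0, 0.006))   # -> 30.0
```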
14. A method of detecting one or more vehicles by a vehicle detection system, comprising:
receiving a high-exposure image or a low-exposure image by at least one of a scene identification module and a road topology estimation module;
identifying, by the scene identification module, a condition of one or more scenes in a dynamically changing region of interest (ROI);
determining, by the road topology estimation module, at least one of a road curve, a slope, and a vanishing point in the dynamically changing ROI; and
processing the one or more images to detect one or more vehicles in the dynamically changing ROI.
15. The method of claim 14, wherein processing one or more images comprises:
eliminating noise and unnecessary information through a filtering module;
identifying one or more blobs in the filtered image;
determining a characteristic of each identified blob;
identifying one or more objects from the identified one or more blobs in the dynamically changing ROI using at least one pairing logic; and
validating and verifying one or more pairs of identified blobs.
16. The method of claim 14, further comprising determining that an identified object in the ROI is a two-wheeler, based on information of one or more blobs received from the blob identification module and the pair validation and verification module,
wherein the information of the one or more blobs includes a headlight blob or a taillight blob.
17. The method of claim 14, further comprising
tracking, by a tracking module, one or more validated and verified blob pairs or blobs in one or more stages,
wherein the one or more stages include an idle stage, a pre-tracking stage, a tracking stage, and a cancel-tracking stage.
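For illustration only: a toy transition function over the four claimed tracking stages (idle, pre-tracking, tracking, cancel-tracking); the confirmation and drop thresholds are assumptions.

```python
from enum import Enum, auto

class TrackStage(Enum):
    IDLE = auto()
    PRE_TRACK = auto()
    TRACK = auto()
    CANCEL_TRACK = auto()

def next_stage(stage: TrackStage, matched: bool, hits: int, misses: int,
               confirm_after: int = 3, drop_after: int = 5) -> TrackStage:
    """Advance a blob pair through the tracking stages for one frame."""
    if stage is TrackStage.IDLE:
        return TrackStage.PRE_TRACK if matched else TrackStage.IDLE
    if stage is TrackStage.PRE_TRACK:
        if matched and hits >= confirm_after:
            return TrackStage.TRACK          # enough consecutive confirmations
        return TrackStage.PRE_TRACK if matched else TrackStage.IDLE
    if stage is TrackStage.TRACK:
        return TrackStage.TRACK if matched else TrackStage.CANCEL_TRACK
    # CANCEL_TRACK: recover if the pair reappears, drop after repeated misses.
    if matched:
        return TrackStage.TRACK
    return TrackStage.IDLE if misses >= drop_after else TrackStage.CANCEL_TRACK
```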
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IN159/MUM/2014 | 2014-01-17 | ||
IN159MU2014 | 2014-01-17 | ||
PCT/IN2015/000028 WO2015114654A1 (en) | 2014-01-17 | 2015-01-16 | Vehicle detection system and method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105981042A true CN105981042A (en) | 2016-09-28 |
CN105981042B CN105981042B (en) | 2019-12-06 |
Family
ID=53059373
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201580003808.8A Active CN105981042B (en) | 2014-01-17 | 2015-01-16 | Vehicle detection system and method |
Country Status (5)
Country | Link |
---|---|
US (1) | US10380434B2 (en) |
EP (1) | EP3095073A1 (en) |
JP (1) | JP2017505946A (en) |
CN (1) | CN105981042B (en) |
WO (1) | WO2015114654A1 (en) |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160217333A1 (en) * | 2015-01-26 | 2016-07-28 | Ricoh Company, Ltd. | Information processing apparatus and information processing system |
ITUB20154942A1 (en) * | 2015-10-23 | 2017-04-23 | Magneti Marelli Spa | Method to detect an incoming vehicle and its system |
KR102371589B1 (en) * | 2016-06-27 | 2022-03-07 | 현대자동차주식회사 | Apparatus and method for dectecting front vehicle |
CN106114558B (en) * | 2016-06-29 | 2017-12-01 | 南京雅信科技集团有限公司 | Preceding-vehicle taillight detection method suitable for curved subway tunnel sections |
EP3511900B1 (en) * | 2016-09-06 | 2021-05-05 | Hitachi Automotive Systems, Ltd. | Image processing device and light distribution control system |
US10248874B2 (en) * | 2016-11-22 | 2019-04-02 | Ford Global Technologies, Llc | Brake light detection |
US10336254B2 (en) | 2017-04-21 | 2019-07-02 | Ford Global Technologies, Llc | Camera assisted vehicle lamp diagnosis via vehicle-to-vehicle communication |
CN110020575B (en) * | 2018-01-10 | 2022-10-21 | 富士通株式会社 | Vehicle detection device and method and electronic equipment |
JPWO2019146514A1 (en) * | 2018-01-24 | 2021-01-07 | 株式会社小糸製作所 | In-vehicle camera system, vehicle lighting equipment, distant detection method, vehicle lighting equipment control method |
WO2019159765A1 (en) * | 2018-02-15 | 2019-08-22 | 株式会社小糸製作所 | Vehicle detection device and vehicle light system |
EP3584742A1 (en) * | 2018-06-19 | 2019-12-25 | KPIT Technologies Ltd. | System and method for traffic sign recognition |
JP7261006B2 (en) * | 2018-12-27 | 2023-04-19 | 株式会社Subaru | External environment recognition device |
US10817777B2 (en) * | 2019-01-31 | 2020-10-27 | StradVision, Inc. | Learning method and learning device for integrating object detection information acquired through V2V communication from other autonomous vehicle with object detection information generated by present autonomous vehicle, and testing method and testing device using the same |
CN111832347B (en) * | 2019-04-17 | 2024-03-19 | 北京地平线机器人技术研发有限公司 | Method and device for dynamically selecting region of interest |
CN112085962B (en) * | 2019-06-14 | 2022-10-25 | 富士通株式会社 | Image-based parking detection method and device and electronic equipment |
CN111256707A (en) * | 2019-08-27 | 2020-06-09 | 北京纵目安驰智能科技有限公司 | Congestion car following system and terminal based on look around |
WO2021132566A1 (en) * | 2019-12-26 | 2021-07-01 | パナソニックIpマネジメント株式会社 | Display control device, display system, and display control method |
CN111260631B (en) * | 2020-01-16 | 2023-05-05 | 成都地铁运营有限公司 | Efficient rigid contact line structure light bar extraction method |
KR20210148756A (en) | 2020-06-01 | 2021-12-08 | 삼성전자주식회사 | Slope estimating apparatus and operating method thereof |
CN118447469B (en) * | 2024-07-08 | 2024-10-22 | 潍柴动力股份有限公司 | BP and CNN-based road gradient prediction method and device |
Family Cites Families (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3503230B2 (en) * | 1994-12-15 | 2004-03-02 | 株式会社デンソー | Nighttime vehicle recognition device |
JPH0935059A (en) * | 1995-07-14 | 1997-02-07 | Aisin Seiki Co Ltd | Discriminating device for illuminance on moving body |
US7655894B2 (en) * | 1996-03-25 | 2010-02-02 | Donnelly Corporation | Vehicular image sensing system |
US8108119B2 (en) * | 2006-04-21 | 2012-01-31 | Sri International | Apparatus and method for object detection and tracking and roadway awareness using stereo cameras |
US7724962B2 (en) * | 2006-07-07 | 2010-05-25 | Siemens Corporation | Context adaptive approach in vehicle detection under various visibility conditions |
US7972045B2 (en) * | 2006-08-11 | 2011-07-05 | Donnelly Corporation | Automatic headlamp control system |
WO2008091565A1 (en) * | 2007-01-23 | 2008-07-31 | Valeo Schalter & Sensoren Gmbh | Method and system for universal lane boundary detection |
US8199198B2 (en) * | 2007-07-18 | 2012-06-12 | Delphi Technologies, Inc. | Bright spot detection and classification method for a vehicular night-time video imaging system |
US8376595B2 (en) * | 2009-05-15 | 2013-02-19 | Magna Electronics, Inc. | Automatic headlamp control |
US20120287276A1 (en) * | 2011-05-12 | 2012-11-15 | Delphi Technologies, Inc. | Vision based night-time rear collision warning system, controller, and method of operating the same |
JP5518007B2 (en) * | 2011-07-11 | 2014-06-11 | クラリオン株式会社 | Vehicle external recognition device and vehicle control system using the same |
US20140163703A1 (en) * | 2011-07-19 | 2014-06-12 | Utah State University | Systems, devices, and methods for multi-occupant tracking |
JP5896788B2 (en) * | 2012-03-07 | 2016-03-30 | キヤノン株式会社 | Image composition apparatus and image composition method |
US20130322697A1 (en) * | 2012-05-31 | 2013-12-05 | Hexagon Technology Center Gmbh | Speed Calculation of a Moving Object based on Image Data |
JP5902049B2 (en) * | 2012-06-27 | 2016-04-13 | クラリオン株式会社 | Lens cloudiness diagnostic device |
JP5947682B2 (en) * | 2012-09-07 | 2016-07-06 | 株式会社デンソー | Vehicle headlamp device |
CN104823122B (en) * | 2012-12-04 | 2018-06-29 | 金泰克斯公司 | For detecting the imaging system and method for bright city condition |
US8994652B2 (en) * | 2013-02-15 | 2015-03-31 | Intel Corporation | Model-based multi-hypothesis target tracker |
US9514373B2 (en) * | 2013-08-28 | 2016-12-06 | Gentex Corporation | Imaging system and method for fog detection |
JP6208244B2 (en) * | 2013-09-27 | 2017-10-04 | 日立オートモティブシステムズ株式会社 | Object detection device |
KR101511853B1 (en) * | 2013-10-14 | 2015-04-13 | 영남대학교 산학협력단 | Night-time vehicle detection and positioning system and method using multi-exposure single camera |
KR20150052638A (en) * | 2013-11-06 | 2015-05-14 | 현대모비스 주식회사 | ADB head-lamp system and Beam control method using the same |
JP6380843B2 (en) * | 2013-12-19 | 2018-08-29 | 株式会社リコー | Object detection apparatus, mobile device control system including the same, and object detection program |
DE102014219120A1 (en) * | 2013-12-19 | 2015-06-25 | Robert Bosch Gmbh | Method and apparatus for determining headlamp leveling of a headlamp |
JP6095605B2 (en) * | 2014-04-24 | 2017-03-15 | 本田技研工業株式会社 | Vehicle recognition device |
US9558455B2 (en) * | 2014-07-11 | 2017-01-31 | Microsoft Technology Licensing, Llc | Touch classification |
US9434382B1 (en) * | 2015-03-19 | 2016-09-06 | Toyota Motor Engineering & Manufacturing North America, Inc. | Vehicle operation in environments with second order objects |
JP6537385B2 (en) * | 2015-07-17 | 2019-07-03 | 日立オートモティブシステムズ株式会社 | In-vehicle environment recognition device |
JP6493087B2 (en) * | 2015-08-24 | 2019-04-03 | 株式会社デンソー | In-vehicle camera device |
DE102015224171A1 (en) * | 2015-12-03 | 2017-06-08 | Robert Bosch Gmbh | Tilt detection on two-wheeled vehicles |
JP6565806B2 (en) * | 2016-06-28 | 2019-08-28 | 株式会社デンソー | Camera system |
- 2015-01-16 US US15/112,122 patent/US10380434B2/en active Active
- 2015-01-16 CN CN201580003808.8A patent/CN105981042B/en active Active
- 2015-01-16 WO PCT/IN2015/000028 patent/WO2015114654A1/en active Application Filing
- 2015-01-16 JP JP2016542680A patent/JP2017505946A/en active Pending
- 2015-01-16 EP EP15721342.2A patent/EP3095073A1/en not_active Withdrawn
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006059184A (en) * | 2004-08-20 | 2006-03-02 | Matsushita Electric Ind Co Ltd | Image processor |
US20110164789A1 (en) * | 2008-07-14 | 2011-07-07 | National Ict Australia Limited | Detection of vehicles in images of a night time scene |
CN102542256A (en) * | 2010-12-07 | 2012-07-04 | 摩比莱耶科技有限公司 | Advanced warning system for giving front conflict alert to pedestrians |
CN103402819A (en) * | 2011-03-02 | 2013-11-20 | 罗伯特·博世有限公司 | Method and control unit for influencing an illumination scene in front of a vehicle |
CN103029621A (en) * | 2011-09-30 | 2013-04-10 | 株式会社理光 | Method and equipment used for detecting front vehicle |
Non-Patent Citations (1)
Title |
---|
SUNGMIN EUM ET AL.: "Enhancing Light Blob Detection for Intelligent Headlight Control Using Lane Detection", 《IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS》 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111768343A (en) * | 2019-03-29 | 2020-10-13 | 通用电气精准医疗有限责任公司 | System and method for facilitating the examination of liver tumor cases |
CN111768343B (en) * | 2019-03-29 | 2024-04-16 | 通用电气精准医疗有限责任公司 | System and method for facilitating examination of liver tumor cases |
TWI723657B (en) * | 2019-12-02 | 2021-04-01 | 宏碁股份有限公司 | Vehicle control method and vehicle control system |
CN111879360A (en) * | 2020-08-05 | 2020-11-03 | 吉林大学 | Automatic driving auxiliary safety early warning system in dark scene and early warning method thereof |
CN111879360B (en) * | 2020-08-05 | 2021-04-23 | 吉林大学 | Automatic driving auxiliary safety early warning system in dark scene and early warning method thereof |
Also Published As
Publication number | Publication date |
---|---|
US10380434B2 (en) | 2019-08-13 |
CN105981042B (en) | 2019-12-06 |
WO2015114654A1 (en) | 2015-08-06 |
JP2017505946A (en) | 2017-02-23 |
US20160335508A1 (en) | 2016-11-17 |
EP3095073A1 (en) | 2016-11-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105981042B (en) | Vehicle detection system and method | |
US7566851B2 (en) | Headlight, taillight and streetlight detection | |
O'Malley et al. | Vehicle detection at night based on tail-light detection | |
WO2014017403A1 (en) | Vehicle-mounted image recognition device | |
Prakash et al. | Robust obstacle detection for advanced driver assistance systems using distortions of inverse perspective mapping of a monocular camera | |
US20130083971A1 (en) | Front vehicle detecting method and front vehicle detecting apparatus | |
CN107316486A (en) | Pilotless automobile visual identifying system based on dual camera | |
Kowsari et al. | Real-time vehicle detection and tracking using stereo vision and multi-view AdaBoost | |
KR101224027B1 (en) | Method for dectecting front vehicle using scene information of image | |
KR101134857B1 (en) | Apparatus and method for detecting a navigation vehicle in day and night according to luminous state | |
Lin et al. | Adaptive IPM-based lane filtering for night forward vehicle detection | |
KR20160108344A (en) | Vehicle detection system and method thereof | |
Sultana et al. | Vision-based robust lane detection and tracking in challenging conditions | |
Wang et al. | Real-time vehicle signal lights recognition with HDR camera | |
Boonsim et al. | An algorithm for accurate taillight detection at night | |
Wu et al. | Overtaking Vehicle Detection Techniques based on Optical Flow and Convolutional Neural Network. | |
KR20140104516A (en) | Lane detection method and apparatus | |
Ghani et al. | Advances in lane marking detection algorithms for all-weather conditions | |
Pillai et al. | Night time vehicle detection using tail lights: a survey | |
Dai et al. | A driving assistance system with vision based vehicle detection techniques | |
Huang et al. | Nighttime vehicle detection and tracking base on spatiotemporal analysis using RCCC sensor | |
CN113743226B (en) | Daytime front car light language recognition and early warning method and system | |
Merugu et al. | Multi lane detection, curve fitting and lane type classification | |
Chen et al. | A forward collision avoidance system adopting multi-feature vehicle detection | |
CN113581059A (en) | Light adjusting method and related device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||