CN105981042A - Vehicle detection system and method
- Publication number: CN105981042A (application CN201580003808.8A)
- Authority: CN (China)
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
Description
Field of the Invention
The present invention relates generally to vehicle detection, and more particularly to a vehicle detection system for low-light conditions, such as at night.
Background of the Invention
Advanced driver assistance solutions are steadily gaining market share. Forward collision warning is one such application: it warns the driver when the host vehicle is about to collide with a target vehicle ahead. Vision applications detect vehicles ahead during the day and at night and generate warnings based on the computed time to collision. Forward collision warning systems and other automotive vision applications use different algorithms to detect vehicles under day and night conditions.
However, existing vehicle detection systems are inefficient, inconvenient, and expensive. An efficient and economical vision-based system and method for nighttime vehicle detection is therefore required. In particular, a system is needed that provides robust vehicle detection even in low-light conditions and eliminates false objects in a variety of real-time scenarios.
Conventional vision processing systems do not cope with the wide range of visibility conditions encountered, from rural (dark) to urban (bright) environments. In addition, detecting vehicles at night is particularly challenging because:
·The apparent shape of vehicle lights varies widely with the position of the vehicle relative to the camera.
·Vehicles carry different lights, for example brake lights or switched-on side lights.
·Some vehicles drive without their lights switched on.
·In urban conditions, vehicle detection is complicated by the large amount of ambient light.
·Detection and distance estimation of two-wheelers is difficult.
There is therefore a need for a vehicle detection system that detects one or more vehicles on the road at night. A robust system is required that can identify and eliminate false objects, including street lamps, traffic cones, and other confounding light sources whose appearance closely resembles vehicle lights, and thereby provide a high level of accuracy.
Summary
One embodiment of the present invention describes a vehicle detection system. The vehicle detection system includes a scene recognition module configured to receive one of a high-exposure image and a low-exposure image for identifying the conditions of one or more scenes in a dynamically changing region of interest (ROI); a road topology estimation module configured to receive one of the high-exposure and low-exposure images to determine at least one of the curvature, slope, and vanishing point of the road in the dynamically changing ROI; and a vehicle detection module that cooperates with the scene recognition module and the road topology estimation module to detect one or more vehicles on the road.
Another embodiment of the present invention describes a method of detecting one or more vehicles with a vehicle detection system. The method includes receiving one of a high-exposure image and a low-exposure image at at least one of the scene recognition module and the road topology estimation module; identifying, with the scene recognition module, the conditions of one or more image scenes in the dynamically changing region of interest (ROI); determining at least one of the curvature, slope, and vanishing point of the road in the dynamically changing ROI; and processing one or more images to detect one or more vehicles in the dynamically changing ROI.
Processing the one or more images to detect one or more vehicles in the dynamically changing ROI includes: obtaining candidate light sources with a segmentation module, removing noise and unwanted information with a filtering module, identifying one or more blobs in the filtered image, determining the features of each identified blob, identifying one or more objects among the identified blobs in the dynamically changing ROI using at least one pairing logic, and confirming and validating one or more identified blob pairs.
In one embodiment, one or more identified blob pairs among two or more identified blob pairs are validated by performing at least one of the following steps:
·when two pairs share the same blob and their column overlap is very high or very low, eliminating the pair with the smaller width;
·when two pairs share the same blob, their column overlap is neither very high nor very low, and the middle blob is asymmetrically distributed, eliminating the pair with the larger width;
·when two pairs have column overlap but no row overlap, eliminating the pair whose width and height are both smaller than those of the other pair;
·eliminating the pair that has lower intensity, lower height, and greater width than the other pair;
·when two pairs have the same width and height and their column overlap is very high, eliminating the pair with the lower intensity;
·when two pairs have column overlap and row overlap and are asymmetric, eliminating the pair with the larger width;
·when two pairs have column overlap and are symmetric, eliminating the pair with the smaller width;
·when two pairs have very little column overlap, eliminating the pair that lies inside the other pair;
·when two pairs have very little column overlap, eliminating the pair that lies below the other pair.
Brief Description of the Drawings
The aspects described above and other features of the present invention are explained in the following description, taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a block diagram of a vehicle detection system according to an embodiment of the present invention.
Fig. 2 is a block diagram of a vehicle detection system according to an embodiment of the present invention.
Fig. 3 shows captured images of vehicles in a dynamically changing ROI according to an exemplary embodiment of the present invention.
Fig. 4 shows an input frame provided to the scene recognition module or the road topology estimation module according to an embodiment of the present invention.
Fig. 5 shows a captured image of a dynamically changing ROI according to an embodiment of the present invention.
Fig. 6 is a block diagram of the road topology estimation module according to an embodiment of the present invention.
Fig. 7 illustrates how the dynamically changing ROI varies with the slope and curvature of the road according to an embodiment of the present invention.
Fig. 8 shows a captured image used for segmentation according to an exemplary embodiment of the present invention.
Fig. 9 shows the output image obtained after segmentation according to an exemplary embodiment of the present invention.
Fig. 10 shows a 3×3 matrix used for segmenting a color image according to an exemplary embodiment of the present invention.
Fig. 11 shows the output image obtained by filtering the segmented image according to an exemplary embodiment of the present invention.
Fig. 12 illustrates the separation of two merged blobs according to an exemplary embodiment of the present invention.
Fig. 13 shows an image in which each blob has been assigned a different label according to an exemplary embodiment of the present invention.
Fig. 14 is a flowchart of a method for preparing the final blob list from the dark and bright frames according to an embodiment of the present invention.
Fig. 15 shows images in which the blobs are classified to identify headlights, taillights, or any other lights according to an exemplary embodiment of the present invention.
Fig. 16 shows an image in which blobs are classified as merged blobs according to an exemplary embodiment of the present invention.
Fig. 17 illustrates a method of identifying valid pairs based on the pairing logic according to an embodiment of the present invention.
Fig. 18 illustrates a method of confirming and validating blobs according to an embodiment of the present invention.
Fig. 19 shows images of blob pairs before and after confirmation and validation according to an exemplary embodiment of the present invention.
Fig. 20 shows merged lights and/or blobs according to an exemplary embodiment of the present invention.
Fig. 21 illustrates a method of identifying valid blobs in a dynamically changing ROI for detecting two-wheelers according to an embodiment of the present invention.
Fig. 22 shows the state transition cycle of the tracking module according to an embodiment of the present invention.
Fig. 23 shows a specific example of estimating the distance between a detected vehicle and the host vehicle with the distance estimation module according to the present invention.
Fig. 24 shows the estimated distances of detected vehicles according to an exemplary embodiment of the present invention.
Fig. 25 is a flowchart of a method of detecting one or more vehicles with a vehicle detection system according to an embodiment of the present invention.
Detailed Description of the Invention
Embodiments of the present invention are described in detail with reference to the accompanying drawings. The present invention is not, however, limited to the embodiments described herein. The sizes, shapes, positions, numbers, and combinations of the various elements of the described apparatus are exemplary only, and those skilled in the art may make various modifications without departing from the scope of the invention. The embodiments are therefore provided only to explain the present invention more clearly to those of ordinary skill in the art to which it pertains. In the drawings, like components are designated by like reference numerals.
Various passages of the specification may refer to "a", "an", or "some" embodiment(s). This does not necessarily mean that every such reference refers to the same embodiment(s), or that the feature in question applies only to a single embodiment. Individual features of different embodiments may also be combined to provide further embodiments.
As used in this application, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless expressly stated otherwise. It should further be understood that the terms "comprises", "consists of", "comprising", and/or "consisting of", as used herein, specify the presence of the stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or combinations thereof. When an element is described as being "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or one or more intervening elements may be present between the two elements. Furthermore, "connected" or "coupled" as used herein may include a real-time connection or coupling. As used herein, the term "and/or" includes any and all combinations and arrangements of one or more of the associated listed items.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. It will be further understood that such terms, as defined in commonly used dictionaries, should be interpreted as having a meaning consistent with their meaning in the context of the relevant art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The present invention describes a vehicle detection system and a method of detecting vehicles at night under low-light conditions. Vehicle detection is required by a variety of applications, such as forward collision warning, dynamic high-beam assist, and automatic high-beam control. The system detects vehicles at night using vehicle lights, such as taillights, as the primary feature. The system provides robust vehicle detection using several different classifiers and identifies and eliminates false objects.
Fig. 1 is a block diagram of a vehicle detection system according to an embodiment of the present invention. The vehicle detection system 100 includes a scene recognition module 101, a road topology estimation module 102, and a vehicle detection module 103. The scene recognition module 101 is configured to receive one of a high-exposure image and a low-exposure image to identify the conditions of one or more scenes/images in a dynamically changing region of interest (ROI). The scene recognition module determines the saturated pixels, brightness, and region variation in the dynamically changing ROI. The road topology estimation module 102 is configured to receive a high-exposure or low-exposure image to determine one or more road features in the dynamically changing ROI, such as the curvature, slope, and vanishing point of the road. In one embodiment, the road topology estimation module is coupled to the scene recognition module to receive the conditions of one or more scenes/images in the dynamically changing ROI. The vehicle detection module 103 is connected to the scene recognition module 101 and the road topology estimation module 102 and is configured to detect one or more vehicles on the road at night, i.e., preceding or oncoming vehicles travelling ahead of the host vehicle. The system works in nighttime conditions on highways and city roads and uses vehicle lights as the primary feature for vehicle detection and for the vehicle classifiers.
An image acquisition unit captures images of the dynamically changing region of interest and provides them to the system 100 (for example, as high- and low-exposure channels with different gains/exposures). The image acquisition unit may be a vehicle camera. The input image is converted into a grayscale image and a color image, which are provided as input to the system 100. The system 100 operates on a region of interest (ROI). The scene recognition module 101 determines scene conditions such as day or night, dark versus bright night, fog, and rain. The road topology estimation module 102 determines road characteristics such as curvature and slope. In one embodiment, the dynamic ROI is an ROI that changes with the curvature and slope of the road. The road topology estimation module 102 and the scene recognition module 101 receive and process alternating sets of high-exposure/gain frames and low-exposure/gain frames. The system 100 is connected to an electronic control unit 104, which receives the output signal produced by the vehicle detection module 103. The electronic control unit 104 processes the received signal in order to display information to, or warn, the user or driver of the vehicle.
In one embodiment, the road topology estimation module 102 computes the dynamically changing ROI based on the following parameters:
·Saturated pixels
·Brightness
·Region variation
·Color
In one embodiment, the scene/image in the dynamically changing region is classified as bright if the above parameters are high throughout the entire ROI.
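As an illustration of this classification step only (not the patented implementation; the threshold values and the reduction to saturation and brightness alone are assumptions), a minimal sketch could look as follows:

```python
import numpy as np

def classify_scene(roi_gray: np.ndarray,
                   sat_level: int = 250,
                   sat_frac_thresh: float = 0.02,
                   brightness_thresh: float = 60.0) -> str:
    """Toy scene classifier: label the grayscale ROI as 'bright' or 'dark'.

    The threshold values are illustrative placeholders, not values from the patent.
    """
    saturated_fraction = float(np.mean(roi_gray >= sat_level))  # share of near-saturated pixels
    mean_brightness = float(roi_gray.mean())                    # overall brightness of the ROI

    if saturated_fraction > sat_frac_thresh and mean_brightness > brightness_thresh:
        return "bright"
    return "dark"
```

In a real system these measures would be accumulated over several frames with hysteresis, as described for the scene recognition module below.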
Fig. 2 is a block diagram of a vehicle detection system according to an embodiment of the present invention. The vehicle detection module 103 includes an image segmentation module 201, a filtering module 202, a blob identification module 203, an object identification module 204, a pair confirmation and validation module 205, a tracking module 207, and a distance estimation module 208. The vehicle detection module 103 further includes a two-wheeler identification module 206.
The image segmentation module 201 is configured to receive input data from the scene recognition module 101 or the road topology estimation module 102 and to provide binary image data. The binary image data contains taillights, headlights, noise, and unwanted information.
The filtering module 202 is coupled to the image segmentation module 201 and is configured to remove noise and unwanted information, such as very small objects, false positives, and the like.
The blob identification module 203 is coupled to the filtering module 202 and is configured to identify one or more blobs in the filtered image and then determine the features of each identified blob. The blob identification module 203 is configured to perform the following steps: assigning a unique label to each of the one or more blobs; determining the features of each labeled blob, the features including the origin, width, height, box area, pixel area, number of red pixels, aspect ratio, and contour of the blob; determining, based on the determined features, one or more fusions of the labeled blobs in a dark frame or a bright frame; and classifying each labeled blob into at least one of headlight, taillight, merged light, and invalid light.
The object identification module 204 is coupled to the blob identification module 203 and is configured to identify objects based on one or more pairing logics. The object identification module 204 is configured to perform at least one of the following steps: determining the horizontal overlap of one or more blobs, determining the aspect ratio of one or more blobs, determining the pixel area ratio of one or more blobs, determining the width ratio of one or more blobs, and determining the pixel-to-box area ratio of one or more blobs.
The pair confirmation and validation module 205 is coupled to the object identification module 204 and is configured to confirm and validate one or more identified blob pairs. The pair confirmation and validation module 205 is configured to perform one or more steps including confirming a single identified blob pair, confirming one or more identified blobs within a row of lights, validating one or more identified blob pairs among two or more identified blob pairs, and confirming merged lights by identifying one or more merged blobs. An identified blob pair is confirmed by determining the width and aspect ratio of the pair, determining the number of unpaired blobs between the pair, and determining the paired and unpaired blob area as a percentage of the pair width.
In one embodiment, confirming one or more identified blobs within a row of lights in the ROI is performed by applying a line-matching algorithm to all rows of lights in the ROI.
The tracking module 207 is coupled to the pair confirmation and validation module 205 and is configured to track one or more confirmed and validated blob pairs through one or more states. The one or more states include an idle state, a pre-tracking state, a tracking state, and an untracking state.
The two-wheeler identification module 206 is configured to determine that an identified object in the ROI is a two-wheeler based on information about one or more blobs received from the blob identification module 203 and the pair confirmation and validation module 205. The information about the one or more blobs includes blobs classified as headlights or taillights by the blob identification module and blobs that pass a close-range rider shape contour classifier. In one embodiment, the blob identification module 203 identifies a single blob in the region of interest (ROI) to determine a two-wheeler.
The tracking module 207 is coupled to the pair confirmation and validation module 205 and to the two-wheeler identification module 206 and is configured to track one or more confirmed and validated blob pairs/blobs through one or more states. The one or more states include an idle state, a pre-tracking state, a tracking state, and/or an untracking state.
The distance estimation module 208 is configured to compute the distance between one or more detected vehicles and the host vehicle as the ratio of the product of the actual width of the detected vehicle and the focal length of the lens to the product of the width of the detected vehicle in the image and the factor that converts camera pixels into meters.
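A minimal sketch of this computation, assuming a focal length expressed in pixels and a hypothetical pixel-to-meter conversion factor (both placeholders rather than values taken from the patent), is:

```python
def estimate_distance_m(real_width_m: float,
                        focal_length_px: float,
                        detected_width_px: float,
                        px_to_m: float = 1.0) -> float:
    """Distance = (real vehicle width x focal length) /
    (detected width in the image x pixel-to-meter factor)."""
    return (real_width_m * focal_length_px) / (detected_width_px * px_to_m)

# Example: a 1.8 m wide car imaged 90 px wide with a 900 px focal length
# yields an estimated distance of about 18 m (with px_to_m = 1.0).
```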
Fig. 3 shows captured images of vehicles in a dynamically changing region of interest according to an exemplary embodiment of the present invention. The system uses vehicle lights as the primary feature for vehicle detection and classification. An image acquisition unit mounted at a predetermined position on the vehicle captures images and provides them to the system 100. The system 100 uses a high-exposure image (Fig. 3a) and a low-exposure image (Fig. 3b). The image acquisition unit provides the captured images as input frames with high exposure/gain and low exposure/gain. As shown in Fig. 4, the input frames are provided to the scene recognition module or the road topology estimation module.
Fig. 5 shows a captured image according to an exemplary embodiment. The image acquisition unit captures an image and provides it to the scene recognition module. The scene recognition module 101 processes the received image and classifies the image/scene as bright/dark/fog/rain if parameters such as saturated pixels, brightness, color, and region variation across the ROI are high. The image/scene classification is accumulated over time, and hysteresis is added before a change of classification is accepted.
Fig. 6 is a block diagram of the road topology estimation module 102 according to an embodiment of the present invention. In this embodiment, the road topology estimation module 102 computes the expected vanishing point of the determined region of interest. The ROI (region of interest) of an image is the region in which potential vehicles are searched for, i.e., the road scene ahead excluding the sky region. The road topology estimation module 102 receives inputs such as an offset estimate, a vanishing point estimate, a pitch estimate, the scene recognition output, and vehicle extrinsic parameters to determine/estimate the road vanishing point.
The dynamic ROI is an ROI that changes with the slope and curvature of the road, as shown in Figs. 7(a) and (b).
For curvature estimation, the road topology estimation module 102 uses the yaw rate and speed of the host vehicle to estimate the curvature of the road ahead.
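A common kinematic approximation for this step, given here only as an illustrative sketch (the patent does not state the exact formula), estimates the curvature as the yaw rate divided by the speed:

```python
def estimate_curvature(yaw_rate_rad_s: float, speed_m_s: float) -> float:
    """Approximate curvature (1/m) of the path ahead of the host vehicle.

    curvature = yaw_rate / speed; the turn radius is its reciprocal.
    A small-speed guard avoids division by zero when the vehicle is almost stationary.
    """
    if abs(speed_m_s) < 0.1:
        return 0.0
    return yaw_rate_rad_s / speed_m_s
```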
For slope estimation, the vehicle detection system uses the following cues to determine the slope of the road ahead:
·Tracking-based matching/registration to determine the offset between consecutive frames
·Input from the LDWS (lane departure warning system)
·The pitch estimate from the feature tracking module
·The scene recognition output, such as day or night
·Vehicle extrinsic parameters such as yaw rate, speed, etc.
The advantages of using a dynamic ROI are as follows:
·Vehicles can be detected even on curved roads
·False positives are reduced
·Unnecessary processing is avoided, which improves system performance
Image Segmentation
Fig. 8 shows an image captured for segmentation according to an exemplary embodiment of the present invention. In this embodiment, the lights visible in the input low-exposure/gain and high-exposure/gain images are segmented using a sliding window with dual thresholds. The threshold of a one-dimensional fixed/variable-length local window is computed from the mean of the window pixel values, a predefined minimum, and a predefined maximum. The predefined minimum can be adjusted according to the brightness of the image: for brighter scenes the minimum threshold is raised further, while for darker scenes the threshold is moved to a predefined value. Either a fixed window or a variable-sized window is used to compute the thresholds over the ROI. The pixel values of the image can be modified based on the thresholds. For example, a seven-level segmented image is formed from seven different thresholds computed from different predefined minima and maxima. The segmentation output is a binary image, as shown in Fig. 9.
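The following sketch illustrates one way such a sliding-window, clamped threshold could be applied along image rows; the window length and the minimum/maximum bounds are assumptions, not the patent's tuned values:

```python
import numpy as np

def segment_lights(gray: np.ndarray,
                   win: int = 32,
                   t_min: int = 120,
                   t_max: int = 230) -> np.ndarray:
    """Binary light segmentation with a one-dimensional sliding-window threshold.

    For each horizontal window the threshold is the window mean clamped between
    a predefined minimum and maximum; pixels above it become candidate light pixels.
    """
    h, w = gray.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for r in range(h):
        for c0 in range(0, w, win):
            window = gray[r, c0:c0 + win]
            thr = float(np.clip(window.mean(), t_min, t_max))
            out[r, c0:c0 + win] = (window > thr).astype(np.uint8)
    return out
```

Running this with several different (t_min, t_max) pairs would produce the multi-level segmented images mentioned above.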
For color image segmentation, the color image and the segmented input image (obtained from the grayscale image) are used as inputs. Based on the color information of the color image, the size of the regions around the already segmented light regions is increased. The values of the pixels adjacent to a segmented light pixel are determined by thresholding the red hue in the 8-neighborhood, so that the size of taillights in low-light conditions or at far range is increased. For example, in the 3×3 matrix shown in Fig. 10, the value of the center pixel is determined from the segmented pixel (i.e., '1') and the color image; for the taillight case the color image should have a red hue. The two-level adaptively segmented image and the color image are then processed to obtain the final segmented image.
Filtering
The segmented binary image shown in Fig. 9 consists of taillights, noise, and unwanted information. Filtering is used to remove the noise and unwanted information. The filtering can be performed using morphological operations or median filtering. Morphological operations such as erosion and dilation are used with a structuring element of size three, so they remove blobs smaller than 3×3. The median filter is designed to remove blobs smaller than 2×3 and 3×2. Depending on the scene (erosion for brighter scenes and median filtering for darker scenes), the filtering is applied to the segmented image. All the segmented images obtained with the different scene-based thresholds are filtered. The output of the filtering module is the filtered image, as shown in Fig. 11. In the filtered image, the prominent groups of segmented pixels (blobs) are identified.
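A sketch of this scene-dependent filtering, using standard morphological and median operations (the use of scipy routines here is an assumption; any equivalent implementation would do), is:

```python
import numpy as np
from scipy import ndimage

def filter_segmented(binary: np.ndarray, scene: str) -> np.ndarray:
    """Remove small blobs from the segmented binary image.

    Brighter scenes: 3x3 erosion (removes blobs smaller than 3x3).
    Darker scenes:   3x3 median filter (removes blobs smaller than 2x3 / 3x2).
    """
    if scene == "bright":
        return ndimage.binary_erosion(binary.astype(bool),
                                      structure=np.ones((3, 3))).astype(np.uint8)
    return ndimage.median_filter(binary.astype(np.uint8), size=3)
```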
Separating merged blobs:
The system 100 also includes a sub-module for separating merged blobs (two taillights/headlights, a taillight/headlight and another light, a taillight/headlight and a reflection, and so on) in the filtered image. The system 100 applies a two-level erosion with a 3×3 kernel to the segmented image to identify and separate two merged blobs. The filtered image then goes through the following process to separate two merged blobs, as shown in Fig. 12 (a sketch of these rules follows below):
·If the filtered image has one blob and the two-level eroded image has two blobs at the same location, the blob in the filtered image is broken by cutting vertically through the region where the two blobs overlap.
·If the filtered image has one blob and the two-level eroded image has no blob or one blob at the same location, the blob in the filtered image is kept.
·If the filtered image has one blob and the two-level eroded image has more than two blobs at the same location, no change is made.
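A simplified sketch of these rules is shown below. It labels the filtered and the twice-eroded images, counts how many eroded blobs fall inside each filtered blob, and cuts the filtered blob along a vertical line between the two eroded blobs; the labeling routine and the choice of the cut column are assumptions about one possible implementation:

```python
import numpy as np
from scipy import ndimage

FOUR_CONN = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])  # 4-connectivity

def split_merged_blobs(filtered: np.ndarray, eroded2: np.ndarray) -> np.ndarray:
    """Split blobs of `filtered` that appear as exactly two blobs in the
    two-level eroded image `eroded2`; leave all other blobs unchanged."""
    out = filtered.copy()
    lbl_f, n_f = ndimage.label(filtered, structure=FOUR_CONN)
    lbl_e, _ = ndimage.label(eroded2, structure=FOUR_CONN)

    for i in range(1, n_f + 1):
        mask = lbl_f == i
        sub = np.unique(lbl_e[mask])
        sub = sub[sub > 0]
        if len(sub) != 2:                       # zero, one, or more than two: no change
            continue
        cols_a = np.where((lbl_e == sub[0]) & mask)[1]
        cols_b = np.where((lbl_e == sub[1]) & mask)[1]
        # Vertical cut midway through the region between/overlapping the two blobs.
        cut = (min(cols_a.max(), cols_b.max()) + max(cols_a.min(), cols_b.min())) // 2
        out[mask[:, cut], cut] = 0              # break the filtered blob at that column
    return out
```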
Blob Identification
The blob identification module 203 identifies the different types of blobs in the filtered image and computes their features. The following steps are performed to identify the blobs:
·Blob labeling
·Blob feature computation
·Blob fusion
·Blob classification
Blob labeling
Fig. 13 shows an image in which each blob has been assigned a different label according to an exemplary embodiment of the present invention. Labeling uses 4-connectivity: the same label is assigned to a group of pixels if they are connected under 4-connectivity. After a label has been assigned to each blob, information such as the start row, end row, start column, end column, assigned label, and pixel area is stored in an array.
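A sketch of 4-connected labeling and of storing the per-blob bookkeeping information in an array of records is given below; the use of a generic connected-component routine is an assumption, since the patent does not prescribe a particular implementation:

```python
import numpy as np
from scipy import ndimage

FOUR_CONN = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])  # 4-connectivity

def label_blobs(binary: np.ndarray):
    """Label 4-connected blobs and collect bounding boxes and pixel areas."""
    labels, count = ndimage.label(binary, structure=FOUR_CONN)
    blobs = []
    for lbl in range(1, count + 1):
        rows, cols = np.where(labels == lbl)
        blobs.append({
            "label": lbl,
            "start_row": int(rows.min()), "end_row": int(rows.max()),
            "start_col": int(cols.min()), "end_col": int(cols.max()),
            "pixel_area": int(rows.size),
        })
    return labels, blobs
```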
Blob features
After each blob has been assigned a label, the following features are computed (an illustrative sketch follows the list):
·Origin of the blob, indicating whether the blob comes from the dark frame or the bright frame.
·Width, the difference between the end column and the start column.
·Height, the difference between the end row and the start row.
·Box area, the product of width and height (i.e., width × height).
·Pixel area, the total number of white pixels inside the box.
·Number of red pixels, the total number of red pixels based on the hue value.
·Aspect ratio, min(width, height) / max(width, height).
·Blob contour, the shape of the blob.
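One possible container for these per-blob features is sketched below; the field names and the derived aspect-ratio and box-area computations are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class BlobFeatures:
    origin: str                 # "dark_frame" or "bright_frame"
    width: int                  # end column minus start column
    height: int                 # end row minus start row
    pixel_area: int             # white pixels inside the bounding box
    red_pixels: int             # red pixels counted from the hue channel
    contour: List[Tuple[int, int]] = field(default_factory=list)  # blob outline points

    @property
    def box_area(self) -> int:
        return self.width * self.height

    @property
    def aspect_ratio(self) -> float:
        return min(self.width, self.height) / max(self.width, self.height)
```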
Blob fusion
Fig. 14 is a flowchart of a method for preparing the final blob list from the dark and bright frames according to an embodiment of the present invention. In step 1401, the blobs of the low-exposure/gain frame and the blobs of the high-exposure/gain frame are received and the overlap between them is determined. If there is no overlap, the blob of the high-exposure/gain frame is checked in step 1402 to determine whether it is a potential taillight blob, and a non-overlapping blob of the low-exposure/gain frame is passed/allowed into the blob list; blobs of the low-exposure/gain frame may arise from reflections. If the high-exposure/gain frame blob checked in step 1402 is not a potential taillight, it is discarded in step 1404. If the high-exposure/gain frame blob checked in step 1402 is a potential taillight, it is passed/allowed into the blob list in step 1405. If, in step 1401, one or more blobs overlap, the blob of the low-exposure/gain frame is passed/allowed into the blob list in step 1406. In step 1407, the final blob list is prepared; a simplified sketch of this fusion is given after the criteria below.
The blob list is prepared based on the following criteria:
·A blob should lie within the estimated horizon area.
·A blob should not have any other overlapping blob below it.
·A blob should not have regions of the same brightness between itself and its neighboring blobs.
·A horizontal ROI defined as 35-65% of the total width is used to determine potential candidates in the bright frame.
·Blobs passed mainly from the bright frame occur in pairs; if a blob is a merged blob (high beam/low beam), it will come through the dark frame. Two blobs are paired based on their horizontal overlap, and the probability that the overlap threshold is missed because of motion between consecutive frames is very low.
·If one large blob of the bright frame corresponds to more than one blob of the dark frame, the blobs are taken from the dark frame.
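A simplified sketch of this overlap-based fusion of dark-frame and bright-frame blobs, loosely following the flow of Fig. 14 (the bounding-box overlap test and the taillight check are reduced to placeholders), is:

```python
def boxes_overlap(a: dict, b: dict) -> bool:
    """Axis-aligned bounding-box overlap between two blob records."""
    return not (a["end_col"] < b["start_col"] or b["end_col"] < a["start_col"] or
                a["end_row"] < b["start_row"] or b["end_row"] < a["start_row"])

def fuse_blob_lists(dark_blobs, bright_blobs, is_potential_taillight):
    """Prepare the final blob list from the dark (low-exposure) and bright
    (high-exposure) frame blobs, roughly following steps 1401-1407 of Fig. 14."""
    final = []
    for hb in bright_blobs:
        overlaps = [db for db in dark_blobs if boxes_overlap(hb, db)]
        if overlaps:
            for db in overlaps:               # step 1406: keep the dark-frame blob(s)
                if db not in final:
                    final.append(db)
        elif is_potential_taillight(hb):
            final.append(hb)                  # step 1405: keep the potential taillight
        # otherwise: step 1404, discard the bright-frame blob
    for db in dark_blobs:                     # non-overlapping dark-frame blobs pass through
        if db not in final and not any(boxes_overlap(db, hb) for hb in bright_blobs):
            final.append(db)
    return final                              # step 1407: final blob list
```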
Blob classification
Fig. 15 shows images in which the blobs are classified to identify headlights, taillights, or any other lights according to an exemplary embodiment of the present invention. Once the final blob list has been obtained, the blobs are classified into headlights, taillights, merged lights, and other lights. If the red score (number of red pixels) of a blob is greater than a predetermined threshold, the blob is a taillight. The taillight classification is shown in blue in Fig. 15(a). A blob is classified as a headlight based on the following criteria:
a) there is a blob below it caused by reflection; and/or
b) there is horizontal overlap between the blobs and the height ratio between the two blobs is less than half the maximum vehicle width; and/or
c) the blob has a minimum with two maxima next to it, where the minimum/maximum pattern is determined from the vertical profile of that blob.
After the taillight and headlight classification, all headlights that have a low red score and all headlights of very small size are removed from the list by marking them as invalid blobs. In addition, any blob that has more than one blob below it is also marked as an invalid blob.
To classify the above blobs as merged blobs, patterns such as 101 and 111 are checked, where 0 corresponds to a minimum position and 1 to a maximum position. To determine the pattern, a blob is divided into three segments, and the minimum and maximum positions of each segment are determined using the filtered image. Using these values, the ratio of the center segment to the left and right segments is determined to check for the 101 pattern; for the 111 pattern, the ratio of the left and right segments to the center segment is determined.
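The 101/111 pattern test can be sketched as follows; the way the blob is split into three segments and the ratio threshold are illustrative assumptions rather than the patent's exact procedure:

```python
import numpy as np

def merged_light_pattern(column_profile: np.ndarray, ratio_thresh: float = 1.5) -> str:
    """Classify a blob's horizontal intensity profile as '101', '111', or 'none'.

    The profile is split into three segments; '1' marks a bright (maximum-like)
    segment and '0' a dark (minimum-like) segment.
    """
    left, centre, right = (float(seg.max())
                           for seg in np.array_split(column_profile.astype(float), 3))
    side_mean = (left + right) / 2.0
    if side_mean > ratio_thresh * max(centre, 1e-6):
        return "101"        # two bright lobes around a darker centre
    if centre > 0 and max(left, right) / centre < ratio_thresh:
        return "111"        # three comparably bright segments
    return "none"
```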
In one embodiment, the blob identification module 203 identifies a single blob in the region of interest (ROI) to determine a two-wheeler.
Fig. 16 shows an image according to an exemplary embodiment of the present invention in which one or more blobs are classified as merged blobs. If a blob is classified as merged but its size is very small, the blob is treated as a taillight or a low-beam headlight, based on its earlier classification as a taillight or headlight.
Pairing Logic
Fig. 17 illustrates the process of identifying valid pairs based on the pairing logic according to an embodiment of the present invention. The system 100 also includes a pairing logic module, which follows a heuristic-based approach to determine a pair of taillights from multiple blobs. The pairing process follows the criteria listed below:
1. Check the horizontal overlap of the blobs,
2. Check the aspect ratio of the blobs,
3. Check the pixel area ratio of the blobs,
4. Check the width ratio of the blobs,
5. Check the pixel-to-box area ratio of the blobs,
6. For larger blobs, a check is performed to match the shapes of the blobs. The shape of a blob is obtained as the difference between the original blob image and its eroded version, where erosion is performed with a structuring element of size 3. Cosine similarity is used to compare the blob shapes.
Cosine similarity measures the similarity between the vectors of two blobs:
cosine similarity = A·B / (||A|| ||B||),
where
||A|| is the magnitude of vector A, and
||B|| is the magnitude of vector B.
The final confidence score is computed from the weighted scores obtained from the above checks. Pairing is performed for entries of the score matrix that are above a threshold. The pairing stage is deliberately permissive and uses very low thresholds, so that unbalanced taillights, switched-on side lights, and slightly mismatched blobs can still be paired.
The pairs determined above are then subjected to a vehicle-width check based on the dynamic ROI. The core of this logic is a dynamic triangle that is precomputed and loaded at system initialization (the ROI is kept up to date from the camera and vehicle parameters).
The row width at the geometric center of a pair must lie between the row widths of the minimum and maximum triangles (the dynamic ROI), as shown in Fig. 17(a). The output of the pairing logic (Fig. 17(b)) is the set of possible vehicle pairs.
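The checks above can be combined into a weighted confidence score; the sketch below shows one such combination, with the cosine similarity used for the shape check. The individual weights, the score normalisations, and the field names (including the assumed "shape_vec" entry) are placeholders rather than the patent's tuned values:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity A.B / (||A|| ||B||) between two blob shape vectors."""
    denom = float(np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.dot(a, b)) / denom if denom > 0 else 0.0

def pairing_score(left: dict, right: dict) -> float:
    """Weighted confidence that two blobs form a taillight pair (0..1)."""
    w = {"overlap": 0.3, "aspect": 0.2, "area": 0.2, "width": 0.15, "shape": 0.15}
    # Horizontal overlap: how much the two bounding boxes share the same image rows.
    row_overlap = max(0, min(left["end_row"], right["end_row"]) -
                         max(left["start_row"], right["start_row"]))
    overlap_score = row_overlap / max(1, min(left["height"], right["height"]))
    aspect_score = (min(left["aspect_ratio"], right["aspect_ratio"]) /
                    max(left["aspect_ratio"], right["aspect_ratio"]))
    area_score = (min(left["pixel_area"], right["pixel_area"]) /
                  max(1, max(left["pixel_area"], right["pixel_area"])))
    width_score = (min(left["width"], right["width"]) /
                   max(1, max(left["width"], right["width"])))
    shape_score = cosine_similarity(left["shape_vec"], right["shape_vec"])
    return (w["overlap"] * overlap_score + w["aspect"] * aspect_score +
            w["area"] * area_score + w["width"] * width_score + w["shape"] * shape_score)
```

A pair whose score exceeds the (deliberately low) pairing threshold would then be passed on to the dynamic-ROI width check described above.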
Pair Confirmation and Validation (V&V)
The pair confirmation and validation module 205 confirms and validates the pairs and the merged blobs. The input to module 205 is the set of all possible pairs, as shown in Fig. 19(a). The module 205 is divided into two sub-modules: a pair confirmation module and a merged light confirmation module.
Pair confirmation
Fig. 18 illustrates a method of confirming and validating blobs according to an embodiment of the present invention. Pair confirmation is performed for a single pair, for pairs within a row of lights, and between multiple pairs.
Validation of a single pair:
Fig. 18(a) shows a method of validating a single blob pair according to an embodiment. To validate a single blob pair, the following conditions are checked:
1. The pair width and aspect ratio
2. The number of unpaired blobs between the pair
3. The area of the paired and unpaired blobs as a percentage of the pair width
Validation of pairs within a row of lights:
In one embodiment, a pair of blobs formed within a row of lights, such as reflections or street lamps, needs to be validated. In most cases both reflections and street lamps are arranged in a line. A line-matching algorithm is applied to the row of lights. If a row of lights lies on a straight line and the intersection ratio between consecutive blob pairs is the same, the pairs formed by those lights are invalid.
Validation between pairs:
Figs. 18(b) to 18(j) show a method of validating blob pairs according to an embodiment. The following rules are used to determine the actual pairing from two blob pairs in the ROI (a sketch of the column-overlap test follows the rules):
1. If the two pairs share the same blob and their column overlap is very high or very low, the pair with the smaller width is eliminated, as shown in Figs. 18(b) and (c).
2. If the two pairs share the same blob, their column overlap is neither very high nor very low, and the middle blob is asymmetrically distributed, the pair with the larger width is eliminated, as shown in Figs. 18(d) and (e).
3. If the two pairs have column overlap but no row overlap and the width and height of the lower pair are both smaller than those of the upper pair, the lower pair is eliminated; if instead the height and intensity of the lower pair are greater than those of the upper pair and its width is smaller, the upper pair is eliminated; if the column overlap is high and the widths and heights are the same, the pair with the lower intensity is eliminated, as shown in Figs. 18(f), (g), and (h).
4. If the two pairs have column and row overlap and are asymmetric, the wider pair is eliminated; if the column overlap shows good symmetry, the pair with the smaller width is eliminated, as shown in Figs. 18(i) and (j).
5. If the two pairs have very little column overlap and one pair lies inside the other, the inner pair is eliminated, as shown in Fig. 18(k).
6. If the two pairs have very little column overlap and one pair lies below the other, the lower pair is eliminated, as shown in Fig. 18(l).
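A small helper for the column-overlap test used throughout these rules is sketched below; the "very high"/"very low" cut-offs given in the comment are assumptions:

```python
def column_overlap_ratio(pair_a, pair_b) -> float:
    """Fraction of the narrower pair's column span covered by the other pair.

    Each pair is given as (start_col, end_col) of its combined bounding box.
    Returns 0.0 when the two column spans do not overlap at all.
    """
    a0, a1 = pair_a
    b0, b1 = pair_b
    overlap = min(a1, b1) - max(a0, b0)
    if overlap <= 0:
        return 0.0
    return overlap / max(1, min(a1 - a0, b1 - b0))

# Example (placeholder) interpretation: a ratio above ~0.9 is treated as
# "very high" and below ~0.1 as "very low" when applying rules 1-6 above.
```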
Fig. 19 shows images of blob pairs before and after confirmation and validation according to an exemplary embodiment of the present invention. Fig. 19(a) shows four identified blob pairs before confirmation and validation, while Fig. 19(b) shows the three valid, validated pairs that remain after confirmation and validation based on the method described in Fig. 18. After the pair confirmation of Fig. 19, the pairs that satisfy the four-wheeler criteria are passed to the tracking system and the remaining pairs are eliminated.
Merged Light Confirmation
Fig. 20 shows merged lights/blobs according to an exemplary embodiment of the present invention. In one embodiment, a merged light corresponds to the headlights of a distant vehicle and needs to be confirmed. To confirm merged lights, the following criteria are applied:
1. If the merged light has longitudinal overlap with a preceding four-wheeled vehicle and lies below it, it is invalid.
2. If the merged light has longitudinal overlap with an oncoming vehicle and is classified as a taillight, noise, or unwanted information, and if the merged light lies below the oncoming vehicle, the merged light is invalid.
3. If the merged light has longitudinal overlap with a blob pair whose shape match is above a first predetermined shape-matching threshold and the merged light score is below a second predetermined threshold, or if the merged light has longitudinal overlap with a blob pair whose shape match is below the first predetermined threshold, the merged light is invalid.
4. If the merged light has longitudinal and lateral overlap with an existing four-wheeler pair, the merged light is invalid.
5. If the merged light has longitudinal overlap but no lateral overlap, the shape match is above a predetermined threshold, and the merged blob score is below a predetermined threshold, the merged light is invalid.
6. If, in the above cases, the smaller merged light is gradually eliminated, the tracking of the merged light is then checked; if its longitudinal overlap, lateral overlap, area ratio, and height ratio are within predetermined thresholds, the merged blob is valid.
Two-Wheeler Detection
Fig. 21 illustrates a method of identifying valid blobs for detecting two-wheelers in a dynamically changing ROI according to an embodiment of the present invention. The two-wheeler detection module 206 uses the blob classification information. The module 206 detects both preceding and oncoming vehicles. Unpaired blobs that are not classified as merged lights and that are classified as headlights or taillights are treated as possible two-wheelers, and additional checks, such as the road slope, a classifier-based rider contour check, and the blob motion, are performed to confirm that the blob is a two-wheeler. Checks for spatial plausibility are also performed: for example, for left-hand drive the blobs of an oncoming headlight should lie in the region to the right of the host vehicle, whereas for right-hand drive these blobs should lie in the region to the left. In addition, these blobs must not have any longitudinal overlap with a pair. Blobs that satisfy the above conditions are considered a two-wheeler. Fig. 21 shows two examples in which the identified blobs do not satisfy the above conditions and are therefore invalid.
Tracking system
Fig. 22 shows the state transition cycle of the tracking module according to an embodiment of the present invention. The operation of the tracking module 207 is divided into four stages: idle 2201, pre-tracking 2202, tracking 2203, and untracking 2204. By default, the tracking module/tracking system is in the idle state 2201. Once an object is identified as a patch pair (for a four-wheeler) or a single patch (for a two-wheeler), a new track is initiated (if no matching active track already exists) and the state changes from the idle state 2201 to the pre-tracking state 2202. The pre-tracking state 2202 is used to reconfirm the existence of the patch pair/patch. The conditions for verifying a pre-tracked object as a patch pair/patch and transitioning it to the tracking state 2203 are listed below (a sketch of this promotion check follows the list); a pre-tracked object moves to the tracking state 2203 only if:
· It is detected with a good confidence score in "N" frames. The confidence of the patch pair/patch in each frame is obtained from the detection confidence returned by the pairing logic/two-wheeler detection.
· It has a high frequency of occurrence.
· It has a good movement score (four-wheeled vehicles only). The tracking system keeps track of the behaviour of the patch pair; the two patches are expected to move in step. Movement of the two patches in opposite directions is permitted only for a vehicle ahead, i.e. a vehicle travelling towards or away from the host vehicle.
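A minimal sketch of the pre-track-to-track promotion decision listed above, assuming simple per-frame bookkeeping. The value of N and all thresholds are hypothetical placeholders for the variable observation window described later in this section.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PreTrack:
    confidences: List[float] = field(default_factory=list)  # per-frame detection confidence
    frames_observed: int = 0
    frames_detected: int = 0
    movement_score: float = 0.0   # coherence of the two patches (four-wheelers only)
    is_four_wheeler: bool = True

def promote_to_tracking(t: PreTrack,
                        n_frames: int = 5,          # hypothetical observation window 'N'
                        min_confidence: float = 0.6,
                        min_frequency: float = 0.8,
                        min_movement: float = 0.5) -> bool:
    # Wait until the 'N'-frame observation window has been filled.
    if t.frames_observed < n_frames or len(t.confidences) < n_frames:
        return False
    good_score = sum(t.confidences[-n_frames:]) / n_frames >= min_confidence
    frequent = t.frames_detected / t.frames_observed >= min_frequency
    moves_well = (not t.is_four_wheeler) or t.movement_score >= min_movement
    return good_score and frequent and moves_well
```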
In the tracking state 2203, the tracked object is continuously predicted and updated using a Kalman filter; any other suitable filter known in the art may be used. If the tracked object is missing in a particular frame, the Kalman prediction is used to display the bounding box, and at the same time the tracking system changes from the tracking state 2203 to the untracking state 2204. In the untracking state 2204, the object is verified over "M" frames. In addition, the untracking state 2204 attempts to improve the continuity of a good track (i.e. a high number of frames being tracked) by:
· searching a large area for a free-form match consistent with the motion constraints
· searching in the high-gain/high-exposure frame if the environment is very dark
· attempting to match a single patch to extend the life of the track if it is close to a vehicle pairing
· searching the neighbourhood using the classifier if it is close to a two-wheeler or four-wheeler
In the untracking state 2204, the pairing confidence over "M" frames is used to decide whether to move the track back to the tracking state 2203 or to the idle state 2201 (not a valid pairing); the overall state cycle is sketched below.
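The state cycle described above can be summarised by the sketch below. It is a simplified illustration under assumed transition conditions, not the patent's implementation; the Kalman filter internals and the fallback of a failed pre-track to the idle state are omitted, and the M-frame window is a placeholder value.

```python
from enum import Enum, auto

class TrackState(Enum):
    IDLE = auto()       # 2201
    PRE_TRACK = auto()  # 2202
    TRACK = auto()      # 2203
    UNTRACK = auto()    # 2204

def next_state(state: TrackState,
               detected: bool,          # patch pair/patch matched in the current frame
               promoted: bool,          # 'N'-frame promotion check passed
               reacquired: bool,        # pairing confidence regained while untracking
               frames_missed: int = 0,
               m_frames: int = 5) -> TrackState:  # hypothetical 'M'-frame window
    if state is TrackState.IDLE:
        # A new detection starts a pre-track (if no matching active track exists).
        return TrackState.PRE_TRACK if detected else TrackState.IDLE
    if state is TrackState.PRE_TRACK:
        return TrackState.TRACK if promoted else TrackState.PRE_TRACK
    if state is TrackState.TRACK:
        # A missed detection moves the track to UNTRACK; the Kalman prediction
        # is still used to display the bounding box for that frame.
        return TrackState.TRACK if detected else TrackState.UNTRACK
    # UNTRACK: recover within 'M' frames or fall back to IDLE.
    if reacquired:
        return TrackState.TRACK
    return TrackState.UNTRACK if frames_missed < m_frames else TrackState.IDLE
```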
Therefore, during tracking, the pre-tracking and multi-frame confirmation stages reject false detections, while the tracking and untracking states tend to fill the detection gaps of the corresponding objects.
The observation window of "N" frames in the pre-tracking state and of "M" frames in the untracking state is variable. Several factors make the tracking-state change decision dynamic, such as the patch pair/patch category, the patch pair/patch score and its movement over time, the pair width, intermittent patch pairs/patches, and curve/slope conditions.
Distance estimation
Fig. 23 shows an example of using the distance estimation module 208 to estimate the distance between a detected vehicle and the host vehicle according to the present invention. The distance estimation module 208 is configured to calculate the distance between at least one detected vehicle and the host vehicle as the ratio of the product of the actual width of the detected vehicle and the focal length of the lens to the product of the width of the detected vehicle in the image and the coefficient that converts camera pixels to meters.
In one embodiment, perspective geometry is used to estimate the distance. If three pairs of corresponding vertices of the vehicle are joined by three straight lines intersecting at a single vertex, two triangles in perspective from that vertex are formed.
In the perspective method, the distance between the detected vehicle and the host vehicle is estimated using the following formula, whose geometry is illustrated in Fig. 23:

D = (f × W) / (w × k)

where
f: focal length of the lens (mm)
W: actual width of the vehicle (m)
w: width of the vehicle in the image (pixels)
k: coefficient converting the pixels of the CCD camera to meters, and
D: distance to the target vehicle (m)
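A minimal numerical sketch of this distance estimate. The sample values for the focal length, vehicle width, image width, and pixel pitch are illustrative assumptions only.

```python
def estimate_distance_m(f_mm: float, W_m: float, w_px: float, k_m_per_px: float) -> float:
    """Distance D = (f * W) / (w * k), with the focal length converted from mm to m."""
    f_m = f_mm / 1000.0
    return (f_m * W_m) / (w_px * k_m_per_px)

# Illustrative example: a 6 mm lens, a 1.8 m wide vehicle spanning 60 pixels,
# and a 4.2 um pixel pitch (k = 4.2e-6 m/pixel).
print(round(estimate_distance_m(6.0, 1.8, 60.0, 4.2e-6), 1))  # ~42.9 m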
Fig. 24 shows the estimated distances of the finally detected vehicles according to an exemplary embodiment of the present invention. The vehicle detection system 100 has detected three vehicles, which are marked with rectangular bounding boxes.
Fig. 25 shows a method for detecting one or more vehicles using the vehicle detection system according to an embodiment of the present invention. In step 2501, a high-exposure image or a low-exposure image is received by a scene recognition module or a road topology estimation module. In step 2502, the scene recognition module identifies one or more scene conditions in a dynamically changing region of interest (ROI). In step 2503, at least one of the curve, the slope and the vanishing point of the road is determined within the dynamically changing ROI. In step 2504, a filtering module removes noise and unwanted information. In step 2505, one or more patches are identified in the filtered image. In step 2506, the characteristics of each identified patch are determined. In step 2507, one or more objects are identified from the one or more patches identified in the dynamically changing ROI using at least one pairing logic. In step 2508, the one or more identified patch pairs are confirmed and verified. A sketch of this flow follows.
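A minimal sketch of the overall flow of steps 2501-2508. The module interfaces and function names are hypothetical placeholders for the modules described in this document, shown only to make the ordering of the steps concrete.

```python
def detect_vehicles(frame, scene_module, road_module, filter_module,
                    patch_module, pairing_module, verification_module):
    """Hypothetical end-to-end flow mirroring steps 2501-2508 of Fig. 25."""
    scene = scene_module.identify_scene(frame)               # 2501-2502: scene conditions in the ROI
    roi = road_module.estimate_topology(frame, scene)        # 2503: curve / slope / vanishing point
    filtered = filter_module.remove_noise(frame, roi)        # 2504: filtering
    patches = patch_module.identify_patches(filtered, roi)   # 2505: patch identification
    features = [patch_module.describe(p) for p in patches]   # 2506: patch characteristics
    candidates = pairing_module.pair(features, roi)          # 2507: pairing logic
    return verification_module.confirm(candidates)           # 2508: confirmation and verification
```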
Although specific embodiments of the system and method of the present invention have been described in detail with reference to the accompanying drawings, the invention is not limited thereto. Various substitutions, modifications and variations in patch identification, patch classification, pairing logic, and pairing confirmation and verification will be apparent to those skilled in the art without departing from the scope and spirit of the invention.