WO2018068313A1 - Vehicle detection device, vehicle counting device, and method - Google Patents

Vehicle detection device, vehicle counting device, and method

Info

Publication number
WO2018068313A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
area
detection result
input image
unit
Prior art date
Application number
PCT/CN2016/102158
Other languages
English (en)
French (fr)
Inventor
Yang Yawen (杨雅文)
Original Assignee
Fujitsu Limited (富士通株式会社)
Yang Yawen (杨雅文)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Limited (富士通株式会社) and Yang Yawen (杨雅文)
Priority to PCT/CN2016/102158 priority Critical patent/WO2018068313A1/zh
Publication of WO2018068313A1 publication Critical patent/WO2018068313A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/065Traffic control systems for road vehicles by counting the vehicles in a section of the road or in a parking area, i.e. comparing incoming count with outgoing count

Definitions

  • The present invention relates to the field of information technology, and in particular to a vehicle detection device, a vehicle counting device, and a method.
  • Existing vehicle detection methods generally target well-lit scenes, such as daytime, and detect and count vehicles in a surveillance image using a classifier or the like. For poorly lit scenes, such as night, general vehicle detection methods have difficulty detecting and counting vehicles effectively. In addition, some vehicle detection methods designed specifically for night scenes use special algorithms for detection; these are complex and cannot be applied to daytime scenes.
  • An embodiment of the present invention provides a vehicle detection device, a vehicle counting device, and a method, which respectively perform classifier-based vehicle area detection and brightness-based vehicle light area detection and combine the two detection results as the detection result of the vehicle. This can effectively improve the accuracy of the detection results and prevent missed detection. In addition, it is well suited to various scenes such as daytime and night, and has a wide range of applications.
  • A vehicle detection apparatus comprising: a first detection unit for detecting a vehicle area in an input image using a classifier to obtain a vehicle area detection result; a second detection unit for detecting a vehicle light area in the input image according to the brightness of the input image to obtain a vehicle light area detection result; and a merging unit configured to combine the vehicle area detection result and the vehicle light area detection result to obtain a vehicle detection result of the input image.
  • A vehicle counting device comprising: a vehicle detection device according to the first aspect of the present invention; an establishing unit for establishing a vehicle trajectory set according to the vehicle detection results of the input images; and a counting unit configured to count the vehicles in the input images according to the established vehicle trajectory set.
  • A vehicle detection method comprising: detecting a vehicle area in an input image using a classifier to obtain a vehicle area detection result; detecting a vehicle light area in the input image according to the brightness of the input image to obtain a vehicle light area detection result; and combining the vehicle area detection result and the vehicle light area detection result to obtain a vehicle detection result of the input image.
  • The invention has the beneficial effects of performing classifier-based vehicle area detection and brightness-based vehicle light area detection respectively and combining the two detection results as the detection result of the vehicle, thereby effectively improving the accuracy of the detection results and preventing missed detection.
  • It is well suited to various scenes such as daytime and night, and has a wide range of applications.
  • FIG. 1 is a schematic view of a vehicle detecting device according to Embodiment 1 of the present invention.
  • FIG. 2 is a schematic diagram of a merging unit 103 according to Embodiment 1 of the present invention.
  • FIG. 3 is a schematic diagram of a method for combining the vehicle area detection result and the vehicle light area detection result according to Embodiment 1 of the present invention.
  • FIG. 4 is a schematic diagram of a pairing unit 203 according to Embodiment 1 of the present invention.
  • FIG. 5 is a schematic diagram of expanding the matching vehicle light region pair into a vehicle region according to Embodiment 1 of the present invention.
  • Figure 6 is a schematic view of a vehicle counting device according to a second embodiment of the present invention.
  • FIG. 7 is a schematic diagram of an electronic device according to Embodiment 3 of the present invention.
  • FIG. 8 is a schematic block diagram showing the system configuration of an electronic device according to Embodiment 3 of the present invention.
  • Figure 9 is a schematic diagram of a vehicle detecting method according to a fourth embodiment of the present invention.
  • Fig. 1 is a schematic view showing a vehicle detecting device according to a first embodiment of the present invention. As shown in FIG. 1, the device 100 includes:
  • a first detecting unit 101 configured to detect a vehicle area in the input image using a classifier, to obtain a vehicle area detection result
  • a second detecting unit 102 configured to detect a vehicle light area in the input image according to brightness of the input image, to obtain a vehicle light area detection result
  • the merging unit 103 is configured to combine the vehicle area detection result and the vehicle light area detection result to obtain a vehicle detection result of the input image.
  • the classifier-based vehicle area detection and the brightness-based vehicle light area detection are respectively performed, and the two detection results are combined to obtain the detection result of the vehicle, thereby effectively improving the accuracy of the detection result and preventing missed detection.
  • it can be well applied to various scenes such as day and night, and has a wide range of applications.
  • the input image may be a surveillance image, which may be obtained according to existing methods. For example, it can be obtained by installing a camera above the area to be monitored.
  • the input image may be a frame image, and may also include consecutive multi-frame images in the surveillance video for a period of time.
  • the detection can be performed frame by frame.
  • the first detecting unit 101 detects the vehicle area in the input image using the classifier, and obtains the vehicle area detection result.
  • the entire input image may be detected, or a predetermined area in the input image may be detected, for example, only a Region of Interest (ROI) in the input image is detected.
  • the first detecting unit 101 may detect a vehicle area in the input image using an existing classifier, for example, a Support Vector Machine (SVM) classifier, a Bayesian classifier, or the like.
  • the second detecting unit 102 and the first detecting unit 101 perform detection independently of each other; the second detecting unit 102 detects the vehicle light area in the input image according to the brightness of the input image and obtains the vehicle light area detection result.
  • the second detecting unit 102 may determine an area of the input image whose brightness is greater than the first threshold as the vehicle light area.
  • the second detecting unit 102 may further determine an area of the input image whose brightness is greater than the first threshold and whose color variability is less than the second threshold as the vehicle light area.
  • the color variability is used to measure the degree of color change.
  • the color variability refers to the variance value of RGB three channels per pixel.
  • the accuracy of the vehicle light area detection can be further improved by combining the brightness with the color variability.
  • the first threshold and the second threshold may be set according to actual needs.
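The brightness and color-variability test described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: using the per-pixel channel mean as brightness and the per-pixel RGB variance as color variability follows the text, but the function name and the concrete threshold values are hypothetical.

```python
import numpy as np

def detect_light_pixels(image, brightness_thresh=200.0, variability_thresh=100.0):
    """Return a boolean mask of candidate vehicle-light pixels.

    A pixel is a candidate when its brightness (mean of the RGB channels)
    exceeds brightness_thresh and its color variability (variance of the
    three channel values) stays below variability_thresh, i.e. the pixel
    is bright and near-white rather than bright and strongly colored.
    """
    img = image.astype(np.float64)
    brightness = img.mean(axis=2)   # per-pixel mean over R, G, B
    variability = img.var(axis=2)   # per-pixel variance over R, G, B
    return (brightness > brightness_thresh) & (variability < variability_thresh)

# A bright near-white pixel qualifies; a saturated red pixel does not.
frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[0, 0] = (250, 250, 250)   # headlight-like
frame[0, 1] = (255, 0, 0)       # strongly colored: fails the tests here
mask = detect_light_pixels(frame)
```

Connected pixels passing the test would then be grouped into vehicle light areas, for example with a connected-components pass.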
  • the vehicle lamp may include a headlight and a taillight, etc., wherein the detected area of the lamp is mainly the area where the headlight is located, because the brightness of the light emitted by the headlight is high.
  • the merging unit 103 is configured to combine the vehicle area detection result and the vehicle light area detection result to obtain a vehicle detection result of the input image.
  • the configuration of the merging unit 103 of the present embodiment and the method of combining the vehicle area detection result and the vehicle lamp area detection result will be exemplarily described below.
  • FIG. 2 is a schematic diagram of a merging unit 103 according to Embodiment 1 of the present invention. As shown in FIG. 2, the merging unit 103 includes:
  • a determining unit 201 configured to determine whether each of the vehicle light areas in the vehicle light area detection result is in any one of the vehicle areas in the vehicle area detection result;
  • a removing unit 202 configured to remove the detection result of the vehicle light area when the vehicle light area is in any one of the vehicle areas in the vehicle area detection result, and retain the detection result of the vehicle area where the vehicle light area is located;
  • a pairing unit 203 configured to detect, when the vehicle light area is not in any one of the vehicle areas in the vehicle area detection result, a vehicle light area matching the vehicle light area in the vehicle light area detection result, and obtain a matching vehicle light area pair;
  • An expansion unit 204 configured to expand the matching vehicle light region pair, obtain a vehicle region corresponding to the matching vehicle light region pair, and add the vehicle region to the vehicle region detection result;
  • the first determining unit 205 is configured to use the supplemented vehicle area detection result as the vehicle detection result of the input image.
  • FIG. 3 is a schematic diagram showing a method of combining the vehicle area detection result and the vehicle light area detection result according to Embodiment 1 of the present invention. It is assumed that the vehicle light area detection result includes N detected vehicle light areas. As shown in FIG. 3, the method includes:
  • Step 301 Set i to 1;
  • Step 302 It is determined whether the i-th vehicle light area in the vehicle light area detection result is in any one of the vehicle areas in the vehicle area detection result; when the determination result is "Yes", proceed to step 303; when the determination result is "No", proceed to step 306;
  • Step 303 Remove the detection result of the i-th vehicle light area, and retain the detection result of the vehicle area where the i-th vehicle light area is located;
  • Step 304 Determine whether i is equal to N; when the determination result is "Yes", proceed to step 308; when the determination result is "No", proceed to step 305;
  • Step 305 Add i to 1;
  • Step 306 Detect a lamp area matching the i-th lamp area in the detection result of the lamp area, and obtain a matching pair of lamp areas;
  • Step 307 Expand the matched vehicle light region pair, obtain a vehicle region corresponding to the matched vehicle light region pair, and add the vehicle region to the vehicle region detection result;
  • Step 308 The updated vehicle area detection result is used as the vehicle detection result of the input image.
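The steps above can be read as the following merge over bounding boxes `(x, y, w, h)`. This is an illustrative sketch, not the patent's code: the centre-point containment test and the toy `pair_adjacent`/`union_box` stand-ins are assumptions, and a real implementation would plug in the pairing and expansion criteria described later in the text.

```python
def contains(vehicle, light):
    """True if the centre of the light box lies inside the vehicle box."""
    cx, cy = light[0] + light[2] / 2.0, light[1] + light[3] / 2.0
    return (vehicle[0] <= cx <= vehicle[0] + vehicle[2]
            and vehicle[1] <= cy <= vehicle[1] + vehicle[3])

def merge_results(vehicle_boxes, light_boxes, pair_lights, expand_pair):
    """Steps 301-308: drop lights inside detected vehicles (repeated
    detections), pair the leftovers, expand each matched pair into a
    vehicle box, and supplement the vehicle area detection result."""
    merged = list(vehicle_boxes)
    leftovers = [lb for lb in light_boxes
                 if not any(contains(vb, lb) for vb in vehicle_boxes)]
    for pair in pair_lights(leftovers):
        merged.append(expand_pair(pair))
    return merged

# Toy stand-ins for pairing and expansion, for illustration only.
pair_adjacent = lambda lights: list(zip(lights[0::2], lights[1::2]))
union_box = lambda pair: (min(p[0] for p in pair),
                          min(p[1] for p in pair),
                          max(p[0] + p[2] for p in pair) - min(p[0] for p in pair),
                          max(p[1] + p[3] for p in pair) - min(p[1] for p in pair))

vehicles = [(0, 0, 100, 80)]                     # classifier result
lights = [(10, 50, 10, 10),                      # inside the detected vehicle
          (200, 50, 10, 10), (260, 50, 10, 10)]  # lights of an undetected vehicle
result = merge_results(vehicles, lights, pair_adjacent, union_box)
```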
  • the determination unit 201 can determine, based on the position coordinates of the respective vehicle light areas in the vehicle light area detection result, whether each vehicle light area is within any vehicle area in the vehicle area detection result.
  • the detection result of the vehicle light area is removed, and the detection result of the vehicle area where the vehicle light area is located is retained, thereby removing the cases in which the vehicle area detection result and the vehicle light area detection result correspond to the same vehicle, that is, removing the repeated detection results.
  • the pairing unit 203 detects a vehicle light region that matches the vehicle light region in the vehicle light region detection result, and obtains a matching vehicle light region pair.
  • the updated, that is, the supplemented vehicle area detection result is used as the vehicle detection result of the input image.
  • the structure of the pairing unit 203 of the present embodiment and the method of obtaining the pair of the vehicle lamp regions will be exemplarily described below.
  • the pairing unit 203 includes:
  • a first calculating unit 401 configured to calculate the height difference between the vehicle light area and the other vehicle light areas in the vehicle light area detection result;
  • a second calculating unit 402 configured to calculate a ratio of the height difference to a predetermined area height of the input image
  • a second determining unit 403 configured to determine a candidate vehicle light region pair according to the height difference and the ratio
  • the third determining unit 404 is configured to determine a matching pair of vehicle light regions in the pair of candidate vehicle light regions according to the width of the pair of candidate vehicle light regions.
  • the first calculating unit 401 may calculate the height difference between the vehicle light area and the other vehicle light areas in the vehicle light area detection result according to the position coordinates.
  • the second calculating unit 402 calculates a ratio of the height difference to a predetermined region height of the input image, wherein the predetermined region is, for example, a region of interest (ROI).
  • the second determining unit 403 determines the candidate vehicle light area pairs according to the height difference and the ratio; for example, when the height difference is less than a third threshold and the ratio is less than a fourth threshold, the vehicle light area and the other vehicle light area satisfying this condition are used as a candidate vehicle light area pair.
  • the third determining unit 404 determines a matching vehicle light area pair among the candidate vehicle light area pairs according to the width of the candidate vehicle light area pair; for example, when the width of the candidate vehicle light area pair is smaller than the lane width, the candidate vehicle light area pair is determined to be a matching vehicle light area pair.
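The pairing criteria just described (height difference below one threshold, its ratio to the predetermined-area height below another, and the pair no wider than one lane) might be sketched as below. The function name and the concrete threshold values are hypothetical; only the three tests themselves come from the text.

```python
def find_matching_pairs(lights, roi_height, lane_width,
                        height_diff_thresh=10.0, ratio_thresh=0.05):
    """Pair vehicle-light boxes (x, y, w, h).

    Two lights match when their vertical offset is below
    height_diff_thresh, the offset divided by the predetermined-area
    (ROI) height is below ratio_thresh, and the combined pair is
    narrower than one lane.
    """
    pairs = []
    for i in range(len(lights)):
        for j in range(i + 1, len(lights)):
            a, b = lights[i], lights[j]
            diff = abs(a[1] - b[1])
            if diff >= height_diff_thresh or diff / roi_height >= ratio_thresh:
                continue
            pair_width = max(a[0] + a[2], b[0] + b[2]) - min(a[0], b[0])
            if pair_width < lane_width:
                pairs.append((a, b))
    return pairs

# The two lights at nearly the same height pair up; the distant one does not.
lights = [(100, 200, 20, 20), (220, 203, 20, 20), (500, 400, 20, 20)]
pairs = find_matching_pairs(lights, roi_height=600.0, lane_width=300.0)
```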
  • in this way, the accuracy of the detection can be further improved.
  • the structure of the pairing unit 203 of the present embodiment and the method of obtaining the matching vehicle light area pair are exemplarily described above. After the pairing unit 203 obtains the matching vehicle light area pair, the expansion unit 204 expands the matching vehicle light area pair to obtain a vehicle area corresponding to the matching vehicle light area pair, and adds the vehicle area to the vehicle area detection result.
  • the expansion unit 204 can expand the matching vehicle light region pair according to the proportional relationship between the normal vehicle light region and the entire vehicle region.
  • Fig. 5 is a schematic view showing the expansion of the matching vehicle light region pair into a vehicle region in Embodiment 1 of the present invention.
  • the matching vehicle light region pair 501 is expanded to the corresponding vehicle region 502.
  • the height H of the vehicle region 502 is 3 to 5 times the height h of the matching vehicle light region pair 501, and the width W of the vehicle region 502 is 1.2 to 1.8 times the width w of the matching vehicle light region pair 501.
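The expansion can be illustrated with mid-range factors from the ranges given above (H = 3-5 × h, W = 1.2-1.8 × w). Centring the expanded box on the light pair is an assumption of this sketch; the text does not fix where the pair sits inside the vehicle box.

```python
def expand_pair_to_vehicle(pair_box, height_scale=4.0, width_scale=1.5):
    """Expand a matched light-pair box (x, y, w, h) into a vehicle box
    centred on the pair, using mid-range scale factors."""
    x, y, w, h = pair_box
    W, H = width_scale * w, height_scale * h
    return (x + w / 2.0 - W / 2.0, y + h / 2.0 - H / 2.0, W, H)

# An 80x10 light pair grows into a 120x40 vehicle box around the same centre.
vehicle_box = expand_pair_to_vehicle((100.0, 50.0, 80.0, 10.0))
```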
  • the first determination unit 205 uses the supplemented vehicle region detection result as the vehicle detection result of the input image.
  • the vehicle area detection result obtained using classifier detection alone is likely to miss vehicles; supplementing it with the vehicle light area detection result can effectively prevent missed detection, and removing the repeated detection results ensures the accuracy of the detection.
  • the classifier-based vehicle area detection and the brightness-based vehicle light area detection are respectively performed, and the two detection results are combined to obtain the detection result of the vehicle, thereby effectively improving the accuracy of the detection result and preventing missed detection.
  • it can be well applied to various scenes such as day and night, and has a wide range of applications.
  • An embodiment of the present invention further provides a vehicle counting device including the vehicle detecting device according to the first embodiment.
  • FIG. 6 is a schematic diagram of a vehicle counting device according to a second embodiment of the present invention. As shown in FIG. 6, the vehicle counting device 600 includes:
  • Vehicle detecting device 601
  • An establishing unit 602 configured to establish a vehicle track set according to the vehicle detection result of the input image
  • a counting unit 603 is configured to count the vehicles in the input image based on the established set of vehicle trajectories.
  • the structure and function of the vehicle detecting device 601 are the same as those of the vehicle detecting device described in Embodiment 1, and details are not described herein again.
  • the establishing unit 602 can establish a vehicle trajectory set using an existing method.
  • the vehicle detecting device 601 detects the consecutive multi-frame input images frame by frame, and a complete vehicle trajectory set is established.
  • the apparatus 600 may further include:
  • the filtering unit 604 is configured to merge the vehicle trajectories in the vehicle trajectory set according to the positions of the respective vehicle trajectories in the vehicle trajectory set and the distances between them. For example, any two adjacent vehicle trajectories whose distance is less than a predetermined threshold are merged, wherein one of the trajectories may be retained and the other removed.
  • the counting unit 603 is configured to count the vehicles in the input image according to the vehicle trajectories in the merged vehicle trajectory set. For example, the number of vehicles is determined based on the number of vehicle trajectories in the merged vehicle trajectory set.
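The merge-then-count logic might look like the sketch below. Representing each trajectory as a list of (x, y) centre points and comparing the latest points against a distance threshold are assumptions of this sketch; the text only says trajectories are merged by position and mutual distance, with one trajectory of each close pair retained.

```python
def merge_trajectories(trajectories, dist_thresh=20.0):
    """Drop trajectories whose latest point lies within dist_thresh of an
    already-kept trajectory's latest point; the survivors are counted."""
    kept = []
    for traj in trajectories:
        tx, ty = traj[-1]
        duplicate = any(
            ((tx - k[-1][0]) ** 2 + (ty - k[-1][1]) ** 2) ** 0.5 < dist_thresh
            for k in kept
        )
        if not duplicate:
            kept.append(traj)
    return kept

# Two nearly identical tracks collapse into one; the distant track survives.
tracks = [[(0, 0), (10, 0)], [(1, 0), (11, 1)], [(200, 0), (210, 0)]]
vehicle_count = len(merge_trajectories(tracks))
```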
  • with the filtering unit 604, the repeated detection results can be removed, and the accuracy of the counting can be further improved.
  • the filtering unit 604 is an optional component, indicated by a dashed box in FIG. 6.
  • the classifier-based vehicle area detection and the brightness-based vehicle light area detection are respectively performed, and the two detection results are combined to obtain the detection result of the vehicle, thereby effectively improving the accuracy of the detection result and preventing missed detection.
  • it can be well applied to various scenes such as day and night, and has a wide range of applications.
  • the detection result is used for the vehicle count, and the accuracy of the counting result can be ensured.
  • FIG. 7 is a schematic diagram of the electronic device according to Embodiment 3 of the present invention.
  • the electronic device 700 includes a vehicle detecting device or a vehicle counting device 701, wherein the structure and function of the vehicle detecting device or the vehicle counting device 701 are the same as those described in Embodiment 1 and Embodiment 2, and are not repeated here.
  • Fig. 8 is a schematic block diagram showing the system configuration of an electronic apparatus according to Embodiment 3 of the present invention.
  • electronic device 800 can include central processor 801 and memory 802; memory 802 is coupled to central processor 801.
  • the figure is exemplary; other types of structures may be used in addition to or in place of the structure to implement telecommunications functions or other functions.
  • the electronic device 800 may further include: an input unit 803, a display 804, and a power source 805.
  • the functions of the vehicle detection device described in Embodiment 1 may be integrated into the central processing unit 801.
  • the central processing unit 801 may be configured to: detect a vehicle area in the input image using a classifier to obtain a vehicle area detection result; detect a vehicle light area in the input image according to the brightness of the input image to obtain a vehicle light area detection result; and combine the vehicle area detection result and the vehicle light area detection result to obtain a vehicle detection result of the input image.
  • the detecting the vehicle light area in the input image according to the brightness of the input image includes: determining an area of the input image whose brightness is greater than a first threshold as the vehicle light area.
  • the detecting the vehicle light area in the input image according to the brightness of the input image includes: determining an area of the input image whose brightness is greater than a first threshold and whose color variability is less than a second threshold as the vehicle light area.
  • the combining the vehicle area detection result and the vehicle light area detection result includes: determining whether each of the vehicle light areas in the vehicle light area detection result is in any one of the vehicle areas in the vehicle area detection result; when the vehicle light area is within any one of the vehicle areas, removing the detection result of the vehicle light area and retaining the detection result of the vehicle area where the vehicle light area is located; when the vehicle light area is not in any one of the vehicle areas, detecting a vehicle light area matching the vehicle light area in the vehicle light area detection result and obtaining a matching vehicle light area pair; expanding the matching vehicle light area pair, obtaining a vehicle area corresponding to the matching vehicle light area pair, and supplementing the vehicle area to the vehicle area detection result; and using the supplemented vehicle area detection result as the vehicle detection result of the input image.
  • the detecting a vehicle light area that matches the vehicle light area in the vehicle light area detection result to obtain a matching vehicle light area pair includes: calculating the height difference between the vehicle light area and the other vehicle light areas in the vehicle light area detection result; calculating a ratio of the height difference to a predetermined area height of the input image; determining candidate vehicle light area pairs according to the height difference and the ratio; and determining the matching vehicle light area pair among the candidate vehicle light area pairs according to the widths of the candidate vehicle light area pairs.
  • the vehicle detecting device described in Embodiment 1 may be configured separately from the central processing unit 801.
  • the vehicle detecting device may be configured as a chip connected to the central processing unit 801, and the functions of the vehicle detecting device are realized under the control of the central processing unit 801.
  • it is also not necessary for the electronic device 800 to include all of the components shown in FIG. 8 in this embodiment.
  • the central processor 801, also sometimes referred to as a controller or operation control, may include a microprocessor or other processor device and/or logic device, and receives input and controls the operation of the various components of the electronic device 800.
  • Memory 802 can be one or more of a buffer, a flash memory, a hard drive, a removable medium, a volatile memory, a non-volatile memory, or other suitable device.
  • the central processing unit 801 can execute the program stored by the memory 802 to implement information storage or processing and the like.
  • the functions of other components are similar to those of the existing ones and will not be described here.
  • the various components of the electronic device 800 can be implemented by dedicated hardware, firmware, software, or a combination thereof, without departing from the scope of the invention.
  • the classifier-based vehicle area detection and the brightness-based vehicle light area detection are respectively performed, and the two detection results are combined to obtain the detection result of the vehicle, thereby effectively improving the accuracy of the detection result and preventing missed detection.
  • it can be well applied to various scenes such as day and night, and has a wide range of applications.
  • FIG. 9 is a schematic diagram of a vehicle detecting method according to a fourth embodiment of the present invention. As shown in FIG. 9, the method includes:
  • Step 901 Detecting a vehicle area in the input image by using a classifier, and obtaining a vehicle area detection result
  • Step 902 Detect a vehicle light area in the input image according to the brightness of the input image, and obtain a vehicle light area detection result;
  • Step 903 Combine the vehicle area detection result and the vehicle light area detection result to obtain a vehicle detection result of the input image.
  • step 901 and step 902 may be performed simultaneously or sequentially. This embodiment does not limit the execution order of the two steps.
  • the method of detecting a vehicle region in an input image using a classifier, the method of detecting a vehicle light region in the input image based on the brightness of the input image, and the method of combining the vehicle region detection result and the vehicle light region detection result are the same as those described in Embodiment 1, and will not be described again here.
  • the classifier-based vehicle area detection and the brightness-based vehicle light area detection are respectively performed, and the two detection results are combined to obtain the detection result of the vehicle, thereby effectively improving the accuracy of the detection result and preventing missed detection.
  • it can be well applied to various scenes such as day and night, and has a wide range of applications.
  • An embodiment of the present invention further provides a vehicle counting method corresponding to the vehicle counting device of Embodiment 2.
  • the method includes: the vehicle detection method according to Embodiment 4; establishing a vehicle trajectory set according to a vehicle detection result of the input image; and locating the vehicle trajectory according to the position of the vehicle trajectory and the distance between each other The vehicle trajectories in the trajectory are merged; the vehicles in the input image are counted according to the trajectory of the vehicle in the merged vehicle trajectory.
  • the method for establishing a vehicle trajectory set and the method for merging the vehicle trajectory set are the same as those in the second embodiment, and details are not described herein again.
  • the classifier-based vehicle area detection and the brightness-based vehicle light area detection are performed separately, and the two detection results are combined as the detection result of the vehicle, which can effectively improve the accuracy of the detection result and prevent missed detection.
  • it can be well applied to various scenes such as day and night, and has a wide application range.
  • the detection result is used for the vehicle count, and the accuracy of the counting result can be ensured.
  • Embodiments of the present invention also provide a computer readable program, wherein when the program is executed in a vehicle detecting device or an electronic device, the program causes the computer to execute, in the vehicle detecting device or electronic device, the vehicle detection method described in Embodiment 4.
  • Embodiments of the present invention also provide a computer readable program, wherein when the program is executed in a vehicle counting device or an electronic device, the program causes the computer to execute, in the vehicle counting device or electronic device, the vehicle counting method described in Embodiment 5.
  • the embodiment of the present invention further provides a storage medium storing a computer readable program, wherein the computer readable program causes the computer to execute the vehicle detection method described in Embodiment 4 in a vehicle detecting device or an electronic device.
  • the embodiment of the present invention further provides a storage medium storing a computer readable program, wherein the computer readable program causes the computer to execute the vehicle counting method described in Embodiment 5 in a vehicle counting device or an electronic device.
  • the method of performing vehicle detection in a vehicle detecting apparatus described in connection with an embodiment of the present invention may be directly embodied as hardware, a software module executed by a processor, or a combination of both.
  • one or more of the functional block diagrams shown in FIG. 1 and/or one or more combinations of functional block diagrams may correspond to various software modules of a computer program flow, or to individual hardware modules.
  • These software modules may correspond to the respective steps shown in FIG. 9, respectively.
  • These hardware modules can be implemented, for example, by burning these software modules into a Field Programmable Gate Array (FPGA).
  • the software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, removable disk, CD-ROM, or any other form of storage medium known in the art.
  • a storage medium can be coupled to the processor to enable the processor to read information from, and write information to, the storage medium; or the storage medium can be an integral part of the processor.
  • the processor and the storage medium can be located in an ASIC.
  • the software module can be stored in the memory of the mobile terminal or in a memory card that can be inserted into the mobile terminal.
  • the software module can be stored in the MEGA-SIM card or a large-capacity flash memory device.
  • One or more of the functional blocks described with respect to FIG. 1 and/or one or more combinations of functional blocks may be implemented as a general purpose processor or a digital signal processor (DSP) for performing the functions described herein.
  • One or more of the functional block diagrams described with respect to FIG. 1 and/or one or more combinations of functional block diagrams may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in communication with a DSP, or any other such configuration.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

A vehicle detection device (100), a vehicle counting device (600), and corresponding methods. The device and methods perform classifier-based vehicle-region detection (901) and brightness-based lamp-region detection (902) separately, and merge the two detection results into the vehicle detection result (903). This effectively improves the accuracy of the detection result and prevents missed detections; moreover, the approach applies well to a wide range of scenes, such as daytime and nighttime.

Description

Vehicle Detection Device, Vehicle Counting Device, and Method
Technical Field
The present invention relates to the field of information technology, and in particular to a vehicle detection device, a vehicle counting device, and corresponding methods.
Background Art
As urban traffic conditions grow more complex, the need to monitor and control traffic has become increasingly common, and detecting and counting moving vehicles is one of the most common functions of video surveillance.
Existing vehicle detection methods generally target well-lit scenes, such as daytime, and use a classifier or similar tool to detect and count vehicles in surveillance images. In poorly lit scenes, such as nighttime, such general methods struggle to detect and count vehicles effectively. There are also vehicle detection methods designed specifically for night scenes, but they rely on special algorithms that are computationally complex and cannot be applied to daytime scenes.
It should be noted that the above introduction to the technical background is provided merely to enable a clear and complete description of the technical solutions of the present invention and to facilitate the understanding of those skilled in the art. These solutions should not be regarded as well known to those skilled in the art merely because they are set forth in the background section of the present invention.
Summary of the Invention
The existing vehicle detection and counting methods described above vary widely in detection accuracy across application scenarios, are prone to missed detections, and cannot serve daytime and nighttime scenes at the same time.
Embodiments of the present invention provide a vehicle detection device, a vehicle counting device, and methods that perform classifier-based vehicle-region detection and brightness-based lamp-region detection separately and merge the two detection results into the vehicle detection result. This effectively improves the accuracy of the detection result and prevents missed detections; moreover, the approach applies well to a wide range of scenes, such as daytime and nighttime.
According to a first aspect of the embodiments of the present invention, a vehicle detection device is provided, the device comprising: a first detection unit configured to detect vehicle regions in an input image using a classifier to obtain a vehicle-region detection result; a second detection unit configured to detect lamp regions in the input image according to the brightness of the input image to obtain a lamp-region detection result; and a merging unit configured to merge the vehicle-region detection result and the lamp-region detection result to obtain the vehicle detection result of the input image.
According to a second aspect of the embodiments of the present invention, a vehicle counting device is provided, the device comprising: the vehicle detection device according to the first aspect of the embodiments of the present invention; an establishing unit configured to establish a set of vehicle trajectories according to the vehicle detection results of input images; and a counting unit configured to count the vehicles in the input images according to the established set of vehicle trajectories.
According to a third aspect of the embodiments of the present invention, a vehicle detection method is provided, the method comprising: detecting vehicle regions in an input image using a classifier to obtain a vehicle-region detection result; detecting lamp regions in the input image according to the brightness of the input image to obtain a lamp-region detection result; and merging the vehicle-region detection result and the lamp-region detection result to obtain the vehicle detection result of the input image.
The beneficial effects of the present invention are as follows: performing classifier-based vehicle-region detection and brightness-based lamp-region detection separately and merging the two detection results into the vehicle detection result effectively improves the accuracy of the detection result and prevents missed detections; moreover, the approach applies well to a wide range of scenes, such as daytime and nighttime.
With reference to the following description and drawings, particular embodiments of the present invention are disclosed in detail, indicating the manner in which the principles of the invention may be employed. It should be understood that the embodiments of the invention are not thereby limited in scope. Within the spirit and terms of the appended claims, the embodiments of the invention include many changes, modifications, and equivalents.
Features described and/or illustrated for one embodiment may be used in the same or a similar manner in one or more other embodiments, in combination with features of other embodiments, or in place of features of other embodiments.
It should be emphasized that the term "comprise/include", as used herein, refers to the presence of a feature, integer, step, or component, but does not preclude the presence or addition of one or more other features, integers, steps, or components.
Brief Description of the Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention, constitute a part of the specification, illustrate embodiments of the invention, and together with the written description serve to explain the principles of the invention. Evidently, the drawings described below represent only some embodiments of the invention, and those of ordinary skill in the art may derive other drawings from them without creative effort. In the drawings:
FIG. 1 is a schematic diagram of the vehicle detection device of Embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of the merging unit 103 of Embodiment 1 of the present invention;
FIG. 3 is a schematic diagram of the method of merging the vehicle-region detection result and the lamp-region detection result in Embodiment 1 of the present invention;
FIG. 4 is a schematic diagram of the pairing unit 203 of Embodiment 1 of the present invention;
FIG. 5 is a schematic diagram of expanding a matched lamp-region pair into a vehicle region in Embodiment 1 of the present invention;
FIG. 6 is a schematic diagram of the vehicle counting device of Embodiment 2 of the present invention;
FIG. 7 is a schematic diagram of the electronic device of Embodiment 3 of the present invention;
FIG. 8 is a schematic block diagram of the system configuration of the electronic device of Embodiment 3 of the present invention;
FIG. 9 is a schematic diagram of the vehicle detection method of Embodiment 4 of the present invention.
Detailed Description
The foregoing and other features of the present invention will become apparent from the following description, taken with reference to the drawings. The description and drawings specifically disclose particular embodiments of the invention, indicating some of the embodiments in which the principles of the invention may be employed. It should be understood that the invention is not limited to the described embodiments; rather, it includes all modifications, variations, and equivalents falling within the scope of the appended claims.
Embodiment 1
FIG. 1 is a schematic diagram of the vehicle detection device of Embodiment 1 of the present invention. As shown in FIG. 1, the device 100 comprises:
a first detection unit 101 configured to detect vehicle regions in an input image using a classifier to obtain a vehicle-region detection result;
a second detection unit 102 configured to detect lamp regions in the input image according to the brightness of the input image to obtain a lamp-region detection result;
a merging unit 103 configured to merge the vehicle-region detection result and the lamp-region detection result to obtain the vehicle detection result of the input image.
As the above embodiment shows, performing classifier-based vehicle-region detection and brightness-based lamp-region detection separately and merging the two detection results into the vehicle detection result effectively improves the accuracy of the detection result and prevents missed detections; moreover, the approach applies well to a wide range of scenes, such as daytime and nighttime.
In this embodiment, the input image may be a surveillance image obtained by an existing method, for example from a camera installed above the area to be monitored.
In this embodiment, the input image may be a single frame, or it may comprise multiple consecutive frames of a surveillance video covering a period of time. When the input image comprises multiple frames, detection may be performed frame by frame.
In this embodiment, the first detection unit 101 detects vehicle regions in the input image using a classifier to obtain the vehicle-region detection result. The detection may cover the entire input image or only a predetermined part of it, for example only a region of interest (ROI) of the input image.
In this embodiment, the first detection unit 101 may use an existing classifier to detect vehicle regions in the input image, for example a Support Vector Machine (SVM) classifier or a Bayes classifier.
In this embodiment, the second detection unit 102 performs its detection independently of the first detection unit 101: it detects lamp regions in the input image according to the brightness of the input image to obtain the lamp-region detection result.
In this embodiment, the second detection unit 102 may determine regions of the input image whose brightness exceeds a first threshold to be lamp regions.
In this embodiment, the second detection unit 102 may also determine regions of the input image whose brightness exceeds the first threshold and whose color variance is below a second threshold to be lamp regions. Here, the color variance measures the degree of color variation; for example, it may be the variance across the three RGB channels of each pixel.
Because the center of a vehicle lamp is usually very bright and its color is close to white, combining brightness and color variance in lamp-region detection can further improve detection accuracy.
In this embodiment, the first and second thresholds may be set according to practical needs.
In this embodiment, the vehicle lamps may include headlights and taillights; since the light emitted by headlights is brighter, the detected lamp regions are mainly the regions of the headlights.
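The brightness-and-color-variance test above can be sketched per pixel as follows; this is a minimal NumPy illustration, where the threshold values and the use of per-pixel RGB variance as the "color variance" measure are this sketch's assumptions, and a real implementation would additionally group the mask into connected lamp regions:

```python
import numpy as np

def lamp_mask(image, brightness_thresh=200.0, color_var_thresh=100.0):
    """Per-pixel lamp mask: bright and nearly white (low RGB variance).

    image: H x W x 3 float array (RGB). The threshold values are
    illustrative placeholders, not values fixed by the patent.
    """
    brightness = image.mean(axis=2)   # simple brightness proxy
    color_var = image.var(axis=2)     # variance across the R, G, B channels
    return (brightness > brightness_thresh) & (color_var < color_var_thresh)

# toy 4x4 frame: one bright white pixel (lamp-like), one bright red pixel
frame = np.zeros((4, 4, 3))
frame[1, 1] = [250, 250, 250]   # bright, low color variance -> lamp
frame[2, 2] = [250, 10, 10]     # strongly colored -> not a lamp
mask = lamp_mask(frame)
print(mask[1, 1], mask[2, 2])   # True False
```

The red pixel is rejected twice over: its mean brightness across channels is low, and its cross-channel variance is high, matching the observation that lamp centers are both bright and close to white.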
In this embodiment, the merging unit 103 is configured to merge the vehicle-region detection result and the lamp-region detection result to obtain the vehicle detection result of the input image. The structure of the merging unit 103 of this embodiment and the method of merging the vehicle-region detection result and the lamp-region detection result are illustrated below.
FIG. 2 is a schematic diagram of the merging unit 103 of Embodiment 1 of the present invention. As shown in FIG. 2, the merging unit 103 comprises:
a judging unit 201 configured to judge whether each lamp region in the lamp-region detection result lies within any vehicle region in the vehicle-region detection result;
a removing unit 202 configured to, when a lamp region lies within a vehicle region in the vehicle-region detection result, remove the detection result of that lamp region and retain the detection result of the vehicle region containing it;
a pairing unit 203 configured to, when a lamp region does not lie within any vehicle region in the vehicle-region detection result, detect a lamp region in the lamp-region detection result that matches that lamp region, obtaining a matched lamp-region pair;
an expanding unit 204 configured to expand the matched lamp-region pair into the vehicle region corresponding to the matched lamp-region pair and add that vehicle region to the vehicle-region detection result;
a first determining unit 205 configured to take the supplemented vehicle-region detection result as the vehicle detection result of the input image.
FIG. 3 is a schematic diagram of the method of merging the vehicle-region detection result and the lamp-region detection result in Embodiment 1 of the present invention. Assume the lamp-region detection result contains N detected lamp regions. As shown in FIG. 3, the method comprises:
Step 301: set i to 1;
Step 302: judge whether the i-th lamp region in the lamp-region detection result lies within any vehicle region in the vehicle-region detection result; if yes, proceed to step 303; if no, proceed to step 306;
Step 303: remove the detection result of the i-th lamp region and retain the detection result of the vehicle region containing it;
Step 304: judge whether i equals N; if yes, proceed to step 308; if no, proceed to step 305;
Step 305: increment i by 1;
Step 306: detect a lamp region in the lamp-region detection result that matches the i-th lamp region, obtaining a matched lamp-region pair;
Step 307: expand the matched lamp-region pair into the vehicle region corresponding to the matched lamp-region pair and add that vehicle region to the vehicle-region detection result;
Step 308: take the updated vehicle-region detection result as the vehicle detection result of the input image.
In this embodiment, the judging unit 201 may judge, from the position coordinates of each lamp region in the lamp-region detection result, whether it lies within any vehicle region in the vehicle-region detection result.
In this embodiment, when a lamp region lies within a vehicle region in the vehicle-region detection result, the detection result of that lamp region is removed and the detection result of the vehicle region containing it is retained, thereby removing results in the vehicle-region and lamp-region detections that correspond to the same vehicle, i.e., removing duplicate detection results.
In this embodiment, when a lamp region does not lie within any vehicle region in the vehicle-region detection result, the pairing unit 203 detects a lamp region in the lamp-region detection result that matches it, obtaining a matched lamp-region pair.
In this embodiment, after all N lamp regions in the lamp-region detection result have been checked one by one, the updated, that is, supplemented, vehicle-region detection result is taken as the vehicle detection result of the input image.
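The merge procedure of steps 301 to 308 can be sketched as follows; in this minimal illustration, boxes are (x1, y1, x2, y2) tuples, lamp containment is tested by the lamp box's center point, and the pairing-and-expansion step is abstracted behind a caller-supplied function, all of which are assumptions of the sketch rather than details fixed by the patent:

```python
def merge_detections(vehicle_boxes, lamp_boxes, pair_and_expand):
    """Merge classifier vehicle boxes with brightness-based lamp boxes.

    A lamp box whose center falls inside any vehicle box is discarded as
    a duplicate; the remaining lamps are handed to `pair_and_expand`, a
    caller-supplied function that pairs them and expands each pair into
    a vehicle box (an illustrative interface, not the patent's own API).
    """
    def center_inside(lamp, veh):
        cx = (lamp[0] + lamp[2]) / 2.0
        cy = (lamp[1] + lamp[3]) / 2.0
        return veh[0] <= cx <= veh[2] and veh[1] <= cy <= veh[3]

    unmatched = [l for l in lamp_boxes
                 if not any(center_inside(l, v) for v in vehicle_boxes)]
    return list(vehicle_boxes) + pair_and_expand(unmatched)

vehicles = [(0, 0, 100, 80)]
lamps = [(10, 50, 20, 60),                        # inside a vehicle -> dropped
         (200, 50, 210, 60), (260, 50, 270, 60)]  # a free lamp pair
# stand-in expansion: one box spanning each consecutive pair of lamps
expand = lambda ls: [(ls[i][0], ls[i][1], ls[i + 1][2], ls[i + 1][3])
                     for i in range(0, len(ls) - 1, 2)]
result = merge_detections(vehicles, lamps, expand)
print(result)   # [(0, 0, 100, 80), (200, 50, 270, 60)]
```

The output keeps the classifier's detection and adds one vehicle box recovered from the unmatched lamp pair, which is exactly the supplementation that step 308 finalizes.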
The structure of the pairing unit 203 of this embodiment and the method of obtaining matched lamp-region pairs are illustrated below.
FIG. 4 is a schematic diagram of the pairing unit 203 of Embodiment 1 of the present invention. As shown in FIG. 4, the pairing unit 203 comprises:
a first calculating unit 401 configured to calculate the height difference between the lamp region and each other lamp region in the lamp-region detection result;
a second calculating unit 402 configured to calculate the ratio of the height difference to the height of a predetermined region of the input image;
a second determining unit 403 configured to determine candidate lamp-region pairs according to the height difference and the ratio;
a third determining unit 404 configured to determine the matched lamp-region pair among the candidate lamp-region pairs according to the widths of the candidate lamp-region pairs.
In this embodiment, the first calculating unit 401 may calculate the height difference between the lamp region and each other lamp region in the lamp-region detection result from their position coordinates.
In this embodiment, the second calculating unit 402 calculates the ratio of the height difference to the height of a predetermined region of the input image, where the predetermined region is, for example, a region of interest (ROI).
In this embodiment, the second determining unit 403 determines candidate lamp-region pairs according to the height difference and the ratio; for example, when the height difference is below a third threshold and the ratio is below a fourth threshold, the lamp region and each other lamp region satisfying these conditions form a candidate lamp-region pair.
In this embodiment, the third determining unit 404 determines the matched lamp-region pair among the candidate lamp-region pairs according to their widths; for example, when the width of a candidate lamp-region pair is less than the lane width, that candidate pair is determined to be a matched lamp-region pair.
In this way, screening lamp-region pairs by the heights of the lamp regions, the ratio to the height of the predetermined region, and the relation to the lane width can further improve detection accuracy.
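The screening criteria above, a small vertical offset both absolutely and relative to the ROI height plus a pair width under the lane width, can be sketched as follows; the box representation and every numeric threshold are illustrative assumptions of this sketch:

```python
def find_match(lamp, others, roi_height, lane_width,
               dy_thresh=8.0, ratio_thresh=0.05):
    """Pick a mate for `lamp` among `others` (boxes as (x1, y1, x2, y2)).

    A candidate must sit at nearly the same height as `lamp` (height
    difference below `dy_thresh` and below `ratio_thresh` of the ROI
    height), and the resulting pair must be narrower than one lane.
    The threshold values are illustrative, not taken from the patent.
    """
    for other in others:
        dy = abs(lamp[1] - other[1])              # vertical offset of box tops
        if dy >= dy_thresh or dy / roi_height >= ratio_thresh:
            continue                              # heights too different
        pair_width = max(lamp[2], other[2]) - min(lamp[0], other[0])
        if pair_width < lane_width:               # pair fits within one lane
            return other
    return None

left = (100, 200, 120, 215)
mate = find_match(left,
                  [(100, 300, 120, 315),   # far below -> rejected
                   (220, 202, 240, 216)],  # same height, fits in one lane
                  roi_height=400, lane_width=300)
print(mate)   # (220, 202, 240, 216)
```

The first candidate fails the height test (offset of 100 pixels), while the second passes all three checks and is returned as the mate.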
The structure of the pairing unit 203 of this embodiment and the method of obtaining matched lamp-region pairs have been illustrated above. After the pairing unit 203 obtains a matched lamp-region pair, the expanding unit 204 expands the matched lamp-region pair into its corresponding vehicle region and adds that vehicle region to the vehicle-region detection result.
For example, the expanding unit 204 may expand a matched lamp-region pair according to the typical proportions between the lamp regions and the whole vehicle region.
FIG. 5 is a schematic diagram of expanding a matched lamp-region pair into a vehicle region in Embodiment 1 of the present invention. As shown in FIG. 5, the matched lamp-region pair 501 is expanded into the corresponding vehicle region 502: the height H of the vehicle region 502 is 3 to 5 times the height h of the matched lamp-region pair 501, and the width W of the vehicle region 502 is 1.2 to 1.8 times the width w of the matched lamp-region pair 501.
In this embodiment, after the expanding unit 204 adds the expanded vehicle region to the vehicle-region detection result, the first determining unit 205 takes the supplemented vehicle-region detection result as the vehicle detection result of the input image.
In this way, in dark environments such as nighttime, where the classifier-based vehicle-region detection result is prone to missed detections, supplementing it with the lamp-region detection result effectively prevents missed detections, while the removal of duplicate detection results preserves detection accuracy.
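The proportions of FIG. 5 can be sketched as follows; the midpoint scale factors (4.0 and 1.5 within the stated 3-to-5 and 1.2-to-1.8 ranges) and the choice to grow the box upward from the lamps in image coordinates are this sketch's assumptions:

```python
def expand_pair(pair_box, h_scale=4.0, w_scale=1.5):
    """Expand a matched lamp-pair box (x1, y1, x2, y2) into a vehicle box.

    Per FIG. 5, the vehicle height is 3 to 5 times the pair height and
    the vehicle width 1.2 to 1.8 times the pair width; the midpoints of
    those ranges are used here as illustrative defaults.
    """
    x1, y1, x2, y2 = pair_box
    w, h = x2 - x1, y2 - y1
    new_w, new_h = w * w_scale, h * h_scale
    cx = (x1 + x2) / 2.0
    # keep the pair's horizontal center; anchor the vehicle's bottom at
    # the bottom of the lamp pair and extend upward in image coordinates
    return (cx - new_w / 2.0, y2 - new_h, cx + new_w / 2.0, y2)

print(expand_pair((100.0, 200.0, 200.0, 220.0)))
# (75.0, 140.0, 225.0, 220.0)
```

A 100-wide, 20-high lamp pair thus becomes a 150-wide, 80-high vehicle box centered on the same headlights.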
As the above embodiment shows, performing classifier-based vehicle-region detection and brightness-based lamp-region detection separately and merging the two detection results into the vehicle detection result effectively improves the accuracy of the detection result and prevents missed detections; moreover, the approach applies well to a wide range of scenes, such as daytime and nighttime.
Embodiment 2
An embodiment of the present invention further provides a vehicle counting device comprising the vehicle detection device described in Embodiment 1.
FIG. 6 is a schematic diagram of the vehicle counting device of Embodiment 2 of the present invention. As shown in FIG. 6, the vehicle counting device 600 comprises:
a vehicle detection device 601;
an establishing unit 602 configured to establish a set of vehicle trajectories according to the vehicle detection results of input images;
a counting unit 603 configured to count the vehicles in the input images according to the established set of vehicle trajectories.
In this embodiment, the structure and functions of the vehicle detection device 601 are the same as those of the vehicle detection device described in Embodiment 1 and are not repeated here.
In this embodiment, the establishing unit 602 may establish the set of vehicle trajectories using an existing method; for example, the vehicle detection device 601 detects multiple consecutive input frames one by one, building an over-complete set of vehicle trajectories.
In this embodiment, the device 600 may further comprise:
a filtering unit 604 configured to merge the vehicle trajectories in the set according to the positions of the individual trajectories and the distances between them. For example, any two neighboring trajectories whose distance is below a predetermined threshold may be merged, retaining one of the trajectories and removing the other.
In this case, the counting unit 603 is configured to count the vehicles in the input images according to the vehicle trajectories in the merged set; for example, the number of vehicles is determined from the number of trajectories in the set.
In this way, the merging performed by the filtering unit 604 removes duplicate detection results and further improves counting accuracy.
In this embodiment, the filtering unit 604 is an optional component, indicated by a dashed box in FIG. 6.
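The trajectory merging and counting described above can be sketched as follows; the mean point-distance metric, the distance threshold, and the greedy keep-or-merge loop are illustrative assumptions standing in for the filtering unit 604 and counting unit 603:

```python
def count_vehicles(trajectories, dist_thresh=30.0):
    """Merge near-duplicate trajectories and count the survivors.

    A trajectory is a list of (x, y) centers, one per frame. Two
    trajectories closer than `dist_thresh` (mean point distance over
    their common length) are treated as the same vehicle; this greedy
    filter is an illustrative stand-in, not the patent's exact rule.
    """
    def mean_dist(a, b):
        n = min(len(a), len(b))
        return sum(((a[i][0] - b[i][0]) ** 2 +
                    (a[i][1] - b[i][1]) ** 2) ** 0.5 for i in range(n)) / n

    kept = []
    for traj in trajectories:
        if all(mean_dist(traj, k) >= dist_thresh for k in kept):
            kept.append(traj)   # far from everything kept: a new vehicle
    return len(kept)

tracks = [[(0, 0), (10, 0), (20, 0)],
          [(2, 1), (12, 1), (22, 1)],   # near-duplicate of the first track
          [(0, 100), (10, 100)]]        # a second vehicle
print(count_vehicles(tracks))   # 2
```

The second track stays within a couple of pixels of the first and is absorbed into it, so the over-complete set of three trajectories yields a count of two vehicles.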
As the above embodiment shows, performing classifier-based vehicle-region detection and brightness-based lamp-region detection separately and merging the two detection results into the vehicle detection result effectively improves the accuracy of the detection result and prevents missed detections; moreover, the approach applies well to a wide range of scenes, such as daytime and nighttime. Furthermore, using the detection results for vehicle counting ensures the accuracy of the counting results.
Embodiment 3
An embodiment of the present invention further provides an electronic device. FIG. 7 is a schematic diagram of the electronic device of Embodiment 3 of the present invention. As shown in FIG. 7, the electronic device 700 comprises a vehicle detection device or vehicle counting device 701, whose structure and functions are the same as described in Embodiments 1 and 2 and are not repeated here.
FIG. 8 is a schematic block diagram of the system configuration of the electronic device of Embodiment 3 of the present invention. As shown in FIG. 8, the electronic device 800 may comprise a central processing unit 801 and a memory 802, the memory 802 being coupled to the central processing unit 801. This figure is exemplary; other types of structures may also be used to supplement or replace this structure in order to implement telecommunications or other functions.
As shown in FIG. 8, the electronic device 800 may further comprise an input unit 803, a display 804, and a power supply 805.
In one implementation, the functions of the vehicle detection device described in Embodiment 1 may be integrated into the central processing unit 801, which may be configured to: detect vehicle regions in an input image using a classifier to obtain a vehicle-region detection result; detect lamp regions in the input image according to the brightness of the input image to obtain a lamp-region detection result; and merge the vehicle-region detection result and the lamp-region detection result to obtain the vehicle detection result of the input image.
For example, detecting lamp regions in the input image according to the brightness of the input image comprises: determining regions of the input image whose brightness exceeds a first threshold to be the lamp regions.
For example, detecting lamp regions in the input image according to the brightness of the input image comprises: determining regions of the input image whose brightness exceeds a first threshold and whose color variance is below a second threshold to be the lamp regions.
For example, merging the vehicle-region detection result and the lamp-region detection result comprises: judging whether each lamp region in the lamp-region detection result lies within any vehicle region in the vehicle-region detection result; when a lamp region lies within a vehicle region in the vehicle-region detection result, removing the detection result of that lamp region and retaining the detection result of the vehicle region containing it; when a lamp region does not lie within any vehicle region in the vehicle-region detection result, detecting a lamp region in the lamp-region detection result that matches that lamp region, obtaining a matched lamp-region pair; expanding the matched lamp-region pair into the vehicle region corresponding to the matched lamp-region pair and adding that vehicle region to the vehicle-region detection result; and taking the supplemented vehicle-region detection result as the vehicle detection result of the input image.
For example, detecting a lamp region in the lamp-region detection result that matches that lamp region, obtaining a matched lamp-region pair, comprises: calculating the height difference between that lamp region and each other lamp region in the lamp-region detection result; calculating the ratio of the height difference to the height of a predetermined region of the input image; determining candidate lamp-region pairs according to the height difference and the ratio; and determining the matched lamp-region pair among the candidate lamp-region pairs according to the widths of the candidate lamp-region pairs.
In another implementation, the vehicle detection device described in Embodiment 1 may be configured separately from the central processing unit 801; for example, it may be configured as a chip connected to the central processing unit 801, with its functions realized under the control of the central processing unit 801.
In this embodiment, the electronic device 800 need not include all the components shown in FIG. 8.
As shown in FIG. 8, the central processing unit 801, sometimes also called a controller or operational control, may comprise a microprocessor or other processor and/or logic devices; it receives input and controls the operation of the components of the electronic device 800.
The memory 802 may be, for example, one or more of a buffer, flash memory, hard drive, removable medium, volatile memory, non-volatile memory, or other suitable device. The central processing unit 801 may execute programs stored in the memory 802 to implement information storage, processing, and the like. The functions of the other components are similar to existing ones and are not repeated here. The components of the electronic device 800 may be implemented by dedicated hardware, firmware, software, or a combination thereof without departing from the scope of the invention.
As the above embodiment shows, performing classifier-based vehicle-region detection and brightness-based lamp-region detection separately and merging the two detection results into the vehicle detection result effectively improves the accuracy of the detection result and prevents missed detections; moreover, the approach applies well to a wide range of scenes, such as daytime and nighttime.
Embodiment 4
An embodiment of the present invention further provides a vehicle detection method corresponding to the vehicle detection device of Embodiment 1. FIG. 9 is a schematic diagram of the vehicle detection method of Embodiment 4 of the present invention. As shown in FIG. 9, the method comprises:
Step 901: detect vehicle regions in an input image using a classifier to obtain a vehicle-region detection result;
Step 902: detect lamp regions in the input image according to the brightness of the input image to obtain a lamp-region detection result;
Step 903: merge the vehicle-region detection result and the lamp-region detection result to obtain the vehicle detection result of the input image.
In this embodiment, steps 901 and 902 may be performed simultaneously or one after the other; this embodiment places no restriction on their order of execution.
In this embodiment, the method of detecting vehicle regions in the input image with a classifier, the method of detecting lamp regions in the input image according to its brightness, and the method of merging the vehicle-region detection result and the lamp-region detection result are the same as described in Embodiment 1 and are not repeated here.
As the above embodiment shows, performing classifier-based vehicle-region detection and brightness-based lamp-region detection separately and merging the two detection results into the vehicle detection result effectively improves the accuracy of the detection result and prevents missed detections; moreover, the approach applies well to a wide range of scenes, such as daytime and nighttime.
Embodiment 5
An embodiment of the present invention further provides a vehicle counting method corresponding to the vehicle counting device of Embodiment 2. The method comprises: the vehicle detection method according to Embodiment 4; establishing a set of vehicle trajectories according to the vehicle detection results of input images; merging the vehicle trajectories in the set according to the positions of the individual trajectories and the distances between them; and counting the vehicles in the input images according to the vehicle trajectories in the merged set.
In this embodiment, the method of establishing the set of vehicle trajectories and the method of merging it are the same as described in Embodiment 2 and are not repeated here.
As the above embodiment shows, performing classifier-based vehicle-region detection and brightness-based lamp-region detection separately and merging the two detection results into the vehicle detection result effectively improves the accuracy of the detection result and prevents missed detections; moreover, the approach applies well to a wide range of scenes, such as daytime and nighttime. Furthermore, using the detection results for vehicle counting ensures the accuracy of the counting results.
An embodiment of the present invention further provides a computer-readable program which, when executed in a vehicle detection device or an electronic device, causes a computer to perform, in that vehicle detection device or electronic device, the vehicle detection method described in Embodiment 4.
An embodiment of the present invention further provides a computer-readable program which, when executed in a vehicle counting device or an electronic device, causes a computer to perform, in that vehicle counting device or electronic device, the vehicle counting method described in Embodiment 5.
An embodiment of the present invention further provides a storage medium storing a computer-readable program, the computer-readable program causing a computer to perform, in a vehicle detection device or an electronic device, the vehicle detection method described in Embodiment 4.
An embodiment of the present invention further provides a storage medium storing a computer-readable program, the computer-readable program causing a computer to perform, in a vehicle counting device or an electronic device, the vehicle counting method described in Embodiment 5.
The method of performing vehicle detection in a vehicle detection device described in connection with the embodiments of the present invention may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. For example, one or more of the functional blocks shown in FIG. 1, and/or one or more combinations of those blocks, may correspond either to software modules of a computer program flow or to individual hardware modules. The software modules may correspond respectively to the steps shown in FIG. 9. The hardware modules may be realized, for example, by solidifying the software modules in a field-programmable gate array (FPGA).
A software module may reside in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium may be coupled to the processor so that the processor can read information from, and write information to, the storage medium; or the storage medium may be an integral part of the processor. The processor and the storage medium may reside in an ASIC. The software module may be stored in the memory of a mobile terminal or in a memory card insertable into the mobile terminal. For example, if the device (e.g., a mobile terminal) uses a large-capacity MEGA-SIM card or a large-capacity flash memory device, the software module may be stored in that MEGA-SIM card or flash memory device.
One or more of the functional blocks described for FIG. 1, and/or one or more combinations of those blocks, may be implemented as a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any suitable combination thereof, designed to perform the functions described in the present application. They may also be implemented as a combination of computing devices, for example a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP, or any other such configuration.
The present invention has been described above with reference to specific embodiments, but it should be clear to those skilled in the art that these descriptions are exemplary and do not limit the scope of protection of the invention. Those skilled in the art may make various variations and modifications to the invention according to its spirit and principles, and such variations and modifications also fall within the scope of the invention.

Claims (12)

  1. A vehicle detection device, the device comprising:
    a first detection unit configured to detect vehicle regions in an input image using a classifier to obtain a vehicle-region detection result;
    a second detection unit configured to detect lamp regions in the input image according to the brightness of the input image to obtain a lamp-region detection result;
    a merging unit configured to merge the vehicle-region detection result and the lamp-region detection result to obtain the vehicle detection result of the input image.
  2. The device according to claim 1, wherein the second detection unit is configured to determine regions of the input image whose brightness exceeds a first threshold to be the lamp regions.
  3. The device according to claim 1, wherein the second detection unit is configured to determine regions of the input image whose brightness exceeds a first threshold and whose color variance is below a second threshold to be the lamp regions.
  4. The device according to claim 1, wherein the merging unit comprises:
    a judging unit configured to judge whether each lamp region in the lamp-region detection result lies within any vehicle region in the vehicle-region detection result;
    a removing unit configured to, when a lamp region lies within a vehicle region in the vehicle-region detection result, remove the detection result of that lamp region and retain the detection result of the vehicle region containing it;
    a pairing unit configured to, when a lamp region does not lie within any vehicle region in the vehicle-region detection result, detect a lamp region in the lamp-region detection result that matches that lamp region, obtaining a matched lamp-region pair;
    an expanding unit configured to expand the matched lamp-region pair into the vehicle region corresponding to the matched lamp-region pair and add that vehicle region to the vehicle-region detection result;
    a first determining unit configured to take the supplemented vehicle-region detection result as the vehicle detection result of the input image.
  5. The device according to claim 4, wherein the pairing unit comprises:
    a first calculating unit configured to calculate the height difference between that lamp region and each other lamp region in the lamp-region detection result;
    a second calculating unit configured to calculate the ratio of the height difference to the height of a predetermined region of the input image;
    a second determining unit configured to determine candidate lamp-region pairs according to the height difference and the ratio;
    a third determining unit configured to determine the matched lamp-region pair among the candidate lamp-region pairs according to the widths of the candidate lamp-region pairs.
  6. A vehicle counting device, the device comprising:
    the vehicle detection device according to claim 1;
    an establishing unit configured to establish a set of vehicle trajectories according to the vehicle detection results of input images;
    a counting unit configured to count the vehicles in the input images according to the established set of vehicle trajectories.
  7. The device according to claim 6, wherein the device further comprises:
    a filtering unit configured to merge the vehicle trajectories in the set according to the positions of the individual trajectories and the distances between them;
    the counting unit being configured to count the vehicles in the input images according to the vehicle trajectories in the merged set.
  8. A vehicle detection method, the method comprising:
    detecting vehicle regions in an input image using a classifier to obtain a vehicle-region detection result;
    detecting lamp regions in the input image according to the brightness of the input image to obtain a lamp-region detection result;
    merging the vehicle-region detection result and the lamp-region detection result to obtain the vehicle detection result of the input image.
  9. The method according to claim 8, wherein detecting lamp regions in the input image according to the brightness of the input image comprises:
    determining regions of the input image whose brightness exceeds a first threshold to be the lamp regions.
  10. The method according to claim 8, wherein detecting lamp regions in the input image according to the brightness of the input image comprises:
    determining regions of the input image whose brightness exceeds a first threshold and whose color variance is below a second threshold to be the lamp regions.
  11. The method according to claim 8, wherein merging the vehicle-region detection result and the lamp-region detection result comprises:
    judging whether each lamp region in the lamp-region detection result lies within any vehicle region in the vehicle-region detection result;
    when a lamp region lies within a vehicle region in the vehicle-region detection result, removing the detection result of that lamp region and retaining the detection result of the vehicle region containing it;
    when a lamp region does not lie within any vehicle region in the vehicle-region detection result, detecting a lamp region in the lamp-region detection result that matches that lamp region, obtaining a matched lamp-region pair;
    expanding the matched lamp-region pair into the vehicle region corresponding to the matched lamp-region pair and adding that vehicle region to the vehicle-region detection result;
    taking the supplemented vehicle-region detection result as the vehicle detection result of the input image.
  12. The method according to claim 11, wherein detecting a lamp region in the lamp-region detection result that matches that lamp region, obtaining a matched lamp-region pair, comprises:
    calculating the height difference between that lamp region and each other lamp region in the lamp-region detection result;
    calculating the ratio of the height difference to the height of a predetermined region of the input image;
    determining candidate lamp-region pairs according to the height difference and the ratio;
    determining the matched lamp-region pair among the candidate lamp-region pairs according to the widths of the candidate lamp-region pairs.
PCT/CN2016/102158 2016-10-14 2016-10-14 车辆检测装置、车辆计数装置及方法 WO2018068313A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/102158 WO2018068313A1 (zh) 2016-10-14 2016-10-14 车辆检测装置、车辆计数装置及方法

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/102158 WO2018068313A1 (zh) 2016-10-14 2016-10-14 车辆检测装置、车辆计数装置及方法

Publications (1)

Publication Number Publication Date
WO2018068313A1 true WO2018068313A1 (zh) 2018-04-19

Family

ID=61906113

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/102158 WO2018068313A1 (zh) 2016-10-14 2016-10-14 车辆检测装置、车辆计数装置及方法

Country Status (1)

Country Link
WO (1) WO2018068313A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101382997A (zh) * 2008-06-13 2009-03-11 青岛海信电子产业控股股份有限公司 夜间车辆的检测与跟踪方法及装置
US7577274B2 (en) * 2003-09-12 2009-08-18 Honeywell International Inc. System and method for counting cars at night
CN102231236A (zh) * 2011-06-14 2011-11-02 汉王科技股份有限公司 车辆计数方法和装置
CN103150898A (zh) * 2013-01-25 2013-06-12 大唐移动通信设备有限公司 一种夜间车辆检测方法、跟踪方法及装置
CN105718923A (zh) * 2016-03-07 2016-06-29 长安大学 一种基于逆投影图的夜间车辆检测与计数方法


Similar Documents

Publication Publication Date Title
US20150278615A1 (en) Vehicle exterior environment recognition device
US11108970B2 (en) Flicker mitigation via image signal processing
JP2019053619A (ja) 信号識別装置、信号識別方法、及び運転支援システム
JP2012226513A (ja) 検知装置、及び検知方法
US9336449B2 (en) Vehicle recognition device
US20140028873A1 (en) Image processing apparatus
WO2013189464A2 (zh) 一种近正向俯视监控视频行人跟踪计数方法和装置
US10121083B2 (en) Vehicle exterior environment recognition apparatus
US20160379070A1 (en) Vehicle exterior environment recognition apparatus
TWI609807B (zh) 影像評估方法以及其電子裝置
WO2019085930A1 (zh) 车辆中双摄像装置的控制方法和装置
CN109816621B (zh) 异常光斑的检测装置及方法、电子设备
WO2018058530A1 (zh) 目标检测方法、装置以及图像处理设备
CN103632559B (zh) 基于视频分析的红绿灯状态检测方法
JP2019020956A (ja) 車両周囲認識装置
JP7241772B2 (ja) 画像処理装置
JP2019106644A (ja) 付着物検出装置および付着物検出方法
US20210089818A1 (en) Deposit detection device and deposit detection method
WO2018068313A1 (zh) 车辆检测装置、车辆计数装置及方法
JP7251425B2 (ja) 付着物検出装置および付着物検出方法
JP2021051377A (ja) 付着物検出装置および付着物検出方法
JP6868932B2 (ja) 物体認識装置
KR20140147211A (ko) 안개 발생 여부 판단 방법 및 이를 위한 장치
CN110646173A (zh) 一种远光灯持续开启的检测方法
CN109255349B (zh) 目标检测方法、装置及图像处理设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 16918619; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 16918619; Country of ref document: EP; Kind code of ref document: A1)