WO2013053159A1 - Method and device for vehicle tracking - Google Patents

Method and device for vehicle tracking

Info

Publication number
WO2013053159A1
WO2013053159A1 (PCT/CN2011/081782, CN2011081782W)
Authority
WO
WIPO (PCT)
Prior art keywords
target point
tracked
license plate
information
current
Prior art date
Application number
PCT/CN2011/081782
Other languages
English (en)
French (fr)
Inventor
王晓曼
陈维强
刘新
刘微
刘韶
Original Assignee
青岛海信网络科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 青岛海信网络科技股份有限公司
Publication of WO2013053159A1

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/017 - Detecting movement of traffic to be counted or controlled; identifying vehicles
    • G08G1/0175 - Detecting movement of traffic to be counted or controlled; identifying vehicles by photographing vehicles, e.g. when violating traffic rules

Definitions

  • The invention relates to the technical field of intelligent traffic monitoring, and in particular to a method and a device for tracking a vehicle. Background art
  • Current vehicle tracking methods include: a license plate based tracking acquisition method and a motion information based tracking acquisition method.
  • The license-plate-based tracking method includes: determining the geographic position information of the current target point in the current frame video image, extracting the geographic position information of all tracked target points in the previous frame video image, and obtaining the minimum distance among the distances between the current target point and all tracked target points; when the minimum distance is less than a set value, the current target point is determined to be the tracked target point corresponding to the minimum distance.
  • This method must first locate the geographic position of the current target point; vehicles that are not located are easily missed, and the probability of tracking errors is relatively large.
  • The motion-information-based tracking and capture method includes: comparing the image within a virtual loop in the current frame video image with the image within the virtual loop in the previous frame video image to obtain a frame-difference map, and traversing the frame-difference map to determine whether each pixel is a white point; if the number of white points exceeds half of the total number of pixels in the frame-difference map, the state of the virtual loop is set to 1, otherwise it is set to 0. When the virtual loop state changes from 0 to 1, a vehicle is determined to have entered the loop; when it changes from 1 to 0, the vehicle is determined to have left the loop and is captured at that moment. This method has considerable limitations, such as repeated capture and missed capture, especially at intersections.
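The background frame-difference test described above can be sketched as follows. The white-point rule (more than half the loop pixels differ) follows the text; the per-pixel difference threshold of 30 gray levels and the function name are illustrative assumptions:

```python
def loop_state(curr_roi, prev_roi, diff_thresh=30):
    """Return 1 if a vehicle occupies the virtual loop, else 0.

    curr_roi / prev_roi: 2-D grayscale pixel arrays (lists of rows)
    cropped to the virtual loop region of the current and previous
    frames. A pixel whose absolute gray difference exceeds
    diff_thresh counts as a "white point" of the frame-difference
    map; the loop is occupied when white points exceed half of the
    total pixels in the loop.
    """
    total = white = 0
    for row_c, row_p in zip(curr_roi, prev_roi):
        for pc, pp in zip(row_c, row_p):
            total += 1
            if abs(pc - pp) > diff_thresh:
                white += 1  # white point in the frame-difference map
    return 1 if white > total / 2 else 0
```

A 0-to-1 transition of the returned state would mark a vehicle entering the loop, and a 1-to-0 transition would mark it leaving (the capture moment).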
  • Embodiments of the present invention provide a vehicle tracking method and apparatus for improving the efficiency of an intelligent transportation system.
  • An embodiment of the present invention provides a vehicle tracking method, including: determining a license plate recognized from the detection area of the current frame video image as the current target point; matching the license plate information of the current target point with the license plate information of each target point to be tracked; if the license plate information of the current target point matches the license plate information of a target point to be tracked, determining that the current target point is that target point to be tracked, and updating the tracking list information of the target point to be tracked; otherwise, determining that the current target point is a new target point to be tracked, and establishing tracking list information for the new target point to be tracked.
  • Each tracking list information includes: position information of the corresponding target point to be tracked on each frame of the video image, and a license plate character identifier.
  • An embodiment of the present invention provides a device for tracking a vehicle, including:
  • An identification unit configured to determine, as a current target point, a license plate recognized from a detection area of the current frame video image
  • a matching unit configured to match the license plate information of the current target point with the license plate information of each target point to be tracked
  • a first tracking unit configured to: when the license plate information of the current target point matches the license plate information of a target point to be tracked, determine that the current target point is that target point to be tracked, and update the tracking list information of the target point to be tracked; when the license plate information of the current target point does not match the license plate information of any target point to be tracked, determine that the current target point is a new target point to be tracked, and establish tracking list information for the new target point to be tracked, wherein each tracking list information includes: position information of the target point to be tracked on each frame of the video image and a license plate character identifier.
  • Vehicle tracking is performed by matching license plate information for vehicles in the detection area, so that accurate vehicle tracking can be realized with only a small calculation amount, eliminating the need for a large number of personnel to participate in the vehicle tracking process and improving the efficiency of intelligent transportation systems.
  • FIG. 1 is a flow chart of vehicle tracking in an embodiment of the present invention
  • FIG. 2 is a flow chart of vehicle tracking in a non-detection area according to an embodiment of the present invention
  • FIG. 3 is a structural diagram of a vehicle tracking device in an embodiment of the present invention. Detailed description
  • In the embodiment of the present invention, the license plate of each vehicle in the detection area of the current frame image is recognized, the license plate information of each recognized license plate is matched with the license plate information of each target point to be tracked, and whether each recognized license plate is a target point to be tracked is determined according to the matching result.
  • When the license plate information of a recognized license plate matches the license plate information of a target point to be tracked, the recognized license plate is that target point to be tracked; when the license plate information of the recognized license plate does not match the license plate information of any target point to be tracked, the recognized license plate is determined to be a new target point to be tracked.
  • For a target point to be tracked that does not appear in the detection area, whether it is still in the current frame video image is determined by predictive trajectory tracking: when a license plate appearing in the predicted area matches the target point to be tracked, that license plate is determined to be the target point to be tracked; otherwise, the target point to be tracked does not appear in the current frame video image, that is, it is not tracked.
  • the camera picture information in multiple lanes can be acquired by the camera, and the detection area and the tracking area in the video image are determined according to the situation of the intersection and the position of the camera installation.
  • The principle for setting the detection area is that, at normal speed, a vehicle appears in the detection area for 10 frames or more; generally, 1/4 to 1/3 of the video image is determined as the detection area. The area between the upper end of the detection area and the zebra crossing at the opposite intersection is set as the tracking area; vehicles in the tracking area are not located and recognized, and only predictive trajectory tracking is performed. In this way, the license plate can be accurately recognized, the vehicle can be correctly tracked, and time is saved.
  • Each target point to be tracked has appeared in a previous video image, that is, in the video image of the previous frame or an earlier frame. Therefore, tracking list information is stored for each target point to be tracked, where the tracking list information includes: position information of the target point to be tracked on each frame of the video image and the license plate character identifier; it may also include the frame number and storage location information of each frame of the video image.
  • For example, the tracking list information of a target point to be tracked includes: license plate character identifier 0012300; the position coordinates on the 108th frame video image are (x1, y1), and the 108th frame video image is stored in storage unit 8; the position coordinates on the 109th frame video image are (x2, y2), and the 109th frame video image is stored in storage unit 9.
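The tracking list described above can be sketched as a small data structure. The class and field names are hypothetical, chosen only to mirror the example of plate 0012300 seen in frames 108 and 109:

```python
from dataclasses import dataclass, field

@dataclass
class TrackEntry:
    frame_no: int      # frame number of the video image
    position: tuple    # (x, y) of the target point in that frame
    storage_unit: int  # image-buffer slot holding the frame

@dataclass
class TrackingList:
    plate_id: str                      # license plate character identifier
    entries: list = field(default_factory=list)

    def update(self, frame_no, position, storage_unit):
        """Append the target's position in a newly processed frame."""
        self.entries.append(TrackEntry(frame_no, position, storage_unit))

# The example from the text: plate "0012300" seen in frames 108 and 109.
track = TrackingList("0012300")
track.update(108, (10, 20), 8)  # (x1, y1), stored in unit 8
track.update(109, (12, 24), 9)  # (x2, y2), stored in unit 9
```

One such record per target point to be tracked is enough to support both the matching step (last known position and plate characters) and the later capture step (frame number and storage location).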
  • the vehicle information in the detection area is tracked by using the license plate information matching.
  • For a target point to be tracked that does not appear in the detection area, it is also necessary to determine whether it appears in the tracking area, which requires predictive trajectory tracking.
  • a specific process of a vehicle tracking method provided by an embodiment of the present invention includes:
  • Step 101 Identify a license plate from the detection area of the current frame video image, and determine the recognized license plate as the current target point.
  • a license plate in the detection area of the current frame video image can be identified by license plate location, character segmentation, and license plate recognition, and the license plate information of the license plate is obtained.
  • the license plate information includes: a license plate character identifier, and position information of the license plate on the current frame video image.
  • the identified license plate is determined as the current target point, and the license plate information of the current target point is obtained.
  • Step 102 Match the license plate information of the current target point with the license plate information of each target point to be tracked, that is, find among all the target points to be tracked one whose license plate information matches that of the current target point; if such a target point is found, go to step 103, otherwise go to step 104.
  • As mentioned above, the license plate information includes: the license plate character identifier, and the position information of the license plate on the current frame video image. Therefore, the matching may be performed according to the position information first and, if that is unsuccessful, according to the license plate character identifier; or the license plate characters may be matched directly.
  • Preferably, the position information is matched first, and then the license plate character identifier, so that the calculation amount is small and the matching comparison process is simple.
  • The matching according to the position information specifically includes: calculating, from the position information in the license plate information, the distances between the current target point and each target point to be tracked, and obtaining the minimum distance among them; when the minimum distance is less than a first threshold, it is determined that the license plate information of the current target point matches that of the target point to be tracked corresponding to the minimum distance.
  • The first threshold is the maximum width of a license plate in the image multiplied by a ratio greater than 1; in general, the maximum width is the width of a blue license plate at the bottom of the image.
  • When position matching is unsuccessful, the license plate character identifier is used for matching: the license plate character identifier of the current target point is directly compared with that of each target point to be tracked; when the number of identical characters is greater than a set number, it is determined that the license plate information of the current target point matches that of the corresponding target point to be tracked, and step 103 is performed; otherwise, step 104 is performed.
  • Alternatively, to reduce the comparison workload, the second-smallest distance between the current target point and the target points to be tracked may be compared with a second threshold; when the second-smallest distance is less than the second threshold, the license plate character identifier of the current target point is compared with that of the target point to be tracked corresponding to the second-smallest distance. When the number of identical characters is greater than the set number, it is determined that the license plate information of the current target point matches that of this target point to be tracked, and step 103 is performed; in other cases, it is determined that the license plate information of the current target point does not match that of any target point to be tracked, and step 104 is performed.
  • The second threshold is greater than the first threshold and is likewise related to the maximum width of a license plate in the image.
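The two-stage matching of step 102 (position first, then plate characters) can be sketched as below. The dictionary layout, threshold values, and the default set number of identical characters are illustrative assumptions; the fallback follows the text in checking the candidate at the second-smallest distance against the second threshold:

```python
import math

def char_overlap(a, b):
    """Number of identical characters at the same positions."""
    return sum(1 for x, y in zip(a, b) if x == y)

def match_target(current, candidates, t1, t2, min_same=5):
    """Two-stage match: position first, then plate characters.

    current: dict with "pos" (x, y) and "plate" string.
    candidates: list of dicts with the same keys (targets to be tracked).
    t1 / t2: first and second distance thresholds (t2 > t1), both
    assumed to scale with the maximum plate width in the image.
    Returns the index of the matched candidate, or None.
    """
    if not candidates:
        return None
    dists = [math.dist(current["pos"], c["pos"]) for c in candidates]
    order = sorted(range(len(candidates)), key=dists.__getitem__)
    nearest = order[0]
    if dists[nearest] < t1:  # position match succeeds
        return nearest
    # Fall back to character matching on the second-smallest distance.
    second = order[1] if len(order) > 1 else order[0]
    if dists[second] < t2 and \
       char_overlap(current["plate"], candidates[second]["plate"]) > min_same:
        return second
    return None
```

A `None` result corresponds to step 104: the current target point becomes a new target point to be tracked.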
  • Step 103 Determine the current target point as the target point to be tracked that the license plate information matches, and update the tracking list information of the target point to be tracked.
  • In this case, a target point to be tracked whose license plate information matches that of the current target point has been found among all the target points to be tracked. Therefore, the current target point is determined to be that target point to be tracked, and the tracking list information of the target point to be tracked is updated: the position information of the target point on the current frame video image, and the frame number and storage location information of the current frame video image, are added to the tracking list information.
  • Continuing the example above, the updated tracking list information includes: license plate character identifier 0012300; position coordinates (x1, y1) on the 108th frame video image, which is stored in storage unit 8; position coordinates (x2, y2) on the 109th frame video image, which is stored in storage unit 9; and position coordinates (x3, y3) on the 110th frame video image, which is stored in storage unit 10.
  • Step 104 Determine the current target point as a new target point to be tracked, and establish a new tracking list information of the target point to be tracked.
  • In this case, the current target point is determined to be a new target point to be tracked, and tracking list information is established for the new target point to be tracked, including: the license plate character identifier, the position information of the new target point on the current frame video image, and the frame number and storage location information of the current frame video image.
  • In the above manner, each license plate recognized in the detection area can be processed, and each license plate is determined to be either an existing target point to be tracked or a new target point to be tracked.
  • If every target point to be tracked can be matched in the detection area of the current frame video image, the tracking process ends. If a target point to be tracked does not appear in the detection area of the current frame video image, it may appear in the tracking area; therefore, when a specified target point to be tracked is not detected in the detection area of the current frame video image, a subsequent predictive trajectory tracking process is also required.
  • For example, suppose the target points to be tracked are vehicle 1, vehicle 2, and vehicle 3, and four target points appear in the detection area of the current frame video image. If, through the above tracking process, the four target points are determined to be vehicle 1, vehicle 2, vehicle 3, and vehicle 4, the tracking process ends, since every target point to be tracked has been tracked. If instead the four target points are determined to be vehicle 1, vehicle 2, vehicle 4, and vehicle 5, then vehicle 3 is not tracked and may appear in the tracking area; therefore, a subsequent predictive trajectory tracking process is required.
  • In the vehicle tracking process, when a specified target point to be tracked is not detected in the detection area of the current frame video image, the process further includes predictive trajectory tracking, which specifically includes:
  • Step 201 Obtain, from the tracking list information of the undetected target point to be tracked, the position information of that target point in at least three frames of video images.
  • For example, the position information of the target point in the previous three frames of video images is A1(x1, y1), A2(x2, y2), A3(x3, y3).
  • Step 202 Determine a prediction area in the current frame video image according to the acquired location information.
  • For example, if the acquired position information is A1(x1, y1), A2(x2, y2), A3(x3, y3), the slope tmpSlope1 and intercept tmpOffset1 of the line A1A2, the slope tmpSlope2 and intercept tmpOffset2 of the line A1A3, and the slope tmpSlope3 and intercept tmpOffset3 of the line A2A3 are calculated, and the average slope Slope and average intercept Offset are then obtained. From these, the approximate position B(x, y) at which the target point can appear on the current frame video image is calculated, and a set area centered on B(x, y) is determined as the prediction area.
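A minimal sketch of this prediction step follows. The averaging of the three pairwise slopes and intercepts follows the text; since the text does not state how the x coordinate is advanced into the current frame, this sketch assumes x continues by the mean per-frame step, and purely vertical motion (equal x values) is not handled:

```python
def predict_position(p1, p2, p3):
    """Predict where the target appears in the current frame.

    p1, p2, p3: (x, y) positions A1, A2, A3 from the three previous
    frames. Pairwise slopes/intercepts of lines A1A2, A1A3, A2A3 are
    averaged; the predicted x is an assumption (mean per-frame step).
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    pairs = [((x1, y1), (x2, y2)), ((x1, y1), (x3, y3)), ((x2, y2), (x3, y3))]
    slopes, offsets = [], []
    for (xa, ya), (xb, yb) in pairs:
        s = (yb - ya) / (xb - xa)   # tmpSlope1..3 (xa == xb not handled)
        slopes.append(s)
        offsets.append(ya - s * xa)  # tmpOffset1..3
    slope = sum(slopes) / 3          # average slope Slope
    offset = sum(offsets) / 3        # average intercept Offset
    x = x3 + (x3 - x1) / 2           # assumed: mean x step per frame
    return (x, slope * x + offset)   # approximate position B(x, y)
```

The prediction area would then be a set window centered on the returned B(x, y).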
  • Step 203 Perform template matching on the license plate in the prediction area, and obtain a minimum mean value of the pixel gray difference mean values of each target area obtained in the template matching process.
  • Specifically, the license plate image of the target point to be tracked is used as a template. The upper-left corner of the template is first aligned with the upper-left corner of the prediction area, and the area corresponding to the template size is taken as the current target area. The gray values of corresponding pixels in the template and the current target area are subtracted, the absolute values are taken and summed over all pixels of the current target area, and the sum is divided by the total number of pixels in the template to obtain the mean pixel gray difference of the current target area. The alignment point is then moved to the next pixel and the template matching process is repeated until every pixel in the prediction area has been traversed, yielding a mean value for each target area; these mean values are compared to obtain the minimum mean value of the template matching.
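The sliding-window matching of step 203 can be sketched as follows, assuming grayscale images as 2-D lists of pixel values; the function name is hypothetical:

```python
def template_match_min_mean(template, region):
    """Slide the plate template over the prediction region.

    Returns (min_mean, (row, col)): the smallest mean absolute
    gray-level difference over all candidate target areas, and the
    upper-left corner of the target area that achieves it.
    """
    th, tw = len(template), len(template[0])
    rh, rw = len(region), len(region[0])
    best_mean, best_pos = float("inf"), None
    for r in range(rh - th + 1):
        for c in range(rw - tw + 1):
            total = 0
            for i in range(th):
                for j in range(tw):
                    # absolute gray difference of corresponding pixels
                    total += abs(region[r + i][c + j] - template[i][j])
            mean = total / (th * tw)   # mean pixel gray difference
            if mean < best_mean:
                best_mean, best_pos = mean, (r, c)
    return best_mean, best_pos
```

The returned minimum mean is what step 204 compares against the third threshold.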
  • Step 204 Compare the minimum mean value of the template matching with the third threshold. When the minimum mean value is less than the third threshold, perform step 205. Otherwise, perform step 206.
  • Step 205 Determine a target area corresponding to the minimum mean value as an undetected target point to be tracked, and update the track list information of the undetected target point to be tracked.
  • In step 203, the mean pixel gray difference corresponding to each target area was obtained. When the minimum of these mean values is less than the third threshold, the target area corresponding to the minimum mean value is determined to be the true target, that is, the undetected target point to be tracked, and the tracking list information of the undetected target point to be tracked is updated.
  • The update process includes: adding the position information C(x, y) of the target area, and the frame number and storage location information of the current frame video image, to the tracking list information.
  • Step 206 Perform coarse positioning in the prediction area. If the coarse positioning is successful, go to step 207; otherwise, determine that the undetected target point to be tracked does not appear in the current frame video image.
  • Specifically, an edge operator is used to extract a binarized edge image, and the entire edge image is scanned line by line to find suspected license plate scanning areas according to the characteristics of the vertical edges of a license plate: if the number of pixel jumps within a specific pixel segment of the current scan line reaches a certain value, that segment is determined to be a suspected license plate segment. After all line scans are completed, suspected license plate segments on adjacent rows whose left and right positions are relatively close are merged to form a suspected license plate scanning area.
  • For example, in the suspected license plate segment A in the first row, the leftmost pixel is the 3rd pixel and the rightmost is the 83rd; in segment B in the second row, the leftmost pixel is the 2nd and the rightmost the 82nd; in segment C in the third row, the leftmost pixel is the 3rd and the rightmost the 83rd. The left and right positions of the suspected license plate segments on rows 1-3 can be considered relatively close, so the three segments are merged into one suspected license plate scanning area, whose first row is segment A, second row is segment B, and third row is segment C.
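The coarse-positioning scan and merge can be sketched as below. The minimum jump count, the merge tolerance, and the simplification that each row contributes at most one segment are illustrative assumptions:

```python
def row_segment(row, min_jumps=6):
    """Return the (left, right) span of a suspected plate segment in a
    binarized edge row, or None. A 'jump' is a 0<->1 transition; the
    characters of a plate produce many vertical-edge jumps."""
    jumps = [i for i in range(1, len(row)) if row[i] != row[i - 1]]
    if len(jumps) < min_jumps:
        return None
    return (jumps[0], jumps[-1])

def merge_segments(segments, tol=3):
    """Merge per-row segments (row, left, right) whose rows are
    adjacent and whose left/right ends differ by at most tol pixels,
    yielding suspected plate scanning areas (top, bottom, left, right)."""
    areas = []
    for row, l, r in segments:
        for a in areas:  # a = [top, bottom, left, right]
            if row == a[1] + 1 and abs(l - a[2]) <= tol and abs(r - a[3]) <= tol:
                a[1], a[2], a[3] = row, min(a[2], l), max(a[3], r)
                break
        else:
            areas.append([row, row, l, r])
    return [tuple(a) for a in areas]
```

With the segments from the text's example (rows 1-3 spanning roughly pixels 2-83), the three segments merge into a single suspected license plate scanning area.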
  • Step 207 Determine the coarsely located license plate as the undetected target point to be tracked, and update the tracking list information of the undetected target point to be tracked.
  • Specifically, the suspected license plate scanning area is determined to be the coarsely located license plate, its position information is acquired, and that position information, together with the frame number and storage location information of the current frame video image, is added to the tracking list information.
  • steps 206 and 207 may not be performed, i.e., only template matching is performed without coarse positioning.
  • In the embodiment of the present invention, the vehicle image information in multiple lanes can be acquired by the camera, and each acquired current frame video image is stored in an image buffer by means of cyclic storage. After vehicle tracking is completed, a vehicle can be captured according to set conditions: the minimum video image frame number is found in the tracking list information of the target to be tracked, the storage location information corresponding to that minimum frame number is determined, and finally the corresponding video image is extracted from the image buffer according to the storage location information and determined to be the captured image.
  • For example, the image buffer is allocated 100 storage units, each storing one frame of video image; every frame of video image acquired by the camera is stored cyclically in the image buffer, and the storage location information is included in the tracking list information of the target to be tracked.
  • When a target to be tracked has appeared continuously in 10 frames of video images, or when the target to be tracked violates traffic rules, the minimum video image frame number is found in its tracking list information and the corresponding storage location information is determined. For example: the minimum video image frame number is frame 103, and the storage location information is the 3rd storage unit; the 103rd frame video image is then extracted from the 3rd storage unit and determined to be the captured image.
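The cyclic storage scheme can be sketched as follows; with 100 units, frame 103 maps to unit 3, matching the example above. The class and method names are hypothetical:

```python
class FrameBuffer:
    """Circular image buffer: storage unit index = frame_no % capacity."""

    def __init__(self, capacity=100):
        self.capacity = capacity
        self.units = [None] * capacity

    def store(self, frame_no, image):
        """Store one frame cyclically; return its storage location."""
        unit = frame_no % self.capacity
        self.units[unit] = image
        return unit  # storage location information for the tracking list

    def capture(self, unit):
        """Extract the stored frame as the captured image."""
        return self.units[unit]

buf = FrameBuffer()
unit = buf.store(103, "frame-103-image")  # frame 103 -> unit 3
```

Note that a frame can only be captured while it has not yet been overwritten, i.e. within `capacity` frames of being stored; this bounds how late the capture condition may be evaluated.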
  • In this way, the captured vehicle images are those in which the vehicle has just appeared in the field of view, so the vehicle information is clear and easily identifiable.
  • As shown in FIG. 3, the vehicle tracking device includes: an identification unit 100, a matching unit 200, and a first tracking unit 300, where
  • the identification unit 100 is configured to determine a license plate recognized from the detection area of the current frame video image as the current target point.
  • the matching unit 200 is configured to match the license plate information of the current target point with the license plate information of each target point to be tracked.
  • The first tracking unit 300 is configured to: when the license plate information of the current target point matches the license plate information of a target point to be tracked, determine that the current target point is that target point to be tracked, and update the tracking list information of the target point to be tracked; when the license plate information of the current target point does not match the license plate information of any target point to be tracked, determine that the current target point is a new target point to be tracked, and establish tracking list information for the new target point to be tracked, wherein the tracking list information includes: position information of the target point to be tracked on each frame of the video image, the license plate character identifier, and the frame number and storage location information of each frame of the video image.
  • The matching unit 200 is specifically configured to determine, according to the position information of the current target point on the current frame video image and the position information of each target point to be tracked on the previous frame video image, the distances between the current target point and the target points to be tracked, and to perform matching accordingly. When position matching is unsuccessful, the matching unit 200 matches the license plate character identifier of the current target point against that of each target point to be tracked by direct comparison; if the number of identical characters is greater than the set number, it determines that the license plate information of the current target point matches that of the target point to be tracked, and otherwise that it does not match.
  • The matching unit 200 is further configured to: when the second-smallest distance between the current target point and the target points to be tracked is less than the second threshold, and the number of identical characters between the license plate character identifier of the current target point and that of the target point to be tracked corresponding to the second-smallest distance is greater than the set number, determine that the license plate information of the current target point matches that of this target point to be tracked; when the second-smallest distance is not less than the second threshold, or the number of identical characters is not greater than the set number, determine that the license plate information of the current target point does not match that of any target point to be tracked.
  • The above vehicle tracking device uses license plate information matching to track vehicles in the detection area. For a target point to be tracked that does not appear in the detection area, it must also be determined whether the target point appears in the tracking area, that is, predictive trajectory tracking is also required; therefore, the vehicle tracking device further includes a second tracking unit.
  • The second tracking unit is configured to: when a specified target point to be tracked is not detected in the detection area of the current frame video image, obtain from its tracking list information the position information of the undetected target point in at least three frames of video images; determine a prediction area in the current frame video image according to the at least three pieces of position information; perform template matching on the license plate in the prediction area and obtain the minimum of the mean pixel gray differences of the target areas obtained during template matching; and, when the minimum mean value is less than the third threshold, determine the target area corresponding to the minimum mean value to be the undetected target point to be tracked and update the tracking list information of the undetected target point to be tracked.
  • When the minimum mean value is greater than or equal to the third threshold, it may be determined that the target point to be tracked is not tracked; alternatively, the second tracking unit further uses coarse positioning for trajectory tracking. In that case, the second tracking unit is further configured to: when the minimum mean value is greater than or equal to the third threshold, coarsely locate the license plate of the undetected target point in the prediction area, and, when the coarse positioning is successful, determine the coarsely located license plate to be the undetected target point to be tracked and update the tracking list information of the undetected target point to be tracked.
  • the vehicle tracking device also includes: a capture unit.
  • The capturing unit is configured to: when a specified target to be tracked meets the set capture conditions, find the minimum video image frame number in the tracking list information of the specified target to be tracked, and determine the storage location information corresponding to the minimum video image frame number; and, according to the storage location information, extract the corresponding video image from the image buffer and determine the extracted video image to be the captured image.
  • In summary, in the embodiments of the present invention, vehicles in the detection area are tracked using license plate information matching; for a target point to be tracked that does not appear in the detection area, whether it appears in the tracking area is further determined using predictive trajectory tracking. In this way, accurate vehicle tracking can be achieved with only a small amount of calculation, eliminating the need for a large number of personnel to participate in the vehicle tracking process and improving the efficiency of the intelligent transportation system; moreover, accurate tracking also helps in judging violations.
  • In addition, the captured images are those in which the vehicle has just appeared in the field of view, so the vehicle information is clear and easily identifiable.
  • embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
  • the computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus.
  • the instruction apparatus implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • these computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing.
  • the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

A vehicle tracking method and device for improving the efficiency of an intelligent transportation system. The method includes: determining a license plate recognized in the detection area of the current video frame as the current target point (101); if the license plate information of the current target point matches the license plate information of a target point to be tracked, determining that the current target point is that target point to be tracked, and updating the tracking list information of that target point (103); otherwise, determining that the current target point is a new target point to be tracked, and establishing tracking list information for the new target point (104).

Description

Vehicle tracking method and device. This application claims priority to the Chinese patent application filed with the Chinese Patent Office on October 9, 2011, with application number 201110302716.3 and the title "Vehicle tracking method and device", the entire contents of which are incorporated herein by reference. Technical Field
The present invention relates to the technical field of intelligent traffic monitoring, and in particular to a vehicle tracking method and device. Background Art
With the rapid development of cities, urban populations and vehicle counts are growing sharply, traffic flow is increasing, and congestion is becoming ever more serious. Traffic has become a major problem in urban management and seriously hinders urban development. In particular, vehicle violations occurring anytime and anywhere make monitoring urban traffic very difficult, which has given rise to intelligent transportation systems that monitor vehicles using moving-target video tracking technology.
Current vehicle tracking methods include license-plate-based tracking and capture, and motion-information-based tracking and capture. The license-plate-based method includes: determining the geographic position information of the current target point in the current video frame, extracting the geographic position information of all tracked target points in the previous video frame, and obtaining the minimum of the distances between the current target point and all tracked target points; when this minimum distance is less than a set value, the current target point is determined to be the tracked target point corresponding to the minimum distance. This method must first locate the geographic position of the current target point; vehicles that are not located are easily missed, and the probability of tracking errors is relatively high.
The motion-information-based method includes: comparing the image inside a preset virtual loop in the current video frame with the image inside the same virtual loop in the previous frame to obtain a frame-difference image, and traversing every pixel of the frame-difference image to test whether it is a white point. If the number of white points exceeds half the total number of pixels in the frame-difference image, the state of the virtual loop is set to 1; otherwise it is set to 0. When the virtual loop state changes from 0 to 1, a vehicle is determined to have entered the virtual loop; it is then continually checked whether the number of white points in the virtual loop exceeds half of its total pixels, and while it does, the vehicle is determined to still be passing through the loop. When the virtual loop state changes from 1 to 0, the vehicle is determined to have left the loop, and at that moment the vehicle is captured. This method has significant limitations, such as repeated captures and missed captures; at intersections in particular, the possibility of repeated capture is even greater.
It can be seen that the accuracy of current vehicle tracking methods is still not high, which directly affects the efficiency of intelligent transportation systems. Summary of the Invention
Embodiments of the present invention provide a vehicle tracking method and device for improving the efficiency of an intelligent transportation system.
An embodiment of the present invention provides a vehicle tracking method, including: determining a license plate recognized in the detection area of the current video frame as the current target point; matching the license plate information of the current target point against the license plate information of each target point to be tracked; if the license plate information of the current target point matches the license plate information of one target point to be tracked, determining that the current target point is that target point to be tracked, and updating the tracking list information of that target point; otherwise,
determining that the current target point is a new target point to be tracked, and establishing tracking list information for the new target point, where each item of tracking list information includes: the position information of the corresponding target point to be tracked in each video frame and a license plate character identifier.
An embodiment of the present invention provides a vehicle tracking device, including:
a recognition unit, configured to determine a license plate recognized in the detection area of the current video frame as the current target point;
a matching unit, configured to match the license plate information of the current target point against the license plate information of each target point to be tracked;
a first tracking unit, configured to: when the license plate information of the current target point matches the license plate information of one target point to be tracked, determine that the current target point is that target point to be tracked and update its tracking list information; and when the license plate information of the current target point matches none of the target points to be tracked, determine that the current target point is a new target point to be tracked and establish tracking list information for the new target point, where each item of tracking list information includes: the position information of the corresponding target point to be tracked in each video frame and a license plate character identifier.
In embodiments of the present invention, vehicles in the detection area are tracked by license plate information matching. In this way, accurate vehicle tracking can be achieved with only a small amount of computation, so a large number of personnel are not needed in the vehicle tracking process, improving the efficiency of the intelligent transportation system. Brief Description of the Drawings
Figure 1 is a flowchart of vehicle tracking in an embodiment of the present invention;
Figure 2 is a flowchart of vehicle tracking outside the detection area in an embodiment of the present invention;
Figure 3 is a structural diagram of a vehicle tracking device in an embodiment of the present invention. Detailed Description
In embodiments of the present invention, after the current frame is acquired by a camera, the license plate of every vehicle in the detection area of the current frame is recognized, the license plate information of each recognized plate is matched against the license plate information of each target point to be tracked, and whether each recognized plate is a target point to be tracked is determined from the matching result. When the license plate information of a recognized plate matches that of a target point to be tracked, the recognized plate is that target point to be tracked; when the license plate information of a recognized plate matches none of the target points to be tracked, the recognized plate is determined to be a new target point to be tracked. For a target point to be tracked that does not appear in the detection area, predicted trajectory tracking is used to determine whether that target point is still present in the current video frame: when a target license plate appearing in the prediction area matches the target point to be tracked, that license plate is determined to be the target point to be tracked; otherwise, the target point does not appear in the current video frame, i.e. it has not been tracked.
In embodiments of the present invention, vehicle images covering multiple lanes can be acquired by the camera. The detection area and the tracking area in the video image are determined according to the intersection layout and the camera's mounting position. The principle for setting the detection area is that an ordinary vehicle travelling at normal speed should appear in the detection area for 10 or more frames; generally, the lower 1/4 to 1/3 of the video image is set as the detection area. The area from the top of the detection area to the zebra crossing of the opposite intersection is set as the tracking area; within the tracking area, vehicles are not located or recognized, and only predicted trajectory tracking is performed. This both ensures accurate plate recognition and correct vehicle tracking, and saves time.
In embodiments of the present invention, each target point to be tracked has already appeared in a previous video frame, i.e. in the previous frame or the frame before that; therefore, tracking list information has been stored for each target point to be tracked, where the tracking list information includes: the position information of the target point in each video frame and the license plate character identifier; it may also include each frame's frame number and storage location information. For example: the current video frame is frame 110, and a target point to be tracked has appeared in frames 108 and 109; then its tracking list information includes: license plate character identifier 0012300; position coordinates (x1, y1) in frame 108, with frame 108 stored in storage unit 8; and position coordinates (x2, y2) in frame 109, with frame 109 stored in storage unit 9.
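The tracking list just described can be sketched as a simple per-target record. The class and field names below (`TrackEntry`, `plate_id`, `positions`, `storage`) are illustrative assumptions, not names used by the patent; the patent only specifies what is stored.

```python
from dataclasses import dataclass, field

@dataclass
class TrackEntry:
    """Tracking list information for one target point to be tracked:
    per-frame position, plate character identifier, and (optionally)
    each frame's number and buffer storage location."""
    plate_id: str                                   # license plate character identifier
    positions: dict = field(default_factory=dict)   # frame number -> (x, y)
    storage: dict = field(default_factory=dict)     # frame number -> buffer slot

    def update(self, frame_no, xy, slot):
        """Record an appearance of this target in a new frame."""
        self.positions[frame_no] = xy
        self.storage[frame_no] = slot

# The worked example from the text: plate 0012300 seen in frames 108 and 109.
entry = TrackEntry(plate_id="0012300")
entry.update(108, (10.0, 20.0), 8)   # (x1, y1), stored in storage unit 8
entry.update(109, (12.0, 24.0), 9)   # (x2, y2), stored in storage unit 9
print(min(entry.positions))          # earliest recorded frame -> 108
```

Keeping the frame-number-to-slot mapping in the record is what later lets the capture step find the earliest buffered frame of a target.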
After the tracking list information of each target point to be tracked has been stored, vehicles in the detection area are tracked by license plate information matching; for a target point to be tracked that does not appear in the detection area, it must also be determined whether that target point appears in the tracking area, i.e. predicted trajectory tracking is also needed.
Embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Referring to Figure 1, the specific process of the vehicle tracking method provided by an embodiment of the present invention includes:
Step 101: Recognize a license plate in the detection area of the current video frame, and determine the recognized plate as the current target point.
After the current video frame is acquired by the camera and stored in the image buffer, a license plate in the detection area of the current frame can be recognized through plate localization, character segmentation, and plate recognition, and the license plate information of that plate is obtained. The license plate information includes: the license plate character identifier and the plate's position information in the current video frame.
The recognized plate is determined as the current target point, and the license plate information of the current target point is obtained.
Step 102: Match the license plate information of the current target point against the license plate information of each target point to be tracked, i.e. search all target points to be tracked for one whose license plate information matches that of the current target point; if one is found, go to step 103; otherwise, go to step 104.
Since the license plate information includes the license plate character identifier and the plate's position information in the current video frame, matching can first be performed on position information and, if unsuccessful, then on the license plate character identifier. Alternatively, matching can be performed directly on the license plate character identifier.
Preferably, matching is performed first on position information and then on the license plate character identifier; this keeps the computation small and the comparison process simple. Matching on position information specifically includes:
From the tracking list information of each target point to be tracked, obtain that target point's position information in the previous video frame; then, from the current target point's position in the current frame and each target point's position in the previous frame, determine the distance between the current target point and each target point to be tracked, and compare the minimum of these distances with a first threshold. If the minimum distance is less than the first threshold, determine that the license plate information of the current target point matches that of the first target point to be tracked corresponding to the minimum distance, and go to step 103; otherwise, match on the license plate character identifier, i.e. match the character identifier of the current target point against the character identifier of each target point to be tracked, and determine from the result whether the current target point's license plate information matches that of some target point to be tracked.
The first threshold is the maximum width of a license plate in the image multiplied by a ratio greater than 1; generally, the maximum width is the width of a blue license plate at the very bottom of the image.
In embodiments of the present invention, when matching on the license plate character identifier, the current target point's character identifier can be compared directly with each target point's character identifier; when the number of identical characters exceeds a set number, the current target point's license plate information is determined to match that target point's license plate information, and step 103 is executed; otherwise, step 104 is executed.
To further reduce computation, the second-smallest of the distances between the current target point and the target points to be tracked can also be compared with a second threshold. When the second-smallest distance is less than the second threshold, the current target point's license plate character identifier is compared with that of the second target point to be tracked corresponding to the second-smallest distance; when the number of identical characters exceeds the set number, the current target point's license plate information is determined to match that of the second target point to be tracked, and step 103 is executed; in all other cases, the current target point's license plate information is determined to match none of the target points to be tracked, and step 104 is executed. That is, when the second-smallest distance is less than the second threshold and the number of identical characters between the current target point's character identifier and that of the corresponding second target point exceeds the set number, the match is confirmed and step 103 is executed; otherwise, step 104 is executed. The second threshold is greater than the first threshold and is also related to the maximum width of a license plate in the image.
In this way, only one comparison is needed to determine whether the current target point's license plate information matches that of a target point to be tracked, greatly saving resources.
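The two-stage match just described (nearest-distance gate first, then a character comparison against the second-nearest candidate) can be sketched as follows. The function names and threshold parameters are illustrative assumptions; concrete threshold values would come from the plate-width rules above.

```python
import math

def count_same_chars(a, b):
    """Number of positions at which two plate character strings agree."""
    return sum(1 for ca, cb in zip(a, b) if ca == cb)

def match_target(current, candidates, t1, t2, min_same):
    """Return the index of the matched candidate, or None.

    current    -- (plate_id, (x, y)) of the current target point
    candidates -- list of (plate_id, (x, y)) from the previous frame
    t1, t2     -- first and second distance thresholds (t2 > t1)
    min_same   -- required count of identical plate characters
    """
    if not candidates:
        return None
    plate, (cx, cy) = current
    dists = [math.hypot(cx - x, cy - y) for _, (x, y) in candidates]
    order = sorted(range(len(dists)), key=dists.__getitem__)
    # Stage 1: the nearest candidate matches outright if close enough.
    if dists[order[0]] < t1:
        return order[0]
    # Stage 2: fall back to plate characters against the second-nearest.
    if len(order) > 1 and dists[order[1]] < t2:
        cand_plate = candidates[order[1]][0]
        if count_same_chars(plate, cand_plate) > min_same:
            return order[1]
    return None

cands = [("A12345", (0.0, 0.0)), ("A12945", (3.0, 4.0))]
print(match_target(("A12345", (0.5, 0.5)), cands, t1=2.0, t2=10.0, min_same=4))  # -> 0
```

A failed match (`None`) corresponds to step 104: the current target point becomes a new target to be tracked.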
Step 103: Determine the current target point as the target point to be tracked whose license plate information matched, and update that target point's tracking list information. A target point to be tracked whose license plate information matches that of the current target point has been found among all target points to be tracked; therefore, the current target point is determined to be that target point to be tracked, and its tracking list information is updated, i.e. the target point's position in the current video frame, the current frame's frame number, and its storage location information are added to the tracking list information.
Continuing the example above in which the current video frame is frame 110 and a target point to be tracked has appeared in frames 108 and 109: when the recognized current target point is that target point to be tracked, the updated tracking list information includes: license plate character identifier 0012300; position coordinates (x1, y1) in frame 108, with frame 108 stored in storage unit 8; position coordinates (x2, y2) in frame 109, with frame 109 stored in storage unit 9; and position coordinates (x3, y3) in frame 110, with frame 110 stored in storage unit 10.
Step 104: Determine the current target point as a new target point to be tracked, and establish tracking list information for the new target point.
Since no target point to be tracked whose license plate information matches that of the current target point was found among all target points to be tracked, the current target point is determined to be a new target point to be tracked, and tracking list information is established for it, including: the license plate character identifier, the new target point's position in the current video frame, and the current frame's frame number and storage location information.
By repeating the above process, every license plate recognized in the detection area can be located and determined to be either an existing target point to be tracked or a new target point to be tracked. If, through this process, a matching target point is determined in the detection area of the current frame for every target point to be tracked, the tracking process ends. If some target point to be tracked does not appear in the detection area of the current frame, it may appear in the tracking area; therefore, when a given target point to be tracked is not detected in the detection area of the current frame, the subsequent predicted trajectory tracking process is still needed. For example: the target points to be tracked are vehicle 1, vehicle 2, and vehicle 3, and four target points appear in the detection area of the current frame. If the tracking process determines these four to be vehicle 1, vehicle 2, vehicle 3, and vehicle 4, then every target point to be tracked has been tracked and the tracking flow ends. If instead the process determines them to be vehicle 1, vehicle 2, vehicle 4, and vehicle 5, then vehicle 3 has not been tracked and may appear in the tracking area, so the subsequent predicted trajectory tracking process is still needed.
Therefore, in embodiments of the present invention, when a given target point to be tracked is not detected in the detection area of the current frame, the vehicle tracking process further includes predicted trajectory tracking; referring to Figure 2, it specifically includes:
Step 201: From the tracking list information of the undetected target point to be tracked, obtain that target point's position information in at least three video frames.
From the tracking list information of the undetected target point to be tracked, obtain its position information in any three, four, or more preceding video frames; preferably, obtain its positions in the three preceding frames, namely A1(x1, y1), A2(x2, y2), A3(x3, y3).
Step 202: Determine the prediction area in the current video frame from the obtained position information.
Having obtained the target point's positions in at least three video frames, and based on the principle that two points determine a straight line, compute the slope and intercept of the line through each pair of points, then obtain the average slope and average intercept. With the average slope and average intercept, the target point's position in the current frame can be computed from the principle y = ax + b. Finally, a set region centered on this position is the prediction area; the size of the prediction area is related to the size of the license plate.
For example: the obtained positions are A1(x1, y1), A2(x2, y2), A3(x3, y3). Compute the slope tmpSlope1 and intercept tmpOffset1 of line A1A2, the slope tmpSlope2 and intercept tmpOffset2 of line A1A3, and the slope tmpSlope3 and intercept tmpOffset3 of line A2A3, then obtain the average slope Slope and average intercept Offset. With the slope and intercept, the approximate position B(x, y) where the target to be tracked may appear in the current frame can be computed from y = Slope(x) + Offset. The set region centered on B(x, y) is determined as the prediction area.
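The pairwise slope/intercept averaging in the example above can be sketched directly. Variable roles mirror the text's tmpSlope/tmpOffset; skipping pairs with equal x (a vertical line) is an implementation choice of this sketch, not something the patent specifies.

```python
from itertools import combinations

def predict_position(points, x_new):
    """Predict (x_new, y) on the averaged line through earlier positions.

    points -- at least three (x, y) positions from earlier frames.
    """
    slopes, offsets = [], []
    for (x1, y1), (x2, y2) in combinations(points, 2):
        if x2 == x1:
            continue                        # avoid division by zero
        slope = (y2 - y1) / (x2 - x1)       # tmpSlope for this pair
        slopes.append(slope)
        offsets.append(y1 - slope * x1)     # tmpOffset for this pair
    slope = sum(slopes) / len(slopes)       # average slope (Slope)
    offset = sum(offsets) / len(offsets)    # average intercept (Offset)
    return (x_new, slope * x_new + offset)  # y = Slope * x + Offset

# Three collinear positions on y = 2x + 1 reproduce that line exactly.
print(predict_position([(1, 3), (2, 5), (3, 7)], 4))  # -> (4, 9.0)
```

The returned point plays the role of B(x, y); the prediction area is then a plate-sized window around it.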
Step 203: Perform template matching on the license plates in the prediction area, and obtain the minimum of the mean pixel gray differences of the target regions obtained during template matching.
Take the license plate image of the target point to be tracked as the template, align the top-left corner of the template with the top-left corner of the prediction area, and take a region of the same size as the template as the current target region. Compute the absolute differences between the gray values of corresponding pixels of the template and the current target region, sum these absolute values over all pixels of the current target region, and divide the sum by the total number of pixels in the template to obtain the mean pixel gray difference of the current target region. Then, taking the pixel next to the top-left corner as the alignment point, repeat the above template matching process until every pixel of the prediction area has been traversed, obtaining the mean for each target region; compare the means of all target regions to obtain the minimum mean of the template matching.
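Step 203's exhaustive template match (mean absolute gray difference at every alignment of the plate template inside the prediction area) can be sketched with plain lists; the toy images are assumptions, and a real implementation would use an array library rather than nested Python loops.

```python
def min_mean_abs_diff(region, template):
    """Slide `template` over `region`; return the smallest mean |gray diff|.

    region, template -- 2-D lists of gray values; the template's top-left
    corner is aligned with every pixel at which the template still fits.
    """
    rh, rw = len(region), len(region[0])
    th, tw = len(template), len(template[0])
    n = th * tw                              # total pixels in the template
    best = float("inf")
    for top in range(rh - th + 1):
        for left in range(rw - tw + 1):
            total = 0
            for i in range(th):
                for j in range(tw):
                    total += abs(region[top + i][left + j] - template[i][j])
            best = min(best, total / n)      # mean gray difference here
    return best

# A template occurring exactly inside the region gives a minimum mean of 0.
region = [[9, 9, 9, 9],
          [9, 1, 2, 9],
          [9, 3, 4, 9],
          [9, 9, 9, 9]]
template = [[1, 2],
            [3, 4]]
print(min_mean_abs_diff(region, template))  # -> 0.0
```

Step 204 then compares this minimum mean against the third threshold to decide between a confirmed match and coarse localization.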
Step 204: Compare the minimum mean of the template matching with a third threshold; when the minimum mean is less than the third threshold, execute step 205; otherwise, execute step 206.
Step 205: Determine the target region corresponding to the minimum mean as the undetected target point to be tracked, and update that target point's tracking list information.
During template matching on the license plates in the prediction area, the mean pixel gray difference of each target region is obtained. When the minimum mean is less than the third threshold, the target region corresponding to the minimum mean is determined to be a true target, i.e. that target region is the undetected target point to be tracked, and its tracking list information is updated. The update process includes: adding the target region's position information C(x, y), together with the current frame's frame number and storage location information, to the tracking list information.
Step 206: Perform coarse localization within the prediction area; when coarse localization succeeds, execute step 207; otherwise, coarse localization has failed, and it is determined that the undetected target point to be tracked does not appear in the current video frame.
Obtain the grayscale image of the prediction area and binarize it to obtain a binary image; extract the edges of the binary image with the Sobel operator, and scan the whole edge binary map row by row to find suspected license plate scan regions based on the characteristic transitions of a plate's vertical edges: within a particular pixel segment of the current scan row, if the number of pixel transitions reaches a certain value, that segment is determined to be a suspected plate segment. After all rows have been scanned, the suspected plate segments are merged; specifically, suspected plate segments in adjacent rows whose left and right positions are also close to each other are merged, forming a suspected plate scan region. For example: in row 1, the leftmost pixel of suspected plate segment A is pixel 3 and the rightmost is pixel 83; in row 2, the leftmost pixel of suspected plate segment B is pixel 2 and the rightmost is pixel 82; in row 3, the leftmost pixel of suspected plate segment C is pixel 3 and the rightmost is pixel 83. The suspected plate segments of rows 1-3 can then be considered close in left-right position, and the three segments are merged into a suspected plate scan region whose first row is segment A, second row is segment B, and third row is segment C. If the height of this suspected plate scan region is less than 2 times the plate height and greater than 1/2 the plate height, coarse localization is determined to have succeeded and the suspected plate scan region is determined as the coarsely located plate; otherwise, coarse localization has failed, and it is determined that the undetected target point to be tracked does not appear in the current video frame. Step 207: Determine the coarsely located plate as the undetected target point to be tracked, and update that target point's tracking list information.
When coarse localization succeeds, the suspected plate scan region has been determined as the coarsely located plate; its position information is then obtained, and that position information, together with the current frame's frame number and storage location information, is added to the tracking list information.
Through the above process, a target point to be tracked that does not appear in the detection area of the current frame can be tracked. Of course, in another embodiment of the present invention, steps 206 and 207 may be omitted, i.e. only template matching is performed, without coarse localization.
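The row-scan core of the coarse localization in step 206 (counting vertical-edge transitions per row of the edge map, keeping plate-like rows, then merging adjacent ones into candidate bands) can be sketched as follows. The transition threshold and the merge rule used here are illustrative assumptions, and the binarization/Sobel steps are presumed already done.

```python
def count_transitions(row):
    """Number of 0/1 changes along one row of a binary edge map."""
    return sum(1 for a, b in zip(row, row[1:]) if a != b)

def suspected_rows(edge_map, min_transitions):
    """Indices of rows with enough edge transitions to look plate-like."""
    return [i for i, row in enumerate(edge_map)
            if count_transitions(row) >= min_transitions]

def merge_adjacent(rows):
    """Group consecutive row indices into candidate plate bands."""
    bands = []
    for r in rows:
        if bands and r == bands[-1][-1] + 1:
            bands[-1].append(r)
        else:
            bands.append([r])
    return bands

edge_map = [
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 1, 0, 1, 0, 1, 0, 1],   # plate-like: many vertical-edge transitions
    [1, 0, 1, 0, 1, 0, 1, 0],
    [0, 0, 1, 1, 0, 0, 0, 0],   # too few transitions
]
rows = suspected_rows(edge_map, min_transitions=5)
print(merge_adjacent(rows))  # -> [[1, 2]]
```

A band would then be accepted as the coarsely located plate only if its height lies between 1/2 and 2 plate heights, as the text describes.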
In embodiments of the present invention, vehicle images covering multiple lanes can be acquired by the camera, and the acquired current video frame is stored in the image buffer; after the above vehicle tracking is completed, vehicles can be captured according to set conditions.
Here, after the current video frame is acquired by the camera, it is stored into the image buffer circularly. When a target to be tracked is determined to satisfy the set capture condition, the minimum video frame number of that target is looked up in its tracking list information and the storage location information corresponding to that minimum frame number is determined; finally, according to the storage location information, the corresponding video frame is extracted from the image buffer and determined as the captured image.
For example: the image buffer is allocated 100 storage units, each storing one video frame. Each time the camera acquires a frame, it is stored into the image buffer circularly, and the storage location information is included in the tracking list information of the target to be tracked. When a target to be tracked appears in 10 consecutive video frames, or the target commits consecutive violations, the minimum video frame number is looked up in the target's tracking list information, and the storage location information corresponding to that minimum frame number is determined. For example: the minimum video frame number is frame 103 and the storage location is storage unit 3; then frame 103 is extracted from storage unit 3 and determined as the captured image. With this sequence-buffer capture algorithm, the captured frames are those in which the vehicle has just appeared in the field of view, so the vehicle information is clear and easily identifiable.
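The circular image buffer and sequence-buffer capture described above can be sketched as follows. The 100-slot size comes from the example; for simplicity, "frames" here are just frame numbers, and the class name `FrameRing` is an assumption.

```python
class FrameRing:
    """Circular buffer mapping buffer slots to the frames stored in them."""

    def __init__(self, size=100):
        self.slots = [None] * size
        self.next = 0

    def store(self, frame):
        """Store a frame circularly; return the slot it landed in."""
        slot = self.next
        self.slots[slot] = frame
        self.next = (self.next + 1) % len(self.slots)
        return slot

def capture(ring, track_storage):
    """Return the frame stored at the slot of the target's earliest frame.

    track_storage -- the tracking list's frame-number -> slot mapping.
    """
    first_frame = min(track_storage)      # minimum video frame number
    return ring.slots[track_storage[first_frame]]

# The text's example: frames arrive in order, and frame 103 lands in slot 3.
ring = FrameRing()
track = {}
for frame_no in range(100, 111):          # frames 100..110
    slot = ring.store(frame_no)
    if frame_no >= 103:                   # target first seen in frame 103
        track[frame_no] = slot
print(capture(ring, track))               # -> 103
```

Because the buffer is circular, the slot of the minimum frame number must be read before that slot is overwritten, which is why the capture condition is checked as soon as it is met.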
Based on the above vehicle tracking process, a vehicle tracking device can be constructed; referring to Figure 3, it includes: a recognition unit 100, a matching unit 200, and a first tracking unit 300, where:
the recognition unit 100 is configured to determine a license plate recognized in the detection area of the current video frame as the current target point;
the matching unit 200 is configured to match the license plate information of the current target point against the license plate information of each target point to be tracked;
the first tracking unit 300 is configured to: when the license plate information of the current target point matches that of one target point to be tracked, determine that the current target point is that target point to be tracked and update its tracking list information; and when the license plate information of the current target point matches none of the target points to be tracked, determine that the current target point is a new target point to be tracked and establish tracking list information for it, where the tracking list information includes: the target point's position in each video frame, the license plate character identifier, and each frame's frame number and storage location information.
Specifically, the matching unit 200 is configured to determine the distance between the current target point and each target point to be tracked from the current target point's position in the current frame and each target point's position in the previous frame; if the minimum of these distances is less than the first threshold, determine that the current target point's license plate information matches that of the first target point to be tracked corresponding to the minimum distance; otherwise, match the current target point's character identifier information against each target point's character identifier information, and determine from the result whether the current target point's license plate information matches that of some target point to be tracked.
When matching the current target point's character identifier information against each target point's character identifier information, the matching unit 200 may directly compare the current target point's license plate character identifier with that of each target to be tracked; if the number of identical characters exceeds the set number, the current target point's license plate information is determined to match that of the target point to be tracked, and otherwise not to match. Alternatively, the matching unit 200 is further specifically configured to: when the second-smallest of the distances between the current target point and the target points to be tracked is less than the second threshold, and the number of identical characters between the current target point's license plate character identifier and that of the second target to be tracked corresponding to the second-smallest distance exceeds the set number, determine that the current target point's license plate information matches that of the second target point to be tracked; and when the second-smallest distance is not less than the second threshold, or the number of identical characters does not exceed the set number, determine that the current target point's license plate information matches none of the target points to be tracked.
For vehicles in the detection area, the vehicle tracking device performs tracking by license plate information matching; for a target point to be tracked that does not appear in the detection area, it must also be determined whether the target point appears in the tracking area, i.e. predicted trajectory tracking is also needed, so the vehicle tracking device further includes a second tracking unit, where the second tracking unit is configured to: when a given target point to be tracked is not detected in the detection area of the current frame, obtain that undetected target point's position information in at least three video frames from its tracking list information, determine the prediction area in the current frame from the at least three positions, perform template matching on license plates in the prediction area, obtain the minimum of the mean pixel gray differences of the target regions obtained during template matching, and, when the minimum mean is less than the third threshold, determine the target region corresponding to the minimum mean as the undetected target point to be tracked and update its tracking list information.
When the minimum mean is greater than or equal to the third threshold, it can be determined that the target point to be tracked has not been tracked; alternatively, the second tracking unit further performs trajectory tracking by coarse localization, in which case the second tracking unit is further configured to: when the minimum mean is greater than or equal to the third threshold, perform coarse localization of the undetected target point's license plate within the prediction area, and, when coarse localization succeeds, determine the coarsely located plate as the undetected target point to be tracked and update its tracking list information.
After the above vehicle tracking is completed, vehicles can also be captured according to set conditions; therefore, the vehicle tracking device further includes a capture unit.
The capture unit is configured to: when a specified target to be tracked satisfies the set capture condition, look up the minimum video frame number in the specified target's tracking list information and determine the storage location information corresponding to the minimum frame number; and, according to the storage location information, extract the corresponding video frame from the image buffer and determine the extracted frame as the captured image.
In embodiments of the present invention, vehicles in the detection area are tracked by license plate information matching; for a target point to be tracked that does not appear in the detection area, it must also be determined whether the target point appears in the tracking area, i.e. predicted trajectory tracking is also needed. In this way, accurate vehicle tracking can be achieved with only a small amount of computation, so a large number of personnel are not needed in the vehicle tracking process, improving the efficiency of the intelligent transportation system. Moreover, accurate tracking also helps in judging violation events.
In addition, with the sequence-buffer capture algorithm, the captured vehicles are in video frames in which they have just appeared, so the vehicle information is clear and easily identifiable.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art, once apprised of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various changes and variations to the present invention without departing from its spirit and scope. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to encompass them.

Claims

Claims
1. A vehicle tracking method, characterized by comprising:
determining a license plate recognized in the detection area of the current video frame as the current target point;
matching the license plate information of the current target point against the license plate information of each target point to be tracked; if the license plate information of the current target point matches the license plate information of one target point to be tracked, determining that the current target point is said target point to be tracked, and updating the tracking list information of said target point to be tracked; otherwise,
determining that the current target point is a new target point to be tracked, and establishing tracking list information for the new target point, wherein each item of tracking list information comprises: the position information of the corresponding target point to be tracked in each video frame and a license plate character identifier.
2. The method according to claim 1, characterized in that matching the license plate information of the current target point against the license plate information of each target point to be tracked comprises:
determining the distance between the current target point and each target point to be tracked according to the position information of the current target point in the current video frame and the position information of each target point to be tracked in the previous video frame;
if the minimum of the distances between the current target point and each target point to be tracked is less than a first threshold, determining that the license plate information of the current target point matches the license plate information of the first target point to be tracked corresponding to the minimum distance; otherwise,
matching the character identifier information of the current target point against the character identifier information of each target point to be tracked, and determining from the matching result whether the license plate information of the current target point matches the license plate information of one target point to be tracked.
3. The method according to claim 2, characterized in that matching the character identifier information of the current target point against the character identifier information of each target point to be tracked, and determining from the matching result whether the license plate information of the current target point matches the license plate information of one target point to be tracked, comprises:
if the second-smallest of the distances between the current target point and each target point to be tracked is less than a second threshold, and the number of identical characters between the license plate character identifier of the current target point and that of the second target to be tracked corresponding to the second-smallest distance exceeds a set number, determining that the license plate information of the current target point matches the license plate information of the second target point to be tracked; otherwise, determining that the license plate information of the current target point matches the license plate information of none of the target points to be tracked.
4. The method according to claim 1, characterized in that, when a specified target point to be tracked is not detected in the detection area of the current video frame, the method further comprises:
obtaining, from the tracking list information of the undetected target point to be tracked, the position information of the undetected target point in at least three video frames;
determining a prediction area in the current video frame according to the at least three pieces of position information;
performing template matching on license plates in the prediction area, and obtaining the minimum of the mean pixel gray differences of the target regions obtained during template matching; when the minimum mean is less than a third threshold, determining the target region corresponding to the minimum mean as the undetected target point to be tracked, and updating the tracking list information of the undetected target point to be tracked.
5. The method according to claim 4, characterized in that, when the minimum mean is greater than or equal to the third threshold, the method further comprises:
performing coarse localization of the license plate of the undetected target point to be tracked within the prediction area, and, when coarse localization succeeds, determining the coarsely located license plate as the undetected target point to be tracked and updating the tracking list information of the undetected target point to be tracked.
6. The method according to claim 1, characterized in that each item of tracking list information further comprises: the frame number and storage location information of each video frame; and the method further comprises:
when a specified target to be tracked satisfies a set capture condition, looking up the minimum video frame number in the tracking list information of the specified target to be tracked, and determining the storage location information corresponding to the minimum video frame number;
extracting the corresponding video frame from an image buffer according to the storage location information, and determining the extracted video frame as a captured image.
7. A vehicle tracking device, characterized by comprising:
a recognition unit, configured to determine a license plate recognized in the detection area of the current video frame as the current target point; a matching unit, configured to match the license plate information of the current target point against the license plate information of each target point to be tracked;
a first tracking unit, configured to: when the license plate information of the current target point matches the license plate information of one target point to be tracked, determine that the current target point is said target point to be tracked and update the tracking list information of said target point to be tracked; and, when the license plate information of the current target point matches the license plate information of none of the target points to be tracked, determine that the current target point is a new target point to be tracked and establish tracking list information for the new target point, wherein each item of tracking list information comprises: the position information of the corresponding target point to be tracked in each video frame and a license plate character identifier.
8. The device according to claim 7, characterized in that
the matching unit is specifically configured to determine the distance between the current target point and each target point to be tracked according to the position information of the current target point in the current video frame and the position information of each target point to be tracked in the previous video frame; if the minimum of the distances between the current target point and each target point to be tracked is less than a first threshold, determine that the license plate information of the current target point matches the license plate information of the first target point to be tracked corresponding to the minimum distance; otherwise, match the character identifier information of the current target point against the character identifier information of each target point to be tracked, and determine from the matching result whether the license plate information of the current target point matches the license plate information of one target point to be tracked.
9. The device according to claim 8, characterized in that
the matching unit is further specifically configured to: when the second-smallest of the distances between the current target point and each target point to be tracked is less than a second threshold, and the number of identical characters between the license plate character identifier of the current target point and that of the second target to be tracked corresponding to the second-smallest distance exceeds a set number, determine that the license plate information of the current target point matches the license plate information of the second target point to be tracked; and, when the second-smallest distance is not less than the second threshold, or the number of identical characters between the license plate character identifier of the current target point and that of the second target to be tracked corresponding to the second-smallest distance does not exceed the set number, determine that the license plate information of the current target point matches the license plate information of none of the target points to be tracked.
10. The device according to claim 7, characterized by further comprising:
a second tracking unit, configured to: when a specified target point to be tracked is not detected in the detection area of the current video frame, obtain, from the tracking list information of the undetected target point to be tracked, the position information of the undetected target point in at least three video frames; determine a prediction area in the current video frame according to the at least three pieces of position information; perform template matching on license plates in the prediction area; obtain the minimum of the mean pixel gray differences of the target regions obtained during template matching; and, when the minimum mean is less than a third threshold, determine the target region corresponding to the minimum mean as the undetected target point to be tracked and update the tracking list information of the undetected target point to be tracked.
11. The device according to claim 10, characterized in that
the second tracking unit is further configured to: when the minimum mean is greater than or equal to the third threshold, perform coarse localization of the license plate of the undetected target point to be tracked within the prediction area, and, when coarse localization succeeds, determine the coarsely located license plate as the undetected target point to be tracked and update the tracking list information of the undetected target point to be tracked.
12. The device according to claim 7, characterized in that each item of tracking list information further comprises: the frame number and storage location information of each video frame; and the device further comprises:
a capture unit, configured to: when a specified target to be tracked satisfies a set capture condition, look up the minimum video frame number in the tracking list information of the specified target to be tracked, and determine the storage location information corresponding to the minimum video frame number; and, according to the storage location information, extract the corresponding video frame from an image buffer and determine the extracted video frame as a captured image.
PCT/CN2011/081782 WO2013053159A1 (zh) 2011-10-09 2011-11-04 Vehicle tracking method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201110302716.3 2011-10-09
CN201110302716.3A CN102509457B (zh) 2011-10-09 2011-10-09 Vehicle tracking method and device

Publications (1)

Publication Number Publication Date
WO2013053159A1 true WO2013053159A1 (zh) 2013-04-18

Family

ID=46221533

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/081782 WO2013053159A1 (zh) 2011-10-09 2011-11-04 Vehicle tracking method and device

Country Status (2)

Country Link
CN (1) CN102509457B (zh)
WO (1) WO2013053159A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105809974A (zh) * 2016-05-25 2016-07-27 成都联众智科技有限公司 Automatic vehicle information recognition system
CN106251633A (zh) * 2016-08-09 2016-12-21 成都联众智科技有限公司 Automatic license plate recognition and tracking system
CN109117702A (zh) * 2018-06-12 2019-01-01 深圳中兴网信科技有限公司 Method and system for detecting, tracking and counting target vehicles

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103927508B (zh) * 2013-01-11 2017-03-22 浙江大华技术股份有限公司 Target vehicle tracking method and device
CN103226812A (zh) * 2013-03-19 2013-07-31 苏州橙果信息科技有限公司 Texture filtering method based on edge binary image
CN103606280B (zh) * 2013-11-14 2016-02-03 深圳市捷顺科技实业股份有限公司 Information recognition method, device and system
CN105632175B (zh) * 2016-01-08 2019-03-29 上海微锐智能科技有限公司 Vehicle behavior analysis method and system
CN105654733B (zh) * 2016-03-08 2019-05-21 新智认知数据服务有限公司 Front and rear license plate recognition method and device based on video detection
CN106652445B (zh) * 2016-11-15 2019-08-23 成都通甲优博科技有限责任公司 Road traffic accident discrimination method and device
CN108986472B (zh) * 2017-05-31 2020-10-30 杭州海康威视数字技术股份有限公司 U-turn vehicle monitoring method and device
CN107529665A (zh) * 2017-07-06 2018-01-02 新华三技术有限公司 Vehicle tracking method and device
CN109426252B (zh) * 2017-08-29 2021-09-21 上海汽车集团股份有限公司 Vehicle tracking method and device
CN110163908A (zh) * 2018-02-12 2019-08-23 北京宝沃汽车有限公司 Method, device and storage medium for finding a target object
CN108347488A (zh) * 2018-02-13 2018-07-31 山东顺国电子科技有限公司 Vehicle management method, device and server based on the BeiDou electronic map
CN108538062B (zh) * 2018-05-30 2020-09-15 杭州天铂红外光电技术有限公司 Method for detecting vehicle congestion
CN110610118A (zh) * 2018-06-15 2019-12-24 杭州海康威视数字技术股份有限公司 Traffic parameter collection method and device
CN110619254B (zh) * 2018-06-19 2023-04-18 海信集团有限公司 Target tracking method, device and terminal based on a disparity map
CN108922175B (zh) * 2018-06-22 2021-10-01 大连理工大学 Method and device for recording violations of multiple motor vehicles crossing a solid line
CN109063574B (zh) * 2018-07-05 2021-04-23 顺丰科技有限公司 Bounding-box prediction method, system and device based on deep neural network detection
CN109063740A (zh) * 2018-07-05 2018-12-21 高镜尧 Detection model construction and detection method and device for key targets in ultrasound images
CN109118519A (zh) * 2018-07-26 2019-01-01 北京纵目安驰智能科技有限公司 Instance-segmentation-based target Re-ID method, system, terminal and storage medium
CN109446926A (zh) * 2018-10-09 2019-03-08 深兰科技(上海)有限公司 Traffic monitoring method and device, electronic device and storage medium
CN111243281A (zh) * 2018-11-09 2020-06-05 杭州海康威视系统技术有限公司 Road multi-video joint detection system and detection method
CN109709953A (zh) * 2018-12-21 2019-05-03 北京智行者科技有限公司 Vehicle following method in road cleaning operations
CN109993081A (zh) * 2019-03-20 2019-07-09 浙江农林大学暨阳学院 Traffic flow statistics method based on road video and license plate detection
CN110021172A (zh) * 2019-05-06 2019-07-16 北京英泰智科技股份有限公司 Method and system for collecting all-element vehicle features
CN111932901B (zh) * 2019-05-13 2022-08-09 斑马智行网络(香港)有限公司 Road vehicle tracking and detection device, method and storage medium
CN110503662A (zh) * 2019-07-09 2019-11-26 科大讯飞(苏州)科技有限公司 Tracking method and related product
CN111784224A (zh) * 2020-03-26 2020-10-16 北京京东乾石科技有限公司 Object tracking method and device, control platform and storage medium
CN112686252A (zh) * 2020-12-28 2021-04-20 中国联合网络通信集团有限公司 License plate detection method and device
CN115331469A (zh) * 2022-08-15 2022-11-11 北京图盟科技有限公司 Method, device and equipment for online restoration of vehicle trajectories

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010010540A1 (en) * 2000-01-31 2001-08-02 Yazaki Corporation Environment monitoring apparatus for vehicle
JP2006059252A (ja) * 2004-08-23 2006-03-02 Denso Corp Motion detection method and device, program, and vehicle monitoring system
CN1909012A (zh) * 2005-08-05 2007-02-07 同济大学 Video image processing method and system for real-time traffic information collection
WO2008088409A2 (en) * 2006-12-19 2008-07-24 Indiana University Research & Technology Corporation Real-time dynamic content based vehicle tracking, traffic monitoring, and classification system
CN101247479A (zh) * 2008-03-26 2008-08-20 北京中星微电子有限公司 Automatic exposure method based on a target region in an image
CN101556697A (zh) * 2008-04-10 2009-10-14 上海宝康电子控制工程有限公司 Moving target tracking method and system based on fast feature points
CN101383003A (zh) * 2008-10-31 2009-03-11 江西赣粤高速公路股份有限公司 Real-time accurate vehicle license plate recognition method
US20100208986A1 (en) * 2009-02-18 2010-08-19 Wesley Kenneth Cobb Adaptive update of background pixel thresholds using sudden illumination change detection
CN101727748A (zh) * 2009-11-30 2010-06-09 北京中星微电子有限公司 Vehicle monitoring method, system and device based on tail light detection
CN102074113A (zh) * 2010-09-17 2011-05-25 浙江大华技术股份有限公司 Video-based license plate recognition and vehicle speed measurement method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1801181A (zh) * 2006-01-06 2006-07-12 华南理工大学 Robot for automatically recognizing faces and license plates
CN101373517B (zh) * 2007-08-22 2011-03-16 北京万集科技有限责任公司 License plate recognition method and system
CN102194132B (zh) * 2011-04-07 2012-11-28 国通道路交通管理工程技术研究中心有限公司 Accompanying-vehicle detection and recognition system and method


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105809974A (zh) * 2016-05-25 2016-07-27 成都联众智科技有限公司 Automatic vehicle information recognition system
CN106251633A (zh) * 2016-08-09 2016-12-21 成都联众智科技有限公司 Automatic license plate recognition and tracking system
CN109117702A (zh) * 2018-06-12 2019-01-01 深圳中兴网信科技有限公司 Method and system for detecting, tracking and counting target vehicles
CN109117702B (zh) * 2018-06-12 2022-01-25 深圳中兴网信科技有限公司 Method and system for detecting, tracking and counting target vehicles

Also Published As

Publication number Publication date
CN102509457B (zh) 2014-03-26
CN102509457A (zh) 2012-06-20

Similar Documents

Publication Publication Date Title
WO2013053159A1 (zh) Vehicle tracking method and device
US8184859B2 (en) Road marking recognition apparatus and method
TWI425454B (zh) 行車路徑重建方法、系統及電腦程式產品
US8908915B2 (en) Devices and methods for tracking moving objects
JP2020519989A (ja) ターゲット識別方法、装置、記憶媒体および電子機器
KR20180036753A (ko) 레이저 포인트 클라우드 기반의 도시 도로 인식 방법, 장치, 저장 매체 및 기기
JP2015514278A (ja) マルチキュー・オブジェクトの検出および分析のための方法、システム、製品、およびコンピュータ・プログラム(マルチキュー・オブジェクトの検出および分析)
CN103824070A (zh) 一种基于计算机视觉的快速行人检测方法
CN104517275A (zh) 对象检测方法和系统
KR101678004B1 (ko) 노드-링크 기반 카메라 네트워크 통합 감시 시스템 및 감시 방법
CN111666821B (zh) 人员聚集的检测方法、装置及设备
JP5931662B2 (ja) 道路状況監視装置、及び道路状況監視方法
CN113505638A (zh) 车流量的监测方法、监测装置及计算机可读存储介质
CN111898491A (zh) 一种车辆逆向行驶的识别方法、装置及电子设备
CN114037966A (zh) 高精地图特征提取方法、装置、介质及电子设备
CN112149471B (zh) 一种基于语义点云的回环检测方法及装置
CN111079621A (zh) 检测对象的方法、装置、电子设备和存储介质
CN112347817B (zh) 一种视频目标检测与跟踪方法及装置
Fernández-Rodríguez et al. Automated detection of vehicles with anomalous trajectories in traffic surveillance videos
JP2020194489A (ja) 異常検出方法、異常検出装置、及び異常検出システム
WO2024098992A1 (zh) 倒车检测方法及装置
CN112562315A (zh) 一种获取车流信息的方法、终端及存储介质
CN116311166A (zh) 交通障碍物识别方法、装置及电子设备
US9183448B2 (en) Approaching-object detector, approaching object detecting method, and recording medium storing its program
CN115330841A (zh) 基于雷达图的抛洒物检测方法、装置、设备和介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11873937

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11873937

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC
