AU2019100914A4 - Method for identifying an intersection violation video based on camera cooperative relay - Google Patents


Info

Publication number
AU2019100914A4
Authority
AU
Australia
Prior art keywords
lane
camera
image
vehicle
target vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2019100914A
Inventor
Shanmao Gu
Wencheng WANG
Xiaojin Wu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weifang University
Original Assignee
Weifang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weifang University filed Critical Weifang University
Priority to AU2019100914A priority Critical patent/AU2019100914A4/en
Application granted granted Critical
Publication of AU2019100914A4 publication Critical patent/AU2019100914A4/en
Ceased legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/02 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
    • G01S5/0294 Trajectory determination or predictive filtering, e.g. target tracking or Kalman filtering
    • G06T3/14
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175 Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/188 Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625 License plates
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/052 Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed
    • G08G1/054 Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed photographing overspeeding vehicles
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03H IMPEDANCE NETWORKS, e.g. RESONANT CIRCUITS; RESONATORS
    • H03H17/00 Networks using digital techniques
    • H03H17/02 Frequency selective networks
    • H03H17/0248 Filters characterised by a particular frequency response or filtering method
    • H03H17/0255 Filters based on statistics
    • H03H17/0257 KALMAN filters

Abstract

The present invention relates to a method for identifying an intersection violation video based on camera cooperative relay, comprising: step S1: demarcating a lane line and identifying a lane indication sign; step S2: detecting a target vehicle according to an image captured by a first camera to determine the lane in which the target vehicle is located, and tracking the target vehicle according to the image captured by the first camera to obtain a vehicle running trajectory; and step S3: identifying whether the target vehicle is in illegal lane change according to the lane in which the target vehicle is located and the vehicle running trajectory. By analyzing and identifying the video captured by the camera, the present invention can simultaneously detect the behaviors of a plurality of motor vehicles occurring at intersections, such as illegal line touch, illegal lane change, retrograde driving, overspeed, and running a red light, and is applicable to intersections with long solid-line areas. The method is simple, practical, and convenient to construct, and can be implemented by directly modifying the original system, with strong expandability.

[FIG: flowchart showing the zebra crossing, the driving direction, and steps S1 (demarcating a lane line and identifying a lane indication sign), S2 (detecting a target vehicle according to an image captured by a first camera to determine the lane in which the target vehicle is located) and S3 (identifying whether the target vehicle is in illegal lane change according to the lane in which the target vehicle is located and the vehicle running trajectory)]

Description

2019100914 16 Aug 2019
METHOD FOR IDENTIFYING AN INTERSECTION VIOLATION
VIDEO BASED ON CAMERA COOPERATIVE RELAY
TECHNICAL FIELD
[0001] The present invention relates to the field of image processing technologies, and in particular, to a method for identifying an intersection violation video based on camera cooperative relay.
BACKGROUND
[0002] In recent years, with the development of the national economy and the acceleration of urbanization, automobiles have entered thousands of households. Travel by motor vehicle brings convenience and speed, but has also led to an increase in motor vehicle violations. Especially at intersections, vehicles change lanes at random and fail to follow traffic signs from time to time, and there are constant frictions between drivers, which not only disrupt traffic order and cause traffic jams, but also seriously threaten people's lives and property. Therefore, reducing vehicle violation behaviors through administrative punishment and supervision has become a need of social development.
[0003] At present, the common violation snapshot is achieved by burying an induction coil at the intersection and triggering a camera to take pictures. The method is connected with the traffic light signal control system: when the traffic light is red in a certain direction, if a vehicle runs the red light, the induction coil is triggered and the camera is started to take a snapshot. The method mainly realizes the detection of running a red light, but cannot detect changing lanes at random, crossing a solid line, or reverse driving. Although it is possible to detect the above behaviors by burying coils under the solid line, that method has poor expandability. Because the solid-line area of an intersection is long, it is time-consuming and labor-intensive to excavate the road and bury the coils, and if the road is re-routed, all the previous efforts are wasted. Other methods for detecting traffic violations are also proposed in the prior art, but the existing detecting methods are complicated, difficult, and costly to implement, cannot cope with a plurality of targets violating regulations at the same time, and have difficulty ensuring accuracy.
SUMMARY
[0004] In view of this, the object of the present invention is to overcome the deficiencies of the prior art, and to provide a method for identifying an intersection violation video based on camera cooperative relay.
[0005] To achieve the above object, the present invention adopts the following technical solution: a method for identifying an intersection violation video based on camera cooperative relay, comprising:
[0006] step S1: demarcating a lane line and identifying a lane indication sign;
[0007] step S2: detecting a target vehicle according to an image captured by a first camera to determine the lane in which the target vehicle is located; and
[0008] tracking the target vehicle according to the image captured by the first camera to obtain a vehicle running trajectory; and
[0009] step S3: identifying whether the target vehicle is in illegal lane change according to the lane in which the target vehicle is located and the vehicle running trajectory.
[0010] Optionally, the process of demarcating a lane line in step S1 comprises:
[0011] in the case that there is no vehicle and no pedestrian on the road surface, capturing a road surface image by the first camera, wherein the area where the image is captured is a solid-line area prior to reaching the intersection;
[0012] preprocessing the captured image;
[0013] identifying lane lines on the preprocessed image to segment the lane lines in the image;
[0014] obtaining a region of interest based on the segmented lanes, and cropping the region of interest;
[0015] geometrically transforming the cropped area to obtain an image of parallel lanes with equal width;
[0016] identifying the vehicle travel indication sign on each lane to obtain a lane category;
[0017] for different lane categories, segmenting the image into different regions based on lane lines, and labeling the lane attribute and the lane area coordinate range for each region.
[0018] Optionally, preprocessing the captured image comprises:
[0019] performing a grayscale process on the captured image to obtain a grayscale image; and
[0020] performing a Gaussian smoothing process on the grayscale image.
[0021] Optionally, the process of detecting a target vehicle in step S2 comprises:
[0022] acquiring the difference between two adjacent frame images:

\Delta(n+1) = f_{n+1}(i,j) - f_n(i,j)
\Delta(n-1) = f_n(i,j) - f_{n-1}(i,j)

[0023] performing a binarization process on the above differences to obtain:

R_{n+1}(i,j) = \begin{cases} 1, & \Delta(n+1) \ge T \\ 0, & \Delta(n+1) < T \end{cases}
R_{n-1}(i,j) = \begin{cases} 1, & \Delta(n-1) \ge T \\ 0, & \Delta(n-1) < T \end{cases}

[0025] then performing a logical AND operation to obtain a final foreground image, i.e.,

R_k(i,j) = \begin{cases} 1, & R_{n+1}(i,j) \wedge R_{n-1}(i,j) = 1 \\ 0, & \text{otherwise} \end{cases}

[0027] after performing a hole filling process according to the grayscale distribution of the target Rk, forming a convex hull according to the boundary of the set of all the pixel points of the target Rk;
[0028] obtaining and saving the centroid of the target Rk, in which the centroid is obtained by the following formula:

R_{kx} = \frac{\sum_{i=1}^{G} G_i x_i}{\sum_{i=1}^{G} G_i}, \qquad R_{ky} = \frac{\sum_{i=1}^{G} G_i y_i}{\sum_{i=1}^{G} G_i}

[0029] wherein f_{n-1}(i,j), f_n(i,j) and f_{n+1}(i,j) represent the pixel values of the (n-1)th frame, the n-th frame, and the (n+1)th frame at (i, j), respectively; \Delta(n-1) and \Delta(n+1) represent the differences between the two adjacent frame images, respectively; T is a threshold; x_i and y_i represent the coordinates of the target area; and G_i is the weight of the pixel points, where G is the number of the pixel points.
[0030] Optionally, the method further comprises: assigning a unique ID number to the detected target vehicle.
[0031] Optionally, the process of tracking a target vehicle in step S2 comprises:
[0032] step S21: predicting a rough position of the moving target at time k using a Kalman filtering algorithm;
[0033] step S22: finding the real position of the moving target at time k using a mean shift algorithm by obtaining the optimal solution;
[0034] step S23: conveying the real position of the moving target at time k to the Kalman filtering algorithm, optimizing the Kalman filtering algorithm, and obtaining the updated tracking position of the moving target at time k, where k = k+1;
[0035] step S24: repeatedly performing steps S21 to S23 until the end of the image sequence;
[0036] wherein the set of the tracking positions obtained in the step S23 is the running trajectory of the target vehicle.
[0037] Optionally, the process of identifying whether the target vehicle is in illegal lane change in step S3 comprises:
[0038] obtaining a coordinate range of the lane area according to the lane in which the target vehicle is located;
[0039] determining whether the vehicle is in illegal lane change according to the coordinate range of the lane area and the vehicle running trajectory; and
[0040] if the horizontal coordinate value of any point in the vehicle running trajectory is greater than the maximum value of the horizontal coordinate in the lane area, or is smaller than the minimum value of the horizontal coordinate in the lane area, considering the vehicle to be in illegal lane change.
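The lane-change rule of paragraphs [0039] and [0040] reduces to a range test on the horizontal coordinates of the trajectory. A minimal illustrative sketch (not part of the patent disclosure; the function name and the (x, y) trajectory representation are assumptions):

```python
def is_illegal_lane_change(trajectory, lane_x_min, lane_x_max):
    """Return True if any trajectory point leaves the lane's horizontal
    coordinate range, per the rule in [0040].

    trajectory: iterable of (x, y) centre points of the tracked vehicle.
    lane_x_min / lane_x_max: horizontal coordinate range of the lane area.
    """
    for x, _y in trajectory:
        if x > lane_x_max or x < lane_x_min:
            return True
    return False
```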
[0041] Optionally, the method further comprises: performing target vehicle retrograde detection on the captured image, wherein the specific process comprises:
[0042] performing coordinate demarcating on the captured image, wherein the direction of the lane line is set as the vertical axis direction;
[0043] determining the vertical coordinate change trend of the travel position point when the vehicle is running normally; and
[0044] obtaining the target vehicle running trajectory, and if the vertical coordinate change trend in the running trajectory is inconsistent with the vertical coordinate change trend in normal running, considering the target vehicle to have a reverse driving violation behavior.
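The trend comparison of paragraphs [0043] and [0044] can be sketched by comparing the overall sign of the vertical displacement with the expected sign. This is an illustrative fragment, not the patent's implementation; the `normal_trend` encoding (+1 / -1) is an assumption:

```python
def is_retrograde(trajectory, normal_trend):
    """Compare the vertical-coordinate trend of a trajectory with the
    trend expected for normal travel in the demarcated coordinates.

    trajectory: list of (x, y) points in time order.
    normal_trend: +1 if y normally increases along the lane, -1 if it
    normally decreases (an assumption about the demarcated axes).
    """
    if len(trajectory) < 2:
        return False  # not enough points to establish a trend
    dy = trajectory[-1][1] - trajectory[0][1]
    if dy == 0:
        return False
    observed_trend = 1 if dy > 0 else -1
    return observed_trend != normal_trend
```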
[0045] Optionally, the method further comprises: performing target vehicle overspeed detection on the captured image, wherein the specific process comprises:
[0046] obtaining the time taken by the target vehicle to pass the capturing area according to the time Th when the target vehicle first enters the capturing area and the time Ti when the target vehicle finally leaves the capturing area;
[0047] obtaining the speed v with which the target vehicle passes through the capturing area according to the actual road length corresponding to the capturing area:
L
AT [0048] where L is the actual road length corresponding to the capturing area; ΔΤ -Tx-Th ;
[0049] if v is greater than the maximum speed limit of the road segment, considering the target vehicle to be overspeed.
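The overspeed rule v = L / ΔT of paragraphs [0046] to [0049] can be sketched directly (illustrative only; units are assumed to be metres and seconds):

```python
def is_overspeed(t_enter, t_leave, road_length_m, speed_limit_mps):
    """Compute v = L / (Ti - Th) and compare it against the segment's
    maximum speed limit, per [0047]-[0049]."""
    dt = t_leave - t_enter
    if dt <= 0:
        raise ValueError("leave time must be after enter time")
    v = road_length_m / dt
    return v > speed_limit_mps, v
```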
[0050] Optionally, the method further comprises: performing detection of the target vehicle occupying the non-motorized vehicle lane on the captured image, wherein the specific process comprises:
[0051] providing a corresponding coordinate range for the non-motorized vehicle lane area, wherein the set of coordinate points in the coordinate range is represented by Rn;
[0052] if any of the coordinate points in the target vehicle running trajectory belongs to Rn, considering the target vehicle to have occupied the non-motorized vehicle lane.
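The membership test of paragraphs [0051] and [0052] can be sketched with a set of coordinate points standing in for Rn (an illustrative representation; the patent only specifies Rn as a set of coordinate points):

```python
def occupies_non_motorized_lane(trajectory, non_motorized_region):
    """True if any trajectory point falls in Rn, per [0052].

    non_motorized_region: set of (x, y) points representing Rn.
    """
    return any((x, y) in non_motorized_region for x, y in trajectory)
```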
[0053] Optionally, the method further comprises: capturing an image of the intersection area by the second camera, wherein there is a partially overlapping region between the image captured by the second camera and the image captured by the first camera;
[0054] performing consistency processing on the image captured by the first camera and the image captured by the second camera using the overlapping region, wherein the specific process comprises:
[0055] obtaining an image captured by the second camera, and correcting the image to obtain a corrected image;
[0056] acquiring a template obtained by cropping the overlapping region in the image captured by the first camera;
[0057] performing search matching in the corrected image using a template matching method, and obtaining an amplification ratio of the image captured by the first camera with respect to the corrected image; and
[0058] scaling the corrected image according to the amplification ratio, so that the overlapping region in the scaled image is exactly the same as the overlapping region in the image captured by the first camera, to achieve the relay matching between the first camera and the second camera;
[0059] wherein the image captured by the first camera in this process is an image processed by demarcating the lane line.
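The template search and amplification-ratio estimation of paragraphs [0056] to [0058] can be sketched with a brute-force sum-of-squared-differences (SSD) match over a few candidate scales. This is a simplified stand-in, not the patent's method: the SSD criterion, nearest-neighbour rescaling, and the discrete scale set are all assumptions:

```python
import numpy as np

def ssd_match(image, template):
    """Exhaustive SSD template search; returns (best_ssd, (row, col))."""
    H, W = image.shape
    h, w = template.shape
    best = (float("inf"), (0, 0))
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            patch = image[r:r + h, c:c + w]
            ssd = float(np.sum((patch - template) ** 2))
            if ssd < best[0]:
                best = (ssd, (r, c))
    return best

def nn_resize(img, scale):
    """Nearest-neighbour rescale (stand-in for a proper resampler)."""
    H, W = img.shape
    nh, nw = max(1, int(round(H * scale))), max(1, int(round(W * scale)))
    rows = (np.arange(nh) / scale).astype(int).clip(0, H - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, W - 1)
    return img[np.ix_(rows, cols)]

def estimate_amplification(corrected, template, scales):
    """Try candidate ratios; the scale at which the resized corrected
    C2 image best contains the C1 template approximates the
    amplification ratio of the C1 image relative to the corrected image."""
    best_scale, best_ssd = None, float("inf")
    for s in scales:
        resized = nn_resize(corrected, s)
        if resized.shape[0] < template.shape[0] or resized.shape[1] < template.shape[1]:
            continue
        ssd, _ = ssd_match(resized, template)
        if ssd < best_ssd:
            best_ssd, best_scale = ssd, s
    return best_scale
```

In practice a library matcher (e.g. normalized cross-correlation) would replace the brute-force SSD loop.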
[0060] Optionally, after the relay matching between the first camera and the second camera is achieved, the method further comprises: performing the relay tracking of the same target by the first camera and the second camera, wherein the specific process comprises:
[0061] obtaining a viewing field boundary line L of the first camera and the second camera;
[0062] wherein the expression of L is: Ax + By + C = 0;
[0063] assuming P = Ax_p + By_p + C, and obtaining the coordinates (x_p, y_p) of the tracked target vehicle;
[0065] if the value of P changes from negative to positive or from positive to negative, indicating that the target vehicle has a viewing field switching in the frame, wherein, in the same lane area, the target point among the target center points closest to the viewing field boundary line is the same tracked target;
[0066] tracking the target to achieve cooperative relay tracking of the same target by the first camera and the second camera.
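The sign-change test and nearest-to-boundary matching described above can be sketched as follows (illustrative fragment; the point-to-line distance normalization is a standard formula, not quoted from the patent):

```python
def crossed_boundary(prev_pt, curr_pt, A, B, C):
    """Sign change of P = A*x + B*y + C between consecutive frames
    indicates the target crossed the C1/C2 viewing-field boundary."""
    p_prev = A * prev_pt[0] + B * prev_pt[1] + C
    p_curr = A * curr_pt[0] + B * curr_pt[1] + C
    return p_prev * p_curr < 0

def match_relay_target(candidates, A, B, C):
    """Among candidate centre points in the same lane area, the one
    closest to the boundary line is taken as the same tracked target."""
    denom = (A * A + B * B) ** 0.5
    return min(candidates, key=lambda p: abs(A * p[0] + B * p[1] + C) / denom)
```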
[0067] Optionally, the method further comprises: performing running-red-light detection of the target vehicle on the captured image, wherein the specific process comprises:
[0068] delimiting an area in the image captured by the second camera as a violation area;
[0069] in the case that the traffic light in the driving direction of the vehicle is red, if a vehicle enters the violation area, determining that the vehicle has a violation of running a red light.
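The red-light rule of paragraphs [0068] and [0069] is a point-in-region test gated by the light state. Illustrative sketch only; the rectangular shape of the violation area is an assumption (the patent only says an area is delimited):

```python
def ran_red_light(vehicle_pt, violation_area, light_is_red):
    """True if the light is red and the vehicle point lies inside the
    delimited violation area (modelled here as a rectangle)."""
    if not light_is_red:
        return False
    x, y = vehicle_pt
    x_min, y_min, x_max, y_max = violation_area
    return x_min <= x <= x_max and y_min <= y <= y_max
```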
[0070] Optionally, the method further comprises: identifying the identity of the illegal vehicle in the image, wherein the specific process comprises:
[0071] cropping a license plate area screenshot of the illegal vehicle in the image captured by the second camera;
[0072] identifying the license plate number based on the license plate area screenshot; and
[0073] sending the license plate number to the data processing center for identification.
[0074] The present invention adopts the above technical solution. The method for identifying an intersection violation video based on camera cooperative relay comprises: step S1: demarcating a lane line and identifying a lane indication sign; step S2: detecting a target vehicle according to an image captured by a first camera to determine the lane in which the target vehicle is located, and tracking the target vehicle according to the image captured by the first camera to obtain a vehicle running trajectory; and step S3: identifying whether the target vehicle is in illegal lane change according to the lane in which the target vehicle is located and the vehicle running trajectory. By analyzing and identifying the video captured by the camera, the present invention can simultaneously detect the behaviors of a plurality of motor vehicles occurring at intersections, such as illegal line touch, illegal lane change, retrograde driving, overspeed, and running a red light, and is applicable to intersections with long solid-line areas. The method is simple, practical, and convenient to construct, and can be implemented by directly modifying the original system, with strong expandability.
BRIEF DESCRIPTION OF THE DRAWINGS
[0075] In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or in the description of the prior art will be briefly described below. It is apparent that the drawings in the following description are merely a part of the embodiments of the present invention, and other drawings can be obtained from these drawings by those skilled in the art without any creative work.
[0076] FIG. 1 is a schematic diagram illustrating the distribution of cameras in an embodiment according to the present invention.
[0077] FIG. 2 is a schematic flow diagram illustrating a method for identifying a road violation video in an embodiment according to the present invention.
[0078] FIG. 3 is a schematic diagram illustrating identifying lanes in an embodiment according to the present invention.
[0079] FIG. 4 is a schematic diagram illustrating a trapezoidal mask in an embodiment according to the present invention.
[0080] FIG. 5 is a schematic diagram illustrating geometric transformation of a plan view of a lane in an embodiment according to the present invention.
[0081] FIG. 6 is a schematic diagram illustrating digitally labeling a lane in an embodiment according to the present invention.
[0082] FIG. 7 is a schematic flow diagram illustrating tracking a target vehicle in an embodiment according to the present invention.
[0083] FIG. 8 is a trajectory diagram illustrating target motion in an embodiment according to the present invention.
[0084] FIG. 9 is a process flow diagram illustrating image relay matching of a first camera and a second camera in an embodiment according to the present invention.
[0085] FIG. 10 is a schematic diagram illustrating trapezoidal correction of a still image captured by a second camera in an embodiment according to the present invention.
DESCRIPTION OF THE EMBODIMENTS
[0086] In order to make the objects, technical solutions and advantages of the present invention more clear, the technical solutions of the present invention will be described in detail below. It is apparent that the described embodiments are merely a part of the embodiments of the present invention, rather than all of the embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without any creative work fall within the scope of the present invention.
[0087] In an embodiment of the present invention, as shown in FIG. 1, two cameras are provided in the same direction. One of the cameras is provided at the front side of the intersection, and the other is provided at the rear side of the intersection. The specific implementation is as follows (for convenience of description, a one-way road map is shown in the drawings; the method for a two-way road is the same): the capturing area of the first camera C1 in front of the intersection is as indicated by a broken line, and the capturing area of the second camera C2 behind the intersection is as indicated by a solid line. The broken-line area and the solid-line area are adjacent to each other with a slight overlap. With the zebra crossing as the dividing line, the zebra crossing area belongs to the image capturing area of C2.
[0088] The video captured by C1 is mainly used for lane line and indication sign identification, vehicle detection and tracking, illegal lane change detection, overspeed detection, retrograde detection, and detection of illegal occupation of non-motorized vehicle lanes.
[0089] The video captured by C2 is mainly used for relay tracking, detection of running a red light, and vehicle identification.
[0090] As shown in FIG. 2, the method for identifying an intersection violation video based on camera cooperative relay comprises:
[0091] step S1: demarcating a lane line and identifying a lane indication sign.
[0092] For the video captured by C1, the method can automatically identify the lane line and the travel indication sign as follows:
[0093] (1) Preprocessing
[0094] The image captured by C1 is first grayscale processed. In order to reduce noise interference, Gaussian filtering smoothing is performed using a template with a size of 5*5. The formula is expressed as:
[0095] g(x, y) = f(x, y) * Tmp
[0096] The used kernel function template Tmp is:
[0097]

Tmp = \frac{1}{273} \begin{bmatrix} 1 & 4 & 7 & 4 & 1 \\ 4 & 16 & 26 & 16 & 4 \\ 7 & 26 & 41 & 26 & 7 \\ 4 & 16 & 26 & 16 & 4 \\ 1 & 4 & 7 & 4 & 1 \end{bmatrix}

[0098] where f(x, y) is the gray value of the (x, y) point in the image, g(x, y) is the value after the point has been Gaussian filtered, and * is the convolution operation.
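The smoothing step g(x, y) = f(x, y) * Tmp can be applied with plain NumPy, using the classic 5x5 Gaussian template whose integer weights sum to 273 (consistent with the 1/273 factor in the source). Illustrative sketch only; zero padding at the borders is an implementation choice, not specified by the patent:

```python
import numpy as np

# The classic 5x5 Gaussian template; the weights sum to 273.
TMP = np.array([[1,  4,  7,  4, 1],
                [4, 16, 26, 16, 4],
                [7, 26, 41, 26, 7],
                [4, 16, 26, 16, 4],
                [1,  4,  7,  4, 1]], dtype=float) / 273.0

def gaussian_smooth(gray):
    """Convolve a grayscale image with TMP (the kernel is symmetric,
    so correlation and convolution coincide). Zero padding at borders."""
    H, W = gray.shape
    padded = np.pad(gray.astype(float), 2, mode="constant")
    out = np.zeros((H, W))
    for dy in range(5):
        for dx in range(5):
            out += TMP[dy, dx] * padded[dy:dy + H, dx:dx + W]
    return out
```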
[0099] (2) Identifying lanes
[00100] This process identifies the lane lines in the image. As shown in FIG. 3, for the image captured by C1, a searching area is set (assuming that the width is W, the height is L, the lower-left coordinate is (0, 0), and the entire image is horizontally divided into 8 parts and vertically divided into 5 parts, the coordinates of the lower-left corner of the searching area are (1/8W+1, 1/5L+1), and the coordinates of the upper-right corner are (7/8W, 2/5L)), an image comprising only the lane lines and the road surface is obtained, and color analysis is performed on the area to obtain potential lane line color and road color information. Then, with the lane line color and the road surface color as the starting information of the cluster, automatic clustering is performed, and finally the lane lines in the image are segmented.
[00101] (3) Cropping the area of interest
[00102] In order to reduce the interference of objects outside the lane and reduce the computational workload, the region of interest is obtained based on the segmented lanes. As shown in FIG. 4, after identifying the lanes, a trapezoidal mask is formed using the outermost lanes; the mask is slightly larger than the lane area, and the pixel value outside the lane is set to 0.
[00103] (4) Geometric transformation
[00104] In order to avoid the influence of dimensional changes during the subsequent vehicle tracking process, in combination with the prior knowledge that the lane lines are parallel and of equal width, a geometric transformation is used so that the image has the same size at all positions captured by the camera projection. The mapping matrix used for the geometric transformation is calculated according to the slope of the trapezoid's waists, so as to finally make the lanes parallel and equidistant. As shown in FIG. 5, the trapezoid ABCD can be geometrically transformed and mapped into a rectangle A'B'C'D'.
[00105] (5) Lane sign identification
[00106] For each lane, a vehicle travel indication sign is sprayed on the road surface, and a single-layer BP neural network classifier is designed for identification. The identified categories mainly comprise: left turn, right turn, straight run, straight run plus right turn, straight run plus left turn, straight run plus right turn and left turn, and non-motorized lanes. Due to the limited number of lane types, the classifier based on BP neural network training can achieve high speed and a high identification rate using a small number of samples.
[00107] (6) Digital labeling
[00108] For different lane categories, the lanes are divided into different areas based on the lane lines, and the same region contains the same attributes, which are labeled with A, B, C, D, E, F, and G, respectively. As shown in FIG. 6, assuming that there are three lanes, and (x, y) respectively represent a horizontal coordinate and a vertical coordinate, the lane area coordinates of attribute A are: {x1 < x < x2, 1 < y < H}; the lane area coordinates of attribute B are: {x3 < x < x4, 1 < y < H}; and the lane area coordinates of attribute C are: {x5 < x < x6, 1 < y < H}.
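The digital labeling of [00108] amounts to a lookup table from lane attribute to coordinate range. An illustrative sketch (the dict representation and function names are assumptions, not the patent's data structure):

```python
def label_lanes(boundaries, height):
    """Build {attribute: coordinate range} from per-lane x boundaries,
    e.g. [(x1, x2), (x3, x4), (x5, x6)] as in the FIG. 6 example."""
    attrs = "ABCDEFG"
    return {attrs[i]: {"x": (lo, hi), "y": (1, height)}
            for i, (lo, hi) in enumerate(boundaries)}

def lane_of(point, lane_table):
    """Return the lane attribute whose area contains the point, or None."""
    x, y = point
    for attr, rng in lane_table.items():
        if rng["x"][0] <= x <= rng["x"][1] and rng["y"][0] <= y <= rng["y"][1]:
            return attr
    return None
```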
[00109] The above steps are implemented in the system initialization phase, and the video used for demarcation is set to be captured in the case that there is no vehicle and no pedestrian on the road surface.
[00110] step S2: a target vehicle is detected according to an image captured by the first camera to determine the lane in which the target vehicle is located.
[00111] In order to detect a vehicle entering the field of view of the first camera C1, a three-frame difference method is adopted: a difference image is obtained by subtracting the pixel values at corresponding positions in two adjacent frames. An area in which the pixel values are small in the difference image may be regarded as background, and an area in which the pixel values are large may be regarded as a target. Then a threshold is set to perform binarization processing. The specific principle is described as follows.
[00112] Assuming that the pixel values at (i, j) of the n-th frame, the (n+l)th frame and the (n-l)th frame are , r«(w')and , respectively, the threshold is T, and the difference between the two adjacent frame images is A(n),
M(H + l) = r„+1(/,y)-r„ (/,7) [00113] ^(/7-1) = ^(/,7)-^(/,7) [00114] [00115] [00116] [00117] [00118] performing a binarization process on the above difference to obtain:
' Γ1,Δ(η + 1)>7
J |0,A(n + l)<7 ' . . Γ1,Δ(η-1)>7 then performing logical AND operation to obtain a final foreground image, i.e., jy _ [j 1 *»+i k [ 0 otherwise after performing a hole filling process according to the grayscale distribution of the target Rk, forming a convex shell according to the boundary of the set of all the pixel points of
2019100914 16 Aug 2019 the target Rk, obtaining and saving the centroid of the target Rk, in which the centroid is obtained in the following formula:
R_kx = Σ_i G_i x_i / Σ_i G_i,  R_ky = Σ_i G_i y_i / Σ_i G_i
[00119] where x_i and y_i represent the coordinates of the target area, and G_i is the weight of the pixel points, where G is the number of the pixel points.
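The three-frame difference and centroid computation described above can be sketched as follows; this sketch uses absolute differences before thresholding (a common variant of the signed differences written above) and treats every foreground pixel weight G_i as 1 unless a weight map is supplied:

```python
import numpy as np

def three_frame_foreground(prev, cur, nxt, T):
    """Three-frame difference of paragraphs [00112]-[00118]: binarize the
    two adjacent-frame differences against threshold T and AND them."""
    d1 = np.abs(nxt.astype(int) - cur.astype(int)) > T   # Δ(n+1) > T
    d2 = np.abs(cur.astype(int) - prev.astype(int)) > T  # Δ(n-1) > T
    return d1 & d2  # final foreground mask R_k

def centroid(mask, weights=None):
    """Weighted centroid (R_kx, R_ky) of the foreground pixels of R_k."""
    ys, xs = np.nonzero(mask)
    g = np.ones(len(xs)) if weights is None else weights[ys, xs]
    return (xs * g).sum() / g.sum(), (ys * g).sum() / g.sum()
```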
[00120] A unique ID number is assigned to the detected target vehicle for a period of time. The rule is as follows: the number is sequentially increased according to the time when the target is detected; the initial lane where the vehicle is located is encoded into the ID number according to the labeled lane code; and the number is reset to zero after the counting time exceeds 24 hours. For example, if the current target number is 200 and the initial lane is C, the ID of the target is 200-C.
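A minimal sketch of the ID-numbering rule of paragraph [00120] (the 24-hour counter reset is omitted for brevity):

```python
from itertools import count

def make_id_assigner():
    """Sequential target numbering combined with the initial lane code,
    as in paragraph [00120]: the 200th target in lane C gets '200-C'."""
    counter = count(1)
    def assign(lane_label):
        n = next(counter)  # increases with each newly detected target
        return f"{n}-{lane_label}"
    return assign
```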
[00121] Further, the target vehicle is tracked according to the image captured by the first camera C1 to obtain the vehicle running trajectory;
[00122] In order to improve the robustness of vehicle tracking and to avoid loss of tracking due to the proximity of the vehicle color to the background color, the present invention employs a combined tracking algorithm of mean shift and Kalman filtering. As shown in FIG. 7, the process of tracking the target vehicle comprises:
[00123] step S20: initializing the target window and parameters;
[00124] step S21: predicting a rough position of the moving target at k time using a Kalman filtering algorithm;
[00125] step S22: finding the real position of the moving target at k time using a mean shift algorithm by obtaining the optimal solution;
[00126] step S23: conveying the real position of the moving target at k time to the Kalman filtering algorithm, optimizing the Kalman filtering algorithm, and obtaining the updated tracking position of the moving target at k time, where k=k+1;
[00127] step S24: repeatedly performing steps S21 to S23 until the end of the image sequence.
[00128] Finally, within the field of view of the camera C1, the moving trajectory of each vehicle target center can be obtained. Assuming that the number of coordinate points obtained is K, the set of trajectory coordinates of the target Oi is {Oi(x), Oi(y)}.
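The S21 to S23 predict/refine/correct cycle can be sketched with a constant-velocity Kalman filter; the mean shift search is represented here by a generic `measure` callable, an assumption standing in for the histogram-based mean shift refinement (in practice OpenCV's meanShift would fill this role):

```python
import numpy as np

def track(measure, x0, n_steps, dt=1.0):
    """Kalman predict (S21) -> mean-shift refine (S22) -> Kalman correct (S23),
    repeated until the end of the sequence (S24). `measure(pred)` returns the
    refined (x, y) position given the predicted position."""
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1]], float)   # constant-velocity model
    Hm = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)  # observe position only
    Q, R = np.eye(4) * 1e-2, np.eye(2) * 1e-1           # assumed noise levels
    x, P = np.array([x0[0], x0[1], 0, 0], float), np.eye(4)
    traj = []
    for _ in range(n_steps):
        x, P = F @ x, F @ P @ F.T + Q            # S21: rough predicted position
        z = np.asarray(measure(x[:2]))           # S22: "real" refined position
        S = Hm @ P @ Hm.T + R
        K = P @ Hm.T @ np.linalg.inv(S)          # S23: correct the filter
        x = x + K @ (z - Hm @ x)
        P = (np.eye(4) - K @ Hm) @ P
        traj.append(tuple(x[:2]))
    return traj
```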
[00129] step S3: it is identified whether the target vehicle is in illegal lane change according to the lane in which the target vehicle is located and the vehicle running trajectory.
[00130] The specific process comprises:
[00131] according to the coordinate range of the divided lane area and the vehicle running trajectory, if a horizontal coordinate point in the trajectory exceeds the lane boundary, the vehicle is considered to be in illegal lane change or to have crossed the solid line. In order to ensure a certain redundancy b, assuming that the horizontal coordinate of the lane center is Xc, the determining rule is set as:
O = normal, if |Oi(x) - Xc| ≤ b; violation, otherwise
[00132] As shown in FIG. 8, for the lane A, the center is Xc = (x2 - x1)/2, and the deviation redundancy is b = (x2 - x1)/4. If all the horizontal coordinates in the centroid running trajectory of the vehicle are in the range between (x2 - x1)/4 and 3(x2 - x1)/4, the vehicle is running normally. Otherwise, the vehicle is considered to be in illegal lane change or to have crossed the solid line.
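The redundancy rule of paragraphs [00131] and [00132] can be sketched as follows; the formulas for Xc and b follow the document, which measures coordinates from the lane's left boundary:

```python
def lane_change_check(xs, x1, x2):
    """Apply the redundancy rule for a lane spanning x1..x2: the centroid
    x-coordinates must stay within b = (x2 - x1)/4 of the center
    Xc = (x2 - x1)/2, i.e. within [(x2-x1)/4, 3*(x2-x1)/4]."""
    xc = (x2 - x1) / 2
    b = (x2 - x1) / 4
    return "normal" if all(abs(x - xc) <= b for x in xs) else "violation"
```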
[00133] Further, the method further comprises: performing target vehicle retrograde detection on the captured image, wherein the specific process is as follows.
[00134] For the capturing area of the camera C1, the lower left corner is the coordinate starting point (1, 1), the horizontal coordinate gradually increases from left to right, and the vertical coordinate gradually increases from bottom to top. According to the vehicle running trajectory, if the vertical coordinate value in the trajectory gradually increases, it is considered that there is reverse driving. The method is expressed as follows:
O = normal, if Oi(y1) > Oi(y2) > ... > Oi(yK); violation, otherwise
[00135] Further, the method further comprises: performing target vehicle overspeed detection on the captured image, wherein the specific process comprises:
[00136] obtaining the time taken by the target vehicle to pass the capturing area according to the time Th when the target vehicle first enters the capturing area and the time Tl when the target vehicle finally leaves the capturing area;
[00137] obtaining the speed v with which the target vehicle passes through the capturing area according to the actual road length corresponding to the capturing area:
v = L / ΔT
[00138] where L is the actual road length corresponding to the capturing area; ΔT = Tl - Th;
[00139] if v is greater than the maximum speed limit of the road segment, the target vehicle is considered to be overspeed.
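The overspeed test of paragraphs [00136] to [00139] reduces to v = L/ΔT compared against the speed limit; the units below (meters, seconds) are assumed for illustration:

```python
def is_overspeed(t_enter, t_leave, road_length_m, speed_limit_mps):
    """v = L / ΔT with ΔT = Tl - Th; overspeed when v exceeds the limit."""
    dt = t_leave - t_enter
    v = road_length_m / dt
    return v > speed_limit_mps
```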
[00140] Further, the method further comprises: performing detection of the target vehicle occupying the non-motorized vehicle lane on the captured image, wherein the specific process is as follows.
[00141] For the case where the motor vehicle occupies the non-motorized vehicle lane, according to the division of the lanes, the non-motorized vehicle lane area forms a set of coordinate points, which is represented by Rn. If a coordinate point of the target vehicle moving trajectory belongs to Rn, it is considered that the non-motorized vehicle lane has been occupied. The determining criteria are:
O = violation, if Oi(x, y) ∈ Rn; normal, otherwise
[00142] Further, as shown in FIG. 1, there is a partially overlapping region between the image captured by the second camera and the image captured by the first camera; consistency processing is performed on the image captured by the first camera and the image captured by the second camera using the overlapping region, achieving relay matching of the first camera and the second camera. As shown in FIG. 9, the specific process comprises:
[00143] obtaining a static image captured by the second camera, and performing geometric correction on the image (as shown in FIG. 10, trapezoidal correction) to obtain a corrected image;
[00144] acquiring a template obtained by cropping the overlapping region in the image captured by the first camera;
[00145] performing search matching in the corrected image using a template matching method, and obtaining an amplification ratio of the image captured by the first camera with respect to the corrected image to obtain a transformation matrix; and
[00146] scaling the corrected image according to the amplification ratio, so that the overlapping region in the scaled image is exactly the same as the overlapping region in the image captured by the first camera, to achieve the relay matching between the first camera and the second camera;
[00147] wherein the image captured by the first camera in this process is an image processed by demarcating the lane line.
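The template matching step can be sketched with a brute-force sum-of-squared-differences search. In practice a library routine such as OpenCV's cv2.matchTemplate would be used; this pure-NumPy stand-in is for illustration only:

```python
import numpy as np

def match_template(image, template):
    """Exhaustive SSD template search: slide the cropped overlap-region
    template over the corrected image and return the top-left (row, col)
    of the best-matching window."""
    H, W = image.shape
    h, w = template.shape
    best, best_pos = None, None
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            ssd = ((image[r:r + h, c:c + w] - template) ** 2).sum()
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```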
[00148] Further, after the relay matching between the first camera and the second camera is achieved, the method further comprises: performing the relay tracking of the same target by the first camera and the second camera, wherein the specific process comprises:
[00149] obtaining a viewing field boundary line L of the first camera and the second camera, assuming that the viewing field boundary line L is in C2;
[00150] wherein the expression of L is: Ax+By+C=0;
[00151] assuming P = Ax + By + C;
[00152] obtaining the coordinates (xp, yp) of the tracked target vehicle;
[00153] defining the discrimination function of target visibility as:
P(xp, yp) > 0: the target is visible in the C2 view field range;
P(xp, yp) = 0: the target is on the view field boundary line;
P(xp, yp) < 0: the target is invisible in the C2 view field range.
[00154] If the value of P changes from negative to positive, it indicates that the target disappears from the view field of C1 and appears in the view field of C2 in the frame. In the same lane area, the target point among the target center points closest to the viewing field boundary line L is the same tracked target. Then, the motor vehicles numbered within the C1 monitoring range carry their information into the C2 monitoring range to complete the target switching. The target is tracked to achieve cooperative relay tracking from the first camera C1 to the second camera C2.
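The discrimination function P = Ax + By + C and the negative-to-positive handover trigger of paragraph [00154] can be sketched as:

```python
def view_field_state(A, B, C, xp, yp):
    """Sign of P = A*xp + B*yp + C relative to the boundary line L:
    positive means visible in C2, zero means on L, negative means not yet
    in the C2 view field."""
    P = A * xp + B * yp + C
    if P > 0:
        return "in_C2"
    return "on_L" if P == 0 else "in_C1"

def crossed_into_c2(P_prev, P_cur):
    """Handover trigger: P changes from negative to positive between frames."""
    return P_prev < 0 and P_cur > 0
```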
[00155] The handover of the motor vehicle target identification can be expressed by the following formula:
P(t+1) = arg min(1 ≤ k ≤ N) d(Pk, L)
[00156] where Pt denotes the target that is tracked by the camera C1 at the time t; L denotes the viewing field boundary line of the camera C1 and the camera C2; N denotes the number of moving targets detected in the camera C2 near the L range; Pk denotes the kth moving target in the camera C2; and d(Pk, L) denotes the distance from Pk to L. If the kth motor vehicle target in the camera C2 is closest to L, an identifier the same as the tracked target that has just disappeared in the field of view of the camera C1 is assigned to that target, thereby realizing the relay tracking of the same target.
[00158] In order to ensure the synchronization of the cameras C1 and C2, C1 and C2 use video capture cards of the same specification and equal capture period, so that the sampling rate between the cameras is the same. The synchronous clock is set by software initialization, and the cameras C1 and C2 are driven at the same time to capture and process images, so that the video frame M1 captured by the camera C1 and the video frame M2 captured by the camera C2 at the same time are in one-to-one correspondence.
[00159] It can be understood that, in the image captured in the C2 stage, the tracking method of the vehicle is the same as the tracking mode of C1.
[00160] Further, the method further comprises: performing running red light detection of the target vehicle on the captured image, wherein the specific process comprises:
[00161] delimiting an area in the image captured by the second camera as a violation area;
[00162] in the case that the traffic light in the driving direction of the vehicle is red, if a vehicle enters the violation area, determining that the vehicle has a violation of running a red light.
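A minimal sketch of the red-light rule of paragraphs [00161] and [00162]; the violation area is assumed here to be an axis-aligned rectangle, although the method only requires a delimited area:

```python
def runs_red_light(light_is_red, vehicle_xy, violation_area):
    """A red-light violation occurs when the light for the vehicle's
    direction is red and its centroid lies inside the delimited area
    (xmin, ymin, xmax, ymax)."""
    x, y = vehicle_xy
    xmin, ymin, xmax, ymax = violation_area
    return light_is_red and xmin <= x <= xmax and ymin <= y <= ymax
```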
[00163] Optionally, the method further comprises: identifying the identity of the illegal vehicle in the image, wherein the specific process comprises:
[00164] cropping a license plate area screenshot of the illegal vehicle in the image captured by the second camera;
[00165] identifying the license plate number based on the license plate area screenshot; and
[00166] sending the license plate number to the data processing center for identification.
[00167] It should be noted that the method can also be extended to three or more cameras according to the road segment, and the specific method is the same as the cooperative method of the two cameras in the present case.
[00168] The present invention separates the lane lines and the road by color analysis and clustering based on a pattern recognition and processing method; the geometric transformation avoids the interference caused by dimensional change in the vehicle tracking process; the lane signs are obtained by a classifier based on BP neural network training, and the demarcation of the region attributes is realized; on this basis, the set of trajectory points of the particle motion is used to quickly determine violation behavior within a certain redundancy range. By setting the bi-camera viewing field boundary line to complete the relay tracking, the method expands the monitoring range while maintaining time-space consistency, avoids the time loss caused by re-detection of external features such as color and contour, solves the monitoring bottleneck of lanes having a long solid line, realizes the separation of illegal vehicle tracking and identification, and effectively reduces the influence of license plate occlusion caused by the close distance between vehicles in the low-speed operation phase. The method can identify violation behaviors of multiple vehicles occurring at intersections, such as overspeed, illegal lane change, retrograde driving, illegal occupation of lanes, and running a red light. The method is simple, practical, and strong in expandability, and has an important application value.
[00169] It is to be understood that the same or similar parts in the above embodiments may be referred to each other, and those not described in detail in some embodiments may refer to the same or similar contents in other embodiments.
[00170] It should be noted that in the description of the present invention, the terms "first", "second" and the like are used for descriptive purposes only, and are not to be construed as indicating or implying relative importance. Further, in the description of the present invention, the meaning of "a plurality of" is at least two unless otherwise stated.
[00171] Any process or method description in the flowcharts or otherwise described herein may be understood to represent a module, segment or portion of code that comprises one or more executable instructions for implementing the steps of a particular logical function or process. The scope of the preferred embodiments of the present invention comprises additional implementations, in which the functions may be performed in a substantially simultaneous manner or in an opposite order depending on the functions involved, rather than in the order shown or discussed. This should be understood by those skilled in the art to which the embodiments of the present invention pertain.
[00172] It should be understood that portions of the present invention may be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, a plurality of steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, the implementation may be by any one of the following techniques well known in the art, or a combination thereof: discrete logic circuits with logic gate circuits for implementing logic functions on data signals, application specific integrated circuits with suitable combinational logic gate circuits, programmable gate arrays (PGAs), field programmable gate arrays (FPGAs), etc.
[00173] One of ordinary skill in the art can understand that all or part of the steps carried by the method of implementing the above embodiments can be completed by a program instructing related hardware, and the program can be stored in a computer readable storage medium. When the program is executed, one of the steps of the method embodiments or a combination thereof is included.
[00174] In addition, each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module. The above integrated modules can be implemented in the form of hardware or in the form of software functional modules. The integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
[00175] The above-mentioned storage medium may be a read only memory, a magnetic disk, an optical disk or the like.
[00176] In the description of the present specification, the description with reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" and the like indicates that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In the present specification, the schematic representation of the above terms does not necessarily refer to the same embodiment or example. Furthermore, the specific feature, structure, material or characteristic described may be combined in a suitable manner in any one or more embodiments or examples.
[00177] Although the embodiments of the present invention have been shown and described, it is understood that the above embodiments are illustrative and are not to be construed as limiting the scope of the present invention. One of ordinary skill in the art can make changes, modifications, substitutions and variations to the above embodiments within the scope of the present invention.

Claims (14)

WHAT IS CLAIMED IS:
1. A method for identifying an intersection violation video based on camera cooperative relay, comprising:
    step S1: demarcating a lane line and identifying a lane indication sign;
step S2: detecting a target vehicle according to an image captured by a first camera to determine the lane in which the target vehicle is located; and tracking the target vehicle according to the image captured by the first camera to obtain a vehicle running trajectory; and
step S3: identifying whether the target vehicle is in illegal lane change according to the lane in which the target vehicle is located and the vehicle running trajectory.
2. The method according to claim 1, wherein the process of demarcating a lane line in step S1 comprises:
    in the case that there is no vehicle and no pedestrian on the road surface, capturing a road surface image by the first camera, wherein the area where the image is captured is a solid line area prior to reaching the intersection;
    preprocessing the captured image;
identifying lane lines on the preprocessed image to segment the lane lines in the image;
obtaining a region of interest based on the segmented lane, and cropping the region of interest;
geometrically transforming the cropped area to obtain an image of parallel lanes with equal width;
    identifying the vehicle travel indication sign on each lane to obtain a lane category;
    for different lane categories, segmenting the image into different regions based on lane lines, and labeling the lane attribute and the lane area coordinate range for each region.
3. The method according to claim 2, wherein preprocessing the captured image comprises:
    performing a grayscale process on the captured image to obtain a grayscale image;
    performing a Gaussian smoothing process on the grayscale image.
4. The method according to claim 3, wherein the process of detecting a target vehicle in step S2 comprises:
    acquiring the difference between two adjacent frame images:
Δ(n+1) = r_{n+1}(i, j) - r_n(i, j)
Δ(n-1) = r_n(i, j) - r_{n-1}(i, j)
performing a binarization process on the above differences to obtain:
r'_{n+1}(i, j) = 1 if Δ(n+1) > T, and 0 otherwise;
r'_{n-1}(i, j) = 1 if Δ(n-1) > T, and 0 otherwise;
then performing a logical AND operation to obtain a final foreground image, i.e.,
R_k(i, j) = 1 if r'_{n+1}(i, j) ∧ r'_{n-1}(i, j) = 1, and 0 otherwise;
after performing a hole filling process according to the grayscale distribution of the target R_k, forming a convex hull according to the boundary of the set of all the pixel points of the target R_k, obtaining and saving the centroid of the target R_k, in which the centroid is obtained by the following formula:
R_kx = Σ_i G_i x_i / Σ_i G_i,  R_ky = Σ_i G_i y_i / Σ_i G_i
wherein r_{n-1}(i, j), r_n(i, j) and r_{n+1}(i, j) represent the pixel values of the (n-1)th frame, the n-th frame, and the (n+1)th frame at (i, j), respectively; Δ(n-1) and Δ(n+1) represent the differences between the two adjacent frame images, respectively; T is a threshold; x_i and y_i represent the coordinates of the target area; and G_i is the weight of the pixel points, where G is the number of the pixel points.
5. The method according to claim 4, further comprising: assigning a unique ID number to the detected target vehicle.
6. The method according to claim 2, wherein the process of tracking a target vehicle in step S2 comprises:
    step S21: predicting a rough position of the moving target at k time using a Kalman filtering algorithm;
    step S22: finding the real position of the moving target at k time using a mean shift algorithm by obtaining the optimal solution;
step S23: conveying the real position of the moving target at k time to the Kalman filtering algorithm, optimizing the Kalman filtering algorithm, and obtaining the updated tracking position of the moving target at k time, where k=k+1;
    step S24: repeatedly performing steps S21 to S23 until the end of the image sequence;
    wherein the set of the tracking positions obtained in the step S23 is the running trajectory of the target vehicle.
7. The method according to claim 2, wherein the process of identifying whether the target vehicle is in illegal lane change in the step S3 comprises:
    obtaining a coordinate range of the lane area according to the lane in which the target vehicle is located;
    determining whether the vehicle is in illegal lane change according to the coordinate range of the lane area and the vehicle running trajectory; and if the horizontal coordinate value of any point in the vehicle running trajectory is greater than the maximum value of the horizontal coordinate in the lane area, or is smaller than the minimum value of the horizontal coordinate in the lane area, considering the vehicle to be in illegal lane change.
8. The method according to claim 2, further comprising: performing target vehicle retrograde detection on the captured image, wherein the specific process comprises:
    performing coordinate demarcating on the captured image, wherein the direction of the lane line is set as the vertical axis direction;
determining the vertical coordinate change trend of the travel position point when the vehicle is running normally; and obtaining the target vehicle running trajectory, and if the vertical coordinate change trend in the running trajectory is inconsistent with the vertical coordinate change trend in the normal running, considering the target vehicle to have a reverse-driving violation behavior.
9. The method according to claim 2, further comprising: performing target vehicle overspeed detection on the captured image, wherein the specific process comprises:
obtaining the time taken by the target vehicle to pass the capturing area according to the time Th when the target vehicle first enters the capturing area and the time Tl when the target vehicle finally leaves the capturing area;
    obtaining the speed v with which the target vehicle passes through the capturing area according to the actual road length corresponding to the capturing area:
v = L / ΔT
where L is the actual road length corresponding to the capturing area; ΔT = Tl - Th;
    if v is greater than the maximum speed limit of the road segment, considering the target vehicle to be overspeed.
10. The method according to claim 2, further comprising: performing detection of the target vehicle occupying the non-motorized vehicle lane on the captured image, wherein the specific process comprises:
    providing its corresponding coordinate range for the non-motorized vehicle lane area, wherein the set of coordinate points in the coordinate range is represented by Rn;
    if any of the coordinate points in the target vehicle running trajectory belongs to Rn, considering the target vehicle to have occupied the non-motorized vehicle lane.
11. The method according to claim 2, further comprising: capturing an image of the intersection area by the second camera, wherein there is a partially overlapping region between the image captured by the second camera and the image captured by the first camera;
    performing consistency processing on the image captured by the first camera and the image captured by the second camera using the overlapping region, wherein the specific process comprises:
    obtaining an image captured by the second camera, and correcting the image to obtain a corrected image;
    acquiring a template obtained by cropping the overlapping region in the image captured by the first camera;
    performing search matching in the corrected image using a template matching method, and obtaining an amplification ratio of the image captured by the first camera with respect to the corrected image; and scaling the corrected image according to the amplification ratio, so that the overlapping region in the scaled image has the image exactly the same as that of the overlapping region in the image captured by the first camera to achieve the relay matching between the first camera and the second camera;
wherein the image captured by the first camera in this process is an image processed by demarcating the lane line.
12. The method according to claim 11, wherein after the relay matching between the first camera and the second camera is achieved, the method further comprises: performing the relay tracking of the same target by the first camera and the second camera, wherein the specific process comprises:
    obtaining a viewing field boundary line L of the first camera and the second camera;
    wherein the expression of L is: Ax+By+C=0;
    assuming P= Ax+By+C, obtaining the coordinates (xp, yp) of the tracked target vehicle;
    if the value of P changes from negative to positive or from positive to negative, indicating that the target vehicle has a viewing field switching in the frame, wherein in the same lane area, the target point among the target center points closest to the viewing field boundary line is the same tracked target;
    tracking the target to achieve cooperative relay tracking of the same target by the first camera and the second camera.
13. The method according to claim 11, further comprising: performing running red light detection of the target vehicle on the captured image, wherein the specific process comprises: delimiting an area in the image captured by the second camera as a violation area;
    in the case that the traffic light in the driving direction of the vehicle is red, if a vehicle enters the violation area, determining that the vehicle has a violation of running a red light.
14. The method according to claim 13, further comprising: identifying the identity of the illegal vehicle in the image, wherein the specific process comprises:
    cropping a license plate area screenshot of the illegal vehicle in the image captured by the second camera;
    identifying the license plate number based on the license plate area screenshot; and sending the license plate number to the data processing center for identification.
    FIG. 1
FIG. 2
FIG. 3
    FIG. 4
FIG. 5
x1..x2  x3..x4  x5..x6
FIG. 6
    FIG. 7
    driving direction
FIG. 8
FIG. 9 (Camera C1, Camera C2, static background image 1, static background image 2, acquiring a template, geometric correction)
AU2019100914A 2019-08-16 2019-08-16 Method for identifying an intersection violation video based on camera cooperative relay Ceased AU2019100914A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2019100914A AU2019100914A4 (en) 2019-08-16 2019-08-16 Method for identifying an intersection violation video based on camera cooperative relay

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2019100914A AU2019100914A4 (en) 2019-08-16 2019-08-16 Method for identifying an intersection violation video based on camera cooperative relay

Publications (1)

Publication Number Publication Date
AU2019100914A4 true AU2019100914A4 (en) 2019-09-26

Family

ID=67989099

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2019100914A Ceased AU2019100914A4 (en) 2019-08-16 2019-08-16 Method for identifying an intersection violation video based on camera cooperative relay

Country Status (1)

Country Link
AU (1) AU2019100914A4 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110782677A (en) * 2019-11-25 2020-02-11 湖南车路协同智能科技有限公司 Illegal vehicle snapshot warning method and device
CN111126171A (en) * 2019-12-04 2020-05-08 江西洪都航空工业集团有限责任公司 Vehicle reverse running detection method and system
CN112700653A (en) * 2020-12-21 2021-04-23 上海眼控科技股份有限公司 Method, device and equipment for judging illegal lane change of vehicle and storage medium
CN112785850A (en) * 2020-12-29 2021-05-11 上海眼控科技股份有限公司 Method and device for identifying vehicle lane change without lighting
CN113870551A (en) * 2021-08-16 2021-12-31 清华大学 Roadside monitoring system capable of identifying dangerous and non-dangerous driving behaviors


Similar Documents

Publication Publication Date Title
AU2019100914A4 (en) Method for identifying an intersection violation video based on camera cooperative relay
CN110178167B (en) Intersection violation video identification method based on cooperative relay of cameras
US9704060B2 (en) Method for detecting traffic violation
CN106647776B (en) Method and device for judging lane changing trend of vehicle and computer storage medium
JP5981550B2 (en) Three-dimensional object detection apparatus and three-dimensional object detection method
CN105426864A (en) Multiple lane line detecting method based on isometric peripheral point matching
JP6020567B2 (en) Three-dimensional object detection apparatus and three-dimensional object detection method
CN102663743A (en) Multi-camera cooperative character tracking method in complex scene
CN110386065A (en) Monitoring method, device, computer equipment and the storage medium of vehicle blind zone
WO2020154990A1 (en) Target object motion state detection method and device, and storage medium
EP2813973B1 (en) Method and system for processing video image
KR102493930B1 (en) Apparatus and method for controlling traffic signal based on reinforcement learning
CN107705577B (en) Real-time detection method and system for calibrating illegal lane change of vehicle based on lane line
WO2023179697A1 (en) Object tracking method and apparatus, device, and storage medium
Nagaraj et al. Traffic jam detection using image processing
CN111951576A (en) Traffic light control system based on vehicle identification and method thereof
CN111523385B (en) Stationary vehicle detection method and system based on frame difference method
Jiang et al. Lane line detection optimization algorithm based on improved Hough transform and R-least squares with dual removal
CN114627409A (en) Method and device for detecting abnormal lane change of vehicle
CN108629225A (en) A kind of vehicle checking method based on several subgraphs and saliency analysis
Li et al. Intelligent transportation video tracking technology based on computer and image processing technology
JP2019121356A (en) Interference region detection apparatus and method, and electronic apparatus
Tang et al. Robust vehicle surveillance in night traffic videos using an azimuthally blur technique
JP2020095631A (en) Image processing device and image processing method
JP5892254B2 (en) Image processing apparatus, image processing method, and image processing program

Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry