CN112329724B - Real-time detection and snapshot method for lane change of motor vehicle - Google Patents


Publication number
CN112329724B
CN112329724B (application CN202011353164.4A)
Authority
CN
China
Prior art keywords: image, lane, target, vehicle, straight line
Prior art date
Legal status
Active
Application number
CN202011353164.4A
Other languages
Chinese (zh)
Other versions
CN112329724A (en)
Inventor
周欣
蒋欣荣
王若君
Current Assignee
Sichuan University
Original Assignee
Sichuan University
Priority date
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN202011353164.4A
Publication of CN112329724A
Application granted
Publication of CN112329724B
Legal status: Active


Classifications

    • G06V20/54 Surveillance or monitoring of activities of traffic, e.g. cars on the road, trains or boats
    • G06F18/2411 Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention discloses a method for real-time detection and snapshot of motor-vehicle lane changes, comprising the following steps. S1: manually calibrate the road scene to obtain the required parameters. S2: traverse the edge frame-difference image at a fixed step with a HAAR horizontal-edge detector to find regions containing a large number of horizontal edges, each represented by a straight line segment through its centre. S3: according to the calibrated lane width, merge the line segments obtained in S2 within a set region to obtain a straight line marking the bottom position of the moving target. The method addresses the problems of prior approaches, whose coupled recognition and tracking algorithms are complex to design, have low detection rates and short detection distances, place high demands on processing equipment, and lack real-time performance and robustness.

Description

Real-time detection and snapshot method for lane change of motor vehicle
Technical Field
The invention relates in particular to a method for real-time detection and snapshot of motor-vehicle lane changes.
Background
Vehicles should pass according to traffic signals. Changing lanes across a road marking that prohibits lane changes is an illegal act and an important cause of traffic accidents. The technical specification GA/T 832-2014, "Image evidence collection of road traffic safety violations", requires that the characteristics of the violation be recorded when a motor vehicle disobeys a prohibition marking. To serve as evidence of an illegal lane change, the record must contain at least three pictures, as shown in fig. 1, fig. 2 and fig. 3: one before the vehicle changes lane, one of the vehicle pressing the no-lane-change marking, and one after the lane change. Each picture must contain clearly identifiable panoramic features of the vehicle. Automatic, real-time, accurate and long-distance lane-change detection and snapshot using existing road surveillance cameras would therefore be of great significance for regulating driver behaviour, reducing accidents and improving traffic management.
Representative publications related to lane-change detection and capture of motor vehicles include: vehicle violation line-pressing detection based on computer vision (CN201910309083.5); a method for identifying vehicles crossing a solid line while merging at traffic intersections (CN201510575636.3); a video-based vehicle detection and tracking method (CN200810024699); a vehicle illegal lane-change detection method based on video detection technology (CN201210226419.X); a device and method for detecting and tracking vehicles crossing a yellow line and capturing vehicle information (CN201010106469.5); and a real-time detection method and system for vehicle illegal lane changes based on calibrated lane lines (CN201711026761.4).
The main technical routes in these documents are essentially the same: first, detect and identify a vehicle target in the image; second, track the target to obtain its motion trajectory; finally, judge from trajectory analysis whether a lane change occurred. The patents differ in the concrete implementation of each step. For detection and identification, they use background segmentation, motion analysis, feature recognition or neural networks; for target tracking, feature-point matching, optical flow or Kalman filtering; for trajectory analysis, coordinate positions or direction angles.
Used in combination, these techniques can detect lane-changing behaviour to a certain extent, but problems and drawbacks remain. Consider the practical operating environment of lane-change detection. First, on road sections where lane changing is prohibited, the number of illegally changing vehicles is far smaller than the total traffic, i.e. most vehicles drive as prescribed. Second, vehicles on the monitored section move in two directions: from far to near and from near to far. Third, lane changes may occur anywhere within the roughly 200 metres the camera can observe. Fourth, traffic surveillance cameras are moving toward high resolution with ever larger monitored areas, while processors are moving toward miniaturised embedded devices that must also perform license-plate recognition, driver-behaviour analysis, vehicle detail recognition and other functions. Fifth, three pictures with clear panoramic features of the vehicle are needed to record a complete lane change as legal evidence.
Considering this application environment, the shortcomings of the prior art lie mainly in the following aspects. First, the fact that only a few vehicles change lanes illegally is ignored: all vehicles are detected and identified, occupying computing resources, which hinders the use of existing equipment and real-time detection under heavy traffic. Second, for vehicles travelling from far to near, distant vehicles are small and hard to identify, and an unidentified vehicle cannot be tracked or have its trajectory computed, so distant lane changes may be missed; this shortens the effective detection distance. Third, the prior art basically gives no precise snapshot times for the three pictures that document the lane-change behaviour. Fourth, the adopted deep-learning recognition methods and tracking methods such as feature matching and optical flow are complex, difficult to design, and demanding on processing equipment, yet their real-time performance and robustness are limited.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a method for real-time detection and snapshot of motor-vehicle lane changes, which solves the problems that the recognition and tracking algorithms are complex in design and coupled to each other, the detection rate is low, the detection distance is short, the demands on processing equipment are high, and real-time performance and robustness are insufficient.
The technical scheme adopted by the invention is that the method for detecting and snapshotting the lane change of the motor vehicle in real time comprises the following steps:
S1: manually calibrating a road scene to obtain required parameters;
S2: traversing the edge frame-difference image at a fixed step with a HAAR horizontal-edge detector to obtain regions containing a large number of horizontal edges, each represented by a straight line segment through its centre;
S3: according to the calibrated lane width, merging the straight line segments obtained in step S2 within a set region to obtain a straight line representing the bottom position of the moving target;
S4: recording, over the video image sequence, the position changes of the straight line segment Dkj at the bottom of each moving target, and, for a target crossing a lane line, extracting three images from the cached image sequence as a lane-change moving-target record;
S5: for each lane-change moving-target image record, calling a trained vehicle recognizer, when the processing equipment is idle, to identify the target area in the close-range image;
S6: if the target in the close-range image is judged to be a vehicle, outputting the lane-change moving-target image record to the corresponding violation database as a record of an illegal lane change; if it is judged to be a non-vehicle, deleting the lane-change target image record.
Preferably, S1 includes the following sub-steps:
S11: acquiring one road-scene image, with the lower-left corner of the image as the coordinate origin, the horizontal axis as the X axis and the vertical axis as the Y axis, all coordinates in pixels; the image width is m and the height is n;
S12: marking the lane-line position of each lane in pixel coordinates on the image, and marking the lane lines across which lane changing is prohibited;
S13: determining a detection area on the image and marking its farthest and nearest points; the Y coordinate of the farthest point is denoted y1 and that of the nearest point y2; the lane width at the farthest point is denoted W1 and at the nearest point W2;
S14: calculating the lane width Wj = W1 + (W2 − W1) × (y1 − j)/(y1 − y2) for each row j with y2 < j < y1.
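The interpolation in S14 can be sketched as follows (a minimal sketch; the function and variable names are ours, and the result is rounded down following the round-down convention stated in step A1 of the detailed description):

```python
def lane_width(j, y1, y2, w1, w2):
    """Linearly interpolate the lane width Wj at image row j.

    y1/w1: Y coordinate and lane width at the farthest point;
    y2/w2: Y coordinate and lane width at the nearest point (y2 < y1).
    The result is truncated, matching the patent's round-down rule.
    """
    return int(w1 + (w2 - w1) * (y1 - j) / (y1 - y2))
```

At j = y1 this returns W1, at j = y2 it returns W2, and rows in between get a linearly blended width.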
Preferably, S2 includes the following sub-steps:
S21: calculating the horizontal-edge intensity of every pixel of each original frame S with the (1, 0, −1)ᵀ operator, and taking the absolute value of the difference of the horizontal-edge intensities of corresponding pixels in two adjacent frames as the edge frame-difference image P;
S22: calculating the integral image I of the edge frame-difference image P;
S23: constructing a family of HAAR detectors indexed by the image Y coordinate: at each row j with y2 < j < y1, the detector is square with side length Rj = (Wj/5) × 2, a white rectangular region of height Rj/2 and a black rectangular region of height Rj/2;
S24: within the detection area, traversing the edge frame-difference image P with step 8 in both the X and Y directions and computing, from the integral image I, the HAAR feature Tij of P at pixel (i, j);
wherein Tij = (sum of pixel values in the white rectangular region) − (sum of pixel values in the black rectangular region);
S25: if Tij > 8 × Rj × Rj, marking the region as a salient horizontal-edge region with a straight line segment Lij of length Rj centred at (i, j).
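Steps S22–S24 amount to evaluating the square HAAR detector via a summed-area table. A minimal numpy sketch (assumptions: integer-valued edge images, array row/column indexing rather than the patent's bottom-left origin, and a white-above-black layout):

```python
import numpy as np

def integral_image(p):
    """Summed-area table with a zero first row/column, so rectangle
    sums need no bounds checks."""
    ii = np.zeros((p.shape[0] + 1, p.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(p, axis=0), axis=1)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of p[r0:r1, c0:c1] in four lookups on the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def haar_horizontal(ii, r, c, R):
    """HAAR horizontal-edge feature of an R x R detector centred at
    (row r, col c): white upper half minus black lower half."""
    r0, c0 = r - R // 2, c - R // 2
    white = rect_sum(ii, r0, c0, r0 + R // 2, c0 + R)
    black = rect_sum(ii, r0 + R // 2, c0, r0 + R, c0 + R)
    return white - black
```

Each feature costs eight table lookups regardless of Rj, which is why the detector scales to the whole range of lane widths.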
Preferably, S3 includes the following sub-steps:
S31: marking all the straight line segments Lij obtained in S2 on the original image S;
S32: taking the centre point (i, j) of each straight line segment Lij as the centre of the bottom edge of a square region of Wj × Wj pixels, and merging all straight line segments whose centre points lie in that region into one straight line segment Dkj;
S33: the Y coordinate of both endpoints of Dkj is j; the X coordinate of the left endpoint is the minimum X coordinate over all merged segments within the Wj × (Wj/4) rectangular region, and the X coordinate of the right endpoint is the maximum; the straight line segment Dkj represents the detected moving target.
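The merging rule of S32–S33 can be sketched as follows (a simplification with names of our own: one seed segment and a window of width Wj and height Wj/4 around it; the exact window placement in the patent may differ):

```python
def merge_segments(centers, seed, wj):
    """Merge HAAR line segments near a seed segment into one bottom
    line Dkj. `centers` is a list of (x, y) segment centres; `seed`
    is one of them. Segments whose centres fall within wj/2
    horizontally and wj/4 vertically of the seed are merged; the
    result's endpoints take the min/max X over the merged set, at
    the seed's row."""
    sx, sy = seed
    in_win = [(x, y) for (x, y) in centers
              if abs(x - sx) <= wj // 2 and abs(y - sy) <= wj // 4]
    xs = [x for x, _ in in_win]
    return (min(xs), sy), (max(xs), sy)
```

Collapsing the cluster of detections to a single bottom segment is what lets the method skip rectangular bounding boxes entirely.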
Preferably, S4 includes the following sub-steps:
S41: in the video image sequence, marking as the same moving target the straight line segments Dkj on consecutive frames that are nearest to each other in the Y direction;
S42: determining lane-change moving targets from the position changes of Dkj relative to the marked no-lane-change lane lines, and recording three moments:
the first is the time t1 at which the whole target is still in its original lane before the lane change; for a target moving from near to far, t1 is the time it passes the nearest point of the detection area; the second is the time t2 at which the target rides on the lane line; the third is the time t3 at which the whole target is in the other lane after the lane change; for a target moving from far to near, t3 is the time it passes the nearest point of the detection area;
S43: according to t1, t2 and t3, saving from the cached image sequence the before-change image, the line-riding image and the after-change image of the lane-change moving target to form a complete lane-change target image record; for a target moving from far to near, the after-change image is the close-range image, and for a target moving from near to far, the before-change image is the close-range image.
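The t1/t2/t3 logic of S42 reduces to classifying each frame's bottom segment against the prohibited lane line and scanning for the crossing. A hedged sketch, with frame indices standing in for times and the lane line approximated by a single x position:

```python
def side_of_line(seg, line_x):
    """Classify a bottom segment (xl, xr) against the lane-line x
    position at the same image row: wholly left of the line, riding
    it, or wholly right of it."""
    xl, xr = seg
    if xr < line_x:
        return "left"
    if xl > line_x:
        return "right"
    return "riding"

def lane_change_times(track, line_x):
    """Scan one target's per-frame segments for the three snapshot
    moments: last frame wholly in the original lane (t1), first
    frame riding the prohibited line (t2), first frame wholly in
    the other lane (t3). Returns None if no full crossing is seen."""
    t1 = t2 = t3 = None
    start = side_of_line(track[0], line_x)
    for t, seg in enumerate(track):
        s = side_of_line(seg, line_x)
        if s == start:
            t1 = t                      # still wholly in the original lane
        elif s == "riding" and t2 is None:
            t2 = t                      # first frame pressing the line
        elif s not in (start, "riding"):
            t3 = t                      # wholly in the other lane
            break
    return (t1, t2, t3) if t3 is not None else None
```

A track that never reaches the far side yields no record, matching S43's requirement of a complete three-picture evidence set.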
Preferably, S5 includes the following sub-steps:
S51: training the vehicle recognizer offline; collecting 2000 vehicle images of 82 × 82 pixels as positive training samples;
S52: collecting 3000 non-vehicle images of 82 × 82 pixels as negative training samples;
S53: extracting 2916 features from each sample image with the HOG algorithm, and training a vehicle recognizer on the SVM (support vector machine) in the LIBSVM toolbox;
S54: the HOG algorithm is configured with edge detectors (1, 0, −1) and (1, 0, −1)ᵀ, cell size 8 × 8 and block size 16 × 16; the SVM is trained with a linear kernel function and penalty parameter C = 0.1.
Preferably, the HOG algorithm in S53 includes the following sub-steps:
S531: applying the chosen edge detectors to obtain the vertical edge response Y and horizontal edge response X of each pixel; calculating the edge strength e = sqrt(X² + Y²) and the edge direction A = arccot(X/Y) of each pixel,
wherein ᵀ denotes matrix transposition, sqrt the square-root function and arccot the inverse cotangent;
S532: dividing the n × n pixel image into cells of cells × cells pixels; at the same time, marking blocks of blocks × blocks pixels with a stride of one cell, each block consisting of 4 cells (the block side is 2 × the cell side);
S533: taking each block as an independent unit, calculating the sum S of the edge strengths of its pixels, and normalizing the edge strength of each pixel in the block as E = e/S;
S534: within each block, collecting edge-feature statistics cell by cell;
S535: following S534, each block yields 36 (9 × 4) features, and the image yields (n/cells − 1)² × 36 features in total; these are recorded as the feature vector of the image, i.e. the feature vector of the sample.
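The feature count quoted in S535 can be checked arithmetically: an 82 × 82 sample with 8 × 8 cells and 16 × 16 blocks at one-cell stride gives (82 // 8 − 1)² = 81 block positions of 36 features each, i.e. the 2916 features of S53:

```python
def hog_feature_count(n, cell=8, bins=9, cells_per_block=4):
    """Number of HOG features for an n x n image with cell x cell
    cells, 2x2-cell blocks and a one-cell block stride:
    (n // cell - 1)^2 block positions, each with bins * 4 features."""
    blocks_per_side = n // cell - 1
    return blocks_per_side ** 2 * bins * cells_per_block
```

For the 82 × 82 training samples this evaluates to 2916, matching the count quoted above.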
Preferably, S534 comprises the following sub-steps:
S5341: dividing the 0–180° range equally into 9 directions;
S5342: according to the edge direction A of each pixel, adding the normalized edge strengths of all pixels in a cell that fall in the same direction to obtain the accumulated normalized edge strength of that direction;
S5343: taking these 9 statistics as the features of the cell; following S5342, each block therefore has 4 groups of cell features.
Preferably, S6 includes the following sub-steps:
S61: at the target position in the close-range image, cutting out a region of Wj × Wj pixels as the vehicle-identification image V0, and repeatedly shrinking V0 by a factor of 1.1 with bilinear interpolation down to 82 × 82 pixels, obtaining a group of images V0, V1, V2, … Vk;
S62: applying the trained vehicle recognizer to the group of images V0, V1, V2, … Vk obtained in S61; if the target is judged to be a vehicle, outputting the lane-change target image record to the corresponding violation database as a record of an illegal lane change; if it is judged to be a non-vehicle, deleting the lane-change target record.
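The pyramid of S61 shrinks the crop by a factor of 1.1 per level until the 82 × 82 recognizer input size is reached. The resulting size sequence can be sketched as follows (sizes only; the bilinear resampling itself is omitted, and the clamping of the final level to 82 is our assumption):

```python
def pyramid_sizes(w0, target=82, ratio=1.1):
    """Side lengths of the image pyramid V0, V1, ... obtained by
    shrinking a w0 x w0 crop by `ratio` per step until the
    target x target recognizer input size is reached."""
    sizes = [w0]
    w = w0
    while w > target:
        w = max(target, int(w / ratio))
        sizes.append(w)
    return sizes
```

Running the recognizer on every level lets a fixed 82 × 82 classifier match vehicles whose apparent size varies with distance.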
The method for detecting and snapshotting lane change of the motor vehicle in real time has the following beneficial effects:
1. The invention fully exploits the characteristics of lane-change behaviour and of motor vehicles, and separates the detection and snapshot of moving targets from vehicle identification. Because it is unnecessary to identify first and then track trajectories as in the prior art, vehicle identification need not be run on every video frame; for lane-change moving-target records, the vehicle-identification module can be called when the equipment is idle rather than in real time. This change of processing flow achieves real-time detection and snapshot of lane changes while greatly reducing the performance demands on the processing equipment.
2. The invention designs a family of square, scale-varying HAAR horizontal-edge feature detectors used together with the integral-image technique. If a large number of horizontal edges are evenly distributed in the bottom region of a moving target, the target can be detected quickly and accurately. The detector is computationally simple, robust and strongly expressive, with excellent detection performance on vehicle targets; it handles targets of various scales and thus realizes wide-range, long-distance vehicle detection.
3. The invention reduces the detected bottom region to a straight line segment, so that each target is finally represented by a segment at its bottom rather than by the rectangular box of conventional techniques. Ignoring the target's height reflects its actual road position more accurately and marks the moments before the lane change, while riding the lane line, and after the change more precisely. With targets represented as segments and matched by nearest distance, real-time tracking is easier to realize.
4. The invention builds the vehicle recognizer from HOG features and a linear SVM: a classifier design that is simple in structure, convenient in sample collection, easy to train, accurate and fast. This greatly lowers the performance demands on processing equipment, allowing the method to be embedded in current traffic equipment.
Drawings
Fig. 1 is the first evidence picture of an illegal motor-vehicle lane change according to the present invention.
Fig. 2 is the second evidence picture of an illegal motor-vehicle lane change.
Fig. 3 is the third evidence picture of an illegal motor-vehicle lane change according to the present invention.
FIG. 4 is the HAAR horizontal-edge feature detector of the present invention.
Fig. 5 shows the 2000 vehicle images of 82 × 82 pixels collected as positive training samples.
Fig. 6 shows the 3000 non-vehicle images of 82 × 82 pixels collected as negative training samples.
Detailed Description
The following description of the embodiments is provided to help those skilled in the art understand the invention, but the invention is not limited to the scope of these embodiments. Various changes that are apparent to those skilled in the art and remain within the spirit and scope of the invention as defined by the appended claims are all protected.
The invention provides a motor vehicle lane change real-time detection and snapshot technology, which comprises the following steps.
Step A-step B-step C-step D-step E
In the step A: and manually calibrating the road scene, and acquiring parameters required in the subsequent processing steps.
The step A also comprises the following steps:
step A1: and acquiring one road scene image, wherein the lower left corner of the image is marked as an origin coordinate, the horizontal axis is an X axis, the vertical axis is a Y axis, and the coordinate axes all take pixels as units. The image has a width m and a height n. The following steps are all calculated according to the coordinate axis and the unit, and the calculation results are all rounded downwards.
Step A2: lane-dividing line positions of the respective lanes are marked in pixel coordinates on the image, and lane-dividing lines which do not allow lane change are marked.
Step A3: a detection area is determined on the image. The farthest and closest points of the detection area are marked. The image Y-axis coordinate of the farthest point is denoted as Y1, and the image Y-axis coordinate of the closest point is denoted as Y2. The lane width of the farthest point is determined and is marked as W1, and the lane width of the nearest point is determined and is marked as W2. The width of the detection area is set to the width of the image.
Step A4: the lane width Wj = W1 + (W2 − W1) × (y1 − j)/(y1 − y2) is calculated for each row j with y2 < j < y1.
In the step B: traversing the edge frame difference image according to step length by using a HAAR horizontal edge detector to obtain an area distributed with a large number of horizontal edges; the region is represented by a straight line segment at the center of the region.
The step B also comprises the following steps:
step B1-step B2-step B3-step B4-step B5.
Step B1: the horizontal-edge intensity of every pixel of each original frame S is calculated with the (1, 0, −1)ᵀ operator; the absolute value of the difference of the horizontal-edge intensities of corresponding pixels in two adjacent frames is taken as the edge frame-difference image P.
Step B2: an integral image I of the edge frame difference image P is calculated.
Step B3: a family of HAAR detectors indexed by the image Y coordinate is constructed as shown in fig. 4. At each row j with y2 < j < y1, the side length of the HAAR detector is Rj = (Wj/5) × 2, the height of the white rectangular region is Rj/2 and the height of the black rectangular region is Rj/2.
Step B4: within the detection area, the edge frame-difference image P is traversed with step 8 in both the X and Y directions, and the HAAR feature Tij of P at pixel (i, j) is computed from the integral image I as Tij = (sum of pixel values in the white rectangular region) − (sum of pixel values in the black rectangular region).
Step B5: if Tij > 8 × Rj × Rj, a straight line segment Lij of length Rj centred at (i, j) marks the region as a salient horizontal-edge region.
In the step C: and D, according to the calibrated lane width, combining the straight line segments obtained in the step B in a set area to obtain a straight line representing the bottom position of the moving target.
The step C also comprises the following steps:
step C1-step C2-step C3.
Step C1: all the straight line segments Lij acquired in step B are marked on the original image S.
Step C2: taking the center point (i, j) of each straight line segment Lij as the center of the bottom edge, a square area Wj × Wj is determined, and all straight line segments with the center points located in the area are combined to obtain a straight line segment Dkj.
Step C3: the Y coordinate of both endpoints of straight line segment Dkj is j; the X coordinate of the left endpoint is the minimum X coordinate over all merged segments within the Wj × (Wj/4) rectangular region, and the X coordinate of the right endpoint is the maximum. Straight line segment Dkj represents the detected moving target.
In the step D: the position change of the straight line segment Dkj of the same moving object is recorded on a video image sequence, and three images of the object crossing the lane line are extracted from the buffer image sequence to be used as a lane change moving object record.
The step D also comprises the following steps:
step D1-step D2-step D3.
Step D1: in the video image sequence, the straight line segments Dkj on consecutive frames that are nearest to each other in the Y direction are marked as the same moving target.
Step D2: lane-change moving targets are determined from the position changes of Dkj relative to the marked no-lane-change lane lines, and three moments are recorded: the time t1 at which the whole target is still in its original lane before the lane change (for a target moving from near to far, t1 is the time it passes the nearest point of the detection area); the time t2 at which the target rides on the lane line; and the time t3 at which the whole target is in the other lane after the lane change (for a target moving from far to near, t3 is the time it passes the nearest point of the detection area).
Step D3: according to t1, t2 and t3, the before-change image, the line-riding image and the after-change image of the lane-change target are saved from the cached image sequence to form a complete lane-change target image record. For a target moving from far to near, the after-change image is the close-range picture; for a target moving from near to far, the before-change image is the close-range picture.
In the step E: and for the lane change target image record, calling a trained vehicle recognizer to recognize a target area in the close-range image when the processing equipment is idle. If the target is judged to be a vehicle, outputting the lane-changing target image record to a corresponding illegal database as a vehicle illegal lane-changing record; and if the target is judged to be a non-vehicle, deleting the lane change target record.
The step E also comprises the following steps:
step E1-step E2-step E3.
Step E1: the vehicle recognizer is trained offline. 2000 vehicle images of 82 × 82 pixels are collected as positive training samples, as shown in fig. 5; 3000 non-vehicle images of 82 × 82 pixels are collected as negative training samples, as shown in fig. 6. 2916 features are extracted from each sample image with the HOG algorithm, and a vehicle recognizer is trained on the SVM in the LIBSVM toolbox. The HOG algorithm is configured with edge detectors (1, 0, −1) and (1, 0, −1)ᵀ, cell size 8 × 8 and block size 16 × 16; the SVM is trained with a linear kernel function and penalty parameter C = 0.1.
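The training in step E1 can be sketched with scikit-learn's LinearSVC standing in for LIBSVM's linear-kernel SVM (an assumption; the patent uses the LIBSVM toolbox directly), with the penalty parameter C = 0.1 configured above:

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_vehicle_recognizer(pos_feats, neg_feats, C=0.1):
    """Train a linear SVM on HOG feature vectors: positive samples
    (vehicles) labelled 1, negative samples (non-vehicles) 0.
    LinearSVC is a stand-in for LIBSVM's linear kernel; the
    function and argument names are ours."""
    X = np.vstack([pos_feats, neg_feats])
    y = np.concatenate([np.ones(len(pos_feats)), np.zeros(len(neg_feats))])
    return LinearSVC(C=C).fit(X, y)
```

In practice `pos_feats`/`neg_feats` would be the 2000 × 2916 and 3000 × 2916 HOG matrices extracted from the training images.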
Step E2: at the target position in the close-range image, a region of Wj × Wj pixels is cut out as the vehicle-identification image V0. V0 is repeatedly shrunk by a factor of 1.1 with bilinear interpolation down to 82 × 82, yielding a group of images V0, V1, V2, … Vk.
Step E3: the trained vehicle recognizer is applied to the group of images V0, V1, V2, … Vk obtained in step E2 to judge whether the target is a vehicle. If it is judged to be a vehicle, the lane-change target image record is output to the corresponding violation database as a record of an illegal lane change; if it is judged to be a non-vehicle, the lane-change target record is deleted.
The invention places detection and snapshot in one processing module and vehicle identification in another, thereby separating identification from snapshot. The detection and snapshot module adopts simple, fast image processing techniques to meet the real-time and robustness requirements of the whole algorithm. The vehicle identification module adopts a linear SVM classifier, which ensures high identification accuracy while lowering the performance requirements on the processing equipment.
In the detection and snapshot module, the basic parameters of the road scene are calibrated manually. A set of square, scale-varying HAAR horizontal edge feature detectors is designed, as shown in fig. 4; these HAAR features can quickly detect the significant horizontal edge regions at the bottom of moving objects in the edge frame difference image. The detected bottom edge regions are merged into a straight line segment that locates the bottom of the moving target, and this straight line segment is taken as the detected moving target. The moving target is marked across video frames according to the nearest-distance principle, without tracking algorithms such as feature point matching, optical flow calculation or Kalman filtering. Whether a marked moving target changes lanes is judged against the calibrated lane lines; if it does, the pictures of the target before the lane change, riding on the lane line, and after the lane change are saved according to the mark, forming a complete lane-change target record. For a moving target travelling from far to near, the picture after the lane change is the close-range picture; for a moving target travelling from near to far, the picture before the lane change is the close-range picture.
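The HAAR response described above can be computed in constant time per window from an integral image. The sketch below follows the stated parameters (square Rj × Rj detector split into Rj/2-high white and black halves, traversal step 8, threshold 8 × Rj); placing the white area in the top half is an assumption of this illustration, and the code is a NumPy sketch rather than the patented implementation.

```python
import numpy as np

def integral_image(p):
    """I[y, x] = sum of P over rows < y and columns < x (zero-padded)."""
    ii = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    ii[1:, 1:] = p.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of P over the h x w rectangle with top-left corner (y, x)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_hits(p, rj, step=8):
    """Slide an Rj x Rj detector (white top half minus black bottom half)
    over edge frame-difference image P; keep centres with Tij > 8 * Rj."""
    ii = integral_image(p)
    half = rj // 2
    hits = []
    for y in range(0, p.shape[0] - rj + 1, step):
        for x in range(0, p.shape[1] - rj + 1, step):
            tij = (rect_sum(ii, y, x, half, rj)
                   - rect_sum(ii, y + half, x, half, rj))
            if tij > 8 * rj:
                hits.append((x + half, y + half))  # centre (i, j)
    return hits

# a synthetic P with one strong horizontal edge band is detected
p = np.zeros((64, 64))
p[24:32, :] = 10.0
hits = haar_hits(p, 16)
```

Because every window sum is four integral-image lookups, the traversal cost is independent of the detector size Rj, which is what makes the scale-varying detector cluster cheap enough for real-time use.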
In the vehicle identification module, for each lane-change target record, a vehicle recognizer built on a linear SVM identifies, on the target close-range image, whether the moving target is a vehicle, forming the final violation output. This module is invoked when the processing equipment is idle. The target close-range image has clear vehicle characteristics, so the linear SVM classifier is simple to train and achieves high recognition accuracy. Because the linear SVM classifier only needs to identify a few lane-change targets at a time, the performance requirement on the processing equipment is low, and the algorithm can be embedded into traffic equipment currently in use.

Claims (9)

1. A method for real-time detection and snapshot of motor vehicle lane changes, characterized by comprising the following steps:
s1: manually calibrating a road scene to obtain required parameters;
s2: traversing the edge frame difference image according to step length by using a HAAR horizontal edge detector to obtain an area distributed with a large number of horizontal edges, and representing the area by using a straight line segment at the center of the area;
s3: according to the calibrated lane width, combining the straight line segments obtained in the step S2 in the set area to obtain a straight line representing the bottom position of the moving target;
s4: recording the position change of a straight line segment Dkj at the bottom position of a moving object on a video image sequence, and extracting three images from a cache image sequence for the object crossing a lane line as a lane change moving object record;
s5: for the lane-changing moving target image record, when the processing equipment is idle, a trained vehicle recognizer is called to recognize a target area in a close-range image;
s6: if the target in the close-range image is judged to be a vehicle, outputting the lane-changing moving target image record serving as a vehicle illegal lane-changing record to a corresponding illegal database; and if the target in the close-range image is judged to be a non-vehicle, deleting the lane-changing target image record.
2. The motor vehicle lane change real-time detection and snapshot method of claim 1, wherein the S1 comprises the following sub-steps:
s11: acquiring one road scene image, wherein the lower left corner of the image is marked as an origin coordinate, the horizontal axis is an X axis, the vertical axis is a Y axis, the coordinate axes all take pixels as units, the width of the image is m, and the height of the image is n;
s12: marking the lane line position of each lane according to pixel coordinates on the image, and marking lane lines which are not allowed to change lanes;
s13: determining a detection area on the image, marking the farthest point and the nearest point of the detection area, wherein the Y-axis coordinate of the image of the farthest point is recorded as Y1, and the Y-axis coordinate of the image of the nearest point is recorded as Y2;
s14: the lane width Wj = W1 + (W2 - W1) × (y1 - j)/(y1 - y2) is calculated for each position j greater than y2 and less than y1, W2 representing the lane width of the farthest point and W1 representing the lane width of the nearest point.
3. The motor vehicle lane change real-time detection and snapshot method according to claim 1, wherein the S2 comprises the following sub-steps:
s21: with (1,0, -1) T Operator computationThe horizontal edge intensity of all pixel points of each frame of original image S; calculating the absolute value of the horizontal edge intensity difference of all pixel points of two adjacent frames of images as an edge frame difference image P, T: performing matrix transposition;
s22: calculating an integral graph I of the edge frame difference image P;
s23: a cluster of HAAR detectors is constructed according to the image Y-axis coordinate, wherein for each position j greater than y2 and less than y1, the side length of the HAAR detector is Rj = (Wj/5) × 2, the height of the white rectangular area is Rj/2 and the height of the black rectangular area is Rj/2;
y1 represents the image Y-axis coordinate of the farthest point in the detection region;
y2 represents the image Y-axis coordinates of the closest point of the detection region;
wj denotes the lane width at each coordinate position j greater than y2 and less than y 1;
s24: in the detection area, traversing the edge frame difference image P in the X direction and the Y direction according to the step length 8, and calculating HAAR characteristics Tij of the edge frame difference image P at a pixel point (I, j) on the basis of the integral image I;
where Tij = sum of white rectangular area pixel values-sum of black rectangular area pixel values;
s25: if Tij > 8 × Rj, a straight line segment Lij of length Rj, centered at (i, j), marks the region as a significant horizontal edge region.
4. The motor vehicle lane change real-time detection and snapshot method of claim 1, wherein the S3 comprises the following sub-steps:
s31: marking all the straight line segments Lij acquired at S2 on the original image S;
s32: determining a square area of Wj × Wj pixels with the central point (i, j) of each straight line segment Lij as the center of its bottom edge, and merging all straight line segments whose central points lie in the area to obtain a straight line segment Dkj;
wj: lane widths at coordinate locations j each greater than y2 less than y 1;
y1 represents the image Y-axis coordinate of the farthest point in the detection region;
y2 represents the image Y-axis coordinates of the closest point of the detection region;
s33: the Y-axis coordinates of the left and right end points of straight line segment Dkj are j; the X-axis coordinate of the left end point of Dkj is determined by the minimum X-axis value of all merged straight line segments within the Wj × (Wj/4) rectangular area; the X-axis coordinate of the right end point of Dkj is determined by the maximum X-axis value of all merged straight line segments within the Wj × (Wj/4) rectangular area; straight line segment Dkj represents the detected moving object.
5. The motor vehicle lane change real-time detection and snapshot method of claim 1, wherein the S4 comprises the following sub-steps:
s41: the straight line segment Dkj on each frame image is marked as the same moving object according to the nearest-distance principle in the Y-axis direction across the video image sequence;
s42: determining a lane change moving target according to the position change of Dkj and the marked lane line which does not allow lane change, and recording three moments;
the first time is a time t1 when the whole vehicle before the target lane change is located in the original lane: for a moving object from near to far, t1 is the time when it passes through the nearest point of the detection region; the second time is time t2 when the target rides the pressure lane line; the third time is the time t3 when the target is located on another lane after changing lanes, and for the moving target from far to near, t3 is the time when the moving target passes through the nearest point of the detection area;
s43: according to t1, t2 and t3, the image before the lane change, the image riding on the lane line and the image after the lane change of the lane-change moving target are saved from the cached image sequence, forming a complete lane-change target image record, wherein for a moving target travelling from far to near, the image after the lane change is the close-range image; for a moving target travelling from near to far, the image before the lane change is the close-range image.
6. The motor vehicle lane change real-time detection and snapshot method of claim 1, wherein the S5 comprises the following sub-steps:
s51: completing the training of the vehicle recognizer offline to obtain the vehicle recognizer, and acquiring 2000 vehicle images with 82 x 82 pixels as training positive samples;
s52: 3000 non-vehicle images of 82 x 82 pixels are collected to be used as training negative samples;
s53: 2916 features of each sample image are extracted by adopting the HOG algorithm, and the vehicle recognizer is trained and obtained on the basis of the SVM in the LIBSVM toolbox;
s54: the HOG algorithm configuration parameters are as follows: the edge detectors are (1,0,-1) and (1,0,-1)T; cells = 8, blocks = 16; the SVM configuration training parameters are as follows: a linear kernel function is selected; the penalty parameter C = 0.1; T: matrix transposition.
7. The method for real-time detection and snapshot of lane change of motor vehicle as claimed in claim 6, wherein said HOG algorithm in S53 comprises the following sub-steps:
s531: an edge detector is selected to calculate the vertical edge Y and the horizontal edge X of each pixel; the edge strength e = sqrt(X^2 + Y^2) is calculated; the edge direction A = arccot(X/Y) of each pixel is calculated,
wherein, T: matrix transposition; sqrt: square root function; arccot: inverse cotangent function;
s532: the image of n × n pixels is partitioned into blocks of cells × cells pixels; at the same time, the image of n × n pixels is marked off into units of blocks × blocks pixels with step length cells, each unit being composed of 4 blocks, namely blocks = 2 × cells; cells = 8, blocks = 16;
s533: taking each unit as an independent whole, the sum S of the edge intensities of its pixels is calculated; the edge intensity of each pixel in the unit is normalized to obtain the normalized edge intensity E = e/S of the pixel;
s534: in one unit, carrying out edge feature statistics according to blocks;
s535: according to S534, each unit obtains 36 features, and the image obtains (n/cells - 1) × (n/cells - 1) × 36 features in total; these features are recorded as the feature vector of the image, i.e. the feature vector of the sample.
8. The method for real-time detection and snapshot of lane change of motor vehicle as claimed in claim 7, wherein said S534 comprises the following sub-steps:
s5341: equally dividing the angle of 0-180 degrees into 9 directions;
s5342: according to the edge direction A of each pixel, adding the normalized edge intensities of all pixels belonging to the same direction to obtain the accumulated normalized edge intensity of the direction;
s5343: these 9 statistics are taken as the characteristics of the block; according to S5342, a unit thus has 4 groups of block characteristics.
9. The motor vehicle lane change real-time detection and snapshot method of claim 1, wherein the S6 comprises the following sub-steps:
s61: at the target position of the close-range image, a region of Wj × Wj pixels is cropped as the vehicle identification image V0; at a scale factor of 1.1, V0 is progressively reduced to 82 × 82 by a bilinear interpolation algorithm, giving a group of images V0, V1, V2, …, Vk;
wj denotes the lane width at each coordinate position j greater than y2 and less than y 1;
y1 represents the image Y-axis coordinate of the farthest point in the detection region;
y2 represents the image Y-axis coordinates of the closest point of the detection region;
s62: the trained vehicle recognizer is applied to the group of images V0, V1, V2, …, Vk obtained in S61 to judge whether the target is a vehicle; if the target is judged to be a vehicle, the lane-change target image record is output to the corresponding illegal database as a vehicle illegal lane-change record; and if the target is judged to be a non-vehicle, the lane-change target record is deleted.
CN202011353164.4A 2020-11-26 2020-11-26 Real-time detection and snapshot method for lane change of motor vehicle Active CN112329724B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011353164.4A CN112329724B (en) 2020-11-26 2020-11-26 Real-time detection and snapshot method for lane change of motor vehicle


Publications (2)

Publication Number Publication Date
CN112329724A CN112329724A (en) 2021-02-05
CN112329724B true CN112329724B (en) 2022-08-05

Family

ID=74309065


Country Status (1)

Country Link
CN (1) CN112329724B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111047908B (en) * 2018-10-12 2021-11-02 富士通株式会社 Detection device and method for cross-line vehicle and video monitoring equipment

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101196980A (en) * 2006-12-25 2008-06-11 四川川大智胜软件股份有限公司 Method for accurately recognizing high speed mobile vehicle mark based on video
CN104537841A (en) * 2014-12-23 2015-04-22 上海博康智能信息技术有限公司 Unlicensed vehicle violation detection method and detection system thereof
CN106878674A (en) * 2017-01-10 2017-06-20 哈尔滨工业大学深圳研究生院 A kind of parking detection method and device based on monitor video
CN109145798A (en) * 2018-08-13 2019-01-04 浙江零跑科技有限公司 A kind of Driving Scene target identification and travelable region segmentation integrated approach
CN109598729A (en) * 2018-11-28 2019-04-09 江苏科技大学 A kind of ship target detection method divided based on SRM and be layered line segment feature
CN109948416A (en) * 2018-12-31 2019-06-28 上海眼控科技股份有限公司 A kind of illegal occupancy bus zone automatic auditing method based on deep learning
CN110136447A (en) * 2019-05-23 2019-08-16 杭州诚道科技股份有限公司 Lane change of driving a vehicle detects and method for distinguishing is known in illegal lane change
CN110276971A (en) * 2019-07-03 2019-09-24 广州小鹏汽车科技有限公司 A kind of auxiliary control method of vehicle drive, system and vehicle
CN110415529A (en) * 2019-09-04 2019-11-05 上海眼控科技股份有限公司 Automatic processing method, device, computer equipment and the storage medium of vehicle violation
WO2020000251A1 (en) * 2018-06-27 2020-01-02 潍坊学院 Method for identifying video involving violation at intersection based on coordinated relay of video cameras
CN111626165A (en) * 2020-05-15 2020-09-04 安徽江淮汽车集团股份有限公司 Pedestrian recognition method and device, electronic equipment and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Lane-Change Detection Based on Vehicle-Trajectory Prediction; Hanwool Woo et al.; IEEE Robotics and Automation Letters; 20170127; full text *
A banknote defacement detection algorithm based on wavelet decomposition; Gai Shan et al.; Journal of Harbin Institute of Technology; 20110330 (No. 03); full text *
HOG-feature vehicle detection algorithm based on convolutional neural network preprocessing; Yang Yingbo et al.; Modern Computer; 20190528; full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant