CN114898325B - Vehicle dangerous lane change detection method and device and electronic equipment - Google Patents


Info

Publication number
CN114898325B
CN114898325B
Authority
CN
China
Prior art keywords
vehicle, lane, frame, image, determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210814610.XA
Other languages
Chinese (zh)
Other versions
CN114898325A (en)
Inventor
邵源
冯思鹤
温浩凯
孙超
刘星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Urban Transport Planning Center Co Ltd
Original Assignee
Shenzhen Urban Transport Planning Center Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Urban Transport Planning Center Co Ltd filed Critical Shenzhen Urban Transport Planning Center Co Ltd
Priority to CN202210814610.XA priority Critical patent/CN114898325B/en
Publication of CN114898325A publication Critical patent/CN114898325A/en
Application granted granted Critical
Publication of CN114898325B publication Critical patent/CN114898325B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G06V 10/36 Applying a local operator, i.e. means to operate on image points situated in the vicinity of a given point; Non-linear local filtering operations, e.g. median filtering
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V 20/40 Scenes; Scene-specific elements in video content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Nonlinear Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a vehicle dangerous lane change detection method and device and electronic equipment. The method comprises the following steps: acquiring a road video shot by a camera; determining the vehicle position in each frame image of the road video by adopting a trained vehicle recognition model; determining the lane line position in each frame image based on an edge detection algorithm and a Hough transform algorithm; determining the lane in which the vehicle is located in each frame image according to the corresponding vehicle position and lane line position; and judging whether the vehicle changes lanes continuously and/or frequently according to the lanes in which the vehicle is located in consecutive frame images. The invention can effectively reduce the cost of vehicle dangerous lane change detection.

Description

Vehicle dangerous lane change detection method and device and electronic equipment
Technical Field
The invention relates to the technical field of vehicle safety monitoring, in particular to a method and a device for detecting dangerous lane change of a vehicle and electronic equipment.
Background
In road traffic, a dangerous lane change is a very dangerous driving behavior that poses great potential safety hazards. Dangerous lane changing of a vehicle includes continuous lane changing and frequent lane changing: continuous lane changing means that the vehicle crosses two or more motor vehicle lanes in a single lane change, and frequent lane changing means that the vehicle changes lanes multiple times within a period of time or over a distance (excluding normal overtaking behavior).
At present, vehicle dangerous lane changes are detected by deploying police officers on important road sections for field inspection, but this approach is time-consuming and labor-intensive and increases labor costs.
Disclosure of Invention
The invention addresses the problem of reducing the cost of vehicle dangerous lane change detection.
In order to solve the above problems, the present invention provides a method and an apparatus for detecting a dangerous lane change of a vehicle, and an electronic device.
In a first aspect, the present invention provides a dangerous lane change detection method for a vehicle, including:
acquiring a road video shot by a camera;
determining the vehicle position in each frame image of the road video by adopting a trained vehicle identification model;
determining the lane line position in the image of each frame based on an edge detection algorithm and a Hough transform algorithm;
determining the lane where the vehicle is located in each frame of the image according to the corresponding vehicle position and the lane line position;
and judging whether the vehicle changes lanes continuously and/or frequently according to the lanes where the vehicle is located in the images of the continuous frames.
Optionally, the determining the lane line position in the image of each frame based on the edge detection algorithm and the hough transform algorithm includes:
determining one frame of image without a moving object in each frame of image of the road video as an initial image;
carrying out edge detection on the initial image by adopting an edge detection algorithm to obtain a contour image;
selecting a region of interest in the contour image, and performing a Boolean operation on the region of interest and the contour image to obtain a processed image, wherein the region of interest comprises a lane line;
and identifying lane lines in the processed image by adopting a Hough transform algorithm, and determining the lane line position in the image of each frame.
Optionally, the determining the position of the lane line in the image of each frame includes:
if the lane line is a straight line, fitting the lane line by adopting a straight line equation, and determining the position of the lane line;
if the lane line is a curve, sequentially extracting N points on the lane line, dividing the extracted N points into a plurality of point combinations, wherein each point combination comprises a plurality of continuous points, respectively fitting the point combinations to obtain curves, fusing all the curves obtained by fitting, and determining the position of the lane line, wherein N is greater than or equal to 3.
Optionally, the determining, as an initial image, one frame of the image without the moving object in each frame of the image of the road video includes:
and determining the images without moving objects in all the images by adopting a background difference method, and setting one frame of the images without moving objects as the initial images.
Optionally, the determining the vehicle position in each frame of image of the road video by using the trained vehicle recognition model includes:
and respectively inputting the images of each frame into the trained vehicle identification model, determining a detection frame comprising the vehicle in the images of each frame, and determining each vertex coordinate of the detection frame, wherein the vehicle position comprises each corresponding vertex coordinate.
After determining the lane line position in each frame of the image, the method further includes:
acquiring the passing time of the vehicle collected by a radar monitor and the detection time from when the vehicle is first shot by the camera to when the vehicle is completely displayed in the shot picture, wherein the radar monitor is installed beside the road at a position corresponding to the shot picture edge of the camera, and the shot picture edge is the picture edge far away from the camera;
comparing the passing time with the detection time, and determining whether vehicle shielding exists according to a comparison result;
if the vehicle is shielded, comparing the vertex coordinates in the images of each frame with the positions of the lane lines respectively, and determining whether all the vertex coordinates in the images of each frame are in the same lane or not according to the comparison result;
and if all the vertex coordinates in at least one frame of the image are not in the same lane, judging, by radar detection equipment, whether the vehicle is driving over the lane line, wherein the radar detection equipment is buried along the lane line.
Optionally, the vehicle includes an engineering vehicle, and before determining the vehicle position in each frame image of the road video by using the trained vehicle recognition model, the method further includes:
acquiring a picture set comprising a plurality of engineering vehicle pictures;
marking the engineering vehicle in each engineering vehicle picture by adopting a minimum bounding rectangular frame to obtain a marked picture set;
training an improved SSD model by adopting the marked picture set to obtain the trained vehicle identification model;
the improved SSD model comprises a CONV4_3 layer, a CONV6 layer, a CONV7 layer, a CONV8_2 layer, a CONV9_2 layer, a CONV10_2 layer, a CONV11_2 layer, a pooling layer and a prediction layer which are connected in sequence, and the convolution kernel size of the CONV11_2 layer is 3×3×256.
Optionally, the determining the lane in which the vehicle is located in each frame of the image according to the corresponding vehicle position and the lane line position includes:
dividing the road into a plurality of lanes according to the positions of the lane lines, and numbering the lanes in sequence;
and respectively comparing the vertex coordinates corresponding to the vehicle head with the positions of the lane lines, and determining the number of the lane where the vehicle is located in each frame of the image according to the comparison result.
Optionally, the determining whether the vehicle changes lanes continuously and/or frequently according to the lane where the vehicle is located in the images of the consecutive frames includes:
determining whether the vehicle changes the lane according to the number of the lane where the vehicle is located in each continuous frame of image;
when the vehicle changes lanes, if the vehicle continuously crosses more than two lanes within a first preset time period, determining that the vehicle continuously changes lanes;
and if the lane changing times of the vehicle in the second preset time and/or the preset distance are larger than or equal to a preset threshold value, determining that the vehicle frequently changes lanes.
Optionally, after determining whether the vehicle changes lanes continuously and/or frequently according to the lane where the vehicle is located in each frame of the image, the method further includes:
when the continuous lane change and/or the frequent lane change of the vehicle are detected, vehicle information of the vehicle is identified, and the vehicle information and the road video are sent to a traffic police platform; and sending a reminding signal to the vehicle-mounted equipment for playing through the vehicle-road cooperative system, wherein the reminding signal is used for reminding a driver of standardizing the driving behavior.
Optionally, after determining whether the vehicle changes lanes continuously and/or frequently according to the lane where the vehicle is located in each frame of the image, the method further includes:
and when continuous lane changing and/or frequent lane changing of the vehicle is detected, acquiring the driver's vital sign information collected by a sensor arranged in the vehicle cab, and judging whether the driver is driving dangerously according to the vital sign information.
In a second aspect, the present invention provides a dangerous lane-change detection device for a vehicle, comprising:
the acquisition module is used for acquiring a road video shot by the camera;
the recognition module is used for determining the vehicle position in each frame of image of the road video by adopting a trained vehicle recognition model; determining the lane line position in the image of each frame based on an edge detection algorithm and a Hough transform algorithm; determining the lane where the vehicle is located in each frame of the image according to the corresponding vehicle position and the lane line position;
and the detection module is used for judging whether the vehicle changes lanes continuously and/or frequently according to the lanes where the vehicle is located in the images of the continuous frames.
In a third aspect, the invention provides an electronic device comprising a memory and a processor;
the memory for storing a computer program;
the processor is configured to implement the dangerous lane change detection method for a vehicle according to any one of the first aspect when executing the computer program.
The vehicle dangerous lane change detection method and device and the electronic equipment have the following beneficial effects: a road video shot by a road side camera is obtained, and the vehicle position in each frame image of the road video is determined by adopting a vehicle recognition model trained in advance on collected vehicle pictures. The lane line position in each frame image is determined by adopting an edge detection algorithm and a Hough transform algorithm, and the road is divided into a plurality of lanes by the lane line positions. The lane in which the vehicle is located in each frame image is determined according to the position relation between the vehicle position and the lane line position, and whether the vehicle changes lanes continuously and/or frequently can be judged according to whether and how often the lane in which the vehicle is located changes across consecutive frame images, that is, whether the vehicle exhibits dangerous lane change driving behavior. In this technical scheme, the road video can be shot directly by the road side camera, and whether the vehicle makes a dangerous lane change is determined through image processing and related algorithms; compared with the prior art, no additional police force needs to be deployed, which reduces the labor cost of dangerous lane change detection.
Drawings
FIG. 1 is a schematic flow chart of a dangerous lane-change detection method for a vehicle according to an embodiment of the present invention;
FIG. 2 is a block diagram of an improved SSD model according to an embodiment of the present invention;
FIG. 3 is a schematic plan view of a road scene according to an embodiment of the present invention;
FIG. 4 is a frame of image of a road video captured by a camera according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a dangerous lane-changing detection apparatus for a vehicle according to another embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below. While certain embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present invention. It should be understood that the drawings and the embodiments of the present invention are illustrative only and are not intended to limit the scope of the present invention.
It should be understood that the various steps recited in the method embodiments of the present invention may be performed in a different order and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the invention is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments"; the term "optionally" means "alternative embodiments". Relevant definitions for other terms will be given in the following description. It should be noted that the terms "first", "second", and the like in the present invention are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an" and "the" in the present invention are intended to be illustrative rather than limiting, and those skilled in the art will understand them as meaning "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present invention are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
As shown in fig. 1, a method for detecting a dangerous lane change of a vehicle according to an embodiment of the present invention includes:
and step S100, acquiring a road video shot by a camera.
Specifically, accessing camera video stream information: an Ethernet cable can be used to connect to a camera mounted on the roadside, and real-time video stream information collected by the camera is received via a Real-Time Streaming Protocol (RTSP) video stream address.
Video decoding: a way to convert a file in an original video format into a file in another video format by compression techniques. The most important coding and decoding standards in video streaming transmission are H.261, H.263 and H.264 of the International telecommunication Union.
Video preprocessing: and the single-frame picture in the road video is subjected to color space conversion and image filtering denoising processing, so that the subsequent deep processing operation on the image is facilitated.
Step S200, determining the vehicle position in each frame image of the road video by adopting a trained vehicle identification model;
step S300, determining the lane line position in the image of each frame based on an edge detection algorithm and a Hough transform algorithm;
step S400, determining the lane where the vehicle is located in each frame of the image according to the corresponding vehicle position and the lane line position;
and step S500, judging whether the vehicle changes lanes continuously and/or frequently according to the lanes where the vehicle is located in the images of each continuous frame.
In this embodiment, a road video shot by the road side camera is obtained, and the vehicle position in each frame image of the road video is determined by adopting a trained vehicle recognition model, which can be trained in advance on collected vehicle pictures. The lane line position in each frame image is determined by adopting an edge detection algorithm and a Hough transform algorithm, and the road is divided into a plurality of lanes by the lane line positions. The lane in which the vehicle is located in each frame image is determined according to the position relation between the vehicle position and the lane line position, and whether the vehicle changes lanes continuously and/or frequently can be judged according to whether and how often the lane in which the vehicle is located changes across consecutive frame images, that is, whether the vehicle exhibits dangerous lane change driving behavior. In this technical scheme, the road video can be shot directly by the road side camera, and whether the vehicle makes a dangerous lane change is determined through image processing and related algorithms; compared with the prior art, no additional police force needs to be deployed, which reduces the labor cost of dangerous lane change detection.
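For illustration only, the judgment of step S500 may be sketched as follows. This is a non-limiting example of the two rules (continuous vs. frequent lane change); the frame rate, time windows and threshold are illustrative values chosen by us, not values specified by the invention.

```python
def detect_dangerous_lane_change(lane_per_frame, fps=25.0,
                                 continuous_window_s=3.0,
                                 frequent_window_s=30.0,
                                 frequent_threshold=3):
    """lane_per_frame: lane number of the vehicle in each consecutive frame.

    Returns (continuous, frequent): continuous lane change if the vehicle
    crosses two or more lanes within the first preset time period; frequent
    lane change if the number of changes within the second preset time
    reaches the preset threshold."""
    # Frame indices at which the lane number changes.
    change_frames = [i for i in range(1, len(lane_per_frame))
                     if lane_per_frame[i] != lane_per_frame[i - 1]]

    continuous = False
    win = int(continuous_window_s * fps)
    for i in change_frames:
        lanes_seen = lane_per_frame[max(0, i - 1):i + win]
        if max(lanes_seen) - min(lanes_seen) >= 2:  # crossed >= 2 lanes
            continuous = True

    frequent = False
    win = int(frequent_window_s * fps)
    for i in change_frames:
        if sum(1 for k in change_frames if i <= k < i + win) >= frequent_threshold:
            frequent = True
    return continuous, frequent
```

For example, a vehicle whose lane sequence moves 1 → 2 → 3 within a short window is flagged as a continuous lane change, while repeated 1 ↔ 2 switching is flagged as frequent.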
Optionally, the determining the lane line position in the image of each frame based on the edge detection algorithm and the hough transform algorithm includes:
and determining one frame of image without a moving object in each frame of image of the road video as an initial image.
Specifically, when there is no difference between the two previous and next frames of images, it can be determined that there is no moving object in the two frames of images, and the previous frame of image can be taken as the initial image.
And carrying out edge detection on the initial image by adopting an edge detection algorithm to obtain a contour image.
Specifically, a Canny edge detection algorithm can be adopted to carry out edge detection on the initial image: first, Gaussian blur can be used to remove noise points in the initial image; then, gray level conversion is carried out on the processed image; the gradient magnitude and gradient direction of each point in the converted image are calculated by using the Sobel operator; non-maximum suppression (only local maxima are retained) is used to eliminate spurious responses caused by edge detection; double thresholds are applied to determine true and potential edges in the image; and final edge detection is completed by suppressing weak edges.
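For illustration only, a stripped-down sketch of the gradient and double-threshold stages of the Canny pipeline is given below. Full non-maximum suppression and hysteresis edge tracking are omitted for brevity; in practice cv2.GaussianBlur followed by cv2.Canny performs all of the listed steps. The threshold values here are illustrative.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def conv2(img, kernel):
    """2D correlation with edge padding (sufficient for Sobel filtering)."""
    pad = kernel.shape[0] // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(kernel.shape[0]):
        for dx in range(kernel.shape[1]):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

def gradient_edges(gray, low=50, high=100):
    """Sobel gradient magnitude with double thresholding."""
    gx = conv2(gray.astype(np.float64), SOBEL_X)
    gy = conv2(gray.astype(np.float64), SOBEL_Y)
    mag = np.hypot(gx, gy)
    strong = mag >= high            # definite edge pixels
    weak = (mag >= low) & ~strong   # potential edges; full Canny keeps these
                                    # only when connected to strong ones
    return strong, weak

# Synthetic frame: dark left half, bright right half -> vertical edge at x = 32.
img = np.zeros((64, 64), dtype=np.uint8)
img[:, 32:] = 200
strong, weak = gradient_edges(img)
```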
A region of interest is selected from the contour image, and a Boolean operation is performed on the region of interest and the contour image to obtain a processed image, wherein the region of interest comprises a lane line.
Specifically, for different lane lines in the contour image, different arrays may be used to select ROIs (regions of interest), where each region of interest includes one lane line. A Boolean operation is performed on each region of interest and the contour image, filtering out pixel points that are not lane lines and retaining the pixel points within the lane line range of the contour image.
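For illustration only, the Boolean operation between a region of interest and the contour image amounts to a bitwise AND with a binary mask, sketched below in plain NumPy (cv2.fillPoly plus cv2.bitwise_and would be the usual OpenCV route). The polygon coordinates are invented for the example.

```python
import numpy as np

def apply_roi(contour_img, polygon):
    """Keep contour pixels inside the polygonal region of interest.

    polygon: list of (x, y) vertices; uses a simple ray-casting point test."""
    h, w = contour_img.shape
    mask = np.zeros((h, w), dtype=bool)
    n = len(polygon)
    for y in range(h):
        for x in range(w):
            inside = False
            for i in range(n):
                x1, y1 = polygon[i]
                x2, y2 = polygon[(i + 1) % n]
                if (y1 > y) != (y2 > y):  # edge straddles this scan line
                    xcross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                    if x < xcross:
                        inside = not inside
            if inside:
                mask[y, x] = True
    # Boolean operation: zero out everything outside the region of interest.
    return np.where(mask, contour_img, 0)

contours = np.full((20, 20), 255, dtype=np.uint8)  # pretend everything is "edge"
roi = [(5, 5), (15, 5), (15, 15), (5, 15)]         # square ROI around one lane line
kept = apply_roi(contours, roi)
```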
And identifying the lane lines in the processed image by adopting a Hough transform algorithm, and determining the positions of the lane lines.
Specifically, a hough transform algorithm may be adopted to fit a lane line in the processed image, and determine the lane line position.
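For illustration only, a toy Hough accumulator for straight lines is sketched below to make the voting idea concrete; in practice cv2.HoughLinesP is applied to the processed binary image. The bin resolutions here are arbitrary choices of ours.

```python
import numpy as np

def hough_peak(edge_points, max_rho):
    """Vote in (rho, theta) space and return the strongest straight line.

    edge_points: iterable of (x, y) edge-pixel coordinates.
    Returns (rho, theta in degrees) of the accumulator peak."""
    theta_bins = np.arange(0, 180)                # 1-degree resolution
    thetas = np.deg2rad(theta_bins)
    cols = np.arange(len(thetas))
    acc = np.zeros((2 * max_rho, len(thetas)), dtype=np.int32)
    for x, y in edge_points:
        # Normal-form line equation: rho = x*cos(theta) + y*sin(theta).
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + max_rho, cols] += 1            # one vote per theta bin
    r, t = np.unravel_index(np.argmax(acc), acc.shape)
    return int(r) - max_rho, int(theta_bins[t])

# Horizontal "lane line" y = 10: every point votes for rho = 10 at theta = 90.
points = [(x, 10) for x in range(50)]
rho, theta = hough_peak(points, max_rho=100)
```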
In the optional embodiment, the lane lines in each frame of image are drawn through an edge detection algorithm and a Hough transform algorithm, and then the road is divided into a plurality of lanes according to the lane lines.
Optionally, the lane line position in each frame image is re-determined, based on the edge detection algorithm and the Hough transform algorithm, at every calibration interval or when weather and illumination conditions are good, so as to periodically correct the lane line position.
Optionally, the determining, as an initial image, one frame of the image without the moving object in each frame of the image of the road video includes:
and determining the images without moving objects in all the images by adopting a background difference method, and setting one frame of the images without moving objects as the initial images.
Specifically, every two adjacent frames of images are compared by adopting a background difference method, and if there is no difference between the two adjacent frames of images, it can be determined that the two frames of images contain no moving object.
In this optional embodiment, an image without a moving target is determined as the initial image for determining the lane line position, which avoids the influence on calculation accuracy of a moving target blocking the lane line. Since the lane line position in each frame of the shot image is usually fixed, the difference in lane line position across frames is usually small; therefore the lane line positions in all the images can be known by determining the lane line position in one frame of image, which improves image processing efficiency.
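For illustration only, the background-difference selection of a frame without moving objects may be sketched as follows; the difference threshold and changed-pixel fraction are illustrative values of our own choosing.

```python
import numpy as np

def has_moving_object(prev_frame, curr_frame, diff_thresh=25, pixel_frac=0.001):
    """Frames differ where |curr - prev| exceeds diff_thresh; a moving object
    is declared when more than pixel_frac of the pixels changed."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.count_nonzero(diff > diff_thresh)
    return changed > pixel_frac * prev_frame.size

def pick_initial_image(frames):
    """Return the first frame whose successor shows no motion, else None."""
    for prev, curr in zip(frames, frames[1:]):
        if not has_moving_object(prev, curr):
            return prev
    return None

static = np.full((40, 40), 120, dtype=np.uint8)  # empty road background
moving = static.copy()
moving[5:15, 5:15] = 250                         # a bright "vehicle" appears
```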
Optionally, the determining the lane line position in each frame of the image includes:
if the lane line is a straight line, fitting the lane line by adopting a straight line equation, and determining the position of the lane line;
if the lane line is a curve, sequentially extracting N points on the lane line, dividing the extracted N points into a plurality of point combinations, wherein each point combination comprises a plurality of continuous points, respectively fitting the point combinations to obtain curves, fusing all the curves obtained by fitting, and determining the position of the lane line, wherein N is greater than or equal to 3.
Specifically, the method for detecting whether the lane line is a straight line or a curved line is the prior art, and is not described herein again. When the lane line is a straight line, at least two points on the lane line can be extracted, and a straight line equation is adopted to fit the extracted points, so that a function expression for describing the position of the lane line can be obtained. When the lane line is a curve, N points are sequentially selected on the lane line, and the extracted N points are divided into a plurality of point combinations, each point combination comprises continuous points, for example, the 1 st point to the N-1 th point are used as one point combination, a function expression of a first curve can be obtained through fitting, the 2 nd point to the N th point are used as another point combination, a function expression of a second curve can be obtained through fitting, and the first curve and the second curve are fused to obtain a function expression for describing the position of the lane line. The N points may also be divided into other point combinations, for example, the 1 st point to the N-2 nd point are first point combinations, the 2 nd point to the N-1 st point are second point combinations, the 3 rd point to the N th point are third point combinations, and so on, which is not described herein again.
In this optional embodiment, when distortion exists on the video image, the fitting accuracy of the lane line may be affected, so that the plurality of points extracted on the lane line are divided into a plurality of point combinations, curves are obtained by respectively fitting the point combinations, and the curves can be checked and compared with each other, thereby improving the accuracy of the lane line finally obtained by fusion.
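For illustration only, the overlapping point-combination fitting can be sketched with np.polyfit. The window layout (points 1..N-1 and 2..N) follows the example in the text, while the fusion step here is a simple average of polynomial coefficients, which is our own choice rather than a method specified by the invention.

```python
import numpy as np

def fit_curved_lane(points, degree=2):
    """points: N >= 3 (x, y) samples taken in order along the lane line.

    Fits two overlapping point combinations (1..N-1 and 2..N) and fuses
    the resulting curves by averaging their polynomial coefficients."""
    pts = np.asarray(points, dtype=np.float64)
    first, second = pts[:-1], pts[1:]            # points 1..N-1 and 2..N
    c1 = np.polyfit(first[:, 0], first[:, 1], degree)
    c2 = np.polyfit(second[:, 0], second[:, 1], degree)
    return (c1 + c2) / 2.0                       # fused curve coefficients

# Samples on y = 0.01 x^2 + 2 (a gently curving lane line).
xs = np.arange(0, 10, dtype=np.float64)
samples = [(x, 0.01 * x * x + 2.0) for x in xs]
coeffs = fit_curved_lane(samples)                # highest-degree term first
```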
Optionally, the determining the vehicle position in each frame of image of the road video by using the trained vehicle recognition model includes:
and respectively inputting the images of each frame into the trained vehicle identification model, determining a detection frame comprising the vehicle in the images of each frame, and determining each vertex coordinate of the detection frame, wherein the vehicle position comprises each corresponding vertex coordinate.
Specifically, the detection frame is a minimum bounding rectangular frame of the vehicle.
After determining the lane line position in each frame of the image, the method further includes:
the method comprises the steps of acquiring the passing time of the vehicle collected by the radar monitor and the detection time of the vehicle completely displayed in a shot picture shot by the camera, wherein the radar monitor is installed beside a road and located at the position corresponding to the shot picture edge of the camera, and the shot picture edge is far away from the picture edge of the camera (namely the upper edge in the picture 4).
And comparing the passing time with the detection time, and determining whether the vehicle is shielded according to a comparison result.
Specifically, for front and rear vehicles in the same lane, when the front vehicle is large or the distance between the front and rear vehicles is short, the front vehicle is likely to occlude the rear vehicle. Whether vehicle occlusion exists can be judged from the passing time of the vehicles passing the radar monitor and the detection time from when the camera first shoots a vehicle to when the vehicle is completely displayed in the shot picture. For example, if the radar monitor detects two passing periods while the camera detects only one detection period, vehicle occlusion may exist; or, if the starting time of the passing time is close to the starting time of the camera's detection time but the ending time of the passing time is earlier than the ending time of the detection time, and another vehicle passing is detected soon afterwards, vehicle occlusion may also exist. It should be noted that, due to the shooting angle of the camera, the time when the vehicle is detected by the camera differs from the time when it is monitored by the radar monitor, so the times need to be corrected in advance.
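For illustration only, the comparison between radar passing time and camera detection time can be sketched as simple interval logic. This is a simplified, non-limiting rendering of the two example cases above; the tolerance value is our assumption, and the intervals are assumed to be already corrected to a common clock.

```python
def occlusion_suspected(radar_intervals, camera_intervals, tol=0.2):
    """radar_intervals / camera_intervals: lists of (start, end) times in
    seconds on a common, pre-corrected clock.

    Occlusion is suspected when the radar sees more passing vehicles than
    the camera, or when a radar passing ends noticeably earlier than the
    matching camera detection (suggesting a second, hidden vehicle)."""
    if len(radar_intervals) > len(camera_intervals):
        return True  # e.g. two radar passes but only one camera detection
    for (rs, re), (cs, ce) in zip(radar_intervals, camera_intervals):
        if abs(rs - cs) <= tol and re < ce - tol:
            # Same start, but the radar passing finished well before the
            # camera detection did.
            return True
    return False
```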
And if the vehicle is shielded, comparing the vertex coordinates in the images of the frames with the positions of the lane lines respectively, and determining whether all the vertex coordinates in the images of the frames are in the same lane or not according to the comparison result.
Specifically, for any frame of image, four vertex coordinates of a detection frame in the image can be respectively compared with lane line positions, and if the four vertex coordinates are in the same lane, it indicates that two vehicles with occlusion normally run; if at least one of the four vertex coordinates is not in the same lane, it cannot be determined whether the vehicles normally run in the lane, and further determination is needed.
And if all the vertex coordinates in at least one frame of the image are not in the same lane, judging, by radar detection equipment buried along the lane line, whether the vehicle drives over the lane line.
Specifically, when at least one of the four vertex coordinates is not in the same lane, whether a vehicle runs out of its lane is further judged by the radar detection equipment buried along the lane line. Whether the front vehicle crosses the line can be judged from the vertex coordinates corresponding to its head; if the front vehicle is judged not to cross the line but the radar detection equipment detects a vehicle driving over the line, it can be determined that the rear (occluded) vehicle drives over the line and may be changing lanes. The radar detection equipment is woken up for further detection only when all the vertex coordinates in at least one frame are not in the same lane, and can sleep at other times, reducing its power consumption and saving energy. An infrared scanning wall along the lane line may be formed by a plurality of radar detection devices.
In this optional embodiment, comparing the vehicle passing time collected by the radar monitor with the detection time of the camera effectively judges whether the vehicle is occluded. When occlusion is detected, the radar detection equipment further judges whether a vehicle drives over the lane line, so that irregular driving of an occluded vehicle can still be detected; this is closer to actual road conditions and improves applicability.
Optionally, as shown in fig. 4, the vehicle includes an engineering vehicle, and before determining the vehicle position in each frame image of the road video by using the trained vehicle recognition model, the method further includes:
and acquiring a picture set comprising a plurality of engineering vehicle pictures.
Specifically, engineering vehicle pictures of multiple views, multiple models and multiple scenes can be acquired through network collection or field shooting to form a picture set. Engineering vehicles include, but are not limited to, mixer trucks, cranes, and the like.
And marking the engineering vehicle in each engineering vehicle picture with its minimum bounding rectangle to obtain a marked picture set.
Specifically, LabelImg calibration software can be used to calibrate the engineering vehicles in all the engineering vehicle pictures, marking each with a minimum bounding rectangle; every successfully calibrated picture generates a corresponding xml file recording parameters such as the vertex coordinates of the rectangle. One part of the marked pictures in the marked picture set is used as a training set and the other part as a test set.
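LabelImg writes PASCAL-VOC-style xml; a minimal reader for the vertex coordinates it records might look like the following (the class name and coordinates in the sample are made up for illustration):

```python
import xml.etree.ElementTree as ET

def read_boxes(xml_text):
    """Parse a LabelImg (PASCAL VOC style) annotation string and return
    a list of (class_name, xmin, ymin, xmax, ymax) bounding boxes."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        bb = obj.find("bndbox")
        boxes.append((name,
                      int(bb.findtext("xmin")), int(bb.findtext("ymin")),
                      int(bb.findtext("xmax")), int(bb.findtext("ymax"))))
    return boxes

# Hypothetical annotation fragment, shaped like LabelImg output.
sample = """<annotation><object><name>mixer_truck</name>
<bndbox><xmin>10</xmin><ymin>20</ymin><xmax>110</xmax><ymax>90</ymax></bndbox>
</object></annotation>"""
print(read_boxes(sample))
```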
It should be noted that, when calibrating the engineering vehicle in each picture, the minimum bounding rectangle should fit the vehicle body as closely as possible. This improves the precision of the trained vehicle recognition model, so that when the model determines the vehicle position in each frame image of the road video, the detection frame representing the vehicle position fits the vehicle body closely and an oversized detection frame does not degrade the judgment of the lane where the vehicle is located.
Training an improved SSD model by adopting the marked picture set to obtain the trained vehicle identification model;
as shown in fig. 2, the improved SSD model includes a CONV4_3 layer, a CONV6 layer, a CONV7 layer, a CONV8_2 layer, a CONV9_2 layer, a CONV10_2 layer, a CONV11_2 layer, a pooling layer, and a prediction layer, which are connected in sequence, and a convolution kernel size of the CONV11_2 layer is 3 × 3 × 256.
Specifically, the improved SSD model takes VGG16 as its basic architecture: besides the CONV4_3 and CONV7 feature layers, the network keeps the CONV8_2, CONV9_2 and CONV10_2 layers and a pooling layer, and adds a 3 × 3 × 256 CONV11_2 convolutional layer after the original CONV10_2 layer of the SSD model. This increases the model's accuracy on small targets and improves the detection of distant engineering vehicles (which occupy a small pixel area in the video stream). Compared with other models, the improved SSD model achieves higher detection accuracy on the input pictures.
And inputting the marked picture set into the improved SSD model, comparing output with expected output, calculating error by using a cost function, iteratively updating the model weight, reducing the training loss rate of the model to the minimum value, and finally determining the weight of the model.
The training process comprises the following steps: randomly determining an initialization weight of the improved SSD model; sending the first group of input values to an SSD model to be trained, and obtaining output values through forward propagation; comparing the output value with an expected output value and calculating an error using a cost function; performing back propagation according to the error, and adjusting the weight of the model; and repeating the steps for each input value in the training set until the error is less than or equal to a preset threshold value to obtain a trained vehicle identification model, wherein the trained vehicle identification model is a trained SSD model.
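The training loop just described — random initialization, forward pass, cost computation, back-propagation, repeat until the error falls below a threshold — can be illustrated on a toy one-weight model. The hyperparameters and data below are illustrative; the actual model in the patent is the improved SSD:

```python
import random

def train(data, lr=0.05, threshold=1e-4, max_epochs=10000):
    """Toy version of the described loop: random init, forward pass,
    squared-error cost, gradient step, repeat until the error is small."""
    random.seed(0)
    w = random.uniform(-1.0, 1.0)           # randomly determined initial weight
    for _ in range(max_epochs):
        err = 0.0
        for x, target in data:
            y = w * x                        # forward propagation
            err += (y - target) ** 2         # cost function (squared error)
            w -= lr * 2 * (y - target) * x   # back propagation: adjust weight
        if err <= threshold:                 # stop once error <= preset threshold
            break
    return w

# The toy data is generated by y = 2x, so training should recover w ≈ 2.
w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
```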
Optionally, the determining the lane where the vehicle is located in each frame of the image according to the corresponding vehicle position and the lane line position includes:
and dividing the road into a plurality of lanes according to the positions of the lane lines, and numbering the lanes in sequence.
In fig. 3, the lane between road boundary 2 and lane line 2 is referred to as lane 1, the lane between lane line 1 and lane line 2 as lane 2, and the lane between road boundary 1 and lane line 1 as lane 3.
And respectively comparing the vertex coordinates corresponding to the vehicle head with the positions of the lane lines, and determining the number of the lane where the vehicle is located in each frame of the image according to the comparison result.
Specifically, as shown in fig. 3, assume the functional expression of lane line 1 is y1 = A·x + B, of lane line 2 is y2 = C·x + D, of road boundary 1 is y3 = E·x + F, and of road boundary 2 is y4 = G·x + H. The coordinates of the four vertices of the rectangular frame enclosing the engineering vehicle in each frame image are (top-left, top-right, bottom-left, bottom-right): (a, b), (a+c, b), (a, b+d), (a+c, b+d).
For the minimum bounding rectangle of the vehicle, the two vertices corresponding to the vehicle head, (a, b+d) and (a+c, b+d), are always within the lane, whereas the other two vertices may lie inside or outside it. The two head vertices are therefore compared with the lane line positions to determine the lane in which the vehicle is located.
Because the road video is a sequence of frames, the detected vehicle position changes from frame to frame and depends on the frame rate of the video and the vehicle speed. To reduce detection error, a floating threshold Y can be added to the lane line position; it can be set according to the actual situation, and Y may be positive or negative.
A road lane line is generally a straight line; curved sections consist of circular curves and transition curves. Calibration of curved lane lines and road boundary lines is therefore realized by sampling points on the curve from the video (allowing for discontinuous, dashed lane lines): for example, the curve fitted through points 1 to N−1 and the curve fitted through points 2 to N are fused to obtain the lane line curve. As with straight lines, when the vehicle is in a lane delimited by curved lane lines the head vertices remain within the lane, so the following description takes straight lane lines as the example.
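The fusion of overlapping point groups can be sketched as fitting each group and averaging the resulting coefficients. The sketch below uses straight-line least squares for brevity, whereas the patent's curved sections would use circular/transition-curve fits; function names are assumptions:

```python
def fit_line(points):
    """Ordinary least-squares fit y = m*x + c over (x, y) points."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return m, (sy - m * sx) / n

def fused_fit(points, window):
    """Fit overlapping groups (points 1..w, 2..w+1, ...) and average the
    coefficients, as the text suggests for dashed/curved lane lines."""
    fits = [fit_line(points[i:i + window])
            for i in range(len(points) - window + 1)]
    m = sum(f[0] for f in fits) / len(fits)
    c = sum(f[1] for f in fits) / len(fits)
    return m, c
```

On points sampled from one line, every window recovers the same line, so the fused result equals the single fit; on noisy samples the averaging smooths the estimate.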
The conditions that the current engineering vehicle should satisfy in different lanes are as follows:
(1) When the engineering vehicle is in lane 1: y4 ≤ G(a+Y)+H and y2 ≥ C(a+Y)+D and y4 ≤ G(a+c+Y)+H and y2 ≥ C(a+c+Y)+D, i.e. at least the vertices (a, b+d) and (a+c, b+d) are in lane 1.
(2) When the engineering vehicle is in lane 2: y2 ≤ C(a+Y)+D and y1 ≥ A(a+Y)+B and y2 ≤ C(a+c+Y)+D and y1 ≥ A(a+c+Y)+B, i.e. at least the vertices (a, b+d) and (a+c, b+d) are in lane 2.
(3) When the engineering vehicle is in lane 3: y1 ≤ A(a+Y)+B and y3 ≥ E(a+Y)+F and y1 ≤ A(a+c+Y)+B and y3 ≥ E(a+c+Y)+F, i.e. at least the vertices (a, b+d) and (a+c, b+d) are in lane 3.
(4) When the engineering vehicle occupies lanes 1 and 2 simultaneously: y4 ≤ G(a+Y)+H and y2 ≥ C(a+Y)+D and y2 ≤ C(a+c+Y)+D and y1 ≥ A(a+c+Y)+B, i.e. at least the vertex (a, b+d) is in lane 1 and the vertex (a+c, b+d) is in lane 2.
(5) When the engineering vehicle occupies lanes 2 and 3 simultaneously: y2 ≤ C(a+Y)+D and y1 ≥ A(a+Y)+B and y1 ≤ A(a+c+Y)+B and y3 ≥ E(a+c+Y)+F, i.e. at least the vertex (a, b+d) is in lane 2 and the vertex (a+c, b+d) is in lane 3.
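A hedged sketch of the vertex-versus-boundary test: lane boundaries are modeled as lines y = A·x + B ordered across the road, and the floating threshold Y appears as `margin`. The function names and the boundary-ordering convention are illustrative assumptions:

```python
def line_y(coef, x):
    """Evaluate a boundary line y = A*x + B at abscissa x."""
    A, B = coef
    return A * x + B

def lane_of(vertex, boundaries, margin=0.0):
    """boundaries: line coefficients (A, B) ordered from one road edge to
    the other; returns the 1-based index of the lane containing the vertex,
    or None. `margin` plays the role of the floating threshold Y."""
    px, py = vertex
    for k in range(len(boundaries) - 1):
        lo = line_y(boundaries[k], px + margin)
        hi = line_y(boundaries[k + 1], px + margin)
        if min(lo, hi) <= py <= max(lo, hi):
            return k + 1
    return None

def lanes_occupied(head_left, head_right, boundaries, margin=0.0):
    """Lane(s) covered by the two head vertices of the detection frame:
    one lane means normal driving, two means the vehicle straddles a line."""
    return {lane_of(head_left, boundaries, margin),
            lane_of(head_right, boundaries, margin)}
```

With four boundaries (two road edges, two lane lines) this reproduces cases (1)–(5) above: a single-element result corresponds to cases (1)–(3), a two-element result to cases (4)–(5).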
In this optional embodiment, comparing the vertex coordinates of the detection frame with the lane line positions determines the lane where the vehicle is located quickly, and the algorithm is simple to implement. Adding a floating threshold to the vertex coordinates avoids lane misjudgment caused by detection errors.
After the lane where the vehicle is located is monitored through the video images, whether the vehicle drives over the lane line can be further determined through the radar detection equipment buried along the lane line.
In fig. 3, x is the abscissa and y the ordinate of the two-dimensional coordinate system of the detection target scene in the video picture. A coordinate transformation between the world coordinate system and the pixel coordinate system is established; in the standard pinhole camera model it takes the form:

Z_c · [u, v, 1]^T = K · [R | t] · [X_w, Y_w, Z_w, 1]^T

wherein (u, v) is the pixel coordinate system, (X_w, Y_w, Z_w) is the world coordinate system, (X_c, Y_c, Z_c) are the camera coordinates, the intrinsic matrix K of the camera contains the focal length f, and [R | t] is the extrinsic matrix of the camera.
The intrinsic and extrinsic parameters of the camera can be obtained by Zhang's calibration method. Since the method considers the positional relation of the vehicle relative to the lane line rather than specific distances, standard road lengths such as the dashed lane center line segments can be used as conversion parameters between the world coordinate system and the pixel coordinate system.
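The world-to-pixel transformation can be sketched with the standard pinhole model. The numeric intrinsics in the usage below are placeholders, since the real K, R and t would come from Zhang's calibration:

```python
def project(world_pt, K, R, t):
    """Pinhole model: Z_c * [u, v, 1]^T = K (R * X_w + t); returns (u, v).
    K is the 3x3 intrinsic matrix, R the 3x3 rotation, t the translation."""
    # World -> camera coordinates: X_c = R * X_w + t
    Xc = [sum(R[i][j] * world_pt[j] for j in range(3)) + t[i]
          for i in range(3)]
    # Camera -> pixel coordinates (homogeneous divide by the depth Z_c).
    u = (K[0][0] * Xc[0] + K[0][2] * Xc[2]) / Xc[2]
    v = (K[1][1] * Xc[1] + K[1][2] * Xc[2]) / Xc[2]
    return u, v

# Placeholder calibration: focal length 800 px, principal point (320, 240),
# camera frame aligned with the world frame.
K = [[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 0.0]
```

A point on the optical axis, e.g. (0, 0, 4), projects to the principal point (320, 240), which is a quick sanity check on any calibration.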
Optionally, the determining whether the vehicle changes lanes continuously and/or frequently according to the lane where the vehicle is located in the images of the consecutive frames includes:
determining whether the vehicle changes lanes according to the number of lanes in which the vehicle is located in the images of the continuous frames;
when the vehicle changes lanes, if the vehicle continuously crosses more than two lanes within a first preset time period, the vehicle is determined to continuously change lanes.
Specifically, a first number of a lane where the vehicle is located before lane changing, a second number of the lane where the vehicle is located after lane changing and lane changing duration are determined;
and judging whether the lane changing time length is less than a first preset time length or not, determining whether the vehicle crosses more than two lanes or not according to the first number and the second number, and if so, determining that the vehicle continuously changes lanes.
In the figure, if the engineering vehicle passes from lane 1 to lane 3, or from lane 3 to lane 1, within the first preset time period, it is considered to change lanes continuously. A counter Z may be provided; when the engineering vehicle is in lane 1, 2 or 3, Z is set to 1, 2 or 3 respectively.
And when Z = 1 or 3, the current time t1 is updated in real time, and a lane-changing duration t2 is set for continuous lane changing (an engineering vehicle that fails to straighten its direction in time after changing lanes); t2 is less than the first preset duration, and the first preset duration can be adjusted to a smaller value according to the actual situation. When Z = 1, if at t1 the engineering vehicle satisfies y4 ≤ G(a+Y)+H, y2 ≥ C(a+Y)+D, y2 ≤ C(a+c+Y)+D and y1 ≥ A(a+c+Y)+B, and at t1+t2 satisfies y2 ≤ C(a+Y)+D, y1 ≥ A(a+Y)+B, y1 ≤ A(a+c+Y)+B and y3 ≥ E(a+c+Y)+F; or when Z = 3, if at t1 it satisfies y2 ≤ C(a+Y)+D, y1 ≥ A(a+Y)+B, y1 ≤ A(a+c+Y)+B and y3 ≥ E(a+c+Y)+F, and at t1+t2 satisfies y4 ≤ G(a+Y)+H, y2 ≥ C(a+Y)+D, y2 ≤ C(a+c+Y)+D and y1 ≥ A(a+c+Y)+B, it is determined that the engineering vehicle has continuous lane-changing behavior, and the traffic enforcement camera can be linked to capture evidence.
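Stripped of the boundary inequalities, the counter-Z logic reduces to: from a sample in lane 1 or lane 3, look for a sample in the opposite outer lane within the lane-changing duration. A minimal sketch (the sample format and parameter name are assumptions):

```python
def continuous_change(track, t_max):
    """track: chronological (timestamp, lane_number) samples with lanes
    numbered 1..3. Returns True if the vehicle goes from lane 1 to lane 3
    (or back) within t_max seconds, i.e. crosses more than two lanes."""
    for i, (t1, z1) in enumerate(track):
        if z1 not in (1, 3):          # only start timing from an outer lane
            continue
        for t2, z2 in track[i + 1:]:
            if t2 - t1 > t_max:       # outside the first preset duration
                break
            if abs(z2 - z1) >= 2:     # reached the opposite outer lane
                return True
    return False
```

For example, a track visiting lanes 1 → 2 → 3 over two seconds trips the detector, while the same sequence spread over ten seconds does not.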
And if the lane changing times of the vehicle in the second preset time and/or the preset distance are larger than or equal to a preset threshold value, determining that the vehicle frequently changes lanes.
Specifically, the whole road can be detected in real time by traffic enforcement cameras and camera front-end facilities distributed at intervals of 200 m. Changing lanes P times within a preset distance Q is defined as dangerous behavior. A lane-change counter M and a distance counter N are set and initialized to M = N = 0. When the engineering vehicle first meets a lane-change judgment condition — y4 ≤ G(a+Y)+H, y2 ≥ C(a+Y)+D, y2 ≤ C(a+c+Y)+D and y1 ≥ A(a+c+Y)+B, or y2 ≤ C(a+Y)+D, y1 ≥ A(a+Y)+B, y1 ≤ A(a+c+Y)+B and y3 ≥ E(a+c+Y)+F, or the number of the lane where it is located changes — the counters are set to M = 1, N = 0. Each subsequent time a lane-change condition is met, M is incremented by 1 while N records the driving distance. When N ≥ Q and M ≤ P, M and N are reset. When N ≤ Q and M ≥ P, the engineering vehicle is determined to have frequent lane-changing behavior, and the traffic enforcement camera can be linked to capture evidence.
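The M/N counter logic can be sketched as follows. The event representation (cumulative distance plus a lane-change flag) is an assumed simplification of what the camera chain would report:

```python
def frequent_change(events, Q, P):
    """events: chronological (cumulative_distance, changed_lane) samples.
    M counts lane changes; N tracks distance since the first change.
    Counters reset when distance Q passes with at most P changes; P or
    more changes within distance Q means frequent lane changing."""
    M, N, start = 0, 0.0, None
    for dist, changed in events:
        if changed:
            if M == 0:
                start = dist          # first change: M = 1, N starts at 0
            M += 1
        if start is not None:
            N = dist - start
            if N >= Q and M < P:
                M, N, start = 0, 0.0, None   # reset M and N
            elif M >= P and N <= Q:
                return True                  # frequent lane-changing behavior
    return False
```

Three changes within 100 m against Q = 200 m, P = 3 is flagged, while the same three changes spread over 600 m is not.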
Optionally, after judging whether the vehicle changes lanes continuously and/or frequently according to the lane where the vehicle is located in each frame of the image, the method further includes:
when the continuous lane changing and/or the frequent lane changing of the vehicle are detected, vehicle information of the vehicle is identified, and the vehicle information and the road video are sent to a traffic police platform; and sending a reminding signal to the vehicle-mounted equipment for playing through the vehicle-road cooperative system, wherein the reminding signal is used for reminding a driver of standardizing the driving behavior.
Specifically, the existing cameras and enforcement devices on the road can be used, or one set of enforcement camera equipment can be arranged every preset distance (for example, 200 m), the camera being combined with the enforcement device to shoot the road video. When the license plate cannot be identified by the camera, an RFID (Radio Frequency Identification) detector can be arranged at the roadside. Since engineering vehicles on the road are required to carry RFID tags, when a dangerous lane change of an engineering vehicle is detected, its identity information can be read by the RFID detector and uploaded, together with the video shot by the camera, to the traffic police platform as a basis for law enforcement. When a dangerous lane change is detected, a signal is also sent through the RSU to the vehicle-mounted OBU, which forwards it to the vehicle-mounted voice equipment to remind the driver by voice to standardize the driving behavior.
In this optional embodiment, when a dangerous lane change of the vehicle is detected, the vehicle information and the road video are uploaded to the traffic police platform, facilitating law enforcement against the vehicle. Meanwhile, voice reminders sent to the vehicle-mounted equipment through the vehicle-road cooperative system prompt the driver to standardize driving behavior, realizing closed-loop management of both law enforcement supervision and reminding for dangerous lane changes.
Optionally, after determining whether the vehicle changes lanes continuously and/or frequently according to the lane where the vehicle is located in each frame of the image, the method further includes:
and when the continuous lane changing and/or the frequent lane changing of the vehicle are detected, acquiring human body sign information of a driver, which is acquired by a sensor arranged in the vehicle cab, and judging whether the driver is in dangerous driving according to the human body sign information.
Specifically, the sensors may include an alcohol detector disposed in the cab, a heart rate monitoring sensing sheet mounted on the steering wheel, and the like, and the sensors acquire human body sign information such as heart rate and alcohol concentration, and determine whether the driver is in dangerous driving, such as drunk driving, by comparing the human body sign information with a preset threshold value and the like.
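The threshold comparison might look like the following. The numeric limits are illustrative defaults, not values specified by the patent:

```python
def dangerous_driving(heart_rate_bpm, alcohol_mg_per_100ml,
                      hr_range=(50, 110), alcohol_limit=20.0):
    """Compare driver vital signs with preset thresholds: alcohol at or
    above the limit, or a heart rate outside the normal range, is flagged
    as dangerous driving (e.g. drunk driving)."""
    drunk = alcohol_mg_per_100ml >= alcohol_limit
    abnormal_hr = not (hr_range[0] <= heart_rate_bpm <= hr_range[1])
    return drunk or abnormal_hr
```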
As shown in fig. 5, another embodiment of the present invention provides a dangerous lane-change detection apparatus for a vehicle, including:
the acquisition module is used for acquiring a road video shot by the camera;
the recognition module is used for determining the vehicle position in each frame of image of the road video by adopting a trained vehicle recognition model; determining the lane line position in the image of each frame based on an edge detection algorithm and a Hough transform algorithm; determining the lane where the vehicle is located in each frame of the image according to the corresponding vehicle position and the lane line position;
and the detection module is used for judging whether the vehicle continuously changes lanes and/or frequently changes lanes according to the lanes where the vehicle is located in the images of each continuous frame.
The dangerous lane change detection device for the vehicle of the embodiment is used for realizing the dangerous lane change detection method for the vehicle, and the beneficial effects of the dangerous lane change detection device for the vehicle correspond to those of the dangerous lane change detection method for the vehicle, and are not repeated herein.
Another embodiment of the present invention provides an electronic device, including a memory and a processor; the memory for storing a computer program; the processor is used for realizing the dangerous lane change detection method of the vehicle when executing the computer program. The electronic device may employ an edge computing gateway.
Yet another embodiment of the present invention provides a computer-readable storage medium having a computer program stored thereon, which, when executed by a processor, implements the dangerous lane-change detection method for a vehicle as described above.
An electronic device that can be a server or a client of the present invention, which is an example of a hardware device that can be applied to aspects of the present invention, will now be described. Electronic device is intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
The electronic device includes a computing unit that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) or a computer program loaded from a storage unit into a Random Access Memory (RAM). In the RAM, various programs and data required for the operation of the device can also be stored. The computing unit, the ROM, and the RAM are connected to each other by a bus. An input/output (I/O) interface is also connected to the bus.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like. In this application, the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present invention. In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
Although the present disclosure has been described above, the scope of the present disclosure is not limited thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the spirit and scope of the present disclosure, and these changes and modifications are intended to be within the scope of the present disclosure.

Claims (8)

1. A dangerous lane change detection method for a vehicle is characterized by comprising the following steps:
acquiring a road video shot by a camera;
determining the vehicle position in each frame of image of the road video by adopting a trained vehicle recognition model, wherein the method comprises the following steps: inputting the images of each frame into the trained vehicle recognition model respectively, determining a detection frame comprising the vehicle in the images of each frame, and determining each vertex coordinate of the detection frame, wherein the vehicle position comprises each corresponding vertex coordinate; determining the lane line position in the image of each frame based on an edge detection algorithm and a Hough transform algorithm;
determining the lane where the vehicle is located in each frame of the image according to the corresponding vehicle position and the lane line position;
judging whether the vehicle changes lanes continuously and/or frequently according to lanes where the vehicle is located in the images of the continuous frames;
after determining the lane line position in each frame of the image, the method further includes:
acquiring the passing time of the vehicle acquired by a radar monitor and the detection time from the time when the vehicle is shot by the camera to the time when the vehicle is completely displayed in a shot picture, wherein the radar monitor is installed beside a road and is positioned at a position corresponding to the edge of the shot picture of the camera, and the edge of the shot picture is the edge of the picture far away from the camera;
comparing the passing time with the detection time, and determining whether the vehicle is shielded according to a comparison result;
if the vehicle is shielded, comparing the vertex coordinates in the images of each frame with the positions of the lane lines respectively, and determining whether all the vertex coordinates in the images of each frame are in the same lane or not according to the comparison result;
and if all the vertex coordinates in at least one frame of the image are not in the same lane, judging, by radar detection equipment buried along the lane line, whether the vehicle drives over the lane line.
2. The method according to claim 1, wherein the determining the lane line position in each frame of the image based on an edge detection algorithm and a hough transform algorithm comprises:
determining one frame of image without a moving object in each frame of image of the road video as an initial image;
carrying out edge detection on the initial image by adopting an edge detection algorithm to obtain a contour image;
selecting an interested area in the outline image, and performing Boolean operation on the interested area and the outline image to obtain a processed image, wherein the interested area comprises a lane line;
and identifying lane lines in the processed image by adopting a Hough transform algorithm, and determining the lane line position in the image of each frame.
3. The method according to claim 2, wherein the determining the lane line position in each frame of the image comprises:
if the lane line is a straight line, fitting the lane line by adopting a straight line equation, and determining the position of the lane line;
if the lane line is a curve, sequentially extracting N points on the lane line, dividing the extracted N points into a plurality of point combinations, wherein each point combination comprises a plurality of continuous points, respectively fitting the point combinations to obtain curves, fusing all the curves obtained by fitting, and determining the position of the lane line, wherein N is greater than or equal to 3.
4. The method according to claim 1, wherein the vehicle comprises an engineering vehicle, and before determining the vehicle position in each frame image of the road video by using the trained vehicle recognition model, the method further comprises:
acquiring a picture set comprising a plurality of engineering vehicle pictures;
marking the engineering vehicles in each engineering vehicle picture by adopting a minimum external rectangular frame to obtain a marked picture set;
training an improved SSD model by adopting the marked picture set to obtain the trained vehicle identification model;
the improved SSD model comprises a CONV4_3 layer, a CONV6 layer, a CONV7 layer, a CONV8_2 layer, a CONV9_2 layer, a CONV10_2 layer, a CONV11_2 layer, a pooling layer and a prediction layer which are connected in sequence, and the size of a convolution kernel of the CONV11_2 layer is 3 multiplied by 256.
5. The method according to any one of claims 1 to 3, wherein the determining the lane in which the vehicle is located in each frame of the image according to the corresponding vehicle position and the lane line position comprises:
dividing the road into a plurality of lanes according to the positions of the lane lines, and numbering the lanes in sequence; respectively comparing the vertex coordinates corresponding to the vehicle head with the positions of the lane lines, and determining the number of the lane where the vehicle is located in each frame of the image according to the comparison result;
and/or, the step of judging whether the vehicle changes lanes continuously and/or frequently according to the lane where the vehicle is located in the continuous frames of the images comprises the following steps:
determining whether the vehicle changes the lane according to the number of the lane where the vehicle is located in each continuous frame of image; when the vehicle changes lanes, if the vehicle continuously crosses more than two lanes within a first preset time period, determining that the vehicle continuously changes lanes; and if the lane changing times of the vehicle in the second preset time and/or the preset distance are larger than or equal to a preset threshold value, determining that the vehicle frequently changes lanes.
6. The method according to any one of claims 1 to 3, wherein, after judging whether the vehicle is changing lanes continuously and/or frequently according to the lane in which the vehicle is located in each frame of the image, the method further comprises:
when continuous and/or frequent lane changing of the vehicle is detected, identifying vehicle information of the vehicle and sending the vehicle information and the road video to a traffic police platform; and sending a reminder signal to the vehicle-mounted device for playback through the vehicle-road cooperative system, the reminder signal being used to remind the driver to standardize driving behavior;
and/or, when continuous and/or frequent lane changing of the vehicle is detected, acquiring vital sign information of the driver collected by a sensor arranged in the vehicle cab, and judging from the vital sign information whether the driver is driving dangerously.
7. A dangerous lane-change detection device for a vehicle, comprising:
the acquisition module is used for acquiring a road video shot by the camera;
the identification module is used for determining the vehicle position in each frame of image of the road video by adopting a trained vehicle identification model, and is configured to: input each frame of the image into the trained vehicle identification model, determine a detection frame enclosing the vehicle in each frame of the image, and determine each vertex coordinate of the detection frame, wherein the vehicle position comprises the corresponding vertex coordinates; determine the lane line position in each frame of the image based on an edge detection algorithm and a Hough transform algorithm; determine the lane in which the vehicle is located in each frame of the image according to the corresponding vehicle position and lane line position; acquire the passing time of the vehicle collected by a radar monitor and the detection time from when the vehicle enters the camera's picture to when it is completely displayed in the picture, wherein the radar monitor is installed beside the road at a position corresponding to the far edge of the camera's picture, the far edge being the edge of the picture away from the camera; compare the passing time with the detection time, and determine from the comparison result whether the vehicle is occluded; if the vehicle is occluded, compare the vertex coordinates in each frame of the image with the lane line positions, and determine from the comparison result whether all vertex coordinates in each frame of the image are in the same lane; and if not all vertex coordinates in at least one frame of the image are in the same lane, determine, by radar detection devices buried along the lane lines, whether the vehicle is driving while pressing a lane line;
and the detection module is used for judging whether the vehicle is changing lanes continuously and/or frequently according to the lane in which the vehicle is located in consecutive frames of the image.
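The identification module above compares the radar-measured passing time with the camera detection time to decide whether the vehicle is occluded. A minimal sketch, under the assumption that a camera detection time noticeably longer than the radar passing time indicates part of the vehicle was hidden by another object; the tolerance value and the direction of the comparison are illustrative assumptions, not taken from the claim:

```python
def is_occluded(passing_time_s, detection_time_s, tolerance_s=0.2):
    """Flag possible occlusion: if the camera needs noticeably longer
    than the radar-measured passing time before the vehicle is fully
    displayed in the picture, part of it was likely hidden.
    `tolerance_s` is a hypothetical calibration margin."""
    return detection_time_s - passing_time_s > tolerance_s

print(is_occluded(0.8, 1.5))  # True: camera lagged radar by 0.7 s
print(is_occluded(0.8, 0.9))  # False: within tolerance
```

When this check fires, the claim falls back to the radar detection devices buried along the lane lines, since the vertex coordinates of a partly hidden vehicle cannot be trusted for lane assignment.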
8. An electronic device comprising a memory and a processor;
the memory for storing a computer program;
the processor, when executing the computer program, is configured to implement the vehicle dangerous lane change detection method according to any one of claims 1 to 6.
CN202210814610.XA 2022-07-12 2022-07-12 Vehicle dangerous lane change detection method and device and electronic equipment Active CN114898325B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210814610.XA CN114898325B (en) 2022-07-12 2022-07-12 Vehicle dangerous lane change detection method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN114898325A (en) 2022-08-12
CN114898325B (en) 2022-11-25

Family

ID=82729982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210814610.XA Active CN114898325B (en) 2022-07-12 2022-07-12 Vehicle dangerous lane change detection method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114898325B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106981202A * 2017-05-22 2017-07-25 中原智慧城市设计研究院有限公司 Lane-model-based method for detecting vehicles changing lanes back and forth
CN109147393A * 2018-10-18 2019-01-04 清华大学苏州汽车研究院(吴江) Vehicle lane change detection method based on video analysis
CN109784190A * 2018-12-19 2019-05-21 华东理工大学 Deep-learning-based detection and extraction method for common targets in automatic driving scenes
CN109858459A * 2019-02-20 2019-06-07 公安部第三研究所 System and method for intelligent parsing and processing based on metadata of police vehicle-mounted video
CN111950394A (en) * 2020-07-24 2020-11-17 中南大学 Method and device for predicting lane change of vehicle and computer storage medium
CN112712703A (en) * 2020-12-09 2021-04-27 上海眼控科技股份有限公司 Vehicle video processing method and device, computer equipment and storage medium
CN113380038A (en) * 2021-07-06 2021-09-10 深圳市城市交通规划设计研究中心股份有限公司 Vehicle dangerous behavior detection method, device and system
CN113688652A (en) * 2020-05-18 2021-11-23 魔门塔(苏州)科技有限公司 Method and device for processing abnormal driving behaviors
CN114360256A (en) * 2021-07-05 2022-04-15 上海安道雷光波系统工程有限公司 Embedded radar monitoring combination instrument and traffic flow radar information system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3357749B2 (en) * 1994-07-12 2002-12-16 本田技研工業株式会社 Vehicle road image processing device
CN107067737A (en) * 2017-04-13 2017-08-18 山东鼎讯智能交通股份有限公司 Integrated multi-functional road traffic crime scene investigation device
CN110356325B (en) * 2019-09-04 2020-02-14 魔视智能科技(上海)有限公司 Urban traffic passenger vehicle blind area early warning system
CN110853356A (en) * 2019-11-29 2020-02-28 南京慧尔视智能科技有限公司 Vehicle lane change detection method based on radar and video linkage
KR20210099436A (en) * 2020-02-04 2021-08-12 삼성전자주식회사 Apparatus and method for estimating road geometry
CN112017437B (en) * 2020-09-10 2021-03-26 北京雷信科技有限公司 Intersection traffic information perception control system and method
CN113807270A (en) * 2021-09-22 2021-12-17 北京百度网讯科技有限公司 Road congestion detection method and device and electronic equipment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SAR image ship target detection based on feature reuse and semantic aggregation; Jiang Yuan et al.; Journal of Naval Aeronautical and Astronautical University; 2019-12-30 (No. 06); 9-18+37 *
Research on intelligent detection of violating vehicles in vehicle-mounted mobile law enforcement; Chen Gang et al.; Journal of University of Electronic Science and Technology of China; 2018-05-30 (No. 03); 32-37 *

Also Published As

Publication number Publication date
CN114898325A (en) 2022-08-12

Similar Documents

Publication Publication Date Title
CN112528878B (en) Method and device for detecting lane line, terminal equipment and readable storage medium
US11205284B2 (en) Vehicle-mounted camera pose estimation method, apparatus, and system, and electronic device
CN109671006B (en) Traffic accident handling method, device and storage medium
CN112329552A (en) Obstacle detection method and device based on automobile
CN108877269B (en) Intersection vehicle state detection and V2X broadcasting method
CN110298300B (en) Method for detecting vehicle illegal line pressing
CN110738150B (en) Camera linkage snapshot method and device and computer storage medium
CN112507862B (en) Vehicle orientation detection method and system based on multitasking convolutional neural network
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN110991264A (en) Front vehicle detection method and device
CN113076851B (en) Method and device for collecting vehicle violation data and computer equipment
CN110929606A (en) Vehicle blind area pedestrian monitoring method and device
CN112183206B (en) Traffic participant positioning method and system based on road side monocular camera
CN111332306A (en) Traffic road perception auxiliary driving early warning device based on machine vision
CN114898325B (en) Vehicle dangerous lane change detection method and device and electronic equipment
CN107452230B (en) Obstacle detection method and device, terminal equipment and storage medium
CN114724119B (en) Lane line extraction method, lane line detection device, and storage medium
CN116486351A (en) Driving early warning method, device, equipment and storage medium
CN112990117B (en) Installation data processing method and device based on intelligent driving system
CN107255470B (en) Obstacle detection device
CN115019511A (en) Method and device for identifying illegal lane change of motor vehicle based on automatic driving vehicle
CN113688662A (en) Motor vehicle passing warning method and device, electronic device and computer equipment
CN113147746A (en) Method and device for detecting ramp parking space
CN113313968A (en) Parking space detection method and storage medium
CN112183413B (en) Parking space detection method and device, storage medium and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant