CN113936257A - Detection method and detection device for vehicle violation behaviors and vehicle-mounted electronic equipment

Detection method and detection device for vehicle violation behaviors and vehicle-mounted electronic equipment

Info

Publication number: CN113936257A
Application number: CN202111205837.6A
Authority: CN (China)
Prior art keywords: lane, vehicle, line, crossing, lane line
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 邱翰, 胡桂雷
Current Assignee: Rainbow Software Co ltd
Original Assignee: Rainbow Software Co ltd
Application filed by Rainbow Software Co ltd
Priority to CN202111205837.6A
Publication of CN113936257A

Classifications

    • G06T 7/80: Image analysis; analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06F 18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N 3/045: Neural networks; combinations of networks
    • G06N 3/084: Neural network learning methods; backpropagation, e.g. using gradient descent
    • G06T 7/11: Image analysis; segmentation; region-based segmentation
    • G06T 7/136: Image analysis; segmentation or edge detection involving thresholding

Abstract

The invention discloses a detection method and a detection device for vehicle violation behaviors, and a vehicle-mounted electronic device. The detection method comprises the following steps: controlling a camera to collect road images in front of the current vehicle based on preset calibration parameters; judging, based on the relative position obtained by analyzing each of the plurality of road images, whether another vehicle on the road ahead exhibits lane crossing behavior while driving; if another vehicle exhibits lane crossing behavior, extracting the original time-series lane line features over that vehicle's complete lane crossing process; and judging, based on the original time-series lane line features and using a trained behavior determination model, whether the lane crossing behavior of the other vehicle is an illegal lane change. The invention solves the technical problem in the related art that, when detecting illegal lane changes, the lane line attributes are not combined with the whole time-series process of the vehicle crossing the line, so the features lack temporal information and the robustness of the detection result is low.

Description

Detection method and detection device for vehicle violation behaviors and vehicle-mounted electronic equipment
Technical Field
The invention relates to the technical field of image analysis, in particular to a detection method and a detection device for vehicle violation behaviors and vehicle-mounted electronic equipment.
Background
With economic development and the construction of infrastructure, there are more and more vehicles on the road; the heavier traffic also causes more and more traffic accidents, bringing great harm to individual families and to society. There is therefore a growing demand to improve travel safety by standardizing drivers' driving behavior.
Among traffic accidents, those caused by illegally changing lanes across a solid line account for a considerable proportion, and reducing such behavior can effectively reduce the occurrence of traffic accidents. An effective way to reduce the number of illegal solid-line lane changes is to monitor the driver's driving behavior in real time and report illegal lane change behavior in time.
One approach installs a camera to capture real-time video of vehicles, analyzes the vehicles' motion trajectories, and uses preset rules to judge whether a monitored vehicle changes lanes illegally. This technique involves two core problems: lane line attributes and the vehicle trajectory. On this basis, current detection methods install a high-speed camera in a fixed scene (an intersection or the like), calibrate the lane line attributes, and detect solid-line lane changes by analyzing the vehicle's driving trajectory in combination with rules formulated for solid lines. However, this method requires a large amount of manual calibration work, and once installed the camera can only monitor a fixed area. Later schemes mount the camera on a vehicle and solve part of the calibration problem with statistical features: after the images are preprocessed, the color distribution of the lane lines is counted and a specific decision threshold is set to determine the lane line attributes; whether the vehicle position deviates is then used to determine whether the vehicle crosses the line, and the two results are combined to decide whether a solid-line lane change has occurred.
In the related art there are two commonly used ways to detect illegal lane changes. The first is a solid-line lane change detection method based on parameter calibration for a fixed scene: the line type under the lens is calibrated, and manual rules combining the vehicle's driving trajectory with preset illegal-lane-change direction and distance parameters are used to judge whether an illegal line-crossing behavior occurs in the whole process. The second judges the line type by counting the distribution of black and white points in preprocessed lane line samples and then decides solid-line lane changes from the lane position deviation parameters; if the deviation crosses a lane, the vehicle is judged to have changed lanes.
However, both illegal lane change detection methods have significant problems. The first has two disadvantages: it requires a large amount of calibration work, parameter presetting, and manual rule design before operation, so the labor cost is high; and only a fixed scene can be monitored after installation, so the calibration and manual rules may need to be redone after the camera is moved. The second has the following problems: statistics based on color distribution are disturbed by environmental factors such as illumination and stains, so robustness is low; using the vehicle's own deviation parameters can produce false detections on curves; and finally, this scheme does not combine the lane line attributes with the whole time-series process of the vehicle crossing the line, so the features lack temporal information and the robustness of the scheme is likewise low.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide a detection method and a detection device for vehicle violation behaviors, and a vehicle-mounted electronic device, so as to at least solve the technical problem in the related art that, when detecting illegal lane changes, the lane line attributes are not combined with the whole time-series process of the vehicle crossing the line, so the features lack temporal information and the robustness of the detection result is low.
According to one aspect of the embodiments of the invention, a method for detecting vehicle violation behaviors is provided, comprising the following steps: controlling a camera to collect road images in front of the current vehicle based on preset calibration parameters; judging whether other vehicles on the road ahead exhibit lane crossing behavior while driving, based on the relative position obtained by analyzing each of the plurality of road images; if another vehicle exhibits lane crossing behavior, extracting the original time-series lane line features over that vehicle's complete lane crossing process; and judging, based on the original time-series lane line features, whether the lane crossing behavior of the other vehicle is an illegal lane change by using a trained behavior determination model.
Optionally, judging whether other vehicles on the road ahead exhibit lane crossing behavior while driving, based on the relative position obtained by analyzing each of the plurality of road images, includes: determining the relative position between the other vehicle and the lane line contained in each road image by combining the vehicle information and the lane line information obtained by analyzing that road image, wherein the relative position is the distance from the vehicle to the lane line as a proportion of the vehicle width; and integrating the relative positions over the plurality of road images to judge whether another vehicle on the road ahead exhibits lane crossing behavior while driving.
Optionally, determining the relative position between the other vehicle and the lane line contained in each road image by combining the vehicle information and the lane line information obtained by analyzing that road image includes: analyzing each road image to calibrate at least one vehicle detection frame and at least one lane line detection frame, wherein each vehicle detection frame corresponds to one other vehicle and each lane line detection frame corresponds to one lane line; and determining the vehicle information of each other vehicle and the lane line information from the at least one vehicle detection frame and the at least one lane line detection frame.
Optionally, integrating the relative positions over the plurality of road images to judge whether another vehicle on the road ahead exhibits lane crossing behavior while driving includes: if the absolute value of a first relative position is smaller than a first threshold, and there is a positive/negative change between a second relative position and the first relative position, the other vehicle exhibits lane crossing behavior while driving; otherwise, the other vehicle does not exhibit lane crossing behavior while driving.
Optionally, determining the relative position between the other vehicle and the lane line contained in each road image from the vehicle information and the lane line information obtained by analyzing that road image includes: obtaining, based on the vehicle information and the lane line information, the position of a preset point of the vehicle detection area and the position of the vehicle-lane-line intersection point, wherein the vehicle-lane-line intersection point is the intersection of the horizontal straight line through the preset point of the vehicle detection area with the lane line or its extension, or the perpendicular foot from the preset point of the vehicle detection area to the lane line or its extension; and determining the relative position with a first formula from the position of the preset point of the vehicle detection area, the position of the vehicle-lane-line intersection point, and the width of the vehicle detection frame.
Optionally, determining the vehicle information and the lane line information of each of the other vehicles through at least one vehicle detection frame and at least one lane line detection frame includes: analyzing the lane line trend vector and the lane line attribute information based on the lane line detection frame to obtain the lane line information; and analyzing the vehicle position, the vehicle height and the vehicle width of each other vehicle based on the vehicle detection frame to obtain the vehicle information.
Optionally, analyzing the lane line trend vector and the lane line attribute information based on the lane line detection frame to obtain the lane line information includes: inputting the lane line detection frame into a lane line model and analyzing the attribute feature vector of the lane line with the lane line model to obtain the lane line information, wherein the lane line model is trained in advance: during training, a lane line training sample set is extracted with a preset classification frame according to the marked lane line positions and is input into a convolutional neural network system to train the detection network and obtain the lane line model. Alternatively, the lane line information is obtained by a traditional image processing method, in which the lane line marking positions are obtained after image preprocessing of the lane line detection frame, and the lane line trend vector and the lane line attribute information are analyzed based on the lane line marking positions to obtain the lane line information, wherein the image preprocessing includes binarization, image denoising, and lane line segmentation.
Optionally, extracting the original time-series lane line features over the complete lane crossing process of the other vehicle includes: determining the approach frame and the end frame of the complete lane crossing process according to the relative positions of the road images, and obtaining a timing diagram of the complete lane crossing process; and extracting the original time-series lane line features from the timing diagram.
Optionally, determining the approach frame and the end frame of the complete lane crossing process includes: if the absolute value of a relative position is smaller than a second threshold, determining the corresponding frame as the approach frame of the complete lane crossing process; and if the absolute value of a relative position is greater than a third threshold and its sign differs from that of the relative position of the approach frame, determining the corresponding frame as the end frame of the complete lane crossing process.
Optionally, judging, based on the original time-series lane line features, whether the lane crossing behavior of the other vehicle is an illegal lane change by using a trained behavior determination model includes: inputting the original time-series lane line features into a solid line confidence network to obtain time-series solid line confidences; and inputting the relative positions contained in the corresponding frames of the complete lane crossing process and the time-series solid line confidences into the behavior determination model to judge whether the lane crossing behavior of the other vehicle is an illegal lane change, wherein the behavior determination model is either an integrated behavior determination model or a time-series behavior determination model.
Optionally, when the behavior determination model is an integrated behavior determination model, the detection method includes: classifying and counting the time-series solid line confidences in combination with the relative positions contained in the corresponding frames of the complete lane crossing process to obtain histogram features; and inputting the histogram features into the integrated behavior determination model to judge whether the lane crossing behavior of the other vehicle is an illegal lane change.
Optionally, classifying and counting the time-series solid line confidences in combination with the relative positions contained in the corresponding frames of the complete lane crossing process to obtain the histogram features includes: dividing the time-series solid line confidences into n confidence sets according to the relative position contained in each corresponding frame of the complete lane crossing process and preset segmentation thresholds; extracting a k-dimensional histogram feature from the classification confidences of the road image frames in each confidence set to obtain n k-dimensional histogram features; and concatenating the n k-dimensional histogram features according to their time-series relation over the historical process to obtain an n*k-dimensional histogram feature.
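As an illustrative sketch of this feature construction, the following Python function splits the per-frame solid-line confidences into sets according to the relative position of the same frame and concatenates the per-set histograms; the segmentation thresholds, bin count, and confidence range are assumptions for illustration, not values taken from the patent.

```python
import numpy as np

def timeseries_histogram_feature(ratios, solid_confidences,
                                 position_thresholds=(-0.1, 0.1), k=8):
    """Split the per-frame solid-line confidences into n = len(thresholds) + 1
    sets according to the relative position of the same frame, build a k-bin
    histogram for each set, and concatenate them into an n*k-dimensional feature."""
    ratios = np.asarray(ratios, dtype=float)
    confs = np.asarray(solid_confidences, dtype=float)
    edges = (-np.inf,) + tuple(position_thresholds) + (np.inf,)
    feature_parts = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (ratios >= lo) & (ratios < hi)          # frames in this position set
        hist, _ = np.histogram(confs[mask], bins=k, range=(0.0, 1.0))
        feature_parts.append(hist.astype(float))
    return np.concatenate(feature_parts)               # shape: (n * k,)
```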
Optionally, the integrated behavior determination model is a model obtained by ensemble learning training using information entropy gain.
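The patent does not specify the ensemble learner, so the following is only a stand-in sketch: scikit-learn's RandomForestClassifier with an entropy-based split criterion is used to illustrate training a classifier on the n*k-dimensional histogram features described above.

```python
from sklearn.ensemble import RandomForestClassifier

# X: rows of n*k-dimensional histogram features; y: 1 = illegal lane change, 0 = not
integrated_model = RandomForestClassifier(n_estimators=100, criterion="entropy")
# integrated_model.fit(X_train, y_train)
# is_violation = integrated_model.predict([histogram_feature])[0] == 1
```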
Optionally, when the behavior determination model is a time-series behavior determination model, the detection method includes: directly inputting the relative positions and the time-series solid line confidences contained in the corresponding frames of the complete lane crossing process, arranged in time order, into the time-series behavior determination model to judge whether the lane crossing behavior of the other vehicle is an illegal lane change.
Optionally, the time-series behavior determination model is obtained through convolution calculation by a base network and back-propagation training on the computed cross-entropy loss.
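A minimal sketch of such a model, assuming the per-frame (relative position, solid-line confidence) pairs are packed into a fixed-length sequence; the 1-D convolutional layout, sequence length, and class count are illustrative assumptions, not the base network named by the patent.

```python
import torch
import torch.nn as nn

class TimeSeriesDecisionNet(nn.Module):
    """1-D convolutional classifier over a (relative position, solid-line confidence)
    sequence, trained with cross-entropy loss and back propagation."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x):          # x: (batch, 2, sequence_length)
        return self.net(x)

# loss = nn.functional.cross_entropy(model(batch), labels); loss.backward()
```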
Optionally, the detection method further includes: judging, with a line-crossing module, whether the lane crossing behavior of the other vehicle is a permitted lane change (one across a line that may be crossed), based on the original time-series lane line features and the vehicle information in each road image.
Optionally, judging with the line-crossing module whether the lane crossing behavior of the other vehicle is a permitted lane change, based on the original time-series lane line features and the vehicle information in each road image, includes: controlling each sub-module of the line-crossing module to slide over the time-series lane line features with a sliding window of preset time-series length to determine the line type of the lane line, wherein the line-crossing module comprises a virtual-real dual-line sub-module, a bus area sub-module, and other line-crossing sub-modules; and judging, with each sub-module of the line-crossing module, whether the lane crossing behavior of the other vehicle is a permitted lane change based on the vehicle information and the line type of the lane line.
Optionally, if the line-crossing module is the virtual-real dual-line sub-module, judging through the virtual-real dual-line sub-module whether the lane crossing behavior of the other vehicle is a permitted lane change includes: sliding over the original time-series lane line features with a sliding window of preset time-series length to determine a plurality of confidences that the lane line is a dashed (virtual) or solid (real) line; computing a first average of the confidences and judging whether the lane line is dashed or solid based on the first average to obtain the line type of the lane line; and judging whether the lane crossing behavior of the other vehicle is a permitted lane change based on the vehicle's crossing direction and the line type of the lane line.
Optionally, judging whether the lane crossing behavior of the other vehicle is a permitted lane change based on the vehicle's crossing direction and the line type of the lane line includes: if the crossing direction indicates that the other vehicle starts crossing from the right side of the current lane line and the line type is left-dashed/right-solid, determining that the lane crossing behavior of the other vehicle is not a permitted lane change; and if the crossing direction indicates that the other vehicle starts crossing from the left side of the current lane line and the line type is left-solid/right-dashed, determining that the lane crossing behavior of the other vehicle is not a permitted lane change.
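The dual-line check described in the last two paragraphs can be sketched as below; the window length, averaging rule, and the way the line type is encoded as a single confidence are illustrative assumptions.

```python
import numpy as np

def dual_line_allows_crossing(line_type_confidences, crossing_from_right,
                              window=16, threshold=0.5):
    """line_type_confidences: per-frame confidences that the dual line is of the
    'left-dashed / right-solid' type (low confidence => 'left-solid / right-dashed').
    Crossing is only permitted when the vehicle starts from the dashed side."""
    recent = np.asarray(line_type_confidences[-window:], dtype=float)
    left_dashed_right_solid = recent.mean() > threshold
    if crossing_from_right and left_dashed_right_solid:
        return False   # starting from the right means crossing the solid half first
    if not crossing_from_right and not left_dashed_right_solid:
        return False   # left-solid / right-dashed, approached from the left
    return True
```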
Optionally, if the line-crossing module is the bus area sub-module, judging through the bus area sub-module whether the lane crossing behavior of the other vehicle is a permitted lane change includes: sliding over the original time-series lane line features with a sliding window of preset time-series length to determine a plurality of confidences that the lane line is a bus area boundary line; computing a second average of the confidences and judging whether the lane line is a bus area boundary line based on the second average to obtain the line type of the lane line; and judging whether the lane crossing behavior of the other vehicle is a permitted lane change based on the line type of the lane line and the time information of the historical process.
Optionally, judging whether the lane crossing behavior of the other vehicle is a permitted lane change based on the line type of the lane line and the time information of the historical process includes: if the line type indicates that the lane line is a bus area boundary line, sliding over the vehicle attribute features of the other vehicle with a sliding window of preset time-series length to judge whether the other vehicle is a bus and obtain a judgment result; if the judgment result indicates that the other vehicle is not a bus, judging whether the historical time period of the historical process is a permitted driving period; if the historical time period is not a permitted driving period, determining that the lane crossing behavior of the other vehicle is not a permitted lane change; and if the historical time period is a permitted driving period, determining that the lane crossing behavior of the other vehicle is a permitted lane change.
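A sketch of the bus-lane check, assuming the permitted driving period is given as clock hours; the function name, window length, threshold, and hour boundaries are hypothetical.

```python
def bus_lane_crossing_allowed(bus_edge_confidences, vehicle_is_bus, crossing_hour,
                              allowed_hours=(22, 7), window=16, edge_threshold=0.5):
    """Crossing a bus area boundary line is allowed for buses at any time, and for
    other vehicles only inside the permitted driving period (here: a night window)."""
    recent = bus_edge_confidences[-window:]
    is_bus_area_edge = sum(recent) / len(recent) > edge_threshold
    if not is_bus_area_edge:
        return True                     # not a bus area boundary, nothing to restrict
    if vehicle_is_bus:
        return True
    start, end = allowed_hours          # e.g. permitted from 22:00 to 07:00
    return crossing_hour >= start or crossing_hour < end
```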
Optionally, if the line-crossing module is one of the other line-crossing sub-modules, judging through that sub-module whether the lane crossing behavior of the other vehicle is a permitted lane change includes: sliding over the original time-series lane line features with a sliding window of preset time-series length to determine a plurality of confidences that the lane line is the boundary line of a crossable region; computing a third average of the confidences; if the third average is greater than a preset value, determining that the lane crossing behavior of the other vehicle is a permitted lane change; and if the third average is less than or equal to the preset value, determining that the lane crossing behavior of the other vehicle is not a permitted lane change.
Optionally, after judging with the line-crossing module whether the lane crossing behavior of the other vehicle is a permitted lane change, the detection method further includes: if it is determined that the other vehicle has committed a violation, marking the violating vehicle; issuing a violation prompt; and reporting the violation information and the vehicle information to a vehicle management platform.
Optionally, before controlling the camera to collect road images in front of the current vehicle based on the preset calibration parameters, the detection method further includes: installing the camera at the front windshield of the current vehicle; calibrating the vehicle hood area and the horizon position according to the installation position of the camera; and determining the preset calibration parameters and the image detection area based on the calibrated hood area and horizon position.
Optionally, controlling the camera to collect road images in front of the current vehicle based on the preset calibration parameters includes: if the camera is a telephoto camera or a wide-angle camera, adjusting the focusing information used when analyzing the road images according to the camera's field-of-view information, and adjusting the calibration information used to calibrate the detection frames of other vehicles.
Optionally, controlling the camera to collect road images in front of the current vehicle includes: analyzing the ambient light parameter around the current vehicle; and if the ambient light parameter is lower than a preset light threshold, collecting the road images in front of the current vehicle with an infrared camera.
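A minimal sketch of this switch, assuming the ambient light parameter is measured as the mean gray level of a preview frame; the metric and threshold value are assumptions.

```python
def pick_camera(mean_gray_level, light_threshold=40):
    """Use the infrared camera when the measured ambient light drops below the
    preset light threshold; otherwise keep the visible-light camera."""
    return "infrared" if mean_gray_level < light_threshold else "visible"
```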
According to another aspect of the embodiments of the present invention, a device for detecting vehicle violation behaviors is also provided, including: a control unit, configured to control a camera to collect road images in front of the current vehicle based on preset calibration parameters; a first judging unit, configured to judge whether other vehicles on the road ahead exhibit lane crossing behavior while driving, based on the relative position obtained by analyzing each of a plurality of road images; an extraction unit, configured to extract the original time-series lane line features over the complete lane crossing process of another vehicle when that vehicle exhibits lane crossing behavior; and a second judging unit, configured to judge, based on the original time-series lane line features, whether the lane crossing behavior of the other vehicle is an illegal lane change by using a trained behavior determination model.
According to another aspect of the embodiments of the present invention, there is also provided a road vehicle, including: the vehicle-mounted camera is arranged at a windshield in front of the vehicle and used for acquiring road images of a road in front; and the vehicle-mounted control unit is connected with the vehicle-mounted camera and executes any one of the detection methods of the vehicle violation behaviors.
According to another aspect of the embodiments of the present invention, there is also provided an in-vehicle electronic apparatus, including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the method of detecting a vehicle violation via execution of the executable instructions.
According to another aspect of the embodiment of the present invention, there is further provided a computer-readable storage medium, where the computer-readable storage medium includes a stored computer program, and when the computer program runs, the apparatus where the computer-readable storage medium is located is controlled to execute any one of the above methods for detecting vehicle violation behaviors.
The method and the device can be applied to scenarios in which vehicles' illegal lane change behavior on the road is monitored in real time: the driving trajectory of the vehicle ahead is monitored, time-series histogram features are extracted from the time-series state information over the vehicle's whole crossing process, and solid-line lane change behavior is identified and reported in combination with the behavior determination model, thereby helping drivers develop good driving habits and reducing the traffic accidents caused by such factors.
According to the method, a camera is controlled to collect road images in front of the current vehicle based on preset calibration parameters; whether other vehicles on the road ahead exhibit lane crossing behavior while driving is judged based on the relative position obtained by analyzing each of the plurality of road images; if another vehicle exhibits lane crossing behavior, the original time-series lane line features over its complete lane crossing process are extracted; and whether the lane crossing behavior of the other vehicle is an illegal lane change is judged with a trained behavior determination model based on the original time-series lane line features. In this embodiment, whether a vehicle's lane crossing behavior is an illegal lane change is analyzed from the time-series information of the vehicle's lane change during driving: the lane crossing behavior is combined with the lane line attribute features over the whole time-series process, and the behavior determination model classifies them to obtain the final result and judge whether the current lane crossing behavior is a violation. Because the analyzed lane crossing features carry temporal information, the robustness of the detection result is significantly improved, which solves the technical problem in the related art that, because the lane line attributes are not combined with the whole time-series process of the vehicle crossing the line, the features lack temporal information and the robustness of the detection result is low.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of an alternative method of vehicle violation detection in accordance with an embodiment of the present invention;
FIG. 2 is a schematic diagram of an alternative extraction vehicle detection box and lane line detection box according to an embodiment of the invention;
FIG. 3 is a schematic illustration of an alternative calibration of the relative position of other vehicles to the lane lines in accordance with embodiments of the present invention;
FIG. 4 is a schematic diagram of an alternative timing histogram extraction according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an alternative vehicle violation determination utilizing an over-the-line behavior module, according to embodiments of the present invention;
FIG. 6 is a schematic diagram of an alternative use of a wide-angle camera to analyze a vehicle for violations in accordance with an embodiment of the present invention;
FIG. 7 is a diagram illustrating an alternative method for determining line crossing behavior in consideration of different illumination intensities according to an embodiment of the present invention;
fig. 8 is a schematic diagram of an alternative vehicle violation detection arrangement in accordance with an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The method and the device can be applied to various road monitoring scenes or vehicle violation lane change detection scenes, and can confirm whether the current lane crossing behavior violates the regulations or not through the time sequence information of the vehicle in the driving process. The present application is described below with reference to various embodiments.
Example one
In accordance with an embodiment of the present invention, there is provided an embodiment of a method for detecting vehicle violations, it being noted that the steps illustrated in the flowchart of the figure may be performed in a computer system, such as a set of computer executable instructions, and that while a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than that presented herein.
Fig. 1 is a flow chart of an alternative method for detecting vehicle violations in accordance with an embodiment of the present invention, as shown in fig. 1, the method including the steps of:
step S102, controlling a camera to collect a road image in front of a current vehicle based on preset calibration parameters;
step S104, judging whether other vehicles on the front road have line crossing behaviors in the driving process based on the corresponding relative position of each road image obtained by analyzing the plurality of road images;
step S106, if other vehicles have lane crossing behaviors, extracting original time sequence lane line characteristics of the other vehicles in the complete lane crossing process;
and step S108, judging whether the lane crossing behavior of other vehicles is illegal lane change or not by using the trained behavior judgment model based on the original time sequence lane line characteristics.
Through the above steps, a camera is controlled to collect road images in front of the current vehicle based on preset calibration parameters; whether other vehicles on the road ahead exhibit lane crossing behavior while driving is judged based on the relative position obtained by analyzing each of the plurality of road images; if another vehicle exhibits lane crossing behavior, the original time-series lane line features over its complete lane crossing process are extracted; and whether the lane crossing behavior of the other vehicle is an illegal lane change is judged with the trained behavior determination model based on the original time-series lane line features. In this embodiment, the lane crossing behavior is analyzed from the time-series information of the vehicle's lane change during driving and is combined with the lane line attribute features over the whole time-series process; the behavior determination model classifies them to obtain the final result and judges whether the current lane crossing behavior is a violation. Because the analyzed features carry temporal information, the robustness of the detection result is significantly improved, which solves the technical problem in the related art that the features lack temporal information and the detection result has low robustness when the lane line attributes are not combined with the whole time-series process of the vehicle crossing the line.
The following describes embodiments of the present invention in detail with reference to various implementation steps.
Optionally, before controlling the camera to collect the road image in front of the current vehicle based on the preset calibration parameter, the detection method further includes: installing a camera at a front windshield of a current vehicle; calibrating a vehicle cover area and a horizon position according to the installation position of the camera; and determining preset calibration parameters and an image detection area based on the calibrated vehicle cover area and the horizontal line position.
In this application, during calibration the camera may be installed at the front of the vehicle (for example, inside the vehicle close to the windshield) so that it can conveniently capture images of the road ahead; whether the lane change of a vehicle ahead crosses a line, and whether that crossing is a violation, are analyzed from multiple images or videos captured from the current vehicle. According to the installation position of the camera, the hood and the horizon position are calibrated (which also copes with a wide field of view, FOV).
In this embodiment, when collecting and labeling material, the camera and calibration parameters used in the formal deployment are used to collect image material on roads of various scenes (for example, urban and rural roads, elevated roads, expressways, and other road grades). After the video is acquired, the positions and attributes (dashed or solid) of the lane lines in the video frames are labeled, and video segments with and without solid-line crossings inside the effective detection area are cut out. Different weather conditions such as sunny, cloudy, and rainy days, and illumination conditions such as strong light and backlight, are considered comprehensively during collection. The behavior determination model trained on the collected material treats the illegal solid-line crossing as a time-series behavior: it cannot be judged from a single piece of state information alone whether a violation is occurring; the attributes of the line on which the vehicle is located during the current crossing must be considered together with the current state information, and whether a violation occurs is judged from the time-series information.
And S102, controlling a camera to acquire a road image in front of the current vehicle based on preset calibration parameters.
In this embodiment, the vehicle types of the current vehicle include, but are not limited to: trucks, cars, sports cars, buses, and the like.
The collected road images may be color images, gray-scale images, or binary images, and the road images of the road ahead are captured by at least one camera installed near the front windshield of the current vehicle. It should be noted that if the time-series classification network is to discriminate images of a given type, it needs to be trained on material of that image type.
Optionally, in this embodiment, a vehicle driving on the road photographs the road and the vehicles ahead to analyze their lane crossing behavior. Alternatively, the illegal lane change behavior of vehicles can be monitored automatically in a fixed scene (that is, with a fill light and camera on the roadside, or a road monitor), without additional calibration of the lane line attributes or manual formulation of illegal lane change rules.
And step S104, judging whether other vehicles on the front road have line crossing behaviors in the driving process based on the corresponding relative position of each road image acquired by analyzing the plurality of road images.
With respect to the above embodiment, judging whether other vehicles on the road ahead exhibit lane crossing behavior while driving, based on the relative position obtained by analyzing each of the plurality of road images, includes: determining the relative position between the other vehicle and the lane line contained in each road image by combining the vehicle information and the lane line information obtained by analyzing that road image, wherein the relative position is the distance from the vehicle to the lane line as a proportion of the vehicle width; and integrating the relative positions over the plurality of road images to judge whether another vehicle on the road ahead exhibits lane crossing behavior while driving.
In this embodiment, the vehicle information includes but is not limited to: vehicle height, vehicle width, vehicle length, the shape of the vehicle rear, and the positions of the vehicle tires. The lane line information includes but is not limited to: the color of the lane line, the edge positions of the lane line, the length of the lane line, and the relative position between the lane line and the lane center line. The vehicle information and the lane line information make it convenient to locate the vehicle's driving position while it is driving, and the motion trajectory of the vehicle is further monitored and analyzed based on the vehicle information and lane line information acquired at multiple moments.
In an optional implementation of this embodiment, the vehicle information and lane line information obtained by analyzing each road image are determined as follows: each road image is analyzed to calibrate at least one vehicle detection frame and at least one lane line detection frame, where each vehicle detection frame corresponds to one other vehicle and each lane line detection frame corresponds to one lane line; and the vehicle information of each other vehicle and the lane line information are determined from the at least one vehicle detection frame and the at least one lane line detection frame.
In this embodiment, when analyzing a road image, the road pixel region, the lane line pixel region, and the vehicle pixel region in the image can be distinguished, and the vehicle detection frame and the lane line detection frame are calibrated on that basis. The two detection frames can be understood as regions of interest (ROI) identified in the image, and their shapes include but are not limited to rectangles, squares, and circles. The vehicle detection frame can contain all or part of each vehicle; for example, a rectangle can delimit the rear of the vehicle ahead or the rear of a vehicle in an oblique direction (when driving, vehicles usually follow a straight line along the road, and a road usually has several lanes in which vehicles drive in parallel, so the current vehicle observes the rear of the vehicle in the lane ahead and sees the vehicles in other lanes obliquely). The lane line detection frame needs to contain all or part of a lane line; because vehicles may block part of the lane line, only the exposed portion of the lane line may appear in the frame, and the extension direction and length of the lane line can be inferred from its visible positions, so that a lane line detection frame containing one complete lane line can be determined.
Fig. 2 is a schematic diagram of an optional extraction of vehicle detection frames and a lane line detection frame according to an embodiment of the present invention. As shown in Fig. 2, after the captured image in front of the vehicle is analyzed, a plurality of vehicle detection frames are delimited, the lane line detection frame (indicated by the line frame in Fig. 2) is determined, and the time-series relative position between each vehicle and the lane line is determined from the information contained in the vehicle detection frames and the lane line detection frame. Meanwhile, in this embodiment, the lane line number, the per-image lane line confidence, and the relative position information can be output in real time in the detection frame, and the time-series lane line confidences can also be displayed.
In this embodiment, the attributes of the lane line on which the vehicle is located are considered comprehensively together with the whole crossing process of the current target vehicle, and the time-series information is used to judge whether the current line-crossing behavior is a violation. First, whether the target vehicle exhibits line-crossing behavior is determined; specifically, when the target vehicle completes, in time order, the three states of approaching, pressing, and leaving the same lane line, it is judged that the target vehicle has crossed the line, and all information over the target vehicle's whole crossing process needs to be recorded.
As an optional implementation of this embodiment, determining the relative position between the other vehicle and the lane line contained in each road image by combining the vehicle information and the lane line information obtained by analyzing that road image includes: obtaining, based on the vehicle information and the lane line information, the position of a preset point of the vehicle detection area and the position of the vehicle-lane-line intersection point, where the vehicle-lane-line intersection point is the intersection of the horizontal straight line through the preset point with the lane line or its extension, or the perpendicular foot from the preset point to the lane line or its extension; and determining the relative position with a first formula from the position of the preset point of the vehicle detection area, the position of the vehicle-lane-line intersection point, and the width of the vehicle detection frame. Specifically, the preset point of the vehicle detection area is any point of the vehicle detection frame area, such as the center of the bottom edge of the vehicle detection frame or the center of the vehicle detection frame.
Fig. 3 is a schematic diagram of an alternative way of calibrating the relative position between another vehicle and a lane line according to an embodiment of the present invention. As shown in Fig. 3, taking as an example the case where the preset point of the vehicle detection area is the midpoint of the bottom edge of the vehicle detection frame and the vehicle-lane-line intersection point lies on the lane line extension, p is the midpoint of the bottom edge of the vehicle detection frame and q is the vehicle-lane-line intersection point (the intersection of the horizontal straight line through p with the lane line extension). The relative position between the other vehicle in the detection frame and the lane line is determined with the following first formula, and the lane crossing behavior of the vehicle is analyzed from the three time-ordered states of approaching, pressing, and leaving the same lane line. The first formula is:

    ratio = (x_p - x_q) / width_vehicle

where x_p is the horizontal position of the midpoint of the bottom edge of the vehicle detection frame, x_q is the horizontal position of the intersection of the horizontal line through that midpoint with the lane line extension, and width_vehicle is the width of the vehicle detection frame. The ratio is thus the distance in the image from the center of the vehicle's bottom edge to the lane line as a proportion of the vehicle detection frame width, and it characterizes the relative position between the other vehicle in the detection frame and the lane line. Under perspective, the farther a detected vehicle is from the acquisition position, the smaller the vehicle width and the lane width appear in the image. If the relative position were represented simply by the image distance from the center of the bottom edge of the vehicle detection frame to the lane line, then even when two vehicles are at the same distance from the lane line, the nearer and farther vehicles would yield different values, which does not match the actual situation. Characterizing the relative position as a proportion of the detection frame width means that, as long as the vehicle is at the same distance from the lane line, the ratio computed by the first formula does not change with the vehicle's distance from the acquisition position, so the obtained relative positions are all under the same judgment reference.
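As a concrete illustration, the relative-position calculation can be sketched in a few lines of Python; the function names and the simple line-intersection helper are hypothetical and assume the lane line is locally approximated by a straight segment.

```python
def lane_line_x_at_y(line_p1, line_p2, y):
    """Approximate the lane line locally as a straight segment and return the
    x coordinate where it (or its extension) crosses the horizontal line at y."""
    (x1, y1), (x2, y2) = line_p1, line_p2
    if y2 == y1:                                 # degenerate horizontal segment
        return x1
    t = (y - y1) / (y2 - y1)
    return x1 + t * (x2 - x1)

def relative_position(vehicle_box, line_p1, line_p2):
    """ratio = (x_p - x_q) / width_vehicle, where p is the midpoint of the bottom
    edge of the vehicle detection frame and q is the intersection of the
    horizontal line through p with the lane line (or its extension)."""
    left, top, width, height = vehicle_box       # frame as (x, y, w, h)
    x_p = left + width / 2.0
    y_p = top + height                           # bottom edge of the frame
    x_q = lane_line_x_at_y(line_p1, line_p2, y_p)
    return (x_p - x_q) / float(width)
```

Because the signed ratio is normalized by the detection frame width, two vehicles at different depths but at the same distance from the line yield comparable values, which is the point made above.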
Optionally, determining the vehicle information and lane line information of each other vehicle through at least one vehicle detection frame and at least one lane line detection frame includes: analyzing the lane line trend vector and the lane line attribute information based on the lane line detection frame to obtain lane line information; and analyzing the vehicle position, the vehicle height and the vehicle width of each other vehicle based on the vehicle detection frame to obtain vehicle information.
As another option, analyzing the lane line trend vector and the lane line attribute information based on the lane line detection frame to obtain the lane line information includes: inputting the lane line detection frame into a lane line model and analyzing the attribute feature vector of the lane line with the lane line model to obtain the lane line information, wherein the lane line model is trained in advance: during training, a lane line training sample set is extracted with a preset classification frame according to the marked lane line positions and input into a convolutional neural network system to train the detection network and obtain the lane line model. Alternatively, the lane line information is obtained by a traditional image processing method, in which the lane line marking positions are obtained after image preprocessing of the lane line detection frame, and the lane line trend vector and lane line attribute information are analyzed based on the lane line marking positions to obtain the lane line information, wherein the image preprocessing includes binarization, image denoising, and lane line segmentation.
In this embodiment, integrating the relative positions over the plurality of road images to judge whether another vehicle on the road ahead exhibits line-crossing behavior while driving includes: if the absolute value of the first relative position is smaller than a first threshold and there is a positive/negative change between the second relative position and the first relative position, the other vehicle exhibits line-crossing behavior while driving; otherwise, the other vehicle does not exhibit line-crossing behavior while driving.
Specifically, the lane crossing behavior of a vehicle while driving is not the behavior at a single moment; whether the vehicle ahead crosses the lane line is determined by observing that it completes a full motion from left to right, or from right to left, relative to the same lane line. By the definition of the relative position, the relative position has opposite signs when the vehicle is on the two sides of the lane line. In the complete crossing process, the plurality of road images contain the corresponding time-series relative positions, and there must be a case where the absolute value of the relative position is smaller than the preset first threshold and a case where the relative position changes sign. Integrating the relative positions over the plurality of road images, the following first judgment condition is used to determine whether the vehicle exhibits line-crossing behavior while driving:
    result = lane change,  if |ratio_1| < threshold_1 and ratio_1 * ratio_2 < 0
             not,          otherwise

where ratio_1 is the relative position in a first road image, ratio_2 is the relative position in a second road image that follows the first in time order, and threshold_1 is the first threshold. A result of "lane change" indicates that a line-crossing behavior has occurred, and "not" indicates that no line-crossing behavior has occurred.
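A minimal sketch of this first judgment condition, assuming the per-frame ratios are collected in temporal order; the threshold value is a placeholder, not a value from the patent.

```python
def has_crossed_line(ratios, first_threshold=0.2):
    """Return True if the sequence of relative positions shows a crossing: some
    frame comes close to the line (|ratio| below the first threshold) and a later
    frame lies on the other side of the line (sign change)."""
    for i, ratio_1 in enumerate(ratios):
        if abs(ratio_1) < first_threshold:
            for ratio_2 in ratios[i + 1:]:
                if ratio_1 * ratio_2 < 0:        # positive/negative change => crossed
                    return True
    return False

# Example: has_crossed_line([0.6, 0.3, 0.1, -0.05, -0.4]) -> True
```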
In this embodiment, the time-series relative positions formed by the plurality of road images are integrated, and whether the vehicle ahead crosses the lane line is judged by observing a complete crossing process from left to right or from right to left relative to the same lane line, rather than relying only on the position information at a single moment. This avoids misjudging a vehicle that briefly presses the line as having crossed it, and improves the accuracy and robustness of the line-crossing judgment.
And step S106, if the other vehicles have the lane crossing behavior, extracting the original time sequence lane line characteristics of the other vehicles in the complete lane crossing process.
In an optional implementation of this embodiment, extracting the original time-series lane line features over the complete lane crossing process of the other vehicle includes: determining the approach frame and the end frame of the complete crossing process according to the relative positions of the plurality of road images, and obtaining a timing diagram of the complete crossing process; and extracting the original time-series lane line features from the timing diagram.
The timing diagram consists of the corresponding frames of the vehicle's complete crossing process and may include the time point and the relative position, as well as the lane line attribute information; with the lane line as the time-series reference line, the vehicle's driving position and line-crossing behavior are recorded as time advances during driving.
In this embodiment, lane lines on a road are conventionally represented as long lines; at the end of a road or at the intersection of two roads, a solid or dashed line perpendicular to the extending direction of the road appears ahead, usually marked in white on the road surface. Therefore, when the attribute feature vector of the lane line is analyzed through the lane line model, the positions at which the vehicle approaches (or has pressed, or is leaving) a lane line, an intersection or the end of the road are mainly detected, and the attribute information of the line at the vehicle's current position is analyzed.
When traffic flow on the road is heavy, the lane-line information contained in a single frame image may be incomplete, for example because the line is blocked by a vehicle, and a dashed line may then be wrongly judged as a solid line. To avoid this, when another vehicle has a line-crossing behavior with respect to the same lane line, the attribute information of that lane line needs to be further determined over the complete crossing process. The complete crossing process comprises three states, approaching, pressing and leaving the line, and the type and state of the lane line do not change during this process; therefore the attribute information of the lane line in the corresponding frames is extracted and recorded, and detection and judgment are performed on the time-series lane-line features, which benefits the stability and accuracy of the detection result.
Optionally, determining the close frame and the end frame of the complete crossing process includes: if the absolute value of the relative position is smaller than a second threshold, determining the corresponding frame as a close frame of the complete crossing process; and if the absolute value of the relative position is greater than a third threshold and its sign has changed relative to the close frame, determining the corresponding frame as an end frame of the complete crossing process.
If the vehicle has approached the lane line, the attribute information of the line it is currently at is first recorded. One (but not the only) embodiment is as follows:
For the attribute information of the lane line, in this embodiment an xgboost traditional machine learning algorithm, or the basic network structure of a deep learning network such as MobileNet, ResNet, VGG, ImageNet and the like, may be used as the classification framework. After adaptive adjustment according to the lane-line labeling positions, a lane-line training sample set is extracted and input into a convolutional neural network system for training of the detection network; after convolution calculation through the basic network, the cross-entropy loss is calculated for back propagation. The pixel region of the target lane line is then input into the trained lane-line model to obtain an attribute feature vector $f_i$, which is recorded. In this embodiment, the attribute information extracted at this time may be the features of a certain layer or the final classification scores; for the lane-line attributes, conventional description information such as HOG, Haar and LBP features may also be extracted.
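For illustration, a minimal PyTorch-style sketch of such a lane-line attribute classifier is given below; the MobileNetV2 backbone, the number of attribute classes, the optimizer settings and the helper names are assumptions made for the example, not the disclosed implementation:

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumed attribute classes for illustration: dashed, solid, double line, bus-area edge, etc.
NUM_ATTRIBUTES = 6

model = models.mobilenet_v2(num_classes=NUM_ATTRIBUTES)   # basic backbone as classifier
criterion = nn.CrossEntropyLoss()                          # cross-entropy loss
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(lane_line_crops, labels):
    """One training step on a batch of lane-line pixel regions cropped at labeled positions."""
    logits = model(lane_line_crops)      # forward pass through the base network
    loss = criterion(logits, labels)     # cross-entropy loss
    optimizer.zero_grad()
    loss.backward()                      # back propagation
    optimizer.step()
    return loss.item()

def attribute_feature(lane_line_crop):
    """Attribute feature vector f_i for one cropped lane-line region (classification scores)."""
    model.eval()
    with torch.no_grad():
        return torch.softmax(model(lane_line_crop.unsqueeze(0)), dim=1).squeeze(0)
```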
After it is determined that the vehicle is approaching the lane line, the vehicle may, as time passes, press and then leave the lane line. First, whether the vehicle has pressed the lane line, i.e. whether the current frame is the close frame, is determined using the following second judgment condition:
$$\text{result}=\begin{cases}\text{close}, & |ratio_i|<thr_2\\ \text{not}, & \text{otherwise}\end{cases}$$
wherein $ratio_i$ is the relative position of the current frame and $thr_2$ is the second threshold; close indicates that the line has been pressed, and not indicates that the lane line has not yet been pressed.
If the vehicle has pressed the lane line, the attribute information of the current lane line is recorded. To determine whether the vehicle has completed a full crossing behavior, whether the vehicle has left the lane line, i.e. whether the current frame is the leave frame, is then determined using the following third judgment condition:
$$\text{result}=\begin{cases}\text{leave}, & |ratio_i|>thr_3 \ \text{and}\ ratio_i\cdot ratio_{close}<0\\ \text{not}, & \text{otherwise}\end{cases}$$
wherein $ratio_{close}$ is the relative position of the close frame and $thr_3$ is the third threshold; leave indicates that the vehicle has left the lane line, and not indicates that the vehicle has not left the lane line.
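Likewise, the second and third judgment conditions can be sketched together as a small segmentation routine; the threshold values used here are placeholders, not the disclosed parameters:

```python
def find_crossing_segment(relative_positions, thr2=0.2, thr3=0.4):
    """Locate the close (line-pressing) frame and the leave frame of one complete
    crossing from the time series of relative positions. thr2 and thr3 stand in
    for the second and third thresholds; their values here are illustrative."""
    close_idx = None
    for i, ratio in enumerate(relative_positions):
        if close_idx is None:
            # Second judgment condition: small |ratio| -> the vehicle has pressed the line.
            if abs(ratio) < thr2:
                close_idx = i
        else:
            # Third judgment condition: |ratio| has grown past thr3 and the sign has
            # flipped relative to the close frame -> the vehicle has left the lane line.
            if abs(ratio) > thr3 and ratio * relative_positions[close_idx] < 0:
                return close_idx, i  # frames spanning the complete crossing
    return close_idx, None  # crossing not completed within the observed frames
```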
For a behavior determined to be a lane change, after the timing diagram of the complete crossing process is determined, the time-series information of the whole crossing process is analyzed from the timing diagram to obtain a series of original time-series lane-line features of indefinite length $(f_1, f_2, f_3, \dots, f_{n-1}, f_n)$; these original time-series lane-line features are then input into the behavior judgment model, and whether the crossing behavior violates the rules is judged based on the time-series information of the whole process.
In this embodiment, the line-crossing behavior of a vehicle during driving can thus be analyzed, and whether a violation occurs in that line-crossing behavior can then be determined.
And step S108, judging whether the lane crossing behavior of other vehicles is illegal lane change or not by using the trained behavior judgment model based on the original time sequence lane line characteristics.
As an optional implementation manner of this embodiment, based on the original time series lane line features, the trained behavior determination model is used to determine whether the lane crossing behavior of another vehicle is a violation lane change, including: inputting the original time sequence lane line characteristics into a solid line confidence coefficient network to obtain a time sequence solid line confidence coefficient; and inputting the relative position and the time sequence solid line confidence degree contained in the corresponding frame in the complete line crossing process into a behavior judgment model to judge whether the line crossing behavior of other vehicles is violation lane change or not, wherein the behavior judgment model is classified into an integrated behavior judgment model or a time sequence behavior judgment model.
In this embodiment, the behavior determination model needs to analyze whether the vehicle crossing behavior violates regulations in the complete crossing process by combining the time sequence information. The behavior determination model in the present embodiment may include two cases, one is an integrated behavior determination model, and the other is a time-series behavior determination model. Optionally, the vehicle violation may be determined in this embodiment through various embodiments.
In the first embodiment, the vehicle violation behavior is judged by the integrated behavior judgment model.
As an optional implementation manner of this embodiment, when the behavior determination model is an integrated behavior determination model, the method includes: obtaining histogram features by classifying and counting the confidence of the time sequence solid line by combining the relative positions contained in the corresponding frames of each time sequence in the complete line crossing process; and inputting the histogram features into the integrated behavior judgment model to judge whether the crossing behavior of other vehicles is illegal lane change or not.
Optionally, the obtaining of the histogram feature by classifying and counting the confidence of the time-series solid line by combining the relative positions included in the corresponding frames of each time series in the complete line crossing process includes: dividing the confidence coefficient of a time sequence solid line into n confidence coefficient sets by combining the relative position contained in each time sequence corresponding frame in the complete line crossing process and a preset segmentation threshold; extracting k-dimensional histogram features from the classification confidence corresponding to each frame of road image in each confidence set to obtain n k-dimensional histogram features; and connecting the obtained n k-dimensional histogram features in series according to a time sequence relation in a historical process to obtain n x k-dimensional histogram features. Specifically, the preset segmentation threshold may be adaptively adjusted according to the recognition accuracy.
Fig. 4 is a schematic diagram of an optional time series histogram extraction according to an embodiment of the present invention, and as shown in fig. 4, time series histogram features are obtained by analyzing the confidence of a time series lane line picture and the relative position of a corresponding time series vehicle lane line. In fig. 4, the video stream refers to a video stream captured by a vehicle. And analyzing the confidence degree of the time sequence lane line picture indicated by each video frame in the video stream at different moments and the corresponding time sequence vehicle lane line relative position.
First, the feature $f_i$ of each frame in the lane-change process is obtained, and the corresponding time-series solid-line confidence is obtained through the solid-line confidence network. According to the ratio (denoted $p$ in this application) of the distance from the vehicle to the lane line to the vehicle width in each frame image during line pressing, the time-series solid-line confidences are divided into n sets by the following second formula, where the second formula is:
$$m_i=\{\,j \mid (i-1)\cdot\theta \le |p_j| < i\cdot\theta\,\},\qquad i=1,2,\dots,n$$
wherein $\theta$ is a preset segmentation threshold and $p_j$ is the relative position of frame $j$. For the solid-line confidences of the pictures within each set $m_i$, a statistical k-dimensional histogram feature is extracted, yielding $w=(w_1,w_2,w_3,\dots,w_n)$ over the n sets.
In this embodiment, when extracting the histogram feature, the following third formula is used for extraction, where the third formula is:
$$w_i=\frac{N_i}{N}$$

wherein $w_i$ is the ratio of the number of frame pictures whose solid-line confidence falls within the i-th confidence range ($N_i$) to the total number of frames ($N$). Finally, in order to make use of the time-series information of the violation crossing process, the n k-dimensional features are concatenated in time-series order to obtain an n×k-dimensional time-series histogram feature.
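A minimal NumPy sketch of this feature construction is given below, assuming that the segmentation threshold θ acts as a segment width over |p|; the parameter values are illustrative only:

```python
import numpy as np

def timeseries_histogram_feature(solid_confidences, rel_positions, n=4, k=8, theta=0.25):
    """Build the n*k-dimensional time-series histogram feature.

    solid_confidences: per-frame solid-line confidence from the confidence network (0..1).
    rel_positions: per-frame ratio p of vehicle-to-lane-line distance to vehicle width.
    n, k and theta are illustrative values.
    """
    solid_confidences = np.asarray(solid_confidences, dtype=float)
    rel_positions = np.asarray(rel_positions, dtype=float)
    total = len(solid_confidences)

    features = []
    for i in range(n):
        # Set m_i: frames whose |p| falls into the i-th segment defined by theta.
        in_set = (np.abs(rel_positions) >= i * theta) & (np.abs(rel_positions) < (i + 1) * theta)
        # k-bin histogram of solid-line confidences, normalised by the total frame count.
        hist, _ = np.histogram(solid_confidences[in_set], bins=k, range=(0.0, 1.0))
        features.append(hist / max(total, 1))
    # Concatenate the n histograms in segment (time-series) order -> n*k dimensions.
    return np.concatenate(features)
```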
Optionally, the integrated behavior determination model is a model obtained by performing integrated learning training by using information entropy gain.
In this embodiment, after fixed-length features are extracted from violation crossing behaviors of different time-series lengths, time-series histogram features of uniform dimension are obtained through classification and statistics. These uniform-dimension histogram features are then input into the integrated behavior judgment model trained by ensemble learning for classification, which improves the generalization of the model; for example, a random forest may be used, and the integrated behavior judgment model is finally obtained through ensemble-learning training using the information entropy gain. In this application, whether a solid-line lane-change violation occurs during the crossing of another vehicle is analyzed by the integrated behavior judgment model on the basis of the original time-series lane-line features.
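For illustration, a sketch of such an integrated behavior judgment model using scikit-learn's random forest with the entropy criterion might look as follows; the estimator count and the data layout are assumptions of the sketch:

```python
from sklearn.ensemble import RandomForestClassifier

def train_integrated_model(X, y):
    """X: one n*k-dimensional time-series histogram feature per crossing event;
    y: 1 for a violation lane change, 0 otherwise (assumed labeling)."""
    # Information-entropy gain as the split criterion, as described above.
    model = RandomForestClassifier(n_estimators=100, criterion="entropy")
    model.fit(X, y)
    return model

def is_violation(model, histogram_feature):
    """Classify one crossing event from its time-series histogram feature."""
    return bool(model.predict([histogram_feature])[0])
```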
In the second embodiment, the violation behaviors are judged by utilizing a time-sequence behavior judgment model.
As an implementation manner of this embodiment, when the behavior judgment model is a time-series behavior judgment model, the detection method includes: directly inputting the relative positions and the time-series solid-line confidences contained in the corresponding frames of the complete crossing process, arranged in time order, into the time-series behavior judgment model to judge whether the crossing behavior of the other vehicle is a violation lane change. Optionally, the time-series behavior judgment model is obtained through convolution calculation by a basic network and back-propagation training on the calculated cross-entropy loss.
Specifically, for a behavior that has been determined to be a lane change, the series of original time-series lane-line features of indefinite length $(f_1, f_2, f_3, \dots, f_{n-1}, f_n)$ is input into the time-series behavior judgment model: the features are arranged in time order and input into a Transformer-XL model for training of the detection network; after convolution calculation through the basic network, the cross-entropy loss is calculated for back propagation, and the trained behavior recognition model is obtained.
Optionally, for the network type used by the time-series behavior judgment model, a conventional machine learning algorithm such as SVM may be used, or a time-series behavior judgment network such as an RNN, LSTM, Transformer or ViT may be used.
In this embodiment, the method for determining violation behaviors by using the time-series behavior determination model belongs to a data-driven mode, and has a strong data fitting capability, on one hand, on the premise that platform computation power is sufficient, the more sufficient the training sample is, the better the determination effect of the time-series behavior determination model obtained by training is, and on the other hand, the richer the input time-series information is, the lower the computation efficiency of the time-series behavior determination model is, and the more computation resources are occupied.
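As an illustration only, a minimal time-series behavior judgment network of the LSTM kind mentioned above can be sketched as follows; the feature dimension, hidden size and two-class output head are assumptions of this sketch rather than the disclosed configuration:

```python
import torch
import torch.nn as nn

class TimeSeriesBehaviorModel(nn.Module):
    """Classifies a variable-length sequence of per-frame features
    (lane-line attribute vector + relative position + solid-line confidence)."""

    def __init__(self, feature_dim=10, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)   # violation / no violation

    def forward(self, seq):                     # seq: (batch, time, feature_dim)
        _, (h_n, _) = self.lstm(seq)
        return self.head(h_n[-1])               # logits from the last hidden state

# Training would use cross-entropy loss and back propagation, as for the other networks:
# loss = nn.CrossEntropyLoss()(model(seq), labels); loss.backward(); optimizer.step()
```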
For an identified solid-line lane-change behavior, it is still necessary to confirm whether it is a violating solid-line lane change; in practice, not all solid-line lane changes are violations. In this application, lane changes that are allowed to cross the line can be screened out of the vehicle's crossing behaviors by the line-crossing module, further improving the accuracy of the violation judgment.
Optionally, the detection method further includes: and judging whether the lane crossing behavior of other vehicles is lane changing capable of crossing the lane by using a lane crossing module based on the characteristics of the original time sequence lane and the vehicle information in each road image.
Fig. 5 is a schematic diagram of an alternative method for determining vehicle violation behaviors using the line-crossing behavior module according to an embodiment of the present invention. As shown in fig. 5, after the relative positions between the vehicle and the lane line in the time-series information are obtained by analyzing the captured pictures (each video frame of the video Stream in fig. 5), the time-series lane-line graph of the line-crossing behavior is analyzed by the lane-line attribute network; the result, combined with the time-series lane-line attribute features, is input into the behavior judgment model, and judgment results are obtained from the three line-crossing sub-modules (the virtual-real double-line module, the bus area module, and the other line-crossing modules) respectively, indicating whether the vehicle's line-crossing behavior violates the rules.
In fig. 5, the line-crossing sample is determined using the relative position relationship between the vehicle and the lane line, and the attribute feature of the lane line in each frame is recorded; as in the training mode, the attribute features of the lane line over the whole process are input into the trained behavior judgment model, which finally determines whether the event is a solid-line crossing event. Because the computing capacity of vehicle-mounted hardware is limited and the computation load of deep learning is relatively large, forward calculation of the whole network is required; only after customized optimization for the hardware can it be ported to the corresponding device, so that whether a lane-change behavior of a preceding vehicle exists can be detected in real time.
As an optional implementation manner of this embodiment, based on the original time-series lane-line features and the vehicle information in each road image, judging with the line-crossing module whether the crossing behavior of the other vehicle is a lane change that may cross the line includes: controlling each sub-module of the line-crossing module to slide a sliding window of preset time-series length over the time-series lane-line features to determine the line-type category of the lane line, wherein the line-crossing module comprises a virtual-real double-line sub-module, a bus area sub-module and other line-crossing sub-modules; and judging, by each sub-module of the line-crossing module, whether the crossing behavior of the other vehicle is a lane change that may cross the line based on the vehicle information and the line-type category of the lane line.
For the identified solid line lane change behavior, it needs to further confirm whether the solid line lane change behavior is a violation solid line lane change, wherein the line crossing module includes: virtual and real double-line modules, bus area modules and other modules capable of crossing lines.
(1) For both virtual and real two-wire modules.
In this embodiment, if the line-crossing module is the virtual-real double-line sub-module, determining through this sub-module whether the crossing behavior of another vehicle is a lane change that may cross the line includes: sliding a window of preset time-series length over the original time-series lane-line features to determine a plurality of confidences that the lane line is a virtual-real double line; calculating a first average of these confidences, and judging on that basis whether the lane line is a virtual-real double line, to obtain the line-type category of the lane line; and judging whether the crossing behavior is a lane change that may cross the line based on the lane-crossing direction of the vehicle and the line-type category of the lane line.
Optionally, judging whether the crossing behavior of the other vehicle is a lane change that may cross the line, based on the lane-crossing direction of the vehicle and the line-type category of the lane line, includes: if the lane-crossing direction indicates that the crossing behavior of the other vehicle starts by pressing the line from the right side of the current lane line, and the line-type category of the lane line is a left-dashed-right-solid line, determining that the crossing behavior of the other vehicle is not a lane change that may cross the line; and if the lane-crossing direction indicates that the crossing behavior of the other vehicle starts by pressing the line from the left side of the current lane line, and the line-type category of the lane line is a left-solid-right-dashed line, determining that the crossing behavior of the other vehicle is not a lane change that may cross the line.
In the virtual-real double-line sub-module, for a solid-line lane-change behavior, it is first judged whether the crossed line is a virtual-real double line, and then whether the behavior is a violation event is judged in combination with the lane-change direction of the vehicle. One (but not the only) implementation scheme is as follows; a code sketch follows the two steps:
1) A window of time-series length k is slid over the time-series lane-line features obtained at the violation moment to obtain the average confidences of the dashed-line and solid-line related attributes, and whether the line is a virtual-real double line is judged; the judgment rule is as follows:
$$\text{line type}=\begin{cases}\text{ldrs}, & \text{the average confidence of the left-dashed-right-solid attribute is the larger and exceeds the threshold}\\ \text{lsrd}, & \text{the average confidence of the left-solid-right-dashed attribute is the larger and exceeds the threshold}\end{cases}$$
wherein ldrs represents a left-dashed-right-solid line and lsrd represents a left-solid-right-dashed line; after the specific type of the virtual-real double line is judged, step 2) is entered;
2) The line-crossing direction of the vehicle and the obtained line-type attribute are combined to comprehensively judge whether the vehicle violates the rules; the specific judgment is:
$$\text{result}=\begin{cases}\text{abnormal}, & (\text{direction}=\text{right}\ \text{and}\ \text{type}=\text{ldrs})\ \text{or}\ (\text{direction}=\text{left}\ \text{and}\ \text{type}=\text{lsrd})\\ \text{normal}, & \text{otherwise}\end{cases}$$
wherein abnormal indicates a real violation, normal indicates no violation, right indicates that the crossing starts by pressing the line from the right side of the current lane line, and left indicates that the crossing starts by pressing the line from the left side of the current lane line.
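A sketch of this sub-module's logic, with assumed window length, threshold and confidence inputs, might look as follows:

```python
def virtual_real_double_line_check(ldrs_confs, lsrd_confs, crossing_side, k=5, thr=0.6):
    """Sketch of the virtual-real double-line sub-module.

    ldrs_confs / lsrd_confs: per-frame confidences that the crossed line is a
    left-dashed-right-solid / left-solid-right-dashed double line, taken from the
    time-series lane-line features; crossing_side is "left" or "right" (the side of
    the lane line from which the vehicle starts pressing). k and thr are assumptions.
    """
    def windowed_mean(confs):
        # Best average confidence over a sliding window of time-series length k.
        windows = [confs[i:i + k] for i in range(max(len(confs) - k + 1, 1))]
        return max(sum(w) / len(w) for w in windows)

    ldrs, lsrd = windowed_mean(ldrs_confs), windowed_mean(lsrd_confs)
    if ldrs >= thr and ldrs >= lsrd:
        line_type = "ldrs"          # left dashed, right solid
    elif lsrd >= thr:
        line_type = "lsrd"          # left solid, right dashed
    else:
        return "normal"             # not a virtual-real double line

    # Crossing from the solid side is a violation (abnormal); from the dashed side it is not.
    if (crossing_side == "right" and line_type == "ldrs") or \
       (crossing_side == "left" and line_type == "lsrd"):
        return "abnormal"
    return "normal"
```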
(2) And (5) dividing the bus into modules.
Alternatively, if the lane crossing module is a bus regional sub-module, judging whether the lane crossing behavior of other vehicles is lane changing capable of crossing the lane through the bus regional sub-module includes: adopting a sliding window with a preset time sequence length to slide on the characteristics of the original time sequence lane line so as to determine a plurality of confidence coefficients of the lane line as the side line of the public vehicle area; calculating a second average value of the confidence degrees, and judging whether the lane line is a public vehicle area side line or not based on the second average value to obtain a linear category of the lane line; and judging whether the lane crossing behavior of other vehicles is lane change capable of crossing lanes or not based on the linear type of the lane lines and the time information in the historical process.
In this embodiment, the determining whether the lane crossing behavior of another vehicle is lane change that can cross the lane based on the linear category of the lane and the time information in the history process includes: if the linear type of the lane line indicates that the lane line is a public vehicle area sideline, adopting a sliding window with a preset time sequence length to slide on the vehicle attribute characteristics of other vehicles so as to judge whether the other vehicles are public vehicles or not and obtain a judgment result; if the judgment result indicates that the other vehicle is not the public vehicle, judging whether the historical time period in the historical process is the driving permission time period; if the historical time period in the historical process is a non-allowable driving time period, determining that the lane-crossing behavior of other vehicles is not lane change capable of crossing the lane; and if the historical time period in the historical process is the allowed driving time period, determining that the lane crossing behavior of other vehicles is lane change capable of crossing the lane.
In the bus area sub-module, it is judged whether the line of the solid-line lane-change behavior is a bus-area edge line, and then whether the behavior is a violation event is judged in combination with the vehicle attribute (whether the crossing vehicle is a bus) and the crossing time. One (but not the only) implementation scheme is as follows (a code sketch follows these steps):
1) A window of time-series length k is slid over the time-series lane-line features obtained at the violation moment to obtain the average confidence that the line is a bus-area edge line, and whether it is a bus-area edge line is judged; if so, step 2) is entered;
2) After it is confirmed that the crossed lane line is a bus-area edge line, a window of time-series length k is slid over the time-series vehicle attribute features obtained at the violation moment to judge whether the crossing vehicle is a bus; if it is not, step 3) is entered;
3) After it is confirmed that the crossing vehicle is not a bus, whether the current time is within the permitted driving period is confirmed; if it is not, an alarm is raised.
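A sketch of the bus-area sub-module under the same assumptions (window length, threshold, and an assumed permitted time window) is:

```python
from datetime import time

def bus_area_check(bus_edge_confs, is_bus_confs, crossing_time,
                   allowed=(time(7, 0), time(9, 0)), k=5, thr=0.6):
    """Sketch of the bus-area sub-module: edge-line confidence, vehicle attribute and
    crossing time are combined. The allowed time window, k and thr are illustrative."""
    def windowed_mean(confs):
        windows = [confs[i:i + k] for i in range(max(len(confs) - k + 1, 1))]
        return max(sum(w) / len(w) for w in windows)

    # 1) Is the crossed line a bus-area edge line?
    if windowed_mean(bus_edge_confs) < thr:
        return "normal"
    # 2) Is the crossing vehicle itself a bus?
    if windowed_mean(is_bus_confs) >= thr:
        return "normal"
    # 3) Is the crossing within the permitted driving period?
    start, end = allowed
    if start <= crossing_time <= end:
        return "normal"
    return "abnormal"   # non-bus crossing a bus-area edge outside the permitted period
```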
(3) And other cross-line modules can be divided.
Other line-crossing modules in this embodiment may include, but are not limited to: a module for continuous lane changes of the vehicle across dashed lines, and a module for line-straddling driving behavior.
Optionally, if the lane-crossing module is another lane-crossing sub-module, determining whether the lane-crossing behavior of another vehicle is lane-crossing by using the other lane-crossing sub-module includes: adopting a sliding window with a preset time sequence length to slide on the characteristics of the original time sequence lane line so as to determine a plurality of confidence coefficients of the lane line as the sideline of the line-crossing region; calculating a third average of the confidence coefficients; if the third average value of the confidence degrees is larger than a preset value, determining that the lane-crossing behavior of other vehicles is lane-changing capable of crossing; and if the third average value of the confidence degrees is less than or equal to a preset value, determining that the lane changing behavior of other vehicles is not lane changing capable of crossing.
The other line-crossing module judges whether the line of the solid-line lane-change behavior belongs to a crossable area, and then judges whether the behavior is a violation event. One (but not the only) implementation scheme is as follows: a window of time-series length k is slid over the time-series lane-line attribute features obtained at the violation moment to obtain the average confidence that the line is the edge of a crossable area (such as a stop line); if it is not such an edge, an alarm is raised.
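A corresponding sketch for this sub-module, under the same window and threshold assumptions, is:

```python
def other_crossable_check(crossable_edge_confs, k=5, thr=0.6):
    """Sketch of the other cross-line sub-module: the crossing is permitted only if the
    crossed line is, on average, the edge of a crossable area (e.g. a stop line).
    k and thr are illustrative assumptions."""
    windows = [crossable_edge_confs[i:i + k]
               for i in range(max(len(crossable_edge_confs) - k + 1, 1))]
    mean_conf = max(sum(w) / len(w) for w in windows)
    return "normal" if mean_conf > thr else "abnormal"  # alarm when not a crossable edge
```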
Optionally, after judging with the line-crossing module whether the crossing behavior of the other vehicle is a lane change that may cross the line, the detection method further includes: if it is determined that the crossing behavior of the other vehicle is a violation, marking the violating vehicle; sending out violation prompt information; and reporting the vehicle violation information and the vehicle information to a vehicle management platform.
The video stream is collected by the camera and the pictures can be output in real time; when a vehicle ahead performs a solid-line violation lane change, the violating vehicle can be marked, a violation alarm prompt is issued, and the event is reported to the corresponding platform.
Because many violations occur during the congested evening rush hour, and many of these violating vehicles are close to the current vehicle, such violations can only be captured by a wide-angle camera with a large FOV (field of view). In order not to introduce interference information such as the engine hood, the user side can, after the camera is fixed, mark the position of the hood in the imaged picture to complete the hood position calibration; this is transmitted to the algorithm side so that the hood region is removed automatically. The subsequent scheme of material collection, training, detection and output is the same as in the foregoing embodiment.
Fig. 6 is a schematic diagram of optionally analyzing, with a wide-angle camera, whether a vehicle violation behavior occurs according to an embodiment of the present invention. As shown in fig. 6, after the relative positions between the vehicle and the lane line over time are obtained by analyzing the captured pictures, the line-crossing behavior is represented as a time-series lane-line graph, and whether calibration using the hood in front of the vehicle is required is determined according to the field angle: if the field angle is small, a conventional near-field camera captures the pictures and the time-series lane-line attribute features can be input directly into the behavior judgment model to obtain the judgment result; if the field angle is large, a wide-angle or telephoto camera captures the pictures, calibration processing using the hood can be performed first, and the time-series lane-line attribute features are then input into the behavior judgment model to obtain the judgment result.
In practical application, whether the camera is a wide-angle camera or not can be judged according to the field angle information of the camera, and the corresponding model is adopted for detecting the violation line-crossing event.
In this embodiment, the step of controlling the camera to collect road images in front of the current vehicle based on the preset calibration parameters includes: if the cameras include a telephoto camera and a wide-angle camera, adjusting the focusing information used when analyzing the road images according to the field-angle information of the cameras, and adjusting the calibration information used for calibrating the detection frames of other vehicles.
The field angles of the telephoto camera and the wide-angle camera are known to be $\varepsilon_1$ and $\varepsilon_2$ respectively; the closest-matching camera can be determined from the known field-angle information $\varepsilon$ of the current camera, and whether calibration work is needed is then judged automatically. Finally, the information collected by the two cameras is combined to better detect violation line-crossing at both near and far distances.
Considering that in real life the ambient light may be dim, for example at night, and that an ordinary RGB camera images poorly in dim scenes, the violation event can instead be captured by an infrared camera; the subsequent scheme of material collection, training and detection output is the same as in the foregoing embodiment.
In practical applications, based on the actual illumination intensity $\beta$ outside the vehicle and the illumination intensity thresholds $\beta_1$ and $\beta_2$ ($\beta_1<\beta_2$), relatively stable switching of the image type can be realized automatically according to the following fourth judgment condition, and the corresponding model is then used for violation line-crossing detection; the fourth judgment condition is:
$$\text{image type}=\begin{cases}\text{infrared}, & \beta<\beta_1\\ \text{RGB}, & \beta>\beta_2\\ \text{keep the current type}, & \beta_1\le\beta\le\beta_2\end{cases}$$
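A sketch of this switching logic, in which the behavior between the two thresholds is assumed to keep the current image type, is:

```python
def select_image_type(beta, beta1, beta2, current="rgb"):
    """Sketch of the fourth judgment condition: switch to the infrared camera in dim
    light and back to the RGB camera in bright light; keeping the current type between
    the two thresholds (beta1 < beta2) gives relatively stable switching. The in-between
    behavior is an assumption of this sketch."""
    if beta < beta1:
        return "infrared"
    if beta > beta2:
        return "rgb"
    return current  # hysteresis band: keep the previously selected image type
```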
an optional step of controlling the camera to capture an image of the road ahead of the current vehicle, comprising: analyzing the ambient light parameters around the current vehicle; and if the ambient light parameter is lower than the preset light threshold value, acquiring a road image in front of the current vehicle by using an infrared camera.
Fig. 7 is a schematic diagram of an alternative method for determining line-crossing behavior under different illumination intensities according to an embodiment of the present invention. As shown in fig. 7, after the relative position between each vehicle and the lane line in the time-series information is obtained by analyzing the captured road images, the time-series lane-line graph can be analyzed to obtain the attribute features of the lane line, and different time-series behavior judgment models can then be used to analyze the line-crossing behavior according to the illumination intensity.
In the embodiment, the type of the current camera can be judged according to the scene illumination intensity information, the corresponding model is automatically selected, and the violation behaviors in a dark scene can be better detected.
Through the above embodiments, the time-series information of the whole lane-change process is used to confirm whether the current crossing behavior violates the rules. Specifically, a deep learning model is used to learn the lane-line attributes over the whole process of a violation lane change; the lane-line attribute features of the whole time-series process are combined, the behavior judgment model classifies them to obtain the final result, and whether the current crossing behavior is a violation is judged, thereby helping the driver to standardize driving behavior.
Example two
The embodiment provides a detection device for vehicle violation behaviors, and each implementation unit included in the detection device corresponds to each implementation step in the first embodiment.
Fig. 8 is a schematic diagram of an alternative vehicle violation detection arrangement according to an embodiment of the present invention, which, as shown in fig. 8, may include: a control unit 81, a first judging unit 83, an extracting unit 85, a second judging unit 87, wherein,
the control unit 81 is used for controlling the camera to collect road images in front of the current vehicle based on preset calibration parameters;
the first judging unit 83 is used for judging whether other vehicles on the front road have line crossing behaviors in the driving process based on the corresponding relative position of each road image acquired by analyzing the plurality of road images;
the extraction unit 85 is used for extracting original time sequence lane line characteristics of other vehicles in a complete lane line crossing process when the other vehicles have line crossing behaviors;
and the second judging unit 87 is used for judging whether the lane crossing behavior of other vehicles is illegal lane change or not by using the trained behavior judging model based on the original time sequence lane line characteristics.
In the detection device for vehicle violation behaviors, the control unit 81 controls the camera to collect road images in front of the current vehicle based on the preset calibration parameters; the first judging unit 83 judges, based on the relative position corresponding to each road image obtained by analyzing the multiple road images, whether another vehicle on the road ahead has a line-crossing behavior during driving; when another vehicle has a line-crossing behavior, the extracting unit 85 extracts the original time-series lane-line features of that vehicle over the complete crossing process; and the second judging unit 87 judges, based on the original time-series lane-line features, whether the crossing behavior of the other vehicle is a violation lane change by using the trained behavior judgment model. In this embodiment, whether the crossing behavior of a vehicle is a violation lane change is analyzed on the basis of the time-series information of its lane-change behavior during driving: the crossing behavior is combined with the lane-line attribute features of the whole time-series process, the behavior judgment model classifies them to obtain the final result, and whether the current crossing behavior is a violation is judged. Because the analyzed crossing-behavior features are time-sequential, the robustness of the detection result is significantly improved, which solves the technical problem in the related art that the detection result has low robustness because the features lack time-sequence when the lane-line attributes are not combined with the whole time-series process of the vehicle crossing the line.
Optionally, the first determining unit includes: the first determining module is used for determining the relative position between the other vehicle and the lane line contained in each road image by combining the vehicle information and the lane line information obtained by analyzing each road image, wherein the relative position is the proportion of the distance between the vehicle and the lane line in the width of the vehicle; the first judging module is used for integrating the relative positions of the multiple road images and judging whether other vehicles on the front road have line crossing behaviors in the driving process.
Optionally, the first determining module includes: the first analysis submodule is used for analyzing each road image to calibrate at least one vehicle detection frame and at least one lane line detection frame, wherein each vehicle detection frame corresponds to one other vehicle, and each lane line detection frame corresponds to one lane line; and the first determining submodule is used for determining the vehicle information and the lane line information of each other vehicle through at least one vehicle detection frame and at least one lane line detection frame.
Optionally, the first determining module includes: a second determining submodule, configured to determine that, if the absolute value of the first relative position is smaller than the first threshold and a positive-negative change exists between the second relative position and the first relative position, the other vehicle has a line-crossing behavior during driving; otherwise, the other vehicle has no line-crossing behavior during driving.
Optionally, the first determining module includes: the first obtaining submodule is used for obtaining the position of a preset point of a vehicle detection area and the position of a vehicle lane line intersection point based on vehicle information and lane line information, wherein the vehicle lane line intersection point comprises an intersection point from a horizontal straight line where the preset point of the vehicle detection area is located to a lane line or a lane line extension line, or a vertical intersection point from the position of the preset point of the vehicle detection area to the lane line or the lane line extension line; and the third determining submodule is used for determining the relative position by adopting a first formula according to the point position of the preset point of the vehicle detection area, the point position of the intersection point of the vehicle lane line and the width value of the vehicle detection frame.
Optionally, the first determining sub-module includes: the first analysis submodule is used for analyzing the lane line trend vector and the lane line attribute information based on the lane line detection frame to obtain lane line information; and the second analysis submodule is used for analyzing the vehicle position, the vehicle height and the vehicle width of each other vehicle based on the vehicle detection frame to obtain the vehicle information.
Optionally, the first analysis sub-module includes: a first input sub-module, configured to input the lane line detection frame into a lane line model and analyze the attribute feature vector of the lane line using the lane line model to obtain the lane line information, wherein the lane line model is a pre-trained model obtained by extracting, with a preset classification framework, a lane-line training sample set according to the lane-line labeling positions and inputting it into a convolutional neural network system for training of the detection network; or a second obtaining sub-module, configured to obtain the lane line information by a conventional image processing method, in which the lane-line labeling positions are obtained after image preprocessing of the lane line detection frame, and the lane-line trend vector and the lane-line attribute information are analyzed based on the labeling positions to obtain the lane line information, the image preprocessing comprising: binarization processing, image denoising and lane-line segmentation.
Optionally, the extracting unit includes: the second determining module is used for determining a close frame and an end frame of the complete line crossing process according to the relative positions of the road images and acquiring a timing chart of the complete line crossing process; the first extraction module is used for extracting the original time sequence lane line characteristics from the time sequence diagram.
Optionally, the second determining module includes: the fourth determining submodule is used for determining the frame as a close frame of the complete line crossing process when the absolute value of the relative position is smaller than the second threshold; and the fifth determining submodule is used for determining the relative position as an end frame of the complete crossing process when the absolute value of the relative position is greater than the third threshold and the relative position corresponding to the close frame has positive and negative changes.
Optionally, the second determining unit is configured to: inputting the original time sequence lane line characteristics into a solid line confidence coefficient network to obtain a time sequence solid line confidence coefficient; and inputting the relative position and the time sequence solid line confidence degree contained in the corresponding frame in the complete line crossing process into the behavior judgment model to judge whether the line crossing behavior of other vehicles is illegal lane change or not, wherein the behavior judgment model is classified into an integrated behavior judgment model or a time sequence behavior judgment model.
Optionally, when the behavior determination model is an integrated behavior determination model, the detecting device is further configured to: combining the relative positions contained in the corresponding frames of each time sequence in the complete line crossing process, and classifying and counting the confidence coefficients of the solid lines of the time sequences to obtain histogram features; and inputting the histogram features into the integrated behavior judgment model to judge whether the crossing behavior of the other vehicles is illegal lane change or not.
Optionally, the detection device is further configured to: dividing the confidence of the time sequence solid line into n confidence sets by combining the relative position contained in each time sequence corresponding frame in the complete line crossing process and a preset segmentation threshold; extracting k-dimensional histogram features from the classification confidence corresponding to each frame of road image in each confidence set to obtain n k-dimensional histogram features; and connecting the obtained n k-dimensional histogram features in series according to a time sequence relation in a historical process to obtain n x k-dimensional histogram features.
Optionally, the integrated behavior determination model is a model obtained by performing integrated learning training by using an information entropy gain.
Optionally, when the behavior determination model is a time-series behavior determination model, the detecting device is further configured to: and directly inputting the relative positions and the time sequence solid line confidence degrees contained in the corresponding frames in the complete line crossing process into the time sequence behavior judgment model according to time sequence arrangement so as to judge whether the line crossing behavior of other vehicles is violation lane change or not.
Optionally, the time-series behavior decision model is obtained by convolution calculation of a base network and back propagation training of calculated cross entropy loss.
Optionally, the detection device further includes: and the judging unit is used for judging whether the lane crossing behavior of other vehicles is lane change capable of crossing lanes by using the lane crossing module based on the original time sequence lane line characteristics and the vehicle information in each road image.
Optionally, the determining unit includes: the first control module is used for controlling all the sub-modules of the line-crossing module to respectively adopt a sliding window with preset time sequence length to slide on the characteristics of the time sequence lane line so as to determine the linear category of the lane line, wherein the line-crossing module comprises a virtual line and real line sub-module, a bus area sub-module and other line-crossing sub-modules; and the first judging module is used for judging whether the lane crossing behavior of other vehicles is lane change capable of crossing lanes by utilizing each submodule of the lane crossing modules based on the vehicle information and the linear category of the lane lines.
Optionally, if the cross-line module is a virtual-real dual-line split module, the determining unit includes: the first sliding module is used for sliding on the original time sequence lane line characteristic by adopting a sliding window with a preset time sequence length so as to determine a plurality of confidence coefficients of the lane line as a virtual line and a real line; the first calculation module is used for calculating a first average value of the confidence degrees, judging whether the lane line is a virtual line or a real line or not based on the first average value, and obtaining the linear category of the lane line; and the second judging module is used for judging whether the lane crossing behavior of other vehicles is lane change capable of crossing the lane based on the lane crossing direction of the vehicles and the linear type of the lane line.
Optionally, the second judging module includes: a sixth determining submodule, configured to determine, when the lane-crossing direction of the vehicle indicates that the crossing behavior of the other vehicle starts by pressing the line from the right side of the current lane line and the line-type category of the lane line is a left-dashed-right-solid line, that the crossing behavior of the other vehicle is not a lane change that may cross the line; and a seventh determining sub-module, configured to determine, when the lane-crossing direction of the vehicle indicates that the crossing behavior of the other vehicle starts by pressing the line from the left side of the current lane line and the line-type category of the lane line is a left-solid-right-dashed line, that the crossing behavior of the other vehicle is not a lane change that may cross the line.
Optionally, if the cross-line module is a bus regional sub-module, the determining unit includes: the second sliding module is used for sliding on the original time sequence lane line characteristic by adopting a sliding window with a preset time sequence length so as to determine a plurality of confidence coefficients of the lane line as a public vehicle region side line; the second calculation module is used for calculating a second average value of the confidence degrees so as to judge whether the lane line is a public vehicle area side line or not based on the second average value to obtain the linear category of the lane line; and the third judging module is used for judging whether the lane crossing behavior of other vehicles is lane change capable of crossing lanes or not based on the linear category of the lane lines and the time information in the historical process.
Optionally, the third determining module includes: a first sliding submodule, configured to, when the line-type category of the lane line indicates that the lane line is a public vehicle area edge line, slide a window of preset time-series length over the vehicle attribute features of the other vehicle to judge whether the other vehicle is a public vehicle and obtain a judgment result; a fourth judging module, configured to judge, when the judgment result indicates that the other vehicle is not a public vehicle, whether the historical time period in the historical process is a permitted driving period; an eighth determining submodule, configured to determine, when the historical time period in the historical process is not a permitted driving period, that the crossing behavior of the other vehicle is not a lane change that may cross the line; and a ninth determining submodule, configured to determine, when the historical time period in the historical process is a permitted driving period, that the crossing behavior of the other vehicle is a lane change that may cross the line.
Optionally, if the cross-line module is another cross-line sub-module, the determining unit includes: the third sliding module is used for sliding on the original time sequence lane line characteristic by adopting a sliding window with a preset time sequence length so as to determine a plurality of confidence coefficients of lane lines as boundary lines of a line-crossing region; the third calculation module is used for calculating a third average value of the confidence coefficients; a tenth determining submodule, configured to determine that the lane-crossing behavior of the other vehicle is lane-changing that can cross the line if the third average value of the plurality of confidence degrees is greater than a preset value; and the eleventh determining submodule is used for determining that the lane changing of the lane crossing behavior of other vehicles is not the lane changing of the lane crossing behavior when the third average value of the confidence degrees is smaller than or equal to the preset value.
Optionally, the detection apparatus further comprises: the marking unit is used for marking the violation vehicles if the lane crossing behavior of other vehicles is determined to be violation after judging whether the lane crossing behavior of other vehicles is lane change capable of crossing by using the lane crossing module; the first sending unit is used for sending violation prompt information; the first reporting unit is used for reporting the vehicle violation information and the vehicle information to the vehicle management platform.
Optionally, the detection apparatus further comprises: the installation unit is used for installing the camera at the front windshield of the current vehicle before controlling the camera to collect the road image in front of the current vehicle based on the preset calibration parameters; the calibration unit is used for calibrating the position of the vehicle cover area and the horizon line according to the installation position of the camera; and the determining unit is used for determining preset calibration parameters and an image detection area based on the calibrated vehicle cover area and the horizontal line position.
Optionally, the control unit comprises: and the adjusting module is used for adjusting the focusing information when the road image is analyzed according to the field angle information of the camera when the camera is in a long-focus camera type and a wide-angle camera type, and adjusting the calibration information for calibrating the detection frames of other vehicles.
Optionally, the control unit is further configured to analyze an ambient light parameter around the current vehicle; and if the ambient light parameter is lower than the preset light threshold value, acquiring a road image in front of the current vehicle by using an infrared camera.
The detection device for the vehicle violation behavior may further include a processor and a memory, where the control unit 81, the first determining unit 83, the extracting unit 85, the second determining unit 87, and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to implement corresponding functions.
The processor comprises a kernel, and the kernel calls a corresponding program unit from the memory. The kernel can be set to be one or more than one, and the kernel parameters are adjusted to judge whether the lane crossing behavior of other vehicles is illegal lane changing or not by utilizing the trained behavior judgment model based on the original time sequence lane line characteristics.
The memory may include volatile memory in a computer readable medium, Random Access Memory (RAM) and/or nonvolatile memory such as Read Only Memory (ROM) or flash memory (flash RAM), and the memory includes at least one memory chip.
According to another aspect of the embodiments of the present invention, there is also provided a road vehicle, including: the vehicle-mounted camera is arranged at a windshield in front of the vehicle and used for acquiring road images of a road in front; and the vehicle-mounted control unit is connected with the vehicle-mounted camera and executes any one of the detection methods of the vehicle violation behaviors.
According to another aspect of the embodiments of the present invention, there is also provided an in-vehicle electronic apparatus, including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the method of detecting a vehicle violation via execution of the executable instructions.
According to another aspect of the embodiment of the present invention, there is further provided a computer-readable storage medium, where the computer-readable storage medium includes a stored computer program, and when the computer program runs, the apparatus where the computer-readable storage medium is located is controlled to execute any one of the above methods for detecting vehicle violation behaviors.
The present application further provides a computer program product adapted to perform a program for initializing the following method steps when executed on a data processing device: controlling a camera to collect a road image in front of the current vehicle based on preset calibration parameters; judging whether other vehicles on the front road have line crossing behaviors in the driving process based on the corresponding relative position of each road image obtained by analyzing the plurality of road images; if other vehicles have lane crossing behaviors, extracting original time sequence lane line characteristics of the other vehicles in the complete lane crossing process; and based on the original time sequence lane line characteristics, judging whether the lane crossing behavior of other vehicles is illegal lane change or not by using the trained behavior judgment model.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and such modifications and improvements should also fall within the protection scope of the present invention.

Claims (39)

1. A method for detecting vehicle violation behavior, comprising:
controlling a camera to collect a road image in front of the current vehicle based on preset calibration parameters;
judging whether other vehicles on the front road have line crossing behaviors in the driving process or not based on the corresponding relative position of each road image obtained by analyzing the plurality of road images;
if the other vehicles have the lane crossing behavior, extracting the original time sequence lane line characteristics of the other vehicles in the complete lane crossing process;
and judging whether the lane crossing behavior of other vehicles is illegal lane changing or not by utilizing a trained behavior judgment model based on the original time sequence lane line characteristics.
2. The detection method according to claim 1, wherein determining whether there is an out-of-line behavior of another vehicle on the road ahead during the driving process based on the relative position corresponding to each road image obtained by analyzing the plurality of road images, comprises:
determining the relative position between the other vehicle and a lane line contained in each road image by combining vehicle information and lane line information obtained by analyzing each road image, wherein the relative position is the proportion of the distance from the other vehicle to the lane line to the width of the vehicle;
and integrating the relative positions of the plurality of road images to judge whether the other vehicles on the front road have the line crossing behavior in the driving process.
3. The detection method according to claim 2, wherein analyzing each of the road images to obtain the vehicle information and the lane line information comprises:
analyzing each road image to calibrate at least one vehicle detection frame and at least one lane line detection frame, wherein each vehicle detection frame corresponds to one other vehicle, and each lane line detection frame corresponds to one lane line;
and determining the vehicle information and the lane line information of each other vehicle through at least one vehicle detection frame and at least one lane line detection frame.
4. The detection method according to claim 2, wherein the step of determining whether there is an out-of-line behavior of other vehicles on the road ahead during the driving process by integrating the relative positions of the plurality of road images comprises:
if the absolute value of a first relative position is smaller than a first threshold value and a sign change (from positive to negative or vice versa) exists between a second relative position and the first relative position, the other vehicle exhibits line crossing behavior while driving;
otherwise, the other vehicle does not exhibit line crossing behavior while driving.
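As an illustration of the condition in claim 4, the sketch below flags a line crossing when some relative position has a magnitude below the first threshold and a later relative position has the opposite sign; the function name and threshold value are assumptions.

```python
def has_line_crossing(rel_positions, first_threshold=0.2):
    """Illustrative check: a crossing is flagged when a relative position has
    magnitude below the threshold and a later relative position changes sign.
    The threshold value is a placeholder, not a value from the disclosure."""
    for i, p0 in enumerate(rel_positions):
        if abs(p0) < first_threshold:
            if any(p0 * p1 < 0 for p1 in rel_positions[i + 1:]):
                return True
    return False
```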
5. The detection method according to claim 2, wherein determining the relative position between the other vehicle and the lane line included in each of the road images in combination with vehicle information and lane line information obtained by analyzing each of the road images comprises:
obtaining the point position of a preset point of a vehicle detection area and the point position of a vehicle-lane-line intersection point based on the vehicle information and the lane line information, wherein the vehicle-lane-line intersection point includes the intersection of the horizontal straight line passing through the preset point of the vehicle detection area with the lane line or its extension, or the foot of the perpendicular dropped from the preset point of the vehicle detection area onto the lane line or its extension;
and determining the relative position by adopting a first formula according to the point position of the preset point of the vehicle detection area, the point position of the intersection point of the vehicle lane lines and the width value of the vehicle detection frame.
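The "first formula" itself is not reproduced in this excerpt; one plausible reading, shown below for illustration only, normalizes the signed offset between the vehicle reference point and the lane-line intersection by the detection-box width. The function names and the interpolation helper are hypothetical.

```python
def lane_x_at_y(p1, p2, y):
    """x-coordinate where the lane line through p1=(x1, y1), p2=(x2, y2)
    (or its extension) meets the horizontal line at height y."""
    (x1, y1), (x2, y2) = p1, p2
    if y2 == y1:                      # degenerate: horizontal lane segment
        return x1
    return x1 + (x2 - x1) * (y - y1) / (y2 - y1)


def relative_position(vehicle_point_x: float,
                      lane_intersection_x: float,
                      box_width: float) -> float:
    """One possible reading of the unspecified 'first formula': the signed
    horizontal offset from the vehicle reference point to the lane-line
    intersection, normalized by the vehicle detection-box width."""
    return (vehicle_point_x - lane_intersection_x) / box_width
```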
6. The detection method according to claim 3, wherein determining the vehicle information and the lane line information of each of the other vehicles by at least one vehicle detection box and at least one lane line detection box includes:
analyzing the lane line trend vector and the lane line attribute information based on the lane line detection frame to obtain the lane line information;
and analyzing the vehicle position, the vehicle height and the vehicle width of each other vehicle based on the vehicle detection frame to obtain the vehicle information.
7. The detection method according to claim 6, wherein analyzing the lane line trend vector and the lane line attribute information based on the lane line detection frame to obtain the lane line information comprises:
inputting the lane line detection frame into a lane line model and analyzing attribute feature vectors of the lane line by using the lane line model to obtain the lane line information, wherein the lane line model is trained in advance: during training, a lane line training sample set is extracted by using a preset classification frame according to lane line marking positions, and the lane line training sample set is input into a convolutional neural network system to train a detection network, thereby obtaining the lane line model; or,
obtaining the lane line information by using a traditional image processing method, wherein the traditional image processing method obtains the lane line marking positions after performing image preprocessing on the lane line detection frame, and analyzes the lane line trend vector and the lane line attribute information based on the lane line marking positions to obtain the lane line information, and the image preprocessing includes: binarization, image denoising and lane line segmentation.
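A rough sketch of the traditional-image-processing branch is given below, using OpenCV-style binarization and denoising followed by a simple line fit to obtain a trend vector; the kernel size, Otsu thresholding and the fitting step are assumptions chosen for illustration, not details from the disclosure.

```python
import cv2
import numpy as np


def lane_line_from_crop(crop_bgr: np.ndarray):
    """Illustrative 'traditional' branch: binarize, denoise, then fit a line
    to the marking pixels to get a direction (trend) vector."""
    gray = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)                    # denoising
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # binarization
    ys, xs = np.nonzero(binary)                                     # marking pixels
    if len(xs) < 2:
        return None
    # Fit x = a*y + b so near-vertical lane markings are handled robustly.
    a, b = np.polyfit(ys, xs, 1)
    direction = np.array([a, 1.0])
    direction /= np.linalg.norm(direction)                          # trend vector
    return direction, binary
```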
8. The detection method according to claim 1, wherein extracting the original time series lane characteristics of the other vehicles during the complete lane crossing comprises:
determining an approach frame and an end frame of the complete line crossing process according to the relative positions of the road images, and acquiring a time sequence diagram of the complete line crossing process;
and extracting the original time sequence lane line characteristics from the time sequence diagram.
9. The detection method of claim 8, wherein determining the approach frame and the end frame of the complete line crossing process comprises:
if the absolute value of the relative position is smaller than a second threshold value, determining the corresponding frame as the approach frame of the complete line crossing process;
and if the absolute value of the relative position is greater than a third threshold value and a sign change exists between the relative position and the relative position corresponding to the approach frame, determining the corresponding frame as the end frame of the complete line crossing process.
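The following sketch illustrates how an approach frame and an end frame could be located from the sequence of relative positions under the thresholds of claim 9; the concrete threshold values and function name are placeholders.

```python
def crossing_window(rel_positions, second_threshold=0.1, third_threshold=0.1):
    """Illustrative approach/end frame search over the relative positions."""
    approach = None
    for i, p in enumerate(rel_positions):
        if approach is None:
            if abs(p) < second_threshold:
                approach = i                       # approach frame
        else:
            p0 = rel_positions[approach]
            if abs(p) > third_threshold and p0 * p < 0:
                return approach, i                 # end frame found
    return approach, None
```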
10. The detection method of claim 1, wherein the step of judging whether the lane crossing behavior of the other vehicle is a violation lane change or not by using a trained behavior judgment model based on the original time sequence lane line characteristics comprises the following steps:
inputting the original time sequence lane line characteristics into a solid line confidence coefficient network to obtain a time sequence solid line confidence coefficient;
and inputting the relative positions and the time sequence solid line confidences contained in the corresponding frames in the complete line crossing process into the behavior judgment model to judge whether the line crossing behavior of the other vehicle is an illegal lane change, wherein the behavior judgment model is an integrated behavior judgment model or a time sequence behavior judgment model.
11. The detection method according to claim 10, wherein when the behavior determination model is an integrated behavior determination model, the detection method includes:
classifying and counting the time sequence solid line confidences, in combination with the relative positions contained in the corresponding frames of each time step in the complete line crossing process, to obtain histogram features;
and inputting the histogram features into the integrated behavior judgment model to judge whether the crossing behavior of the other vehicles is illegal lane change or not.
12. The detection method according to claim 11, wherein obtaining histogram features by classifying and counting confidences of the time series solid lines in combination with the relative positions included in the respective time series corresponding frames in the complete line-crossing process comprises:
dividing the confidence of the time sequence solid line into n confidence sets by combining the relative position contained in each time sequence corresponding frame in the complete line crossing process and a preset segmentation threshold;
extracting k-dimensional histogram features from the classification confidence corresponding to each frame of road image in each confidence set to obtain n k-dimensional histogram features;
and connecting the obtained n k-dimensional histogram features in series according to a time sequence relation in a historical process to obtain n x k-dimensional histogram features.
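One possible construction of the n×k-dimensional histogram feature is sketched below: frames are grouped into n sets by the magnitude of their relative position, and a k-bin histogram of solid-line confidences is built per set before concatenation. The segmentation thresholds, the values of n and k, and the grouping rule are assumptions for illustration only.

```python
import numpy as np


def histogram_features(rel_positions, solid_confidences,
                       seg_thresholds=(0.33, 0.66), k=8):
    """Sketch of an n*k-dimensional histogram feature (n = len(seg_thresholds)+1)."""
    rel_positions = np.asarray(rel_positions, dtype=float)
    solid_confidences = np.asarray(solid_confidences, dtype=float)
    edges = (0.0,) + tuple(seg_thresholds) + (np.inf,)
    feats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (np.abs(rel_positions) >= lo) & (np.abs(rel_positions) < hi)
        hist, _ = np.histogram(solid_confidences[mask], bins=k, range=(0.0, 1.0))
        feats.append(hist.astype(float))
    return np.concatenate(feats)       # n*k values: n position ranges x k bins
```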
13. The detection method according to claim 11, wherein the integrated behavior decision model is a model obtained by performing integrated learning training using information entropy gain.
14. The detection method according to claim 10, wherein when the behavior determination model is a time-series behavior determination model, the detection method includes:
and directly inputting the relative positions and the time sequence solid line confidence degrees contained in the corresponding frames in the complete line crossing process into the time sequence behavior judgment model according to time sequence arrangement so as to judge whether the line crossing behavior of other vehicles is violation lane change or not.
15. The detection method according to claim 14, wherein the time-series behavior decision model is obtained by convolution calculation of a base network and back propagation training of calculated cross entropy loss.
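A minimal stand-in for such a time sequence behavior judgment model is sketched below in PyTorch: 1-D convolutions over the per-frame (relative position, solid-line confidence) sequence, trained by back-propagating a cross-entropy loss. The layer sizes, sequence length and optimizer are assumptions and not taken from the disclosure.

```python
import torch
import torch.nn as nn


class TimeSeriesBehaviorNet(nn.Module):
    """Illustrative base network: 1-D convolutions + linear classification head."""
    def __init__(self, in_channels=2, num_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):              # x: (batch, 2, seq_len)
        return self.head(self.backbone(x).squeeze(-1))


# One back-propagation step with cross-entropy loss (dummy data, illustrative).
model = TimeSeriesBehaviorNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
x = torch.randn(4, 2, 32)              # batch of feature sequences
y = torch.randint(0, 2, (4,))          # labels: illegal lane change or not
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```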
16. The detection method according to claim 1, further comprising:
and judging whether the line crossing behavior of the other vehicle is a lane change that is allowed to cross the line by using a line-crossing module, based on the original time sequence lane line characteristics and the vehicle information in each road image.
17. The detection method according to claim 16, wherein judging whether the line crossing behavior of the other vehicle is a lane change that is allowed to cross the line by using the line-crossing module, based on the original time sequence lane line characteristics and the vehicle information in each road image, comprises:
controlling each sub-module of the line-crossing module to slide over the time sequence lane line characteristics with a sliding window of preset time sequence length so as to determine the line type of the lane line, wherein the line-crossing module comprises a dashed-line sub-module, a solid-line sub-module, a bus lane sub-module and other line-crossing sub-modules;
and judging, by each sub-module of the line-crossing module, whether the line crossing behavior of the other vehicle is a lane change that is allowed to cross the line based on the vehicle information and the line type of the lane line.
18. The detection method according to claim 17, wherein judging, by the dashed-line sub-module and the solid-line sub-module, whether the line crossing behavior of the other vehicle is a lane change that is allowed to cross the line comprises:
sliding over the original time sequence lane line characteristics with a sliding window of preset time sequence length to determine a plurality of confidences of the lane line being a dashed line or a solid line;
calculating a first average value of the confidences, and judging whether the lane line is a dashed line or a solid line based on the first average value to obtain the line type of the lane line;
and judging whether the line crossing behavior of the other vehicle is a lane change that is allowed to cross the line based on the crossing direction of the vehicle and the line type of the lane line.
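The dashed/solid decision of claim 18 can be approximated as below: average the windowed solid-line confidences and compare the first average against a threshold. The window size and threshold are assumptions.

```python
def classify_dashed_or_solid(solid_confidences, window=5, solid_threshold=0.5):
    """Sliding-window sketch for the dashed/solid line-type decision."""
    if not solid_confidences:
        raise ValueError("no confidences provided")
    window = min(window, len(solid_confidences))
    window_means = [
        sum(solid_confidences[i:i + window]) / window
        for i in range(len(solid_confidences) - window + 1)
    ]
    first_average = sum(window_means) / len(window_means)
    return "solid" if first_average >= solid_threshold else "dashed"
```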
19. The detection method according to claim 18, wherein judging whether the line crossing behavior of the other vehicle is a lane change that is allowed to cross the line based on the crossing direction of the vehicle and the line type of the lane line comprises:
if the crossing direction indicates that the other vehicle starts crossing from the right side of the current lane line and the line type of the lane line is dashed on the left and solid on the right, determining that the line crossing behavior of the other vehicle is not a lane change that is allowed to cross the line;
and if the crossing direction indicates that the other vehicle starts crossing from the left side of the current lane line and the line type of the lane line is solid on the left and dashed on the right, determining that the line crossing behavior of the other vehicle is not a lane change that is allowed to cross the line.
20. The detection method according to claim 17, wherein judging, by the bus lane sub-module, whether the line crossing behavior of the other vehicle is a lane change that is allowed to cross the line comprises:
sliding over the original time sequence lane line characteristics with a sliding window of preset time sequence length to determine a plurality of confidences of the lane line being a bus lane edge line;
calculating a second average value of the confidences, and judging whether the lane line is a bus lane edge line based on the second average value to obtain the line type of the lane line;
and judging whether the line crossing behavior of the other vehicle is a lane change that is allowed to cross the line based on the line type of the lane line and the time information in the historical process.
21. The detection method according to claim 20, wherein judging whether the line crossing behavior of the other vehicle is a lane change that is allowed to cross the line based on the line type of the lane line and the time information in the historical process comprises:
if the line type indicates that the lane line is a bus lane edge line, sliding over the vehicle attribute characteristics of the other vehicle with a sliding window of preset time sequence length to judge whether the other vehicle is a bus, and obtaining a judgment result;
if the judgment result indicates that the other vehicle is not a bus, judging whether the historical time period in the historical process falls within the permitted driving period;
if the historical time period in the historical process falls outside the permitted driving period, determining that the line crossing behavior of the other vehicle is not a lane change that is allowed to cross the line;
and if the historical time period in the historical process falls within the permitted driving period, determining that the line crossing behavior of the other vehicle is a lane change that is allowed to cross the line.
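For the bus-lane rule of claim 21, a simple illustrative check is sketched below; the restricted time window and the function signature are assumptions.

```python
from datetime import time


def bus_lane_crossing_allowed(is_bus_lane: bool, is_bus: bool,
                              event_time: time,
                              restricted=(time(7, 0), time(9, 0))) -> bool:
    """Illustrative rule: a non-bus may cross a bus lane edge line only
    outside the restricted (non-permitted driving) period."""
    if not is_bus_lane or is_bus:
        return True
    start, end = restricted
    in_restricted = start <= event_time <= end
    return not in_restricted
```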
22. The detection method according to claim 17, wherein judging, by the other line-crossing sub-modules, whether the line crossing behavior of the other vehicle is a lane change that is allowed to cross the line comprises:
sliding over the original time sequence lane line characteristics with a sliding window of preset time sequence length to determine a plurality of confidences of the lane line being a line-crossing region edge line;
calculating a third average value of the confidences;
if the third average value of the confidences is greater than a preset value, determining that the line crossing behavior of the other vehicle is a lane change that is allowed to cross the line;
and if the third average value of the confidences is less than or equal to the preset value, determining that the line crossing behavior of the other vehicle is not a lane change that is allowed to cross the line.
23. The detection method according to claim 17, wherein after judging whether the line crossing behavior of the other vehicle is a lane change that is allowed to cross the line by using the line-crossing module, the detection method further comprises:
if it is determined that the other vehicle has committed a violation, marking the violating vehicle;
sending out violation prompt information;
and reporting the vehicle violation information and the vehicle information to a vehicle management platform.
24. The detection method according to claim 1, wherein before controlling the camera to capture an image of the road ahead of the current vehicle based on preset calibration parameters, the detection method further comprises:
installing a camera at a front windshield of a current vehicle;
calibrating a bonnet area and a horizon position according to the installation position of the camera;
and determining the preset calibration parameters and the image detection area based on the calibrated bonnet area and horizon position.
25. The detection method according to claim 1, wherein the step of controlling the camera to capture an image of the road ahead of the current vehicle based on preset calibration parameters comprises:
and if the camera is a telephoto camera or a wide-angle camera, adjusting, according to the field-of-view information of the camera, the focusing information used when analyzing the road images and the calibration information used for calibrating the detection frames of other vehicles.
26. The detection method according to claim 1, wherein the step of controlling the camera to capture an image of the road ahead of the current vehicle comprises:
analyzing ambient light parameters around the current vehicle;
and if the ambient light parameter is lower than a preset light threshold value, acquiring a road image in front of the current vehicle by adopting an infrared camera.
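One simple proxy for the ambient-light check of claim 26 is the mean pixel intensity of the current frame, as sketched below; treating mean intensity as the ambient light parameter and the threshold value are assumptions.

```python
import numpy as np


def should_use_infrared(frame_bgr: np.ndarray, light_threshold: float = 40.0) -> bool:
    """Illustrative ambient-light check: switch to the infrared camera when the
    rough luminance of the frame falls below a preset threshold."""
    luminance = frame_bgr.mean(axis=2)          # crude per-pixel luminance proxy
    return float(luminance.mean()) < light_threshold
```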
27. A vehicle violation detection device, comprising:
the control unit is used for controlling the camera to collect road images in front of the current vehicle based on preset calibration parameters;
the first judging unit is used for judging whether other vehicles on the front road have line crossing behaviors in the driving process based on the corresponding relative position of each road image acquired by analyzing a plurality of road images;
the extraction unit is used for extracting original time sequence lane line characteristics in the complete line crossing process of the other vehicles when the other vehicles have line crossing behaviors;
and the second judgment unit is used for judging whether the lane crossing behavior of other vehicles is illegal lane change or not by utilizing a trained behavior judgment model based on the original time sequence lane line characteristics.
28. The detection apparatus according to claim 27, wherein the first judgment unit includes:
a first determining module, configured to determine, in combination with vehicle information and lane line information obtained by analyzing each road image, a relative position between the other vehicle and a lane line included in each road image, where the relative position is a ratio of a distance from the other vehicle to the lane line to a vehicle width;
and the first judging module is used for integrating the relative positions of the road images and judging whether other vehicles on the front road have line crossing behaviors in the driving process.
29. The detection apparatus according to claim 28, wherein the first determination module comprises:
the first analysis sub-module is used for analyzing each road image to calibrate at least one vehicle detection frame and at least one lane line detection frame, wherein each vehicle detection frame corresponds to one other vehicle, and each lane line detection frame corresponds to one lane line;
the first determining submodule is used for determining the vehicle information and the lane line information of each other vehicle through at least one vehicle detection frame and at least one lane line detection frame.
30. The detection device according to claim 28, wherein the first judging module comprises:
a second determination submodule, configured to determine that the other vehicle exhibits line crossing behavior while driving if the absolute value of a first relative position is smaller than a first threshold value and a sign change exists between a second relative position and the first relative position;
and otherwise determine that the other vehicle does not exhibit line crossing behavior while driving.
31. The detection apparatus according to claim 28, wherein the first determination module comprises:
the first obtaining submodule is used for obtaining the point position of a preset point of a vehicle detection area and the point position of a vehicle lane line intersection point based on the vehicle information and the lane line information, wherein the vehicle lane line intersection point comprises the intersection point from a horizontal straight line where the preset point of the vehicle detection area is located to the lane line or the lane line extension line, or the vertical intersection point from the point position of the preset point of the vehicle detection area to the lane line or the lane line extension line;
and the third determining submodule is used for determining the relative position by adopting a first formula according to the point position of the preset point of the vehicle detection area, the point position of the intersection point of the vehicle lane line and the width value of the vehicle detection frame.
32. The detection device of claim 29, wherein the first determining submodule comprises:
the first analysis submodule is used for analyzing the trend vector of the lane line and the attribute information of the lane line based on the lane line detection frame to obtain the information of the lane line;
and the second analysis submodule is used for analyzing the vehicle position, the vehicle height and the vehicle width of each other vehicle based on the vehicle detection frame to obtain the vehicle information.
33. The detection device of claim 32, wherein the first analysis submodule comprises:
a first input sub-module, configured to input the lane line detection frame to a lane line model, and analyze attribute feature vectors of lane lines using the lane line model to obtain the lane line information, where the lane line model is a model that is trained in advance, and in the training process, a preset classification frame is used to extract a lane line training sample set according to lane line labeling positions, and the lane line training sample set is input to a convolutional neural network system to perform training of a detection network to obtain a lane line model, or,
a second obtaining sub-module, configured to obtain the lane line information by using a conventional image processing method, where the conventional image processing method obtains a lane line labeling position after performing image preprocessing on the lane line detection frame, and obtains the lane line information by analyzing a lane line trend vector and lane line attribute information based on the lane line labeling position, where the image preprocessing includes: binarization processing, image denoising and lane line segmentation.
34. The detection apparatus according to claim 28, wherein the extraction unit includes:
the second determining module is used for determining an approach frame and an end frame of the complete line crossing process according to the relative positions of the road images, and acquiring a time sequence diagram of the complete line crossing process;
and the first extraction module is used for extracting the original time sequence lane line characteristics from the time sequence diagram.
35. The detection apparatus according to claim 34, wherein the second determination module comprises:
a fourth determining submodule, configured to determine the corresponding frame as the approach frame of the complete line crossing process when the absolute value of the relative position is smaller than a second threshold value;
and a fifth determining submodule, configured to determine the corresponding frame as the end frame of the complete line crossing process when the absolute value of the relative position is greater than a third threshold value and a sign change exists between the relative position and the relative position corresponding to the approach frame.
36. The detection apparatus according to claim 27, wherein the second determination unit includes:
the first acquisition module is used for inputting the original time sequence lane line characteristics into a solid line confidence coefficient network so as to acquire a time sequence solid line confidence coefficient;
and the judging module is used for inputting the relative position and the time sequence solid line confidence degree contained in the corresponding frame in the complete line crossing process into the behavior judging model so as to judge whether the line crossing behavior of the other vehicles is rule-breaking lane change or not, wherein the behavior judging model is an integrated behavior judging model or a time sequence behavior judging model.
37. A road vehicle, characterized by comprising:
a vehicle-mounted camera, arranged at the front windshield of the vehicle and used for acquiring road images of the road ahead;
and a vehicle-mounted control unit, connected to the vehicle-mounted camera, for executing the method for detecting vehicle violation behavior according to any one of claims 1 to 26.
38. An in-vehicle electronic apparatus, characterized by comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of detecting vehicle violation behaviour of any one of claims 1 to 26 via execution of the executable instructions.
39. A computer-readable storage medium, comprising a stored computer program, wherein the computer program, when executed, controls a device on which the computer-readable storage medium is located to perform the method for detecting vehicle violations as claimed in any one of claims 1 to 26.
CN202111205837.6A 2021-10-15 2021-10-15 Detection method and detection device for vehicle violation behaviors and vehicle-mounted electronic equipment Pending CN113936257A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111205837.6A CN113936257A (en) 2021-10-15 2021-10-15 Detection method and detection device for vehicle violation behaviors and vehicle-mounted electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111205837.6A CN113936257A (en) 2021-10-15 2021-10-15 Detection method and detection device for vehicle violation behaviors and vehicle-mounted electronic equipment

Publications (1)

Publication Number Publication Date
CN113936257A true CN113936257A (en) 2022-01-14

Family

ID=79279628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111205837.6A Pending CN113936257A (en) 2021-10-15 2021-10-15 Detection method and detection device for vehicle violation behaviors and vehicle-mounted electronic equipment

Country Status (1)

Country Link
CN (1) CN113936257A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114581890A (en) * 2022-03-24 2022-06-03 北京百度网讯科技有限公司 Method and device for determining lane line, electronic equipment and storage medium
CN114581890B (en) * 2022-03-24 2023-03-10 北京百度网讯科技有限公司 Method and device for determining lane line, electronic equipment and storage medium
CN115471708A (en) * 2022-09-27 2022-12-13 禾多科技(北京)有限公司 Lane line type information generation method, device, equipment and computer readable medium
CN115471708B (en) * 2022-09-27 2023-09-12 禾多科技(北京)有限公司 Lane line type information generation method, device, equipment and computer readable medium

Similar Documents

Publication Publication Date Title
CN111368687B (en) Sidewalk vehicle illegal parking detection method based on target detection and semantic segmentation
CN110487562B (en) Driveway keeping capacity detection system and method for unmanned driving
Tseng et al. Real-time video surveillance for traffic monitoring using virtual line analysis
CA2132515C (en) An object monitoring system
CN104537841B (en) Unlicensed vehicle violation detection method and detection system thereof
Pavlic et al. Classification of images in fog and fog-free scenes for use in vehicles
US8233662B2 (en) Method and system for detecting signal color from a moving video platform
CN110852274B (en) Intelligent rainfall sensing method and device based on image recognition
CN109299674B (en) Tunnel illegal lane change detection method based on car lamp
WO2019174682A1 (en) Method and device for detecting and evaluating roadway conditions and weather-related environmental influences
CN113936257A (en) Detection method and detection device for vehicle violation behaviors and vehicle-mounted electronic equipment
CN110298300B (en) Method for detecting vehicle illegal line pressing
CN106022243B (en) A kind of retrograde recognition methods of the car lane vehicle based on image procossing
CN111126171A (en) Vehicle reverse running detection method and system
CN114913449A (en) Road surface camera shooting analysis method for Internet of vehicles
CN113033275B (en) Vehicle lane-changing non-turn signal lamp analysis system based on deep learning
CN115331191B (en) Vehicle type recognition method, device, system and storage medium
CN109858459A (en) System and method based on police vehicle-mounted video element information realization intelligently parsing processing
CN204856897U (en) It is detection device violating regulations in abscission zone territory that motor vehicle stops promptly
Cheng et al. Sparse coding of weather and illuminations for ADAS and autonomous driving
Hautiere et al. Meteorological conditions processing for vision-based traffic monitoring
CN114530042A (en) Urban traffic brain monitoring system based on internet of things technology
CN115240435A (en) AI technology-based vehicle illegal driving detection method and device
CN110321973B (en) Combined vehicle detection method based on vision
CN113177443A (en) Method for intelligently identifying road traffic violation based on image vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination