CN116052440A - Vehicle intention plug behavior identification method, device, equipment and storage medium - Google Patents

Vehicle intention plug behavior identification method, device, equipment and storage medium

Info

Publication number
CN116052440A
Authority
CN
China
Prior art keywords
vehicle
lane
line
image
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310054998.2A
Other languages
Chinese (zh)
Other versions
CN116052440B (en)
Inventor
陈磊
黄金叶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Qiyang Special Equipment Technology Engineering Co ltd
Original Assignee
Shenzhen Qiyang Special Equipment Technology Engineering Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Qiyang Special Equipment Technology Engineering Co ltd filed Critical Shenzhen Qiyang Special Equipment Technology Engineering Co ltd
Priority to CN202310054998.2A priority Critical patent/CN116052440B/en
Publication of CN116052440A publication Critical patent/CN116052440A/en
Application granted granted Critical
Publication of CN116052440B publication Critical patent/CN116052440B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175 Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137 Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Analytical Chemistry (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Chemical & Material Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method, device, equipment and storage medium for identifying intended vehicle cut-in ("plugging") behavior, and relates to the technical field of traffic control. The method first obtains, in real time, a vehicle identification result and a lane line identification result from live video data collected by a road monitoring camera. It then checks in real time, for each identified vehicle, whether that vehicle presses a lane line. For each line-pressing vehicle, the corresponding vehicle body direction is estimated in real time from the corresponding vehicle image, and the lane the vehicle intends to enter is determined from that direction. Finally, when the vehicle's body marking frame is judged, according to the vehicle identification result, to intersect the body marking frames of both the front and rear vehicles in the lane to be entered, the vehicle is determined to exhibit intended cut-in behavior and the corresponding decision evidence is retained. Identification and evidence retention can thus be timed autonomously, avoiding the wasted time and labor and the inconsistent identification standards of manual review.

Description

Vehicle intention plug behavior identification method, device, equipment and storage medium
Technical Field
The invention belongs to the technical field of traffic control, and particularly relates to a method, a device, equipment and a storage medium for identifying intended vehicle cut-in ("plugging") behavior.
Background
With rising living standards and continuous technological progress, automobiles have gradually become an everyday means of travel, and traffic violations are increasing accordingly. Cut-in ("plugging") behavior mainly refers to a moving motor vehicle overtaking out of turn, or occupying the opposite lane and weaving between queued or waiting vehicles, when the motor vehicles ahead are stopped in a queue or moving slowly. Because random cut-ins readily cause traffic accidents, intended cut-in behavior is a typical driving violation and carries corresponding penalties.
At present, identification of intended vehicle cut-ins relies mainly on manual analysis of video data stored by road monitoring cameras. This is time-consuming and labor-intensive, and human factors make it difficult to unify identification standards. Moreover, because such analysis is mostly performed after a traffic accident has occurred, behavior identification, evidence retention and violation processing lag severely, and timely measures against cut-ins are hard to take. How to automatically identify intended vehicle cut-in behavior and preserve evidence during traffic monitoring, so as to unify identification standards and reduce the traffic control workload, is therefore a subject of urgent study for those skilled in the art.
Disclosure of Invention
The invention aims to provide a vehicle cut-in intention identification method, a device, computer equipment and a computer-readable storage medium, to solve the problems that existing means of identifying intended cut-ins are time-consuming and labor-intensive, that human factors make identification standards difficult to unify, and that behavior identification and evidence retention lag severely.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
in a first aspect, a method for identifying intended vehicle cut-in behavior is provided, including:
acquiring live video data collected in real time by a road monitoring camera;
performing real-time vehicle identification on the live video data using a target detection algorithm to obtain a vehicle identification result, wherein the vehicle identification result comprises at least one identified vehicle and a vehicle body marking frame for each of the at least one vehicle;
performing real-time lane line identification on the live video data using a lane line identification algorithm to obtain a lane line identification result;
for each of the at least one vehicle, judging in real time according to the lane line identification result whether the vehicle presses a lane line, and if so, cropping the corresponding vehicle image from the live video data according to the corresponding vehicle body marking frame;
for each line-pressing vehicle among the at least one vehicle, estimating the corresponding vehicle body direction in real time from the corresponding vehicle image, and determining the corresponding lane to be entered based on that body direction;
and judging, according to the vehicle identification result, whether the corresponding vehicle body marking frame intersects the body marking frames of the front and rear vehicles in the corresponding lane to be entered; if so, determining that the corresponding vehicle exhibits intended cut-in behavior, and retaining the corresponding cut-in decision evidence.
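A minimal sketch of this final intersection test, assuming axis-aligned body marking frames represented as (x1, y1, x2, y2) pixel tuples (a representation the text does not fix):

```python
def boxes_intersect(a, b):
    """True if two axis-aligned body marking frames overlap.

    Each frame is (x1, y1, x2, y2) with x1 < x2 and y1 < y2
    (a hypothetical representation; the text fixes no format).
    """
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2


def has_cut_in_intent(pressing_box, front_box, rear_box):
    """Cut-in intent is flagged when the line-pressing vehicle's frame
    intersects the frames of both the front and the rear vehicle in
    the lane it is about to enter."""
    return (boxes_intersect(pressing_box, front_box)
            and boxes_intersect(pressing_box, rear_box))
```

A vehicle merely approaching the lane intersects neither neighbor's frame, so this test only fires once the vehicle has pushed in between the queued vehicles.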
Based on the above, a data processing scheme is provided for automatically identifying intended vehicle cut-in behavior and retaining evidence during traffic monitoring. A vehicle identification result and a lane line identification result are first obtained in real time from the live video data collected by a road monitoring camera; each identified vehicle is then checked in real time against the lane line identification result for line pressing; for each line-pressing vehicle, the body direction and the lane to be entered are estimated in real time from the corresponding vehicle image; finally, when the vehicle's body marking frame is judged, according to the vehicle identification result, to intersect the body marking frames of the front and rear vehicles in the lane to be entered, intended cut-in behavior is determined and the corresponding decision evidence is retained. Identification and evidence retention are thus timed autonomously, so that measures can be taken promptly, while the wasted time and labor and the inconsistent identification standards caused by human factors are avoided, which facilitates practical application and popularization.
In one possible design, performing real-time vehicle identification on the live video data using a target detection algorithm to obtain a vehicle identification result includes:
importing the live video images in the live video data into a pre-trained vehicle recognition model based on the YOLO v4 target detection algorithm to obtain the vehicle identification result, wherein the vehicle identification result comprises at least one recognized vehicle and a vehicle body marking frame for each of the at least one vehicle.
In one possible design, performing real-time lane line identification on the live video data using a lane line identification algorithm to obtain a lane line identification result includes:
converting live video images in the live video data into gray-scale images in real time;
applying Gaussian filtering to the gray-scale image in real time to obtain a denoised image;
performing edge detection on the denoised image in real time to obtain an edge detection result image containing edge pixels, wherein the edge pixels serve as lane line pixels;
masking the edge detection result image in real time according to a preset region of interest to obtain a new edge detection result image containing only the edge pixels within the region of interest;
applying a Hough transform to the new edge detection result image in real time to obtain at least one straight line segment forming a lane line;
fitting a continuous lane line in real time from the mean slope and mean intercept of the at least one straight line segment;
overlaying the lane line onto the live video image in real time to obtain the lane line identification result.
In one possible design, judging in real time for each of the at least one vehicle whether the vehicle presses a lane line according to the lane line identification result includes:
for each of the at least one vehicle, determining a corresponding in-frame diagonal segment in real time from the corresponding vehicle body marking frame;
and judging in real time for each vehicle, according to the lane line identification result, whether an identified lane line intersects the corresponding in-frame diagonal segment; if so, the vehicle is judged to be line-pressing, otherwise it is judged not to be line-pressing.
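A minimal sketch of this diagonal-intersection test; the text names a single in-frame diagonal, so checking both diagonals of the body marking frame is a conservative assumption made here:

```python
def _orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p)."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])


def segments_intersect(a1, a2, b1, b2):
    """True if segment a1-a2 properly crosses segment b1-b2."""
    d1, d2 = _orient(b1, b2, a1), _orient(b1, b2, a2)
    d3, d4 = _orient(a1, a2, b1), _orient(a1, a2, b2)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))


def is_line_pressing(body_box, lane_p1, lane_p2):
    """A vehicle is judged line-pressing when the fitted lane line
    (given as two endpoints) crosses an in-frame diagonal of its
    (x1, y1, x2, y2) body marking frame."""
    x1, y1, x2, y2 = body_box
    return (segments_intersect((x1, y1), (x2, y2), lane_p1, lane_p2)
            or segments_intersect((x1, y2), (x2, y1), lane_p1, lane_p2))
```

Testing the diagonals rather than the box edges means a lane line that merely grazes a corner of the frame does not count as pressing, which matches the claim's wording.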
In one possible design, estimating in real time, for each line-pressing vehicle among the at least one vehicle, the corresponding vehicle body direction from the corresponding vehicle image includes:
for each line-pressing vehicle among the at least one vehicle, importing the corresponding vehicle image into a pre-trained vehicle body direction estimation model based on the Mask R-CNN framework, and outputting the probability that the vehicle's body direction falls into each of a plurality of sector areas, wherein the plurality of sector areas are non-overlapping sectors discretizing the full 360-degree circumferential direction;
For each line-pressing vehicle, the corresponding vehicle body direction angle R is calculated according to the following formula:

R = Σ_{k=1}^{K} w_k · median(θ_{loc,k})

wherein K represents the total number of the plurality of sector areas, k is a positive integer less than or equal to K, w_k represents the probability that the body direction of the line-pressing vehicle belongs to the k-th of the sector areas, θ_{loc,k} represents the angular range of the k-th sector area, and median(·) represents the median function.
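The body direction angle is thus a probability-weighted sum of each sector's median angle, which can be sketched as follows (representing a sector's angular range as a list of its boundary angles in degrees is an assumption):

```python
import statistics


def body_direction_angle(sector_probs, sector_ranges):
    """R = sum_k w_k * median(theta_loc_k): the probability-weighted
    sum of the median angle of each sector area.

    sector_probs  -- w_k, one probability per sector (model output)
    sector_ranges -- the angular range of each sector, given here as a
                     list of its boundary angles in degrees, e.g.
                     [0, 45] (this representation is an assumption)
    """
    return sum(w * statistics.median(rng)
               for w, rng in zip(sector_probs, sector_ranges))
```

With eight 45-degree sectors, probability mass split evenly between the first two sectors yields the angle midway between their median angles (22.5° and 67.5°), i.e. 45°.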
In one possible design, for a certain line-pressing vehicle among the at least one vehicle, retaining the corresponding cut-in decision evidence includes:
identifying the license plate number of the line-pressing vehicle from its vehicle image using a license plate number recognition algorithm;
and taking the license plate number of the line-pressing vehicle, together with the live video data collected during a target period, as the vehicle's cut-in decision evidence, and sending a violation report message carrying that evidence to a violation reporting platform for evidence preservation, wherein the target period comprises the vehicle's line-pressing period and a fixed-length adjacent period immediately following it.
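The evidence bundle and target period can be sketched as follows; the field names, the epoch-second timestamps and the 10-second adjacent period are illustrative assumptions, since the text fixes only the content (plate number plus video over the pressing period and a fixed-length period after it):

```python
from dataclasses import dataclass

ADJACENT_PERIOD_S = 10.0  # fixed-length period after line pressing (assumed value)


@dataclass
class CutInEvidence:
    """Evidence bundle carried by the violation report message
    (field names are assumptions; the text fixes only the content)."""
    plate_number: str
    video_start: float   # start of the line-pressing period, epoch seconds
    video_end: float     # end of the pressing period plus the adjacent period


def build_evidence(plate, press_start, press_end):
    """Target period = the line-pressing period plus a fixed-length
    adjacent period immediately after it."""
    return CutInEvidence(plate, press_start, press_end + ADJACENT_PERIOD_S)
```

Extending the clip past the pressing period is what captures the actual push into the queue, so the evidence shows the completed cut-in rather than only the line contact.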
In one possible design, identifying the license plate number of the line-pressing vehicle from its vehicle image using a license plate number recognition algorithm includes:
performing rectangular license plate contour detection on the vehicle image of the line-pressing vehicle to obtain a license plate marking frame;
cropping a license plate image from the vehicle image of the line-pressing vehicle according to the license plate marking frame;
and performing character recognition on the license plate image using the character recognition package pytesseract to obtain a character string, which is taken as the license plate number of the line-pressing vehicle.
In a second aspect, a device for identifying intended vehicle cut-in behavior is provided, comprising a data acquisition module, a vehicle identification module, a lane line identification module, a line pressing judgment module, a lane determination module and a behavior confirmation module;
the data acquisition module is configured to acquire live video data collected in real time by a road monitoring camera;
the vehicle identification module is communicatively connected to the data acquisition module and is configured to perform real-time vehicle identification on the live video data using a target detection algorithm to obtain a vehicle identification result, wherein the vehicle identification result comprises at least one identified vehicle and a vehicle body marking frame for each of the at least one vehicle;
the lane line identification module is communicatively connected to the data acquisition module and is configured to perform real-time lane line identification on the live video data using a lane line identification algorithm to obtain a lane line identification result;
the line pressing judgment module is communicatively connected to both the vehicle identification module and the lane line identification module, and is configured to judge in real time, for each of the at least one vehicle, whether the vehicle presses a lane line according to the lane line identification result, and if so, to crop the corresponding vehicle image from the live video data according to the corresponding vehicle body marking frame;
the lane determination module is communicatively connected to the line pressing judgment module and is configured to estimate, for each line-pressing vehicle among the at least one vehicle, the corresponding vehicle body direction in real time from the corresponding vehicle image, and to determine the corresponding lane to be entered based on that body direction;
the behavior confirmation module is communicatively connected to both the lane determination module and the vehicle identification module, and is configured to judge, for each line-pressing vehicle and according to the vehicle identification result, whether the corresponding vehicle body marking frame intersects the body marking frames of the front and rear vehicles in the corresponding lane to be entered; if so, to determine that the corresponding vehicle exhibits intended cut-in behavior and to retain the corresponding cut-in decision evidence.
In a third aspect, the present invention provides a computer device comprising a memory, a processor and a transceiver communicatively connected in sequence, wherein the memory is adapted to store a computer program, the transceiver is adapted to send and receive messages, and the processor is adapted to read the computer program and perform the method for identifying intended vehicle cut-in behavior according to the first aspect or any possible design of the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium having instructions stored thereon which, when executed on a computer, perform the method for identifying intended vehicle cut-in behavior according to the first aspect or any possible design of the first aspect.
In a fifth aspect, the present invention provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method for identifying intended vehicle cut-in behavior according to the first aspect or any possible design of the first aspect.
Beneficial effects of the above scheme:
(1) The invention creatively provides a data processing scheme for automatically identifying intended vehicle cut-in behavior and retaining evidence during traffic monitoring: a vehicle identification result and a lane line identification result are first obtained in real time from the live video data collected by a road monitoring camera; each identified vehicle is then checked in real time against the lane line identification result for line pressing; for each line-pressing vehicle, the body direction and the lane to be entered are estimated in real time from the corresponding vehicle image; finally, when the vehicle's body marking frame is judged, according to the vehicle identification result, to intersect the body marking frames of the front and rear vehicles in the lane to be entered, intended cut-in behavior is determined and the corresponding decision evidence is retained. Identification and evidence retention are thus timed autonomously, cut-ins can be countered with timely measures, and the wasted time and labor and the inconsistent identification standards caused by human factors are avoided;
(2) Strong cut-in decision evidence containing rich information can be obtained, ensuring the availability of the evidence and facilitating practical application and popularization.
Drawings
In order to illustrate the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings required by the embodiments or the prior-art description are briefly introduced below. The following drawings obviously show only some embodiments of the invention; a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flow chart of the method for identifying intended vehicle cut-in behavior provided by an embodiment of the present application.
Fig. 2 is an exemplary diagram of the positional relationship between a live video image and the region of interest provided by an embodiment of the present application.
Fig. 3 is an exemplary diagram of an intended cut-in identification result provided by an embodiment of the present application.
Fig. 4 is a schematic structural diagram of the device for identifying intended vehicle cut-in behavior provided by an embodiment of the present application.
Fig. 5 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
Detailed Description
The technical solutions of the present invention are described below with reference to the accompanying drawings and the embodiments. It should be noted that the description of these examples is intended to aid understanding of the present invention, but is not intended to limit it.
It should be understood that although the terms first, second, etc. may be used herein to describe various objects, these objects should not be limited by those terms, which serve only to distinguish one object from another. For example, a first object may be referred to as a second object, and similarly a second object may be referred to as a first object, without departing from the scope of the example embodiments of the invention.
It should be understood that the term "and/or" herein merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, A and/or B may represent three cases: A alone, B alone, or both A and B. As another example, A, B and/or C may represent any one of A, B and C, or any combination thereof. The term "/and" herein describes another association relationship, meaning that two relationships may exist; for example, A /and B may represent A and B existing alone, or A and B existing simultaneously. In addition, the character "/" herein generally indicates that the associated objects are in an "or" relationship.
Examples:
as shown in fig. 1, the method for identifying intended vehicle cut-in behavior provided in the first aspect of the present embodiment may be performed by, but is not limited to, a computer device that has certain computing resources and is communicatively connected to a road monitoring camera, for example a road-side edge device, a platform server, a personal computer (Personal Computer, PC; a multipurpose computer whose size, price and performance make it suitable for personal use, including desktop computers, notebook computers, small notebooks, tablet computers and ultrabooks), a smart phone, a personal digital assistant (Personal Digital Assistant, PDA), or a wearable electronic device. As shown in fig. 1, the method may include, but is not limited to, the following steps S1 to S6.
S1, acquiring live video data collected in real time by a road monitoring camera.
In step S1, the road monitoring camera is an existing traffic monitoring camera mainly used to capture behaviors such as driving in the wrong direction, changing lanes over a solid line, and illegal parking. It is generally disposed at one side of a road, or at one side of a point where a main road and an auxiliary road merge. The camera is typically a white cuboid mounted on an inverted-L-shaped white traffic pole at the roadside. The lens view of the road monitoring camera covers a target road area, and the camera collects video frame images of that area in real time to obtain live video data comprising a plurality of consecutive video frame images. In addition, the road monitoring camera can transmit the collected data to the local device in a conventional manner.
S2, performing real-time vehicle identification on the live video data using a target detection algorithm to obtain a vehicle identification result, wherein the vehicle identification result comprises at least one identified vehicle and a vehicle body marking frame for each of the at least one vehicle.
In step S2, the target detection algorithm is an existing artificial intelligence algorithm for recognizing objects in a picture and marking their positions. It may specifically be, but is not limited to, Faster R-CNN (Faster Regions with Convolutional Neural Network features, proposed in 2015 by Kaiming He and others, whose related entries took first place in several ILSVRC and COCO 2015 competition tracks), SSD (Single Shot MultiBox Detector, proposed by Wei Liu at ECCV 2016, one of the currently popular detection frameworks), or YOLO (You Only Look Once, whose basic principle is to divide the input image into a 7x7 grid, predict 2 bounding boxes per grid cell, discard target windows whose confidence falls below a threshold, and finally remove redundant windows by box merging, i.e. non-maximum suppression, to obtain the detection result). Based on such a target detection algorithm, the live video data can be processed in real time to obtain the vehicle identification result.
Specifically, performing real-time vehicle identification on the live video data using a target detection algorithm to obtain a vehicle identification result includes, but is not limited to: importing the live video images in the live video data into a pre-trained vehicle recognition model based on the YOLO v4 target detection algorithm to obtain the vehicle identification result, wherein the vehicle identification result comprises at least one recognized vehicle and a vehicle body marking frame for each of the at least one vehicle. The model structure of YOLO v4 consists of three parts: a backbone network (Backbone), a neck network (Neck) and a head network (Head). The backbone may employ a CSPDarknet53 network (CSP stands for Cross Stage Partial) for feature extraction. The neck consists of an SPP (Spatial Pyramid Pooling) block, which enlarges the receptive field and separates out the most important features, and a PANet (Path Aggregation Network), which ensures that semantic features from higher layers and fine-grained features from lower layers of the backbone are received at the same time. The head is anchor-box based and detects on three feature maps of different sizes, 13x13, 26x26 and 52x52, for detecting large to small objects respectively (the larger 52x52 feature map carries more spatial detail and is therefore used to detect small objects, and vice versa). The vehicle recognition model can be trained in a conventional supervised manner, so that, given an input image, it outputs the vehicle recognition result together with information such as a confidence prediction value.
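As a hedged sketch of how raw YOLO-style detector output can be turned into vehicle body marking frames: the row layout [cx, cy, w, h, objectness, class scores...], the COCO vehicle class ids (2 car, 5 bus, 7 truck) and the 0.5 confidence threshold are assumptions, not details given in the text:

```python
import numpy as np

VEHICLE_CLASS_IDS = {2, 5, 7}  # car, bus, truck in COCO ordering (assumption)


def decode_vehicle_boxes(detections, img_w, img_h, conf_thresh=0.5):
    """Turn raw rows [cx, cy, w, h, objectness, class scores...]
    (all normalised to 0..1) into pixel body marking frames
    (x1, y1, x2, y2, confidence), keeping vehicle classes only."""
    boxes = []
    for row in detections:
        scores = row[5:]
        cls = int(np.argmax(scores))
        conf = row[4] * scores[cls]          # objectness * class score
        if cls in VEHICLE_CLASS_IDS and conf >= conf_thresh:
            cx, cy, w, h = row[:4]
            boxes.append((int((cx - w / 2) * img_w),
                          int((cy - h / 2) * img_h),
                          int((cx + w / 2) * img_w),
                          int((cy + h / 2) * img_h),
                          float(conf)))
    return boxes
```

Filtering to vehicle classes at this stage keeps pedestrians and other detections out of the later line-pressing and cut-in checks.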
S3, carrying out lane line identification real-time processing on the on-site video data by adopting a lane line identification algorithm to obtain a lane line identification result.
In the step S3, since the vehicle intended congestion behavior is a driving behavior that occurs in a lane where a congested vehicle is located, it is necessary to monitor a specific situation in the lane by the recognition result of the lane line. Specifically, the lane line recognition algorithm is adopted to perform the lane line recognition real-time processing on the on-site video data to obtain a lane line recognition result, including but not limited to the following steps S31 to S37.
S31, converting the live video image in the live video data into a gray level image in real time.
In the step S31, the live video image in red, green and blue (RGB) format can be directly converted into a gray image in real time using, but not limited to, the cvtColor function in the cross-platform computer vision library Opencv. The cvtColor function has three interface parameters, which are respectively: the input image, the output image and the format conversion category.
S32, carrying out Gaussian filtering real-time processing on the gray level image to obtain a denoising image.
In the step S32, Gaussian filtering, also called Gaussian blur, may specifically, but not exclusively, be performed on the gray image in real time using the GaussianBlur function in the cross-platform computer vision library Opencv, so as to reject some noise points in the original image (if Gaussian filtering is not applied and the original image is processed directly, some insignificant features in the image cannot be avoided; after Gaussian blur, the less distinct noise points are removed). The five interface parameters of the GaussianBlur function are respectively: the input image, the output image, the Gaussian kernel size, the standard deviation of the Gaussian kernel in the X direction and the standard deviation of the Gaussian kernel in the Y direction, wherein the Gaussian kernel size consists of a width and a height, which may take different values but must each be a positive odd number or 0, while the standard deviations of the Gaussian kernel in the X and Y directions are typically both set to 0 (in which case they are computed automatically from the kernel size).
S33, performing edge detection real-time processing on the denoising image to obtain an edge detection result image containing edge pixel points, wherein the edge pixel points are used as lane line pixel points.
In the step S33, since the lane line and the road surface have obvious boundary characteristics, the detected edge pixel points may be regarded as lane line pixel points. The edge detection real-time processing may specifically, but not exclusively, be performed on the denoising image using the Canny function in the cross-platform computer vision library Opencv. The Canny function has 5 interface parameters, which are respectively: the input image, the output image, a threshold value 1, a threshold value 2 and the aperture parameter of the Sobel operator, wherein the threshold value 1 and the threshold value 2 are used as the basis for judging whether each pixel point is an edge pixel point: a pixel below the threshold value 1 is considered not to be an edge pixel, a pixel above the threshold value 2 is considered to be an edge pixel, and a pixel between the threshold value 1 and the threshold value 2 is considered to be an edge pixel only if it is adjacent to a pixel above the threshold value 2. In addition, the aperture parameter of the Sobel operator generally defaults to 3, i.e., a 3x3 matrix.
S34, performing mask real-time processing on the edge detection result image according to a preset region of interest to obtain a new edge detection result image which only contains edge pixel points in the region of interest.
In the step S34, it is considered that the edge detection result image obtained by edge detection contains much environmental information that is not of interest, so the desired information needs to be extracted with a mask. As shown in fig. 2, considering that the lane lines are generally located in a trapezoidal area below the image, 4 points may be manually set in advance as the four corner points of the trapezoidal area, so as to define the region of interest by the four corner points; the trapezoidal region may specifically, but not exclusively, be drawn using the fillConvexPoly function in the cross-platform computer vision library Opencv. The fillConvexPoly function has a total of 4 interface parameters: a blank canvas (with a size consistent with the original image), the corner point information, the number of polygon vertices and the fill color. After the region of interest is obtained, it can be used as a trapezoidal mask region to perform a bitwise_and operation with the edge detection result image, so as to obtain an edge detection result only within the region of interest; as shown in fig. 2, only the lane line information can then be seen. The bitwise_and operation is an existing operation that performs a bitwise AND on two images, and its corresponding function has 3 interface parameters, which are respectively: the mask map (which contains the region of interest), the original image (i.e., the edge detection result image) and the output map. In addition, it should be noted that the sizes and the numbers of color channels of the three images must be consistent.
S35, carrying out Hough transformation real-time processing on the new edge detection result image to obtain at least one straight line segment for forming the lane line.
In the step S35, since the edge pixels in the new edge detection result image are still independent pixel points not connected into lines, it is necessary to find straight line segments in the image based on the edge pixel points by Hough transform. The Hough transform has 3 existing variants: the standard Hough transform, the multi-scale Hough transform and the cumulative probability Hough transform, wherein the first two use the HoughLines function and the last one uses the HoughLinesP function. Since the execution efficiency of the cumulative probability Hough transform is higher, it is generally preferred, and the present embodiment therefore also employs the cumulative probability Hough transform. The Hough transform converts a line in the Cartesian coordinate system into the polar coordinate system: the set of all straight lines passing through one point in the Cartesian coordinate system corresponds to a sinusoidal curve in the polar coordinate system, and the points where such curves intersect represent pixels lying on the same straight line; the Hough transform finds these intersection points to determine which pixels are on the same straight line. In addition, if the lens of the road monitoring camera is disposed right in front of the road, it is also possible to determine, from the slope relationship between the obtained straight line segments and the longitudinal center line of the image, which straight line segments are left straight line segments for forming the left lane line and which are right straight line segments for forming the right lane line.
S36, fitting in real time according to the slope average value and the intercept average value of the at least one straight line segment to obtain a continuous lane line.
In the step S36, it is considered that the lane lines are painted on the road surface in the form of dashed lines, so the straight line segments obtained by the Hough transform need to be fitted to obtain continuous lane lines. Since the slope and the intercept are common mathematical terms, a complete continuous straight line can be drawn based on the slope average value and the intercept average value of the plurality of straight line segments through the properties of the conventional linear function.
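The fitting in step S36 reduces to averaging the slopes and intercepts of the dashed-line segments and then spanning a chosen y-range with the linear function y = kx + b; a pure-Python sketch with hypothetical segment coordinates:

```python
# (x1, y1, x2, y2) segments as returned by the Hough transform (hypothetical values).
segments = [(30, 180, 50, 120), (55, 105, 70, 60)]

slopes = [(y2 - y1) / (x2 - x1) for x1, y1, x2, y2 in segments]
intercepts = [y1 - s * x1 for (x1, y1, _, _), s in zip(segments, slopes)]
k = sum(slopes) / len(slopes)          # slope average value
b = sum(intercepts) / len(intercepts)  # intercept average value

# Endpoints of the fitted continuous lane line between two chosen image rows:
y_bottom, y_top = 190, 60
line = (int((y_bottom - b) / k), y_bottom, int((y_top - b) / k), y_top)
print(k, b, line)  # -3.0 270.0 (26, 190, 70, 60)
```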
S37, loading the lane lines into the field video image in real time to obtain a lane line identification result.
In the step S37, the lane line may specifically, but not exclusively, be loaded into the live video image in real time using the addWeighted function in the cross-platform computer vision library Opencv. The addWeighted function has 6 interface parameters, which are respectively: the first input image (i.e., the image containing the lane lines), the weight of the first input image, the second input image (i.e., the live video image), the weight of the second input image, a scalar offset (typically set to 0) and the output map.
Thus, the lane line recognition/detection task for each live video image can be completed through the foregoing steps S31 to S37.
S4, judging in real time, for each vehicle in the at least one vehicle, whether the corresponding vehicle is pressing a lane line according to the lane line identification result, and if so, intercepting a corresponding vehicle image from the field video data according to the corresponding vehicle body mark frame.
In the step S4, since the lane line recognition result includes the recognized lane lines, and a vehicle intending to jam in must cross the lane line into the adjacent congested lane, whether there is a vehicle line-pressing condition can be used as a precondition for whether there is a vehicle intending to jam in. Specifically, judging in real time, for each vehicle in the at least one vehicle, whether the corresponding vehicle is pressing a line according to the lane line recognition result includes, but is not limited to, the following steps S41 to S42.
S41, determining corresponding in-frame diagonal line segments according to corresponding vehicle body marking frames in real time for each vehicle in the at least one vehicle.
In the step S41, the in-frame diagonal line segment may be, but is not limited to, a connecting line segment determined by the upper left corner coordinates (xmin, ymin) and the lower right corner coordinates (xmax, ymax).
S42, judging in real time, for each vehicle, whether the identified lane line intersects with the corresponding in-frame diagonal line segment according to the lane line identification result; if so, judging that the corresponding vehicle is pressing the line, otherwise, judging that the corresponding vehicle is not pressing the line.
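The intersection test of steps S41 to S42 can be implemented with a standard cross-product orientation check between the fitted lane line and the box diagonal from (xmin, ymin) to (xmax, ymax); the coordinates below are hypothetical:

```python
def ccw(a, b, c):
    """Cross-product orientation of points a, b, c (sign gives turn direction)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2."""
    return (ccw(p1, p2, q1) * ccw(p1, p2, q2) < 0 and
            ccw(q1, q2, p1) * ccw(q1, q2, p2) < 0)

lane = ((100, 200), (140, 0))    # hypothetical fitted lane line
box_a = ((90, 50), (150, 120))   # body-frame diagonal crossing the lane line
box_b = ((10, 50), (70, 120))    # body-frame diagonal entirely left of it

print(segments_intersect(*lane, *box_a), segments_intersect(*lane, *box_b))  # True False
```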
S5, aiming at each line pressing vehicle in the at least one vehicle, estimating and obtaining a corresponding vehicle body direction in real time according to a corresponding vehicle image, and determining a corresponding lane to be driven in based on the vehicle body direction.
In the step S5, specifically, for each line pressing vehicle in the at least one vehicle, a corresponding vehicle body direction is estimated in real time according to the corresponding vehicle image, including, but not limited to, the following steps S51 to S52.
S51, for each line pressing vehicle in the at least one vehicle, importing the corresponding vehicle image into a vehicle body direction estimation model which is based on the Mask R-CNN framework and is trained in advance, to obtain the probability that the vehicle body direction of the corresponding vehicle belongs to each sector area in a plurality of sector areas, wherein the plurality of sector areas refer to non-overlapping sector areas obtained by discretizing the 360 degrees of the circumferential direction.
In the step S51, the Mask R-CNN architecture is an existing two-stage network framework, in which the first stage scans the image and generates proposals (i.e., regions that may contain a target), and the second stage classifies the proposals and generates bounding boxes and masks. In detail, the Mask R-CNN architecture includes a feature pyramid network (Feature Pyramid Network, FPN), a region proposal network (Region Proposal Network, RPN) and a RoIAlign module. The FPN mainly solves the multi-scale problem in object detection: through simple changes to the network connections, and with essentially no increase in the computation of the original model, it markedly improves the detection of objects at different scales. The RPN is used to generate, over the whole area of the provided feature map, a set of two-dimensional anchor boxes with fixed aspect ratios. The RoIAlign module converts each feature map region defined by a region-of-interest box into a grid of fixed size, and ensures the accuracy of the spatial positions through bilinear interpolation. The plurality of sector areas may specifically, but not exclusively, refer to 72 non-overlapping sector areas obtained by equally discretizing the 360 degrees of the circumferential direction, that is, the angle span of each sector area is 5 degrees. In addition, the vehicle body direction estimation model can be trained in a conventional sample training manner, so that after a vehicle image is input, the probability distribution of the vehicle body direction of the corresponding vehicle over the plurality of sector areas can be output.
S52, calculating, for each line pressing vehicle, the corresponding vehicle body direction angle R according to the following formula:

R = Σ_{k=1}^{K} w_k · median(θ_{loc,k})

wherein K represents the total number of the plurality of sector areas, k represents a positive integer less than or equal to K, w_k represents the probability that the body direction of the line pressing vehicle belongs to the kth sector area among the plurality of sector areas, θ_{loc,k} represents the angular range of the kth sector area, and median() represents the median function.
In the step S52, for example, if the angle range of the kth sector area is 35 to 40 degrees, the value of median(θ_{loc,k}) is 37.5 degrees.
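The formula of step S52 reduces to a probability-weighted sum of sector midpoints (the median of an angle range being its midpoint); a pure-Python sketch with the assumed 72-sector discretization and a toy probability distribution:

```python
K = 72                                              # 72 sectors of 5 degrees each
sectors = [(5 * k, 5 * (k + 1)) for k in range(K)]  # angle range of each sector

def body_direction_angle(w):
    """R = sum_k w_k * median(theta_loc_k), with the median of a range taken as its midpoint."""
    return sum(wk * (lo + hi) / 2.0 for wk, (lo, hi) in zip(w, sectors))

# Toy probability distribution concentrated around 35-45 degrees:
w = [0.0] * K
w[7], w[8] = 0.5, 0.5               # sectors 35-40 and 40-45 degrees
print(body_direction_angle(w))      # 0.5*37.5 + 0.5*42.5 = 40.0
```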
In the step S5, the specific manner of determining the lane to be driven in based on the vehicle body direction may include, but is not limited to, the following: for two lanes located respectively on the left and right sides of the pressed lane line, the lane to which the vehicle body direction is directed may be determined as the lane to be driven in, as shown in fig. 3.
S6, aiming at each line pressing vehicle, judging whether the corresponding vehicle body mark frame is intersected with the vehicle body mark frames of the front and rear vehicles on the corresponding lane to be driven in according to the vehicle identification result, if so, determining that the corresponding vehicle has the action of intention to plug, and reserving corresponding vehicle intention to plug judging evidence.
In the step S6, as shown in fig. 3, since the body marking frame of the line pressing vehicle (i.e., the vehicle A in fig. 3) intersects with the body marking frames of the front and rear vehicles (i.e., the vehicles B and C in fig. 3) on the lane into which it is to be driven, it can be identified that the line pressing vehicle has the behavior of intending to jam in, and the evidence of the intention of the line pressing vehicle to jam in can be retained as a subsequent penalty basis.
Based on the vehicle intention plugging behavior identification method described in the steps S1 to S6, a data processing scheme for automatically identifying the vehicle intention plugging behavior and retaining evidence in the traffic monitoring process is provided: a vehicle identification result and a lane line identification result are obtained through real-time identification on the on-site video data acquired by the road monitoring camera; for each identified vehicle, whether it is pressing a line is identified in real time according to the lane line identification result; then, for each line pressing vehicle, the corresponding vehicle body direction and lane to be driven into are estimated in real time from the corresponding vehicle image; finally, when it is judged according to the vehicle identification result that the corresponding vehicle body mark frame intersects with the vehicle body mark frames of the two vehicles before and after it on the lane to be driven into, it is determined that the corresponding vehicle has the intention plugging behavior, and the corresponding vehicle intention plugging judgment evidence is retained. In this way, the timing of identifying the vehicle intention plugging behavior and retaining evidence can be grasped autonomously, measures can be taken in time, and the time consumption and inconsistent standards of manual identification can be avoided, which facilitates practical application and popularization.
The embodiment further provides a possible design of how to specifically perform evidence preservation, that is, for a certain line pressing vehicle in the at least one vehicle, the evidence of the corresponding vehicle intention jam judgment is preserved, including but not limited to the following steps S61-S62.
S61, recognizing the license plate number of the certain wire pressing vehicle from the vehicle image of the certain wire pressing vehicle by adopting a license plate number recognition algorithm.
In the step S61, the following steps S611 to S613 are specifically, but not exclusively, included: S611, carrying out license plate rectangular outline detection processing on the vehicle image of the certain line pressing vehicle to obtain a license plate marking frame; S612, intercepting a license plate image from the vehicle image of the certain line pressing vehicle according to the license plate marking frame; S613, performing character recognition processing on the license plate image by using the character recognition package pytesseract to obtain a character string, and taking the character string as the license plate number of the certain line pressing vehicle. In the foregoing step S611, the license plate marking frame may be found using, but not limited to, the rectangular outline detection function in the cross-platform computer vision library Opencv, and known information such as the exact size, color and approximate position of the license plate may be combined to improve the detection accuracy. In addition, the character recognition package pytesseract is an existing OCR (Optical Character Recognition) tool.
S62, taking the license plate number of the certain line pressing vehicle and the field video data acquired in the target period as vehicle intention jam judgment evidence of the certain line pressing vehicle, and sending a rule breaking driving report message carrying the vehicle intention jam judgment evidence to a rule breaking driving report platform so as to carry out evidence preservation on the rule breaking driving report platform, wherein the target period comprises the line pressing period of the certain line pressing vehicle and a fixed-length adjacent period positioned behind the line pressing period.
In the step S62, for example, if the line pressing period is 11:12:00 to 11:12:30, the fixed-length adjacent period from 11:12:30 to 11:14:30 (i.e., the fixed length is 2 minutes) may also be used as part of the target period, i.e., the target period is 11:12:00 to 11:14:30.
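The target period arithmetic of the example in step S62 can be checked directly; the date and the 2-minute fixed length are the example's assumptions:

```python
from datetime import datetime, timedelta

press_start = datetime(2023, 1, 1, 11, 12, 0)    # line pressing period start
press_end = datetime(2023, 1, 1, 11, 12, 30)     # line pressing period end
fixed_length = timedelta(minutes=2)              # fixed-length adjacent period

# Target period = line pressing period plus the adjacent period after it.
target_start, target_end = press_start, press_end + fixed_length
print(target_start.time(), target_end.time())    # 11:12:00 11:14:30
```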
Based on the foregoing possible design, conclusive and information-rich vehicle intention jam judgment evidence can be obtained, ensuring the availability of the evidence.
As shown in fig. 4, a second aspect of the present embodiment provides a virtual device for implementing the method for identifying the behavior of the vehicle intended to be plugged according to the first aspect or the possible design of the method, which includes a data acquisition module, a vehicle identification module, a lane line identification module, a line pressing judgment module, a lane determination module, and a behavior confirmation module;
The data acquisition module is used for acquiring on-site video data acquired in real time by the road monitoring camera;
the vehicle identification module is in communication connection with the data acquisition module and is used for carrying out vehicle identification real-time processing on the on-site video data by adopting a target detection algorithm to obtain a vehicle identification result, wherein the vehicle identification result comprises at least one identified vehicle and a vehicle body marking frame of each vehicle in the at least one vehicle;
the lane line identification module is in communication connection with the data acquisition module and is used for carrying out lane line identification real-time processing on the field video data by adopting a lane line identification algorithm to obtain a lane line identification result;
the line pressing judging module is respectively in communication connection with the vehicle identification module and the lane line identification module and is used for judging whether the corresponding vehicle is pressed according to the lane line identification result in real time for each vehicle in the at least one vehicle, if so, the corresponding vehicle image is intercepted from the field video data according to the corresponding vehicle body mark frame;
the lane determining module is in communication connection with the line pressing judging module and is used for estimating corresponding vehicle body directions according to corresponding vehicle images in real time for each line pressing vehicle in the at least one vehicle and determining corresponding lanes to be driven into based on the vehicle body directions;
The behavior confirmation module is respectively in communication connection with the lane determination module and the vehicle identification module, and is used for judging whether the corresponding vehicle body mark frame is intersected with the vehicle body mark frames of the front and rear vehicles on the corresponding lane to be driven according to the vehicle identification result for each line pressing vehicle, if so, determining that the corresponding vehicle has the intention jam behavior, and reserving the corresponding vehicle intention jam judgment evidence.
The working process, working details and technical effects of the foregoing apparatus provided in the second aspect of the present embodiment may refer to the first aspect or may possibly design a method for identifying the vehicle intended plugging behavior, which is not described herein again.
As shown in fig. 5, a third aspect of the present embodiment provides a computer device for executing the vehicle intended plugging behavior recognition method according to the first aspect or the possible design, comprising a memory, a processor and a transceiver, which are connected in communication in this order, wherein the memory is used for storing a computer program, the transceiver is used for receiving and transmitting a message, and the processor is used for reading the computer program, and executing the vehicle intended plugging behavior recognition method according to the first aspect or the possible design. By way of specific example, the Memory may include, but is not limited to, random-Access Memory (RAM), read-Only Memory (ROM), flash Memory (Flash Memory), first-in first-out Memory (First Input First Output, FIFO), and/or first-in last-out Memory (First Input Last Output, FILO), etc.; the processor may be, but is not limited to, a microprocessor of the type STM32F105 family. In addition, the computer device may include, but is not limited to, a power module, a display screen, and other necessary components.
The working process, working details and technical effects of the foregoing computer device provided in the third aspect of the present embodiment may be referred to in the first aspect or may be configured to a vehicle intention plugging behavior recognition method, which is not described herein.
A fourth aspect of the present embodiment provides a computer-readable storage medium storing instructions containing a method for identifying a vehicle's intended plugging behavior as in the first aspect or as may be devised, i.e. having instructions stored thereon which, when run on a computer, perform a method for identifying a vehicle's intended plugging behavior as in the first aspect or as may be devised. The computer readable storage medium refers to a carrier for storing data, and may include, but is not limited to, a floppy disk, an optical disk, a hard disk, a flash Memory, and/or a Memory Stick (Memory Stick), where the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
The working process, working details and technical effects of the foregoing computer readable storage medium provided in the fourth aspect of the present embodiment may be referred to as the first aspect or the possible design of a vehicle intended plugging behavior recognition method, which are not described herein.
A fifth aspect of the present embodiment provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of identifying a vehicle intended plugging behaviour as described in the first aspect or as a possible design. Wherein the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus.
Finally, it should be noted that: the foregoing description is only of the preferred embodiments of the invention and is not intended to limit the scope of the invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A vehicle intended to plug behavior recognition method, characterized by comprising:
acquiring field video data acquired in real time by a road monitoring camera;
performing vehicle identification real-time processing on the on-site video data by adopting a target detection algorithm to obtain a vehicle identification result, wherein the vehicle identification result comprises at least one identified vehicle and a vehicle body mark frame of each vehicle in the at least one vehicle;
carrying out lane line identification real-time processing on the on-site video data by adopting a lane line identification algorithm to obtain a lane line identification result;
Judging whether the corresponding vehicle is in line with each vehicle in the at least one vehicle in real time according to the lane line identification result, if so, intercepting a corresponding vehicle image from the field video data according to a corresponding vehicle body mark frame;
for each line pressing vehicle in the at least one vehicle, estimating and obtaining a corresponding vehicle body direction in real time according to a corresponding vehicle image, and determining a corresponding lane to be driven in based on the vehicle body direction;
and judging whether the corresponding vehicle body mark frames are intersected with the vehicle body mark frames of the front and rear vehicles on the corresponding lane to be driven according to the vehicle identification result, if so, determining that the corresponding vehicles have the intended jam behavior, and retaining the corresponding vehicle intended jam judgment evidence.
2. The method for identifying the vehicle intended to be plugged according to claim 1, wherein the real-time processing of the vehicle identification is performed on the field video data by using a target detection algorithm to obtain a vehicle identification result, comprising:
and importing the live video image in the live video data into a vehicle recognition model which is based on a YOLOv4 target detection algorithm and is trained in advance to obtain a vehicle recognition result, wherein the vehicle recognition result comprises at least one recognized vehicle and a vehicle body marking frame of each vehicle in the at least one vehicle.
3. The method for identifying the vehicle intended to be plugged according to claim 1, wherein the lane line identification real-time processing is performed on the on-site video data by using a lane line identification algorithm to obtain a lane line identification result, comprising:
converting live video images in the live video data into gray-scale images in real time;
carrying out Gaussian filtering real-time processing on the gray level image to obtain a denoising image;
performing edge detection real-time processing on the denoising image to obtain an edge detection result image containing edge pixel points, wherein the edge pixel points are used as lane line pixel points;
performing mask real-time processing on the edge detection result image according to a preset region of interest to obtain a new edge detection result image which only contains edge pixel points in the region of interest;
performing Hough transformation real-time processing on the new edge detection result image to obtain at least one straight line segment for forming a lane line;
fitting in real time according to the slope average value and the intercept average value of the at least one straight line segment to obtain a continuous lane line;
loading the lane lines into the field video image in real time to obtain a lane line identification result.
4. The vehicle intended congestion behavior recognition method according to claim 1, wherein for each of the at least one vehicle, determining whether the corresponding vehicle is line-pressed in real time according to the lane line recognition result includes:
for each vehicle in the at least one vehicle, determining a corresponding in-frame diagonal line segment in real time according to a corresponding vehicle body marking frame;
and judging whether the identified lane line intersects with the corresponding diagonal line segment in the frame according to the lane line identification result in real time for each vehicle, if so, judging that the corresponding vehicle is in line, otherwise, judging that the corresponding vehicle is not in line.
5. The vehicle intended plugging behavior recognition method according to claim 1, wherein, for each line pressing vehicle in the at least one vehicle, a corresponding vehicle body direction is estimated in real time from a corresponding vehicle image, comprising:
for each line pressing vehicle in the at least one vehicle, importing a corresponding vehicle image into a vehicle body direction estimation model which is based on the Mask R-CNN framework and is trained in advance, and outputting the probability that the vehicle body direction of the corresponding vehicle belongs to each sector area in a plurality of sector areas, wherein the plurality of sector areas refer to non-overlapping sector areas obtained by discretizing the 360 degrees of the circumferential direction;
for each line pressing vehicle, a corresponding vehicle body direction angle R is calculated according to the following formula:

R = Σ_{k=1}^{K} w_k · median(θ_{loc,k})

wherein K represents the total number of the plurality of sector areas, k represents a positive integer less than or equal to K, w_k represents the probability that the body direction of the line pressing vehicle belongs to the kth sector area among the plurality of sector areas, θ_{loc,k} represents the angular range of the kth sector area, and median() represents the median function.
6. The vehicle intended plugging behavior recognition method according to claim 1, wherein, for a certain line pressing vehicle of the at least one vehicle, a corresponding vehicle intended plugging determination evidence is retained, comprising:
identifying the license plate number of the certain line pressing vehicle from the vehicle image of the certain line pressing vehicle by adopting a license plate number identification algorithm;
and taking the license plate number of the certain line pressing vehicle and the field video data acquired in the target period as vehicle intention jam judgment evidence of the certain line pressing vehicle, and sending a rule breaking driving report message carrying the vehicle intention jam judgment evidence to a rule breaking driving report platform so as to carry out evidence preservation on the rule breaking driving report platform, wherein the target period comprises the line pressing period of the certain line pressing vehicle and a fixed-length adjacent period positioned behind the line pressing period.
7. The vehicle intention plugging behavior recognition method according to claim 6, wherein identifying the license plate number of the certain line-pressing vehicle from the vehicle image of the certain line-pressing vehicle by adopting a license plate number identification algorithm comprises:
performing license plate rectangular outline detection on the vehicle image of the certain line-pressing vehicle to obtain a license plate marking frame;
intercepting a license plate image from the vehicle image of the certain line-pressing vehicle according to the license plate marking frame;
and performing character recognition on the license plate image by adopting the character recognition package pytesseract to obtain a character string, and taking the character string as the license plate number of the certain line-pressing vehicle.
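The crop step of claim 7 can be shown with a dependency-free sketch (a nested-list image stands in for a numpy array; the OpenCV/pytesseract calls in the trailing comment outline the full pipeline, with threshold values that are illustrative assumptions, not the patent's parameters):

```python
def crop_plate(image, box):
    """Intercept the license-plate region from a vehicle image.

    image is a 2-D list of pixel rows; box = (x, y, w, h) is the license
    plate marking frame produced by rectangular outline detection.
    """
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]

# With OpenCV and pytesseract installed, the full claim-7 pipeline would
# look roughly like this (hedged sketch):
#
#   import cv2, pytesseract
#   gray  = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
#   edges = cv2.Canny(gray, 50, 150)                      # thresholds assumed
#   contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
#                                  cv2.CHAIN_APPROX_SIMPLE)
#   x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
#   plate = img[y:y + h, x:x + w]
#   text  = pytesseract.image_to_string(plate)            # the character string
```

Cropping a 2x2 frame at (x=1, y=2) out of a 5x4 image returns exactly the two rows and two columns inside the marking frame.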
8. A vehicle intention plugging behavior recognition device, characterized by comprising a data acquisition module, a vehicle identification module, a lane line identification module, a line-pressing judging module, a lane determining module and a behavior confirmation module;
the data acquisition module is used for acquiring field video data collected in real time by a road monitoring camera;
the vehicle identification module is communicatively connected with the data acquisition module and is used for performing real-time vehicle identification on the field video data by adopting a target detection algorithm to obtain a vehicle identification result, wherein the vehicle identification result comprises at least one identified vehicle and a vehicle body marking frame of each vehicle among the at least one vehicle;
the lane line identification module is communicatively connected with the data acquisition module and is used for performing real-time lane line identification on the field video data by adopting a lane line identification algorithm to obtain a lane line identification result;
the line-pressing judging module is communicatively connected with the vehicle identification module and the lane line identification module respectively, and is used for judging in real time, for each vehicle among the at least one vehicle and according to the lane line identification result, whether the corresponding vehicle is pressing the line, and if so, intercepting a corresponding vehicle image from the field video data according to the corresponding vehicle body marking frame;
the lane determining module is communicatively connected with the line-pressing judging module and is used for estimating, for each line-pressing vehicle among the at least one vehicle, the corresponding vehicle body direction in real time from the corresponding vehicle image, and determining the corresponding lane to be driven into based on the vehicle body direction;
the behavior confirmation module is communicatively connected with the lane determining module and the vehicle identification module respectively, and is used for judging, for each line-pressing vehicle and according to the vehicle identification result, whether the corresponding vehicle body marking frame intersects the vehicle body marking frames of the front and rear vehicles on the corresponding lane to be driven into, and if so, determining that the corresponding vehicle exhibits intention plugging behavior and retaining the corresponding vehicle intention plugging determination evidence.
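The final confirmation check — whether the line-pressing vehicle's body marking frame intersects a neighbouring frame in the lane to be driven into — is an axis-aligned rectangle overlap test. A sketch (treating overlap with *either* the front or the rear vehicle as sufficient is an assumption; the claim could also be read as requiring both):

```python
def boxes_intersect(a, b):
    """Axis-aligned overlap test between two body marking frames (x1, y1, x2, y2)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def has_intention_plugging(pressing_box, front_box, rear_box):
    """Confirm intention plugging: the line-pressing vehicle's marking frame
    overlaps the frame of the vehicle ahead or behind in the target lane."""
    return (boxes_intersect(pressing_box, front_box)
            or boxes_intersect(pressing_box, rear_box))
```

Two frames that merely sit side by side in image space do not overlap; a cut-in that leaves no gap to the front vehicle produces overlapping frames and triggers the confirmation.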
9. A computer device, comprising a memory, a processor and a transceiver which are communicatively connected in sequence, wherein the memory is used for storing a computer program, the transceiver is used for sending and receiving messages, and the processor is used for reading the computer program and performing the vehicle intention plugging behavior recognition method according to any one of claims 1-7.
10. A computer-readable storage medium having instructions stored thereon which, when run on a computer, cause the computer to perform the vehicle intention plugging behavior recognition method according to any one of claims 1-7.
CN202310054998.2A 2023-02-03 2023-02-03 Vehicle intention plug behavior identification method, device, equipment and storage medium Active CN116052440B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310054998.2A CN116052440B (en) 2023-02-03 2023-02-03 Vehicle intention plug behavior identification method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116052440A true CN116052440A (en) 2023-05-02
CN116052440B CN116052440B (en) 2024-02-02

Family

ID=86121635

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310054998.2A Active CN116052440B (en) 2023-02-03 2023-02-03 Vehicle intention plug behavior identification method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116052440B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170068862A1 (en) * 2014-04-04 2017-03-09 Delphi Technologies, Inc. Method for lane detection
CN112507874A (en) * 2020-12-10 2021-03-16 上海芯翌智能科技有限公司 Method and device for detecting motor vehicle jamming behavior
CN113743316A (en) * 2021-09-07 2021-12-03 北京建筑大学 Vehicle jamming behavior identification method, system and device based on target detection
CN115359438A (en) * 2022-08-25 2022-11-18 京东方科技集团股份有限公司 Vehicle jam detection method, system and device based on computer vision

Also Published As

Publication number Publication date
CN116052440B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
WO2021208275A1 (en) Traffic video background modelling method and system
EP3916627A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN104766071B (en) A kind of traffic lights fast algorithm of detecting applied to pilotless automobile
CN103400150B (en) A kind of method and device that road edge identification is carried out based on mobile platform
CN109711407B (en) License plate recognition method and related device
CN112528878A (en) Method and device for detecting lane line, terminal device and readable storage medium
EP4002268A1 (en) Medical image processing method, image processing method, and device
CN103034983B (en) A kind of defogging method capable based on anisotropic filtering
CN111899515B (en) Vehicle detection system based on wisdom road edge calculates gateway
CN112330593A (en) Building surface crack detection method based on deep learning network
Shaikh et al. A novel approach for automatic number plate recognition
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN115063785B (en) Method and device for positioning license plate in expressway scene by using target recognition model
Chang et al. An efficient method for lane-mark extraction in complex conditions
CN111626241A (en) Face detection method and device
CN111027564A (en) Low-illumination imaging license plate recognition method and device based on deep learning integration
CN116052440B (en) Vehicle intention plug behavior identification method, device, equipment and storage medium
CN117132990A (en) Railway carriage information identification method, device, electronic equipment and storage medium
CN116052090A (en) Image quality evaluation method, model training method, device, equipment and medium
CN110633705A (en) Low-illumination imaging license plate recognition method and device
CN111126248A (en) Method and device for identifying shielded vehicle
CN116403200A (en) License plate real-time identification system based on hardware acceleration
CN112446292B (en) 2D image salient object detection method and system
CN116862920A (en) Portrait segmentation method, device, equipment and medium
CN110321973B (en) Combined vehicle detection method based on vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant