CN111476245A - Vehicle left-turn violation detection method and device, computer equipment and storage medium - Google Patents

Vehicle left-turn violation detection method and device, computer equipment and storage medium

Info

Publication number
CN111476245A
CN111476245A (application CN202010472153.1A)
Authority
CN
China
Prior art keywords
detected
image
target vehicle
vehicle
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010472153.1A
Other languages
Chinese (zh)
Inventor
周康明
王赛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd filed Critical Shanghai Eye Control Technology Co Ltd
Priority to CN202010472153.1A priority Critical patent/CN111476245A/en
Publication of CN111476245A publication Critical patent/CN111476245A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • G06V30/153Segmentation of character regions using recognition of characters or words
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a vehicle left-turn violation detection method and device, computer equipment and a storage medium. The method comprises the following steps: acquiring an image set to be detected, which comprises a plurality of images to be detected of the target vehicle in motion, captured at a preset time interval; determining the position of the target vehicle in each image to be detected based on a preset target license plate number; acquiring the intersection center point position of each image to be detected; acquiring the driving posture of the target vehicle in each image to be detected; and determining whether the target vehicle has committed a left-turn violation according to the driving posture, the target vehicle position and the intersection center point position. The method automates vehicle left-turn violation detection and thereby improves detection efficiency.

Description

Vehicle left-turn violation detection method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for detecting a vehicle left turn violation, a computer device, and a storage medium.
Background
With the rapid development of cities, urban traffic flow keeps increasing, making urban traffic ever harder to monitor. Vehicle violations of all kinds occur at any time and place, posing serious safety hazards to everyone who takes part in road traffic.
A motor vehicle that turns left without passing close to the center point of the intersection is one of the major causes of road traffic accidents.
Conventionally, whether a vehicle's left turn at an intersection is illegal is judged by manually reviewing a group of pictures captured by the intersection camera; this consumes a great deal of manpower and yields low detection efficiency.
Disclosure of Invention
In view of the above, it is necessary to provide a vehicle left-turn violation detecting method, device, computer device and storage medium for solving the above technical problems.
In one aspect, a vehicle left turn violation detection method is provided, the method comprising:
acquiring an image set to be detected; the image set to be detected comprises a plurality of images to be detected of the target vehicle in motion, acquired at a preset time interval;
determining the position of a target vehicle in each image to be detected in the image set to be detected based on a preset target license plate number;
acquiring the intersection center point position of each image to be detected in the image set to be detected;
acquiring the driving posture of the target vehicle in each image to be detected in the image set to be detected;
and determining whether the target vehicle has a left-turn violation according to the driving posture, the position of the target vehicle and the position of the intersection central point.
In another aspect, a vehicle left turn violation detection apparatus is provided, the apparatus comprising:
the image acquisition module is used for acquiring an image set to be detected; the image set to be detected comprises a plurality of images to be detected of the target vehicle in motion, acquired at a preset time interval;
the target acquisition module is used for determining the position of a target vehicle in each image to be detected in the image set to be detected based on a preset target license plate number;
the center point acquisition module is used for acquiring the intersection center point position of each image to be detected in the image set to be detected;
the attitude acquisition module is used for acquiring the driving attitude of the target vehicle in each image to be detected in the image set to be detected;
and the violation judgment module is used for determining whether the target vehicle has a left-turn violation or not according to the driving posture, the position of the target vehicle and the position of the intersection center point.
In another aspect, a computer device is provided, comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring an image set to be detected; the image set to be detected comprises a plurality of images to be detected of the target vehicle in motion, acquired at a preset time interval;
determining the position of a target vehicle in each image to be detected in the image set to be detected based on a preset target license plate number;
acquiring the intersection center point position of each image to be detected in the image set to be detected;
acquiring the driving posture of the target vehicle in each image to be detected in the image set to be detected;
and determining whether the target vehicle has a left-turn violation according to the driving posture, the position of the target vehicle and the position of the intersection central point.
In another aspect, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of:
acquiring an image set to be detected; the image set to be detected comprises a plurality of images to be detected of the target vehicle in motion, acquired at a preset time interval;
determining the position of a target vehicle in each image to be detected in the image set to be detected based on a preset target license plate number;
acquiring the intersection center point position of each image to be detected in the image set to be detected;
acquiring the driving posture of the target vehicle in each image to be detected in the image set to be detected;
and determining whether the target vehicle has a left-turn violation according to the driving posture, the position of the target vehicle and the position of the intersection central point.
According to the vehicle left-turn violation detection method and device, the computer equipment and the storage medium, the position of the target vehicle in each image to be detected of the acquired image set is determined based on a preset target license plate number, so that the vehicle corresponding to that number, namely the target vehicle, can subsequently be detected with precision. The intersection center point position of each image to be detected and the driving posture of the target vehicle in each image to be detected are then acquired, and whether the target vehicle has committed a left-turn violation is determined from the driving posture, the target vehicle position and the intersection center point position. Left-turn violation detection is thus converted into posture and position detection on images, which lends itself to programmatic implementation on a computer, automates vehicle left-turn violation detection, and improves its detection efficiency.
Drawings
FIG. 1 is a diagram of an application environment of a vehicle left-turn violation detection method according to one embodiment;
FIG. 2 is a schematic flow chart illustrating a vehicle left-turn violation detection method according to one embodiment;
FIG. 3 is a schematic flow chart illustrating the process of determining the target vehicle position in an image to be detected according to one embodiment;
FIG. 4 is a schematic flow chart illustrating the process of determining the Nth target vehicle position based on the first target vehicle region according to one embodiment;
FIG. 5 is a schematic flow chart illustrating the process of obtaining the intersection center point position of an image to be detected according to one embodiment;
FIG. 6 is a schematic flow chart illustrating the process of screening and fusing initial lane line positions to obtain target lane line positions according to one embodiment;
FIG. 7 is a schematic flow chart illustrating the process of obtaining the driving posture of the target vehicle in an image to be detected according to one embodiment;
FIG. 8 is a schematic flow chart illustrating the process of determining whether the target vehicle has a left-turn violation based on the driving posture, the target vehicle position and the intersection center point position according to one embodiment;
FIG. 9 is a block diagram of an exemplary vehicle left turn violation detection apparatus;
FIG. 10 is a diagram showing an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The vehicle left-turn violation detection method provided by the application can be applied to the application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The server 104 acquires an image set to be detected acquired by the terminal 102, determines a target vehicle position of a target vehicle in the image set to be detected based on a preset target license plate number, acquires an intersection center point position of each image to be detected in the image set to be detected and a driving posture of the target vehicle, and determines whether the target vehicle has a left-turn violation according to the driving posture, the target vehicle position and the intersection center point position. The terminal 102 may be, but is not limited to, various video monitoring devices such as a road snapshot machine and a camera, and the server 104 may be implemented by an independent server or a server cluster formed by a plurality of servers.
In one embodiment, as shown in fig. 2, a method for detecting a vehicle left-turn violation is provided, which is described by way of example as applied to the server in fig. 1, and comprises the following steps:
s201, obtaining an image set to be detected.
The image set to be detected comprises a plurality of images to be detected of the target vehicle in motion, acquired at a preset time interval.
The image to be detected may be an image captured by an image capturing device such as a road snapshot machine installed on a road, or a video frame from a video captured by a road monitoring camera. The preset time interval may be uniform or non-uniform; it ensures that the target vehicle occupies different positions in the acquired images to be detected, which facilitates the subsequent recognition of the vehicle's driving posture. The target vehicle is the vehicle that the front end of the system needs to capture and identify.
Specifically, the computer device may receive, as the image set to be detected, a plurality of images captured of the target vehicle by the road snapshot machine at a preset time interval such as 5 s, or a plurality of video frames 5 s apart from a video of the target vehicle acquired by the monitoring camera. The image set to be detected may also be obtained directly from a road supervision database of the traffic management department.
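As one way to realize the fixed-interval sampling just described, the sketch below computes which frames of a monitoring video to keep, given the camera frame rate. The function name and the 5 s default are illustrative assumptions, not part of the patent.

```python
def frame_indices(total_frames: int, fps: float, interval_s: float = 5.0) -> list:
    """Indices of the video frames sampled every `interval_s` seconds."""
    # Convert the preset time interval into a frame step (at least 1 frame).
    step = max(1, round(fps * interval_s))
    return list(range(0, total_frames, step))
```

For a 30-second clip at 25 fps with a 5 s interval, this yields six frames to form the image set to be detected.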
S202, determining the position of a target vehicle in each image to be detected in the image set to be detected based on a preset target license plate number.
The preset target license plate number is the license plate number of the target vehicle received by the computer device. The road monitoring system collects the license plate numbers of all vehicles driving illegally within its jurisdiction and sends them to the computer device as target license plate numbers; the computer device selects one target license plate number at a time for targeted detection.
Specifically, the computer device may extract vehicle regions from each image to be detected according to image features of vehicles, such as contour and color features, each vehicle region representing one vehicle. From each vehicle region it further extracts a license plate region according to image features of license plates, namely shape, color and texture features, typically a blue quadrilateral bearing character textures, each license plate region representing one license plate. From each license plate region it then extracts the license plate number according to image features of characters such as Chinese characters and Arabic numerals, for example color and texture features, typically white character textures. The computer device matches each obtained license plate number against the target license plate number character by character, takes the vehicle whose license plate number matches the target license plate number as the target vehicle, and takes the position of the vehicle region representing the target vehicle in the corresponding image to be detected as the target vehicle position.
S203, acquiring the intersection central point position of each image to be detected in the image set to be detected.
The image to be detected is an image of a traffic intersection, such as a crossroad or a T-shaped intersection, obtained by the image acquisition device, and the intersection center point position is the position, within the image to be detected, of the center point of that traffic intersection. Because an image acquisition device is generally fixed at one traffic intersection, the intersection center point occupies the same position in every image to be detected acquired by the same device.
Specifically, the intersection center point position may be stored in the image acquisition device in advance in the form of a structured file. When the computer device receives the image set to be detected acquired by the image acquisition device, it can correspondingly retrieve the intersection center point position stored in that device.
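The patent says only that the center point is pre-stored "in the form of a structured file"; the sketch below assumes a JSON layout with an `intersection_center` key, which is purely an illustrative choice.

```python
import json

def load_center_point(config_text: str) -> tuple:
    """Parse a pre-stored structured file (assumed JSON) and return
    the intersection center point as pixel coordinates (x, y)."""
    cfg = json.loads(config_text)
    pt = cfg["intersection_center"]
    return (pt["x"], pt["y"])
```

A device storing `{"intersection_center": {"x": 960, "y": 540}}` would thus report the center of a 1920 × 1080 frame.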
And S204, acquiring the driving posture of the target vehicle in each image to be detected in the image set to be detected.
The driving posture is the driving state of the vehicle and may include turning left, turning right, going straight, making a U-turn, and the like.
Specifically, the computer device may classify the tires of the target vehicle with a tire detection model, i.e., a classification model trained on tire sample images at different inclination angles, divided into three inclination directions: leaning left, leaning right, and not leaning. The computer device inputs the image of the target vehicle into the tire detection model and obtains the inclination direction of each tire as the classification result, then determines the driving posture of the target vehicle from these per-tire results. For example, when all tires of the target vehicle share the same inclination direction, the driving posture is going straight; when the tires do not all share the same inclination direction and a rightward lean is present, the driving posture is a right turn; when the tires do not all share the same inclination direction, a leftward lean is present, and the inclination angle difference between tires is smaller than a preset threshold, the driving posture is a left turn; and when the tires do not all share the same inclination direction, a leftward lean is present, and the inclination angle difference between tires is greater than or equal to the preset threshold, the driving posture is a U-turn.
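The tire-based posture rules above can be sketched as a small decision function. The label strings, the 20-degree default threshold, and the precedence given to the right-lean check are assumptions made for illustration; the patent does not fix these values.

```python
def driving_posture(tilts, angles, angle_thresh=20.0):
    """Map per-tire tilt labels ('left'/'right'/'none') and tilt angles
    (degrees) to a driving posture, following the rules in the text."""
    if len(set(tilts)) == 1:          # all tires share one direction -> straight
        return "straight"
    if "right" in tilts:              # mixed directions with a right lean -> right turn
        return "right_turn"
    spread = max(angles) - min(angles)
    if "left" in tilts and spread < angle_thresh:
        return "left_turn"            # left lean, small angle spread -> left turn
    return "u_turn"                   # left lean, large angle spread -> U-turn
```

In this sketch only the left-turn label is passed on to the violation judgment of S205.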
S205, determining whether the target vehicle has a left-turn violation according to the driving posture, the position of the target vehicle and the position of the intersection center point.
Specifically, if the classification result shows that the driving posture of the target vehicle is not a left turn (i.e., is some other posture), the computer device determines that no left-turn violation exists. If the driving posture is a left turn, the computer device further computes, for each image to be detected, the distance between the target vehicle position and the intersection center point position, obtaining a distance set. It then judges whether the distance set contains a case in which the distance in an image captured later is smaller than the distance in an image captured earlier. If not, the computer device determines that the target vehicle has committed a left-turn violation; if so, it further judges whether the minimum distance in the set is smaller than a preset distance threshold. If not, the computer device determines that the target vehicle has committed a left-turn violation; if so, it determines that the target vehicle has not.
Further, after the step S205 of determining whether the target vehicle has a left turn violation according to the driving posture, the target vehicle position and the intersection center point position, the method further includes:
and feeding back the detection result of the existence/nonexistence of the left-turn violation of the target vehicle, continuously returning to execute S201, and detecting whether the next target vehicle has the left-turn violation.
In this embodiment, the computer device determines the position of the target vehicle in each image to be detected of the acquired image set based on a preset target license plate number, so that the vehicle corresponding to that number, namely the target vehicle, can subsequently be detected with targeted precision. It acquires the intersection center point position of each image to be detected and the driving posture of the target vehicle in each image to be detected, and determines whether the target vehicle has committed a left-turn violation from the driving posture, the target vehicle position and the intersection center point position. Left-turn violation detection is thus converted into posture and position detection on images, which lends itself to programmatic implementation on a computer, automates vehicle left-turn violation detection, and improves its detection efficiency.
In an embodiment, the image set to be detected comprises a first image to be detected through an Nth image to be detected, acquired in sequence at the preset time interval, where N is a natural number greater than or equal to 2; the larger N is, the more accurate the resulting detection. The target vehicle position comprises a first target vehicle position of the target vehicle in the first image to be detected through an Nth target vehicle position of the target vehicle in the Nth image to be detected. As shown in fig. 3, the step S202 of determining the target vehicle position in each image to be detected in the image set to be detected based on a preset target license plate number includes:
s301, performing vehicle detection on the first image to be detected in the image set to be detected by adopting a preset vehicle detection model to obtain at least one first vehicle area in the first image to be detected.
Wherein the vehicle detection model is a classification model for identifying a vehicle type in the image. The vehicle types may include various types of motor vehicles, such as passenger cars, trucks, cars, and the like.
Specifically, the computer device takes a large number of images including different vehicle types as vehicle samples in advance, and labels the vehicle types of the vehicles in each vehicle sample, for example, the passenger car in the vehicle sample is labeled as a passenger car, the truck is labeled as a truck, and the car is labeled as a car, so as to obtain a labeled vehicle sample. And training the marked vehicle sample to obtain the vehicle detection model. And inputting the acquired first image to be detected into the vehicle detection model by the computer equipment, and performing classification detection on the vehicle types in the image to obtain at least one first vehicle area in the first image to be detected.
Further, the pixel size of each first vehicle region may be obtained, and any first vehicle region smaller than a preset pixel-size threshold, such as 20 × 20, may be removed; this refines the set of first vehicle regions and cuts unnecessary subsequent processing time.
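The size filter above is a one-liner once a box format is fixed; the `(x1, y1, x2, y2)` corner representation assumed below is an illustrative choice, not specified by the patent.

```python
def filter_regions(boxes, min_w=20, min_h=20):
    """Drop detected vehicle boxes smaller than min_w x min_h pixels.
    Each box is an (x1, y1, x2, y2) corner tuple (assumed format)."""
    return [(x1, y1, x2, y2) for (x1, y1, x2, y2) in boxes
            if (x2 - x1) >= min_w and (y2 - y1) >= min_h]
```

Applied before license plate detection, this discards distant vehicles too small to carry a legible plate.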
S302, license plate detection is carried out on each first vehicle area by adopting a preset license plate detection model, and a first license plate area of each first vehicle area is obtained.
The license plate detection model is a classification model used for recognizing license plates in images.
Specifically, the computer device takes a large number of images comprising license plates of different color types as license plate samples in advance, and carries out license plate labeling on the license plates in each license plate sample to obtain a labeled license plate sample. And training the marked license plate sample to obtain the license plate detection model. And inputting the acquired image of each first vehicle area into the license plate detection model by the computer equipment, and performing classification detection on license plate types in the image to obtain a first license plate area of each first vehicle area.
S303, performing character recognition on each first license plate area by adopting a preset character recognition model to obtain a first license plate number of each first license plate area.
Wherein the character recognition model is a classification model for recognizing different characters in an image. The characters include chinese, letters, and arabic numerals.
Specifically, the computer device uses a large number of images including different characters as character samples in advance, labels the corresponding characters in each character sample, such as marking the character "shanghai" as "shanghai", marking the character "a" as "a", marking the character "1" as "1", and obtaining the labeled character samples. And training the marked character sample to obtain the character recognition model. And inputting the acquired image of each first license plate area into the character recognition model by the computer equipment, and performing classification detection on characters in the image to obtain a first license plate number of each first license plate area.
S304, according to the target license plate number and the first license plate number, acquiring the first license plate area matched with the target license plate number in the first image to be detected as the first target vehicle area, and taking the position of the center point of the first target vehicle area in the first image to be detected as the first target vehicle position.
The first vehicle region represents a vehicle in the first image to be detected and may be the region enclosed by the outline of that vehicle, or a region of regular shape, such as a rectangle, that contains the whole vehicle. The first target vehicle region represents the target vehicle in the first image to be detected and, likewise, may be the region enclosed by the outline of the target vehicle or a region of regular shape, such as a rectangle, that contains the whole target vehicle.
Specifically, the computer device obtains a plurality of first license plate numbers in the first image to be detected through the character recognition model and matches each first license plate number against the target license plate number, specifically by comparing each character of a first license plate number with the character at the corresponding position of the target license plate number. If, among the obtained first license plate numbers, there is one whose characters are identical to those of the target license plate number at every position, that is, the target license plate number matches a first license plate number, the computer device takes the first vehicle region corresponding to that first license plate number as the first target vehicle region. Because the first image to be detected is captured by the image acquisition device while the target vehicle is at the stop line, where the target vehicle appears clearest, the target vehicle can generally be detected; nevertheless, the possibility that no number matches, owing to the limited accuracy of the character recognition model, cannot be ruled out.
Therefore, if none of the obtained first license plate numbers is identical, character for character, to the target license plate number, that is, the target license plate number matches no first license plate number, the computer device determines that the target vehicle is absent from the first image to be detected, returns to S201 to re-acquire an image set to be detected, and continues vehicle left-turn violation detection with the next target license plate number.
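The match-or-restart logic of S304 can be sketched as a lookup over the recognized plates of one image. The dictionary shape and function name are assumptions; a `None` return stands for the "target vehicle absent, return to S201" branch.

```python
def find_target_region(plate_to_box, target_number):
    """Given recognized plate numbers mapped to their vehicle boxes for
    one image, return the box whose plate matches the target number
    exactly (character by character), or None if no plate matches."""
    for number, box in plate_to_box.items():
        if number == target_number:       # exact positional character match
            return box
    return None
```

The caller treats a `None` result as a signal to re-acquire images and move on to the next target license plate number.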
S305, determining the Nth target vehicle position according to the first target vehicle area.
Specifically, the computer device may obtain the Nth vehicle regions in the Nth image to be detected with the method of S301, extract image features, such as contour and color features, of the first target vehicle region as the target features, and extract image features, such as contour and color features, of each Nth vehicle region as comparison features. The computer device computes the feature similarity between each comparison feature and the target features, takes the Nth vehicle region whose comparison feature has the greatest similarity as the Nth target vehicle region, and takes the position of the geometric center point of the Nth target vehicle region in the Nth image to be detected as the Nth target vehicle position.
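The highest-similarity selection above can be sketched with plain feature vectors. Cosine similarity is an assumed metric here; the patent's vehicle similarity model (a learned ReID network, introduced later) would supply the scores in practice.

```python
import math

def most_similar_region(target_feat, candidate_feats):
    """Return the index of the candidate feature vector most similar
    (by cosine similarity) to the target vehicle's feature vector."""
    def cos(a, b):
        num = sum(x * y for x, y in zip(a, b))
        den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return num / den if den else 0.0
    sims = [cos(target_feat, f) for f in candidate_feats]
    return max(range(len(sims)), key=sims.__getitem__)
```

The chosen index identifies the Nth target vehicle region among the detected Nth vehicle regions.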
In this embodiment, the computer device obtains the vehicle region in each image to be detected through the vehicle detection model, the license plate region in each vehicle region through the license plate detection model, and the license plate number of each license plate region through the character recognition model. These three models are deep-learning detection networks trained on large numbers of samples of their respective types. The vehicle detection model accurately recognizes and classifies the vehicle regions in each image to be detected, supplying an accurate object for the subsequent license plate detection and so improving its accuracy; the license plate detection model accurately recognizes and classifies the license plate region within each vehicle region, supplying an accurate object for the subsequent recognition and extraction of the characters on the plate and so improving their accuracy; and the character recognition model accurately recognizes and classifies the characters in each license plate region, supplying an accurate object for the subsequent matching against the target license plate number and so improving the accuracy of number matching. Layer by layer, the three models raise recognition and classification accuracy at each of the three detection stages, and thereby raise the accuracy of the whole vehicle left-turn violation detection.
In one embodiment, as shown in fig. 4, determining the Nth target vehicle position according to the first target vehicle region in S305 includes:
S401, performing vehicle detection on the (i + 1)th image to be detected in the image set to be detected by using the vehicle detection model to obtain at least one (i + 1)th vehicle region in the (i + 1)th image to be detected.
Wherein i is a natural number from 1 to N-1 in sequence.
Taking N = 3 as an example, i takes the values 1 and 2 in sequence. The computer device acquires an image set to be detected formed by three images to be detected captured at preset time intervals and performs left-turn violation detection on the target vehicle, where the image set to be detected includes a first image to be detected, a second image to be detected and a third image to be detected, and the target vehicle position includes a first target vehicle position, a second target vehicle position and a third target vehicle position.
Specifically, the computer device performs vehicle detection on a second image to be detected in the image set to be detected by using the vehicle detection model to obtain at least one second vehicle region in the second image to be detected, and performs vehicle detection on a third image to be detected in the image set to be detected by using the vehicle detection model to obtain at least one third vehicle region in the third image to be detected.
S402, inputting the ith target vehicle area of the target vehicle in the ith image to be detected and each (i + 1) th vehicle area into a vehicle similarity model to obtain an ith similarity set.
Wherein the ith similarity set includes an ith similarity between the ith target vehicle region and each of the (i + 1) th vehicle regions.
The vehicle similarity model is a re-identification (ReID) model based on metric learning, through which the network learns to output the similarity between two vehicle images.
Specifically, the computer device takes a large number of vehicle image pairs with different similarities as training samples in advance and labels each pair with its corresponding similarity; for example, two vehicle images with 90% similarity are labeled 90%, two with 50% similarity are labeled 50%, and two with 30% similarity are labeled 30%, obtaining labeled training samples. The vehicle similarity model is obtained by training on the labeled training samples. The computer device inputs the image of the first target vehicle region in the acquired first image to be detected and the image of each second vehicle region into the vehicle similarity model to obtain a first similarity between the image of the first target vehicle region and the image of each second vehicle region, forming a first similarity set.
S403, judging whether the ith similarity in the ith similarity set is greater than a similarity threshold.
If not, returning to execute the step of obtaining the image set to be detected;
if so, taking the position of the central point of the (i + 1) th vehicle area corresponding to the maximum (i) th similarity in the (i + 1) th image to be detected as the (i + 1) th target vehicle position.
Specifically, the computer device obtains the largest first similarity value in the first similarity set, and compares the largest first similarity value in the first similarity set with a preset similarity threshold value to determine whether the largest first similarity value in the first similarity set is greater than the similarity threshold value. If not, the computer device determines that the target vehicle is not detected in the second image to be detected, determines that the target vehicle does not have a left turn violation, further feeds back a detection result that the target vehicle does not have the left turn violation, and continuously returns to execute S201 to detect whether the next target vehicle has the left turn violation. If yes, the computer device determines that the target vehicle is detected in the second image to be detected, and takes the position of the geometric center point of the second vehicle area corresponding to the maximum first similarity in the second image to be detected as the position of the second target vehicle.
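The S402/S403 decision logic — take the center of the most similar candidate region if its similarity clears the threshold, otherwise signal that detection should restart with a new image set — can be sketched as follows. The 0.6 threshold and the function name are assumptions for illustration:

```python
def next_target_position(similarities, candidate_centers, threshold=0.6):
    """Given the ith similarity set and the center points of the (i+1)th
    vehicle regions, return the (i+1)th target vehicle position, or None to
    signal that the target vehicle was not detected in the (i+1)th image."""
    best = max(range(len(similarities)), key=lambda k: similarities[k])
    if similarities[best] <= threshold:
        return None  # target lost: caller returns to acquiring a new image set
    return candidate_centers[best]
```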
The computer device determines the third target vehicle position according to the second target vehicle region; for the specific process, reference may be made to S402 and S403.
In this embodiment, since the first image to be detected is the sharpest image of the target vehicle, the computer device uses the region corresponding to the target vehicle in the first image to be detected, that is, the first target vehicle region, as the basis for determining the position of the target vehicle in each image to be detected, improving the accuracy of the target vehicle position. Because the target vehicle is always in motion, the computer device determines the (i + 1)th target vehicle region from the similarity between the ith target vehicle region of the target vehicle in the ith image to be detected and each (i + 1)th vehicle region, so as to obtain the (i + 1)th target vehicle position, up to the Nth target vehicle position. That is, the computer device determines the region of the target vehicle in a later image to be detected from its region in the previous image to be detected; because the time interval between adjacent images to be detected is small, the driving posture of the target vehicle does not change much, and the region of the target vehicle in the later image can be accurately obtained from the similarity between the target vehicle region in the previous image and the vehicle regions in the later image, improving the accuracy of the obtained target vehicle position. The vehicle similarity model is a deep-learning-based detection model, a network model obtained by training on a large number of training samples with different similarities; it can accurately recognize and classify the similarity between vehicle regions in the images to be detected, accurately determining the region where the target vehicle is located and further improving the accuracy of the obtained target vehicle position.
In an embodiment, as shown in fig. 5, the step S203 of acquiring the intersection center point position of each image to be detected in the image set to be detected includes:
S501, performing lane line segmentation on each image to be detected by using a lane line segmentation model to obtain a plurality of initial lane line positions in each image to be detected.
The lane line segmentation model is used for segmenting and extracting a lane line in each image to be detected and acquiring the position of the lane line in the image to be detected.
Specifically, the computer device takes a large number of road marking lines of different types, such as stop lines, lane lines and zebra crossings, as marking line samples in advance, labels the type of the marking line in each sample to obtain labeled marking line samples, and trains on the labeled marking line samples to obtain the lane line segmentation model. The computer device inputs each acquired image to be detected into the lane line segmentation model for classification detection of the lane lines, obtaining a plurality of initial lane line positions in each image to be detected. In this embodiment, the computer device inputs the first image to be detected into the lane line segmentation model to obtain first initial lane line positions, inputs the second image to be detected into the lane line segmentation model to obtain second initial lane line positions, and inputs the third image to be detected into the lane line segmentation model to obtain third initial lane line positions.
S502, screening and fusing the initial lane line positions according to lane line pixels to obtain the same target lane line position in each image to be detected.
Specifically, the computer device may further screen the initial lane line positions in each obtained image to be detected according to lane line pixel color, removing noise to obtain the screened lane line positions in each image to be detected. For example, since the lane lines on both sides of an ordinary road are white, the computer device may obtain the lane line pixel color corresponding to each lane line position in each image to be detected and determine whether it is white. If so, the lane line position corresponding to the white lane line pixels is retained as a screened lane line position in the image to be detected; if not, the lane line position corresponding to the non-white lane line pixels is removed. Because vehicles and pedestrians occlude the lane lines to different degrees in different images to be detected, the computer device further performs image fusion of the lane lines at the same position across the screened lane line positions in each image to be detected to obtain relatively complete lane lines, and takes their positions in each image to be detected as the target lane line positions. For example, the computer device performs image fusion at the same position on the lane lines obtained by segmentation and extraction from the first, second and third images to be detected, obtaining fused lane lines. Because the three images to be detected are captured by an image acquisition device fixedly arranged at the same position, the position of a fused lane line in any image to be detected, such as the first image to be detected, can be taken as the target lane line position in the first, second and third images to be detected.
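The color screening and cross-image fusion described above can be sketched with boolean lane-line masks over the source images. The near-white test (`v_min`) and the function names are assumptions for illustration:

```python
import numpy as np

def screen_white_lines(masks, images, v_min=180):
    """Keep only lane-line masks whose pixels in the source image are
    near-white (every color channel high), discarding colored noise."""
    kept = []
    for mask, img in zip(masks, images):
        pixels = img[mask]  # (K, 3) colors under the lane-line mask
        if pixels.size and pixels.min(axis=1).mean() >= v_min:
            kept.append(mask)
    return kept

def fuse_masks(mask_sets):
    """Union of the same lane line's masks across the N images, so a segment
    occluded by a vehicle in one image is completed from the others."""
    fused = mask_sets[0].copy()
    for m in mask_sets[1:]:
        fused |= m
    return fused
```

The union works because the camera is fixed, so the same lane line occupies the same pixel positions in all N images.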
S503, in any image to be detected, acquiring a first central line between two target lane line positions with the farthest distance in the target lane line positions in the first direction.
S504, a second central line between two target lane line positions which are farthest away in the target lane line positions in the second direction is obtained.
Wherein the first direction intersects the second direction.
And S505, acquiring the position of the intersection point of the first central line and the second central line in any image to be detected, and taking the position as the intersection central point position of each image to be detected in the image set to be detected.
Specifically, the computer device may obtain the center lane lines of the target lane lines in any two different directions, perform straight-line fitting on the two center lane lines by using an OpenCV function, and find the position of the intersection point of the two fitted straight lines in any image to be detected as the intersection center point position. For example, in any image to be detected, such as the first image to be detected, the computer device obtains the center line between the two farthest-apart target lane line positions in a first direction, such as the target lane line positions on the left side of the intersection, as the first center line, and obtains the center line between the two farthest-apart target lane line positions in a second direction, such as the target lane line positions on the lower side of the intersection, as the second center line. The computer device acquires the position of the intersection point of the first center line and the second center line in the first image to be detected as the intersection center point position in the first, second and third images to be detected.
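Steps S503-S505 amount to fitting two straight lines and intersecting them. The patent fits lines with an OpenCV function; the self-contained sketch below substitutes a numpy least-squares fit, and assumes neither center line is vertical (the y = a*x + b form cannot represent a vertical line):

```python
import numpy as np

def fit_line(points):
    """Least-squares fit of y = a*x + b through the given (x, y) points."""
    x, y = np.asarray(points, dtype=float).T
    a, b = np.polyfit(x, y, 1)
    return a, b

def line_intersection(line1, line2):
    """Intersection point of two non-parallel y = a*x + b lines, used as the
    intersection center point position."""
    a1, b1 = line1
    a2, b2 = line2
    x = (b2 - b1) / (a1 - a2)
    return x, a1 * x + b1
```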
In this embodiment, the computer device uses a lane line segmentation model to extract and segment the lane lines in the images to be detected. The lane line segmentation model is a deep-learning-based detection model, a network model obtained by training with a large number of road marking lines of different types as training samples, and can accurately segment and extract the initial lane line positions in each image to be detected. By screening and fusing the obtained initial lane line positions according to lane line pixels, the computer device denoises and completes the lane lines, obtains more accurate target lane line positions, and accurately determines the intersection center point position from the lane center lines in the two directions, providing a reliable data basis for subsequently judging whether the target vehicle has a left-turn violation and thereby improving the accuracy of vehicle left-turn violation detection.
In an embodiment, as shown in fig. 6, the step S502 of screening and fusing the initial lane line positions according to the lane line pixels to obtain the same target lane line position in each of the images to be detected includes:
S601, obtaining the number of lane line pixels corresponding to each initial lane line position.
S602, judging whether the number of the lane line pixels corresponding to each initial lane line position is larger than a preset pixel threshold value.
If not, removing the initial lane line positions corresponding to the number of the lane line pixels;
if so, retaining the initial lane line positions corresponding to the number of the lane line pixels to obtain the screened lane line positions.
Specifically, the computer device may further screen the initial lane line positions in each obtained image to be detected according to the number of lane line pixels, obtaining the screened lane line positions in each image to be detected. For example, the computer device may obtain the number of lane line pixels corresponding to each initial lane line position in each image to be detected and determine whether that number is greater than the preset pixel threshold. If so, the initial lane line position is retained as a screened lane line position in the image to be detected; if not, the initial lane line position is removed.
S603, obtaining the screened lane line position with the maximum number of lane line pixels corresponding to the same-position lane line across the images to be detected as the target lane line position of each image to be detected.
The computer device further performs image fusion of the lane lines at the same position on the screened lane line positions in each image to be detected to obtain relatively complete lane lines, and takes their positions in each image to be detected as the target lane line positions. For example, the computer device performs lane line segmentation, extraction and screening on the first, second and third images to be detected to obtain the screened lane line positions in each image to be detected. In the first, second and third images to be detected, the lane lines at the same position represent the same lane line. Among the screened lane line positions of the three images to be detected, the computer device obtains the number of lane line pixels of each same-position lane line, that is, the pixel count of the same lane line in the first, second and third images to be detected, and takes the screened lane line position with the maximum number of lane line pixels for that lane line across the three images as the target lane line position for the first, second and third images to be detected.
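A minimal sketch of the S601-S603 pixel-count screening and selection, assuming each candidate lane line is a boolean mask; the threshold value and the names are illustrative assumptions:

```python
import numpy as np

def screen_and_pick(masks_per_image, min_pixels=50):
    """Drop masks of the same lane line whose pixel count is not above the
    preset pixel threshold, then keep the mask with the most pixels across
    the N images as the target lane line position."""
    counts = [int(np.count_nonzero(m)) for m in masks_per_image]
    kept = [(c, m) for c, m in zip(counts, masks_per_image) if c > min_pixels]
    if not kept:
        return None  # this lane line is too occluded/noisy in every image
    return max(kept, key=lambda cm: cm[0])[1]
```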
In this embodiment, the computer device specifically uses the number of lane line pixels to further screen the initial lane line positions: the initial lane line positions whose number of lane line pixels is less than or equal to the preset pixel threshold are removed, and those whose number is greater than the threshold are retained, obtaining the screened lane line positions; further, for the same-position lane line across the images to be detected, the screened lane line position with the maximum number of lane line pixels is taken as the target lane line position of each image to be detected. Using the number of lane line pixels as the basis for screening and fusion converts the image problem of removing lane line noise and completing the lane lines into a mathematical problem of pixel counts, which the computer can obtain directly; the removal of lane line noise and the completion of the lane lines are thus realized by the computer, facilitating the automation of vehicle left-turn violation detection.
In one embodiment, as shown in fig. 7, the step S204 of acquiring the driving posture of the target vehicle in each image to be detected in the image set to be detected includes:
S701, performing posture detection on the target vehicle in each image to be detected in the image set to be detected by using a preset posture detection model to obtain a target posture set.
The target posture set includes the driving posture of the target vehicle in each image to be detected. The posture detection model is a classification model for recognizing the driving posture of a vehicle in an image. The driving posture may include posture states that occur while a vehicle is driving, such as left turn, right turn, going straight and U-turn.
Specifically, the computer device takes a large number of vehicle images with different driving postures as posture samples in advance and labels the driving posture of the vehicle in each posture sample; for example, an image of a vehicle turning left is labeled left turn, an image of a vehicle turning right is labeled right turn, and an image of a vehicle going straight is labeled straight, obtaining labeled posture samples. The posture detection model is obtained by training on the labeled posture samples. The computer device inputs the acquired image of the target vehicle in each image to be detected into the posture detection model to obtain the driving posture of the target vehicle in each image to be detected, forming the target posture set.
S702, judging whether the target posture set comprises left turning.
If not, the driving posture of the target vehicle is not left-turning. If so, the driving posture of the target vehicle is left turn.
Specifically, the target posture set includes the driving postures of the target vehicle in the first, second and third images to be detected. The computer device judges whether the target posture set includes a left turn, that is, whether the target vehicle has a left-turn driving posture in any image to be detected. If so, that is, the target vehicle has a left-turn driving posture in at least one image to be detected, the computer device determines that the driving posture of the target vehicle is a left turn; if not, that is, the target vehicle does not have a left-turn driving posture in any image to be detected, the computer device determines that the driving posture of the target vehicle is not a left turn.
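The S702 check reduces to a membership test over the per-image classifications; the label string below is an assumption standing in for whatever class name the posture detection model emits:

```python
def is_left_turn(target_posture_set):
    """True if the posture detection model classified the target vehicle as
    turning left in at least one image to be detected."""
    return any(posture == "left_turn" for posture in target_posture_set)
```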
In this embodiment, the computer device uses the posture detection model to classify and detect the driving posture of the target vehicle in the images to be detected. The posture detection model is a deep-learning-based detection model, a network model obtained by training with a large number of vehicle images in different driving postures as training samples; it can accurately recognize and extract the driving posture of the target vehicle in each image to be detected and provides a reliable posture basis for subsequently judging whether the target vehicle has a left-turn violation, thereby improving the accuracy of vehicle left-turn violation detection.
In one embodiment, as shown in fig. 8, the determining, at S204, whether the target vehicle has a left-turn violation according to the driving posture, the target vehicle position, and the intersection center point position includes:
S801, when the driving posture of the target vehicle is not a left turn, returning to the step of acquiring the image set to be detected.
Specifically, when the computer device determines that the driving posture of the target vehicle is not a left turn, it determines that the target vehicle does not have a left turn violation, and may further feed back a detection result that the target vehicle does not have a left turn violation, and return to perform S201 to detect whether a next target vehicle has a left turn violation.
S802, when the driving posture of the target vehicle is a left turn, acquiring the separation distance between the target vehicle position and the intersection center point position in each image to be detected.
S803, judging whether the minimum value among the separation distances is smaller than a preset distance threshold.
If not, the target vehicle has a left-turn violation.
If so, the target vehicle does not have a left-turn violation.
Specifically, when the computer device determines that the driving posture of the target vehicle is a left turn, it obtains the separation distance between the target vehicle position and the intersection center point position in each image to be detected. For example, the separation distance between the target vehicle position in the first image to be detected and the intersection center point position is obtained as S1, that in the second image to be detected as S2, and that in the third image to be detected as S3. The computer device further judges whether the minimum of the separation distances obtained in the images to be detected is smaller than the preset distance threshold, that is, whether min{S1, S2, S3} is smaller than a preset distance threshold S0. If not, the computer device determines that the target vehicle has a left-turn violation; if so, the computer device determines that the target vehicle does not have a left-turn violation. The computer device may further feed back the detection result of the presence or absence of a left-turn violation of the target vehicle and continue to return to execute S201 to detect whether the next target vehicle has a left-turn violation.
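The S802/S803 decision — a violation exists if the left-turning vehicle never comes within the distance threshold of the intersection center point — can be sketched as below. The function name and the use of Euclidean pixel distance are assumptions for illustration:

```python
import math

def left_turn_violation(target_positions, center_point, distance_threshold):
    """Violation iff min{S1, ..., SN} is not smaller than the threshold,
    i.e. the vehicle turned left without passing near the intersection
    center (for example, turning before reaching the intersection)."""
    distances = [math.dist(p, center_point) for p in target_positions]
    return min(distances) >= distance_threshold
```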
In this embodiment, when the driving posture of the target vehicle is a left turn, the computer device further judges whether the minimum separation distance between the target vehicle position and the intersection center point position across the images to be detected is smaller than the preset distance, so as to determine whether the target vehicle has a left-turn violation. The judgment of a vehicle left-turn violation is thus converted into a concrete, quantified distance relation, which facilitates automatic execution by the computer and improves the accuracy of the judgment, making vehicle left-turn violation detection efficient and accurate.
It should be understood that although the various steps in the flowcharts of figs. 2-8 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in figs. 2-8 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be performed at different times, and are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 9, there is provided a vehicle left-turn violation detection device, including an image acquisition module 901, a target acquisition module 902, a center point acquisition module 903, a posture acquisition module 904 and a violation judgment module 905, wherein:
The image acquisition module 901 is used for acquiring an image set to be detected, where the image set to be detected includes a plurality of images to be detected of the target vehicle driving, acquired at preset time intervals; the target acquisition module 902 is configured to determine the target vehicle position in each image to be detected in the image set to be detected based on a preset target license plate number; the center point acquisition module 903 is configured to acquire the intersection center point position of each image to be detected in the image set to be detected; the posture acquisition module 904 is configured to acquire the driving posture of the target vehicle in each image to be detected in the image set to be detected; and the violation judgment module 905 is configured to determine whether the target vehicle has a left-turn violation according to the driving posture, the target vehicle position and the intersection center point position.
For specific limitations of the vehicle left-turn violation detection device, reference may be made to the above limitations of the vehicle left-turn violation detection method, which are not repeated here. The modules in the vehicle left-turn violation detection device can be wholly or partially realized by software, hardware, or a combination thereof. The modules can be embedded in hardware in, or independent of, the processor in the computer device, or stored in software in the memory of the computer device, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 10. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a vehicle left turn violation detection method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the structure shown in fig. 10 is merely a block diagram of part of the structure related to the present disclosure and does not limit the computer devices to which the present disclosure applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring an image set to be detected; the image set to be detected comprises a plurality of images to be detected, acquired according to a preset time interval, of the target vehicle running; determining the position of a target vehicle in each image to be detected in the image set to be detected based on a preset target license plate number; acquiring the position of the center point of each intersection of the images to be detected in the image set to be detected; acquiring the driving posture of the target vehicle in each image to be detected in the image set to be detected; and determining whether the target vehicle has a left-turn violation according to the driving posture, the position of the target vehicle and the position of the intersection central point.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
the image set to be detected comprises a first image to be detected to an Nth image to be detected, which are sequentially acquired at the preset time interval, wherein the first image to be detected is an image of the target vehicle at a stop line, and the target vehicle position comprises a first target vehicle position of the target vehicle in the first image to be detected to an Nth target vehicle position of the target vehicle in the Nth image to be detected, where N is a natural number greater than or equal to 2; performing vehicle detection on the first image to be detected in the image set to be detected by using a preset vehicle detection model to obtain at least one first vehicle region in the first image to be detected; performing license plate detection on each first vehicle region by using a preset license plate detection model to obtain a first license plate region of each first vehicle region; performing character recognition on each first license plate region by using a preset character recognition model to obtain a first license plate number of each first license plate region; according to the target license plate number and the first license plate numbers, acquiring, in the first image to be detected, the first vehicle region whose first license plate number matches the target license plate number as the first target vehicle region, and taking the position of the center point of the first target vehicle region in the first image to be detected as the first target vehicle position; and determining the Nth target vehicle position according to the first target vehicle region.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
carrying out vehicle detection on the (i + 1) th image to be detected in the image set to be detected by adopting the vehicle detection model to obtain at least one (i + 1) th vehicle area in the (i + 1) th image to be detected; wherein i is a natural number from 1 to N-1 in sequence; inputting the ith target vehicle area of the target vehicle in the ith image to be detected and each (i + 1) th vehicle area into a vehicle similarity model respectively to obtain an ith similarity set; wherein the ith similarity set includes an ith similarity between the ith target vehicle region and each of the (i + 1) th vehicle regions; judging whether the ith similarity in the ith similarity set is greater than a similarity threshold value or not; if not, returning to execute the step of obtaining the image set to be detected; if so, taking the position of the (i + 1) th vehicle area corresponding to the maximum (i) th similarity in the (i + 1) th image to be detected as the (i + 1) th target vehicle position.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
performing lane line segmentation on each image to be detected by adopting a lane line segmentation model to obtain a plurality of initial lane line positions in each image to be detected; screening and fusing the initial lane line positions according to lane line pixels to obtain the same target lane line positions in each image to be detected; acquiring, in any image to be detected, a first central line between the two farthest-apart target lane line positions in a first direction; acquiring a second central line between the two farthest-apart target lane line positions in a second direction, the first direction intersecting the second direction; and acquiring the position of the intersection point of the first central line and the second central line in the image to be detected as the intersection center point position of each image to be detected in the image set to be detected.
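A deliberately simplified sketch of the center-point construction: lane lines in the first direction are modeled as horizontal lines y = c and those in the second direction as vertical lines x = c (real segmented lane lines would be fitted curves). The centerline between the two farthest lines in each direction intersects at the intersection center point.

```python
# Simplified sketch: intersection center from two families of lane lines,
# assuming horizontal (y = c) and vertical (x = c) line models.

def intersection_center(horizontal_ys, vertical_xs):
    y_mid = (min(horizontal_ys) + max(horizontal_ys)) / 2.0  # first centerline
    x_mid = (min(vertical_xs) + max(vertical_xs)) / 2.0      # second centerline
    return (x_mid, y_mid)  # the two centerlines cross here

print(intersection_center([100, 140, 400], [80, 520]))  # (300.0, 250.0)
```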
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring the number of lane line pixels corresponding to each initial lane line position; judging whether the number of lane line pixels corresponding to each initial lane line position is greater than a preset pixel threshold; if not, removing that initial lane line position; if so, retaining that initial lane line position to obtain screened lane line positions; and acquiring, for the lane line at the same position in each image to be detected, the screened lane line position with the largest number of lane line pixels as the target lane line position of each image to be detected.
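The screening-and-fusing step can be sketched as below. `per_image_lines` is stand-in segmentation output, one dict per image mapping a lane id to a (position, pixel count) pair; lines at or below the pixel threshold are dropped, and for each surviving lane id the image with the most lane-line pixels supplies the target lane line position.

```python
# Sketch of screening lane lines by pixel count and fusing across images.
# per_image_lines is stand-in segmentation output, not a real model call.

def fuse_lane_lines(per_image_lines, pixel_threshold):
    best = {}
    for image in per_image_lines:
        for lane_id, (pos, count) in image.items():
            if count <= pixel_threshold:
                continue  # too few lane-line pixels: screened out
            if lane_id not in best or count > best[lane_id][1]:
                best[lane_id] = (pos, count)  # keep the richest detection
    return {lane_id: pos for lane_id, (pos, _) in best.items()}

images = [{"L1": ((10, 10), 120), "L2": ((50, 10), 40)},
          {"L1": ((11, 10), 200), "L2": ((50, 11), 90)}]
print(fuse_lane_lines(images, 50))  # {'L1': (11, 10), 'L2': (50, 11)}
```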
In one embodiment, the processor, when executing the computer program, further performs the steps of:
performing posture detection on the target vehicle in each image to be detected in the image set to be detected by adopting a preset posture detection model to obtain a target posture set; the target posture set comprises the driving posture of the target vehicle in each image to be detected; judging whether the target posture set comprises a left turn; if not, the driving posture of the target vehicle is not a left turn; if so, the driving posture of the target vehicle is a left turn.
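The posture decision reduces to a membership test over the per-image labels; the labels below are hypothetical stand-ins for the posture detection model's output.

```python
# Sketch of the posture decision: the vehicle's driving posture is a left
# turn iff any image in the set is labeled as a left turn.

def driving_posture(posture_set):
    """posture_set: stand-in per-image labels from a posture model."""
    return "left_turn" if "left_turn" in posture_set else "not_left_turn"

print(driving_posture(["straight", "straight", "left_turn"]))  # left_turn
```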
In one embodiment, the processor, when executing the computer program, further performs the steps of:
when the driving posture of the target vehicle is not a left turn, returning to the step of acquiring the image set to be detected; when the driving posture of the target vehicle is a left turn, acquiring the spacing distance between the target vehicle position in each image to be detected and the intersection center point position, and judging whether the minimum value of the spacing distances is smaller than a preset distance threshold; if not, the target vehicle has a left-turn violation; if so, the target vehicle does not have a left-turn violation.
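The final decision combines the posture and the trajectory: a left-turning vehicle whose path never comes within the distance threshold of the intersection center point is flagged as a violation. All names and values below are stand-ins for illustration.

```python
import math

# Sketch of the violation decision described above.

def left_turn_violation(posture_set, vehicle_positions, center, threshold):
    """posture_set: per-image posture labels; vehicle_positions: per-image
    (x, y) target vehicle positions; center: intersection center point."""
    if "left_turn" not in posture_set:
        return False  # the method re-acquires the image set in this case
    min_dist = min(math.hypot(x - center[0], y - center[1])
                   for x, y in vehicle_positions)
    return min_dist >= threshold  # never near the center point: violation

print(left_turn_violation(["left_turn"], [(100, 100), (300, 300)],
                          (310, 310), 50))  # False
```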
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring an image set to be detected; the image set to be detected comprises a plurality of images to be detected, acquired according to a preset time interval, of the target vehicle running; determining the position of a target vehicle in each image to be detected in the image set to be detected based on a preset target license plate number; acquiring the position of the center point of each intersection of the images to be detected in the image set to be detected; acquiring the driving posture of the target vehicle in each image to be detected in the image set to be detected; and determining whether the target vehicle has a left-turn violation according to the driving posture, the position of the target vehicle and the position of the intersection central point.
In one embodiment, the computer program when executed by the processor further performs the steps of:
the image set to be detected comprises a first image to be detected to an Nth image to be detected, which are sequentially acquired at the preset time interval, wherein the first image to be detected is an image of the target vehicle at a stop line, and the target vehicle position comprises a first target vehicle position of the target vehicle in the first image to be detected to an Nth target vehicle position of the target vehicle in the Nth image to be detected, N being a natural number greater than or equal to 2; performing vehicle detection on the first image to be detected in the image set to be detected by adopting a preset vehicle detection model to obtain at least one first vehicle area in the first image to be detected; performing license plate detection on each first vehicle area by adopting a preset license plate detection model to obtain a first license plate area of each first vehicle area; performing character recognition on each first license plate area by adopting a preset character recognition model to obtain a first license plate number of each first license plate area; acquiring, according to the target license plate number and each first license plate number, the first vehicle area whose first license plate number matches the target license plate number as the first target vehicle area, and taking the position of the center point of the first target vehicle area in the first image to be detected as the first target vehicle position; and determining the Nth target vehicle position according to the first target vehicle area.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing vehicle detection on the (i + 1)th image to be detected in the image set to be detected by adopting the vehicle detection model to obtain at least one (i + 1)th vehicle area in the (i + 1)th image to be detected, wherein i takes each natural number from 1 to N - 1 in sequence; inputting the ith target vehicle area of the target vehicle in the ith image to be detected and each (i + 1)th vehicle area into a vehicle similarity model to obtain an ith similarity set, the ith similarity set comprising an ith similarity between the ith target vehicle area and each (i + 1)th vehicle area; judging whether the maximum ith similarity in the ith similarity set is greater than a similarity threshold; if not, returning to the step of acquiring the image set to be detected; if so, taking the position of the (i + 1)th vehicle area corresponding to the maximum ith similarity in the (i + 1)th image to be detected as the (i + 1)th target vehicle position.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing lane line segmentation on each image to be detected by adopting a lane line segmentation model to obtain a plurality of initial lane line positions in each image to be detected; screening and fusing the initial lane line positions according to lane line pixels to obtain the same target lane line positions in each image to be detected; acquiring, in any image to be detected, a first central line between the two farthest-apart target lane line positions in a first direction; acquiring a second central line between the two farthest-apart target lane line positions in a second direction, the first direction intersecting the second direction; and acquiring the position of the intersection point of the first central line and the second central line in the image to be detected as the intersection center point position of each image to be detected in the image set to be detected.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring the number of lane line pixels corresponding to each initial lane line position; judging whether the number of lane line pixels corresponding to each initial lane line position is greater than a preset pixel threshold; if not, removing that initial lane line position; if so, retaining that initial lane line position to obtain screened lane line positions; and acquiring, for the lane line at the same position in each image to be detected, the screened lane line position with the largest number of lane line pixels as the target lane line position of each image to be detected.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing posture detection on the target vehicle in each image to be detected in the image set to be detected by adopting a preset posture detection model to obtain a target posture set; the target posture set comprises the driving posture of the target vehicle in each image to be detected; judging whether the target posture set comprises a left turn; if not, the driving posture of the target vehicle is not a left turn; if so, the driving posture of the target vehicle is a left turn.
In one embodiment, the computer program when executed by the processor further performs the steps of:
when the driving posture of the target vehicle is not a left turn, returning to the step of acquiring the image set to be detected; when the driving posture of the target vehicle is a left turn, acquiring the spacing distance between the target vehicle position in each image to be detected and the intersection center point position, and judging whether the minimum value of the spacing distances is smaller than a preset distance threshold; if not, the target vehicle has a left-turn violation; if so, the target vehicle does not have a left-turn violation.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as Static Random Access Memory (SRAM) and Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not every possible combination of these technical features is described; nevertheless, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and while their description is relatively specific and detailed, they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A vehicle left turn violation detection method, the method comprising:
acquiring an image set to be detected; the image set to be detected comprises a plurality of images to be detected, acquired according to a preset time interval, of the target vehicle running;
determining the position of a target vehicle in each image to be detected in the image set to be detected based on a preset target license plate number;
acquiring the position of the center point of each intersection of the images to be detected in the image set to be detected;
acquiring the driving posture of the target vehicle in each image to be detected in the image set to be detected;
and determining whether the target vehicle has a left-turn violation according to the driving posture, the position of the target vehicle and the position of the intersection central point.
2. The method according to claim 1, wherein the set of images to be measured includes a first image to be measured to an nth image to be measured, which are sequentially acquired according to the preset time interval, the first image to be measured is an image of the target vehicle at a stop line, and the target vehicle position includes a first target vehicle position of the target vehicle in the first image to be measured to an nth target vehicle position of the target vehicle in the nth image to be measured; wherein N is a natural number greater than or equal to 2;
the determining the position of the target vehicle in each image to be detected in the image set to be detected based on the preset target license plate number comprises the following steps:
performing vehicle detection on the first image to be detected in the image set to be detected by adopting a preset vehicle detection model to obtain at least one first vehicle area in the first image to be detected;
performing license plate detection on each first vehicle area by adopting a preset license plate detection model to obtain a first license plate area of each first vehicle area;
performing character recognition on each first license plate area by adopting a preset character recognition model to obtain a first license plate number of each first license plate area;
acquiring, according to the target license plate number and each first license plate number, the first vehicle area whose first license plate number matches the target license plate number as the first target vehicle area, and taking the position of the center point of the first target vehicle area in the first image to be detected as the first target vehicle position;
and determining the Nth target vehicle position according to the first target vehicle area.
3. The method of claim 2, wherein the determining the Nth target vehicle location from the first target vehicle zone comprises:
performing vehicle detection on the (i + 1)th image to be detected in the image set to be detected by adopting the vehicle detection model to obtain at least one (i + 1)th vehicle area in the (i + 1)th image to be detected; wherein i takes each natural number from 1 to N-1 in sequence;
inputting the ith target vehicle area of the target vehicle in the ith image to be detected and each (i + 1)th vehicle area into a vehicle similarity model to obtain an ith similarity set; wherein the ith similarity set comprises an ith similarity between the ith target vehicle area and each (i + 1)th vehicle area;
judging whether the maximum ith similarity in the ith similarity set is greater than a similarity threshold;
if not, returning to the step of acquiring the image set to be detected;
if so, taking the position of the (i + 1)th vehicle area corresponding to the maximum ith similarity in the (i + 1)th image to be detected as the (i + 1)th target vehicle position.
4. The method of claim 1, wherein said obtaining the intersection center point position of each of the images to be measured in the image set comprises:
performing lane line segmentation on each image to be detected by adopting a lane line segmentation model to obtain a plurality of initial lane line positions in each image to be detected;
screening and fusing the initial lane line positions according to lane line pixels to obtain the same target lane line position in each image to be detected;
acquiring, in any image to be detected, a first central line between the two farthest-apart target lane line positions in a first direction;
acquiring a second central line between the two farthest-apart target lane line positions in a second direction; wherein the first direction intersects the second direction;
and acquiring the position of the intersection point of the first central line and the second central line in the image to be detected as the intersection central point position of each image to be detected in the image set to be detected.
5. The method of claim 4, wherein the screening and fusing the initial lane line positions according to lane line pixels to obtain the same target lane line position in each of the images to be tested, comprises:
acquiring the number of the lane line pixels corresponding to each initial lane line position;
judging whether the number of the lane line pixels corresponding to each initial lane line position is greater than a preset pixel threshold value or not;
if not, removing the initial lane line positions corresponding to the number of the lane line pixels;
if so, retaining the initial lane line positions corresponding to the number of the lane line pixels to obtain screened lane line positions;
and acquiring, for the lane line at the same position in each image to be detected, the screened lane line position with the largest number of lane line pixels as the target lane line position of each image to be detected.
6. The method according to claim 1, wherein the obtaining of the driving posture of the target vehicle in each of the images to be measured in the image set to be measured comprises:
performing posture detection on the target vehicle in each image to be detected in the image set to be detected by adopting a preset posture detection model to obtain a target posture set; wherein the target posture set comprises the driving posture of the target vehicle in each image to be detected;
judging whether the target posture set comprises a left turn;
if not, the driving posture of the target vehicle is not a left turn;
if so, the driving posture of the target vehicle is a left turn.
7. The method of claim 1, wherein said determining whether the target vehicle has a left turn violation based on the driving gesture, the target vehicle location, and the intersection center point location comprises:
when the driving posture of the target vehicle is not a left turn, returning to the step of acquiring the image set to be detected;
when the driving posture of the target vehicle is a left turn, acquiring the spacing distance between the target vehicle position in each image to be detected and the intersection center point position, and judging whether the minimum value of the spacing distances is smaller than a preset distance threshold;
if not, the target vehicle has a left-turn violation;
if so, the target vehicle does not have a left-turn violation.
8. A vehicle left turn violation detection device, comprising:
the image acquisition module is used for acquiring an image set to be detected; the image set to be detected comprises a plurality of images to be detected, acquired according to a preset time interval, of the target vehicle running;
the target acquisition module is used for determining the position of a target vehicle in each image to be detected in the image set to be detected based on a preset target license plate number;
the center point acquisition module is used for acquiring the intersection center point position of each image to be detected in the image set to be detected;
the attitude acquisition module is used for acquiring the driving attitude of the target vehicle in each image to be detected in the image set to be detected;
and the violation judgment module is used for determining whether the target vehicle has a left-turn violation or not according to the driving posture, the position of the target vehicle and the position of the intersection center point.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202010472153.1A 2020-05-29 2020-05-29 Vehicle left-turn violation detection method and device, computer equipment and storage medium Pending CN111476245A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010472153.1A CN111476245A (en) 2020-05-29 2020-05-29 Vehicle left-turn violation detection method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111476245A true CN111476245A (en) 2020-07-31

Family

ID=71765335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010472153.1A Pending CN111476245A (en) 2020-05-29 2020-05-29 Vehicle left-turn violation detection method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111476245A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112613370A (en) * 2020-12-15 2021-04-06 浙江大华技术股份有限公司 Target defect detection method, device and computer storage medium
CN112712708A (en) * 2020-12-28 2021-04-27 上海眼控科技股份有限公司 Information detection method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016130933A (en) * 2015-01-14 2016-07-21 オムロン株式会社 Traffic violation management system and traffic violation management method
CN206441337U (en) * 2016-12-30 2017-08-25 上海泓鎏智能科技有限公司 Detect the grasp shoot device of break in traffic rules and regulations
CN108269403A (en) * 2016-12-30 2018-07-10 上海泓鎏智能科技有限公司 Detect the grasp shoot device and grasp shoot method of break in traffic rules and regulations
CN110533925A (en) * 2019-09-04 2019-12-03 上海眼控科技股份有限公司 Processing method, device, computer equipment and the storage medium of vehicle illegal video
CN110706261A (en) * 2019-10-22 2020-01-17 上海眼控科技股份有限公司 Vehicle violation detection method and device, computer equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ARASH JAHANGIRI 等: "Red-light running violation prediction using observational and simulator data" *
李强: "基于AI和云计算架构的智慧交通非现场执法综合检测系统设计" *

Similar Documents

Publication Publication Date Title
CN109740469B (en) Lane line detection method, lane line detection device, computer device, and storage medium
CN109284674B (en) Method and device for determining lane line
CN106652465B (en) Method and system for identifying abnormal driving behaviors on road
US9082038B2 (en) Dram c adjustment of automatic license plate recognition processing based on vehicle class information
US8902053B2 (en) Method and system for lane departure warning
CN109711264B (en) Method and device for detecting occupation of bus lane
CN111402329A (en) Vehicle line pressing detection method and device, computer equipment and storage medium
CN110298300B (en) Method for detecting vehicle illegal line pressing
CN111382704A (en) Vehicle line-pressing violation judgment method and device based on deep learning and storage medium
US9928424B2 (en) Side window detection through use of spatial probability maps
CN111891061B (en) Vehicle collision detection method and device and computer equipment
CN111274926B (en) Image data screening method, device, computer equipment and storage medium
CN111931683B (en) Image recognition method, device and computer readable storage medium
CN110675637A (en) Vehicle illegal video processing method and device, computer equipment and storage medium
CN111476245A (en) Vehicle left-turn violation detection method and device, computer equipment and storage medium
CN112001378B (en) Lane line processing method and device based on feature space, vehicle-mounted terminal and medium
CN114708547A (en) Vehicle weight recognition method and device, computer equipment and storage medium
CN112307989B (en) Road surface object identification method, device, computer equipment and storage medium
CN112712703A (en) Vehicle video processing method and device, computer equipment and storage medium
CN111860219B (en) High-speed channel occupation judging method and device and electronic equipment
CN112115800A (en) Vehicle combination recognition system and method based on deep learning target detection
CN111259971A (en) Vehicle information detection method and device, computer equipment and readable storage medium
CN110766009A (en) Tail plate identification method and device and computer readable storage medium
CN112183206B (en) Traffic participant positioning method and system based on road side monocular camera
CN112580457A (en) Vehicle video processing method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20240419
