CN110415529B - Automatic processing method and device for vehicle violation, computer equipment and storage medium - Google Patents
- Publication number
- CN110415529B CN110415529B CN201910832274.XA CN201910832274A CN110415529B CN 110415529 B CN110415529 B CN 110415529B CN 201910832274 A CN201910832274 A CN 201910832274A CN 110415529 B CN110415529 B CN 110415529B
- Authority
- CN
- China
- Prior art keywords
- vehicle
- violation
- target
- image
- frame picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/14—Image acquisition
- G06V30/148—Segmentation of character regions
- G06V30/153—Segmentation of character regions using recognition of characters or words
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/017—Detecting movement of traffic to be counted or controlled identifying vehicles
- G08G1/0175—Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
Abstract
The application relates to an automatic processing method and device for vehicle violation, computer equipment and a storage medium. The method comprises the steps of: obtaining an image of a violation vehicle to be audited; identifying the violation type of a target violation vehicle in the image; identifying the use category of the target violation vehicle by adopting a vehicle category identification model based on deep learning; and judging, according to the violation type, whether the use category of the target violation vehicle matches a use category of vehicles that are not controlled under the traffic safety policy, so as to determine whether the target violation vehicle actually violates the regulations. Intelligent violation auditing of special vehicles is thereby realized without manual review, which improves auditing efficiency and avoids the fairness problems caused by auditor fatigue and personal subjectivity during manual auditing.
Description
Technical Field
The application relates to the technical field of automatic identification, in particular to a method and a device for automatically processing vehicle violation based on deep learning, computer equipment and a storage medium.
Background
With the continuous development of the economy and the continuous improvement of living standards in China, more and more vehicles are running on the roads. In order to reduce the frequency of traffic accidents and guarantee safe travel, vehicle violation auditing is becoming more and more important.
In the field of off-site intelligent violation auditing, pictures captured in succession by an electronic police camera (such as the composite picture in fig. 4) need to be intelligently audited for violations. However, vehicles with different use properties are subject to different traffic rules, and some regions even impose specific rules on certain vehicle types. For example, according to Article 53 of the Road Traffic Safety Law of the People's Republic of China, police cars, fire trucks, ambulances and engineering rescue vehicles may use sirens and warning lights when executing emergency tasks; on the premise of ensuring safety, they are not limited by driving route, driving direction, driving speed or signal lights, and other vehicles and pedestrians shall give way. For some non-specific vehicle types, such as muck trucks among freight vehicles, individual regions have made road-restriction rules. Therefore, special vehicles cannot be fully audited against the regulations automatically, and a manual auditing mode has to be adopted, which leads to low auditing efficiency, high labor cost, auditor fatigue, and fairness problems caused by the subjectivity of auditing personnel.
Disclosure of Invention
Based on this, for the problem that violations by special types of vehicles need to be manually audited, it is necessary to provide an automatic processing method, an apparatus, a computer device and a storage medium for vehicle violation that can perform violation auditing automatically.
In order to achieve the above object, in one aspect, an embodiment of the present application provides an automatic processing method for vehicle violation, including:
acquiring an image of the violation vehicle to be audited, wherein the image of the violation vehicle comprises a target violation vehicle;
identifying an illegal type of the target violation vehicle;
identifying the use category of the target violation vehicle in the violation vehicle image to be audited by adopting a vehicle category identification model based on deep learning;
judging whether the use category of the target violation vehicle is matched with the use category of the vehicle which is not controlled in the traffic safety policy or not according to the violation type;
and if so, determining that the target violation vehicle does not actually violate the regulations.
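The judging steps above can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the helper functions `identify_violation_type` and `identify_usage_category` and the policy-table format are hypothetical placeholders.

```python
def audit_vehicle(image, identify_violation_type, identify_usage_category,
                  exempt_categories):
    """Return True if the target violation vehicle actually violates the rules.

    exempt_categories maps a violation type to the set of vehicle use
    categories that are not controlled (exempt) for that violation type.
    """
    violation_type = identify_violation_type(image)
    # Deep-learning-based vehicle category identification model.
    usage_category = identify_usage_category(image)
    # If the use category matches an unmanaged category for this violation
    # type, the vehicle is deemed not to actually violate the regulations.
    if usage_category in exempt_categories.get(violation_type, set()):
        return False
    return True
```

A vehicle whose use category is listed as exempt for the identified violation type is cleared automatically; all other vehicles are confirmed as actual violators.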
In one embodiment, obtaining an image of a violation vehicle to be reviewed comprises: acquiring a violation image to be audited, which is captured by an electronic police, wherein the violation image comprises a synthesized multi-frame picture; carrying out target detection on the synthesized multi-frame picture, and determining a target violation vehicle; detecting the violation image through a mode classification model to determine a synthesis mode of a plurality of frames of pictures in the violation image, and segmenting each frame of picture in the violation image according to the synthesis mode to obtain a plurality of segmented single-frame pictures; identifying and cutting off the character area in each single-frame picture by using a character area segmentation model to obtain a single-frame picture obtained after cutting the character area; detecting each cut single-frame picture through a target detection model to obtain the position of the target violation vehicle in each single-frame picture; and segmenting the target violation vehicles in each single frame picture according to the positions of the target violation vehicles in each single frame picture to obtain the violation vehicle image to be audited.
In one embodiment, the step of segmenting the target violation vehicle in each single-frame picture according to its position comprises: according to the position of the target violation vehicle in each single-frame picture, cropping the single-frame picture in the vertical and horizontal directions according to set cutting ranges to obtain the target violation vehicle cut from the single-frame picture, wherein the cutting range in the vertical direction is smaller than the cutting range in the horizontal direction.
In one embodiment, the method for acquiring the deep-learning-based vehicle category identification model comprises the following steps: acquiring vehicle sample image data sets respectively corresponding to a plurality of vehicle use categories, wherein the vehicle sample image data sets comprise a training sample set; preprocessing each vehicle sample image in the training sample set to obtain a preprocessed training sample set; and acquiring hyper-parameters for model training, training the deep neural network classification model according to the hyper-parameters with the preprocessed training sample set, and obtaining the vehicle category identification model when the loss function reaches its minimum value.
In one embodiment, the vehicle sample image dataset includes a validation sample set; the method further comprises: preprocessing each vehicle sample image in the verification sample set to obtain a preprocessed verification sample set; and when the deep neural network classification model is trained according to the hyper-parameters and by adopting the preprocessed training sample set to reach the iteration times, verifying the deep neural network classification model by adopting the preprocessed verification sample set to obtain the recognition accuracy of the deep neural network classification model.
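As a hedged sketch, the recognition accuracy on the verification sample set can be computed as the fraction of samples whose highest-probability predicted class equals the label. Top-1 accuracy is an assumption here; the patent does not define the metric precisely.

```python
import numpy as np

def recognition_accuracy(probs, labels):
    """Fraction of verification samples classified correctly.

    probs: (N, C) predicted class probabilities; labels: (N,) true indices.
    """
    preds = np.argmax(probs, axis=1)
    return float(np.mean(preds == np.asarray(labels)))
```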
In one embodiment, preprocessing each vehicle sample image includes: randomly adjusting a color mode of each vehicle sample image in the training sample set or the verification sample set, wherein the color mode comprises at least one of brightness, contrast, saturation and blurriness; and randomly cropping the adjusted vehicle sample images to a set size, so as to obtain, for each vehicle sample image, randomly cropped vehicle sample images with different color modes.
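The preprocessing above can be sketched in NumPy as follows. The jitter ranges are illustrative assumptions (the patent only says the adjustments are random), and the blurriness adjustment is omitted here for brevity.

```python
import random
import numpy as np

def augment(img, out_h, out_w, rng=random.Random(0)):
    """One randomly color-adjusted, randomly cropped copy of a sample image.

    img: HxWx3 float array with values in [0, 1]. Jitter factors in
    [0.8, 1.2] are assumptions; blur is omitted in this sketch.
    """
    out = img.astype(np.float64)
    out = np.clip(out * rng.uniform(0.8, 1.2), 0.0, 1.0)                   # brightness
    mean = out.mean()
    out = np.clip((out - mean) * rng.uniform(0.8, 1.2) + mean, 0.0, 1.0)   # contrast
    gray = out.mean(axis=2, keepdims=True)
    out = np.clip(gray + (out - gray) * rng.uniform(0.8, 1.2), 0.0, 1.0)   # saturation
    # Random crop to the set size.
    h, w = out.shape[:2]
    top = rng.randrange(h - out_h + 1)
    left = rng.randrange(w - out_w + 1)
    return out[top:top + out_h, left:left + out_w]
```

Calling the function several times on the same sample yields differently adjusted, differently cropped copies, which is the augmentation effect the embodiment describes.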
In one embodiment, the construction method of the deep neural network classification model comprises the following steps: forming a feature extraction network using a plurality of convolutional layers and pooling layers; sequentially connecting a plurality of Inception sections after the feature extraction network, and inserting a Reduction module between every two adjacent Inception sections; and sequentially connecting a global average pooling layer, a network optimization layer and a forward algorithm layer at the rear end of the last Inception section to obtain the deep neural network classification model.
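The layer ordering described above can be sketched as a layer-sequence plan. The layer counts below are illustrative assumptions (the patent does not fix them); only the ordering (conv/pool stem, Inception sections with Reduction modules between adjacent sections, then global average pooling, network optimization, and a forward layer) follows the text.

```python
def build_classifier_plan(num_stem_convs=3, num_inception_sections=3):
    """Ordered layer names for the claimed classification network structure."""
    plan = []
    for i in range(num_stem_convs):
        plan += [f"conv_{i}", f"pool_{i}"]          # feature extraction network
    for i in range(num_inception_sections):
        plan.append(f"inception_section_{i}")
        if i < num_inception_sections - 1:
            plan.append(f"reduction_{i}")           # between adjacent sections only
    plan += ["global_avg_pool", "network_optimization", "forward_softmax"]
    return plan
```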
In one embodiment, the loss function is a Focal Loss function: FL(p_t) = -(1 - p_t)^λ · log(p_t), where p_t is the predicted classification value obtained by calling the forward algorithm layer of the deep neural network classification model, and λ is set to 2.
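The focal loss can be written in NumPy as follows. The multi-class indexing form (taking the predicted probability of the true class for each sample) is a common reading of the formula, not quoted from the patent.

```python
import numpy as np

def focal_loss(probs, labels, lam=2.0, eps=1e-12):
    """Focal loss FL(p_t) = -(1 - p_t)**lam * log(p_t), averaged over samples.

    probs: (N, C) class probabilities from the forward layer;
    labels: (N,) integer class indices; lam=2 as stated in the text.
    """
    p_t = np.clip(probs[np.arange(len(labels)), labels], eps, 1.0)
    return float(np.mean(-((1.0 - p_t) ** lam) * np.log(p_t)))
```

With lam=0 the expression reduces to ordinary cross-entropy; lam=2 down-weights well-classified (easy) samples so training focuses on hard ones.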
In one embodiment, obtaining vehicle sample image data sets respectively corresponding to a plurality of vehicle usage categories includes: acquiring original images of a plurality of vehicles, and performing image processing on the original images to obtain images of target vehicles in the original images; marking the use category of the target vehicle in the image of the target vehicle; and classifying the images of the target vehicle according to the marked use categories to obtain a plurality of classified vehicle use categories and images of the target vehicle respectively corresponding to the vehicle use categories, and generating vehicle sample image data sets respectively corresponding to the vehicle use categories.
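The classification-by-marked-category step can be sketched as a simple grouping of labeled images. The input format (image, category) is an assumption for illustration.

```python
from collections import defaultdict

def build_datasets(labeled_images):
    """Group (image, usage_category) pairs into per-category sample sets.

    Returns a dict mapping each vehicle use category to the list of target
    vehicle images marked with that category.
    """
    datasets = defaultdict(list)
    for image, category in labeled_images:
        datasets[category].append(image)
    return dict(datasets)
```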
In one embodiment, the method further includes: and if not, determining that the target violation vehicle actually violates the rules.
On the other hand, the embodiment of the application provides an automatic processing device for vehicle violation, which comprises:
the to-be-audited data acquisition module is used for acquiring an image of the violation vehicle to be audited, wherein the violation vehicle image comprises a target violation vehicle;
the violation type identification module is used for identifying the violation type of the target violation vehicle;
the vehicle type identification module is used for identifying the use type of the target violation vehicle in the violation vehicle image to be audited by adopting a deep learning-based vehicle type identification model;
the violation judging module is used for judging whether the use category of the target violation vehicle is matched with the use category of the vehicle which is not controlled in the traffic safety policy or not according to the violation type; and if so, determining that the target violation vehicle does not violate the regulations actually.
In yet another aspect, the present application provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the method as described above when executing the computer program.
In yet another aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method as described above.
According to the automatic processing method, device, computer equipment and storage medium for vehicle violation, the violation type of the target violation vehicle in the violation vehicle image to be audited is identified after the image is obtained, the use category of the target violation vehicle is identified by adopting the deep-learning-based vehicle category identification model, and whether the use category of the target violation vehicle matches a use category of vehicles that are not controlled under the traffic safety policy is judged according to the violation type, so as to determine whether the target violation vehicle actually violates the regulations. Intelligent violation auditing of special vehicles is thereby realized without manual review, which improves auditing efficiency and avoids the fairness problems caused by auditor fatigue and personal subjectivity during manual auditing.
Drawings
FIG. 1 is a schematic flow diagram of a method for automated vehicle violation processing in one embodiment;
FIG. 2 is a diagram of a neural network structure of a deep neural network classification model in one embodiment;
FIG. 3 is a schematic flow chart illustrating the steps of obtaining an image of a violation vehicle to be reviewed in one embodiment;
FIG. 4 is a schematic diagram of a violation image to be audited with multiple frames of pictures captured by the image capture device in one embodiment;
FIG. 5A is a schematic diagram of a single frame of the upper left corner divided from FIG. 4;
FIG. 5B is a schematic diagram of the cut text region in FIG. 5A;
FIG. 6 is a schematic diagram of an image of the violation vehicle to be audited obtained after segmentation of the target violation vehicle in FIG. 5B;
FIG. 7 is a flowchart illustrating steps of a method for obtaining a deep learning-based vehicle classification identification model according to an embodiment;
- FIG. 8 is a schematic diagram of an implementation of the Inception-V4 neural network framework in one embodiment;
- FIG. 9 is a block diagram showing the structure of an automatic processing apparatus for vehicle violation in one embodiment;
FIG. 10 is a diagram showing an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The embodiment of the application provides an automatic processing method for vehicle violation, which comprises the following steps as shown in fig. 1:
and 102, acquiring an image of the violation vehicle to be audited.
The violation vehicle image is an image containing a target vehicle that has been preliminarily determined, after target detection, to have a violation condition; the target vehicle may be a motor vehicle. Because some special vehicles are not controlled, traditional violation auditing requires a further manual review to decide whether a target vehicle preliminarily judged to have a violation condition actually violates the regulations; that is, the use property of the target vehicle is distinguished manually before the final judgment is made. Therefore, in this embodiment, the intelligent traffic police violation auditing system obtains the image of the target vehicle preliminarily determined to have a violation as the image of the violation vehicle to be audited; specifically, this target vehicle can be defined as the target violation vehicle.
And step 104, identifying the violation type of the target violation vehicle.
The violation type refers to a behavior by which the motor vehicle violates the Road Traffic Safety Law of the People's Republic of China, for example, running a red light, unlawfully using a bus lane, or illegal parking. In this embodiment, the intelligent traffic police violation auditing system can identify the violation type of the target violation vehicle by detecting the image of the violation vehicle to be audited.
And step 106, identifying the use category of the target violation vehicle in the violation vehicle image to be audited by adopting a vehicle category identification model based on deep learning.
The use category refers to one of a plurality of categories divided according to the application scenario or use property of the vehicle, and may include, for example, police cars, fire trucks, ambulances, engineering rescue vehicles, and private cars. The deep-learning-based vehicle category identification model can be obtained by training a deep neural network classification model. Specifically, the deep neural network classification model may be an artificial neural network with a hierarchical structure as shown in fig. 2, comprising an input layer, an output layer, and intermediate layers (also called hidden layers). In this structure, the numbers of nodes in the input layer and the output layer are fixed, while the intermediate layers can be freely specified; the topology and arrows in the structure diagram represent the flow of data during prediction, the key being the connections between neurons, where each connection corresponds to a different weight that needs to be obtained through training.
In the embodiment, the image of the violation vehicle to be checked is input into the vehicle type identification model obtained after training, the image of the violation vehicle to be checked enters the intermediate layer after feature extraction is carried out on the image of the violation vehicle through the input layer, a series of complex operations such as convolution, pooling and regression are carried out through the intermediate layer to identify the probability distribution of the use type of the target violation vehicle in the image of the violation vehicle, the probability distribution is output through the output layer, and therefore the use type of the target violation vehicle is obtained.
And step 108, judging whether the use category of the target violation vehicle is matched with the use category of the vehicle which is not controlled in the traffic safety policy according to the violation type.
The traffic safety policy may include control policies and non-control policies set per vehicle use category according to traffic laws and regional traffic regulations. For example, a control policy may be a travel restriction policy or a parking restriction policy set for one or more classes of vehicles; a non-control policy may exempt one or more classes of vehicles from restrictions on driving route, driving direction, driving speed or signal lights. Of course, the specific control and non-control policies are not limited to these and may be configured or adjusted according to the actual situation; the present application does not limit them.
In this embodiment, after the intelligent traffic police violation auditing system identifies the usage category of the target violation vehicle according to the deep learning-based vehicle category identification model in the previous step, it is further determined whether the usage category of the target violation vehicle matches the usage category of an unmanaged vehicle in the traffic safety policy according to the violation type, and if the usage category of the vehicle which is the same as the usage category of the target violation vehicle is matched in the usage category of the unmanaged vehicle, step 110 is executed; otherwise, step 112 is performed.
And step 110, determining that the target violation vehicle does not actually violate the regulations.
And step 112, determining that the target violation vehicle actually violates the regulations.
In this embodiment, when the vehicle usage category which is the same as the usage category of the target violation vehicle is matched in the unmanaged vehicle usage categories, the target violation vehicle is indicated as a special vehicle, and therefore, the intelligent traffic police violation auditing system determines that the target violation vehicle does not actually violate the regulations according to the management and control policy of the special vehicle, and otherwise determines that the target violation vehicle actually violates the regulations. Therefore, the intelligent traffic police violation auditing system is used for completing automatic auditing for preliminarily judging whether the target vehicle with the violation condition violates the regulations or not.
According to the automatic processing method for vehicle violation, the violation vehicle image to be audited is obtained, the use category of the target violation vehicle in the image is identified by adopting the deep-learning-based vehicle category identification model, and whether this use category matches a use category of vehicles that are not controlled under the traffic safety policy is judged according to the violation type, so as to determine whether the target violation vehicle actually violates the regulations. Intelligent violation auditing of special vehicles is thereby realized without manual review, which improves auditing efficiency and avoids the fairness problems caused by auditor fatigue and personal subjectivity during manual auditing.
In one embodiment, as shown in fig. 3, the acquiring of the image of the violation vehicle to be audited may specifically include the following steps:
And step 302, acquiring the violation image to be audited, which is captured by an electronic police, wherein the violation image comprises a synthesized multi-frame picture.
Here, the electronic police may be an image acquisition device installed at a key traffic location. The violation image to be audited is a large picture, as shown in fig. 4, formed by splicing multiple frames captured continuously by the image acquisition device within a very short period. The intelligent traffic police violation auditing system judges from this large picture whether a target vehicle with a violation condition exists, so the vehicles in the large picture need to be located and detected to preliminarily audit whether a target vehicle has a violation condition.
And step 304, carrying out target detection on the synthesized multi-frame picture, and determining a target violation vehicle.
The target detection refers to identifying an interested target, such as a motor vehicle, from a violation image to be audited (namely, the large picture synthesized by the multi-frame picture). Each frame of picture of the violation image to be audited comprises at least one motor vehicle, the surrounding environment and the like, so that the motor vehicles in each frame of picture of the violation image to be audited are subjected to target detection and positioning one by one, the running direction, the position lamp information and the like of each motor vehicle can be judged according to the detected positioning information and the surrounding environment of each motor vehicle, and the target violation vehicle (namely the target vehicle) can be positioned by integrating the information, wherein the target violation vehicle is the target vehicle which is preliminarily audited and judged to have the violation condition.
And step 306, detecting the violation image through the mode classification model to determine a synthesis mode of the multiple frames of pictures in the violation image, and segmenting each frame of picture in the violation image according to the synthesis mode to obtain a plurality of segmented single frames of pictures.
The mode classification model is used for identifying the frame-number mode of the violation image to be audited, i.e., of the large picture. After pattern recognition, the synthesis mode of the multiple frames in the violation image is obtained, and the violation image is then segmented according to this synthesis mode. Specifically, in this embodiment, the mode classification model can be obtained from a lightweight Inception (convolutional neural network) framework after data labeling and model training. In the present application, the use category of the target vehicle with a violation condition needs to be judged by machine identification, and whether the target vehicle actually violates the regulations is then determined from that use category and the traffic safety policy. However, if identification were performed directly on the violation image to be audited, i.e., on the large picture, the detection effect would be poor because the target vehicle occupies only a small part of it. Therefore, in this embodiment, it is necessary to determine from how many frames the violation image to be audited is synthesized, i.e., to determine its synthesis mode, such as a 2-frame mode (1 × 2), a 3-frame mode (1 × 3), a 4-frame mode (2 × 2), a 6-frame mode (2 × 3), or a mode with another number of frames.
Then, according to the rule that every frame has the same length and width, the large picture is divided evenly to obtain a plurality of segmented single-frame pictures. For example, if the violation image to be audited is identified by the mode classification model as the 4-frame synthesis mode, the large picture can be divided into 4 frames; similarly, a large picture in the 6-frame synthesis mode can be divided into 6 frames. The large picture shown in fig. 4 belongs to the 2 × 2 4-frame synthesis mode, and fig. 5A is the single-frame picture at the upper left corner divided from fig. 4.
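The even division of the large picture into its frames can be sketched as follows, with the (rows, cols) pair standing for the synthesis mode recognized by the mode classification model:

```python
import numpy as np

def split_composite(big, rows, cols):
    """Split a composite large picture into its equally sized frames.

    big: HxW or HxWxC array; (rows, cols) is the synthesis mode, e.g.
    (2, 2) for the 4-frame mode. Frames are returned row by row,
    left to right.
    """
    h, w = big.shape[0] // rows, big.shape[1] // cols
    return [big[r * h:(r + 1) * h, c * w:(c + 1) * w]
            for r in range(rows) for c in range(cols)]
```

For the 2 × 2 picture of fig. 4, `split_composite(big, 2, 2)` would return four single-frame pictures, the first being the upper-left frame of fig. 5A.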
And 308, identifying and cutting the character area in each single frame image by using the character area segmentation model to obtain the single frame image obtained after cutting the character area.
The character region segmentation model is a machine learning model for identifying and segmenting the character region in a single-frame picture. Specifically, in this embodiment, the character region segmentation model can be obtained from a lightweight yolo-V3 (target detection network) framework after data labeling and model training. The model performs target detection on the character region and the background (i.e., the part of the picture outside the character region) in each single-frame picture and cuts off the character region, thereby avoiding its influence on subsequent processing. For example, target detection is performed on each single-frame picture containing a character region (as shown in fig. 5A), and the detected character region (the region at the bottom of fig. 5A) is cut off to obtain the cut single-frame picture shown in fig. 5B.
And 310, detecting each cut single-frame picture through the target detection model to obtain the position of the target violation vehicle in each single-frame picture.
The target detection model is a machine learning model for identifying and positioning a target vehicle, namely a target violation vehicle, in a single frame picture. Specifically, in this embodiment, the yolo target detection network may be used to detect the position of the target vehicle, i.e., the target violation vehicle, in each single-frame picture cut from the same large picture, and position the position of the target vehicle in the picture.
And step 312, segmenting the target violation vehicle in each single frame picture according to the position of the target violation vehicle in each single frame picture to obtain the violation vehicle image to be audited.
The target violation vehicle in each single-frame picture is segmented according to the positioning, and the segmented image is the violation vehicle image to be audited that needs further review (for example, fig. 6 is the violation vehicle image to be audited obtained by segmenting the target violation vehicle in fig. 5B). That is, the segmented image can be used as the input of the deep learning-based vehicle category identification model, so that the use category of the target violation vehicle is identified.
Specifically, in this embodiment, because the target violation vehicle in the single-frame picture may have only half of its lamps visible due to shooting conditions, a conventional method that cuts equally in the up, down, left, and right directions is very likely to cut the lamps off entirely; yet the lamps of some use categories, such as those of police cars and ambulances, are important features for judging the corresponding use category. Therefore, in this embodiment, when the target violation vehicle in each single-frame picture is segmented, the single-frame picture can be cut in the up-down and left-right directions according to the position of the target violation vehicle and a set cutting range, so as to obtain the target violation vehicle segmented from the cut single-frame picture, where the cutting range in the up-down direction is smaller than the cutting range in the left-right direction. By cutting relatively little in the vertical direction, this embodiment avoids losing the important judgment features of the vehicle use categories, so the identification accuracy of vehicle use categories can be improved to a certain extent.
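The asymmetric cutting rule (vertical cutting range smaller than horizontal) might look like this in outline; the numeric defaults are illustrative assumptions, not values from the patent:

```python
def cut_vehicle_box(box, cut_lr=20, cut_ud=8):
    """Tighten a located vehicle box (left, top, right, bottom) by set
    cutting ranges, cutting less in the up-down direction than in the
    left-right direction so lamp/roof features survive (values assumed)."""
    assert cut_ud < cut_lr  # the embodiment's rule: vertical cut < horizontal cut
    left, top, right, bottom = box
    return (left + cut_lr, top + cut_ud, right - cut_lr, bottom - cut_ud)
```

With the defaults, a 300×200 detection box loses 20 pixels on each side horizontally but only 8 pixels vertically.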
In one embodiment, as shown in fig. 7, the method for obtaining the deep learning-based vehicle category identification model includes:
Wherein the plurality of vehicle use categories are categories divided according to the application scenario or use property of the motor vehicle. For example, the categories may include police cars, fire trucks, ambulances, construction wreckers, and private cars. In the present embodiment, in order to obtain the vehicle category identification model, it is first necessary to prepare multi-category vehicle sample image datasets on which a base model can be trained. In the present embodiment, original images of a plurality of vehicles are collected, and image processing is performed on the original images using a method similar to that shown in fig. 3, so as to obtain images of the target vehicles in the original images. Then the use categories of the target vehicles in these images are labeled and classified to obtain a plurality of classified vehicle use categories and the images of the target vehicles respectively corresponding to each vehicle use category, and vehicle sample image datasets respectively corresponding to the vehicle use categories are generated.
For example, the labeling of the use category of the target vehicle in the image may be performed according to both the category of the target vehicle and the image characteristics, such as determining whether the corresponding image shows the head or the tail of the target vehicle according to the image characteristics, and labeling in combination with the category of the target vehicle. For example, the following category labels may be added to the images of the target vehicles: 0 private_head, 1 private_tail, 2 rental_head, 3 rental_tail, 4 passenger_head, 5 passenger_tail, 6 cargo_head, 7 cargo_tail, 8 engineering special vehicle_head, 9 engineering special vehicle_tail, 10 bus_head, 11 bus_tail, 12 police_head, 13 police_tail, 14 ambulance_head, 15 ambulance_tail, 16 engineering emergency vehicle_head, 17 engineering emergency vehicle_tail, 18 highway special vehicle_head, 19 highway special vehicle_tail, 20 law enforcement vehicle_head, 21 law enforcement vehicle_tail, 22 fire truck_head, 23 fire truck_tail, 24 school bus_head, 25 school bus_tail, 26 hazardous vehicle_head or 27 hazardous vehicle_tail, etc.
The number of samples in each category's vehicle sample image dataset is controlled to be between 5000 and 30000. The samples in each category's dataset are divided into two parts: a training sample set and a verification sample set. The training sample set accounts for 80%-90% of the samples in the vehicle sample image dataset, and the verification sample set accounts for 10%-20%. The training sample set is used for training the model, the verification sample set is used for verifying and testing the effect of the model, and the two sets are kept strictly separate.
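The per-category split can be sketched as below (the 85% fraction and the seed are illustrative choices within the stated 80%-90% range, not values specified by the patent):

```python
import random

def split_dataset(samples, train_frac=0.85, seed=0):
    """Split one category's vehicle sample images into disjoint training
    and verification sets; train_frac should fall within the stated
    0.8-0.9 range (fraction and seed here are illustrative assumptions)."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    return shuffled[:n_train], shuffled[n_train:]

# A 10000-sample category splits into 8500 training / 1500 verification
# samples, with no overlap between the two sets.
train_set, val_set = split_dataset(range(10000))
```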
In this embodiment, in order to improve the accuracy of the model, vehicle sample images of a plurality of vehicle usage categories in a training sample set are used as a basis and are preprocessed, so that vehicle sample images of different color modes are generated by randomly cutting the vehicle sample images, and the preprocessed training sample set is obtained. Wherein the color mode may be at least one of brightness, contrast, saturation, and blur. The preprocessing may specifically be to randomly adjust a color mode (i.e., at least one of brightness, contrast, saturation, and blur) of each vehicle sample image in the training sample set to obtain a plurality of adjusted vehicle sample images with different color modes corresponding to each vehicle sample image, and then randomly crop the plurality of adjusted vehicle sample images with different color modes to obtain randomly cropped vehicle sample images with different color modes corresponding to each vehicle sample image, so as to obtain a preprocessed training sample set.
For example, the random cropping may crop the plurality of adjusted vehicle sample images of different color modes according to a set size. Assuming the set size is 200×200 and the size of the vehicle sample image is 220×220, the vehicle sample image may be randomly cropped in the left-right direction by 0 to (220−200) pixels, and randomly cropped in the up-down direction by 0 to (220−200)/2 pixels; that is, the cropped edge portion in the up-down direction is smaller than that in the left-right direction. If too much is randomly cropped in the up-down direction, the car lamps may end up only half visible or disappear completely, so that the main features of some categories of vehicles, such as police cars, are lost, which is unfavorable for the model's learning of those categories' features. Cropping relatively little in the up-down direction therefore allows the model to achieve higher recognition accuracy on vehicle use categories.
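The 220→200 example can be sketched as random crop offsets whose vertical slack is halved (the helper name is an assumption; only the 0..20 horizontal and 0..10 vertical ranges come from the text above):

```python
import random

def random_crop_offsets(img_size, out_size, rng):
    """Pick random crop offsets for a square image, halving the vertical
    slack so the up-down cropping is smaller than the left-right cropping
    (per the 220 -> 200 example: dx in 0..20, dy in 0..10)."""
    slack = img_size - out_size
    dx = rng.randint(0, slack)
    dy = rng.randint(0, slack // 2)
    return dx, dy
```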
The hyper-parameters for model training comprise the base learning rate, momentum weight, maximum number of iterations, and the like used during training. After the training sample set is preprocessed, the hyper-parameters of the deep neural network classification model are configured according to actual needs, and the preprocessed training sample set is used to train the deep neural network classification model by alternately calling the forward and backward algorithms through Caffe (a deep learning framework). The model parameters are updated in continuous iterative learning until the model converges and the loss function value no longer declines, namely the loss function reaches its minimum value; the model parameters are then stored, thereby obtaining the vehicle category identification model.
The loss function expresses the degree of difference between the model's prediction and the true (namely labeled) value; it is calculated after each training iteration, and the smaller the loss function obtained by the final training, the better the robustness of the model. The loss function in this embodiment may adopt the Focal Loss function, whose calculation formula is FL(p) = −(1 − p)^λ · log(p), wherein p is the prediction classification value between 0 and 1 obtained by calling the forward algorithm layer of the deep neural network classification model, and λ can be set to 2.
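As a sanity check of the Focal Loss behavior described above, a scalar implementation (assuming the single-probability form FL(p) = −(1 − p)^λ·log(p) with λ = 2; well-classified samples contribute less loss):

```python
import math

def focal_loss(p, lam=2.0):
    """Focal Loss FL(p) = -(1 - p)**lam * log(p) for a prediction
    classification value p in (0, 1); lam is set to 2 in this embodiment.
    The (1 - p)**lam factor down-weights well-classified samples."""
    return -((1.0 - p) ** lam) * math.log(p)
```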
Specifically, the deep neural network classification model may be implemented using the Inception-V4 neural network framework shown in fig. 8. The Inception-V4 neural network framework is constructed by a method comprising: forming a feature extraction network (Stem in fig. 8) from a plurality of convolution layers and pooling layers; sequentially connecting a plurality of Inception segments (4x Inception-A, 7x Inception-B and 3x Inception-C in fig. 8) behind the feature extraction network, and inserting a Reduction (downsampling) module (Reduction-A and Reduction-B in fig. 8) between two adjacent Inception segments; and sequentially connecting a global average pooling layer (Average Pooling in fig. 8), a network optimization layer (Dropout in fig. 8) and a forward algorithm layer (Softmax in fig. 8) at the rear end of the last Inception segment to obtain the Inception-V4 neural network framework. Each Inception segment comprises a plurality of Inception modules; for example, the 4x Inception-A segment is composed of 4 Inception-A modules, the 7x Inception-B segment of 7 Inception-B modules, and the 3x Inception-C segment of 3 Inception-C modules. Inception-A, Inception-B and Inception-C are different types of Inception modules, and Reduction-A and Reduction-B are the downsampling networks in the Inception-V4 neural network framework.
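The stage sequence described above can be written out as a structural sketch (stage names follow fig. 8; module internals are omitted, so this is an outline, not a runnable network):

```python
# Structural sketch only: the backbone stage sequence of the
# Inception-V4 framework as assembled in this embodiment.
INCEPTION_V4_STAGES = (
    ["Stem"]                     # feature extraction network (conv + pooling)
    + ["Inception-A"] * 4        # 4x Inception-A segment
    + ["Reduction-A"]            # downsampling between adjacent segments
    + ["Inception-B"] * 7        # 7x Inception-B segment
    + ["Reduction-B"]
    + ["Inception-C"] * 3        # 3x Inception-C segment
    + ["Average Pooling", "Dropout", "Softmax"]  # classification head
)
```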
In an embodiment, when the training of the deep neural network classification model reaches the set number of iterations, the preprocessed verification sample set can be used to verify the deep neural network classification model, so as to obtain its recognition accuracy; the recognition effect of the model can then be evaluated through this accuracy, namely, the higher the recognition accuracy of the deep neural network classification model, the better the recognition effect. The process of preprocessing the verification sample set is similar to that of the training sample set and is not described again here.
In an embodiment, the method for automatically processing vehicle violations in the present application is further described through a specific example. Taking the picture shown in fig. 4 as the violation image to be checked, the picture is input into an intelligent traffic police violation checking system with the aim of checking whether the target vehicle (namely the ambulance in the figure) violates a traffic rule. The specific flow is as follows:
1) The trained mode classification model performs mode judgment on the picture, determining that the picture mode is 2×2; then, according to the rule that each picture has the same length and width, the large picture is evenly divided into four equal parts. For example, the single picture at the upper left corner of the divided fig. 4 is shown in fig. 5A.
2) The character region segmentation model detects the character areas of the four divided single pictures respectively and segments out pictures without the character areas; for example, cutting the character area from the picture shown in fig. 5A yields the picture shown in fig. 5B.
3) The target detection model detects the vehicle target in the image shown in fig. 5B, namely segmenting out a single-vehicle picture, as shown in fig. 6.
4) The deep learning-based vehicle category identification model classifies the vehicle in fig. 6, and the use category of the vehicle is obtained as an ambulance.
5) Finally, whether the vehicle actually violates the regulations is judged according to the use categories of vehicles that are not controlled under the traffic safety policy. That is, when the violation judging module judges that the vehicle violates the traffic rules but the vehicle type is a special vehicle (such as a police car, a fire truck, an ambulance, or an engineering rescue vehicle), the corresponding target vehicle in the spliced electronic police picture is finally judged not to be in violation.
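The final judgment in step 5) reduces to a simple matching rule, sketched below (the category strings and the exempt set are illustrative assumptions; the actual exempt categories come from the traffic safety policy):

```python
# Illustrative exempt set: use categories not regulated for this
# violation type under the traffic safety policy (assumed values).
EXEMPT_CATEGORIES = {"police car", "fire truck", "ambulance",
                     "engineering rescue vehicle"}

def is_actual_violation(violation_detected, use_category):
    """A vehicle flagged by the electronic police is finally judged
    non-violating when its recognized use category is exempt."""
    return violation_detected and use_category not in EXEMPT_CATEGORIES
```

In the worked example, the ambulance captured by the electronic police is flagged but then judged not to be an actual violation.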
It should be understood that although the various steps in the flow charts of figs. 1-8 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in figs. 1-8 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and the order of performing these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 9, there is provided an automatic vehicle violation processing device comprising: a to-be-audited data acquisition module 901, an illegal type identification module 902, a vehicle category identification module 903 and a violation judgment module 904, wherein:
the system comprises a to-be-audited data acquisition module 901, which is used for acquiring a to-be-audited violation vehicle image, wherein the violation vehicle image comprises a target violation vehicle;
an illegal type identification module 902, configured to identify an illegal type of the target violation vehicle;
the vehicle type identification module 903 is used for identifying the use type of the target violation vehicle in the violation vehicle image to be audited by adopting a deep learning-based vehicle type identification model;
a violation judging module 904, configured to judge, according to the violation type, whether the usage category of the target violation vehicle matches a usage category of an unmanaged vehicle in the traffic safety policy; and if the vehicle is matched with the target violation vehicle, determining that the target violation vehicle does not violate the regulations actually.
In one embodiment, the to-be-audited data obtaining module 901 is specifically configured to obtain a violation image to be audited, which is captured by an electronic police, where the violation image includes a synthesized multi-frame picture; carrying out target detection on the synthesized multi-frame picture, and determining a target violation vehicle; detecting the violation image through a mode classification model to determine a synthesis mode of a plurality of frames of pictures in the violation image, and segmenting each frame of picture in the violation image according to the synthesis mode to obtain a plurality of segmented single-frame pictures; identifying and cutting off the character area in each single-frame picture by using a character area segmentation model to obtain a single-frame picture obtained after cutting the character area; detecting each cut single-frame picture through a target detection model to obtain the position of the target violation vehicle in each single-frame picture; and segmenting the target violation vehicles in each single frame picture according to the positions of the target violation vehicles in each single frame picture to obtain the violation vehicle image to be audited.
In one embodiment, the device further comprises, for obtaining the vehicle category identification model: a vehicle sample image acquisition unit, configured to acquire vehicle sample image datasets respectively corresponding to a plurality of vehicle use categories, wherein the vehicle sample image datasets comprise training sample sets; an image preprocessing unit, configured to preprocess each vehicle sample image in the training sample set to obtain a preprocessed training sample set; and a model training unit, configured to acquire the hyper-parameters for model training, and train the deep neural network classification model according to the hyper-parameters using the preprocessed training sample set until the loss function reaches its minimum value, obtaining the vehicle category identification model.
In one embodiment, the device further comprises a model verification unit, configured to preprocess each vehicle sample image in the verification sample set to obtain a preprocessed verification sample set; and, when training the deep neural network classification model according to the hyper-parameters with the preprocessed training sample set reaches the set number of iterations, verify the deep neural network classification model using the preprocessed verification sample set to obtain the recognition accuracy of the deep neural network classification model.
In one embodiment, the image preprocessing unit is specifically configured to randomly adjust a color mode of each vehicle sample image, wherein the color mode includes at least one of brightness, contrast, saturation, and blur; and randomly crop the adjusted vehicle sample images according to a set size to respectively obtain, for each vehicle sample image, randomly cropped vehicle sample images of different color modes.
In one embodiment, the vehicle sample image acquiring unit is specifically configured to acquire original images of a plurality of vehicles, perform image processing on the original images, and obtain an image of a target vehicle in the original images; marking the use category of the target vehicle in the image of the target vehicle; and classifying the images of the target vehicle according to the marked use categories to obtain a plurality of classified vehicle use categories and images of the target vehicle respectively corresponding to the vehicle use categories, and generating vehicle sample image data sets respectively corresponding to the vehicle use categories.
Specific limitations of the automatic vehicle violation processing device can be found in the above limitations of the automatic vehicle violation processing method, and are not described in detail herein. The modules in the automatic processing device for vehicle violation can be realized by software, hardware and their combination. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 10. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer equipment is used for storing the image data of the violation vehicles to be audited. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of automatic handling of vehicle violations.
Those skilled in the art will appreciate that the architecture shown in fig. 10 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring an image of the violation vehicle to be audited, wherein the image of the violation vehicle comprises a target violation vehicle;
identifying an illegal type of the target violation vehicle;
identifying the use category of the target violation vehicle in the violation vehicle image to be audited by adopting a vehicle category identification model based on deep learning;
judging whether the use category of the target violation vehicle is matched with the use category of the vehicle which is not controlled in the traffic safety policy or not according to the violation type;
and if so, determining that the target violation vehicle does not violate the regulations actually.
In one embodiment, obtaining an image of a violation vehicle to be reviewed comprises: acquiring a violation image to be audited, which is captured by an electronic police, wherein the violation image comprises a synthesized multi-frame picture; carrying out target detection on the synthesized multi-frame picture, and determining a target violation vehicle; detecting the violation image through a mode classification model to determine a synthesis mode of a plurality of frames of pictures in the violation image, and segmenting each frame of picture in the violation image according to the synthesis mode to obtain a plurality of segmented single-frame pictures; identifying and cutting off the character area in each single-frame picture by using a character area segmentation model to obtain a single-frame picture obtained after cutting the character area; detecting each cut single-frame picture through a target detection model to obtain the position of the target violation vehicle in each single-frame picture; and segmenting the target violation vehicles in each single frame picture according to the positions of the target violation vehicles in each single frame picture to obtain the violation vehicle image to be audited.
In one embodiment, segmenting the target violation vehicle in each single frame picture according to the position of the target violation vehicle in each single frame picture comprises: and according to the position of the target violation vehicle in each single frame picture, cutting the single frame picture in the vertical direction and the horizontal direction according to a set cutting range to obtain the cut target violation vehicle which is cut from the single frame picture, wherein the cutting range in the vertical direction is smaller than the cutting range in the horizontal direction.
In one embodiment, the method for obtaining the vehicle class identification model based on deep learning comprises the following steps: acquiring vehicle sample image data sets respectively corresponding to a plurality of vehicle use categories, wherein the vehicle sample image data sets comprise training sample sets; preprocessing each vehicle sample image in the training sample set to obtain a preprocessed training sample set; and acquiring hyper-parameters for model training, training the deep neural network classification model according to the hyper-parameters and by adopting a preprocessed training sample set, and acquiring a vehicle class identification model until a loss function reaches a minimum value.
In one embodiment, the vehicle sample image dataset includes a validation sample set; the method further comprises: preprocessing each vehicle sample image in the verification sample set to obtain a preprocessed verification sample set; and when the deep neural network classification model is trained according to the hyper-parameters and by adopting the preprocessed training sample set to reach the iteration times, verifying the deep neural network classification model by adopting the preprocessed verification sample set to obtain the recognition accuracy of the deep neural network classification model.
In one embodiment, preprocessing each vehicle sample image includes: randomly adjusting a color mode of each vehicle sample image in the training sample set or the verification sample set, wherein the color mode includes at least one of brightness, contrast, saturation, and blur; and randomly cropping the adjusted vehicle sample images according to a set size to respectively obtain, for each vehicle sample image, randomly cropped vehicle sample images of different color modes.
In one embodiment, the construction method of the deep neural network classification model includes: forming a feature extraction network using a plurality of convolutional layers and pooling layers; sequentially connecting a plurality of Inception segments after the feature extraction network, and inserting a Reduction module between every two adjacent Inception segments; and sequentially connecting a global average pooling layer, a network optimization layer and a forward algorithm layer at the rear end of the last Inception segment to obtain the deep neural network classification model.
In one embodiment, the loss function adopts the Focal Loss function FL(p) = −(1 − p)^λ · log(p), wherein p is the prediction classification value between 0 and 1 obtained by calling the forward algorithm layer of the deep neural network classification model, and λ is set to 2.
In one embodiment, obtaining vehicle sample image datasets corresponding to respective ones of a plurality of vehicle usage categories includes: acquiring original images of a plurality of vehicles, and performing image processing on the original images to obtain images of target vehicles in the original images; marking the use category of the target vehicle in the image of the target vehicle; and classifying the images of the target vehicle according to the marked use categories to obtain a plurality of classified vehicle use categories and images of the target vehicle respectively corresponding to the vehicle use categories, and generating vehicle sample image data sets respectively corresponding to the vehicle use categories.
In one embodiment, the method further comprises: and if not, determining that the target violation vehicle actually violates the rules.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring an image of the violation vehicle to be audited, wherein the image of the violation vehicle comprises a target violation vehicle;
identifying an illegal type of the target violation vehicle;
identifying the use category of the target violation vehicle in the violation vehicle image to be audited by adopting a vehicle category identification model based on deep learning;
judging whether the use category of the target violation vehicle is matched with the use category of the vehicle which is not controlled in the traffic safety policy or not according to the violation type;
and if so, determining that the target violation vehicle does not violate the regulations actually.
In one embodiment, obtaining an image of a violation vehicle to be reviewed comprises: acquiring a violation image to be audited, which is captured by an electronic police, wherein the violation image comprises a synthesized multi-frame picture; carrying out target detection on the synthesized multi-frame picture, and determining a target violation vehicle; detecting the violation image through a mode classification model to determine a synthesis mode of a plurality of frames of pictures in the violation image, and segmenting each frame of picture in the violation image according to the synthesis mode to obtain a plurality of segmented single-frame pictures; identifying and cutting off the character area in each single-frame picture by using a character area segmentation model to obtain a single-frame picture obtained after cutting the character area; detecting each cut single-frame picture through a target detection model to obtain the position of the target violation vehicle in each single-frame picture; and segmenting the target violation vehicles in each single frame picture according to the positions of the target violation vehicles in each single frame picture to obtain the violation vehicle image to be audited.
In one embodiment, segmenting the target violation vehicle in each single frame picture according to the position of the target violation vehicle in each single frame picture comprises: and according to the position of the target violation vehicle in each single frame picture, cutting the single frame picture in the vertical direction and the horizontal direction according to a set cutting range to obtain the cut target violation vehicle which is cut from the single frame picture, wherein the cutting range in the vertical direction is smaller than the cutting range in the horizontal direction.
In one embodiment, the method for obtaining the vehicle class identification model based on deep learning comprises the following steps: acquiring vehicle sample image data sets respectively corresponding to a plurality of vehicle use categories, wherein the vehicle sample image data sets comprise training sample sets; preprocessing each vehicle sample image in the training sample set to obtain a preprocessed training sample set; and acquiring hyper-parameters for model training, training the deep neural network classification model according to the hyper-parameters and by adopting a preprocessed training sample set, and acquiring a vehicle class identification model until a loss function reaches a minimum value.
In one embodiment, the vehicle sample image dataset includes a validation sample set; the method further comprises: preprocessing each vehicle sample image in the verification sample set to obtain a preprocessed verification sample set; and when the deep neural network classification model is trained according to the hyper-parameters and by adopting the preprocessed training sample set to reach the iteration times, verifying the deep neural network classification model by adopting the preprocessed verification sample set to obtain the recognition accuracy of the deep neural network classification model.
In one embodiment, preprocessing each vehicle sample image comprises: randomly adjusting a color mode of each vehicle sample image in the training sample set or the verification sample set, wherein the color mode comprises at least one of brightness, contrast, saturation and blur; and randomly cropping the adjusted vehicle sample images to a set size, to obtain, for each vehicle sample image, randomly cropped vehicle sample images with different color modes.
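A sketch of this preprocessing, using brightness and contrast jitter as simple numpy stand-ins for the full brightness/contrast/saturation/blur set, followed by a random crop to a set size; the jitter ranges are illustrative assumptions:

```python
import numpy as np

def preprocess(image, crop_size, rng):
    """Sketch of the embodiment's preprocessing: randomly adjust one
    colour attribute, then randomly crop to a set size.

    Only brightness and contrast jitter are implemented here; the
    0.8-1.2 jitter ranges are illustrative, not taken from the text.
    """
    img = image.astype(np.float32)
    if rng.random() < 0.5:
        img *= rng.uniform(0.8, 1.2)                                   # brightness
    else:
        img = (img - img.mean()) * rng.uniform(0.8, 1.2) + img.mean()  # contrast
    img = np.clip(img, 0, 255).astype(np.uint8)
    h, w = img.shape[:2]
    top = rng.integers(0, h - crop_size + 1)   # random crop origin
    left = rng.integers(0, w - crop_size + 1)
    return img[top:top + crop_size, left:left + crop_size]
```

Applying this several times per sample yields the differently adjusted, randomly cropped copies the embodiment describes.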
In one embodiment, the construction method of the deep neural network classification model comprises the following steps: forming a feature extraction network from a plurality of convolutional layers and pooling layers; sequentially connecting a plurality of Inception blocks after the feature extraction network, and inserting a Reduction module between every two adjacent Inception blocks; and sequentially connecting a global average pooling layer, a network optimization layer and a forward algorithm layer after the last Inception block to obtain the deep neural network classification model.
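The layer ordering described above can be summarized structurally; the stem layout and block counts below are the usual Inception-V4 choices and are assumptions, since the embodiment does not state them:

```python
def build_inception_v4_skeleton(num_blocks=(4, 7, 3)):
    """Layer-ordering sketch of the classification model described
    above: a conv/pool feature-extraction stem, groups of Inception
    blocks with a Reduction module between adjacent groups, then global
    average pooling, an optimization layer (dropout here) and a forward
    (softmax) layer.
    """
    layers = ["conv3x3", "conv3x3", "maxpool", "conv3x3", "maxpool"]  # stem
    for i, n in enumerate(num_blocks):
        layers += ["inception_" + chr(65 + i)] * n        # Inception-A/B/C
        if i < len(num_blocks) - 1:
            layers.append("reduction_" + chr(65 + i))     # between groups
    layers += ["global_avg_pool", "dropout", "softmax"]
    return layers
```

Note the Reduction modules appear only between adjacent block groups, never after the last one, matching the embodiment's wording.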
In one embodiment, the loss function employs a Focal Loss function, which is:

FL(ŷ) = −(1 − ŷ)^λ · log(ŷ)

wherein ŷ is the predicted classification value obtained by calling the forward algorithm layer of the deep neural network classification model, and λ is set to 2.
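Assuming the standard Focal Loss form FL(p) = −(1 − p)^λ · log(p) with λ = 2 as the embodiment sets, a small sketch over a batch of predicted true-class probabilities:

```python
import math

def focal_loss(p_true, lam=2.0, eps=1e-12):
    """Mean Focal Loss over a batch, assuming the standard form
    FL(p) = -(1 - p)**lam * log(p), where each p in `p_true` is the
    network's predicted probability for the true class and lam = 2.
    """
    total = 0.0
    for p in p_true:
        p = min(max(p, eps), 1.0)  # clamp away from log(0)
        total += -((1.0 - p) ** lam) * math.log(p)
    return total / len(p_true)
```

Confident correct predictions (p near 1) contribute almost nothing, so training focuses on the hard samples.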
In one embodiment, obtaining the vehicle sample image data sets respectively corresponding to the plurality of vehicle use categories comprises: acquiring original images of a plurality of vehicles, and performing image processing on the original images to obtain images of the target vehicles in the original images; labeling the use category of the target vehicle in each image; and classifying the images of the target vehicles according to the labeled use categories to obtain a plurality of vehicle use categories and the images of the target vehicles respectively corresponding to each category, thereby generating the vehicle sample image data sets respectively corresponding to the vehicle use categories.
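The classify-by-labeled-category step reduces to grouping; representing `labeled_images` as (image_id, use_category) pairs is an illustrative layout, not one the patent specifies:

```python
from collections import defaultdict

def build_category_datasets(labeled_images):
    """Group labelled target-vehicle images into per-use-category sample
    sets, as the embodiment describes. `labeled_images` is an iterable
    of (image_id, use_category) pairs.
    """
    datasets = defaultdict(list)
    for image_id, category in labeled_images:
        datasets[category].append(image_id)
    return dict(datasets)
```

Each per-category list can then be split into the training and verification sample sets used by the model.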
In one embodiment, the method further comprises: if the use category of the target violation vehicle does not match an unmanaged vehicle use category in the traffic safety policy, determining that the target violation vehicle has actually violated the regulations.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A method for automatic handling of vehicle violations, the method comprising:
acquiring a violation vehicle image to be checked, wherein the violation vehicle image comprises a target violation vehicle, the violation vehicle image to be checked is an image for which an intelligent traffic police violation audit system has preliminarily determined that a violation exists, and the target violation vehicle is a vehicle preliminarily determined to have a violation;
detecting the image of the violation vehicle to be checked through an intelligent traffic police violation checking system, and identifying the violation type of the target violation vehicle, wherein the violation type refers to the violation behavior of the target violation vehicle;
identifying the use category of the target violation vehicle in the violation vehicle image to be audited by adopting a vehicle category identification model based on deep learning, wherein the use category is divided according to the application scene or use property of the vehicle;
judging whether the use category of the target violation vehicle matches an unmanaged vehicle use category in a traffic safety policy according to the violation type, wherein the traffic safety policy is a control policy or an unmanaged policy, organized by vehicle use category and set according to traffic laws and regional traffic rules; the control policy is a traffic control policy or a parking limit policy set for a certain class or several classes of vehicles; and the unmanaged policy is a policy, set for a certain class or several classes of vehicles, under which those vehicles are not restricted by a driving route, a driving direction, a driving speed or a signal lamp;
if so, determining that the target violation vehicle does not violate the regulations actually;
the step of obtaining the violation vehicle image to be audited comprises: acquiring a to-be-audited violation image captured by an electronic police camera, the violation image comprising a synthesized multi-frame picture; performing target detection on the synthesized multi-frame picture to determine the target violation vehicle; detecting the violation image with a mode classification model to determine the synthesis mode of the multiple frame pictures in the violation image, and segmenting each frame picture in the violation image according to the synthesis mode to obtain a plurality of segmented single-frame pictures; identifying and cutting out the character area in each single-frame picture with a character area segmentation model to obtain single-frame pictures with the character area removed; detecting each cut single-frame picture with a target detection model to obtain the position of the target violation vehicle in each single-frame picture; and segmenting the target violation vehicle from each single-frame picture according to its position to obtain the violation vehicle image to be audited;
the method for acquiring the vehicle category identification model based on deep learning comprises the following steps: obtaining vehicle sample image data sets respectively corresponding to a plurality of vehicle use categories, wherein the vehicle sample image data sets comprise training sample sets; preprocessing each vehicle sample image in the training sample set to obtain a preprocessed training sample set; obtaining a hyper-parameter for model training, training a deep neural network classification model by adopting a preprocessed training sample set according to the hyper-parameter, and obtaining a vehicle class identification model until a loss function reaches a minimum value, wherein the deep neural network classification model is realized by adopting an inclusion-V4 neural network framework.
2. The method of automatically handling vehicle violations of claim 1, wherein said segmenting the target violation vehicle in each single frame based on the position of the target violation vehicle in each single frame comprises:
and according to the position of the target violation vehicle in each single frame picture, cropping the single frame picture in the vertical and horizontal directions according to a set cutting range to obtain the target violation vehicle cut out of the single frame picture, wherein the cutting range in the vertical direction is smaller than the cutting range in the horizontal direction.
3. The method of automatically handling vehicle violations of claim 1, wherein pre-processing each vehicle sample image in said training sample set includes:
randomly adjusting a color mode of each vehicle sample image in the training sample set, wherein the color mode comprises at least one of brightness, contrast, saturation and blur;
and randomly cropping the adjusted vehicle sample images to a set size, to obtain, for each vehicle sample image, randomly cropped vehicle sample images with different color modes.
4. The automatic processing method of vehicle violations as claimed in claim 1, wherein the construction method of the deep neural network classification model includes:
forming a feature extraction network from a plurality of convolutional layers and pooling layers;
sequentially connecting a plurality of Inception blocks behind the feature extraction network, and inserting a Reduction module between every two adjacent Inception blocks;
and sequentially connecting a global average pooling layer, a network optimization layer and a forward algorithm layer behind the last Inception block to obtain the deep neural network classification model.
5. The automatic processing method for vehicle violations of claim 4, wherein the loss function employs a Focal Loss function, the Focal Loss function being: FL(ŷ) = −(1 − ŷ)^λ · log(ŷ), wherein ŷ is the predicted classification value obtained by calling the forward algorithm layer of the deep neural network classification model, and λ is set to 2.
6. The method for automatic handling of vehicle violations as claimed in any one of claims 1 to 4, further including:
and if the usage category of the target violation vehicle is not matched with the usage category of the vehicle which is not controlled in the traffic safety policy, determining that the target violation vehicle actually violates the rules.
7. An automatic vehicle violation processing device, comprising:
the system comprises a to-be-audited data acquisition module, a to-be-audited verification module and a verification module, wherein the to-be-audited violation vehicle image comprises a target violation vehicle, the to-be-audited violation vehicle image is an image of an intelligent traffic police violation audit system for preliminarily judging that violation conditions exist, and the target violation vehicle is a vehicle preliminarily judged that the violation conditions exist;
the illegal type identification module is used for detecting the image of the violation vehicle to be checked through an intelligent traffic police illegal checking system and identifying the illegal type of the target violation vehicle, wherein the illegal type refers to the illegal behavior of the target violation vehicle;
the vehicle type identification module is used for identifying the use type of the target violation vehicle in the violation vehicle image to be audited by adopting a vehicle type identification model based on deep learning, and the use type is divided according to the application scene or the use property of the vehicle;
the violation judging module is used for judging whether the use category of the target violation vehicle matches an unmanaged vehicle use category in a traffic safety policy according to the violation type, wherein the traffic safety policy is a control policy or an unmanaged policy, organized by vehicle use category and set according to traffic laws and regulations and regional traffic rules; the control policy is a traffic control policy or a parking limit policy set for a certain class or several classes of vehicles; the unmanaged policy is a policy, set for a certain class or several classes of vehicles, under which those vehicles are not restricted by a driving route, a driving direction, a driving speed or a signal lamp; and, if the use category matches, determining that the target violation vehicle has not actually violated the regulations;
the pending audit data acquisition module is specifically configured to: acquire a to-be-audited violation image captured by an electronic police camera, the violation image comprising a synthesized multi-frame picture; perform target detection on the synthesized multi-frame picture to determine the target violation vehicle; detect the violation image with a mode classification model to determine the synthesis mode of the multiple frame pictures in the violation image, and segment each frame picture in the violation image according to the synthesis mode to obtain a plurality of segmented single-frame pictures; identify and cut out the character area in each single-frame picture with a character area segmentation model to obtain single-frame pictures with the character area removed; detect each cut single-frame picture with a target detection model to obtain the position of the target violation vehicle in each single-frame picture; and segment the target violation vehicle from each single-frame picture according to its position to obtain the violation vehicle image to be audited;
the vehicle category identification model comprises: a vehicle sample image acquisition unit for acquiring vehicle sample image data sets respectively corresponding to a plurality of vehicle use categories, the vehicle sample image data sets comprising training sample sets; an image preprocessing unit for preprocessing each vehicle sample image in the training sample set to obtain a preprocessed training sample set; and a model training unit for acquiring hyper-parameters for model training, and training the deep neural network classification model with the preprocessed training sample set according to the hyper-parameters until the loss function reaches a minimum, thereby obtaining the vehicle category identification model, wherein the deep neural network classification model is implemented with an Inception-V4 neural network architecture.
8. The automatic vehicle violation processing device of claim 7 wherein the violation determination module is further configured to:
and if the usage category of the target violation vehicle is not matched with the usage category of the vehicle which is not controlled in the traffic safety policy, determining that the target violation vehicle actually violates the rules.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910832274.XA CN110415529B (en) | 2019-09-04 | 2019-09-04 | Automatic processing method and device for vehicle violation, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110415529A CN110415529A (en) | 2019-11-05 |
CN110415529B true CN110415529B (en) | 2021-09-28 |
Family
ID=68370126
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910832274.XA Expired - Fee Related CN110415529B (en) | 2019-09-04 | 2019-09-04 | Automatic processing method and device for vehicle violation, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110415529B (en) |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110853364B (en) * | 2019-11-18 | 2022-03-08 | 珠海研果科技有限公司 | Data monitoring method and device |
CN110910654B (en) * | 2019-12-03 | 2021-07-27 | 上海眼控科技股份有限公司 | Illegal information processing method and device, electronic equipment and readable storage medium |
CN111179971A (en) * | 2019-12-03 | 2020-05-19 | 杭州网易云音乐科技有限公司 | Nondestructive audio detection method and device, electronic equipment and storage medium |
CN110992703A (en) * | 2019-12-03 | 2020-04-10 | 浙江大华技术股份有限公司 | Traffic violation determination method and device |
CN110969860B (en) * | 2019-12-11 | 2021-12-07 | 上海眼控科技股份有限公司 | Background auditing system and method for traffic law violation behaviors |
CN111339337A (en) * | 2019-12-18 | 2020-06-26 | 贵州智诚科技有限公司 | Method for labeling penalty treatment based on road traffic law-violation behaviors |
CN111292539A (en) * | 2020-02-18 | 2020-06-16 | 上海眼控科技股份有限公司 | Method and device for determining school bus violation behaviors, computer equipment and storage medium |
CN111340811B (en) * | 2020-02-19 | 2023-08-11 | 浙江大华技术股份有限公司 | Resolution method, device and computer storage medium for violation synthetic graph |
CN111353444A (en) * | 2020-03-04 | 2020-06-30 | 上海眼控科技股份有限公司 | Marker lamp monitoring method and device, computer equipment and storage medium |
CN111401162A (en) * | 2020-03-05 | 2020-07-10 | 上海眼控科技股份有限公司 | Illegal auditing method for muck vehicle, electronic device, computer equipment and storage medium |
CN111860491A (en) * | 2020-04-17 | 2020-10-30 | 北京嘀嘀无限科技发展有限公司 | Vehicle authenticity authentication method, authentication device and readable storage medium |
CN111539317A (en) * | 2020-04-22 | 2020-08-14 | 上海眼控科技股份有限公司 | Vehicle illegal driving detection method and device, computer equipment and storage medium |
CN111613225A (en) * | 2020-04-27 | 2020-09-01 | 深圳壹账通智能科技有限公司 | Method and system for automatically reporting road violation based on voice and image processing |
CN111680633A (en) * | 2020-06-10 | 2020-09-18 | 上海眼控科技股份有限公司 | Vehicle violation identification method and device, computer equipment and storage medium |
CN111598054A (en) * | 2020-06-19 | 2020-08-28 | 上海眼控科技股份有限公司 | Vehicle detection method and device, computer equipment and storage medium |
CN112329724B (en) * | 2020-11-26 | 2022-08-05 | 四川大学 | Real-time detection and snapshot method for lane change of motor vehicle |
CN112699827B (en) * | 2021-01-05 | 2023-07-25 | 长威信息科技发展股份有限公司 | Traffic police treatment method and system based on blockchain |
CN112766115B (en) * | 2021-01-08 | 2022-04-22 | 广州紫为云科技有限公司 | Traffic travel scene violation intelligence based analysis method and system and storage medium |
CN112863184B (en) * | 2021-01-12 | 2022-11-11 | 山西省交通运输运行监测与应急处置中心 | Traffic information management system |
CN112820116A (en) * | 2021-01-29 | 2021-05-18 | 上海眼控科技股份有限公司 | Vehicle detection method, device, computer equipment and storage medium |
CN113851001B (en) * | 2021-09-17 | 2023-08-29 | 同济大学 | Multi-lane merging illegal automatic auditing method, system, device and storage medium |
CN114782904A (en) * | 2022-04-26 | 2022-07-22 | 平安普惠企业管理有限公司 | Vehicle compaction line detection method, device, equipment and storage medium |
CN114693722B (en) * | 2022-05-31 | 2022-09-09 | 山东极视角科技有限公司 | Vehicle driving behavior detection method, detection device and detection equipment |
TWI849653B (en) * | 2022-12-28 | 2024-07-21 | 李子介 | Intelligent auxiliary technology traffic-enforcement system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101485512B1 (en) * | 2014-03-28 | 2015-01-23 | 주식회사 성우음향정보통신 | The sequence processing method of images through hippocampal neual network learning of behavior patterns in case of future crimes |
CN109948418A (en) * | 2018-12-31 | 2019-06-28 | 上海眼控科技股份有限公司 | A kind of illegal automatic auditing method of violation guiding based on deep learning |
CN109948416A (en) * | 2018-12-31 | 2019-06-28 | 上海眼控科技股份有限公司 | A kind of illegal occupancy bus zone automatic auditing method based on deep learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110415529B (en) | Automatic processing method and device for vehicle violation, computer equipment and storage medium | |
CN107967806B (en) | Vehicle fake-license detection method, device, readable storage medium storing program for executing and electronic equipment | |
Peng et al. | Uncertainty evaluation of object detection algorithms for autonomous vehicles | |
CN110197589B (en) | Deep learning-based red light violation detection method | |
CN112712057B (en) | Traffic signal identification method and device, electronic equipment and storage medium | |
CN109948416A (en) | A kind of illegal occupancy bus zone automatic auditing method based on deep learning | |
CN110188807A (en) | Tunnel pedestrian target detection method based on cascade super-resolution network and improvement Faster R-CNN | |
CN110738842A (en) | Accident responsibility division and behavior analysis method, device, equipment and storage medium | |
CN107563265A (en) | A kind of high beam detection method and device | |
CN113033604A (en) | Vehicle detection method, system and storage medium based on SF-YOLOv4 network model | |
CN111723854B (en) | Expressway traffic jam detection method, equipment and readable storage medium | |
CN111724607B (en) | Steering lamp use detection method and device, computer equipment and storage medium | |
CN109508659A (en) | A kind of face identification system and method for crossing | |
CN112270244A (en) | Target violation monitoring method and device, electronic equipment and storage medium | |
CN109948419A (en) | A kind of illegal parking automatic auditing method based on deep learning | |
CN111325256A (en) | Vehicle appearance detection method and device, computer equipment and storage medium | |
CN115546742A (en) | Rail foreign matter identification method and system based on monocular thermal infrared camera | |
CN113221760A (en) | Expressway motorcycle detection method | |
CN114495060B (en) | Road traffic marking recognition method and device | |
CN111652137A (en) | Illegal vehicle detection method and device, computer equipment and storage medium | |
CN111598054A (en) | Vehicle detection method and device, computer equipment and storage medium | |
CN117576674A (en) | License plate recognition method, device, equipment and medium | |
CN114693722B (en) | Vehicle driving behavior detection method, detection device and detection equipment | |
CN114882448B (en) | Vehicle monitoring method and electronic equipment | |
Khan | Vehicle and pedestrian detection using YOLOv3 and YOLOv4 for self-driving cars |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
PE01 | Entry into force of the registration of the contract for pledge of patent right |
Denomination of invention: Automatic processing method, device, computer equipment and storage medium of vehicle violation
Effective date of registration: 20220211
Granted publication date: 20210928
Pledgee: Shanghai Bianwei Network Technology Co.,Ltd.
Pledgor: SHANGHAI EYE CONTROL TECHNOLOGY Co.,Ltd.
Registration number: Y2022310000023
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20210928 |