CN115762178B - Intelligent electronic police violation detection system and method - Google Patents

Intelligent electronic police violation detection system and method

Info

Publication number
CN115762178B
Authority
CN
China
Prior art keywords
feature, features, convolution, target, characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310028546.7A
Other languages
Chinese (zh)
Other versions
CN115762178A (en)
Inventor
赵群东
朱宁锦
梁飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHANGXUN COMMUNICATION SERVICE CO LTD
Original Assignee
CHANGXUN COMMUNICATION SERVICE CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHANGXUN COMMUNICATION SERVICE CO LTD filed Critical CHANGXUN COMMUNICATION SERVICE CO LTD
Priority to CN202310028546.7A priority Critical patent/CN115762178B/en
Publication of CN115762178A publication Critical patent/CN115762178A/en
Application granted granted Critical
Publication of CN115762178B publication Critical patent/CN115762178B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The application relates to the technical field of traffic electronics, and in particular to an intelligent electronic police violation detection system and method. The method comprises the following steps: acquiring a plurality of real-time image data in a plurality of target ranges; respectively inputting the real-time image data into an image recognition device to perform first feature extraction, obtaining a plurality of first feature maps containing first features; comparing the plurality of first features in the plurality of first feature maps, and integrating the first feature maps corresponding to the same first feature into a real-time image stream; extracting second features in the plurality of feature images, comparing the plurality of second features, judging the change rate between every two adjacent second features using the time nodes of the plurality of first feature maps as judgment nodes, and determining target second features based on a preset change-rate threshold; and extracting an abnormal feature in the target second features based on an abnormality recognition device, and taking the abnormal feature as abnormality information.

Description

Intelligent electronic police violation detection system and method
Technical Field
The application relates to the technical field of traffic electronics, in particular to an intelligent electronic police violation detection system and method.
Background
Among all vehicle traffic accidents, accidents at night occur at roughly 1.5 times the rate of accidents during the day; major traffic accidents occurring at night account for about 60% of the total, and statistics indicate that 30%-40% of night-time traffic accidents are caused by drivers' use of high beam lights.
In order to reduce the number of traffic accidents caused by high beams, curb potential traffic safety hazards and improve urban traffic safety, government departments have adopted numerous supervision schemes and dispatched a great deal of manpower for deployment and control. However, because vehicle lamps are variable and road driving speeds are high, supervisors face great difficulty, evidence is hard to obtain, and the illegal use of high beams is difficult to address effectively. Therefore, an automatic high beam detection system needs to be developed, so that the use of high beams by vehicles is detected through an intelligent, automated scheme, the supervision difficulty of the relevant departments is reduced, and the control of high beam use is strengthened.
Disclosure of Invention
In order to solve the above technical problems, the application provides an intelligent electronic police violation detection system and method, which, by configuring a target recognition model for recognizing the target to be identified and an anomaly recognition model for performing anomaly recognition based on the recognized target, realize targeted processing of different abnormality information and improve the accuracy and efficiency of violation identification.
In order to achieve the above purpose, the technical solution adopted in the embodiment of the present application is as follows:
the intelligent electronic police violation detection system comprises a plurality of image acquisition devices, a violation information management terminal and an edge server which is respectively communicated with the image acquisition devices and the violation information management terminal, wherein the image acquisition devices are used for acquiring real-time image data in a target area, the edge server processes the real-time image data based on the external environment data, an image recognition device and an abnormality recognition device are arranged in the edge server, the image acquisition devices acquire the real-time image data based on an arrangement sequence, the image recognition device extracts feature images with the same first features in the real-time image data, the abnormality recognition device extracts second features in the feature images and recognizes the abnormal features in the second features, and the abnormal features are used as abnormality information; the first feature is a vehicle feature, the feature image is a vehicle image containing the vehicle feature, and the second feature is a halo feature in the corresponding vehicle image; identifying an outlier feature in the second feature includes comparing halo range values in the halo feature.
In a second aspect, an intelligent electronic police violation detection method, based on the intelligent electronic police violation detection system, comprises the following steps: acquiring a plurality of real-time image data in a plurality of target ranges; respectively inputting the real-time image data into an image recognition device to perform first feature extraction, obtaining a plurality of first feature maps containing first features; comparing the plurality of first features in the plurality of first feature maps, and integrating the first feature maps corresponding to the same first feature into a real-time image stream; extracting second features in the plurality of feature images, comparing the plurality of second features, judging the change rate between every two adjacent second features using the time nodes of the plurality of first feature maps as judgment nodes, and determining target second features based on a preset change-rate threshold; and extracting an abnormal feature in the target second features based on an abnormality recognition device, and taking the abnormal feature as abnormality information, which specifically comprises the following steps:
acquiring a halation range in the second feature, and acquiring a halation change index based on the halation range, wherein the halation change index is:
σ_GY = sqrt( (1/n) · Σ_{i=1}^{n} ( G_GYi − μ_GY )² )
wherein σ_GY is the halation change index, G_GYi is the halo range value, and μ_GY is the mean of the halo range values;
obtaining a light intensity index based on the halation change index and the halo range values, wherein G_Q is the light intensity index;
and comparing the light intensity index with a preset light intensity threshold value to obtain a comparison result.
In a first implementation manner of the second aspect, a neural network model is configured in the image recognition device. The neural network model comprises a plurality of convolution layers, and a plurality of convolution blocks and residual blocks are configured in each convolution layer. An attention processing module is further disposed between the two residual blocks: the input of the attention processing module is the output feature of the convolution block, the input of the convolution block is the feature from the residual block of the first convolution layer, and the output of the attention processing module is the attention feature. The attention feature is input into the residual block of the second convolution layer, feature extraction is performed, and the extracted feature is input into the next convolution layer; this operation is repeated until the final convolution layer outputs the target feature, and the output target feature is processed through a first convolution set to obtain a first target feature map.
With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner, the output features of the middle convolution layer and of the convolution layer following the middle convolution layer are extracted and used as inputs to a second convolution set and a third convolution set, which process them to obtain a second feature map and a third feature map.
With reference to the second possible implementation manner of the second aspect, in a third possible implementation manner, the method further includes a plurality of prediction networks corresponding to the first feature map, the second feature map and the third feature map, the plurality of prediction networks are provided with convolution kernels corresponding to the number of preset recognition types, correction is performed on a preset detection frame through convolution kernel processing to obtain a target detection frame, target data is obtained based on the target detection frame, the target data is a first feature, and image data including the first feature is the first feature map.
With reference to the third possible implementation manner of the second aspect, in a fourth possible implementation manner, extracting second features in the plurality of first feature graphs includes: and carrying out binarization processing on the first characteristic image to obtain a first characteristic gray image, and extracting the gray area which accords with a preset gray area threshold as a second characteristic.
With reference to the fourth possible implementation manner of the second aspect, in a fifth possible implementation manner, extracting the gray scale region that meets the preset gray scale area threshold value as the second feature further includes identifying whether the gray scale region belongs to the same vehicle.
With reference to the fifth possible implementation manner of the second aspect, in a sixth possible implementation manner, identifying whether the gray areas belong to the same vehicle comprises: acquiring the center points of two gray areas, calculating the slope of the line connecting the two center points, judging whether the slope is smaller than a preset value, and, when the slope is smaller than the preset value, calculating whether the abscissa difference of the two center points satisfies the constraint formula:
D_x = |x_c2 − x_c1| ≤ T_x;
wherein D_x is the abscissa distance between the two center points, T_x is the preset judgment threshold, and x_c1, x_c2 are the abscissas of the two center points.
With reference to the sixth possible implementation manner of the second aspect, in a seventh possible implementation manner, when a difference between horizontal coordinates of two center points satisfies a constraint equation, the two center points belong to the same vehicle, that is, the two gray areas are the second feature pair.
With reference to the seventh possible implementation manner of the second aspect, in an eighth possible implementation manner, the determination of the judgment threshold includes obtaining a plurality of derived sample images matched with the first feature, obtaining the derived second feature points in the plurality of derived sample images and the abscissa distances between the plurality of derived second feature points, and taking the intermediate value of these abscissa distances as the judgment threshold.
In a third aspect, there is provided a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method of any one of the preceding claims when executing the computer program.
In a fourth aspect, a computer readable storage medium is provided, the computer readable storage medium storing a computer program which, when executed by a processor, implements the method of any of the above.
According to the above technical scheme, whether a target vehicle has turned on its high beam is determined by tracking the target vehicles across the plurality of image data, identifying the lamps of the target vehicles, and extracting the halation information of the lamps. The method combines a traditional algorithm with a deep learning algorithm, identifies the plurality of vehicles in the form of a video stream to improve the accuracy of vehicle identification, and identifies and tracks whether the high beam is turned on by treating the halation of the vehicle lamps as the key feature.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
The methods, systems, and/or programs in the accompanying drawings will be further described in terms of exemplary embodiments. These exemplary embodiments will be described in detail with reference to the drawings. They are non-limiting exemplary embodiments, in which like reference numbers represent like structures throughout the several views of the drawings.
Fig. 1 is a schematic structural diagram of a terminal device provided in an embodiment of the present application.
FIG. 2 is a flow chart of intelligent electronic police violation detection as shown in some embodiments of the present application.
Fig. 3 is a schematic diagram of a channel attention module according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a spatial attention module according to an embodiment of the present application.
Detailed Description
In order to better understand the technical solutions described above, the technical solutions of the present application are described in detail below through the accompanying drawings and specific embodiments. It should be understood that the specific features of the embodiments of the present application are detailed descriptions of the technical solutions of the present application and do not limit them, and the technical features of the embodiments of the present application may be combined with each other without conflict.
In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant teachings. However, it will be apparent to one skilled in the art that the present application may be practiced without these details. In other instances, well-known methods, procedures, systems, components, and/or circuits have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present application.
Flowcharts are used in this application to describe operations performed by systems according to embodiments of the present application. It should be clearly understood that the operations of the flowcharts need not be performed strictly in order; they may instead be performed in reverse order or concurrently. Additionally, one or more other operations may be added to the flowcharts, and one or more operations may be removed from them.
Before describing the embodiments of the present invention in further detail, the terms involved in the embodiments of the present invention are explained; these explanations apply to the terms as used in the following description.
(1) "In response to" is used to represent the condition or state on which a performed operation depends; when the condition or state on which it depends is satisfied, the one or more operations performed may be executed in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which multiple such operations are performed.
(2) "Based on" is used to represent the condition or state on which a performed operation depends; when the condition or state on which it depends is satisfied, the one or more operations performed may be executed in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which multiple such operations are performed.
(3) A convolutional neural network is a class of feedforward neural network that involves convolution computations and has a deep structure. Convolutional neural networks were inspired by the biological receptive field mechanism and are specialized for processing data with a grid-like structure, for example time-series data (which may be regarded as a one-dimensional grid formed by regular sampling on a time axis) and image data (which may be regarded as a two-dimensional grid of pixels). The convolutional neural network employed in this embodiment processes image data.
The main application scene of the technical scheme provided by the embodiments of the application is recognizing vehicles using high beams on roads at night. In this scene, many difficulties are faced in practice. Limited by current technology, the dynamic range of a camera is only 60-80 dB, whereas the variation range of actual natural light intensity is very large: from direct sunlight to starlight spans eight orders of magnitude, that is, a factor of one hundred million. Because the imaging dynamic range of the camera is far smaller than the dynamic range of actual light, it is difficult for a single camera to capture, against a bright background, images of the on/off state of the high beams of a vehicle driving at night. Moreover, the surrounding environment varies greatly during night driving; for example, on some road sections the street lamps are dense and the light is very bright, while on others the street lamps are sparse and the light is very dark. The appearance of the photographed vehicle high beams therefore differs considerably, making accurate recognition difficult.
Existing detection devices for headlamp beam detection mainly comprise projection-type detectors, screen-type detectors, condensing-type detectors and automatic-tracking-optical-axis-type detectors.
The screen-type and projection-type detectors share the same detection principle: the headlamp beam is projected onto a screen or projection surface, and the projection angle and divergence angle of the headlamp are then determined by calculating the center of the beam projection area, the light-dark edges of the projection area, and the angle of the line connecting them with the headlamp position. This approach relies on manual measurement and calculation, is time-consuming and inefficient, and is not suitable for automatic detection, although its detection cost is low. The condensing-type detector and the tracking-optical-axis detector use photoelectric sensors, including selenium photocells and silicon photocells, as the detection means, and automatically detect the projection angle and divergence angle of the headlamp by analyzing the current signals generated when the beam reaches the sensor array; however, this method can currently only detect headlamps with symmetrical beam patterns. Headlamp detection based on computer vision has also matured: it uses the CCD or CMOS of a camera as the photosensitive element, analyzes the images acquired by the camera, and uses machine vision and image processing algorithms to quantitatively measure the projection optical axis and divergence angle of headlamps with relatively complex beam patterns, so it can be applied to automatic detection, with accuracy and speed better than those of conventional headlamp detectors. However, this method has high requirements on the measurement environment and only detects the irradiation direction and intensity of the vehicle's high or low beam, so it can only detect whether a motor vehicle headlamp has been illegally modified and cannot detect whether a driver is illegally using the high beam.
Aiming at the background of the prior art, the embodiment provides the electronic equipment which can be used as an intelligent electronic police violation detection system to realize the recognition of the violation behaviors of traffic participants in a traffic environment and obtain the violation information.
In this embodiment, referring to fig. 1, an electronic device, i.e. an intelligent electronic police violation detection system 10 includes an image acquisition device 100, a violation information management terminal 200, and an edge server 300 in communication with the image acquisition device 100 and the violation information management terminal 200, where the edge server 300 is a core component for performing target recognition and anomaly recognition of a target based on a plurality of image information acquired by the image acquisition device 100, so as to obtain violation information.
In this embodiment, the image acquisition device 100 is a video device, so as to acquire video of the environment according to a specific acquisition time interval in a traffic environment, and extract key frames of the acquired video to obtain key image data. In this embodiment, the key frame is obtained by using the existing key frame extraction method, and no further description is given.
Referring to fig. 2, an intelligent electronic police violation detection method is provided, which includes the following steps:
step S210, acquiring real-time image data in a target range.
The acquisition of real-time image data is achieved by the image acquisition device, which in this embodiment is a video acquisition device that captures video images within the target range. Since the basic data identified in this embodiment is picture data, the video data needs to be converted into picture data; in this embodiment this is done by extracting key frames from the video data. The key frame extraction method is based on the prior art and is not described in detail in this embodiment. It should also be noted that the acquisition of video data is performed at a preset acquisition interval, where the acquisition interval is set based on experience, for example a shorter acquisition interval is used under heavy traffic.
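As a minimal sketch of this video-to-picture conversion (the sampling interval, the use of OpenCV and the function name are illustrative assumptions, not values fixed by this embodiment), key frames can be sampled from the acquired video at a preset interval:

import cv2

def extract_key_frames(video_path, interval_s=1.0):
    # Sample one frame every interval_s seconds; the sampled frames serve as the picture data.
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    step = max(int(fps * interval_s), 1)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames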
Step S220, respectively inputting the real-time image data into an image recognition device for first feature extraction to obtain a plurality of first feature graphs containing first features.
In this embodiment, for a plurality of traffic participants included in the real-time image data, the plurality of traffic participants need to be classified and identified in a specific processing procedure, so as to obtain an identification result, and the corresponding traffic participants needing to be abnormally identified are determined according to the identification result. Specifically, the traffic scene includes an object that needs to be subjected to violation identification and an object that does not need to be subjected to violation identification, and then the objects need to be classified and extracted before the violation identification is performed.
In this embodiment, the classification and recognition of traffic participants is determined by a segmentation model. The segmentation model includes a plurality of convolution layers, and a plurality of convolution blocks and residual blocks are configured in each convolution layer. The fused data is input to the first convolution layer. An attention processing module is further arranged between the two residual blocks of the two convolution layers: the input of the attention processing module is the output feature of the convolution block, the input of the convolution block is the feature from the residual block of the first convolution layer, and the output of the attention processing module is the attention feature. The attention feature is input to the residual block of the second convolution layer, the feature is extracted, and the extracted attention feature is input to the next convolution layer; this operation is repeated until the final convolution layer outputs the target feature, and the output target feature is processed through a first convolution set to obtain the first target feature map.
The second target feature map and the third target feature map are obtained by extracting the output features of the middle convolution layer and of the convolution layer following the middle convolution layer, and using them as inputs to a second convolution set and a third convolution set for processing.
A target detection frame is then identified, and a plurality of target data are obtained based on the target detection frame, wherein the target data are target images.
The feature extraction in this embodiment is performed by introducing an attention model, where the attention model is arranged in an existing YOLOv3 network and the YOLOv3 model is a Darknet-53 based network.
The Darknet-53 based YOLOv3 model is a convolutional neural network in which a plurality of convolution layers are arranged, each containing a plurality of convolution blocks and residual blocks. This structure is the YOLOv3 structure of the prior art. In this embodiment, an attention processing module is disposed between two residual blocks of two convolution layers, and the convolution feature is processed by the attention processing module to obtain a more characteristic feature. Because the convolution features include spatial information and channel information, in this embodiment the attention processing module attends to the meaningful features along these two major dimensions, the channel dimension and the spatial dimension, that is, it includes a channel attention module and a spatial attention module.
Referring to fig. 3, the channel attention module comprises two pooling layers, namely a maximum pooling layer and an average pooling layer, which each take the convolved feature as input. The maximum pooling layer and the average pooling layer are each connected to a first fully connected layer, a first activation layer, a second fully connected layer and a second activation layer; the two processed features are then summed and input to a third activation layer to obtain the channel attention feature. In this embodiment, the ReLU activation function is configured for the first and second activation layers, and the sigmoid activation function is configured for the third activation layer.
This process generates a channel attention map for the inter-channel associations. The features of each channel are extracted from a convolution kernel and the channel attention model will focus on channel features that are representative of the input features. The average pooling and the maximum pooling are adopted as the method for compressing the space dimension of the input features, and the information loss of the input features is reduced through two channel outputs. After pooling operation, two different spatial information features are generated, and a weight sharing multi-layer perceptron network is used for extracting the associated information among spatial information feature channels. And finally, adding the two features, and generating a one-dimensional channel attention map through a Sigmoid function.
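A minimal PyTorch sketch of the channel attention module as described above is given below; the reduction ratio and the use of 1×1 convolutions as the fully connected layers are assumptions made for illustration and are not specified in this embodiment.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Max pooling and average pooling branches share the FC-ReLU-FC-ReLU perceptron;
    # the two outputs are added and passed through a sigmoid (the third activation layer).
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),  # first fully connected layer
            nn.ReLU(inplace=True),                                      # first activation layer
            nn.Conv2d(channels // reduction, channels, 1, bias=False),  # second fully connected layer
            nn.ReLU(inplace=True),                                      # second activation layer
        )
        self.sigmoid = nn.Sigmoid()                                     # third activation layer

    def forward(self, x):
        att = self.mlp(self.max_pool(x)) + self.mlp(self.avg_pool(x))
        return self.sigmoid(att)   # one-dimensional channel attention map M_c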
Referring to fig. 4 for the spatial attention module, the maximum pooling layer and the average pooling layer are configured to receive the input features; the features obtained by the maximum pooling layer and the average pooling layer are combined and input to a convolution layer, and the convolved feature is input to an activation layer to obtain the spatial attention feature, wherein the sigmoid activation function is configured for the activation layer.
Through the two pooling operations, two different spatial information features are extracted, and the two features are combined to generate a new feature map. A spatial attention map of the same scale is then generated by a convolution; it records the attention weight of each spatial point of the input features.
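A corresponding sketch of the spatial attention module under the same caveats; here the pooling is interpreted as max and average pooling along the channel dimension, and the 7×7 convolution kernel size is an assumption.

import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    # Channel-wise max and average pooled maps are concatenated, convolved,
    # and passed through a sigmoid to obtain the two-dimensional spatial attention map M_s.
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        max_feat, _ = torch.max(x, dim=1, keepdim=True)
        avg_feat = torch.mean(x, dim=1, keepdim=True)
        att = self.conv(torch.cat([max_feat, avg_feat], dim=1))
        return self.sigmoid(att)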
The specific process comprises the following steps:
based on a certain convolution characteristic
Figure 592102DEST_PATH_IMAGE011
As input feature, in order through a one-dimensional channel attention map->
Figure 953945DEST_PATH_IMAGE012
And a two-dimensional spatial attention map +.>
Figure 838724DEST_PATH_IMAGE013
. The overall convolution attention processing module process can be seen as the following tensor process:
Figure 685851DEST_PATH_IMAGE014
Figure 415909DEST_PATH_IMAGE015
wherein->
Figure 366679DEST_PATH_IMAGE016
A weighting function for the channels of the input feature is used to assign a weight of interest to each channel of the input feature F.
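Assuming the two module sketches above, the whole attention processing module inserted between the residual blocks can be chained as follows:

import torch.nn as nn

class ConvAttentionBlock(nn.Module):
    # Applies the channel attention map and then the spatial attention map to a convolution feature F.
    def __init__(self, channels):
        super().__init__()
        self.channel_att = ChannelAttention(channels)
        self.spatial_att = SpatialAttention()

    def forward(self, f):
        f1 = self.channel_att(f) * f      # F' = M_c(F) ⊗ F
        f2 = self.spatial_att(f1) * f1    # F'' = M_s(F') ⊗ F'
        return f2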
For this model, the output features of the middle convolution layer and of the convolution layer following the middle convolution layer are extracted and input into a second convolution set and a third convolution set to obtain the second target feature map and the third target feature map.
The three target feature maps respectively correspond to different sizes, and specifically are as follows:
for the first feature map, since it does not incorporate some information of a relatively low dimension, it is suitable to predict some objects of a relatively large size, i.e. its anchor points will use three types of objects (116×90), (156×198), (373×326).
For the second feature map, it is suitable to predict some medium-sized targets since it fuses relatively low-dimensional information, i.e., the anchor point of the layer will use (30×61), (62×45), (59×119) three types of targets.
For the third feature map, since it fuses information of a relatively low dimension, more detailed information on the image is preserved, which is suitable for predicting some small-sized targets, i.e., targets of three types (10×13), (16×30), (33×23) are used by the anchors of the layer.
Because the feature map of each layer predicts three bounding boxes of different sizes, and each bounding box has x, y, w, h and a confidence score, 5 parameters in total, and each grid cell also needs to predict the class information, for an N×N feature matrix, that is, N×N grid cells, the tensor output by the network should be N×N×[3×(4+1+M)], where M is the number of types to be predicted, which can be configured according to the use scene.
The above process obtains the corresponding feature maps; the feature maps are then divided based on a target detection frame to obtain the target data, and in this embodiment prediction on the feature maps is performed through the target detection frame. Specifically, the three feature maps are predicted by (4+1+c)·k convolution kernels of size 1×1, where k is the number of preset bounding boxes (k is 3 by default) and c is the number of classes of the predicted targets; 4k parameters are responsible for predicting the offset of the target bounding box, k parameters are responsible for predicting the probability that the target bounding box contains a target, and ck parameters are responsible for predicting the probabilities of the k preset bounding boxes over the c target classes.
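The shape of the prediction head output can be illustrated with the following sketch; the number of input channels, the grid size and the number of classes M are illustrative assumptions.

import torch
import torch.nn as nn

num_classes = 2    # M, the number of types to be predicted (assumed for illustration)
num_anchors = 3    # k, the number of preset bounding boxes per grid cell

# 1x1 convolution kernels producing 3*(4+1+M) output channels per grid cell
pred_head = nn.Conv2d(256, num_anchors * (4 + 1 + num_classes), kernel_size=1)
feature_map = torch.randn(1, 256, 13, 13)   # an N x N feature matrix with N = 13
out = pred_head(feature_map)                # shape: [1, 3*(4+1+M), N, N]
print(out.shape)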
In the bounding box prediction, (c_x, c_y) are the center coordinates of the preset bounding box on the feature map, (p_w, p_h) are the width and height of the preset bounding box on the feature map, (t_x, t_y) and (t_w, t_h) are respectively the bounding box center offsets and the width-height scaling ratios predicted by the network, and (b_x, b_y, b_w, b_h) is the final predicted target bounding box. The conversion from the preset bounding box to the final predicted bounding box is as follows:
b_x = σ(t_x) + c_x
b_y = σ(t_y) + c_y
b_w = p_w · e^(t_w)
b_h = p_h · e^(t_h)
wherein σ(·) is the sigmoid function, used to scale the prediction offset to between 0 and 1.
Through the processing, the corresponding target detection frame can be generated, and the identification of the corresponding target is realized through the target detection frame, namely, the segmentation of the corresponding target from the original target feature map is realized, and a plurality of target data are obtained.
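A minimal sketch of the conversion from a preset bounding box to the final predicted bounding box described above (function and variable names are chosen for illustration only):

import math

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    # (tx, ty, tw, th): network prediction; (cx, cy, pw, ph): preset bounding box on the feature map.
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    bx = sigmoid(tx) + cx     # center offset scaled to (0, 1)
    by = sigmoid(ty) + cy
    bw = pw * math.exp(tw)    # width scaling of the preset bounding box
    bh = ph * math.exp(th)    # height scaling of the preset bounding box
    return bx, by, bw, bh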
Step S230, comparing a plurality of first features in the plurality of first feature maps, and integrating the first feature maps corresponding to the same first features into a real-time image stream.
In this embodiment, this step mainly compares the first features and sorts the first feature maps corresponding to the same first feature into a real-time image stream in time order; this real-time image stream is used for the subsequent identification of second-feature anomalies.
Step S240, extracting second features in the feature images, comparing the second features, judging the change rates of every two adjacent second features by taking time nodes of the first feature images as judging nodes, and determining target second features based on a preset change rate threshold.
In this embodiment, for extracting the second features in the plurality of first feature graphs, the first feature gray scale map is obtained mainly by performing binarization processing on the first feature graphs, and the gray scale region meeting the preset gray scale area threshold is extracted as the second feature.
The binarization of the first feature map may use an existing binarization method and is not described in detail in this embodiment. The obtained first feature gray map contains different gray areas; because the brightness of the vehicle lamps and the brightness of the environment differ greatly in the binarized image, the lamp light areas are easily obtained from the binarized image.
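A minimal OpenCV sketch of this step is given below; the binarization threshold and the area bounds standing in for the preset gray area threshold are assumed values, not values fixed by this embodiment.

import cv2

def extract_lamp_regions(first_feature_image, threshold=200, min_area=50, max_area=5000):
    # Binarize the first feature image and keep the connected gray areas whose area
    # lies within the (assumed) preset gray area threshold; these areas are the second features.
    gray = cv2.cvtColor(first_feature_image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    regions = []
    for i in range(1, num):   # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if min_area <= area <= max_area:
            regions.append({"area": int(area), "center": tuple(centroids[i])})
    return regions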
However, since a plurality of vehicles conforming to the first feature may be involved in the first feature map, that is, a plurality of vehicles may exist in the same image, when vehicles are too close to each other it is necessary to determine whether two lamps belong to the same vehicle. In this embodiment, the process therefore further includes identifying whether the gray areas belong to the same vehicle, which specifically comprises the following processing procedure:
acquiring the center points of two gray areas, calculating the slope of the line connecting the two center points, judging whether the slope is smaller than a preset value, and, when the slope is smaller than the preset value, calculating whether the abscissa difference of the two center points satisfies the constraint formula:
D_x = |x_c2 − x_c1| ≤ T_x;
wherein D_x is the abscissa distance between the two center points, T_x is the preset judgment threshold, and x_c1, x_c2 are the abscissas of the two center points.
When the abscissa difference of the two center points satisfies the constraint formula, the two center points belong to the same vehicle, that is, the two gray areas form a second feature pair belonging to the same vehicle.
Specifically, in this embodiment, the determination of the judgment threshold includes obtaining a plurality of derived sample images matched with the first feature, obtaining the derived second feature points in the plurality of derived sample images and the abscissa distances between the plurality of derived second feature points, and taking the intermediate value of these abscissa distances as the threshold. For the actual scene, the judgment threshold obtained by the above processing is 125.
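A sketch of the pairing judgment under the constraint above; the slope limit is an assumed preset value, while T_x = 125 is the judgment threshold obtained in this embodiment.

def is_same_vehicle(center_a, center_b, slope_limit=0.1, t_x=125):
    # center_a, center_b: (x, y) center points of two gray areas.
    (x1, y1), (x2, y2) = center_a, center_b
    if x1 == x2:
        return False                       # slope undefined for a vertical line
    slope = abs((y2 - y1) / (x2 - x1))
    if slope >= slope_limit:               # slope must be smaller than the preset value
        return False
    return abs(x2 - x1) <= t_x             # D_x = |x_c2 - x_c1| <= T_x: a second feature pair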
Step S250, based on the abnormality recognition device, extracting the abnormal characteristics in the target second characteristics, and taking the abnormal characteristics as abnormality information.
After the processing in step S240, the abnormality recognition device performs abnormality identification: it traverses all paired lamps obtained in step S240 in the current image and generates the corresponding lamp gray-scale curves. For each of the two lamp gray-scale curves it obtains the lowest point, searches for the point whose gray value is 20 greater than that of the lowest point, and reads the corresponding abscissa values of the two lamps; the average of the two abscissas gives the halation range of the vehicle at that position.
For a vehicle in motion, whether the high beam is turned on cannot be accurately judged from the halation range alone. Therefore, in this embodiment a halation change index is introduced, and a comprehensive judgment is made from the two indexes of halation range and halation change index together.
The method comprises the following steps: when the vehicle is far away from the image acquisition device, namely when the longitudinal coordinate value of the position of the target vehicle is smaller, the halation of the high beam is larger than that of the low beam, and the halation range of the high beam and the low beam is close to the edge of the vehicle lamp in the near position, so that the change rate of the high beam in the process of being far and near in the image is larger, and the numerical distribution of the high beam is more discrete.
In probability statistics, standard deviation may measure the degree of dispersion of individuals within a group. The larger the standard deviation value of one group of data is, the poor stability of the data in the group is indicated, the wide data change range is achieved, and the change rate is high. Otherwise, if the standard deviation is small, the data in the group is stable, the data change range is small, and the change rate is small. In this embodiment, the size of the halation index is expressed by the size of the standard deviation of all halation range values in the vehicle image stream, wherein the calculation process for the halation index is:
σ_GY = sqrt( (1/n) · Σ_{i=1}^{n} ( G_GYi − μ_GY )² )
wherein σ_GY is the halation change index, G_GYi is the halo range value of the i-th frame, and μ_GY is the mean of the halo range values.
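A minimal sketch of the halation change index as the standard deviation of the halo range values (the population form of the standard deviation is assumed):

import math

def halation_change_index(halo_ranges):
    # halo_ranges: the halo range values G_GYi over the n frames of the vehicle image stream.
    n = len(halo_ranges)
    mean = sum(halo_ranges) / n
    return math.sqrt(sum((g - mean) ** 2 for g in halo_ranges) / n)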
In this embodiment, in order to further improve accuracy of the determination of the high beam, the halation range and the halation change index are fused to obtain the light intensity index, and whether to turn on the high beam is determined based on the light intensity index.
The light intensity index G_Q is obtained by fusing the halation change index σ_GY with the halo range values G_GYi, where n is the number of second features in the plurality of feature images and i in G_GYi is the frame number of the second feature in the feature map. The halo range is acquired based on the abscissa of the second feature, namely the lamp, in the feature image, wherein the abscissa of the second feature is determined based on the frame number of the second feature in the feature image.
The halo range itself is determined from the two abscissa values of the second feature in the current frame, and these abscissa values are in turn determined based on the frame number of the second feature in the feature map; the detailed description is omitted in this embodiment.
A light intensity threshold is set and the light intensity index is compared with it: when the light intensity index is larger than the light intensity threshold, the lamp is judged to be a high beam, and when the light intensity index is smaller than the light intensity threshold, it is judged to be a low beam.
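The final comparison can be sketched as follows; the computation of the light intensity index G_Q from the halation change index and the halo range values is not reproduced here, so the sketch takes the already computed index as input, and the threshold value is scene-dependent and assumed.

def classify_beam(light_intensity_index, light_intensity_threshold):
    # Larger than the threshold: high beam; smaller: low beam.
    return "high beam" if light_intensity_index > light_intensity_threshold else "low beam"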
It is to be understood that terminology not explicitly explained in the foregoing description is not intended to be limiting, as those skilled in the art can deduce its meaning unambiguously from the foregoing disclosure.
The person skilled in the art can undoubtedly determine from the above disclosure the technical features/terms carrying labels such as preset, reference, predetermined, set and preferred, for example threshold values, threshold intervals and threshold ranges. For technical feature terms that are not explained, the person skilled in the art can reasonably and unambiguously derive them from the logical relationships of the context, so that the technical scheme can be implemented clearly and completely. Prefixes of unexplained technical feature terms, such as "first", "second", "example" and "target", and suffixes such as "set" and "list", can likewise be unambiguously deduced from the context.
The foregoing disclosure of the embodiments of the present application will thus be clear and complete to those skilled in the art. It should be appreciated that the process by which those skilled in the art derive and analyze technical terms not explained above is based on what is described in the present application, and therefore the above is not an inventive judgment of the overall scheme.
While the basic concepts have been described above, it will be apparent to those skilled in the art that the foregoing detailed disclosure is by way of example only and is not intended to be limiting. Although not explicitly stated herein, various modifications, improvements and adaptations may occur to those skilled in the art; such modifications, improvements and adaptations are suggested by this application and are therefore within the spirit and scope of the exemplary embodiments of this application.
Meanwhile, the present application uses specific terminology to describe embodiments of the present application. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic is associated with at least one embodiment of the present application. Thus, it should be emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, certain features, structures, or characteristics of at least one embodiment of the present application may be combined as suitable.
In addition, those of ordinary skill in the art will understand that the various aspects of the present application may be illustrated and described in terms of several patentable categories or cases, including any novel and useful process, machine, product, or combination of materials, or any novel and useful improvement thereto. Accordingly, aspects of the present application may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.) or by a combination of hardware and software. The above hardware or software may be referred to as a "unit", "component", or "system". Furthermore, aspects of the present application may be embodied as a computer product in at least one computer-readable medium, the product comprising computer-readable program code.
The computer readable signal medium may comprise a propagated data signal with computer program code embodied therein, for example, on a baseband or as part of a carrier wave. The propagated signal may take on a variety of forms, including electro-magnetic, optical, etc., or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer readable signal medium may be propagated through any suitable medium including radio, electrical, fiber optic, RF, or the like, or any combination of the foregoing.
Computer program code required for the execution of aspects of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET and Python, conventional procedural programming languages such as the "C" programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP and ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service such as software as a service (SaaS).
Furthermore, the order in which the processing elements and sequences are described, the use of numerical letters, or other designations are used is not intended to limit the order in which the processes and methods of the present application are performed, unless specifically indicated in the claims. While in the foregoing disclosure there has been discussed, by way of various examples, some embodiments of the invention which are presently considered to be useful, it is to be understood that this detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments of this application. For example, while the system components described above may be implemented by hardware devices, they may also be implemented solely by software solutions, such as installing the described system on an existing server or mobile device.
It should also be appreciated that in the foregoing description of the embodiments of the present application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of at least one of the embodiments. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are recited in the claims; indeed, claimed subject matter may lie in fewer than all features of a single embodiment disclosed above.

Claims (9)

1. The intelligent electronic police violation detection method is characterized by comprising the following steps of:
acquiring a plurality of real-time image data in a plurality of target ranges;
respectively inputting the real-time image data into an image recognition device to perform first feature extraction to obtain a plurality of first feature graphs containing first features;
comparing a plurality of first features in the plurality of first feature maps, and integrating the first feature maps corresponding to the same first features into a real-time image stream;
extracting second features in the plurality of feature images, comparing the plurality of second features, judging the change rate of every two adjacent second features by taking time nodes of the plurality of first feature images as judging nodes, and determining target second features based on a preset change rate threshold;
extracting an abnormal feature in the target second feature based on an abnormal recognition device, and taking the abnormal feature as abnormal information, wherein the method specifically comprises the following steps:
acquiring a halation range in the second feature, and acquiring a halation change index based on the halation range, wherein the halation change index is as follows:
σ_GY = sqrt( (1/n) · Σ_{i=1}^{n} ( G_GYi − μ_GY )² )
obtaining a light intensity index based on the halation change index and the halation range value;
wherein G_Q is the light intensity index;
wherein σ_GY is the halation change index, G_GYi is the halo range value, μ_GY is the mean of the halo range values, n is the number of second features in the plurality of feature images, and i in G_GYi is the frame number of the second feature in the feature map;
and comparing the light intensity index with a preset light intensity threshold value to obtain a comparison result.
2. The intelligent electronic police violation detection method according to claim 1, wherein a neural network model is configured in the image recognition device, the neural network model comprises a plurality of convolution layers, a plurality of convolution blocks and residual blocks are configured in each convolution layer, an attention processing module is further arranged between the two residual blocks, the input of the attention processing module is the output characteristic of the convolution blocks, the input of the convolution blocks is the characteristic of the residual blocks of a first convolution layer, the output of the attention processing module is the attention characteristic, the attention characteristic is input into the residual blocks of a second convolution layer, the characteristic is extracted, the extracted attention characteristic is input into a next convolution layer, the circulation operation is performed until the final convolution layer outputs the target characteristic, and the output target characteristic is processed through the first convolution set to obtain a first characteristic map.
3. The intelligent electronic police violation detection method according to claim 2, wherein the output features of the middle convolution layer and of the convolution layer following the middle convolution layer are extracted and used as inputs to a second convolution set and a third convolution set, which are processed to obtain a second feature map and a third feature map.
4. The intelligent electronic police violation detection method according to claim 3, further comprising a plurality of prediction networks corresponding to the first feature map, the second feature map and the third feature map, wherein the plurality of prediction networks are provided with convolution kernels corresponding to the number of preset identification types, the preset detection frames are rectified through convolution kernel processing to obtain target detection frames, target data are obtained based on the target detection frames, the target data are first features, and image data containing the first features are the first feature maps.
5. The intelligent electronic police violation detection method of claim 4, wherein extracting second features in a plurality of the first feature maps comprises: and carrying out binarization processing on the first characteristic image to obtain a first characteristic gray image, and extracting a gray area which accords with a preset gray area threshold value as a second characteristic.
6. The intelligent electronic police violation detection method of claim 5, wherein extracting the gray area meeting a preset gray area threshold as a second feature further comprises identifying whether the gray areas belong to the same vehicle.
7. The intelligent electronic police violation detection method of claim 6, wherein identifying whether the gray scale areas belong to the same vehicle comprises:
acquiring center points of two gray areas, calculating the slope of a connecting line of the two center points, judging whether the slope is smaller than a preset value, and calculating whether the difference of the horizontal coordinates of the two center points meets a constraint formula when the slope is smaller than the preset value, wherein the constraint formula is as follows:
D_x = |x_c2 − x_c1| ≤ T_x; wherein D_x is the abscissa distance between the two center points, T_x is the preset judgment threshold, and x_c1, x_c2 are the abscissas of the two center points.
8. The intelligent electronic police violation detection method of claim 7, wherein when a horizontal coordinate difference of two center points satisfies a constraint formula, the two center points belong to the same vehicle, namely, the two gray areas are the second feature pairs.
9. The intelligent electronic police violation detection method of claim 7, wherein the determination of the judgment threshold includes obtaining a plurality of derived sample images matched with the first feature, obtaining the derived second feature points in the plurality of derived sample images and the abscissa distances among the plurality of derived second feature points, and taking the intermediate value of these abscissa distances as the judgment threshold.
CN202310028546.7A 2023-01-09 2023-01-09 Intelligent electronic police violation detection system and method Active CN115762178B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310028546.7A CN115762178B (en) 2023-01-09 2023-01-09 Intelligent electronic police violation detection system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310028546.7A CN115762178B (en) 2023-01-09 2023-01-09 Intelligent electronic police violation detection system and method

Publications (2)

Publication Number Publication Date
CN115762178A CN115762178A (en) 2023-03-07
CN115762178B true CN115762178B (en) 2023-04-25

Family

ID=85348729

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310028546.7A Active CN115762178B (en) 2023-01-09 2023-01-09 Intelligent electronic police violation detection system and method

Country Status (1)

Country Link
CN (1) CN115762178B (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875458B (en) * 2017-05-15 2020-11-06 杭州海康威视数字技术股份有限公司 Method and device for detecting turning-on of high beam of vehicle, electronic equipment and camera
CN108229447B (en) * 2018-02-11 2021-06-11 陕西联森电子科技有限公司 High beam light detection method based on video stream
CN108538060A (en) * 2018-04-13 2018-09-14 上海工程技术大学 A kind of intelligence based on vehicle-mounted back vision camera is broken rules and regulations monitoring method and system
CN111310738A (en) * 2020-03-31 2020-06-19 青岛讯极科技有限公司 High beam vehicle snapshot method based on deep learning
CN111783573B (en) * 2020-06-17 2023-08-25 杭州海康威视数字技术股份有限公司 High beam detection method, device and equipment
CN114882451A (en) * 2022-05-12 2022-08-09 浙江大华技术股份有限公司 Image processing method, device, equipment and medium

Also Published As

Publication number Publication date
CN115762178A (en) 2023-03-07

Similar Documents

Publication Publication Date Title
CN112233097B (en) Road scene other vehicle detection system and method based on space-time domain multi-dimensional fusion
US11967052B2 (en) Systems and methods for image processing
CN113034378B (en) Method for distinguishing electric automobile from fuel automobile
Tao et al. Smoky vehicle detection based on range filtering on three orthogonal planes and motion orientation histogram
CN116258940A (en) Small target detection method for multi-scale features and self-adaptive weights
Wang Vehicle image detection method using deep learning in UAV video
CN112949578B (en) Vehicle lamp state identification method, device, equipment and storage medium
Lam et al. Real-time traffic status detection from on-line images using generic object detection system with deep learning
KR101705061B1 (en) Extracting License Plate for Optical Character Recognition of Vehicle License Plate
Ma et al. Deconvolution Feature Fusion for traffic signs detection in 5G driven unmanned vehicle
CN114359196A (en) Fog detection method and system
CN111898427A (en) Multispectral pedestrian detection method based on feature fusion deep neural network
Zhang et al. A front vehicle detection algorithm for intelligent vehicle based on improved gabor filter and SVM
CN115762178B (en) Intelligent electronic police violation detection system and method
CN114550220B (en) Training method of pedestrian re-recognition model and pedestrian re-recognition method
CN109034171B (en) Method and device for detecting unlicensed vehicles in video stream
CN116597411A (en) Method and system for identifying traffic sign by unmanned vehicle in extreme weather
Lashkov et al. Edge-computing-facilitated nighttime vehicle detection investigations with CLAHE-enhanced images
Haryono et al. Accuracy in Object Detection Based on Image Processing at the Implementation of Motorbike Parking on the Street
CN115331162A (en) Cross-scale infrared pedestrian detection method, system, medium, equipment and terminal
CN115393743A (en) Vehicle detection method based on double-branch encoding and decoding network, unmanned aerial vehicle and medium
Vikruthi et al. A Novel Framework for Vehicle Detection and Classification Using Enhanced YOLO-v7 and GBM to Prioritize Emergency Vehicle
CN116030542B (en) Unmanned charge management method for parking in road
Luo et al. Object detection based on binocular vision with convolutional neural network
KR102574540B1 (en) Method and system for extracting privacy area in image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant