CN113255580A - Method and device for identifying sprinkled objects and vehicle sprinkling and leaking - Google Patents

Method and device for identifying sprinkled objects and vehicle sprinkling and leaking Download PDF

Info

Publication number
CN113255580A
CN113255580A (application number CN202110675973.5A)
Authority
CN
China
Prior art keywords
projectile
vehicle
identifying
acquiring
video frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110675973.5A
Other languages
Chinese (zh)
Inventor
李圣权
葛俊
毛云青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CCI China Co Ltd
Original Assignee
CCI China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CCI China Co Ltd filed Critical CCI China Co Ltd
Priority to CN202110675973.5A priority Critical patent/CN113255580A/en
Publication of CN113255580A publication Critical patent/CN113255580A/en
Pending legal-status Critical Current

Classifications

    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/54: Surveillance or monitoring of activities of traffic, e.g. cars on the road, trains or boats
    • G06F 18/23213: Non-hierarchical clustering using statistics or function optimisation, with a fixed number of clusters, e.g. K-means clustering
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/045: Neural network architectures; combinations of networks
    • G06N 3/048: Activation functions
    • G06N 3/08: Learning methods
    • G06V 10/28: Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V 10/30: Noise filtering
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners; connectivity analysis, e.g. of connected components

Abstract

The application provides a method and a device for identifying projectiles and for identifying vehicle spills and drips. The vehicle spill and drip identification method comprises the following steps: processing a video frame sequence with the projectile identification method to obtain a projectile; extracting a vehicle from a video frame, wherein the video frame is obtained from the video frame sequence; identifying the direction of motion of the vehicle in the video frame and setting a threshold based on that direction; and acquiring the time at which the positions of the projectile and the vehicle are closest, calculating the distance between them at that time, and, if the distance is smaller than the threshold, judging the vehicle to be an involved vehicle. The method and the device obtain the projectile through multiple rounds of screening of the video frame sequence, improving the accuracy of projectile identification, and filter out projectiles at unreasonable positions based on the vehicle's direction of motion, yielding a correct matching between vehicle and projectile.

Description

Method and device for identifying sprinkled objects and vehicle sprinkling and leaking
Technical Field
The application relates to the technical fields of image processing and machine learning, and in particular to a method and a device for identifying projectiles and for identifying vehicle spills and drips.
Background
Urban development depends on infrastructure construction. With the rapid economic and social development of China, government requirements on urban management have grown, and the crude transport practices of engineering trucks have become an urgent problem for city management. Spillage and leakage are the most frequent incidents during engineering-truck transport: non-standard loading of muck and slurry causes accidental spills en route, leaving urban roads dirty and raising dust, while large spilled objects can easily cause traffic accidents and endanger the lives and property of road users. Spills and drips therefore pose a significant challenge to traffic management and city managers.
Two strategies are currently common for addressing spills and drips: manual inspection and intelligent monitoring. Manual inspection requires experienced city managers to check, along the muck-transport routes, the road sections where spills frequently occur; as urbanization progresses, the number of sections to inspect grows and the inspection work becomes harder. Intelligent monitoring analyses traffic-flow data with an intelligent detection algorithm and raises timely alarms on suspected spills, a strategy that can greatly improve the efficiency of finding spilled objects.
Intelligent detection algorithms benefit from advances in image processing and machine learning and form an important direction of current computer-technology research: images are modelled with the relevant techniques, and the extracted features are used to solve problems in production and daily life efficiently. Moving-target detection is a common machine learning task in which objects whose spatial position changes in a video or image sequence are extracted and identified as foreground, so that foreground and background can be separated efficiently; it is widely applied in intelligent transportation and related fields.
Although intelligent detection algorithms have developed rapidly, the practical design and use of spill and drip identification for engineering trucks still faces many problems, mainly in the following three respects:
(1) traffic scenes are complex, with illumination changes, weather changes, camera shake and other disturbances; the resulting noise makes false alarms in projectile identification frequent and directly degrades practical performance, so reducing false alarms in spill and drip identification remains an open problem;
(2) spills and drips are in most cases small targets whose data are hard to collect; without sufficient data the advantages of deep learning cannot be exploited, and the muck spilled by engineering trucks closely resembles its surroundings, so improving the accuracy of projectile identification remains a major challenge;
(3) in traditional moving-target detection algorithms, an extracted projectile persists in the foreground only briefly, which hampers recognition; extending the projectile's retention time in the foreground is key to improving the accuracy of projectile identification.
Disclosure of Invention
The embodiments of the application provide a method and a device for identifying projectiles and for identifying vehicle spills and drips. The projectile is obtained through multiple rounds of screening of a video frame sequence, improving the accuracy of projectile identification, and projectiles at unreasonable positions are filtered out based on the vehicle's direction of motion, yielding a correct matching between vehicle and projectile.
In a first aspect, an embodiment of the application provides a projectile identification method comprising the following steps: acquiring a video frame sequence and extracting moving targets from it; processing the moving targets to obtain suspected projectiles; acquiring projectile sample data and clustering its pixels to obtain a clustering result; and acquiring the pixel values of each suspected projectile and identifying those suspected projectiles whose pixel values fall within the clustering result, so as to obtain the projectile.
This embodiment extracts suspected projectiles from the image, then coarsely divides and finely locates them through pixel clustering and a recognition model to obtain the projectile, and ensures the accuracy of projectile identification through this multiple screening.
In one embodiment, "extracting a moving target from the video frame sequence" includes: inputting the video frame sequence into a Gaussian mixture model and calculating, for each pixel, its deviation from the means of the Gaussian distributions corresponding to that pixel; comparing this value with an initial background threshold and, if it is smaller, preliminarily dividing the foreground and background of the moving target; if it is larger, setting a background change rate to update the initial background threshold, comparing the value with the updated threshold and, if it is smaller than the updated threshold, dividing the foreground and background of the moving target again.
In one embodiment, "processing the moving target to obtain suspected projectiles" includes: converting the image frame sequence containing the extracted moving targets to grayscale and binarizing the grayscale images to obtain binary images of the frame sequence; performing a morphological dilation on each binary image, running connected-domain detection on the dilated image and filling the detected connected domains to obtain an image with the moving target enhanced; and denoising the enhanced image and extracting the suspected projectiles.
In this embodiment, morphological processing enhances and denoises the moving target, highlighting the details of useful information and reducing interference in the image, thereby improving the efficiency of suspected-projectile identification.
In one embodiment, the clustering method is the K-means algorithm, and "clustering pixels of the projectile sample data" includes: acquiring the types of projectiles in the projectile sample data; and clustering, from the projectile sample data, the pixel-value distribution range of the projectiles of each type, expressed as [pix_min^i, pix_max^i], i ∈ N, pix ∈ [0, 255], where pix_min^i is the minimum of the pixel-value range, pix_max^i is the maximum of the pixel-value range, and N is the number of projectile pixel-value ranges obtained by clustering.
In this embodiment, the pixel-value distributions of N projectile classes are clustered with the K-means algorithm; the obtained suspected projectiles are pre-filtered with the clustered distributions, and suspected projectiles whose pixel ranges fall outside the distributions are discarded.
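A minimal sketch of the pixel-range clustering and filtering described in this embodiment, assuming one-dimensional K-means over grayscale intensities (the patent does not fix these implementation details, so initialization, iteration count and the mean-pixel filtering criterion are assumptions):

```python
import numpy as np

def kmeans_1d(values, k, iters=50, seed=0):
    """Plain 1-D K-means over pixel intensities; an illustrative stand-in
    for the K-means clustering of projectile sample pixels."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for i in range(k):
            if np.any(labels == i):
                centers[i] = values[labels == i].mean()
    return labels, centers

def cluster_pixel_ranges(sample_pixels, k):
    """Return the [pix_min, pix_max] range of each non-empty cluster."""
    labels, _ = kmeans_1d(sample_pixels, k)
    return [(int(sample_pixels[labels == i].min()),
             int(sample_pixels[labels == i].max()))
            for i in range(k) if np.any(labels == i)]

def in_any_range(mean_pixel, ranges):
    """Keep a suspected projectile only if its mean pixel value falls
    inside one of the clustered ranges."""
    return any(lo <= mean_pixel <= hi for lo, hi in ranges)
```

With, say, dark muck samples around 30 to 60 and bright gravel around 180 to 220, `cluster_pixel_ranges(pixels, 2)` yields the two intervals, and `in_any_range` discards suspected projectiles whose pixel values lie in neither.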
In one embodiment, "identifying the suspected projectiles whose pixel values fall within the clustering result to obtain the projectile" includes: inputting those suspected projectiles into a projectile recognition model comprising a feature enhancement network and a prediction network; in the feature enhancement network, one branch enhances the input features with a 1x1 convolution, a 3x3 depth-wise convolution and another 1x1 convolution, while the other branch uses a global pooling layer, a 1x1 convolution and a sigmoid gating unit to re-weight the input features channel-wise into new features; the outputs of the two branches are added element-wise to produce the target features; and the target features are input into the prediction network to predict the position of the projectile and thereby identify it.
In this embodiment, the projectile recognition model strengthens feature selection and raises the attention paid to important features, increasing the detection precision for projectiles by improving the model's recognition capability.
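The two-branch feature enhancement described above can be sketched as follows. This is an illustrative NumPy rendering, not the patent's implementation; it assumes that "weighting the input features in depth" means channel-wise sigmoid gating, and uses random weight tensors as stand-ins for learned parameters:

```python
import numpy as np

def conv1x1(x, w):
    # x: (C_in, H, W), w: (C_out, C_in); a 1x1 convolution is a
    # per-pixel linear map over channels
    return np.einsum('oc,chw->ohw', w, x)

def depthwise3x3(x, w):
    # x: (C, H, W), w: (C, 3, 3); each channel is convolved with its
    # own 3x3 kernel (zero padding keeps H and W unchanged)
    C, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += w[:, dy, dx, None, None] * xp[:, dy:dy + H, dx:dx + W]
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feature_enhance(x, w1, wd, w2, wg):
    """Two-branch feature enhancement as described in the embodiment:
    branch A: 1x1 conv -> 3x3 depth-wise conv -> 1x1 conv;
    branch B: global average pooling -> 1x1 conv -> sigmoid gate,
    re-weighting the input features channel-wise;
    the two branch outputs are then added element-wise."""
    a = conv1x1(depthwise3x3(conv1x1(x, w1), wd), w2)
    gate = sigmoid(wg @ x.mean(axis=(1, 2)))   # (C,) channel weights
    b = gate[:, None, None] * x                # depth-weighted features
    return a + b
```

All weight shapes are chosen so that the two branch outputs share the input's (C, H, W) shape, which the element-wise addition requires.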
In a second aspect, the application provides a vehicle spill and drip identification method comprising the following steps: extracting the projectile and the vehicle from a video frame, wherein the video frame is obtained from a video frame sequence; identifying the direction of motion of the vehicle in the video frame and setting a threshold based on that direction; and acquiring the time at which the positions of the projectile and the vehicle are closest, calculating the distance between them at that time, and, if the distance is smaller than the threshold, judging the vehicle to be an involved vehicle.
In this embodiment, the vehicle and the projectile are associated and matched based on the vehicle's direction of motion, and vehicle-projectile pairs whose distance satisfies the threshold are output. Projectiles at unreasonable positions are screened out through this association matching, an accurate matching result is finally obtained, and the efficiency of judging vehicles involved in spills and drips is improved.
In one embodiment, "extracting vehicles in a video frame" includes: inputting the video frame containing the projectile into a vehicle detection model and obtaining the vehicle bounding box output by the model, so as to identify the vehicle.
In one embodiment, "obtaining the time at which the positions of the projectile and the vehicle are closest, and calculating the distance between them at that time" comprises: acquiring the projectile's bounding box and the vehicle's bounding box; tracking the positional change between the two bounding boxes across the video frame sequence and acquiring the time at which they are closest; and calculating the distance between them at that time, for example the Euclidean distance.
In this embodiment, the relation between the projectile and the vehicle is measured by the Euclidean distance between the projectile's bounding box and the vehicle's bounding box, which has the beneficial effect of quantifying their degree of association.
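A minimal sketch of the Euclidean distance measure between the projectile and vehicle bounding boxes (the corner-coordinate box format used here is an assumption; the patent only specifies the Euclidean distance):

```python
import math

def box_center(box):
    """box = (x1, y1, x2, y2), an assumed corner format for both the
    projectile and vehicle bounding boxes."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def box_distance(projectile_box, vehicle_box):
    """Euclidean distance between the two box centers, quantifying how
    strongly a projectile is associated with a vehicle."""
    (px, py), (vx, vy) = box_center(projectile_box), box_center(vehicle_box)
    return math.hypot(px - vx, py - vy)
```

Comparing `box_distance(...)` against the direction-dependent threshold then decides whether the vehicle is judged to be involved.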
In one embodiment, the method comprises: acquiring the distance between the vehicle sample and the projectile sample in each frame of the projectile sample data; the threshold is then related to the distance between the vehicle sample and the projectile sample in each direction of motion.
In this embodiment, the threshold filters out projectiles that, compared with the projectile sample data, lie at unreasonable positions relative to the vehicle's direction of motion. Because the threshold is derived from the vehicle-projectile distances in the sample data, the subjectivity and arbitrariness of a manually set threshold are avoided.
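One plausible sketch of deriving per-direction thresholds from the sample distances. The exact statistic is not specified in the text; taking the per-direction maximum observed distance (with an optional margin factor) is an assumption made for illustration:

```python
from collections import defaultdict

def direction_thresholds(samples, margin=1.0):
    """samples: iterable of (direction, distance) pairs measured between
    vehicle and projectile samples in the labelled data. For each motion
    direction, the largest observed vehicle-projectile distance, scaled
    by a margin, is used as the filtering threshold. The choice of the
    maximum is an assumption, not taken from the patent."""
    by_dir = defaultdict(list)
    for direction, dist in samples:
        by_dir[direction].append(dist)
    return {d: margin * max(v) for d, v in by_dir.items()}
```

A projectile whose distance to the vehicle exceeds the threshold for that vehicle's direction of motion would then be treated as being at an unreasonable position.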
In one embodiment, the method further comprises: and saving the video frame of the involved vehicle at the moment.
In this embodiment, the video frames are saved to facilitate subsequent management of the involved vehicles.
In one embodiment, prior to "extracting the projectile and the vehicle in a video frame", the method comprises: video frames of a vehicle and a projectile occurring simultaneously are searched for in the sequence of video frames.
In the embodiment, the video frame sequence is preprocessed, the video frames with the vehicles and the sprinklers appearing at the same time are found out, and only the video frames with the vehicles and the sprinklers appearing at the same time need to be processed in the subsequent extraction step, so that the processing efficiency is improved.
In one embodiment, the direction of motion of the vehicle in the video frame is determined based on the trajectory of the vehicle in the sequence of video frames.
In the embodiment, the motion trail can reflect the actual motion direction of the vehicle, and the recognition error of the motion direction caused by acquiring the vehicle direction in the single-frame image can be avoided.
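A minimal sketch of taking the motion direction from the trajectory rather than a single frame, as this embodiment suggests; the angle convention (degrees, 0 along +x) and the use of overall displacement are assumptions for illustration:

```python
import math

def motion_direction(track):
    """track: list of (x, y) vehicle centers over consecutive frames.
    The direction is taken from the overall displacement of the
    trajectory, avoiding the single-frame direction errors the
    embodiment warns about. Returns the angle in degrees."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    return math.degrees(math.atan2(y1 - y0, x1 - x0))
```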
In a third aspect, an embodiment of the application provides a projectile identification device, comprising: a moving-target extraction module for acquiring a video frame sequence and extracting moving targets from it; a processing module for processing the moving targets to obtain suspected projectiles; a clustering module for acquiring projectile sample data and clustering its pixels to obtain a clustering result; and a projectile identification module for acquiring the pixel values of each suspected projectile and identifying those suspected projectiles whose pixel values fall within the clustering result, so as to obtain the projectile.
In a fourth aspect, the application provides a vehicle spill and drip identification device, comprising: a projectile and vehicle extraction module for processing the video frame sequence with the projectile identification method to obtain the projectile, and for extracting the vehicle from a video frame, wherein the video frame is obtained from the video frame sequence; a motion direction identification module for identifying the direction of motion of the vehicle in the video frame and setting a threshold based on that direction; and a calculation module for acquiring the time at which the positions of the projectile and the vehicle are closest, calculating the distance between them at that time and, if the distance is smaller than the threshold, judging the vehicle to be an involved vehicle.
In a fifth aspect, the application provides an electronic device comprising a memory and a processor, wherein the memory stores a computer program and the processor is configured to run the computer program to perform the projectile identification method of the first aspect or the vehicle spill and drip identification method of the second aspect.
In a sixth aspect, the application provides a storage medium in which a computer program is stored, wherein the computer program is configured to be run by a processor to perform the projectile identification method of the first aspect or the vehicle spill and drip identification method of the second aspect.
The main contributions and innovation points of the invention are as follows:
The embodiments of the application provide a projectile identification method that extracts moving targets from a video frame sequence and applies morphological enhancement and denoising to the moving-target images, improving the precision of suspected-projectile identification; the suspected projectiles are then coarsely classified and finely located through pixel clustering and a recognition model to obtain the projectile, and multiple screening ensures the accuracy of projectile identification.
The embodiments also provide a vehicle spill and drip identification method in which projectiles at unreasonable positions relative to the vehicle's direction of motion are filtered out, yielding a correct vehicle-projectile matching and reducing misjudgements.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flow chart of a method of projectile identification according to an embodiment of the present application;
FIG. 2 is a diagram of an improved Gaussian mixture filter effect according to an embodiment of the present application;
FIG. 3 is a network architecture diagram of a feature enhancement network according to an embodiment of the present application;
FIG. 4 is a flow chart of acquiring a projectile and involved vehicle according to an embodiment of the present application;
FIG. 5 is a flow chart of a vehicle splash drip identification method according to an embodiment of the present application;
FIG. 6 is a schematic illustration of a suspected projectile in an irrational position in accordance with an embodiment of the present application;
FIG. 7 is a block diagram of a projectile identification device according to an embodiment of the present application;
fig. 8 is a block diagram of the structure of a vehicle splash drip recognition apparatus according to an embodiment of the present application;
fig. 9 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated.
It should be noted that: in other embodiments, the steps of the corresponding methods are not necessarily performed in the order shown and described herein. In some other embodiments, the method may include more or fewer steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps for description in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
Example one
Fig. 1 is a flowchart of a projectile identification method according to an embodiment of the application; as shown in Fig. 1, the method includes the following steps S101 to S104:
Step S101, acquiring a video frame sequence and extracting moving targets from it;
Step S102, processing the moving targets to obtain suspected projectiles;
Step S103, acquiring projectile sample data and clustering its pixels to obtain a clustering result;
Step S104, acquiring the pixel values of each suspected projectile and identifying those whose pixel values fall within the clustering result, so as to obtain the projectile.
In step S101, a monitoring camera may be set up to capture vehicles on the target road section and to obtain the captured video. A video frame sequence is the ordered set of images obtained by decomposing the video frame by frame. Moving targets are extracted from the video frame sequence with a Gaussian mixture model, which in the prior art can be obtained directly from OpenCV's video module and used as-is.
In this step, to improve the extraction precision of moving targets, the scheme provides an improved Gaussian mixture model. "Extracting a moving target from the video frame sequence" includes: inputting the video frame sequence into the Gaussian mixture model and calculating, for each pixel, its deviation from the means of the Gaussian distributions corresponding to that pixel; comparing this value with an initial background threshold and, if it is smaller, preliminarily dividing the foreground and background of the moving target; if it is larger, setting a background change rate to update the initial background threshold, comparing the value with the updated threshold and, if it is smaller than the updated threshold, dividing the foreground and background of the moving target again.
Specifically, the algorithm's default background threshold T is used for a coarse division of the moving projectile's foreground and background models; T is then updated through an additionally introduced background change rate c, the updated threshold being the normalized mean of the weights of the K Gaussian components of the mixture model multiplied by c. Only when the weight sum in the background model falls below the updated threshold T is the background considered changed and the background model updated again. Compared with the original Gaussian mixture model, this prolongs the projectile's retention time in the foreground and improves the accuracy of projectile identification. In addition, a low learning rate is set when the Gaussian mixture model is initialized, to slow its update.
The Gaussian mixture model is formulated as:

P(x_{j,t}) = Σ_{i=1}^{K} ω_{i,t} · ρ(x_{j,t}, μ_{i,t}, Σ_{i,t})    (1)

where x_{j,t} denotes the jth pixel of the image at time t, P(x_{j,t}) is the probability that the pixel belongs to the background distribution, ω_{i,t} is the weight of the ith Gaussian component at time t, ρ(x_{j,t}, μ_{i,t}, Σ_{i,t}) is the probability density function of a Gaussian distribution, and μ_{i,t} and Σ_{i,t} are respectively the mean and covariance matrix of the jth pixel under the ith Gaussian component at time t.

While the pixel x_{j,t} is matched against the model, a successful match increases the component weight ω_{i,t}, and a failed match decreases it. The matching criterion is:

|x_{j,t} − μ_{i,t}| < D · σ_{i,t}    (2)

where σ_{i,t} is the standard deviation of the jth pixel under the ith Gaussian component at time t, and D is a constant, typically set to 2.5.

The update procedure of the whole Gaussian mixture model is as follows:
(1) Calculate the deviation of pixel x_{j,t} from the mean μ_{i,t} of the Gaussian component corresponding to the current pixel; if it is below the threshold D·σ_{i,t}, the match succeeds and ω_{i,t} is increased; otherwise the match fails, ω_{i,t} is decreased, and matching continues with the other Gaussian components.
(2) When pixel x_{j,t} matches a Gaussian component of the current pixel, update that component's mean and variance, sort the Gaussian components in descending order of weight, compute the sum of the K component weights, and normalize all weights. If no component matches, discard the component with the smallest weight, add a new Gaussian component, and re-sort by weight.
(3) Update the background threshold T from the configured background change rate c and the mean of the summed weights of the K Gaussian components.
(4) Accumulate the weight sum of the Gaussian components describing pixel x_{j,t}; when this weight sum falls below the updated background threshold T, the background is considered to have changed, and the foreground and background are divided again.
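The matching and weight-update steps above can be sketched for a single pixel as follows. This is an illustrative reading of the procedure, not the patent's implementation: the learning rate, the initial variance of a newly added component and the exact update magnitudes are assumptions:

```python
import numpy as np

D = 2.5          # matching constant from formula (2)
ALPHA = 0.01     # low learning rate, as the embodiment sets at init

def match_and_update(x, mu, sigma, w):
    """One update of the K Gaussian components of a single pixel.
    x: scalar pixel value; mu, sigma, w: arrays of shape (K,).
    Implements the matching rule |x - mu_i| < D * sigma_i and the
    weight increase/decrease of the update steps."""
    matched = np.abs(x - mu) < D * sigma
    w = w + ALPHA * (matched.astype(float) - w)  # grow on match, shrink otherwise
    if matched.any():
        i = np.argmax(matched)                   # first matching component
        mu[i] += ALPHA * (x - mu[i])
        sigma[i] = np.sqrt((1 - ALPHA) * sigma[i]**2 + ALPHA * (x - mu[i])**2)
    else:
        i = np.argmin(w)                         # replace weakest component
        mu[i], sigma[i], w[i] = x, 30.0, ALPHA
    w /= w.sum()                                 # normalize weights
    return mu, sigma, w

def background_threshold(w, c):
    """Modified background threshold T: the normalized mean of the K
    component weights scaled by the background change rate c, which is
    the scheme's addition to the standard mixture model."""
    return c * w.mean()
```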
The improved mixed-Gaussian filtering effect is shown in Fig. 2: GMM denotes the original Gaussian mixture model, the large white blob represents the vehicle, the small white blob represents the projectile, and frames 275 to 350 show the projectile gradually separating from the vehicle as it moves. As Fig. 2 shows for the listed frames 275, 308, 320, 329 and 350, the improved Gaussian mixture model detects the moving targets (vehicle and projectile) better than the original; in frame 350 in particular, the original model extracts only the vehicle and misses the projectile, which would compromise the subsequent judgement of whether the vehicle spilled, whereas the improved model, by prolonging the projectile's retention in the foreground, still identifies the projectile in frame 350.
Regarding step S101, the improved Gaussian mixture model proposed in this step improves the accuracy of identifying moving objects, especially projectiles, in the video frame sequence compared with the existing Gaussian mixture model.
In step S102, the moving target is subjected to morphological processing to enhance the image and remove noise, highlighting the details of useful information and reducing interference information in the image, thereby improving the efficiency of identifying the suspected projectile.
Specifically, the "performing morphological processing on the moving object to obtain the suspected projectile" includes:
carrying out graying processing on the image frame sequence of the extracted moving target, and carrying out binarization processing on the obtained grayed image to obtain a binary image of the image frame sequence;
performing morphological dilation operation on the binary image, performing connected domain detection on the binary image after the morphological dilation operation, and filling the detected connected domain to obtain an image after the moving target is enhanced;
and denoising the enhanced image of the moving target, and extracting the suspected projectile.
In step S102, each image frame of the sequence is processed into a binary image; the binary image is morphologically processed by erosion and dilation to obtain larger connected domains, and then some of the noise points on the binary image are removed to reduce interference information. Finally, the position of the suspected projectile is located using findContours from the OpenCV image processing library. Morphological processing improves the efficiency of identifying suspected projectiles.
In step S103, a pixel clustering strategy is adopted to cluster the pixel ranges of the labeled projectile data. Specifically, projectile sample data must be obtained first; the data includes multiple types of projectile that may come from a moving vehicle. The pixel value distribution ranges of the multiple projectile types are obtained through pixel clustering, and projectiles whose pixel values fall within these ranges can subsequently be screened based on the distribution ranges.
Specifically, the machine-learning K-means algorithm is adopted: the distribution of the pixel values of N projectiles is clustered by the K-means algorithm, the obtained suspected projectiles are preprocessed using the clustered distribution result, and suspected projectiles whose pixel ranges are not within the distribution result are screened out and filtered.
Regarding step S103, sample data is acquired and the pixel points of the projectiles in the samples are clustered to obtain a distribution result; the clustered distribution result is used to screen the suspected projectiles in subsequent operations, and screening out the suspected projectiles that are not within the distribution result improves the accuracy of projectile identification.
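The clustering of projectile pixel values can be sketched with a minimal 1-D k-means that returns, for each cluster, the [pix_min_i, pix_max_i] range in the form described here (and in claim 4). Initialization by quantiles and the sample values are illustrative assumptions:

```python
import numpy as np

def pixel_value_ranges(samples, k=2, iters=20):
    """Minimal 1-D k-means over projectile pixel values; returns one
    (pix_min, pix_max) range per cluster."""
    x = np.asarray(samples, dtype=float)
    centers = np.quantile(x, np.linspace(0.0, 1.0, k))  # spread-out init
    for _ in range(iters):
        # Assign each value to the nearest center, then recompute centers.
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == i].mean() for i in range(k)])
    return sorted((int(x[labels == i].min()), int(x[labels == i].max()))
                  for i in range(k))

# Hypothetical pixel values from two kinds of projectile samples:
# dark debris (~30-60) and bright gravel (~180-220).
values = [30, 35, 40, 55, 60, 180, 190, 200, 210, 220]
print(pixel_value_ranges(values, k=2))  # [(30, 60), (180, 220)]
```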
In step S104, the pixel value of the suspected projectile is compared with the clustered ranges [pix_min_i, pix_max_i], i ∈ N, pix ∈ [0, 255]: if the pixel value falls within a range, the suspected projectile is retained; if it falls outside, it is rejected. The retained suspected projectiles are then identified further.
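The retain-or-reject rule above reduces to a membership test against the clustered ranges; a minimal sketch, with hypothetical candidate names and values:

```python
def in_cluster_ranges(pix, ranges):
    """Keep a suspected projectile whose (mean) pixel value pix falls in
    any clustered range [pix_min_i, pix_max_i]; reject otherwise."""
    return any(lo <= pix <= hi for lo, hi in ranges)

ranges = [(30, 60), (180, 220)]             # hypothetical clustering result
candidates = {"a": 45, "b": 120, "c": 200}  # mean pixel value per candidate
kept = [name for name, pix in candidates.items()
        if in_cluster_ranges(pix, ranges)]
print(kept)  # ['a', 'c']
```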
In this step, "identifying the suspected projectile having the pixel value within the clustering result to obtain a projectile" includes:
inputting the suspected projectile whose pixel values are in the clustering result into a projectile recognition model, wherein the projectile recognition model comprises a feature enhancement network and a prediction network; on one branch the feature enhancement network performs feature enhancement on the input features using a 1x1 convolution, a 3x3 depth-wise convolution and a 1x1 convolution, while on the other branch a global pooling layer, a 1x1 convolution and a sigmoid gating unit weight the input features channel-wise into new features; the enhanced features and the channel-weighted features are added element-wise, and the target features are output;
the target feature is input into the prediction network to predict the position of the projectile and thereby identify the projectile.
Regarding step S104, this step proposes a new projectile recognition model that includes an improved feature enhancement network. Referring to fig. 3, the feature enhancement network improves on the structure of ResNet18 by reconstructing a new feature enhancement module. The left branch uses depthwise separable convolution (depthwise-conv): structurally, a 1x1 convolution, a 3x3 depth-wise convolution and a 1x1 convolution, with batch normalization between the convolution layers and the common ReLU as the activation function, increasing the nonlinear expression capability of the model; this structure enhances the feature representation capability of the model without introducing additional computation. The right branch uses a feature A, extracted by a global average pooling layer (avg-pooling), a 1x1 convolution and a sigmoid gating unit, to perform a bitwise multiplication with a feature B obtained by a 1x1 convolution, yielding a weighted feature C, which is finally added to the features of the left branch. The sigmoid function serves as the gating unit in the right branch: it produces a score weighting the importance of the previous layer's features, and this score is multiplied bitwise with the 1x1-convolved features to obtain the weighted features.
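The two-branch feature enhancement module described above can be sketched as a PyTorch block. Channel counts, layer ordering details, and all names are illustrative assumptions; this is a reading of the description, not the patent's exact architecture:

```python
import torch
import torch.nn as nn

class FeatureEnhanceBlock(nn.Module):
    """Two-branch feature-enhancement module sketch.

    Left branch:  1x1 conv -> 3x3 depth-wise conv -> 1x1 conv, with
                  batch norm between layers and ReLU activations.
    Right branch: global average pooling -> 1x1 conv -> sigmoid gate
                  (feature A), multiplied bitwise with a 1x1-convolved
                  copy of the input (feature B) to give weighted
                  feature C; C is added element-wise to the left branch.
    """
    def __init__(self, channels):
        super().__init__()
        self.left = nn.Sequential(
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1,
                      groups=channels, bias=False),  # depth-wise conv
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.project = nn.Conv2d(channels, channels, 1, bias=False)  # feature B
        self.gate = nn.Sequential(                                   # feature A
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        weighted = self.project(x) * self.gate(x)  # feature C = B * A
        return self.left(x) + weighted             # element-wise sum

block = FeatureEnhanceBlock(16)
out = block(torch.randn(2, 16, 32, 32))
print(tuple(out.shape))  # (2, 16, 32, 32)
```

The depth-wise convolution (`groups=channels`) is what keeps the left branch cheap, matching the claim that feature representation improves without extra computation.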
In summary, this embodiment provides a method for identifying a projectile; see the projectile extraction process in fig. 4. Note that the coarse screening and fine screening in fig. 4 merely distinguish the two screening steps: fine screening in this embodiment means performing one further screening on top of the coarse screening to improve the screening identification rate.
As shown in fig. 4, in this embodiment a moving-target detection algorithm is applied to the surveillance video stream to obtain moving targets, which are subjected to morphological processing and contour extraction to obtain suspected projectiles. The pixel distribution range of projectiles in the projectile sample data set is obtained through K-means clustering; suspected projectiles are coarsely screened against this pixel distribution range, and those whose pixels fall within the range are finely screened by the projectile recognition algorithm to finally obtain the projectiles.
In the identification method, the improved Gaussian mixture model is provided to replace the existing moving target detection model, and the retention time of the projectile in the foreground is prolonged by additionally setting a background attenuation threshold, so that the accuracy of projectile identification is improved.
In addition, in the identification method, the scheme also provides an improved projectile identification model, the capability of feature selection is increased through the reconstructed feature enhancement model, the attention of important features is improved, and therefore the identification capability of the model on the projectile is improved.
Example two
The embodiment of the application provides a vehicle throwing and dripping leakage identification method, and with reference to fig. 5, the method comprises steps S501 to S503:
Step S501, processing the video frame sequence by applying a projectile identification method to obtain a projectile, and extracting vehicles from video frames, wherein the video frames are obtained from the video frame sequence;
step S502, identifying the motion direction of the vehicle in the video frame, and setting a threshold value based on the motion direction;
Step S503, acquiring the time when the positions of the projectile and the vehicle are close, calculating the distance between them at that time, and if the distance is smaller than the threshold value, judging that the vehicle is a vehicle involved in the case.
In step S501, a video frame refers to an image; in the present scheme, the video frame may be one frame image or multiple consecutive frame images obtained from the video frame sequence. In addition, the video frame sequence may also be a set of single-frame images captured from the video to be detected at a set time interval over a period of time, with video frames then acquired from that sequence.
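The interval-based sampling of single frames can be sketched as an index computation; the frame rate, clip length, and interval here are illustrative values:

```python
def sample_frame_indices(total_frames, fps, interval_s):
    """Indices of single frames captured from a video at a fixed time
    interval, forming the set of single-frame images described for
    step S501."""
    step = max(1, int(round(fps * interval_s)))
    return list(range(0, total_frames, step))

# e.g. a 10-second clip at 25 fps, sampled every 2 seconds:
print(sample_frame_indices(250, 25, 2))  # [0, 50, 100, 150, 200]
```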
In step S501, a target detection model may be trained using a sample labeled with a projectile and a vehicle, thereby obtaining the projectile and the vehicle.
The type of vehicle involved in the present solution may be any one or more of a car, a bus, a trailer, a motorcycle and an engineering vehicle. This step takes the engineering vehicle as an example: engineering vehicles are mainly used for carrying, excavation, emergency repair and the like during construction, and the existing YOLOv5 engineering vehicle detector can be adopted directly to identify them, reducing the cost of training a vehicle recognition model.
In step S501, before "extracting the projectile and the vehicle in the video frame", the method includes: video frames of a vehicle and a projectile occurring simultaneously are searched for in the sequence of video frames.
In this embodiment, the video frame sequence is preprocessed to find the video frames in which a vehicle and a projectile appear at the same time; only these frames need to be processed in the subsequent extraction step, which improves processing efficiency.
Regarding step S501, video frames in which a vehicle and a projectile appear simultaneously are searched for in the video frame sequence to ensure that the frames input into the target detection model contain both types of moving target, vehicle and projectile. The video frame is input into a target detection model to obtain the projectile and the vehicle output by the model; the target detection model may be a model self-trained with samples labeled with projectiles and vehicles, or the existing YOLOv5 engineering vehicle detector, a relatively advanced general-purpose target detector at the present stage that can identify engineering vehicles.
In the present embodiment, the projectile is extracted by the projectile identification method described in the first embodiment, and specifically, the "extracting the projectile and the vehicle in the video frame" includes:
extracting a moving object from the video frame sequence;
processing the moving target to obtain a suspected throwing object;
acquiring projectile sample data, and clustering the pixels of the projectile sample data to obtain a clustering result;
acquiring a pixel value of the suspected projectile, and identifying the suspected projectile of the pixel value in the clustering result to obtain the projectile;
and inputting the video frame containing the throwing object into a vehicle detection model, and obtaining a vehicle surrounding frame output by the vehicle detection model so as to identify the vehicle.
The multiple rounds of screening in this embodiment guarantee the accuracy of projectile identification and provide the basis for the subsequent identification of vehicle spilling and dripping.
In step S502, the vehicle is associated and matched with the projectile based on the vehicle's direction of motion. Specifically, the direction of motion here refers to the vehicle's direction of motion within the video frame: the same vehicle filmed by cameras at different angles yields different directions of motion. As shown in fig. 6, the directions of motion of the vehicle are, in sequence, up, down, right and left. The threshold is set separately for these four directions to filter out projectiles at unreasonable positions for each direction. An unreasonable position is one where, in the image, the distance between the vehicle and the projectile exceeds the threshold, i.e. the projectile could not have been shed from the vehicle at a normal driving speed.
In step S502, a vehicle direction recognition model is trained based on the projectile recognition model described in step S104, and the vehicle direction recognition model is used to obtain the vehicle's direction of motion in the video frame. The structure of the projectile recognition model is described in detail in the first embodiment and is not repeated here.
In this step, the vehicle's direction of motion in the video frame is determined from the vehicle's motion trajectory across the video frame sequence. The trajectory reflects the vehicle's actual direction of motion and avoids the recognition errors that arise from inferring the direction from a single frame.
Regarding step S502, the direction of motion is determined by the vehicle direction recognition model and judged from the vehicle's trajectory in the video frame sequence, which prevents misjudging the direction of a reversing vehicle. The threshold is then set based on the identified direction of motion.
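Determining the direction from the trajectory rather than a single frame can be sketched as follows (a geometric illustration, not the patent's learned direction-recognition model; image coordinates assume y grows downward):

```python
def motion_direction(centroids):
    """Direction of motion from a vehicle's centroid trajectory across a
    sequence of video frames. Using the overall trajectory rather than a
    single frame avoids misreading e.g. a reversing vehicle."""
    (x0, y0), (x1, y1) = centroids[0], centroids[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

track = [(100, 50), (120, 52), (140, 55), (160, 58)]
print(motion_direction(track))  # right
```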
In step S503, a relationship between the vehicle and the projectile is established, and vehicles with different moving directions are associated and matched with the projectile respectively.
In one embodiment, "obtaining a time at which the projectile and the vehicle are located close together, and calculating a distance therebetween at the time" comprises:
acquiring a projectile surrounding frame of the projectile and a vehicle surrounding frame of the vehicle;
judging the position change between the throwing object surrounding frame and the vehicle surrounding frame in the video frame sequence, and acquiring the time when the positions are close;
and calculating the distance between the two at the moment, wherein the distance comprises the Euclidean distance.
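The distance computation in the steps above can be sketched as the Euclidean distance between bounding-box centers; the box coordinates are illustrative, and taking box centers is one reasonable reading of "the distance between the two":

```python
import math

def box_center(box):
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2, (y0 + y1) / 2)

def euclidean_distance(projectile_box, vehicle_box):
    """Euclidean distance between the centers of the projectile
    enclosing frame and the vehicle enclosing frame."""
    (px, py), (vx, vy) = box_center(projectile_box), box_center(vehicle_box)
    return math.hypot(px - vx, py - vy)

d = euclidean_distance((0, 0, 10, 10), (30, 40, 40, 50))
print(d)  # 50.0
```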
In this embodiment, the relationship refers to the position relationship between the vehicle and the projectile. The position relation of the projectile and the vehicle is measured by the Euclidean distance between the projectile surrounding frame and the vehicle surrounding frame, and the beneficial effect of quantifying the correlation degree of the projectile and the vehicle can be achieved.
Specifically, in this step the bounding box, width and height of projectile s_i are denoted [x0, y0, x1, y1], w_si and h_si respectively, and the bounding box, width and height of vehicle c_i are denoted [x2, y2, x3, y3], w_ci and h_ci, where (x0, y0) is the upper-left corner of the bounding box and (x1, y1) is the lower-right corner. The positions of the projectile and the vehicle can thus be quantified by the upper-left corner, lower-right corner, width and height of their bounding boxes. To constrain the projectile's position to near the vehicle's lower border (for upward motion) or upper border (for downward motion), the projectile rationality measures designed for the up and down directions are:
|y3 - y1| / h_ci ≤ 0.1    (3)
|y2 - y0| / h_ci ≤ 0.1    (4)
For a vehicle moving left, the projectile position range is constrained to below and to the left of the vehicle's left border; for a vehicle moving right, to below and to the right of the vehicle's right border. The corresponding rationality measure is formula (5), where rate is the aspect ratio of the vehicle detection frame.
That is, in this step different rationality measures are set according to the direction of motion: formulas (3) and (4) are the projectile rationality measures for the up and down directions, with the threshold set to 0.1, and targets within this 0.1 range are taken as the final vehicle-projectile matching results. Similarly, formula (5) is the projectile rationality measure for the left and right directions, and targets within its threshold range are taken as the final matching results.
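The up/down rationality test can be sketched directly from formulas (3) and (4). Note the assignment of (3) to the up direction and (4) to the down direction is a reading of the text, and the left/right formula (5) is not reproduced here:

```python
def rationality_ok(proj_box, veh_box, direction, threshold=0.1):
    """Rationality check per formulas (3)/(4): the projectile's
    lower/upper edge must lie within a fraction `threshold` of the
    vehicle height from the vehicle's corresponding edge."""
    x0, y0, x1, y1 = proj_box   # projectile [x0, y0, x1, y1]
    x2, y2, x3, y3 = veh_box    # vehicle    [x2, y2, x3, y3]
    h_ci = y3 - y2              # vehicle height
    if direction == "up":
        return abs(y3 - y1) / h_ci <= threshold   # formula (3)
    if direction == "down":
        return abs(y2 - y0) / h_ci <= threshold   # formula (4)
    raise ValueError("left/right directions use formula (5)")

# Projectile just below a vehicle moving up: a plausible spill.
print(rationality_ok((12, 58, 18, 62), (10, 20, 30, 60), "up"))  # True
```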
Referring again to fig. 6, the projectile is inside the small frame; fig. 6 lists cases in which the positional relationship fails the threshold for the up, down, right and left movement directions respectively. Using the above formulas, the projectiles appearing at unreasonable positions in the video frames shown in fig. 6 can be eliminated.
In one embodiment, the threshold is related to the projectile sample data. Specifically, the embodiment includes: acquiring the distance between a vehicle sample and a projectile sample in each frame of image in the projectile sample data;
setting a threshold based on a distance between the vehicle sample and the projectile sample, wherein the threshold is used to filter out projectiles that are at an unreasonable position in the direction of vehicle motion compared to projectile sample data.
In this embodiment, the projectile sample data refers to one or more projectile samples from a set of projectile samples. The set threshold is related to the distance between the vehicle sample and the projectile sample in the projectile sample data, so that subjectivity and contingency of manual threshold setting are avoided.
In step S503, the vehicle and the projectile are associated and matched based on the vehicle's direction of motion, and vehicle-projectile pairs whose association degree meets the threshold are output. Projectiles at unreasonable positions are screened out through association matching, an accurate matching result is finally obtained, and the efficiency of judging vehicles involved in spilling and dripping is improved.
After step S503, the method further includes: and saving the video frame of the involved vehicle at the moment.
In this embodiment, the video frames are saved to facilitate subsequent management of the involved vehicles.
In summary, this embodiment provides a method for identifying vehicle spilling and dripping; referring again to fig. 4, the identification process is shown there. Note that the projectile in fig. 4 is identified by the projectile identification method of the first embodiment and the vehicle by the YOLOv5 engineering vehicle detection algorithm; in other embodiments, the projectile and the vehicle may also be detected by a conventional target detection model as described in step S501.
As shown in fig. 4, in this embodiment the vehicle is identified by the YOLOv5 engineering vehicle detection algorithm, the vehicle direction is detected by the vehicle direction recognition model, and the association between projectiles and vehicles is set for each vehicle direction, represented by their positional relationship. The correlation between the engineering vehicle and the projectile is obtained by analysing the rationality of the projectile's location; cases where the projectile lies at an unreasonably distant spilling position for the vehicle's direction are eliminated, so that the projectile and the engineering vehicle are finally located accurately and the projectile is marked.
The method associates and matches the vehicle and the projectile based on the vehicle's direction of motion, and outputs vehicle-projectile pairs whose association degree meets the threshold. Projectiles at unreasonable positions are screened out through association matching, an accurate matching result is finally obtained, and the efficiency of judging vehicles involved in spilling and dripping is improved.
EXAMPLE III
Based on the same technical concept, fig. 7 exemplarily shows a projectile identification device provided by an embodiment of the present application, which includes:
a moving object extraction module 701, configured to obtain a sequence of video frames, and extract a moving object from the sequence of video frames;
a processing module 702, configured to process the moving object to obtain a suspected projectile;
a clustering module 703, configured to acquire projectile sample data and cluster the pixels of the projectile sample data to obtain a clustering result;
and a projectile identification module 704, configured to obtain a pixel value of the suspected projectile, and identify the suspected projectile with the pixel value in the clustering result to obtain the projectile.
The moving object extraction module 701 has the improved Gaussian mixture model built in, and the projectile identification module 704 has the projectile recognition model built in; the structures of the models are as described in the first embodiment. Since the apparatus operates by the method described above, repeated description is omitted.
Example four
Based on the same concept, fig. 8 exemplarily shows a vehicle throwing drip identification device provided by the embodiment of the application, and the device comprises:
a projectile and vehicle extraction module 801, configured to apply a projectile identification method to process a sequence of video frames to obtain a projectile; extracting vehicles from video frames, wherein the video frames are obtained from the sequence of video frames;
a motion direction identification module 802, configured to identify a motion direction of the vehicle in the video frame, and set a threshold value based on the motion direction;
and the calculating module 803 is configured to obtain a time when the position of the projectile is close to the position of the vehicle, calculate a distance between the projectile and the vehicle at the time, and determine that the vehicle is a vehicle involved in a case if the distance is smaller than the threshold.
Similarly, the target detection model is built into the projectile and vehicle extraction module 801, and the vehicle direction recognition model is built into the movement direction recognition module 802; the structures of the models are as described in the second embodiment. Since the apparatus operates by the method described above, repeated description is omitted.
EXAMPLE five
The present embodiment also provides an electronic device, referring to fig. 9, comprising a memory 904 and a processor 902, wherein the memory 904 stores a computer program, and the processor 902 is configured to execute the computer program to perform the steps of any of the above method embodiments.
Specifically, the processor 902 may include a Central Processing Unit (CPU) or an Application-Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
Memory 904 may include mass storage for data or instructions. By way of example, and not limitation, memory 904 may include a hard disk drive (HDD), a floppy disk drive, a solid state drive (SSD), flash memory, an optical disk, a magneto-optical disk, tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 904 may include removable or non-removable (or fixed) media, where appropriate. The memory 904 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 904 is Non-Volatile memory. In particular embodiments, memory 904 includes Read-Only Memory (ROM) and Random Access Memory (RAM). The ROM may be mask-programmed ROM, Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), Electrically Rewritable ROM (EAROM), or FLASH memory, or a combination of two or more of these, where appropriate. The RAM may be Static Random-Access Memory (SRAM) or Dynamic Random-Access Memory (DRAM), where the DRAM may be Fast Page Mode DRAM (FPMDRAM), Extended Data Output DRAM (EDODRAM), Synchronous DRAM (SDRAM), or the like.
The memory 904 may be used to store or cache various data files for processing and/or communication purposes, as well as possibly computer program instructions for execution by the processor 902.
The processor 902 implements any of the projectile identification methods, vehicle projectile drip identification methods in the above embodiments by reading and executing computer program instructions stored in the memory 904.
Optionally, the electronic apparatus may further include a transmission device 906 and an input/output device 908, wherein the transmission device 906 is connected to the processor 902, and the input/output device 908 is connected to the processor 902.
The transmitting device 906 may be used to receive or transmit data via a network. Specific examples of the network described above may include wired or wireless networks provided by communication providers of the electronic devices. In one example, the transmission device includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmitting device 906 can be a Radio Frequency (RF) module configured to communicate with the internet via wireless.
The input-output device 908 is used to input or output information. For example, the input/output device may be a mobile terminal, a display screen, a sound box, a microphone, a mouse, a keyboard, or other devices. In the present embodiment, the input information may be a video to be processed, a sequence of video frames, a video frame, or the like, and the output information may be a projectile recognition result, a vehicle projectile drip recognition result, or the like.
Alternatively, in this embodiment, the processor 902 may be configured to execute the following steps by a computer program:
step S101, a video frame sequence is obtained, and a moving object is extracted from the video frame sequence.
And S102, processing the moving target to obtain a suspected throwing object.
Step S103, acquiring projectile sample data, and clustering the pixels of the projectile sample data to obtain a clustering result.
And S104, acquiring a pixel value of the suspected projectile, and identifying the suspected projectile of which the pixel value is in the clustering result to obtain the projectile.
Step S501, processing a video frame sequence by applying the method for identifying the projectile in the first embodiment to obtain the projectile; extracting vehicles in video frames, wherein the video frames are obtained from the sequence of video frames.
And step S502, identifying the motion direction of the vehicle in the video frame, and setting a threshold value based on the motion direction.
And S503, acquiring the time when the position of the projectile is close to that of the vehicle, calculating the distance between the projectile and the vehicle at the time, and if the distance is smaller than the threshold value, judging that the vehicle is a vehicle involved in a case.
It should be noted that, for specific examples in this embodiment, reference may be made to examples described in the foregoing embodiments and optional implementations, and details of this embodiment are not described herein again.
In addition, in combination with the method for identifying a projectile and the method for identifying vehicle throwing and dripping leakage in the above embodiments, the embodiments of the present application may be implemented by providing a storage medium. The storage medium having stored thereon a computer program; the computer program, when executed by a processor, implements any of the projectile identification methods, vehicle projectile drip identification methods in the above embodiments.
It should be understood by those skilled in the art that various features of the above embodiments can be combined arbitrarily, and for the sake of brevity, all possible combinations of the features in the above embodiments are not described, but should be considered as within the scope of the present disclosure as long as there is no contradiction between the combinations of the features.
The above examples are merely illustrative of several embodiments of the present application, and the description is more specific and detailed, but not to be construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (16)

1. A method of identifying a projectile comprising the steps of:
acquiring a video frame sequence, and extracting a moving object from the video frame sequence;
processing the moving target to obtain a suspected throwing object;
acquiring projectile sample data, and clustering the pixels of the projectile sample data to obtain a clustering result;
and acquiring a pixel value of the suspected projectile, and identifying the suspected projectile of the pixel value in the clustering result to obtain the projectile.
2. The projectile recognition method of claim 1 wherein said extracting moving objects in said sequence of video frames comprises:
inputting the video frame sequence into a Gaussian mixture model, and calculating the mean value between pixels and Gaussian distribution corresponding to the pixels;
comparing the mean value with an initial background threshold value, and if the mean value is smaller than the initial background threshold value, primarily dividing the foreground and the background of the moving target;
if the mean value is larger than the initial background threshold value, setting a background change rate to update the initial background threshold value, comparing the mean value with the updated initial background threshold value, and if the mean value is smaller than the updated initial background threshold value, dividing the foreground and the background of the moving target again.
3. The projectile identification method of claim 1, wherein said processing the moving object to obtain a suspected projectile comprises:
graying the image frame sequence from which the moving object was extracted, and binarizing the resulting grayscale images to obtain binary images of the image frame sequence;
performing a morphological dilation operation on each binary image, performing connected-domain detection on the dilated binary image, and filling the detected connected domains to obtain an image in which the moving object is enhanced;
and denoising the enhanced image of the moving object, and extracting the suspected projectile.
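The morphology pipeline of claim 3 can be sketched with standard `scipy.ndimage` operations; the binarization threshold and the choice of a median filter for denoising are assumptions, since the patent does not name a specific denoising method:

```python
import numpy as np
from scipy import ndimage

def enhance_moving_target(gray, thresh=128):
    """Sketch of the claimed pipeline: binarize the grayscale image,
    dilate, detect and fill connected domains, then denoise."""
    binary = gray > thresh                            # binarization
    dilated = ndimage.binary_dilation(binary)         # morphological dilation
    filled = ndimage.binary_fill_holes(dilated)       # fill the connected domains
    labels, n = ndimage.label(filled)                 # connected-domain detection
    denoised = ndimage.median_filter(filled.astype(np.uint8), size=3)
    return denoised, n
```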
4. The projectile identification method of claim 1, wherein the clustering method is the K-means algorithm, and clustering the pixels of the projectile sample data comprises the following steps:
acquiring the type of each projectile in the projectile sample data;
clustering, according to the projectile sample data, the distribution ranges of the projectile pixel values corresponding to each projectile type, the result being expressed as [pix_min^i, pix_max^i], i ∈ N, pix ∈ [0, 255],
where pix_min^i is the minimum of the i-th pixel value range, pix_max^i is the maximum of the i-th pixel value range, and N is the number of projectile pixel-value range distributions obtained by clustering.
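A minimal 1-D K-means over sample pixel values, returning one per-cluster range as described in claim 4 (the random initialization, iteration count, and function name are illustrative):

```python
import numpy as np

def cluster_pixel_ranges(pixels, k=2, iters=20, seed=0):
    """Minimal 1-D K-means over sample pixel values; returns one
    (pix_min_i, pix_max_i) range per non-empty cluster."""
    pixels = np.asarray(pixels, dtype=np.float64)
    rng = np.random.default_rng(seed)
    centers = rng.choice(pixels, size=k, replace=False)  # random init
    for _ in range(iters):
        # assign every pixel value to its nearest cluster center
        assign = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        for i in range(k):
            if np.any(assign == i):
                centers[i] = pixels[assign == i].mean()
    return [(pixels[assign == i].min(), pixels[assign == i].max())
            for i in range(k) if np.any(assign == i)]
```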
5. The projectile identification method of claim 1, wherein identifying the suspected projectile whose pixel values fall within the clustering result to obtain the projectile comprises:
inputting the suspected projectile whose pixel values fall within the clustering result into a projectile recognition model, wherein the projectile recognition model comprises a feature enhancement network and a prediction network; the feature enhancement network enhances the input features on one branch using a 1x1 convolution, a 3x3 depth-wise convolution and a 1x1 convolution, and on the other branch uses a global pooling layer, a 1x1 convolution and a sigmoid gating unit to weight the input features channel-wise into new features; the enhanced features and the channel-weighted features are then added element-wise, and the target features are output;
inputting the target features into the prediction network to predict the position of the projectile and thereby identify the projectile.
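The two-branch feature enhancement block can be sketched in plain NumPy for clarity. This is a structural illustration only: weights are taken as inputs, and the bias terms, normalization, and activation functions a real network would include are omitted:

```python
import numpy as np

def conv1x1(x, w):
    """Pointwise (1x1) convolution: x is (C_in, H, W), w is (C_out, C_in)."""
    return np.tensordot(w, x, axes=([1], [0]))

def depthwise3x3(x, w):
    """Depth-wise 3x3 convolution with zero padding: w is (C, 3, 3)."""
    C, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += w[:, i, j][:, None, None] * xp[:, i:i + H, j:j + W]
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feature_enhance(x, w1, wd, w2, wg):
    # branch 1: 1x1 conv -> 3x3 depth-wise conv -> 1x1 conv
    enhanced = conv1x1(depthwise3x3(conv1x1(x, w1), wd), w2)
    # branch 2: global average pooling -> 1x1 conv -> sigmoid gate,
    # producing per-channel weights that re-weight the input "in depth"
    gate = sigmoid(wg @ x.mean(axis=(1, 2)))          # shape (C,)
    weighted = gate[:, None, None] * x
    # element-level addition of the two branches -> target features
    return enhanced + weighted
```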
6. A vehicle spill and leakage identification method, characterized by comprising the steps of:
processing a video frame sequence using the projectile identification method of any one of claims 1 to 5 to obtain a projectile;
extracting a vehicle from video frames, wherein the video frames are obtained from the video frame sequence;
identifying the direction of motion of the vehicle in the video frames, and setting a threshold based on the direction of motion;
and acquiring the time at which the positions of the projectile and the vehicle approach each other, calculating the distance between the projectile and the vehicle at that time, and if the distance is smaller than the threshold, determining that the vehicle is the vehicle involved.
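The association step of claim 6 — finding the moment at which the projectile and the vehicle are closest and comparing that distance with the direction-dependent threshold — might look like the following sketch (the box format and function names are assumptions):

```python
import math

def box_center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def is_involved_vehicle(projectile_boxes, vehicle_boxes, threshold):
    """Find the frame at which projectile and vehicle are closest and
    compare that distance with the threshold.
    Boxes are (x1, y1, x2, y2); one box per frame for each object."""
    dists = [math.dist(box_center(p), box_center(v))
             for p, v in zip(projectile_boxes, vehicle_boxes)]
    t = min(range(len(dists)), key=dists.__getitem__)  # time of closest approach
    return dists[t] < threshold, t
```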
7. The vehicle spill and leakage identification method of claim 6, wherein said extracting a vehicle from video frames comprises:
inputting the video frames containing the projectile into a vehicle detection model, and obtaining the vehicle bounding box output by the vehicle detection model so as to identify the vehicle.
8. The vehicle spill and leakage identification method of claim 6, wherein acquiring the time at which the positions of the projectile and the vehicle approach each other and calculating the distance between them comprises:
acquiring the projectile bounding box of the projectile and the vehicle bounding box of the vehicle;
determining the change in position between the projectile bounding box and the vehicle bounding box across the video frame sequence, and acquiring the time at which the positions approach each other;
and calculating the distance between the two at that time, the distance including the Euclidean distance.
9. The vehicle spill and leakage identification method of claim 6, comprising:
acquiring the distance between a vehicle sample and a projectile sample in each frame image of the projectile sample data;
wherein the threshold is associated with the distance between the vehicle sample and the projectile sample for each direction of motion.
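One way to realize the per-direction threshold of claim 9 is to derive it from the sample distances observed for each motion direction. Using the mean, and the default fallback value, are assumptions here; the patent only states that the threshold is associated with these distances:

```python
def direction_threshold(samples, direction, default=50.0):
    """Derive a per-direction distance threshold from (direction, distance)
    sample pairs taken from the projectile sample data."""
    per_dir = {}
    for d, dist in samples:
        per_dir.setdefault(d, []).append(dist)
    if direction in per_dir:
        ds = per_dir[direction]
        return sum(ds) / len(ds)   # mean sample distance for this direction
    return default
```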
10. The vehicle spill and leakage identification method of claim 6, further comprising: saving the video frame containing the vehicle involved at that time.
11. The vehicle spill and leakage identification method of claim 6, wherein before "extracting the projectile and the vehicle from video frames", the method comprises:
searching the video frame sequence for video frames in which a vehicle and a projectile appear simultaneously.
12. The vehicle spill and leakage identification method of claim 11, wherein the direction of motion of the vehicle in the video frames is determined based on the trajectory of the vehicle in the video frame sequence.
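Estimating the direction of motion from the trajectory, as claim 12 describes, can be as simple as taking the angle of the overall displacement vector between the first and last tracked positions (returning an angle in degrees is an illustrative choice):

```python
import math

def motion_direction(trajectory):
    """Direction of motion as the angle (degrees) of the displacement
    vector between the first and last trajectory center points."""
    (x0, y0), (x1, y1) = trajectory[0], trajectory[-1]
    return math.degrees(math.atan2(y1 - y0, x1 - x0))
```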
13. A projectile identification device, characterized by comprising:
a moving object extraction module, configured to acquire a video frame sequence and extract a moving object from the video frame sequence;
a processing module, configured to process the moving object to obtain a suspected projectile;
a clustering module, configured to acquire projectile sample data and cluster the pixels of the projectile sample data to obtain a clustering result;
and a projectile identification module, configured to acquire the pixel values of the suspected projectile and identify the suspected projectile whose pixel values fall within the clustering result so as to obtain the projectile.
14. A vehicle spill and leakage identification device, characterized by comprising:
a projectile and vehicle extraction module, configured to process a video frame sequence using a projectile identification method to obtain a projectile, and to extract a vehicle from video frames, wherein the video frames are obtained from the video frame sequence;
a motion direction identification module, configured to identify the direction of motion of the vehicle in the video frames and set a threshold based on the direction of motion;
and a calculation module, configured to acquire the time at which the positions of the projectile and the vehicle approach each other, calculate the distance between the projectile and the vehicle at that time, and determine that the vehicle is the vehicle involved if the distance is smaller than the threshold.
15. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor is arranged to run the computer program to perform the projectile identification method of any one of claims 1 to 5 or the vehicle spill and leakage identification method of any one of claims 6 to 12.
16. A storage medium having a computer program stored therein, wherein the computer program is arranged to be executed by a processor to perform the projectile identification method of any one of claims 1 to 5 or the vehicle spill and leakage identification method of any one of claims 6 to 12.
CN202110675973.5A 2021-06-18 2021-06-18 Method and device for identifying sprinkled objects and vehicle sprinkling and leaking Pending CN113255580A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110675973.5A CN113255580A (en) 2021-06-18 2021-06-18 Method and device for identifying sprinkled objects and vehicle sprinkling and leaking

Publications (1)

Publication Number Publication Date
CN113255580A true CN113255580A (en) 2021-08-13

Family

ID=77188681

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110675973.5A Pending CN113255580A (en) 2021-06-18 2021-06-18 Method and device for identifying sprinkled objects and vehicle sprinkling and leaking

Country Status (1)

Country Link
CN (1) CN113255580A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113850241A (en) * 2021-11-30 2021-12-28 城云科技(中国)有限公司 Vehicle window parabolic detection method and device, computer program product and electronic device
CN115546704A (en) * 2022-11-28 2022-12-30 城云科技(中国)有限公司 Vehicle projectile identification method, device and application
CN117789141A (en) * 2024-02-23 2024-03-29 中邮建技术有限公司 Pavement throwing event detection method based on artificial intelligence
CN117789141B (en) * 2024-02-23 2024-04-26 中邮建技术有限公司 Pavement throwing event detection method based on artificial intelligence

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2014134451A (en) * 2012-02-24 2016-03-20 Самсунг Электроникс Ко., Лтд. METHOD AND DEVICE FOR MOVING CONTENT IN THE TERMINAL
CN111274982A (en) * 2020-02-04 2020-06-12 浙江大华技术股份有限公司 Method and device for identifying projectile and storage medium
CN111339824A (en) * 2019-12-31 2020-06-26 南京艾特斯科技有限公司 Road surface sprinkled object detection method based on machine vision


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination