CN117853971A - Method, device, equipment and storage medium for detecting a thrown object

Publication number: CN117853971A
Application number: CN202311765117.4A
Authority: CN (China)
Original language: Chinese (zh)
Legal status: Pending
Prior art keywords: learning rate, video frame, background, current, determining
Inventors: 郭旭 (Guo Xu), 朱林 (Zhu Lin), 蒋松涛 (Jiang Songtao)
Applicant/Assignee: Suzhou Keyuan Software Technology Development Co., Ltd.; Suzhou Keda Technology Co., Ltd.
Classification: Image Analysis

Abstract

An embodiment of the present application discloses a method, a device, equipment and a storage medium for detecting thrown objects, comprising the following steps: for each original video frame in an acquired video sequence to be detected, determining a current learning rate according to the video frame information corresponding to the original video frame and a preset learning rate model; performing mixed Gaussian background modeling according to the current learning rate and the original video frame to generate a fast background image and a slow background image corresponding to the original video frame; inputting the fast background image and the slow background image into a pre-constructed lightweight feature extraction model for feature extraction, and determining the corresponding fast background feature map and slow background feature map; and determining a frame difference map according to the fast background feature map and the slow background feature map, performing thrown-object recognition on the frame difference map, and determining a thrown-object detection result. The learning rate model is constructed on an exponential decay principle. The method meets the requirements of robustness and anti-interference while reducing the overall amount of computation and improving the validity of the detection result.

Description

Method, device, equipment and storage medium for detecting a thrown object
Technical Field
The present application relates to the technical field of computer vision, and in particular to a method, a device, equipment and a storage medium for detecting thrown objects.
Background
Digital traffic is an important part of the digital economy: it drives the deep integration of advanced information technology with the transportation field and pushes the industry toward intelligent, digitized and information-based operation. Against this backdrop, the problems caused by traffic anomalies and objects thrown or dropped onto roads are receiving growing attention. In application scenarios such as urban roads, tunnels, expressways, railways and waterways, thrown objects can easily trigger a chain of traffic accidents, severely reduce traffic capacity and create serious safety hazards. How to detect such traffic events accurately and quickly, and to identify and handle thrown objects in time, has therefore become an important topic in intelligent traffic security.
At present, one approach tracks current and vertical trajectories in an acquired radar map to detect thrown objects. This approach has poor feasibility and risks missing road sections. Because the road surface is complicated under real conditions and trajectory initialization must be collected in a quiet environment, recognition errors for thrown objects are high; and because it applies only to road sections equipped with radar, its overall cost is high and it is difficult to generalize and popularize.
When thrown objects are instead identified by Gaussian modeling of images acquired on site, the computational cost of the modeling makes real-time operation difficult. Moreover, current Gaussian background modeling updates the constructed background image with a fixed learning rate, which adapts poorly to the changeable light and weather of a road scene; the resulting detection accuracy is low and hard to reconcile with the requirements of intelligent traffic security.
Disclosure of Invention
The present application provides a method, a device, equipment and a storage medium for detecting thrown objects, in which an adaptive learning rate is determined for each original video frame in the video sequence to be detected, so that the recognition result obtained by updating and generating the fast and slow backgrounds at this learning rate is more accurate. Real-time detection of thrown objects is achieved, the requirements of robustness and anti-interference are met while the overall amount of computation is reduced, and the validity of the detection result is improved.
In a first aspect, an embodiment of the present application provides a method for detecting thrown objects, comprising:
for each original video frame in an acquired video sequence to be detected, determining a current learning rate according to the video frame information corresponding to the original video frame and a preset learning rate model;
performing mixed Gaussian background modeling according to the current learning rate and the original video frame to generate a fast background image and a slow background image corresponding to the original video frame;
inputting the fast background image and the slow background image into a pre-constructed lightweight feature extraction model for feature extraction, and determining the corresponding fast background feature map and slow background feature map;
determining a frame difference map according to the fast background feature map and the slow background feature map, performing thrown-object recognition on the frame difference map, and determining a thrown-object detection result;
wherein the learning rate model is constructed on an exponential decay principle and is used to determine the learning rate required in the mixed Gaussian background modeling process.
In a second aspect, an embodiment of the present application further provides a device for detecting thrown objects, comprising:
a learning rate determining module, configured to determine, for each original video frame in an acquired video sequence to be detected, a current learning rate according to the video frame information corresponding to the original video frame and a preset learning rate model;
a background image generation module, configured to perform mixed Gaussian background modeling according to the current learning rate and the original video frame, and to generate a fast background image and a slow background image corresponding to the original video frame;
a feature map determining module, configured to input the fast background image and the slow background image into a pre-constructed lightweight feature extraction model for feature extraction, and to determine the corresponding fast background feature map and slow background feature map;
a thrown-object detection module, configured to determine a frame difference map according to the fast background feature map and the slow background feature map, to perform thrown-object recognition on the frame difference map, and to determine a thrown-object detection result;
wherein the learning rate model is constructed on an exponential decay principle and is used to determine the learning rate required in the mixed Gaussian background modeling process.
In a third aspect, an embodiment of the present application further provides thrown-object detection equipment, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the method for detecting thrown objects provided by the embodiments of the present application.
In a fourth aspect, an embodiment of the present application further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform the method for detecting thrown objects provided by the embodiments of the present application.
According to the method, device, equipment and storage medium for detecting thrown objects provided by the embodiments of the present application, for each original video frame in the acquired video sequence to be detected, a current learning rate is determined from the corresponding video frame information and a preset learning rate model; mixed Gaussian background modeling is performed with the current learning rate and the original video frame to generate the corresponding fast and slow background images; the two images are input into a pre-constructed lightweight feature extraction model to determine the corresponding fast and slow background feature maps; and a frame difference map is determined from the two feature maps, thrown-object recognition is performed on it, and the detection result is determined, the learning rate model being constructed on an exponential decay principle to determine the learning rate required in the mixed Gaussian background modeling process. With this scheme, once the video sequence to be detected is acquired, each original video frame contained in it receives its own current learning rate from the exponential-decay learning rate model, and the fast and slow background images are generated accordingly. Because the model extracting features from those images is lightweight, the extracted feature maps retain the effective information to the greatest extent while meeting real-time and anti-interference requirements, so the detection result determined from the fast and slow background feature maps is more accurate. Real-time detection of thrown objects is thus achieved, and the requirements of robustness and anti-interference are met while the overall amount of computation is reduced and the validity of the detection result is improved.
It should be understood that the description of this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
In order to explain the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described here represent only some embodiments of the present application; other drawings can be obtained from them by a person of ordinary skill in the art without inventive effort.
FIG. 1 is a flowchart of a method for detecting thrown objects according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the structure of a lightweight feature extraction model according to an embodiment of the present application;
FIG. 3 is a flowchart of a method for detecting thrown objects according to another embodiment of the present application;
FIG. 4 is a schematic structural diagram of a thrown-object detection device according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of thrown-object detection equipment according to an embodiment of the present application.
Detailed Description
In order to make the solution of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are evidently only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein, without inventive effort, shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Fig. 1 is a flowchart of a method for detecting thrown objects according to an embodiment of the present application. The embodiment is applicable to identifying and detecting objects thrown or dropped onto a road in a traffic scene. The method may be performed by a thrown-object detection device, which may be implemented in software and/or hardware and installed in thrown-object detection equipment. Optionally, the equipment is an electronic device such as a notebook, a desktop computer or a smart tablet, which is not limited in the embodiments of the present application.
As shown in fig. 1, the method for detecting thrown objects provided in the embodiment of the present application specifically includes the following steps:
S101, for each original video frame in the acquired video sequence to be detected, determining a current learning rate according to the video frame information corresponding to the original video frame and a preset learning rate model.
The learning rate model is constructed on an exponential decay principle and is used to determine the learning rate required in the mixed Gaussian background modeling process.
In this embodiment, the video sequence to be detected is the video captured by the acquisition device deployed in the area where thrown objects are to be detected; it may contain a number of continuously captured video frames. An original video frame is a frame extracted from this sequence whose image content needs to be analyzed for the presence of thrown objects. Video frame information is parameter information characterizing an original video frame, such as its relative position in the video sequence to be detected, which is not limited in this embodiment. The learning rate model is a mathematical model constructed on an exponential decay principle that determines the learning rate for background updates during mixed Gaussian background modeling, and the current learning rate is the learning rate required for updating the fast and slow backgrounds when the mixed Gaussian background of the original video frame is modeled.
Specifically, when the video sequence to be detected, captured by the acquisition device in the detection area, is obtained, video frames may be extracted from it at a preset frame extraction frequency to obtain a set of video frames, each of which is determined to be an original video frame. The video frame information characterizing each original video frame is determined from its relative position in the video sequence to be detected and substituted into the preset learning rate model, which yields the current learning rate associated with that frame; this rate is used to update the generated fast and slow backgrounds when mixed Gaussian background modeling is performed on the frame.
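For illustration, the frame sampling described above might look as follows. This is a minimal sketch using OpenCV; the frame extraction interval and the video source are assumptions, since the application does not fix them:

```python
import cv2

def sample_original_frames(video_path: str, frame_interval: int = 5):
    """Yield (current_video_frame_number, frame) pairs, taking every
    frame_interval-th frame of the sequence as an original video frame."""
    cap = cv2.VideoCapture(video_path)
    raw_index, current_frame_number = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if raw_index % frame_interval == 0:
            current_frame_number += 1  # 1-based position in the sampled set
            yield current_frame_number, frame
        raw_index += 1
    cap.release()
```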
S102, performing mixed Gaussian background modeling according to the current learning rate and the original video frame, and generating a fast background image and a slow background image corresponding to the original video frame.
In this embodiment, mixed Gaussian background modeling is a background representation method based on the statistics of pixel samples: the background is represented by statistical information such as the probability density of a large number of sample values of each pixel over a long period, the basic idea being to represent the value of each pixel as a superposition of K Gaussian distributions. The modeling can separate background from foreground mainly because, in a scene observed over a long time, the background is present most of the time, so most of the data supports the background distribution; the background update must evolve over time to obtain a new background, thereby distinguishing unchanged from changed information in the background.
Specifically, when mixed Gaussian background modeling is performed on the original video frame with the update adjusted by the current learning rate, the foreground mask maps required for updating the fast and slow background maps are obtained; the fast and slow background maps determined for the previous frame are then updated through these mask maps, generating the fast background image and the slow background image corresponding to the original video frame.
In the embodiment of the present application, the dual backgrounds constructed by conventional Gaussian background modeling are updated with a fixed learning rate, so a thrown object that remains in the scene too long is easily absorbed into the slow background image. A learning rate model is therefore introduced to determine, from the video frame information strongly associated with each original video frame, a current learning rate matched to that frame; the fast and slow background images updated at this rate then carry richer effective information, and the subsequent thrown-object detection based on them is more accurate.
S103, inputting the fast background image and the slow background image into a pre-constructed lightweight feature extraction model for feature extraction, and determining the corresponding fast background feature map and slow background feature map.
In this embodiment, the lightweight feature extraction model is a neural network model with few parameters, a small amount of computation and a small memory footprint, used to extract features from the images input into it. Optionally, the lightweight feature extraction model is a set of neural network layers for image feature extraction obtained by clipping a video structured model pre-trained on traffic scenes.
In this embodiment, the video structured model is a neural network model trained in advance, according to the actual situation, to structure targets such as persons and vehicles in video captured in traffic scenes, extracting attribute information such as age, sex and clothing color for persons, and license plate number, vehicle type and color for vehicles. Since the parameter training of every neural network layer of the video structured model has been completed, the layers used for feature extraction are cut out after training and serve as the lightweight feature extraction model. It can be understood that the number of layers intercepted from the video structured model can be adjusted to the size of the objects to be identified. Fig. 2 is a schematic diagram of the structure of a lightweight feature extraction model according to an embodiment of the present application; as shown in Fig. 2, this embodiment takes as an example two convolutional neural network layers for feature extraction and an input image size of 416×416.
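The video structured model itself is proprietary and not disclosed here, but the clipping operation can be sketched with a generic pretrained backbone standing in for it. In the sketch below, the early stages of a torchvision ResNet are an assumption made purely for illustration; the actual model and the exact layers kept are those of the trained traffic-scene model:

```python
import torch
import torch.nn as nn
from torchvision import models

def build_lightweight_extractor() -> nn.Module:
    # Stand-in for the pre-trained traffic-scene video structured model.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    # Keep only the earliest feature extraction layers and discard the rest,
    # mirroring the clipping of the trained model described above.
    extractor = nn.Sequential(
        backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool, backbone.layer1
    )
    extractor.eval()  # inference only: no real-time parameter updates
    for p in extractor.parameters():
        p.requires_grad_(False)
    return extractor

# 416x416 input, matching the structure shown in Fig. 2.
features = build_lightweight_extractor()(torch.randn(1, 3, 416, 416))
```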
Specifically, the fast background image and the slow background image corresponding to the original video frame are each input into the pre-constructed lightweight feature extraction model; the feature map obtained by extracting features from the fast background image is determined to be the fast background feature map, and the feature map obtained from the slow background image to be the slow background feature map.
In the embodiment of the present application, the lightweight feature extraction model is clipped from a video structured model whose training is complete. A fully trained video structured model extracts and identifies target objects in traffic-scene video well, so the lightweight model clipped from it retains the accuracy and robustness of the original model for feature extraction in traffic scenes while eliminating real-time parameter updates. This reduces the time consumed when extracting features from the fast and slow background images and improves the precision of the resulting fast and slow background feature maps.
S104, determining a frame difference map according to the fast background feature map and the slow background feature map, performing thrown-object recognition on the frame difference map, and determining a thrown-object detection result.
Specifically, the difference between the fast background feature map and the slow background feature map is computed to obtain their frame difference map; a target recognition algorithm then performs thrown-object recognition on this frame difference map, and the thrown-object detection result is determined from the identified target information.
According to the technical solution of this embodiment, for each original video frame in the acquired video sequence to be detected, a current learning rate is determined from the corresponding video frame information and a preset learning rate model; mixed Gaussian background modeling is performed with the current learning rate and the original video frame to generate the corresponding fast and slow background images; the two images are input into a pre-constructed lightweight feature extraction model to determine the corresponding fast and slow background feature maps; and a frame difference map is determined from the two feature maps, thrown-object recognition is performed on it, and the detection result is determined, the learning rate model being constructed on an exponential decay principle to determine the learning rate required in the modeling process. With this scheme, once the video sequence to be detected is acquired, each original video frame contained in it receives its own current learning rate from the exponential-decay learning rate model, and the fast and slow background images are generated accordingly. Because the model extracting features from those images is a lightweight feature extraction model, the extracted feature maps retain the effective information to the greatest extent while meeting real-time and anti-interference requirements, so the detection result determined from the fast and slow background feature maps is more accurate. Real-time detection of thrown objects is thus achieved, and the requirements of robustness and anti-interference are met while the overall amount of computation is reduced and the validity of the detection result is improved.
Fig. 3 is a flowchart of a method for detecting thrown objects according to another embodiment of the present application. The method is further optimized on the basis of the foregoing technical solutions: the current video frame number is determined from the position of the original video frame in the video sequence to be detected, substituted into the formula of the preset learning rate model, and the current learning rate corresponding to the frame is determined on the exponential decay principle. Because the current learning rate is determined differently depending on the current video frame number, the initialization of the fast and slow backgrounds remains stable, while for frames beyond the preset total frame number the rates are adjusted through a decay index built from the current video frame number; the fast and slow background maps are thus updated more smoothly and robustly. Further, by combining the frame difference method with target recognition, objects possibly present in the original video frame are determined and then separated by a preset area threshold to decide which belong to thrown objects, effectively filtering noise in the original video frame and improving the accuracy of thrown-object detection.
As shown in fig. 3, the method for detecting thrown objects provided in the embodiment of the present application specifically includes the following steps:
S201, for each original video frame in the acquired video sequence to be detected, determining the current video frame number according to the position of the original video frame in the video sequence to be detected.
Specifically, when the video sequence to be detected, captured over a period of time by the acquisition device in the detection area, is obtained, video frames may be extracted from it at a preset frame extraction frequency to obtain a video frame set. Since the original video frames in the set are extracted sequentially in the order of acquisition, the position of each original video frame in the set gives its sequence number within the set, which is determined to be the current video frame number.
S202, substituting the current video frame number into the preset learning rate model to determine the current learning rate.
Optionally, substituting the current video frame number into the preset learning rate model to determine the current learning rate covers the following two cases:
1) If the current video frame number is less than or equal to the preset total frame number in the preset learning rate model, the initial learning rate in the learning rate model is determined to be the current learning rate corresponding to the original video frame.
In this embodiment, the preset total frame number is the number of video frames, set in advance according to the actual situation, needed to guarantee a stable initialization of the background; it can also be understood as the number of frames over which the fast and slow background images constructed by mixed Gaussian background modeling stabilize. For example, the preset total frame number may be 100 frames, which is not limited in this embodiment. The initial learning rate is a rate fixed in advance, when the learning rate model is constructed, to meet the fast and slow background update requirements of the thrown-object scene. It can be understood that too small an initial learning rate makes the fast and slow backgrounds update slowly, while too large a rate makes them so similar that no effective modeling background map is available for thrown-object detection. In this embodiment the initial learning rate may be set to 0.2, or to another value according to actual needs, which is not limited here.
Specifically, if the current video frame number is less than or equal to the preset total frame number in the preset learning rate model, the corresponding original video frame can be considered to belong to the initial short period of the video sequence to be detected, during which the fast and slow background images constructed by mixed Gaussian background modeling are not yet stable; updating them at different learning rates would leave a large error between the generated fast and slow background images and impair the accuracy of subsequent thrown-object detection. In this case the initial learning rate preset in the learning rate model is determined to be the current learning rate of the frame; that is, the current fast background learning rate and the current slow background learning rate used to update the fast and slow background images are kept equal, guaranteeing a stable initialization of the fast and slow backgrounds.
2) If the current video frame number is greater than the preset total frame number, the ratio of the preset total frame number to the current video frame number is determined to be the decay index of the learning rate model, and the current learning rate is determined from the decay index, the decay coefficient in the learning rate model and the initial learning rate.
The current learning rate comprises a current fast background learning rate and a current slow background learning rate.
In this embodiment, the current fast background learning rate is the learning rate used in the mixed Gaussian background modeling to update the fast background image corresponding to the original video frame, and the current slow background learning rate is the rate used to update the corresponding slow background image.
Specifically, if the current video frame number is greater than the preset total frame number in the preset learning rate model, the construction of the initial background images for the video sequence to be detected can be considered complete, and different learning rates may now be set to update the fast and slow background images differently. The ratio of the preset total frame number to the current video frame number is determined to be the decay index of the learning rate model, and this index is substituted into the model, which contains the preset decay coefficient and the initial learning rate, to complete the determination of the current learning rate.
In the embodiment of the present application, for original video frames within the preset total frame number, the fast and slow backgrounds share the same learning rate, which keeps their initialization stable; once the original video frames exceed the preset total, the fast and slow background learning rates are adjusted through the decay index built from the current video frame number, so that the updates of the fast and slow background maps at these rates are smoother and more robust.
Optionally, determining the current learning rate from the decay index, the decay coefficient in the learning rate model and the initial learning rate may be implemented through the following steps:
1) Determine the learning rate decay variable by taking the decay coefficient in the learning rate model as the base and the decay index as the exponent.
2) Calculate the difference between 1 and the learning rate decay variable, and determine the product of the initial learning rate and this difference to be the current fast background learning rate corresponding to the original video frame.
3) Determine the product of the current fast background learning rate and a preset slow background update rate to be the current slow background learning rate corresponding to the original video frame.
It can be understood that steps 1) and 2) above are a textual description of the formula corresponding to the pre-built learning rate model; substituting the current video frame number into that formula and taking its output as the current fast background learning rate of the original video frame can be expressed as:
current_learning_rate = initial_learning_rate * (1 - decay_rate^(global_steps / decay_steps))
where current_learning_rate is the current learning rate, i.e. the current fast background learning rate in this embodiment; initial_learning_rate is the initial learning rate; decay_rate is the decay coefficient, i.e. the coefficient controlling the rate of the exponential decay, which may for example be set to 0.99 or adapted to the actual situation; global_steps is the preset total frame number, for example 100, likewise adaptable to the actual situation; and decay_steps is the current video frame number. Correspondingly, global_steps/decay_steps is the decay index proposed in this embodiment, and decay_rate^(global_steps/decay_steps) is the learning rate decay variable.
Optionally, in order to update the fast background image and the slow background image at different rates with different emphasis, after the learning rate model determines the current fast background learning rate strongly associated with the original video frame, that rate may be multiplied by a preset slow background update rate, and the product determined to be the current slow background learning rate corresponding to the frame. For example, the preset slow background update rate may be derived from the preset total frame number in the learning rate model: when the preset total is 100 frames, the slow background update rate may be 0.01.
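Read this way, the learning rate model reduces to a few lines. The following is a minimal sketch; the variable names follow the formula above, and the default values (preset total of 100 frames, initial rate 0.2, decay coefficient 0.99, slow update rate 0.01) are the example values given in this embodiment:

```python
def current_learning_rates(decay_steps: int,
                           global_steps: int = 100,
                           initial_learning_rate: float = 0.2,
                           decay_rate: float = 0.99,
                           slow_update_rate: float = 0.01):
    """Return (fast, slow) learning rates for the original video frame
    whose current video frame number is decay_steps."""
    if decay_steps <= global_steps:
        # Initialization phase: fast and slow backgrounds share the initial rate.
        return initial_learning_rate, initial_learning_rate
    decay_variable = decay_rate ** (global_steps / decay_steps)  # base^decay_index
    fast = initial_learning_rate * (1.0 - decay_variable)
    slow = fast * slow_update_rate
    return fast, slow
```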
S203, performing size transformation and Gaussian blur processing on the original video frame to determine a preprocessed video frame.
Specifically, to match the input requirements of the lightweight feature extraction model used in the subsequent feature extraction, the original video frame is resized before mixed Gaussian background modeling to an image size suitable for input into the model, so that the fast and slow background images obtained by modeling also satisfy that input size. Meanwhile, since the modeling focuses on the background features of the original video frame rather than its level of detail, Gaussian blur may be applied to the resized frame to obtain the preprocessed video frame; this reduces noise in the preprocessed frame, lowers the overall level of detail, and further reduces the amount of computation required to determine the background images.
Optionally, the original video frame may be resized with a resize function, and the target size may be set to 416×416 or to another size according to the actual situation, which is not limited in this embodiment. When Gaussian blur is applied to the resized frame, the larger the kernel, the more complex the scene that can be represented and the greater the corresponding computation; the kernel value in this embodiment can therefore be set according to the actual situation, so that the preprocessed frame both meets the feature expression requirements and keeps the overall computation low. The kernel value may be taken as 3, or adapted to the actual situation.
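A minimal sketch of this preprocessing step with OpenCV, using the example values above (416×416 target size, kernel value 3):

```python
import cv2
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Resize to the feature extractor's input size, then blur to
    suppress detail and noise before background modeling."""
    resized = cv2.resize(frame, (416, 416))
    return cv2.GaussianBlur(resized, (3, 3), 0)  # 3x3 kernel, auto sigma
```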
S204, performing mixed Gaussian background modeling on the preprocessed video frame at the current fast background learning rate and the current slow background learning rate respectively, and determining a current fast background mask map and a current slow background mask map.
Specifically, the mask map corresponding to the preprocessed video frame is updated at the current fast background learning rate and at the current slow background learning rate respectively, yielding the current fast background mask map and the current slow background mask map.
S205, updating the previous fast background image corresponding to the original video frame through the current fast background mask map, and generating the fast background image corresponding to the original video frame.
In this embodiment, the previous fast background image is the fast background image obtained by mixed Gaussian background modeling of the frame preceding the original video frame.
Specifically, the background information corresponding to the original video frame is determined through the current fast background mask map and compared against the previous fast background map to obtain the difference information between them; the previous fast background map is updated according to this difference information, and the generated image is determined to be the fast background image corresponding to the original video frame.
S206, updating the previous slow background image corresponding to the original video frame through the current slow background mask map, and generating the slow background image corresponding to the original video frame.
In this embodiment, the previous slow background image is the slow background image obtained by mixed Gaussian background modeling of the frame preceding the original video frame.
Specifically, the background information corresponding to the original video frame is determined through the current slow background mask map and compared against the previous slow background map to obtain the difference information between them; the previous slow background map is updated according to this difference information, and the generated image is determined to be the slow background image corresponding to the original video frame.
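The application does not spell out the mask-based update of S204-S206 in code. A common way to realize a dual-rate mixed Gaussian background in practice is to run two OpenCV MOG2 subtractors at the fast and slow rates; the sketch below is that stand-in, not the patented update itself:

```python
import cv2

fast_model = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
slow_model = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def update_backgrounds(pre_frame, fast_rate: float, slow_rate: float):
    """Feed one preprocessed frame to each model at its own learning rate;
    return the fast/slow mask maps and the updated background images."""
    fast_mask = fast_model.apply(pre_frame, learningRate=fast_rate)
    slow_mask = slow_model.apply(pre_frame, learningRate=slow_rate)
    return (fast_mask, slow_mask,
            fast_model.getBackgroundImage(), slow_model.getBackgroundImage())
```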
S207, inputting the fast background image and the slow background image into a pre-constructed lightweight feature extraction model to perform feature extraction, and determining a corresponding fast background feature map and slow background feature map.
Optionally, when features are extracted from the fast background image and the slow background image, the resulting feature map data may be locked to the range 0-255 to prevent overflow, yielding the corresponding fast background feature map and slow background feature map.
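With NumPy, for instance, this lock can be a single clamp (a sketch, assuming the feature maps are arrays):

```python
import numpy as np

def lock_range(feature_map: np.ndarray) -> np.ndarray:
    """Lock feature data into 0-255 and store it as 8-bit to prevent overflow."""
    return np.clip(feature_map, 0, 255).astype(np.uint8)
```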
S208, determining a frame difference map according to the fast background feature map and the slow background feature map, and binarizing the frame difference map to determine a binarized map.
Specifically, a frame difference operation between the fast background feature map and the slow background feature map yields their frame difference map; each pixel of the frame difference map is then binarized, i.e. its gray value is set to 0 or 255, generating the binarized map corresponding to the frame difference map.
S209, performing dilation and erosion on the binarized map to determine a processing map to be identified.
Specifically, morphological dilation and erosion are applied to the binarized map to eliminate bridged gaps, tiny black dots and thin white line-shaped stripes, yielding the processing map to be identified.
S210, performing target recognition on the processing map to be identified, and determining objects whose area is greater than a preset area threshold in the target recognition result to be thrown objects.
Specifically, a target recognition method is applied to the processing map to be identified; the resulting target recognition result may include information such as the position and size of each identified object in the map. Since thrown objects usually occupy a certain area of the road, an area threshold can be set according to the types of thrown objects to be identified, so that small-area noise in the recognition result can be distinguished from thrown objects meeting the threshold. The area of the object corresponding to each target recognition result is determined with an area function, and objects whose area is greater than the preset area threshold are determined to be thrown objects.
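Steps S208-S210 combine into a short OpenCV pipeline. In the sketch below the binarization threshold and the 3×3 structuring element are assumptions, and min_area stands for the preset area threshold:

```python
import cv2
import numpy as np

def detect_thrown_objects(fast_feat: np.ndarray, slow_feat: np.ndarray,
                          min_area: float, bin_thresh: int = 30):
    """Frame-difference the fast/slow feature maps, clean the binarized map
    with dilation and erosion, and keep regions above the area threshold."""
    diff = cv2.absdiff(fast_feat, slow_feat)                 # frame difference map
    if diff.ndim == 3:                                       # ensure single channel
        diff = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(diff, bin_thresh, 255, cv2.THRESH_BINARY)
    kernel = np.ones((3, 3), np.uint8)
    cleaned = cv2.erode(cv2.dilate(binary, kernel), kernel)  # bridge gaps, drop specks
    contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Objects larger than the preset area threshold are taken to be thrown objects.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
```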
S211, generating a thrown-object detection result according to the thrown-object existence state and the thrown-object position information.
Specifically, when no object larger than the preset area threshold is determined from the target recognition result, the thrown-object existence state is considered absent, and a detection result whose content is that no thrown object exists is generated. When an object with an area greater than the preset area threshold is determined from the target recognition result, the existence state is considered present; the position of the identified thrown object in the original video frame is determined and taken as the thrown-object position information, and a detection result containing the presence of a thrown object and its position information, stored in the corresponding format, is generated.
According to the technical solution of this embodiment, the current video frame number is determined from the position of the original video frame in the video sequence to be detected and substituted into the formula of the preset learning rate model, and the current learning rate corresponding to the frame is determined on the exponential decay principle. Because the current learning rate is determined differently depending on the current video frame number, the initialization of the fast and slow backgrounds remains stable, while for frames beyond the preset total frame number the rates are adjusted through the decay index built from the current video frame number, so that the fast and slow background maps are updated more smoothly and robustly. Further, by combining the frame difference method with target recognition, objects possibly present in the original video frame are determined and then separated by the preset area threshold to decide which belong to thrown objects, effectively filtering noise in the original video frame and improving the accuracy of thrown-object detection.
Fig. 4 is a schematic structural diagram of a thrown-object detection device according to an embodiment of the present application. As shown in fig. 4, the device comprises a learning rate determining module 31, a background image generation module 32, a feature map determining module 33 and a thrown-object detection module 34.
The learning rate determining module 31 is configured to determine, for each original video frame in the acquired video sequence to be detected, a current learning rate according to the video frame information corresponding to the original video frame and a preset learning rate model; the background image generation module 32 is configured to perform mixed Gaussian background modeling according to the current learning rate and the original video frame, and to generate a fast background image and a slow background image corresponding to the original video frame; the feature map determining module 33 is configured to input the fast background image and the slow background image into a pre-constructed lightweight feature extraction model for feature extraction, and to determine the corresponding fast background feature map and slow background feature map; the thrown-object detection module 34 is configured to determine a frame difference map according to the fast background feature map and the slow background feature map, to perform thrown-object recognition on the frame difference map, and to determine a thrown-object detection result; the learning rate model is constructed on an exponential decay principle and is used to determine the learning rate required in the mixed Gaussian background modeling process.
According to this technical solution, when the video sequence to be detected is acquired, each original video frame contained in it receives its own current learning rate from the learning rate model, which is constructed on an exponential decay principle to determine the learning rate required in the mixed Gaussian background modeling process, and the fast and slow background images are then generated at this rate. Because the model extracting features from the generated fast and slow background images is a lightweight feature extraction model, the extracted feature maps retain the effective information to the greatest extent while meeting real-time and anti-interference requirements, so the thrown-object detection result determined from the fast and slow background feature maps is more accurate. Real-time detection of thrown objects is thus achieved, and the requirements of robustness and anti-interference are met while the overall amount of computation is reduced and the validity of the detection result is improved.
Optionally, the lightweight feature extraction model is a set of neural network layers for image feature extraction, which is obtained by clipping a pre-trained video structured model in a traffic scene.
Optionally, the learning rate determination module 31 includes:
a current frame number determining unit, configured to determine the current video frame number according to the position of the original video frame in the video sequence to be detected;
and the learning rate determining unit is used for substituting the current video frame number into a preset learning rate model to determine the current learning rate.
Optionally, the learning rate determining unit is specifically configured to:
determine, if the current video frame number is less than or equal to the preset total frame number in the preset learning rate model, the initial learning rate in the learning rate model to be the current learning rate corresponding to the original video frame;
determine, if the current video frame number is greater than the preset total frame number, the ratio of the preset total frame number to the current video frame number to be the decay index of the learning rate model, and determine the current learning rate from the decay index, the decay coefficient in the learning rate model and the initial learning rate;
wherein the current learning rate comprises a current fast background learning rate and a current slow background learning rate.
Optionally, determining the current learning rate from the decay index, the decay coefficient in the learning rate model and the initial learning rate includes:
determining the learning rate decay variable by taking the decay coefficient in the learning rate model as the base and the decay index as the exponent;
calculating the difference between 1 and the learning rate decay variable, and determining the product of the initial learning rate and this difference to be the current fast background learning rate corresponding to the original video frame;
determining the product of the current fast background learning rate and a preset slow background update rate to be the current slow background learning rate corresponding to the original video frame.
Optionally, the background image generating module 32 includes:
the video frame preprocessing unit is used for carrying out size transformation and Gaussian blur processing on the original video frame and determining a preprocessed video frame;
the mask map determining unit is used for respectively carrying out mixed Gaussian background modeling on the preprocessed video frames according to a current fast background learning rate and a current slow background learning rate in the current learning rate to determine a current fast background mask map and a current slow background mask map;
a fast background image generating unit, configured to update a previous fast background image corresponding to an original video frame through a current fast background mask image, and generate a fast background image corresponding to the original video frame;
and the slow background image generation unit is used for updating the last slow background image corresponding to the original video frame through the current slow background mask image to generate a slow background image corresponding to the original video frame.
Optionally, the thrown-object detection module 34 includes:
an image binarization unit, configured to binarize the frame difference map and determine a binarized map;
a processing map determining unit, configured to perform dilation and erosion on the binarized map and determine a processing map to be identified;
a thrown-object determining unit, configured to perform target recognition on the processing map to be identified, and to determine objects whose area is greater than a preset area threshold in the target recognition result to be thrown objects;
a detection result generation unit, configured to generate a thrown-object detection result according to the thrown-object existence state and the thrown-object position information.
The thrown-object detection device provided by the embodiments of the present application can execute the method for detecting thrown objects provided by any embodiment of the present application, and has the functional modules and beneficial effects corresponding to that method.
Fig. 5 is a schematic structural diagram of a projectile detection device according to an embodiment of the present application. The projectile detection device 40 may be an electronic device intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches), and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in Fig. 5, the projectile detection device 40 includes at least one processor 41 and a memory communicatively connected to the at least one processor 41, such as a read-only memory (ROM) 42 and a random access memory (RAM) 43, in which a computer program executable by the at least one processor is stored. The processor 41 can perform various appropriate actions and processes according to the computer program stored in the ROM 42 or loaded from the storage unit 48 into the RAM 43. The RAM 43 may also store the various programs and data required for the operation of the projectile detection device 40. The processor 41, the ROM 42, and the RAM 43 are connected to one another via a bus 44. An input/output (I/O) interface 45 is also connected to the bus 44.
The various components in the projectile detection device 40 are connected to the I/O interface 45, including: an input unit 46 such as a keyboard or a mouse; an output unit 47 such as various types of displays and speakers; a storage unit 48 such as a magnetic disk or an optical disc; and a communication unit 49 such as a network card, a modem, or a wireless communication transceiver. The communication unit 49 allows the projectile detection device 40 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
The processor 41 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the processor 41 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller, or microcontroller. The processor 41 performs the various methods and processes described above, such as the projectile detection method.
In some embodiments, the projectile detection method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 48. In some embodiments, some or all of the computer program may be loaded and/or installed onto the projectile detection device 40 via the ROM 42 and/or the communication unit 49. When the computer program is loaded into the RAM 43 and executed by the processor 41, one or more steps of the projectile detection method described above may be performed. Alternatively, in other embodiments, the processor 41 may be configured to perform the projectile detection method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out the methods of the present application may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this application, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and Virtual Private Server (VPS) services.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solutions of the present application are achieved; no limitation is imposed herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (10)

1. A method for detecting a projectile, comprising:
for each original video frame in an obtained video sequence to be detected, determining a current learning rate according to video frame information corresponding to the original video frame and a preset learning rate model;
performing Gaussian mixture background modeling according to the current learning rate and the original video frame to generate a fast background image and a slow background image corresponding to the original video frame;
inputting the fast background image and the slow background image into a pre-constructed lightweight feature extraction model for feature extraction, and determining a corresponding fast background feature map and a corresponding slow background feature map;
determining a frame difference map according to the fast background feature map and the slow background feature map, performing projectile identification on the frame difference map, and determining a projectile detection result;
wherein the learning rate model is constructed according to an exponential decay principle and is used for determining the learning rate required in the Gaussian mixture background modeling process.
2. The method of claim 1, wherein determining the current learning rate according to the video frame information corresponding to the original video frame and the preset learning rate model comprises:
determining the current video frame number according to the position of the original video frame in the video sequence to be detected;
substituting the current video frame number into a preset learning rate model to determine a current learning rate.
3. The method of claim 2, wherein the substituting the current video frame number into the preset learning rate model to determine the current learning rate comprises:
if the current video frame number is less than or equal to a preset total frame number in the preset learning rate model, determining an initial learning rate in the learning rate model as the current learning rate corresponding to the original video frame;
if the current video frame number is greater than the preset total frame number, determining the ratio of the preset total frame number to the current video frame number as a decay exponent of the learning rate model, and determining the current learning rate according to the decay exponent, a decay coefficient in the learning rate model, and the initial learning rate;
wherein the current learning rate comprises a current fast background learning rate and a current slow background learning rate.
4. The method according to claim 3, wherein the determining the current learning rate according to the decay exponent, the decay coefficient in the learning rate model, and the initial learning rate comprises:
determining a learning rate decay variable by raising the decay coefficient in the learning rate model to the power of the decay exponent;
calculating the difference between the initial learning rate and the learning rate decay variable, and determining the product of the initial learning rate and the difference as the current fast background learning rate corresponding to the original video frame;
and determining the product of the current fast background learning rate and a preset slow background update rate as the current slow background learning rate corresponding to the original video frame.
5. The method of claim 3, wherein the performing Gaussian mixture background modeling according to the current learning rate and the original video frame to generate the fast background image and the slow background image corresponding to the original video frame comprises:
performing size transformation and Gaussian blur processing on the original video frame to determine a preprocessed video frame;
performing Gaussian mixture background modeling on the preprocessed video frame according to the current fast background learning rate and the current slow background learning rate in the current learning rate, respectively, to determine a current fast background mask map and a current slow background mask map;
updating a previous fast background image corresponding to the original video frame through the current fast background mask map to generate the fast background image corresponding to the original video frame;
and updating a previous slow background image corresponding to the original video frame through the current slow background mask map to generate the slow background image corresponding to the original video frame.
6. The method of claim 1, wherein the performing projectile identification on the frame difference map and determining the projectile detection result comprises:
performing binarization processing on the frame difference map to determine a binarized map;
performing dilation and erosion processing on the binarized map to determine a processing map to be identified;
performing target recognition on the processing map to be identified, and determining a target whose area is greater than a preset area threshold in the target recognition result as a projectile;
and generating the projectile detection result according to the projectile existence state and the projectile position information.
7. The method according to any one of claims 1-6, wherein the lightweight feature extraction model is a set of neural network layers for image feature extraction, tailored from a video structuring model pre-trained on traffic scenes.
8. A projectile detection apparatus, comprising:
a learning rate determining module, configured to determine, for each original video frame in an obtained video sequence to be detected, a current learning rate according to video frame information corresponding to the original video frame and a preset learning rate model;
a background image generation module, configured to perform Gaussian mixture background modeling according to the current learning rate and the original video frame to generate a fast background image and a slow background image corresponding to the original video frame;
a feature map determining module, configured to input the fast background image and the slow background image into a pre-constructed lightweight feature extraction model for feature extraction, and determine a corresponding fast background feature map and a corresponding slow background feature map;
a projectile detection module, configured to determine a frame difference map according to the fast background feature map and the slow background feature map, perform projectile identification on the frame difference map, and determine a projectile detection result;
wherein the learning rate model is constructed according to an exponential decay principle and is used for determining the learning rate required in the Gaussian mixture background modeling process.
9. A projectile detection device, characterized by comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the projectile detection method of any one of claims 1-7.
10. A storage medium containing computer-executable instructions which, when executed by a computer processor, perform the projectile detection method of any one of claims 1-7.
CN202311765117.4A 2023-12-20 2023-12-20 Method, device, equipment and storage medium for detecting sprinkled object Pending CN117853971A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311765117.4A CN117853971A (en) 2023-12-20 2023-12-20 Method, device, equipment and storage medium for detecting sprinkled object


Publications (1)

Publication Number Publication Date
CN117853971A 2024-04-09

Family

ID=90544416

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311765117.4A Pending CN117853971A (en) 2023-12-20 2023-12-20 Method, device, equipment and storage medium for detecting sprinkled object

Country Status (1)

Country Link
CN (1) CN117853971A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination